The Communiqué News

Adobe and NVIDIA will collaborate on the development of a new generation of advanced generative AI models. The collaboration will centre on the deep integration of generative AI in creative workflows. Both companies support content transparency and Content Credentials, which are powered by Adobe's Content Authenticity Initiative.


Pritish Bagdi


Image: Gene Silvers. Anil Chakravarthy, president, digital experience; David Wadhwani, president, digital media; and Shantanu Narayen, CEO, in front of Firefly-generated images.


Adobe (Nasdaq:ADBE) and NVIDIA, longtime R&D partners, have announced a new collaboration to harness the power of generative AI to advance creative workflows. Adobe and NVIDIA will collaborate on the development of a new generation of advanced generative AI models, with a focus on deep integration into applications used by the world's top creators and marketers. Some of these models will be co-developed and released through Adobe's flagship Creative Cloud products such as Adobe Photoshop, Adobe Premiere Pro, and Adobe After Effects, as well as through the new NVIDIA Picasso cloud service, giving third-party developers greater access. Priorities of the partnership include ensuring content transparency through Content Credentials, powered by Adobe's Content Authenticity Initiative, as well as supporting the commercial viability of the new technology.

NVIDIA Picasso, a cloud service for generative AI announced today, lets users build and deploy generative AI-powered image, video, and 3D applications. Its advanced text-to-image, text-to-video, and text-to-3D capabilities, exposed through simple cloud APIs, are designed to supercharge productivity in creativity, design, and digital simulation. "Adobe and NVIDIA have a long history of collaborating to advance the technology of creativity and marketing," said Scott Belsky, Adobe's Chief Strategy Officer and EVP, Design and Emerging Products. "We're excited to collaborate with them on how generative AI can provide our customers with more creative options, speed up their work, and help scale content production."

"Generative AI empowers unprecedented creativity," said Greg Estes, NVIDIA's VP of Corporate Marketing and Developer Programs. "With NVIDIA Picasso and Adobe tools like Creative Cloud, we'll be able to bring the transformational capabilities of generative AI to enterprises, allowing them to explore more ideas while producing and scaling incredible creative content and digital experiences."

Adobe Firefly, Adobe's new family of creative generative AI models, was unveiled earlier today, along with the beta of its first model, focused on generating images and text effects that are safe for commercial use. Firefly will bring even more precision, power, speed, and ease to workflows involving the creation and modification of content in Adobe Creative Cloud, Adobe Document Cloud, and Adobe Experience Cloud. Adobe Firefly is currently accessible through a web browser. Some Adobe Firefly models will be hosted on NVIDIA Picasso, which will optimise performance and generate high-quality assets to meet customer expectations. Adobe is also working on new generative AI services to help with the creation of video and 3D assets, as well as to assist marketers in scaling and personalising content for digital experiences by advancing end-to-end marketing workflows.

Content Authenticity Initiative

Adobe established the Content Authenticity Initiative (CAI) to create open industry standards for attribution and Content Credentials. Content Credentials, which the CAI adds to content at the point of capture, creation, edit, or generation, let people see when content was generated or modified using generative AI. Adobe and NVIDIA, along with the CAI's 900 other members, support Content Credentials so that people can make informed decisions about the content they encounter.

The initiative will also add a "Do Not Train" tag for content creators who do not want their content to be used in model training, and that tag will remain associated with the content wherever it is used, published, or stored.
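Conceptually, a Content Credential is signed provenance metadata that travels with an asset. As a rough sketch only (the field names below are illustrative, not the actual C2PA/CAI schema), such a record might look like this:

```python
import json

# Hypothetical sketch of a provenance record attached to an asset.
# Field names and values are illustrative, not the real CAI/C2PA schema.
credential = {
    "asset": "sunset.png",
    "actions": [
        {"action": "created", "tool": "Adobe Photoshop"},
        {"action": "edited", "tool": "generative model"},
    ],
    # The opt-out preference travels with the asset wherever it goes.
    "training_preference": "do-not-train",
}

# Serialise the record so it can be embedded alongside the asset.
manifest = json.dumps(credential, indent=2)
print(manifest)
```

In the real system the manifest is cryptographically signed and bound to the file, so viewers can verify both the edit history and the training preference.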




Quebec [Canada], December 8: Analysing the different dimensions of a flower has just become easier.


Swati Bhat


A research team in biology from the Universite de Montreal, the Montreal Botanical Garden, and McGill University has successfully used photogrammetry to quickly and accurately build 3D models of flowers from two-dimensional images. The goal is to gain greater clarity about the evolution of flowers.

Photogrammetry is commonly used by geographers to reconstruct the topography of a landscape.

However, this is the first time that scientists have used the technique to build 3D models of flowers in order to better study them. The results of their experiment were published in October in the journal New Phytologist.

Photogrammetry is an approach based on information gathered from numerous photos taken from all angles. Through the triangulation of points common to several photos, it is possible to reconstruct a 3D model - in this case, of a flower. Colours can then be applied to the 3D flower using information from the photos.

Flowers are complex and extremely varied three-dimensional structures, and characterising their forms is important for understanding their development, functioning and evolution. Indeed, 91 percent of flowering plants interact with pollinators to ensure their reproduction in a 3D environment; the morphology and colours of flowers act like magnets to attract pollinators. Yet the 3D structure of flowers is rarely studied.

The use of photogrammetry has real advantages over other existing methods, in particular X-ray microtomography, which is by far the most widely used method of building 3D flower models. "Photogrammetry is much more accessible, since it's cheap, requires little specialized equipment and can even be used directly in nature," said Marion Lemenager, a doctoral student in biological sciences at UdeM and lead author of the study. "In addition, photogrammetry has the advantage of reproducing the colours of flowers, which is not possible with methods using X-rays."

It was Daniel Schoen, a McGill biology professor, who first had the idea of applying photogrammetry to flowers, while doing research at the Institut de recherche en biologie vegetale.
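The triangulation step described above has a standard linear-algebra core: given the same feature seen from two calibrated camera positions, its 3D location can be recovered with the direct linear transform (DLT). A minimal sketch, using made-up toy cameras rather than anything from the study:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from its projections in two images.

    P1, P2: 3x4 camera projection matrices.
    pt1, pt2: (x, y) pixel coordinates of the same feature in each image.
    Each view contributes two linear constraints on the homogeneous
    3D point X; the solution is the null vector of the stacked system.
    """
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Toy example: two cameras one unit apart, both seeing a point at (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted along x
X_true = np.array([0.0, 0.0, 5.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
pt1 = x1[:2] / x1[2]
pt2 = x2[:2] / x2[2]
print(triangulate(P1, P2, pt1, pt2))  # recovers approximately [0, 0, 5]
```

A full photogrammetry pipeline repeats this over thousands of matched feature points (after first estimating the camera poses themselves), then meshes and textures the resulting point cloud.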

The first results, although imperfect, were enough to convince Lemenager to devote a chapter of her thesis to it. "The method is not perfect," she said. "Some parts of the flowers remain difficult to reconstruct in 3D, such as reflective, translucent or very hairy surfaces."

"That said," added UdeM biology professor Simon Joly, "thanks to the living collections of the Montreal Botanical Garden, the study of plants of the Gesneriaceae family - plants originating from subtropical to tropical regions, of which the African violet is one of the best-known representatives - demonstrates that 3D models produced using this technique make it possible to explore a large number of questions on the evolution of the shape and colour of flowers."

"We have also shown that photogrammetry works at least as well as X-ray methods for visible flower structures," said Joly, who conducts research at the Botanical Garden.

Photogrammetry has the potential to boost research on flower evolution and ecology by providing a simple way to access three-dimensional morphological data, the researchers believe. Databases of flowers - or even of complete plants - could give scientists and the general public a way to see the unique features of plant species that for now remain hidden. An open-access, detailed protocol has been made available to promote the use of this method in the comparative study of floral morphology.

The goal of free access to natural science collections of this sort is to help stimulate the study of the evolution of flower morphology at large taxonomic, temporal and geographical scales. It is also possible to admire flower models from every angle thanks to a 3D model viewer.



Digital design has opened new doors for designers and creators looking to enter the fashion industry through alternative means.


Swati Bhat


While the tools and technology for this fairly new method of creation are still developing, the medium offers almost limitless design opportunities and gives creators free range to explore.

Many especially took to this form of fashion digitalisation during the pandemic, when working from home became the norm and concerns around sustainability and the climate were heightened. For those who did, it became not a replacement for traditional design but an extension of their work, offering another route to explore their design identity in a more open format.

FashionUnited spoke to three digital designers making waves in the industry on how they began, how they have translated their values and where they see the digitisation of fashion in the future.


Davina India


Davina India: ‘Digitalisation can’t be stopped’

German-based designer Davina India's work merges futuristic and organic shapes, drawing inspiration from the art of nature and from imagining what life on other planets could look like. Her path to digital design stemmed from her love of avant-garde silhouettes, an interest she discovered during her fashion design studies. The clothing shapes she wished to make could not be achieved with conventional materials, she told FashionUnited, so she turned to digital 3D design, an area she had not yet explored, striving to experiment and create garments independent of the laws of physics.

Comparing digital design with physical design, India said the former gave her endless opportunities and freedom in her design process. When building up a piece, for example, new ideas that change its shape can still be implemented, giving her experimental freedom. "The future is digital," she said, on the importance of this method. "Digitalisation can't be stopped. Sooner or later it will influence every part of our life. You can even buy digital land these days, so I guess we can also imagine what the future will look like. In the case of fashion, it is and will be the most sustainable way to create fashion."

Although her designs cannot yet be bought, she said a number of photographers, stylists and magazines had expressed interest in her digital clothing and in editing it onto the human body. When asked how she envisioned the future of her work, India noted that she is looking to develop her designs in both physical and digital spaces, adding: "I would like to produce my digital pieces also in real life. That includes accessories and more. We will see what comes."


Xtended Identity x Valslooks


Xtended Identity: ‘Our aim is to extend everyone’s identity’

Xtended Identity is best described as a female-led digital design lab, co-founded by creative trio Yunjia Xing, Ziqi Xing and Aria Bao after they met during a master's degree in London. The collective came together to develop a digital showroom for their work after covid-related difficulties meant they struggled to showcase their designs. It was then they began to realise the breadth of what digitalisation could do, with Ziqi Xing noting: "It is actually opening more possibilities and opportunities for young designers."

While the group is currently prioritising building up their brand image, they are continuing to explore their mythical design aesthetic, often characterised through pastel hues and fantasy-like shapes – elements that Xing, the designer of the group, said have naturally fit with the audience that has discovered them. “We can design things that don’t really exist, that go against gravity, time and space,” Xing noted. “Our aim is to extend everyone’s identity and their digital footprint.”

Users are able to wear the group's digital apparel through real-time augmented reality (AR) filters, a medium the trio believes has a lot to offer and one Xing said they would continue to work with in the future. According to the designer, AR tools have also allowed the group to work towards their goal of developing 'phygital' products, an element that has been prominent in past collaborations with other brands and designers, which saw them bring physical items to life in the digital world.

Now, the collective is preparing for the launch of a non-fungible token (NFT) in what will be their first exploration of Web3, while also looking towards the future and where they will stand in both the gaming and fashion industries. For Xing, the most important factor is that they strongly represent women and the LGBTQ community in the digital landscape, expressing their values and efforts for a diverse audience. “We want to build a solid ecosystem for our audiences and our brand,” Xing concluded.


Yimeng Yu


Yimeng Yu: ‘It can break the boundaries of the physical world’

For Yimeng Yu, digital methodology has always played a big role in her research at various stages of her career. It specifically contributed to her time as an independent artist, during which she collaborated with a number of companies, brands and magazines. However, as the pandemic unfolded, her core attention started to turn to digitalisation as she began to realise the endless possibilities that came with digital tools. “It was an interdisciplinary practice,” she added.

Since exploring the realm of digital design, Yu’s overview of the space has expanded, as she finds that the limitless experimentation that comes with it can free her imagination. “It can break the boundaries of the physical world to innovate artistic language in terms of textures, structures, silhouettes and so on,” she said. “At the same time, it provides a sustainable way to greatly improve design efficiency and is also able to link to intelligent manufacturing and accurate counterpoint production.”

Her otherworldly work, which mostly attracts those from the creative and cultural industries, centres around the aesthetic of 'Parametric Nature'. Yu's use of artificial editing contrasts with forms that appear to have grown naturally, an approach she described as using 'order' to create 'disorder'. Speaking on her designs, Yu commented: "From my work, you can see the symbiosis between artificial and nature, the combination of machinery and biology, and the collision between rationality and sensibility."

In the future, the young creative hopes to continue making digital fashion a part of her research, with computational design and digital fabrication at the core. She is also hoping to expand her practice from fashion into more interdisciplinary fields and to explore new application scenarios.



