The Communiqué News

Gemini is a multimodal model that can seamlessly understand and combine many types of information, including text, code, audio, image, and video, according to Demis Hassabis, CEO and Co-Founder of Google DeepMind.


Pritish Bagdi


Gemini is unique in being natively multimodal: different modalities do not have to be built as separate components and stitched together. This approach, refined through extensive collaboration across Google teams, makes Gemini a versatile and efficient model that can run on everything from mobile devices to data centres. One of its most notable strengths is powerful multimodal reasoning, which allows it to extract insights precisely from large datasets. The model can also understand and produce high-quality code in widely used programming languages.



But even as Google steps into this new AI era, accountability and security remain top priorities. Gemini undergoes thorough safety reviews, including analyses for toxicity and bias. Google is actively working with outside specialists to address potential blind spots and ensure the model is used responsibly.

Gemini 1.0 is now being rolled out across Google products, starting with the Bard chatbot, and there are plans to integrate it with Search, Ads, Chrome, and Duet AI. However, the Bard update will not be available in Europe until regulators give their approval.

Gemini Pro is available to developers and enterprise users through the Gemini API in Google AI Studio or Google Cloud Vertex AI. On Android 14, a new system capability called AICore will let Android developers build with Gemini Nano.








Google is rolling out AI-powered updates to its Maps app. According to The Verge, the new features include immersive navigation, easier-to-follow driving directions, and better-organised search results.


Pritish Bagdi

Google hopes to make Maps more like Search: a place where users can find EV chargers, coffee shops, and of course directions, but also where they can type in general queries like "fall foliage," "latte art," or "things to do in Tokyo" and receive a wealth of genuinely helpful results. Google says it wants Maps users to explore new places and activities with the guidance of its powerful recommendation algorithms. According to Chris Phillips, Google's Vice President and General Manager of Geo, artificial intelligence has "supercharged the way we map" and is essential for helping users navigate and make critical decisions.

According to Phillips, Google Maps will evolve into a more "visual and immersive" tool that also helps you make "more sustainable choices," such as taking the bus or riding a bike. Google is also broadening its API services to help developers, cities, and especially carmakers improve Maps for the in-car navigation experience. According to Miriam Daniel, who leads the Google Maps team, one way Google is utilising AI to make Maps more like Search is by analysing "billions" of user-uploaded photographs to help users find unusual items, such as coffee shops that sell lattes with panda faces. Just as they can with Search, users can type specific queries into Maps and receive a list of local businesses or places that match, based on a real-time analysis of user photos.





The exhibition "Rebel: 30 Years of London Fashion" has opened at London's Design Museum. Unlike many other fashion shows, it celebrates the 30th anniversary of NewGen, the British Fashion Council's fashion talent incubator, which has supported over 300 designers over the years.


Pritish Bagdi

The Design Museum not only showcases designers' first steps into fashion, curating 100 innovative looks on display, but also pioneers a "see now, try now" initiative that lets visitors try on nine of those looks, virtually of course, with the help of augmented reality.

Snapchat built a backstage space with augmented-reality vanity mirrors as part of the show. Beyond, an Amsterdam-based creative tech firm, collaborated with Snapchat to create nine classic fashion looks that visitors can try on while sitting in front of the mirrors.

“We are dedicated to delivering an immersive and interactive experience for consumers, empowering them to virtually try on apparel, preview and acquire items in 3D worlds and AR environments, and discover groundbreaking fashion designs,” said Beyond founder and creative director David Robustelli in an interview with FashionUnited before the exhibition.


Which looks can visitors try on?

Among the highlights of the event are Marjan Pejoski's swan gown, controversially worn by Icelandic singer Björk at the 2001 Oscars; Harry Styles' Steven Stokey Daley outfit from his 'Golden' video; and Sam Smith's inflatable latex suit by Harri from this year's Brit Awards. Visitors can also view Christopher Kane's breakthrough neon collection, Russell Sage's repurposed Union Jack jacket, which Kate Moss wore for Vogue, and a massive blue ruffle dress by Molly Goddard. NewGen alumni featured in the exhibition include Lee Alexander McQueen, Christopher Kane, Charles Jeffrey, Christopher Raeburn, Erdem, Henry Holland, Kim Jones, J.W. Anderson, Mary Katrantzou, Molly Goddard, Roksanda, Simone Rocha, Stuart Vevers, Priya Ahluwalia, Saul Nash, Grace Wales Bonner, and Bianca Saunders.

In a backstage section, the exhibition recreates the moment just before a fashion show, complete with dressed models, hair and make-up, and accessories. AR-enhanced mirrors let visitors experiment with make-up and headwear looks from nine actual runway creations.

The nine looks visitors can choose from include creations from Charles Jeffrey's SS18 collection, Chet Lo's SS23, Gareth Pugh's SS07, Henry Holland's AW08, Liam Hodges, Louise Gray's 2012 collection for Topshop, Marques'Almeida's SS15, Matthew Williamson's SS98, and Richard Quinn's AW18.


How does it work?

Creative tech studio Beyond works with 3D and augmented-reality technologies and has run targeted campaigns for Louis Vuitton, Dior, Gucci, Adidas, Tommy Hilfiger, and other brands. As part of a tribute to Virgil Abloh, the studio created a version of Louis Vuitton's sold-out fortune cookie bag that people could explore in 3D. “Interactive experiences realise a higher form of engagement,” says Robustelli.

In terms of the future, the creative director is positive that digital experiences are here to stay for the fashion industry: “They will be an add-on, between identities. There will be different identities — social, physical, and virtual ones. These identities will merge more and more and we will be dressing avatars as we would dress ourselves in real life.”

For brands that want to start out with digital experiences and AR, Robustelli strongly advises collaborating with studios and agencies that have experience. “It is impossible to enter the field without experience,” he emphasises. It is also important to strike the right balance between doing too much and too little: “Brands may want to throw in everything but you don’t want to oversell yourself,” he cautions. At the same time, one should not underdo it either, but understand the limitations of the technology.

However, brands are well advised to invest in this area: “In the future, consumers are probably more likely to enter a virtual space than an actual store,” believes Robustelli.

Those who would like to get a sneak preview of what AR is capable of can do so at The Design Museum’s exhibition “Rebel: 30 Years of London Fashion”, which will be on display until 11th February 2024.





