The Communiqué News

Mumbai [India], July 29: Ashwin Kumar's recent mythological animated film, 'Mahavatar Narsimha', has become an unexpected hit, captivating audiences nationwide.

Swati Bhat

Poster of Narsimha

Produced by Hombale Films, 'Mahavatar Narsimha' centers on Lord Narasimha, the half-man, half-lion incarnation of Lord Vishnu. The film has garnered a positive response and performed well at the box office, earning praise from ISKCON. ISKCON Siliguri has reserved an entire cinema for devotees to watch the movie. "We have reserved the entire cinema here for our devotees. Tomorrow, more devotees will come to see the film. Many devotees have gathered for the screening. It conveys a powerful message to people... This film promotes Indian culture and makes it easier for people to understand our philosophy," Nam Krishna Das, spokesperson for ISKCON Siliguri, told TC.

Earlier, the organization shared on their official social media account that 'Mahavatar Narsimha' serves as a "testimony to the dedication of numerous ISKCON members." They elaborated that these members have worked diligently to help the younger generation engage with spiritual content and understand the culture.

Housefull show of 'Mahavatar Narsimha'

"Audiences across the country have praised the storyline, the VFX and the overall presentation. Watch it with your family and friends and feel the presence of Lord Narasimha!" the post further added. Written and directed by Ashwin Kumar, 'Mahavatar Narsimha' has connected with viewers from all age groups, with positive responses pouring across social media platforms. The makers have also shared updates on the film's performance, revealing that it became the #1 movie at the Indian box office on Monday, July 28.

According to Bollywood trade analysts, the animated film's Hindi version has collected Rs 14.70 crore within three days of release, adding another hit to Hombale Films' slate.

Pritish Bagdi

Google Ask Photos

Google has paused the rollout of its AI-powered "Ask Photos" feature in Google Photos, citing feedback on latency, quality, and overall user experience. Initially launched as an experimental feature, "Ask Photos" uses Google's Gemini AI models to let users search their photo libraries with natural-language questions, interpreting photo contents contextually rather than simply matching keywords.

Product manager Jamie Aspinall announced the pause after the criticism, indicating a refined version should arrive in about two weeks, though Google has not committed to a firm return date. Alongside this, Google has enhanced search functionality within Google Photos, allowing more precise queries by using quotes for exact text matches. The update expands on features announced at Google I/O 2024 and aims to make searches more intuitive.

The pause reflects Google's ongoing scrutiny and refinement of its AI features amid intensifying competition in the AI space. Despite the setback, Google's vision for "Ask Photos" remains to enhance how users interact with their photo libraries through AI.
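To illustrate the distinction, here is a minimal sketch of the quoted-search semantics described above. The data and the `search` helper are hypothetical stand-ins for illustration only, and do not reflect Google Photos' actual implementation.

```python
# A minimal sketch of quote-based search semantics: a query wrapped
# in quotes matches text exactly, while an unquoted query falls back
# to looser keyword matching. Illustrative only; not Google's code.

def search(photos: list[dict], query: str) -> list[dict]:
    if query.startswith('"') and query.endswith('"'):
        # Exact-match mode: the quoted phrase must appear verbatim.
        phrase = query.strip('"').lower()
        return [p for p in photos if phrase in p["text"].lower()]
    # Loose mode: every keyword must appear somewhere in the text.
    words = query.lower().split()
    return [p for p in photos if all(w in p["text"].lower() for w in words)]

photos = [
    {"name": "receipt.jpg", "text": "Cafe Luna total $14.20"},
    {"name": "sign.jpg", "text": "Luna Park opens at noon"},
]
print([p["name"] for p in search(photos, '"Cafe Luna"')])  # ['receipt.jpg']
print([p["name"] for p in search(photos, "Luna")])         # both photos
```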

In an effort to stay ahead of industry rivals, Microsoft-backed OpenAI has announced its latest breakthrough, Sora, a cutting-edge text-to-video model.


Pritish Bagdi

The move underscores OpenAI's determination to maintain a competitive edge in the fast-growing field of artificial intelligence (AI), at a time when text-to-video tools are becoming increasingly popular.


What is Sora?

Sora, which means sky in Japanese, is a text-to-video diffusion model capable of producing minute-long videos that are difficult to distinguish from real footage.

OpenAI stated in a post on the X platform (formerly Twitter) that "Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions."

According to the company, the new model can also create lifelike videos from still images or user-supplied footage.

"We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction," the post read.

How can you try it?

Most of us will have to wait to use the new AI model. Although the company unveiled the text-to-video model on February 15, it is currently in the red-teaming stage.

Red teaming is the process in which a group of experts, known as the "red team", simulates real-world use to uncover flaws and vulnerabilities in a system.

"We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals," the business stated.

In the meantime, the company has shared a number of demonstrations in its blog post, and OpenAI's CEO has posted videos generated from user-requested prompts on X.

How does it work?

Imagine starting with a noisy, static-filled image on a TV screen and gradually removing the fuzz to reveal a clean, moving video. That is essentially what Sora does: it is a diffusion model that uses a transformer architecture to progressively remove noise and produce videos.
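For readers curious about the mechanics, here is a minimal sketch of that iterative denoising loop in Python. The `denoise_step` function is a hypothetical stand-in for a trained neural network; OpenAI has not published Sora's internals, so the structure and numbers here are illustrative only.

```python
import numpy as np

def denoise_step(noisy_video: np.ndarray, step: int) -> np.ndarray:
    """Placeholder for a learned model that estimates the noise
    remaining in `noisy_video` at a given diffusion step.
    A real model would be a large transformer; here we simply
    return a fraction of the input as a stand-in noise estimate."""
    return 0.1 * noisy_video

def generate(shape: tuple, num_steps: int = 50) -> np.ndarray:
    # Start from pure noise -- the "static-filled TV screen".
    video = np.random.randn(*shape)
    for step in reversed(range(num_steps)):
        # Each step removes a little of the estimated noise,
        # gradually revealing a coherent video.
        video = video - denoise_step(video, step)
    return video

# Generate a tiny dummy "video": 60 frames of 64x64 RGB.
clip = generate((60, 64, 64, 3))
print(clip.shape)  # (60, 64, 64, 3)
```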

It can produce entire videos at once rather than frame by frame, which helps it keep scenes consistent: a person, for instance, remains the same even if they briefly walk off-screen and return. Users direct the video's content by feeding the model text descriptions.

Think of how GPT models produce text word by word. Sora does something similar, but with images and videos: videos are divided into smaller segments known as patches, which play the role that tokens play in text models.
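Below is a rough sketch of how a clip might be cut into such spacetime patches, the video analogue of text tokens. The patch sizes are assumptions chosen for illustration; OpenAI has not disclosed Sora's actual patch dimensions.

```python
import numpy as np

def to_patches(video: np.ndarray, pt: int = 4, ph: int = 16, pw: int = 16) -> np.ndarray:
    """Split a (frames, height, width, channels) video into a
    sequence of non-overlapping (pt, ph, pw, channels) patches."""
    f, h, w, c = video.shape
    # Trim any remainder so the grid divides evenly.
    video = video[: f - f % pt, : h - h % ph, : w - w % pw]
    f, h, w, c = video.shape
    patches = (
        video.reshape(f // pt, pt, h // ph, ph, w // pw, pw, c)
        .transpose(0, 2, 4, 1, 3, 5, 6)  # group the patch-grid axes first
        .reshape(-1, pt, ph, pw, c)      # flatten the grid into a sequence
    )
    return patches

video = np.random.rand(60, 64, 64, 3)    # dummy 60-frame clip
seq = to_patches(video)
print(seq.shape)  # (240, 4, 16, 16, 3): a sequence of 240 spacetime patches
```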

"Sora builds on past research in DALL·E and GPT models. It uses the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data. As a result, the model is able to follow the user’s text instructions in the generated video more faithfully," the company said in the blog post.

However, the company has not provided any details on what kind of data the model is trained on.
