Google expands Gemini with real-time AI features

Gemini’s new live video function enables real-time interpretation of smartphone camera feeds, giving users instant answers to their questions through interactive AI-powered conversations.


Google has begun rolling out real-time AI features for its Gemini system, allowing it to analyse smartphone screens and camera feeds instantly. These capabilities, which will be available to select Google One AI Premium subscribers, build on the company’s earlier ‘Project Astra’ demonstration.

The live video feature will enable Gemini to interpret smartphone camera feeds in real time, providing users with instant answers and insights.

The new functionality also allows users to engage in back-and-forth conversations with Gemini based on their screen’s content. A Reddit user recently demonstrated the ‘Share screen with Live’ feature, accessible via the Gemini overlay, showcasing its ability to process and respond to information directly from a device’s display.

Google has confirmed that these updates will first roll out to Gemini Advanced subscribers under the Google One AI Premium Plan, with Pixel and Galaxy S25 owners among the first to gain access.

In addition to real-time AI video capabilities, Google has introduced ‘Canvas,’ a tool designed to help users refine documents and code seamlessly. Canvas allows for real-time edits and streamlines the process of developing prototypes for web apps, Python scripts, and other digital projects.

Another notable addition is ‘Audio Overview,’ which transforms written documents, slides, and research reports into podcast-style discussions between two AI-generated hosts.

The feature aims to make complex information more engaging and accessible by delivering content in a conversational format. Google continues to expand Gemini’s capabilities, reinforcing its position at the forefront of AI-driven user experiences.

For more information on these topics, visit diplomacy.edu.