Seven New Gemini Features Announced by Google at I/O 2025

Google I/O 2025’s keynote could more accurately be called the Google AI Show. Almost everything the company talked about was based on AI, with some features promised for the future and some available today. The features were spread across Google’s entire product portfolio, but here are some of the ones you’re likely to actually see.

It’s hard to talk about Gemini because it refers to a collection of models (like Gemini Flash, Gemini Pro, and Gemini Pro Deep Research), the various versions of those models (the latest version seems to be 2.5 for most of them), and the various apps through which those models are accessible. There’s a dedicated Gemini app, the voice assistant in things like Pixel phones and watches, and Gemini tools built into apps like Google Docs, Gmail, or Search.

I’ll try to be as detailed as possible about what features will appear in which products, but keep in mind that Google sometimes tends to announce the same thing multiple times.

Agent mode coming to Gemini, Search and other apps

The Gemini app is getting a new agent mode that can perform tasks for you while you do something else. Google showed an example of asking Gemini to find apartments in a city. The app then searches the web for listings, filters them based on criteria you specify, and can offer to give you tours of the apartments.

The most interesting aspect of this is that Google presents this as a task you can have Gemini repeat regularly. So, for example, if you want Gemini to search for new apartments every week, the app can repeat the process, carrying over what it learned from previous searches.

Agent Mode also appears in Google Search for certain queries. Google uses the example of a query for tickets to an upcoming event. Google crawls ticket listing sites, cross-references your preferences, and presents the results.

Gmail will impersonate you when replying to your emails

Gmail has had Smart Replies for a while now, but they can sound pretty canned (at least without some editing). That’s a dead giveaway that you’re not really paying attention. To help you phone it in without your friends noticing, Gmail will soon be able to tailor its replies by referencing your past emails and even Drive documents.

Google gives the example of a friend asking how you planned your recent vacation, something we all email each other all the time. In this case, Gmail can craft a response based on your email history, with advice you’re likely to give, and even write it the way the AI thinks you would write it.

Thought summaries will summarize how the AI summarizes its thought process

Yes, you read that right. AI “reasoning” models typically work by taking your request, generating text that breaks it down into smaller parts, sending those parts back to the AI, and then executing each step. That’s a lot of instructions happening behind the scenes on your behalf. Typically, reasoning models (including Gemini) will have a little drop-down list to show you the steps they’ve taken in between.

If even that’s too much for you, Gemini will now summarize its thought process. In theory, this is to make it easier to understand why Gemini came to the answers it gives you.
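The loop described above (plan, execute each step, then condense the trace) can be sketched in a few lines of Python. Everything here is invented for illustration: the function names, the canned plan, and the summary format are hypothetical stand-ins for what a real model would generate, not Gemini’s actual implementation.

```python
# Toy sketch of a "reasoning" loop with a thought summary at the end.
# A real system would call a model at each stage; here every stage is faked.

def plan(request: str) -> list[str]:
    # Stand-in for the model breaking the request into smaller steps.
    return [
        f"understand: {request}",
        f"gather facts for: {request}",
        f"draft answer to: {request}",
    ]

def execute(step: str) -> str:
    # Stand-in for the model carrying out one planned step.
    return f"done ({step})"

def summarize_thoughts(trace: list[str]) -> str:
    # The "thought summary": condense the full step-by-step trace into one line,
    # keeping only each step's label rather than the whole intermediate output.
    labels = [s.removeprefix("done (").split(":")[0] for s in trace]
    return f"Completed {len(trace)} steps: " + ", ".join(labels)

def answer(request: str) -> tuple[str, str]:
    trace = [execute(step) for step in plan(request)]
    return trace[-1], summarize_thoughts(trace)

result, summary = answer("why is the sky blue")
print(summary)  # one-line summary instead of the full trace
```

The point of the summary stage is exactly what Google describes: the user sees one readable line about what the model did, instead of the full intermediate text.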

Native audio output will whisper to you (in your nightmares)

This is technically a new feature of the Gemini API, which means developers can use these tools in their apps. Native audio output will allow developers to generate natural-sounding speech. In its demo, Google showed voices that could switch between multiple languages, which was pretty cool.


What’s not so cool, however, is that the model can also whisper. I don’t yet know what the practical use cases are for an AI-generated voice that can whisper, but I do know that I won’t be able to get it out of my head for a week. At best.

Jules will fix your code errors in the background while you work

Last year, Google announced Jules, a coding agent that can help you with your code, similar to GitHub’s Copilot. Jules is now in public beta. Google says Jules can fix bugs while you work on other tasks, bump dependencies, and even provide an audio summary of the changes it makes to your code.

Google Search lets you virtually try on clothes while shopping online

I’m not very good at imagining how clothes will look on my body, so this new try-on feature might come in handy. Google is launching a Search Labs experiment that lets you upload a full-length photo of yourself, which Google will edit to show you how the clothes will look on you.

The company is also integrating shopping tools that can track the best price and even buy items for you via Google Pay using your saved payment and shipping details. This isn’t quite available yet, and frankly, we’d like to know a little more about how this process works and how unwanted purchases are prevented before we recommend using it.

New Veo and Imagen models will generate audio and video

Video, by definition, is a series of images played at a speed fast enough to convey a sense of motion. By that definition, I can confidently say that the demos of Google’s new Veo 3 model do indeed show video. How good that video is depends on the eye of the beholder, I suppose.

Google seems to be betting that users will find the video produced by Veo 3 (and, by association, the images from Imagen 4) worth checking out, as the company is also building a video editing suite around it. Flow is a video editing tool that ostensibly allows editors to enhance and re-generate clips to get the look they want.

Google also says that Veo 3 can generate sounds to accompany its videos. For example, in the owl demo Google showed, Veo also generates forest sound effects. We’ll see how much control you get over these elements (can you edit individual sounds separately, for example?), but for now, the demos speak for themselves. Veo 3 is available now in the Gemini app for Ultra subscribers.
