All the Most Important News and Features Announced at Google I/O 2025

It should have been obvious that Google I/O 2025 would be packed, given that the company felt the need to hold a separate event to cover all of its Android news. Even so, color me surprised that Google managed to fill a nearly two-hour presentation with announcements and reveals, most of them about AI.

Of course, not all AI announcements are created equal. Some news was aimed at enterprise users, and some at developers. But many of the features discussed are also on their way to consumer devices, some as early as today. Those are the updates I'm focusing on here: features you can expect to try today, in the coming weeks, or at some point in the near future.

Gemini Live Coming to iPhone

Earlier this year, Google released Gemini Live to all Android users via the Gemini app. The feature lets you share your camera or screen with Gemini so it can help you answer questions about what you see. Starting today, Google is bringing the feature to iPhone via the Gemini app as well. As long as you have the app, you can share your camera and screen with the AI, no matter what platform you're on.

AI Mode Is the Future of Google Search

Google has been testing an AI mode in search since March. The feature essentially turns Google search into a Gemini experience, allowing you to combine multiple questions into one complex query. Google says its AI can handle breaking down your query and searching the web for the most relevant sources. The result, in theory, is a comprehensive report that answers every aspect of your search, including source links and images.

AI Mode will roll out to all users, not just testers, in the coming weeks. But Google isn't just expanding access to the AI Mode experience: the company also announced new AI Mode features at its I/O conference.

Combine multiple searches into one

First, there's Deep Search, which multiplies the number of searches AI Mode would normally run for your query and generates an "expert, fully citable report" for you. I'd still fact-check it carefully, given AI's habit of hallucinating. AI Mode is also getting access to Gemini Live, so you can share your screen or camera right in Search.

Use Agent Mode as a real personal assistant

AI Mode is also getting Project Mariner's "agent capabilities," which essentially means you can rely on AI Mode to complete tasks for you. For example, you'll be able to ask AI Mode to find you "available tickets to the Reds game this Saturday in the lower level," and the bot will not only do the search for you, but also fill out the necessary forms. Google says this functionality will apply to event tickets, restaurant reservations, and local meetups.

You can see this in action with Agent Mode, which can theoretically perform complex tasks on your behalf. We don't know much about how it will work yet, but we do have a visual example from Google I/O. During the presentation, Alphabet CEO Sundar Pichai tasked Gemini's Agent Mode with finding an apartment with laundry facilities while sticking to a certain budget. Gemini then got to work: opening a browser, loading Zillow, finding matching apartments, and booking a tour.

AI Mode will use your previous search history to provide you with more relevant results. This includes results that are relevant to your location — like local recommendations for an upcoming trip — as well as preferences (if you tend to make outdoor dining reservations, AI Mode might recommend outdoor dining when you ask to find dinner reservations).

New Gemini Features Coming to Workspace

At I/O, Google announced a number of new Gemini features, some of which will be coming to Workspace.

One of the features Google has been focusing on the most is personalized Smart Replies in Gmail. While Gmail already has an AI-powered Smart Reply feature, this one goes a step further and bases its responses on all of your Google data. The goal is to generate a response that sounds like you wrote it, including any questions or comments you might have about the email in question. In practice, I'm not sure why I'd want AI to handle all of my communication for me, but the feature will be available later this year, initially for paid subscribers.

If you're using Google Meet on a paid plan, live speech translation begins rolling out today. The feature automatically dubs speakers during a call into the target language, like an instant universal translator. Say you're speaking English and your meeting partner is speaking Spanish: you'll hear them begin in Spanish before the AI voice takes over, translating into English.

Try It On

Google wants to help you stop returning the clothes you order online. The company announced a new feature called "try on," which uses AI to show you what you'll look like in any clothes you're thinking about buying.

This isn't just a concept: starting today, Google is inviting Search Labs users to try it out. If you want to learn more about this feature and how to use it, check out our full guide.

Android XR

As rumors suggested, Google revealed a bit about Android XR, the company's software for glasses and headsets. Most of the news it shared had been previously announced, but we saw some interesting features in action.

For example, when using the upcoming glasses with Android XR built in, you’ll be able to access a discreet HUD that can show you everything from photos to messages to Google Maps. (My personal favorite would be using Google Maps with augmented reality while walking around a new city.) We also got a live demo of speech translation on stage, with Android XR overlaying an English translation on the screen while two speakers spoke in different languages.


While there’s no exact timeline for when you’ll be able to try Android XR, the big news from Google is that the company is partnering with Warby Parker and Gentle Monster to create glasses with the service built in.

Veo 3, Imagen 4 and Flow

At this year's I/O conference, Google unveiled new generations of two of its generative AI models: Imagen 4 (images) and Veo 3 (video).

Imagen 4 generates higher-quality, more detailed images than Imagen 3, Google's previous image generation model. But the company specifically highlighted Imagen 4's improvements in text rendering: if you ask the model to generate, say, a poster, Google says the text will be both accurate to your prompt and stylistically appropriate.

Google opened the show with videos created by Veo 3, so it's safe to say the company is quite proud of its video-making model. While the results are crisp, colorful, and sometimes packed with detail, they're definitely still plagued by the usual quirks and issues of AI-generated video. But the bigger story here is Flow, Google's new AI-powered video editor. Flow uses Veo 3 to create clips that you can then assemble as in any other non-linear editor. You can use Imagen 4 to create an element you want in a frame, then ask Flow to add it to the next clip. In addition to cutting or extending a clip, you can control the camera movement of each shot independently.

This is the most “impressive” technology I’ve seen, but I can’t imagine any use for it outside of high-tech storyboarding. Maybe I’m in the minority, but I definitely don’t want to watch AI-generated videos, even if they’re created with tools similar to those used by human video creators.

Veo 3 is only available to Google AI Ultra subscribers, though Flow is available in a limited capacity with Veo 2 for AI Pro subscribers.

Two New Chrome Features

Chrome users can look forward to two new features after Google I/O. First, Google is bringing Gemini directly into the browser, so there's no need to open the Gemini website. Second, Chrome will be able to update your old passwords on your behalf. That feature launches later this year, though you'll have to wait for individual websites to support it.

New way to pay for AI

Finally, Google is offering new subscriptions to access its AI features. Google AI Premium is now called AI Pro and remains largely the same, except for the new ability to access Flow and Gemini in Chrome. It still costs $20 per month.

The new subscription is Google AI Ultra, which costs a whopping $250 per month. For that price, you get everything in Google AI Pro, but with the highest limits for all AI models, including Gemini, Flow, Whisk, and NotebookLM. You get access to Gemini 2.5 Pro Deep Think (the company’s newest and most advanced reasoning model), Veo 3, Project Mariner, YouTube Premium, and 30 TB of cloud storage. What a bargain.
