All the New Google I/O Features You Can Try Right Now

Google I/O 2025 was packed with announcements. The problem is that Google doesn't always clearly spell out which features are brand new, which are already available, and which are still to come.

While there are plenty of features on the horizon to look out for, and some you may have been using for a while, Google rolled out a number of brand-new features right after announcing them. Here are all the Google I/O features you can check out right now, though some of them require payment.

Imagen 4

Credit: Google

Google's latest AI-powered image generation model, Imagen 4, is available today. Google hasn't said much about what specifically changed in this new model, but it does say Imagen is now faster and can create images at up to 2K resolution in additional aspect ratios.

The change the company has focused on most is typography: Google claims that Imagen 4 can generate text without the garbled characters you usually associate with AI image generators. On top of that, the model can incorporate different art styles and design choices depending on the context of the prompt. You can see this in the image above, which uses a pixelated design for the text to match the 8-bit look of the comic.

You can try out the latest Imagen model in Gemini, Whisk, Vertex AI, and Workspace apps like Slides, Vids, and Docs.

AI Mode

Credit: Lifehacker

AI Mode essentially turns Search into a Gemini chat: it lets you ask more complex, multi-step questions. Google then uses a technique it calls "query fan-out" to scan the web for relevant links and generate a full answer from those results. I haven't dug too deeply into the feature yet, but it mostly works as advertised; I'm just not sure it's much more useful than scanning the links yourself.

Google has been testing AI Mode since March, but it's now available to everyone in the U.S. If you want to use it, you should see a new "AI Mode" option next to the search bar on the Google homepage.

"Try On"

Credit: Google

Shopping online is more convenient than shopping in person in every way except one: you can't try on the clothes beforehand. Once they arrive, you try them on, and if they don't fit or you don't like the look, you have to send them back.

Google wants to eliminate (or at least significantly reduce) this hassle. Its new "try on" feature scans a picture of yourself that you provide to get an idea of your body. Then, when you browse new clothes online, you can select "try on," and Google's AI will generate an image of you wearing that item.

It's an interesting concept, but also a little creepy. Personally, I don't want Google analyzing images of me so it can render different types of clothing on my body more accurately; I'd rather take the risk and deal with a return. But if you're curious, you can try the experimental feature in Google Labs today.

Jules

Jules is Google's "asynchronous, agent-based coding assistant." According to Google, the assistant clones your codebase into a secure Google Cloud virtual machine so it can perform tasks like writing tests, building features, generating changelogs, fixing bugs, and upgrading dependencies.

The assistant runs in the background and doesn't train on your code, which is a bit refreshing coming from a company like Google. I'm not a coder, so I can't say for sure whether Jules will be useful. But if you are, you can test it out for yourself: Jules is currently available as a free public beta for anyone who wants to try it, though Google says there are usage limits and that it'll charge for different Jules plans once the "platform matures."

Speech Translation in Google Meet

Credit: Google

If you're a Google Workspace subscriber, this next feature is pretty awesome. As revealed during the I/O keynote, Google Meet now has a live speech translation feature. Here's how it works: Let's say you're talking to someone in a Google Meet call who speaks Spanish, but you only speak English. You'll hear the other person speak Spanish for a moment before an AI voice overlays an English translation. They'll get the reverse on their end when you start speaking.

Google is working on adding more languages in the coming weeks.

Google AI Ultra Subscription

Credit: Google

There's a new subscription, though it's not for the faint of heart. At I/O, Google announced a new "AI Ultra" subscription that costs a whopping $250 per month.

That eye-watering price tag does come with some serious AI perks: You get the highest usage limits for all of Google's AI models, including Gemini 2.5 Deep Think, Veo 3, and Project Mariner. It also comes with 30TB of cloud storage and, funnily enough, a YouTube Premium subscription.


You really have to be a big fan of AI to spend more than $3,000 a year on this subscription. If your curiosity about AI is more casual, you might be better off with Google's "AI Pro" plan, the new name for Google AI Premium, which includes the same benefits as before plus access to Flow (which I'll cover below).

Veo

Veo 3 is Google’s newest AI video model. However, unlike Imagen 4, it’s only available to AI Ultra subscribers. Unless you’re willing to spend $250 a month on Google’s service, you’ll have to stick with Veo 2.

Google claims that Veo 3 is better at real-world physics than Veo 2 and can handle realistic lip syncing. You can see this in the clip above, which shows the "ancient mariner" reciting a poem. His lips actually match the speech, and the video is crisp, with realistic touches. I personally don't think it looks "real," and it still has plenty of telltale signs of AI video, but there's no doubt we're entering dangerous waters with AI video.

That said, AI Pro subscribers stuck with Veo 2 also get some new video model capabilities: camera controls to dictate how your footage should look; options to adjust the aspect ratio of your clip; tools to add or remove objects from your scene; and controls to "redraw" or extend your clip's scene.

Flow

Google didn’t just update its AI-powered video model: It also released an AI-powered video editor called Flow.

Flow lets you create videos using Veo 2 and Veo 3, but it also lets you edit those clips on a timeline and control the camera movements of your clips. You can use Imagen to create an element you want to add to a scene, and then ask Veo to create a clip with that element.

I'm sure AI movie fans will love this, but I remain skeptical. I could see it being a useful tool for storyboarding ideas, but for creating actual content? I know I don't want to watch full shows or movies created by AI. Sure, the weird AI Instagram videos make me laugh, but I don't think Reels is Google's end goal here.

Flow is available to both AI Pro and AI Ultra subscribers. AI Pro subscribers can use it with Veo 2, while AI Ultra subscribers can choose between Veo 2 and Veo 3.

Gemini in Chrome

Credit: Google

AI Pro and AI Ultra subscribers now have access to Gemini in Google Chrome, which appears in the toolbar of your browser window. You can ask the assistant to summarize a web page, as well as request information about specific elements on that page. Google plans to add agentic functionality in the future, so Gemini will be able to navigate websites for you, but for now it's really limited to those two functions.
