Google Gemini Has Become Much Better at Photoshop and I’m Excited

Google has updated the Gemini app (and website) to make the image-making process a little more intuitive, and what I once thought was a novelty may now be a worthy alternative to Photoshop. There’s still the typical AI crap, but the new model, tested under the name “nano banana” and now available to all Gemini users as Gemini 2.5 Flash Image, does a lot to let you fine-tune the image to your liking. All images still have a watermark and “AI-generated” warnings in the metadata, but be prepared to do a lot more checking to see if a photo is real—the new Gemini blurs those lines like never before.
Google Gemini Now Edits Real Photos Better
The twist on the updated model is its focus on preserving detail across multiple photos. Now, instead of generating a photo from scratch every time you ask the Gemini app for a photo, it can carry over parts of the original photo or a previously generated image and only change what you asked for. There are two big reasons why this is important, and, paradoxically, one of them actually means using less AI.
Say, for example, you have a photo of yourself wearing a red shirt, but you want it to be blue. Previously, you had two options: either manually edit the image in Photoshop, or use it as a prompt for the AI and keep generating until you got something close enough to the original photo, but with a blue shirt. With nano banana, Google has tweaked its model so that it now leaves most of the image untouched, changing only the shirt.
Here, for example, is the same situation with a couple of photos of me. Notice how the model retains small details like the curls of my hair, my expression, and my pose. The image isn’t perfect (my skin looks a little smoother in the edited version), but with the new updates, Gemini can figure out what I mean by “shirt” and focus most of the edits there. The shirt also looks a little unnatural, especially on the right shoulder, but I didn’t give Gemini much to work with in my prompt anyway. That’s where the next big change comes in.
Use Gemini to edit the same result multiple times
Here’s the real secret. Whether an image is entirely AI-generated or not, you can now use previously generated images as a basis for future generations. In other words, if Gemini doesn’t get something right the first time, you can ask it to try again until it gets it right.
To give you an idea of what it looks like, here’s the same photo of me in the blue shirt, but now with polka dots added to better match the red shirt in the original photo.
Here’s an entirely AI-generated image of a cat that I had Gemini change to orange.
This is very important for AI image generation. Previously, if you asked Gemini to make small changes to already generated content, you essentially got completely new photos every time, like these dogs in hats.
Now, the app can process the same photo multiple times, meaning that if the initial result doesn’t look convincing, you have a chance to fix it. In my opinion, this turns a tool that essentially spun a roulette wheel with each generation, hoping something useful came out, into a real threat to Photoshop.
For example, Google suggests using it to see what you would look like if you lived in a different decade or had a different profession. I admit, the results are pretty compelling for regular posts, especially if you upload a real photo for context. Here I am, standing next to the real Mona Lisa, but reimagined by the artist.
It’s not entirely realistic (why would there be a second Mona Lisa next to me?), but I can imagine a certain type of person getting so excited about this that they flood social media with posts like this. Spend a little time tweaking it, and you might even be able to make it look like I just went to the Louvre.
But if, like me, you’re skeptical about AI, there’s an upside here, too: this model still has clear room to grow.
The merging of photos is still not quite right
While the new Gemini updates make it much easier to iterate on existing photos, creating new content that can’t borrow heavily from an original photo still leans hard on AI generation. One of the additional features Google announced in this update is the ability to use Gemini to merge multiple source photos into one. But while the other changes mostly involve small tweaks to existing photos, merging requires the model to generate much more from scratch, and that’s where you’re likely to run into the same old problems.
For example, following one of Google’s suggested examples, I uploaded a photo of myself and my cat to Gemini and asked it to generate a photo of us cuddling. But while other tests I’ve run with this update have looked very similar to the original photos, the result here gave me a version of myself in a too-tight shirt, with too-shiny hair, hugging a too-fat cat. The broad strokes were right — my face still mostly looks like me, my cat’s fur pattern is more or less intact, and the couch is even the right color and general shape. But aside from some small inconsistencies with, say, the folds in the couch, or my dimples, or the lamp in the background (which appears to have two poles), anyone who’s met my cat knows she’s not all that big. The photo also has that Vaseline-like, over-processed look that’s typical of AI.
To some extent, this is expected. I didn’t upload many photos, and certainly none of me or my cat in the poses depicted in the AI-generated image. There’s no way the AI could have known what we’d look like from different angles, especially since my selfie was just a portrait. But the result shows that when the AI runs out of useful input and has to intuit what a scene should look like, it still runs into the same familiar problems that make its output fairly easy to distinguish from real photos. Sure, I could have made the generated photo look more realistic by uploading input photos closer to what Gemini wanted, but then I have to wonder what the point of involving AI in the editing process is.
Either way, I can say with confidence that advanced AI edits will still require significant human intervention to be convincing.
Get ready for a fusion of AI and reality
I find the new Gemini updates most impressive when used for small tweaks, and that’s where I think Photoshop is at risk. I like to think I have a knack for recognizing AI-generated photos, but when scrolling quickly, I’m not sure that a picture of me in a blue shirt would raise any alarm bells.
What does this mean? First, it means that free AI tools have finally reached the point where you can use them to do, with natural language prompts, what used to take minutes to do manually. Adobe has already announced plans to bring nano banana to Photoshop, but be prepared for more changes to traditionally untouchable apps as AI advances. For now, at least in small ways, this could really challenge your traditional workflow.
Those who aren’t content creators will have to develop an even more discerning eye for what’s real and what’s not online. While images created entirely by AI are often fairly easy to spot, and more realistic edits can be largely innocuous (no one cares what color shirt I’m wearing), Gemini’s updates have made it easier than ever to blend reality with a little fiction. Here’s a photo I took of Taylor Swift in a red baseball cap in the new Gemini, if you know what I mean.
While we wait to see how this all plays out, it’s a good time to remember that if an image really does raise red flags, Gemini adds an AI watermark to the bottom left of every image it generates and flags those photos in the metadata. You can check the metadata on both iPhone and Android by swiping up on a saved photo. There are ways to scrub metadata, but as a fallback, since the most convincing edits are likely to start from real photos (which is what I did with Taylor Swift, shown above), in a pinch you can use Google’s reverse image search to try to find the unaltered original. Be careful.
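If you'd rather not trust an app's swipe-up panel, you can also inspect a file's metadata yourself. Here's a rough, stdlib-only Python sketch that walks a PNG's chunks and scans its `tEXt` entries for provenance markers. The marker strings and the `Software` field used in the demo are illustrative assumptions, not Google's documented tags (Google's real system pairs an invisible SynthID watermark with standard metadata fields, which dedicated tools read more reliably):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def text_chunks(data: bytes) -> dict:
    """Walk a PNG's chunk list and collect tEXt key/value pairs."""
    assert data.startswith(PNG_SIG), "not a PNG"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4B length + 4B type + body + 4B CRC
    return out

# Marker strings that suggest AI provenance -- illustrative guesses,
# not Google's actual tag values.
AI_HINTS = ("trainedalgorithmicmedia", "ai-generated", "made with google ai")

def looks_ai_generated(data: bytes) -> bool:
    """Heuristic check: does any text chunk mention an AI marker?"""
    blob = " ".join(text_chunks(data).values()).lower()
    return any(hint in blob for hint in AI_HINTS)

def chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, body, CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Demo: build a tiny 1x1 grayscale PNG with a provenance note baked in.
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + 1 pixel
text = chunk(b"tEXt", b"Software\x00Made with Google AI")
png = PNG_SIG + ihdr + text + idat + chunk(b"IEND", b"")

print(looks_ai_generated(png))  # True: the tEXt chunk carries the marker
```

This only catches tags that survive in the file; as noted above, metadata can be scrubbed, so treat a clean result as "no evidence," not proof a photo is real.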