You Can Now Edit ChatGPT Images, Photoshop-Style

You’ve no doubt noticed the plethora of AI art generators that have popped up over the last year or so: super-smart engines that can create images that look just like real photographs, or like works of art made by real people. As time goes on, they get more powerful and pick up more and more features – you can now even find an AI image tool in Microsoft Paint.

New to the DALL-E AI image model, available to ChatGPT Plus subscribers who pay $20 per month, is the ability to edit parts of an image, much as you would in Photoshop: you no longer have to generate an entirely new image just because you want to change one element of it. You can show DALL-E the part of the image you want to adjust, give it a few new instructions, and leave the rest alone.

This addresses one of the big limitations of AI art, which is that every image (and video) comes out completely unique and different, even if you use identical prompts. That makes it difficult to achieve consistency across images or to fine-tune an idea. Even so, these AI art generators, built on so-called diffusion models, still have plenty of limitations to overcome, as we’ll show you here.

Editing images in ChatGPT

If you subscribe to ChatGPT Plus, you can open ChatGPT on the web or in the mobile app and ask it to generate an image of whatever you like: a cartoon dog detective solving a case in a cyberpunk setting, a hilly landscape with a lone figure in the middle distance and storm clouds gathering overhead, or anything else. Within a few seconds, you’ll have your image.

To edit a picture, you can now click on the generated image and then on the Select button in the top-right corner (it looks like a pen drawing a line). You then adjust the size of the selection brush using the slider in the top-left corner and draw over the part of the image you want to change.

The editing interface in ChatGPT. Credit: Lifehacker.

This is a significant step forward: you can leave the rest of the image untouched and simply update the selection. Previously, if you sent a follow-up prompt asking to change one specific part of an image, the entire image would be regenerated and would quite likely look significantly different from the original.

Once you’ve made your selection, you’ll be prompted to enter new instructions for just that area of the image. As usual with these AI art tools, the more specific you are, the better: you can ask for a person to look happier (or less happy), or for a building to be painted a different color, and the changes you’ve requested will be applied.

Success! ChatGPT and DALL-E trade one dog for another. Photo: Lifehacker / DALL-E

From my experiments, ChatGPT and DALL-E seem to use the same AI trick we’ve seen in apps like Google’s Magic Eraser: intelligently filling in the selected region based on existing information in the scene, while trying to leave everything outside the selection untouched.
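If you’d rather script this kind of masked edit than click around in ChatGPT, OpenAI’s Images API exposes a similar “inpainting” capability. Here’s a minimal sketch; the file names and prompt are placeholders of my own, it assumes you have an API key set up, and at the time of writing the edits endpoint works with DALL-E 2 rather than DALL-E 3.

```python
# Minimal sketch of a masked ("inpainting") edit via OpenAI's Images API.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in
# the environment, and "castle.png" / "castle_mask.png" are placeholder files
# you supply yourself. The mask is a PNG the same size as the image, where
# transparent pixels mark the region the model may repaint -- the programmatic
# equivalent of the in-app selection brush.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",
    image=open("castle.png", "rb"),      # the original picture
    mask=open("castle_mask.png", "rb"),  # transparency = editable region
    prompt="Replace the dragon with a red dragon breathing fire",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the edited image
```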

It’s not the most advanced selection tool, and I noticed inconsistencies around the boundaries and edges of objects, which is perhaps to be expected given the relative lack of control you have when making selections. Most of the time, the editing feature worked reasonably well, although it’s by no means always reliable, and OpenAI will no doubt be looking to improve it in the future.

Where AI art reaches its limits

I tried to get the new editing tool to perform a variety of tricks. It did a good job of changing the color and position of a dog in a meadow, but fared less well at shrinking a giant man standing on castle ramparts – the man simply dissolved into a blurry patch of rampart, suggesting the AI was trying to paint around him without much success.

In a cyberpunk scene, I asked for a car to be added, but the car never showed up. In another castle scene, I asked for the flying dragon to be rotated so it faced the other way, turned from green to red, and given flames coming out of its mouth. After a few seconds of processing, ChatGPT removed the dragon completely.

Failure! ChatGPT and DALL-E removed the dragon instead of changing it. Photo: Lifehacker / DALL-E

This feature is still brand new, and OpenAI isn’t yet claiming it can replace human image editing – it clearly can’t. Things will improve, but these mistakes help show where the problems lie with certain kinds of AI-produced art.

What DALL-E and similar models are very good at is knowing how to arrange pixels into a convincing approximation of a castle (for example), based on the millions(?) of castles they’ve been trained on. However, the AI doesn’t know what a castle is: it doesn’t understand geometry or physical space, which is why my castles have turrets sticking out of nowhere. You’ll notice this in a lot of AI-generated art, in buildings, furniture, or any objects that aren’t rendered quite right.
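To make the “arranging pixels” point concrete, here’s a heavily simplified, schematic sketch of how a diffusion model generates an image. Nothing here is a real model – `denoiser` and `prompt_embedding` are hypothetical stand-ins, and the update rule is far cruder than the real DDPM/DDIM math – but the shape of the loop is the point: the model starts from random noise and repeatedly nudges pixels toward statistically plausible values, with no notion of objects or geometry anywhere.

```python
import numpy as np

# Schematic only: `denoiser` and `prompt_embedding` are hypothetical stand-ins,
# and this update rule is a crude simplification of real diffusion sampling.
def sample(denoiser, prompt_embedding, steps=50, shape=(512, 512, 3)):
    x = np.random.randn(*shape)  # start from pure random noise
    for t in reversed(range(1, steps + 1)):
        # Predict which part of the current pixels is "noise", conditioned on
        # the prompt -- this is where all the learned statistics live.
        predicted_noise = denoiser(x, t, prompt_embedding)
        x = x - predicted_noise / steps  # strip away a little of that noise
        if t > 1:
            # Re-inject a bit of fresh randomness: this is why identical
            # prompts still produce different images every time.
            x = x + 0.1 * np.random.randn(*shape)
    return x  # an arrangement of pixels, not a 3D model of a castle
```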

It’s quite white, but far from “plain”. Photo: Lifehacker / DALL-E

At their core, these models are probabilistic machines that don’t (yet) understand what they’re actually showing: that’s why people vanish into nothingness in so many OpenAI Sora videos – the AI is very cleverly arranging pixels rather than tracking people through a scene. You may also have read that AI struggles to create images of couples of different races, because in the training data, couples of the same race appear far more often.

Another recently noticed oddity is the inability of these AI image generators to produce a plain white background. These are incredibly smart tools in many ways, but they don’t “think” the way you or I do, or understand what they’re doing the way a human artist would – and that’s important to keep in mind when you use them.
