You Can’t Use AI to ‘Enlarge and Enhance’ Kate Middleton’s Grainy Photos

The Internet’s latest obsession is Kate Middleton, specifically her whereabouts following her unexpected January surgery. Despite the initial announcement that the princess would not resume her duties until Easter, the world couldn’t help but speculate and theorize about Kate’s health and the state of her marriage to Prince William. It didn’t help, of course, that the only images of the princess published since then are, well, not definitive. There were grainy photographs taken from afar, and, of course, the infamous family photo, which was later revealed to have been doctored. (A post on X (formerly Twitter) attributed to Kate Middleton later apologized for the edited photo.)

Finally, The Sun published a video of Kate and William walking around a farm shop on Monday, which should have put the matter to rest. However, the video has not appeased the most ardent conspiracy theorists, who believe the quality of the video is too low to confirm whether the woman walking is indeed the princess.

In fact, some go so far as to claim the footage shows someone who isn’t Kate Middleton at all. To prove it, they turned to AI photo software to sharpen the pixelated video frames and settle once and for all who was hanging out with the future King of England:

[Embedded tweet, since deleted, showing an AI-“enhanced” result]

That’s it, people: this woman is not Kate Middleton. It’s… one of those three. Case closed! Or wait, it’s actually the woman in the video:

[Embedded tweet, since deleted, showing a different AI-“enhanced” result]

Eh, maybe not. God, these results are completely inconsistent.

That’s because these AI “enhancement” programs don’t do what those users think they do. None of the results prove that the woman in the video is not Kate Middleton. All they prove is that AI can’t tell you what a pixelated person actually looks like.

I don’t necessarily blame those who think AI has that kind of power. After all, over the past year or so we’ve seen AI-powered image and video generators do extraordinary things: if Midjourney can render a realistic landscape in seconds, and OpenAI’s Sora can create a realistic video of nonexistent puppies playing in the snow, why can’t a program sharpen a blurry image and show us who is really behind those pixels?

AI is only as good as the information it has.

You see, when you ask an AI program to “enhance” a blurry photo or generate additional parts of an image, you are really asking the AI to add more information to the photo. Digital images are just 1s and 0s, after all, and showing more detail on someone’s face requires more information. But AI cannot look at a blurry face and use sheer computing power to “know” who is really there. All it can do is take the information available and guess what should be there instead.
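To make this concrete, here’s a minimal Python sketch (using Pillow; “face.jpg” is a hypothetical stand-in for a frame of the video) showing how little information a low-resolution image actually contains:

```python
from PIL import Image

# Open a photo (hypothetical filename) and throw away most of its detail
# by shrinking it to a 16x16 thumbnail, roughly what a distant, grainy
# video frame gives you of a face.
original = Image.open("face.jpg").convert("RGB")
tiny = original.resize((16, 16), Image.LANCZOS)

# "Enhancing" means scaling back up. Interpolation can only smear the
# 16 x 16 = 256 pixel values we kept; the original detail is gone, and
# nothing can recover it from this file alone.
upscaled = tiny.resize(original.size, Image.BICUBIC)
upscaled.save("face_upscaled.jpg")

# Anything sharper that an AI tool shows you was invented by the model,
# not recovered from the image.
```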

So, in the case of this video, the AI programs are taking the pixels we have of the woman in question and, based on their training data, adding detail according to what they think should be there, not what actually is there. That’s why you get such wildly different (and often terrible) results each time: it’s all guesswork.
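A toy example shows why the guesses can never agree: many different high-resolution images collapse into exactly the same low-resolution frame, so the frame alone cannot single out one of them. Here’s a sketch with synthetic numpy arrays standing in for real images:

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, block=8):
    """Average each block x block patch into one pixel, like a crude camera."""
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

# One high-resolution "face"...
face_a = rng.random((64, 64))

# ...and a very different one, built by adding noise that averages to zero
# inside every 8x8 patch, so each patch keeps the same mean brightness.
noise = rng.random((64, 64))
block_means = noise.reshape(8, 8, 8, 8).mean(axis=(1, 3))
noise -= block_means.repeat(8, axis=0).repeat(8, axis=1)
face_b = face_a + noise

# Two clearly different images produce the *same* low-resolution frame:
print(np.allclose(downsample(face_a), downsample(face_b)))  # True
print(np.abs(face_a - face_b).max())  # large: the originals disagree a lot
```

An “enhancer” handed that low-res frame has to pick one of countless equally consistent originals, and which one it picks depends on its training data and its random seed. That’s why two tools, or even two runs of the same tool, disagree.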

Jason Koebler of 404 Media offers an excellent demonstration of how these tools simply don’t work. Not only did Koebler try programs like Fotor and Remini on The Sun’s video, which produced the same terrible results as the other attempts online, he also ran them on a blurry image of himself. The results, as you might have guessed, were inaccurate. Clearly, Jason Koebler is missing and his role at 404 Media has been taken over by an impostor. #Koeblergate.

Some AI programs handle this better than others, but usually in specific use cases. Again, these programs add data based on what they think should be there, so this works well when the answer is obvious. Samsung’s “Space Zoom,” for example, which the company touted as capable of taking high-quality images of the Moon, turned out to use AI to fill in the missing data. Your Galaxy takes a photo of the blurry Moon, and the AI supplements that information with pieces of the real Moon.
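Here’s a rough illustration of that trick, my own conceptual sketch rather than Samsung’s actual pipeline: recognize the object, then paste in detail from a stored reference instead of from the photo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized(img):
    return (img - img.mean()) / (img.std() + 1e-8)

def enhance_moon(blurry_patch, moon_reference, threshold=0.6):
    # Crude recognizer: correlate the patch with a stored Moon image.
    similarity = float((normalized(blurry_patch) * normalized(moon_reference)).mean())
    if similarity > threshold:
        # The added "detail" is the reference's fine structure, information
        # that was never captured by the phone's camera at all.
        detail = moon_reference - gaussian_filter(moon_reference, sigma=3)
        return blurry_patch + detail
    return blurry_patch  # unrecognized subjects get no pasted-in detail
```

The guess is safe here because the Moon always shows Earth the same face, so the “obvious answer” is nearly always the right one.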

But the Moon is one thing; specific people are another. Sure, if you had a program like “KateAI” that was trained solely on images of Kate Middleton, it could probably turn a pixelated woman’s face into Kate Middleton’s, but only because it was trained to do that, and it certainly couldn’t tell you whether the person in the photo was Kate Middleton. There is currently no AI program that can “enlarge and enhance” to reveal who a pixelated face actually belongs to. If there isn’t enough data in the image for you to determine who is really there, there isn’t enough data for the AI either.
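To see why even a hypothetical “KateAI” would prove nothing, consider a toy sketch: an “enhancer” whose entire training set is one person can only ever answer with that person.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for a training set containing nothing but photos of one person.
training_set = [rng.random((64, 64)) for _ in range(100)]

def kate_ai(blurry_face):
    """Return the training image closest to a crude upscale of the input."""
    target = np.kron(blurry_face, np.ones((8, 8)))  # 8x8 input -> 64x64
    return min(training_set, key=lambda img: np.abs(img - target).mean())

# Feed it anyone at all: the output is always "Kate," because that's all
# the model has ever seen. The result reflects the training data, not the
# identity of the person in the frame.
anyone = rng.random((8, 8))
print(kate_ai(anyone).shape)  # (64, 64)
```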
