Google Lens vs. Bing Visual Search: a Breakdown
Google has held on to the "point your camera to learn more" market for a while now, first through its Translate app, which lets you point your smartphone's camera at signage in a foreign language and get translations on the fly, and now through Lens, which extends this technology to give you a wealth of information about the objects in photographs you've taken (or are about to take).
But Google's imaging AI isn't the only game in town. Microsoft made a big splash in June with the introduction of Visual Search for Bing, which you can access through the Bing Search app (iOS, Android), as well as Microsoft Launcher and Edge on Android.
You probably don't want to load your phone up with multiple (or redundant) search apps and browsers, so I set up a series of experiments to see how the two visual search technologies stack up.
Round one: everyday objects
I went around my room, collected a random assortment of things, and placed them in a well-lit spot on my desk. I then photographed each subject from roughly the same distance and angle – or at least from a vantage point that should give each app a fair shot at identifying the subject.
And no, this time I didn't use a pig. First, a delicious bottle of Framboise lambic:
Then a Funko Pop:
How about a classic science fiction novel?
To round out the category, here's a video game that I love but haven't played much lately:
Moving on to more practical uses of these camera technologies, here's how each app handles a simple business card. I've blurred out some key details in the interest of privacy, but I'll describe how each app fared in the captions.
Round two: monuments
Since Lifehacker doesn't give me the travel budget for these kinds of experiments, and "scanning landmarks and monuments for more information" is one of the key features of each app, I had to improvise. I pulled up photographs of monuments and scanned them in each app – both apps can also scan photos already in your camera roll – to see what Google Lens and Bing Visual Search recognize.
Anyone want to climb a giant mountain? I’ve heard the cables aren’t all that bad:
And finally, the iconic tourist trap in the San Francisco Bay Area – no, not an In-N-Out Burger:
Round three: fashion
Both Google Lens and Bing Visual Search claim to be able to identify the clothes you or your friends are wearing and suggest matching items – or the item itself – that you can buy. Let's see how well this works with two pieces from the David Murphy collection.
First up, say hello to Santa:
Verdict: Google Lens is (mostly) doing its job
Overall, I found Google Lens to be the more useful tool for analyzing whatever is in your camera's view at any given moment. While it wasn't perfect – it struggled a bit with landmarks and didn't fare as well with fashion – the app crushes it on OCR and usability (especially when scanning contact information). Bing is good at helping you find images similar in composition to your photo, but it isn't as good as Google Lens at identifying specific objects, and I think the latter's text recognition capabilities are what give it the edge.
While most of us will probably treat Google Lens as a novelty – something to play with on vacation or to impress a friend at a party – it's worth moving it from the back of your mind into more regular use. I doubt I'll walk down the street constantly pulling up Google's information on everything I see, but the app definitely has its uses. Given how accurately it identified everyday objects, I could see playing with it on my daily travels to find out what else it's capable of.