No, the Bing AI Chatbot Is Not Sentient

AI anxiety may never have been higher. Some experts predict an AI singularity could arrive within the next decade, and recent screenshots of Microsoft’s new Bing search AI expressing seemingly human fears and desires have some wondering whether it’s already here.

It is easy to see why this belief is spreading. The average person hears the term “AI” and probably thinks of Skynet or HAL 9000: complex machines with human-like self-awareness, powered by processors as intricate as the human brain.

However, Hollywood AI is a very different thing from the reality of tools like Midjourney, ChatGPT, or the Google and Microsoft search assistants that make headlines.

In fact, one could argue that calling chatbots, image generators, or automated programming tools “AI” is inaccurate or, more likely, just a marketing ploy.

How do AI tools like ChatGPT work?

Simply put, these “AI” tools are programs designed to produce results based on user input, and they require constant tuning by engineers and users to fine-tune their performance. The software sifts through the mass of human-created material it was trained on, pulls together whatever matches the user’s prompt, modifies it as needed, and echoes it back to the user. As Ted Chiang recently wrote in The New Yorker, the process is closer to making blurry photocopies of existing work than to creating entirely new work from scratch.
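
To make the “predictive text” idea concrete, here is a minimal sketch of the underlying principle using a toy bigram model (my own illustration, not ChatGPT’s actual architecture; real large language models use neural networks with billions of parameters, not a simple lookup table):

```python
import random
from collections import defaultdict

# Toy "predictive text generator": learn which word tends to follow
# which in a corpus, then generate text by repeatedly predicting a
# plausible next word. Everything it outputs is a recombination of
# the text it was fed.

corpus = (
    "the cat sat on the mat the cat chased the mouse "
    "the mouse hid under the mat"
).split()

# Count observed continuations for each word (a bigram table).
following = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    following[word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Produce a plausible-looking word sequence learned from the corpus."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat chased the mouse hid under the mat"
```

Scale that same idea up by billions of words and parameters and the output starts to feel original, but nothing in the process requires understanding, let alone consciousness.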

In other words, AI-generated articles, drawings, and code feel so human because they are all based on existing human-generated content. Midjourney images are evocative because they are copied from paintings, illustrations, and photographs made by real people who understand composition and color theory. Bing’s responses seem eerily human because they recycle human-written text.

To be fair, this is impressive technology that is difficult to build and even more difficult to tune so it produces reliable results. It’s remarkable that it works at all. But no matter what any New York Times reporter may tell you, there are no “ghosts” in these machines learning to write, draw, or hold therapeutic conversations in order to stay alive.

However, people misinterpret the complexity and power of these tools as evidence that they are somehow intelligent, or at least on the verge of becoming so.

And make no mistake: the people who make these tools know this, and they’re more than happy to let people believe their software is conscious and alive. People are more likely to try your product if they believe there’s really somebody in there. The more impressive and “alive” Bing AI interactions or Midjourney images seem, the more likely people are to keep using them and, as journalist Ed Zitron points out, the more likely they are to pay for them. That’s why ChatGPT is called “AI” and not a “predictive text generator.” It’s simple marketing.

AI may not be alive, but it’s still a problem

But what about the future? Is it possible that computers could become conscious, self-aware beings, capable of learning and creating the way humans do?

Well, of course it is possible, but scientists and philosophers are still arguing about what consciousness even is, not to mention how it arises in biological life. We will need to answer those questions before artificial consciousness in inorganic machines is even remotely possible.

And even if artificial consciousness is achievable, it won’t happen soon, and it certainly won’t spontaneously appear in Midjourney or ChatGPT.

But whether a robot uprising will eventually happen in the distant future should concern us less than the significant challenges AI automation poses to work, privacy, and data ownership right now.

Companies are laying off writers and media professionals and replacing them with AI-assisted content creation. AI art tools routinely draw on copyrighted material to create images, and deepfake pornography is a growing problem. Tech firms are shifting to machine-generated code, which is often less secure than code written by humans. These changes are happening not because AI-generated content is better (in most cases it’s clearly worse), but because it’s cheaper to produce.

These issues are far more pressing than hand-wringing over whether Bing has feelings, and it’s important to understand how the vendors of this “AI” technology use those very fears to promote their products.
