Conspiracy Bot Shows Computers Can Be As Gullible As Humans

Computers now believe in conspiracy theories. Francis Tseng of The New Inquiry trained a bot to recognize patterns in photos and draw connections between similar images, producing the kind of conspiracy-theory diagram you'd see in the final act of a Homeland episode or on the front page of Reddit. It's a cute trick, and a reminder that people are gullible (hey, maybe those photos do match!), and that the machines we train to think for us can turn out to be just as gullible.
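
Tseng hasn't spelled out the bot's internals here, but the general recipe is easy to sketch: turn each photo into a feature vector, then link any two photos whose vectors look "similar enough." Below is a minimal, hypothetical version in Python; the random vectors stand in for real image embeddings (which would come from a pretrained vision model), and the names and threshold are mine, not Tseng's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for image embeddings; a real bot would get these from a
# pretrained vision model rather than a random number generator.
photos = {f"photo_{i}": rng.normal(size=16) for i in range(20)}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tie together any two photos whose similarity clears an arbitrary cutoff.
THRESHOLD = 0.4
names = list(photos)
edges = [
    (names[i], names[j])
    for i in range(len(names))
    for j in range(i + 1, len(names))
    if cosine(photos[names[i]], photos[names[j]]) > THRESHOLD
]

print(f"{len(edges)} 'connections' found among {len(photos)} unrelated photos")
```

Every edge in that list is a "discovery" made from pure noise, which is more or less the joke the bot is telling.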

Humans are exceptionally good at recognizing patterns. That skill is great for learning and adapting to new environments, but it can get us into trouble: some research has linked pattern recognition to belief in conspiracy theories. (Some research hasn't, but that's just what they want you to think.)

Until recently, computers weren't particularly good at pattern matching. Advances in machine learning target exactly this gap: by feeding massive amounts of data to neural networks, we can train them at tasks like recognizing photos of birds or detecting credit card fraud.
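
In practice, that training loop is only a few lines with an off-the-shelf library. Here's a toy stand-in for something like fraud detection, using scikit-learn and synthetic data; real systems differ enormously in scale and care, but the shape is the same.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labeled transactions: mostly legitimate (95%),
# a few fraudulent, each described by 20 numeric features.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network learns whatever patterns separate the two classes.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

print(f"accuracy on held-out data: {net.score(X_test, y_test):.2f}")
```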

This isn't as straightforward as replicating a human brain, because we don't know how to do that. Instead, programmers loosely model the brain's behavior and let the neural network hunt for patterns on its own. As technologist David Weinberger writes, these networks, free from the baggage of human thought, build their own logic and find patterns that are astonishing and often incomprehensible. Google's AlphaGo, for example, can beat the world's best Go players, but its strategy isn't easy to explain in human terms.

But these machines don't really know what's real, so they can just as easily find patterns that don't exist or don't matter. That leads to unexpected "bugs" like the wonky paint colors ("dull beige," "smelly bob") generated by researcher Janelle Shane's neural network, or the unsettling mess of dog faces Google's DeepDream finds hidden inside my selfie.
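
This failure mode is easy to reproduce: a classifier trained on one kind of data will still hand back a confident answer for input that resembles nothing it has ever seen. A contrived sketch (the numbers are mine):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Train a classifier on two tidy, well-separated clusters of 2-D points.
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
clf = LogisticRegression().fit(X, y)

# Now ask it about a point far from anything it was trained on.
nonsense = np.array([[500.0, -500.0]])
probs = clf.predict_proba(nonsense)[0]
print(f"class probabilities for a nonsense input: {probs.round(3)}")
# There's no "I don't know" here -- the model confidently picks a class.
```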

These errors can be far more serious. Weinberger highlights criminal-sentencing software whose risk scores were racially biased against defendants, as well as the NSA system that falsely flagged an Al Jazeera journalist as a terrorist threat.

The New Inquiry's bot over-extends its analysis in the same way, detecting patterns that aren't there. "If two faces or objects seem similar enough, the bot ties them together," says Tseng, the bot's creator. "These perceptual oversights are presented not as errors but as important discoveries, leading people to read layers of meaning into chance."
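
That "similar enough" cutoff is doing a lot of work. The number of photo pairs grows quadratically with the collection, so even a strict threshold will eventually fire on pure coincidence. A rough illustration with random vectors (my own toy numbers, not anything from Tseng's bot):

```python
import numpy as np

rng = np.random.default_rng(1)

def chance_matches(n_photos, dim=16, threshold=0.5):
    """Count pairs of *random* unit vectors that clear a similarity cutoff."""
    vecs = rng.normal(size=(n_photos, dim))
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T
    rows, cols = np.triu_indices(n_photos, k=1)  # count each pair once
    return int((sims[rows, cols] > threshold).sum())

for n in (50, 200, 800):
    print(f"{n} photos -> {chance_matches(n)} chance 'discoveries'")
```

The bigger the photo collection, the more "meaningful" coincidences fall out of it for free.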

It's tempting to think the bot is onto something. But odds are you're really just looking at dog faces and paint colors. The more a computer program behaves like a human, the less you should trust it until you know how it was built and trained. Hell, never trust a computer that behaves like a human, period. But maybe that's just my conspiracy theory.
