A Brief History of AI

This post is part of Lifehacker’s “Living with AI” series. We explore the current state of AI, what it can do (and what it can’t do), and assess where this revolutionary technology will go next. Read more here.

You could be forgiven for thinking that AI has only taken off in the last couple of years. But AI has been in development for a long time, spanning most of the 20th century. Today, it’s hard to pick up a phone or laptop without running into some kind of artificial intelligence feature, but that’s only because the work has been going on for almost a century.

Conceptual beginnings of AI

Of course, people have wondered whether we could build machines that think for as long as we’ve had machines. The modern concept came from Alan Turing, the mathematician best known for deciphering Nazi Germany’s “unbreakable” Enigma code during World War II. As the New York Times points out, Turing essentially predicted what a computer could (and would) become, envisioning it as “one machine for all possible tasks.”

But it was what Turing wrote in his 1950 paper “Computing Machinery and Intelligence” that changed the conversation forever: the computer scientist posed the question “Can machines think?” but also argued that this was the wrong way to frame the problem. Instead, he proposed a thought experiment called the “Imitation Game.” Imagine you have three people: a man (A), a woman (B), and an interrogator, each in a separate room. The interrogator’s goal is to determine which player is the man and which is the woman using only text communication. If both players answer truthfully, it isn’t such a difficult task. But if one or both decide to lie, things get much more complicated.

But the purpose of the Imitation Game is not to test a person’s powers of deduction. Rather, Turing asks you to imagine a machine standing in for either Player A or B. Can a machine effectively fool an interrogator into thinking it is a human?

The beginning of the neural network idea

Turing was the most influential spark for the concept of artificial intelligence, but it was Frank Rosenblatt who actually put the technology into practice, even if he never saw it fully realized. Rosenblatt created the Perceptron, a computer modeled after the neurons in the brain that could learn new skills. The computer ran a single-layer neural network, and it worked like this: you ask the machine to predict something, such as whether a punched card is marked on the left or the right. If the computer gets it wrong, it adjusts itself to be more accurate. Over thousands or even millions of tries, it “learns” the correct answers instead of merely guessing at them.

The design is based on neurons: you have some input, a piece of information you want the computer to recognize. The neuron takes in the data and, based on its previous knowledge, produces an output. If that output is wrong, you tell the computer and adjust the neuron’s “weights” to produce an output closer to the one you want. Over time, you find the right weights, and the computer has successfully “learned.”
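To make that loop concrete, here is a minimal sketch in Python of a Rosenblatt-style learning rule: a single artificial neuron whose weights get nudged toward the right answer whenever it guesses wrong. It’s an illustrative modern reconstruction, not the Perceptron’s original punched-card hardware; the toy data, learning rate, and function names are invented for the example.

```python
# A Rosenblatt-style learning rule, sketched in modern Python (illustrative
# only; the original Perceptron was purpose-built hardware, not software).
import random

def predict(weights, bias, inputs):
    # The neuron "fires" (outputs 1) if the weighted sum of its inputs
    # crosses the threshold; otherwise it outputs 0.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=25, learning_rate=0.1):
    weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # When the guess is wrong, nudge each weight toward the answer.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Toy task: answer 1 whenever the first input is "on", 0 otherwise.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train(samples)
print([predict(weights, bias, x) for x, _ in samples])  # expect [0, 0, 1, 1]
```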

Unfortunately, despite some promising attempts, the Perceptron failed to live up to Rosenblatt’s theories and claims, and interest in both the machine and the practice of artificial intelligence waned. As we know today, though, Rosenblatt wasn’t wrong: his machine was just too simple. The Perceptron’s neural network had only one layer, which isn’t enough to support machine learning at any meaningful level.

Multiple layers make machine learning powerful

That is exactly what Geoffrey Hinton discovered in the 1980s: where Turing supplied the idea and Rosenblatt built the first machines, Hinton pushed AI toward its current form by arguing that nature had already cracked the problem with the neural networks in the human brain. He and other researchers, such as Yann LeCun and Yoshua Bengio, went on to show that neural networks built from multiple layers and a huge number of connections can enable machine learning.
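To see why depth matters, here is a small illustrative sketch of XOR, the textbook function a single-layer perceptron cannot compute, handled by adding just one hidden layer of two neurons. The weights are hand-picked for clarity rather than learned, and the function names are invented for the example.

```python
# Why one layer isn't enough: XOR cannot be computed by a single-layer
# perceptron, but one hidden layer of two neurons handles it easily.
# The weights below are hand-picked for illustration, not learned.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one neuron detects "a OR b", the other detects "a AND b".
    hidden_or = step(a + b - 0.5)
    hidden_and = step(a + b - 1.5)
    # Output layer: fire when OR is true but AND is not, i.e. XOR.
    return step(hidden_or - hidden_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0 down the rows
```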

Throughout the 1990s and 2000s, researchers gradually demonstrated the potential of neural networks. LeCun, for example, created a neural network capable of recognizing handwritten characters. But progress was still slow: although the theories were right on the money, computers weren’t powerful enough to process the amount of data needed to unleash the full potential of AI. Moore’s Law has a way of catching up, though, and around 2012, both hardware and data sets had advanced to the point where machine learning began to take off: suddenly, researchers could train neural networks to do things they had never managed before, and we started to see AI in action in everything from smart assistants to self-driving cars.

And then, at the end of 2022, ChatGPT exploded onto the scene, showing professionals, enthusiasts, and the general public what artificial intelligence is truly capable of, and we’ve been on a wild ride ever since. We don’t know what the future of AI really holds: all we can do is look at how far the technology has come, what we can do with it now, and imagine where we’ll go next.

Life with AI

To do just that, browse our collection of articles on living with AI. We define the AI terms you need to know, help you build AI tools without needing to know how to code, talk about how to use AI responsibly at work, and discuss the ethics of creating AI art.
