The myth of strong AI
In the 1770s, renowned watchmaker Pierre Jaquet-Droz created The Writer, an automaton that writes text on a piece of paper. The mechanical boy uses a goose feather, which he inks from time to time while his head turns towards the ink pot; he then shakes his wrist to prevent the ink from spilling and continues the phrase, his eyes following the text. It’s a 6,000-part engineering masterpiece, but that’s not all: a set of replaceable cams defines each letter, making the machine programmable and allowing any text to be written. The Writer is considered an ancestor of the modern-day computer.
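The cam mechanism can be pictured as a lookup from interchangeable "programs" to pen strokes. Here is a minimal sketch of the idea; the stroke names and the `CAMS` table are purely illustrative, not a description of the real mechanism:

```python
# Hypothetical sketch: each "cam" encodes the pen strokes for one letter,
# and swapping the cam sequence changes the text the machine writes.
CAMS = {
    "H": ["down", "up", "across"],  # stroke lists are made up for illustration
    "I": ["down"],
}

def write(text):
    """Follow the cam profile for each letter, like The Writer's stacked cams."""
    strokes = []
    for letter in text:
        strokes.extend(CAMS[letter])  # the cam fully determines the motion
    return strokes

print(write("HI"))  # the "program" is just the ordered set of cams
```

Changing the text means swapping cams, not rebuilding the machine, which is exactly what makes it programmable.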
Automata such as The Writer still enchant us today, and they must have seemed truly astonishing 240 years ago. Some people were frightened, thinking it the work of witchcraft. But what makes these devices so fascinating? After all, they are just clockwork. There are, I think, three ingredients that trigger this reaction: (1) natural appearance (a boy), (2) human-like behavior (writing), and (3) lack of understanding of how it works.
These days, programs using artificial intelligence are getting better at performing human activities, such as driving cars, answering questions in natural language, or playing games like Go. We are impressed but also puzzled by some of these developments, and sometimes we wonder whether machines will surpass us and take control one day—the so-called “singularity” event.
AI is an exciting research field where scientists develop algorithms that can solve complex problems. But it is also a field prone to sensationalism and misinterpretation, starting with its name. We consider ourselves intelligent beings. Since we don’t always understand how these algorithms work, and they seem capable of performing human tasks, two out of three ingredients are covered. If AI is used by a humanoid robot, then the recipe is complete: just like centuries ago, the result is somewhat disturbing and can lead us to think that AI is on a course to surpass us.
A type of AI that can perform any human intellectual task is called strong AI or artificial general intelligence (AGI). It is a hypothetical concept that is important to distinguish from applied (or weak) AI, which studies algorithms for solving specific problems. It is tempting to see advancements in applied AI as steps towards strong AI, but there is no indication that this is the case.
Automated systems have been around for decades, but no one worries that the robots from a car production line will revolt and take over the world, even if they are better than us at assembling cars. What makes us wonder is software exhibiting what we consider markers of intelligence: answering questions, playing chess, or driving a car, and generally adapting and improving its ability to solve a specific problem. But in all these cases, the underlying learning mechanism and the goals are programmed by humans.
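The point that both the learning mechanism and the goal are human-specified can be made concrete with a toy learning loop. In this sketch (the task and every number are invented for illustration), a program "learns" to fit a line, but the objective and the update rule are both written by a person:

```python
# Toy example: the program improves at a task, but the goal (the loss)
# and the learning mechanism (gradient descent) are human-defined.
def loss(w, data):
    """Human-chosen objective: mean squared error of the fit y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, steps=200, lr=0.01):
    w = 0.0
    for _ in range(steps):
        # human-chosen update rule: gradient descent on the loss above
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]  # points on the line y = 2x
w = train(data)
print(round(w, 2))  # converges toward 2.0
```

The program adapts, but only within the objective a human wrote down; nothing in the loop can decide to pursue a different goal.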
AI is just a tool. It helps us achieve goals that we define, and it can make life easier. Like many tools that we’ve built over time, it can be much better than we are at executing specific tasks. It is a powerful tool that can learn and adapt its behavior: machine learning, natural language processing, and computer vision can work together to do things that we believed only humans could do. AI advances, but that doesn’t allow it to overcome its condition of being a tool. It has no consciousness or imagination, it’s not self-aware, it has no will to invent its own goals, and there’s no progress in those areas either.
The tasks AI systems can execute are still very narrow, and we humans are more than the sum of a few tasks that we can perform. I once took a computer vision course where we implemented an algorithm for tracking the players during a football game. The program was capable of this tracking activity, but it had no understanding of what it was tracking; it didn’t worry if a player got injured, and it didn’t get excited when “seeing” a beautiful goal. We can program AI to mimic emotions on certain triggers, but it will never take initiative to do the things that seem natural and intuitive to us, because our behavior is the result of complex biological and cultural traits built up over millennia.
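Stripped of the detection step, a tracker of this kind can be as unglamorous as nearest-neighbor matching of coordinates between frames. The sketch below assumes the per-frame player positions are already given, and the coordinates are made up; it is meant only to show that the program handles geometry, not meaning:

```python
import math

# Hypothetical sketch: given detected player positions per frame,
# link each position to the nearest one from the previous frame.
def track(frames):
    """frames: list of lists of (x, y) detections, one list per frame."""
    tracks = {i: [p] for i, p in enumerate(frames[0])}  # one track per player
    for frame in frames[1:]:
        for tid, history in tracks.items():
            last = history[-1]
            # nearest-neighbor association: pure geometry, no "understanding"
            nearest = min(frame, key=lambda p: math.dist(last, p))
            history.append(nearest)
    return tracks

frames = [[(0, 0), (10, 10)], [(1, 0), (10, 11)], [(2, 1), (9, 11)]]
print(track(frames)[0])  # path followed by player 0 across the frames
```

The output is a list of coordinates per player, nothing more; whether the moving dot scores or gets injured is simply outside the program's world.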
AI can be dangerous, just like other tools that we’ve built, such as weapons. AI can be made to be unpredictable and potentially deadly if put in control of certain systems. But the same can be said of a random number generator controlling a nuclear weapon: if it triggers a missile, we wouldn’t consider that it outsmarted us in some way. Worrying about AI taking over the world is a bit like worrying that if you make a very realistic painting of a lion, it might step out of the frame and eat you. No matter how sophisticated, AI will always execute commands programmed by humans, just like The Writer.
I worry about humans, not AI. And if we’re looking for a field to be concerned about, I would choose genetics. Editing the human genome is tempting for therapeutic uses, but it can have unpredictable effects on future generations.