AI has been making a lot of progress lately by almost any standard. It has quietly become part of our world, powering markets, websites, factories, business processes and soon our houses, our cars and everything around us. But the biggest recent successes have also come with surprising failures. Tesla impressed the world by launching a self-driving car, which then crashed in cases a human would have handled easily. AlphaGo beat the human champion Go player years before most experts thought possible, but completely collapsed after its opponent played an unusual move.
These failures might seem baffling if we follow our intuition and think of artificial intelligence the same way we think about human intelligence. AI competes with the world’s best and then fails in seemingly simple situations. But the state of the art in artificial intelligence is different from human intelligence, and it’s different in a way that really matters as we start deploying it in the real world. How? Machine learning doesn’t generalize as well as humans do.

The two recent Tesla crashes and the AlphaGo loss highlight how this plays out in real life. Each of the Tesla crashes happened in a very unusual situation: a car stopped on the left side of a highway, a truck with a high clearance perpendicular to the highway, and a wooden stake on an unpainted highway. In the game AlphaGo lost, it fell apart after the Go champion Lee Sedol played a highly unusual move that no expert would have considered.
Why is it that AI can look so brilliant and so stupid at the same time? Well, for starters, it knows less about what’s going on than you think. Let’s look at a simple example. AI can get spectacularly good at distinguishing between the use of the word “cabinet” to refer to a wooden cabinet and to refer to the president’s cabinet. Our intuition, based on our understanding of human intelligence, is that a machine would have to “understand” these two cabinet concepts to make the distinction so consistently. The human approach is to understand the two different concepts by learning about politics and woodworking. Machine learning doesn’t need to do this: it can look at 1,000 sentences containing the word cabinet, each labeled (by a human) as corresponding to one or the other meaning, and learn how frequently words like “wood” or “storage” or “secretary” occur nearby in each case. So it knows that when the word “wood” is present, chances are extremely high that we’re referring to a storage cabinet. But if Obama starts talking about how he’s getting into woodworking, the AI may fail completely.
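To make that counting idea concrete, here is a minimal sketch of a nearby-word classifier. The sentences, senses and stop-word list are invented for illustration; a real system would use thousands of labeled examples and a proper statistical model, but the principle is the same: tally which words show up near “cabinet” for each meaning, then score new sentences against those tallies.

```python
from collections import Counter

# A tiny labeled corpus, invented for illustration: each sentence is tagged
# with the sense of "cabinet" it uses.
TRAINING_DATA = [
    ("the oak cabinet needs a new hinge on its door", "furniture"),
    ("we bought a storage cabinet made of cherry wood", "furniture"),
    ("the president met with his cabinet to discuss the budget", "government"),
    ("the secretary of defense joined the cabinet meeting", "government"),
]

# Very common words (and "cabinet" itself) carry no signal, so we skip them.
STOP_WORDS = {"the", "a", "an", "of", "to", "on", "with", "his", "its", "cabinet"}

# Count how often each remaining word appears near "cabinet" for each sense.
word_counts = {"furniture": Counter(), "government": Counter()}
for sentence, sense in TRAINING_DATA:
    word_counts[sense].update(w for w in sentence.split() if w not in STOP_WORDS)

def guess_sense(sentence: str) -> str:
    """Score each sense by how often the sentence's words appeared near that sense."""
    words = [w for w in sentence.lower().split() if w not in STOP_WORDS]
    scores = {sense: sum(counts[w] for w in words)
              for sense, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(guess_sense("the wood cabinet has a broken door"))       # -> furniture
print(guess_sense("the cabinet voted on the defense budget"))  # -> government
```

A classifier like this can look uncannily smart on ordinary sentences while having no idea what a cabinet actually is, which is exactly why the Obama-takes-up-woodworking sentence can break it.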
Artificial intelligence can work as well as it does without “knowing” the way humans “know” for a simple reason: machines can process far more training data than a human can. Peter Norvig, Google’s head of research, famously highlighted this idea in a paper and talk called “The Unreasonable Effectiveness of Data”. This is how modern machine learning works in general: it pores over massive datasets and learns to generalize in smart ways, but not in the same smart way that humans generalize. As a result, it can be brilliant and also get very confused.
So how should we take all of this into account when we manage artificial intelligence in the real world?
1) Play to AI’s strengths: Collect more training data
Why does Facebook have such amazing facial recognition software? They have fantastic researchers, but the core reason is that they have billions of selfies. Why did Google build a better translation system than the CIA as a side project? They scraped more websites than anyone else, so they had more examples of translated documents.

Real breakthroughs in machine learning always come when there are new data sets. Deep learning isn’t much better than other algorithms on small amounts of data, but it keeps improving on larger and larger data sets, better than any other method.
2) Cover for AI’s weaknesses: Use human-in-the-loop
Artificial intelligence has a second advantage over human intelligence: it knows when it is having trouble. In the latest Tesla crash, the autopilot knew it was in an unusual situation and repeatedly told the human to take the wheel. Your bank does the same thing when it reads the numbers off a check. As of a few years ago, AI reads the numbers on almost all deposited checks, but checks with particularly bad handwriting still get handed off to a human for review. And more than fifteen years after Deep Blue beat Kasparov, there are still situations where humans can outplay computers at chess.
When done well, keeping a human-in-the-loop can give the best of both worlds: the power and cost savings of automation without the occasional unreliability of machine learning. A combined system has the potential to be more reliable, since humans and computers make very different kinds of mistakes. The key to success is handing off between humans and computers in smart ways, which may well require new types of interfaces that play to the relative strengths and weaknesses of each. After all, what good is a near-perfect self-driving AI that hands off control to a human it has let fall asleep?
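Here is a minimal sketch of what that hand-off can look like in code, using the check-reading example from above. The model, the confidence threshold and the helper names are hypothetical stand-ins; the point is simply that the system acts on its own prediction only when its confidence is high and routes everything else to a person.

```python
from dataclasses import dataclass

# Hypothetical cutoff, tuned to the cost of acting on a wrong read.
CONFIDENCE_THRESHOLD = 0.98

@dataclass
class Prediction:
    amount: float      # dollar amount the model read off the check
    confidence: float  # how sure the model is, between 0 and 1

# Stand-in for a real OCR model: in this sketch we just look up canned answers.
FAKE_MODEL_OUTPUT = {
    "neat_handwriting.png": Prediction(amount=125.00, confidence=0.995),
    "messy_handwriting.png": Prediction(amount=740.00, confidence=0.62),
}

def read_amount_with_model(check_image: str) -> Prediction:
    return FAKE_MODEL_OUTPUT[check_image]

def send_to_human_review(check_image: str) -> float:
    # Stand-in for a review queue where a person reads the check and keys in the amount.
    print(f"routing {check_image} to a human reviewer")
    return 740.00

def process_check(check_image: str) -> float:
    prediction = read_amount_with_model(check_image)
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        # The common, easy case: the machine handles it automatically.
        return prediction.amount
    # The unusual case the model is unsure about: hand off to a human.
    return send_to_human_review(check_image)

print(process_check("neat_handwriting.png"))   # handled by the model
print(process_check("messy_handwriting.png"))  # handed off to a person
```

The hard design work is everything around that `if` statement: choosing the threshold, and building an interface that keeps the human reviewer engaged enough to catch the cases the machine cannot.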