I loved Daniel Kahneman’s book Thinking, Fast and Slow. For those who haven’t read it, he postulates that there are two types of thinking: Type 1, the instinctual thought process we use to effortlessly drive a car or recognize a face, and Type 2, the logical thought process we hear in our heads. He argues very persuasively that, because we hear Type 2 thinking in our heads, we identify more with it. But though we view Type 2 thinking as who we are, in reality more of our decisions are controlled by Type 1 thinking. He goes on to study how Type 1 thinking seems to work and how good it is at things like forming associations or generating possible answers to a problem that can then be checked by Type 2.
I’ve recently been noticing how closely these two types of thinking map to the two schools of AI. In the seventies and eighties, AI researchers mostly came from a logic background. They tried to build rule-based systems to do things like reasoning, planning, and translation. There was even a project, Cyc, which hypothesized that if we could just encode all the rules into a system, it would eventually become more intelligent than a human.
In the late nineties, when I was in school, researchers started to realize that, in practice, ostensibly dumber statistical systems worked much better for many everyday applications. The best automated translation systems don’t work by diagramming sentences and then using rules to transform English grammar into French grammar; they look at millions of pairs of sentences and use statistics to see which English phrases commonly occur alongside which French phrases. Document classifiers don’t try to reason out whether some instance of the word “cabinet” means a president’s cabinet or a closet; instead, they look over millions of documents to see which associated words tend to occur with which topics. Sometimes the predictive words are surprising and clearly would never have been produced by rule-based logic.
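To make the word-association idea concrete, here is a toy sketch of a statistical classifier of the kind described above. The documents and topic names are invented, and a real system would train on millions of documents, but the mechanism is the same: count which words co-occur with which topics, with no rules about what “cabinet” means.

```python
from collections import Counter, defaultdict
import math

# Toy stand-in for the "millions of documents": (text, topic) pairs.
# Both the documents and the topic labels are invented for illustration.
corpus = [
    ("the president met his cabinet to discuss the budget", "politics"),
    ("the cabinet vote on the new policy was unanimous", "politics"),
    ("sand the cabinet door before painting the wood", "carpentry"),
    ("install the kitchen cabinet with wood screws", "carpentry"),
]

# Count how often each word co-occurs with each topic.
word_counts = defaultdict(Counter)
topic_counts = Counter()
for text, topic in corpus:
    topic_counts[topic] += 1
    word_counts[topic].update(text.split())

def classify(text):
    """Pick the topic whose associated words best match the text:
    a naive Bayes score with add-one smoothing, no rules anywhere."""
    vocab_size = len({w for counts in word_counts.values() for w in counts})
    best_topic, best_score = None, float("-inf")
    for topic, doc_count in topic_counts.items():
        total_words = sum(word_counts[topic].values())
        score = math.log(doc_count / sum(topic_counts.values()))
        for word in text.split():
            score += math.log(
                (word_counts[topic][word] + 1) / (total_words + vocab_size)
            )
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic
```

The classifier never decides what “cabinet” means; surrounding words like “president” or “screws” carry the statistical signal.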
Just like Kahneman’s types of thinking, each of these two types of AI has things it’s good at and things it’s not. Type 1-style AI systems may be proving better at a lot of tasks, but they sometimes make silly mistakes. Amazon recommends prenatal vitamins to someone buying diapers because customers who purchase one also tend to purchase the other. Facebook’s face detection algorithm misses an easily identifiable face because it’s partly covered.
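The diapers-and-vitamins mistake falls straight out of co-purchase counting. Here is a minimal sketch of that logic; the baskets and item names are invented, and real recommenders are far more elaborate, but the blind spot is the same:

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories; the point is that pure co-occurrence
# counting knows nothing about *why* items appear together.
baskets = [
    {"diapers", "prenatal vitamins"},
    {"diapers", "prenatal vitamins"},
    {"diapers", "baby wipes"},
    {"coffee", "filters"},
]

# Count how often each ordered pair of items shares a basket.
pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def recommend(item):
    """Suggest whatever co-occurs most often with `item`, with no
    check on whether the suggestion makes sense for this customer."""
    candidates = {b: n for (a, b), n in pair_counts.items() if a == item}
    return max(candidates, key=candidates.get)
```

With this data, `recommend("diapers")` returns "prenatal vitamins" for every diaper buyer, expecting or not: the association is real in aggregate but silly for many individuals.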
With all these parallels, maybe AI should work like our brains do: a Type 1 system generating ideas and a Type 2 system checking them when they seem suspicious.
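One hypothetical shape for that generate-and-check idea, sketched against the diapers example; every function name, rule, and profile field here is invented, not a description of any real system:

```python
# A "Type 1" component makes a fast associative guess; a "Type 2"
# component applies a slow, explicit rule to vet it. All names and
# rules below are illustrative assumptions.

def type1_propose(purchases):
    """Fast associative guess: a toy co-purchase lookup table standing
    in for a statistical recommender."""
    co_purchase = {"diapers": "prenatal vitamins", "coffee": "filters"}
    for item in purchases:
        if item in co_purchase:
            return co_purchase[item]
    return None

def type2_check(suggestion, profile):
    """Deliberate rule-based check, consulted when the associative
    suggestion looks suspicious for this particular customer."""
    if suggestion == "prenatal vitamins" and not profile.get("expecting"):
        return None  # veto: the association holds in aggregate, not here
    return suggestion

def recommend(purchases, profile):
    suggestion = type1_propose(purchases)
    if suggestion is None:
        return None
    return type2_check(suggestion, profile)
```

The statistical half still does the generative heavy lifting; the rule-based half only intervenes to reject answers it can show are wrong.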