Top-Down vs Bottom-Up AI: Would Turing Have Passed His Own Test?
Computer scientist Alan Turing, who played a central role in cracking the Enigma code used by the Nazis during World War II, formalized the search for genuine artificial intelligence (AI) in his 1950 paper "Computing Machinery and Intelligence."
He did so by proposing a test meant to determine whether a machine exhibits signs of real intelligence, a test he expected machines to pass by the new millennium.
Throughout his career, Turing published articles and manifestos regarding the topic of AI where he suggested the use of two different approaches we refer to as top-down and bottom-up, which we will get to shortly.
But first, let’s go back to Turing and the state of AI today.
Passing the Turing test
How do you define intelligence? Perhaps by the ability to solve complex equations, comprehend difficult texts or think critically.
For Turing, computers would be deemed intelligent the day they could participate in organic human conversation.
To measure this, he proposed a sort of game, the "imitation game," in which a judge holds text conversations with a group of players he or she cannot see. At one point, one of the players is replaced by a machine. If the judge cannot reliably tell the computer from the human players, the Turing test is passed.
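The protocol can be sketched in a few lines of code. This is a toy illustration, not Turing's formal specification: the player strategies and the judge are hypothetical stand-ins, represented here as simple functions.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Toy imitation game. `human_reply` and `machine_reply` map a
    question to an answer; `judge` maps the labeled transcripts to a
    guess ("A" or "B") of which player is the machine."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)  # hide which label belongs to the machine
    labels = dict(zip("AB", players))
    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, (_, reply) in labels.items()
    }
    guess = judge(transcripts)
    # The machine "passes" if the judge fails to single it out.
    return labels[guess][0] != "machine"
```

A judge that spots a telltale pattern in the machine's answers will catch it every time; the test is only passed when no such pattern gives the machine away.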
The issue is that the chatbots that have come closest have built-in mechanisms for digressing from conversations they don't understand, which calls the validity of their performance into question.
However, even if the Turing test hasn’t been passed, it doesn’t mean the advancements in AI have not been groundbreaking and valuable contributions to society up to this point.
It’s all about perception
To get to where we are, many a computer scientist has spent countless hours thinking about how to create true artificial intelligence.
Simply choosing which approach to follow is itself a contentious question.
As AI seeks to emulate human thinking, the debate doesn’t only center on how to best solve a problem but on figuring out how the brain works.
For computer scientists, it seems the most essential aspect of the human mind is its ability to perceive or process information.
According to the Oxford Dictionary, perception is "the ability to see, hear, or become aware of something through the senses," but it is also "the way in which something is regarded, understood, or interpreted."
The dichotomy between interpreting and becoming aware is precisely what has created the divide in approaches, but also what has given us varied and rich results still being tested and perfected today.
Top-down vs. bottom-up
When facing a problem or equation, you can reach a solution in one of two ways. You can interpret the problem by matching it against what you already know, or you can work only from the information the problem itself presents, without adding any context.
The first describes a top-down approach, used by those who prefer to apply prior knowledge to inform their perception.
Proponents of this method, sometimes called "neats" for their focus on logic, order and data, find it best suited to high-level tasks such as natural language processing.
The opposite approach, bottom-up, holds that development should start from a stimulus. In other words, what drives our perception is what we sense.
Proponents of this method, dubbed "scruffies" for their dynamic, ad hoc style, find it works better for lower-level tasks such as robotics and speech recognition.
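The contrast can be made concrete with a toy classification task. Everything here is hypothetical illustration: the top-down version encodes prior knowledge as explicit rules, while the bottom-up version lets labeled examples (the "stimuli") shape its answers.

```python
from collections import Counter

# Top-down ("neat"): prior knowledge written down as explicit rules.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "awful", "terrible"}

def classify_top_down(word):
    if word in POSITIVE:
        return "positive"
    if word in NEGATIVE:
        return "negative"
    return "unknown"

# Bottom-up ("scruffy"): no built-in rules; labeled examples drive everything.
def train_bottom_up(examples):
    """Count which label each word was observed with."""
    counts = {}
    for word, label in examples:
        counts.setdefault(word, Counter())[label] += 1
    return counts

def classify_bottom_up(model, word):
    if word not in model:
        return "unknown"
    return model[word].most_common(1)[0][0]
```

The top-down classifier works immediately but only on words its designer anticipated; the bottom-up one knows nothing at first, yet handles any word it has seen labeled, which mirrors why each approach suits different tasks.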
In the end, neither approach is strictly better than the other, and both continue to deliver strong results.
Perhaps one day we will achieve true artificial intelligence, but as Siri and Alexa have shown, humanity may not need to create sentient computers to make artificial intelligence work for it.