I am not a fan of the term “artificial intelligence.” I know this is my own peculiar problem – usage defines a term’s meaning long before logical precision gets to take a stab at it – and I have not yet come up with a satisfactory alternative. What is my problem with the term? To begin with, we do not really know what intelligence is, so how can we make an artificial version of it? AI is basically a set of algorithms that, for the most part, behave as they are expected to behave – there is nothing “artificial” about that. And if you believe that the mind is essentially a set of pre-programmed algorithms, then yes, computers are intelligent. But there is nothing in the fields of psychology, neurology, or philosophy, not to mention computer science, that would suggest that this is the case. A lot of this hinges on how you define “intelligence.” For instance, Wikipedia defines intelligence as “one’s capacity for logic, understanding, self-awareness, learning, emotional knowledge, planning, creativity, and problem solving.” Computers and software have no “capacity” for any of that. We can’t really use the term “machine intelligence” any more because it evokes images of clacking and sparking relay switches – the vision of the mind as a glorified pinball machine.
Defining intelligence is critical as we move forward with our reliance on algorithms and analytics, since many companies and institutions are letting machines do their “thinking” for them. Some of the biggest catastrophes of recent decades stem from the misapplication of data: examples abound, from military intelligence to meteorological data that was just wrong. How do we define intelligence? Here is where the lack of a Humanities education on the part of our leaders, scientists, and technocrats is becoming a real problem. There are more questions about the nature of intelligence than there are answers. What is the relationship between intelligence and consciousness? What about intentionality (the power of minds to be about, to represent, or to stand for things, properties, and states of affairs)? And how do we assess or measure intelligence or consciousness? The Turing Test is not really proof – it just shows that over the years we have become bad at carrying on a conversation. Go ahead, ask any computer whether it prefers the Beatles or the Stones – it will not detect the false dichotomy inherent in the question (the real answer is “…Floyd,” with a knowing nod). From the Buddhist perspective, all consciousness is emergent: life and consciousness are essentially synergistic – that is, they function at a capacity greater than the sum of their parts. What meaning you attach to that phenomenon is basically your problem. In Buddhism, there is no special qualifier that separates any form of intelligence from another. How can you create artificial consciousness or intelligence from something that is basically an illusion? And more importantly, what would that really prove? The only people who thought that oleo was like butter in any way were marketers. No one else thought that – it was simply what was available to get down the equally tasteless thing that commercial producers call “bread.”
The field of neuroscience has revealed many of the mechanisms of the brain but has not really gotten around to defining intelligence or consciousness. I have run into neuroscientists on YouTube who are out to refute things like learning styles theory using smug phrases like “but this is not how the brain works.” Neuroscience seems to be the least humble of the sciences – can you imagine any other field where they would announce that they have finally figured it all out?
The history of computing runs parallel to a history of dystopian literature on artificial intelligence. It is not that I don’t believe in artificial intelligence; it’s that I do not believe it is “artificial” or “intelligent.” Most of the models of intelligence that AI is based on are mechanistic visions of the human brain: they have to be, because any other model of the human brain would preclude the ability of a computer to mimic it. Our fear and doubt of computers and AI stretch from Descartes to 2001: A Space Odyssey.
There is a lot of paranoia around AI because folks seem to think that computers or software have become “intelligent” and can somehow work to our harm or detriment. This is simply not the case: AI is not going to hurt us – incompetent technology and business leadership, and incompetent applications of technology, are what will hurt us. Our over-reliance on algorithms that were supposedly meant to help us allowed human agents to seed fake news on Facebook and send inappropriate videos out to children on YouTube. How could such things have happened if we were all watched over by machines of loving grace?
The algorithms and data have a lot to teach us: they are useful tools, but they are not thinking. A definition of intelligence should also include the ability to make ethical choices. We are the ones who think, and it is dangerous to project that capacity onto unreasoning software and machines. This is not an abstract philosophical discussion – machine intelligence is driving everything from markets to cars. With innovation, it is never a question of whether we should do it (someone always will): it is more a question of how. The devil hides in the details of implementation. We always seem too eager to find some new way to abdicate responsibility for our choices.
Finally, if you are still overly impressed with algorithms and data, please read Arthur C. Clarke’s “The Nine Billion Names of God,” because there is always a last time for everything.
- Artificial Intelligence and the Rise of Google RankBrain in Search (fruition.net)
- Chatbots (fruition.net)
- Here are the most common uses of AI on your smartphone today (businessinsider.com)
- Artificial intelligence ‘to diagnose heart disease’ (telegraph.co.uk)
- 5 AI trends to watch in 2018 (oreilly.com)
- Hello, Watson: How AI actually learns how to think (mashable.com)