It was good to see Stephen Downes and George Siemens sitting down together and talking again for our Education 3.0 MOOC. It felt just like old times (CCK08, CCK11, etc.). I can’t really say exactly what they do for one another – they seem to play off one another well – but listening to them together has always been a catalyst for great moments. I read something in Quintilian the other day that seemed to capture something of the experience for me: “…learning does take something away – as a file takes something from a rough surface, or a whetstone from a blunt edge, or age from wine – but it takes away faults, and the work that has been polished by literary skills is diminished only in so far as it is improved.” Both of them have challenged me. Their ideas have asked me to sharpen my own and to refine my thinking about knowledge, pedagogy, learning and thinking. Their recent discussion on AI was no exception.
I have been thinking about AI almost all of my life. Back in the late ’80s I was at Berkeley as an undergrad and took Hubert Dreyfus’ class on Existentialism. It was a profound experience. I did not like the course delivery at all, but Dreyfus’ lectures had me on the edge of my seat, and everything we did in the class felt immediate and very personal. For me, that is the nature of exploring philosophy. I find nothing abstract about it at all. There are abstract philosophers and philosophies, but I have never had much time for them. Kierkegaard, Nietzsche, Sartre, Merleau-Ponty and, later, Wittgenstein and semiotics – all had a deeper resonance with my experience than formalistic philosophers. Lao Tzu and Chuang Tzu, at least what I know of them through Gia-fu Feng’s translations, are also at the core of my experience. I would say “thinking” here, but I have to admit – I am not really an academic by profession, at least by any traditional definition of the term. My “experience” includes thinking, but it also includes what I do, how I engage with others and the world, and my creative expression of that, whether it is writing, drawing, making music, having a damn good conversation over a coffee, or implementing shenanigans on the internet.
In the course of hanging out at Berkeley, I also managed to read some of Dreyfus’ work on computers and Artificial Intelligence. I found the work profound – profoundly annoying. My takeaway from his positions was that one is not intelligent unless one is “embodied.” Computers are not embodied; therefore, they can’t be intelligent. I agree with many of his premises – I consider myself an AI skeptic: the mind is not a machine with a series of on/off switches; intelligence is not a set of formal rules; not all knowledge can be formalized. I don’t think that what we call thinking, knowledge, or knowing can be contained in a formal set of rules or an algorithm. But later, Dreyfus went on to apply this thinking to online learning. Online learning, according to Dreyfus, is an oxymoron because learning requires “embodiment” and the physical presence of a “master” (fully tenured).
We can avoid a lot of philosophical hand-wringing if we all admit that, essentially, we do not know what human thought is, much less have a good working definition of intelligence that would allow us to replicate it.
Before I ran into the ideas around Connectivism, I described my understanding of pedagogy as Constructivist rather than Behaviorist. Connectivism, in the end, is going to account for more of what is happening in the world than just about any other approach, but Constructivism was a part of just about every education program I have been a part of – that doesn’t make it “right,” but it informs a lot of work, including my own: Constructivism says that all knowledge is a compilation of human-made constructions, not the neutral discovery of an objective truth. At the heart of my problems with Behaviorist thought and pedagogy is Objectivism. Objectivism is concerned with the “object of our knowledge,” while Constructivism emphasizes how we construct knowledge. Constructivism proposes definitions for knowledge and truth based on inter-subjectivity instead of the classical objectivity we find in behaviorist approaches. Constructivism is based on viability instead of “The Truth.” Where I politely disagree with Constructivism is in its belief (and that is what it is) in objectivity – in constructs that can be validated through experimentation. And here is where I see Connectivism to be the evolution of thinking about Constructivism – the compilation process of knowledge is more important than the knowledge, or at least as important. The “human-made constructions” rely on our connections with others, with ideas, experience, and knowledge in other networks. There are some ideas within Connectivism that I find troubling, or at least annoying: “Learning may reside in non-human appliances,” for instance. For myself, information resides in non-human appliances, but learning is a kind of knowledge, and knowledge requires a knower. But that is just me.
So what is AI then? AI, like most tools, is an extension of human abilities. It can be a very good extension. A computer can beat a human at chess because, unlike the human mind, chess is a set of rules and algorithms. I have yet to meet a computer that wanted to play chess. I have yet to meet a computer that invented an interesting game. This is something that AI should be able to do because, after all, a game is just a set of rules and algorithms. When a grandmaster plays chess, it is interesting. The grandmaster is not just drawing on experience and an encyclopedic knowledge of chess openings, but is fighting emotions, memories, distractions, pressure, and history. That is interesting. To say that a computer can play chess is a bit of an absurdity. It is even more absurd to claim that it is therefore an advance in artificial intelligence – it is an advance in accessing and processing data, nothing more. That is a good thing – processing data in new and powerful ways can help us do things. George pointed out that the computer is better at diagnosing some cancers than a doctor – we want this. This doesn’t mean someone should quit medicine. We need doctors to help define the challenges we want our tools to solve.
We have a lot to learn from the field of AI. There are going to be great advances in many fields, including education, from this work. I welcome that. But we risk something when we call AI “intelligence” – we risk having someone in authority say, “well, the computer knows best; after all, who am I to argue with a super-intelligence that can calculate 200 petaflops and has access to all the world’s data? I guess we push the button after all.” My fear is that our faith – and that is what it is – that AI is intelligence rather than a tool, an extension of the human, will absolve us from making the ethical decisions that computers are incapable of making.