Week 0: Siemens and Downes on AI

It was good to see Stephen Downes and George Siemens sitting down together and talking again for our Education 3.0 MOOC. It felt just like old times (CCK08, CCK11, etc.). I can’t really say what they do for one another exactly; they seem to play off one another well, but listening to them together has always produced great catalytic moments. I read something in Quintilian the other day that seemed to capture something of the experience for me: “…learning does take something away – as a file takes something from a rough surface, or a whetstone from a blunt edge, or age from wine – but it takes away faults, and the work that has been polished by literary skills is diminished only in so far as it is improved.” Both of them have challenged me. Their ideas have asked me to sharpen and refine my own thinking about knowledge, pedagogy, learning and thinking. Their recent discussion on AI was no exception.

[Image: a brain with a digital half and a fluid half]
Does knowledge need a knower?

I have been thinking about AI almost all of my life. Back in the late ’80s I was at Berkeley as an undergrad and took Hubert Dreyfus’ class on Existentialism. It was a profound experience. I did not like the course delivery at all, but Dreyfus’ lectures had me on the edge of my seat, and everything we did in the class felt immediate and very personal. For myself, that is the nature of exploring philosophy. I find nothing abstract about it at all. There are abstract philosophers and philosophies, but I have never had much time for them. Kierkegaard, Nietzsche, Sartre, Merleau-Ponty and, later, Wittgenstein and semiotics all had a deeper resonance with my experience than formalistic philosophers. Lao Tzu and Chuang Tzu, at least what I know of them through Gia-fu Feng’s translations, are also at the core of my experience. I would say “thinking” here, but I have to admit – I am not really an academic by profession, at least by any traditional definition of the term. My “experience” includes thinking, but it also includes what I do, how I engage with others and the world, and my creative expression of that, whether it is writing, drawing, making music, having a damn good conversation over a coffee, or implementing shenanigans on the internet.

In the course of hanging out at Berkeley, I also managed to read some of Dreyfus’ work on computers and Artificial Intelligence. I found the work profound – profoundly annoying. My take-away from his positions was that one is not intelligent unless one is “embodied.” Computers are not embodied; therefore, they can’t be intelligent. I agree with many of his premises – I consider myself an AI skeptic: the mind is not a machine with a series of on/off switches; intelligence is not a set of formal rules; all knowledge cannot be formalized. I don’t think that what we call thinking, knowledge, or knowing can be contained in a formal set of rules or an algorithm. But later, Dreyfus went on to apply this thinking to online learning. Online learning, according to Dreyfus, is an oxymoron because learning requires “embodiment” and the physical presence of a “master” (fully tenured).

We can avoid a lot of philosophical hand-wringing by all of us admitting that we essentially do not know what human thought is, much less have a good working definition of intelligence that would allow us to replicate it.

Before I ran into the ideas around Connectivism, I described my understanding of pedagogy as Constructivist rather than Behaviorist. Connectivism, in the end, is going to account for more of what is happening in the world than just about any other approach, but Constructivism was a part of just about every education program I have been a part of – that doesn’t make it “right,” but it informs a lot of work, including my own. Constructivism says that all knowledge is a compilation of human-made constructions, not the neutral discovery of an objective truth. At the heart of my problems with Behaviorist thought and pedagogy is Objectivism. Objectivism is concerned with the “object of our knowledge,” while Constructivism emphasizes how we construct knowledge. Constructivism proposes definitions for knowledge and truth based on inter-subjectivity instead of the classical objectivity we find in behaviorist approaches. Constructivism is based on viability instead of “The Truth.” Where I politely disagree with Constructivism is in its belief (and that is what it is) in objectivity – that constructs can be validated through experimentation. And here is where I see Connectivism to be the evolution of thinking about Constructivism: the compilation process of knowledge is more important than the knowledge – or at least as important. The “human-made constructions” rely on our connections with others, with ideas, experience, and knowledge in other networks. There are some ideas within Connectivism that I find troubling, or at least annoying: “Learning may reside in non-human appliances,” for instance. For myself, information resides in non-human appliances, but learning is a kind of knowledge, and knowledge requires a knower. But that is just me.

So what is AI then? AI, like most tools, is an extension of human abilities. It can be a very good extension. A computer can beat a human at chess because, unlike the human mind, chess is a set of rules and algorithms. I have yet to meet a computer that wanted to play chess. I have yet to meet a computer that invented an interesting game. This is something that AI should be able to do because, after all, a game is just a set of rules and algorithms. When a grandmaster plays chess, it is interesting. The grandmaster is not just drawing on experience and an encyclopedic knowledge of chess openings, but is fighting emotions, memories, distractions, pressure, and history. That is interesting. To say that a computer can play chess is a bit of an absurdity. It is even more absurd to claim that it is therefore an advance in artificial intelligence – it is an advance in accessing and processing data, nothing more. That is a good thing – processing data in new and powerful ways can help us do things. George pointed out that the computer is better at diagnosing some cancers than a doctor – we want this. This doesn’t mean someone should quit medicine. We need doctors to help define the challenges we want our tools to solve.

We have a lot to learn from the field of AI. There are going to be great advances in many fields, including education, from this work. I welcome that. But we risk something when we call AI “intelligence” – we risk having someone in authority say, “well, the computer knows best; after all, who am I to argue with a super-intelligence that can calculate 200 petaflops and has access to all the world’s data? I guess we push the button after all.” My fear is that our faith, and that is what it is, that AI is intelligence rather than a tool, an extension of the human, absolves us from making ethical decisions that computers are incapable of making.


4 Responses to Week 0: Siemens and Downes on AI

  1. I enjoyed reading this. I’d agree and disagree and agree all over again, and there is a comfort in some things too. You’ve picked up on the fact that as humans we value the ‘human’. In learning there is skill and delivery, but as a person I cannot timetable every minute of my day and continue to function productively, because we aren’t simply automated delivery vehicles. So my two comforts were in the value of the doctor’s decision – the weighing up of the facts alongside the human aspects, the implications, the choices. Only I can have my experience, and that keeps us (people) apart. The other comfort is the A in AI. It’s meant to be artificial; if it was designed to be human intelligence, that would be another matter. I do think the fridge can ‘learn’ and adapt – facts, procedures, patterns. We are many patterns, and sometimes we don’t want to acknowledge how hardwired we are as humans, but the great thing is that we are capable of doing what you value from the doctor: deciding. I, you, people can choose to change, sidestep, or stand still, and we are not simply the execution of an algorithm.

    And why have I commented? It’s the effort to connect. That connectivism thing takes reaching out (which you’ve done), and it also asks us to accept and shake hands. I hesitated at first, and the first sentence I wrote was: “I am coming from a theoretical viewpoint, and not that of a computer science expert – not sure if that should matter” (you know, the ‘I’m not good enough to comment’ feeling), and then I stopped myself. Nice to meet you!

  2. Pingback: Data, personal learning and learning analytics – Jenny Connected

  3. Thank you for this really interesting post. It has made me think about Iain McGilchrist’s book, The Master and his Emissary: The Divided Brain and the Making of the Western World. I wonder if you have read it – if not, you may find it interesting, although it is a long, hard read! The reason I mention it is because McGilchrist has quite a lot to say about ‘machines’ and what we lose out on if we put all our eggs in the machine basket (my words, not his – he is extremely eloquent).

    Interestingly, I think there are many links between Stephen’s and George’s work on connectivism, what Stephen has been working on re personal/personalized learning, and McGilchrist’s work on the divided brain. To hugely oversimplify it, I think they are all thinking about a more holistic, less fragmented, less controlled approach to how we attend to learning and ‘being’ in the world.

    Thank you again for your post.

    • admin says:

      I have not read that – thanks for taking a peek at the blog and the hot book tip! It is going to be an interesting class!
