I think there is a lot of paranoia about Artificial Intelligence. Some of it is warranted, but not for the reasons many would suspect. On Twitter, for instance, Elon Musk speculated that an AI system could choose to start a war “if it decides that a prepemptive [sic] strike is most probable path to victory.” He has said elsewhere that there need to be regulations curbing AI. Here is where a degree in the humanities would be useful to folks like Musk. There are such laws – they are called Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
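To see why the laws invite trouble, it helps to notice that they are a priority-ordered rule set. Here is a toy sketch (my illustration, not anything from Asimov or a real safety framework – all names and fields are hypothetical) of the three laws as an ordered permissibility check:

```python
# Toy illustration: Asimov's Three Laws as a priority-ordered rule check.
# The Action fields and the permitted() function are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # would acting injure a human?
    inaction_harms_human: bool = False  # would *not* acting let a human come to harm?
    ordered_by_human: bool = False      # was this action ordered by a human?
    endangers_robot: bool = False       # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # must act, overriding the lower-priority laws
    # Second Law: obey human orders (already known not to break the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot

# The "ethical knots": an order that harms a human is refused...
assert not permitted(Action(harms_human=True, ordered_by_human=True))
# ...and self-sacrifice is required when inaction would harm a human.
assert permitted(Action(inaction_harms_human=True, endangers_robot=True))
```

Even this ten-line caricature shows the problem: everything hinges on correctly deciding what counts as “harm,” which is precisely the judgment the code cannot make for itself.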
Of course, the whole point of Asimov’s “I, Robot” is to illustrate the ethical knots and unintended consequences of such “laws.” We are such a specialized society now that no one individual can manage this mischief. We need people like Asimov more than ever – people who can think about ethics and consequences.
AI will not destroy us. Our ignorance will do that for us. If we are stupid enough to put algorithms in charge of The Bomb, then we will get exactly what we deserve. Darwin will have done his work. We need to make decisions about politics, business and international relations, but we are woefully under-equipped right now to make those decisions. Trump is in office because of the failures of our education system: the specialists understand the data and analytics, but they don’t understand the bigger picture. The Russians and other actors will take advantage of our political and sociological ignorance, as well as our critical technological illiteracy. We have plenty of programmers out there; what we need is to think about the humanities and the digital world in new ways. I think that future professionals (teachers, programmers, doctors, administrators, etc.) should have a grounding not only in the humanities but also in technology (Harvard’s open course CS50, for instance). We also need philosophy courses for programmers and poets.
There is a theme in the humanities that you can pick up if you stick with it long enough: humans will go to inordinately absurd lengths to abdicate their responsibility for their choices or actions (or their refusal to act). Blaming AI for any of our ills is just ridiculous – it is like blaming a car for our bad driving. AI is just a tool, and we need to do everything we can to understand the tool and the choices innovation affords.