Uber’s Self-Driving Car Didn’t Malfunction, It Was Just Bad. There were no software glitches or sensor breakdowns that led to a fatal crash, merely poor object recognition, emergency planning, system design, testing methodology, and human operation.
This sentence is exactly what I mean when I say that Artificial Intelligence is not intelligence. The "glitches" in object recognition, planning, design, testing, and operation are all human glitches. In other words, there is only one intelligence (ours), and we are attempting to offload its work onto algorithms we obviously do not fully understand.
I am actually all for self-driving cars. They can potentially take drunk drivers, exhausted truck drivers, and texters off the road. According to the CDC: "each day in the United States, approximately 9 people are killed and more than 1,000 injured in crashes that are reported to involve a distracted driver." They define distracted driving as "driving while doing another activity that takes your attention away from driving." I think that should include a premature trust in AI.
Rodney Brooks wrote a great article last year on AI for the MIT Technology Review ("The Seven Deadly Sins of AI Predictions"), where he discusses some of the problems with AI predictions, including a MarketWatch prediction that "we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs." He then points out that there are currently no robots doing those jobs, and no realistic demonstrations of robots capable of doing them.