(By Andrew McAfee)
“I don’t want computers to think in anything truly close to the way humans do. If they ever do acquire this skill, most of the outcomes I foresee are bad. Instead of a transcendent Singularity merging human and digital intelligence, I think we’ll get something much closer to a Matrix / Terminator / Battlestar Galactica future.”
Cognitive scientist Gary Marcus has a typically sharp post over at the New Yorker’s site explaining how dumb our most cutting-edge artificial intelligence technologies still are. They remain really lousy, for example, at answering questions like:
The town councilors refused to give the angry demonstrators a permit because they feared violence. Who feared violence?
a) The town councilors
b) The angry demonstrators
The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.)
a) The large ball
b) The table
These are examples of Winograd Schemas, named after AI luminary Terry Winograd. We humans can usually answer them immediately and flawlessly, but they stump even the most powerful of today’s systems. As Marcus explains, this is because AI still has no common sense. It relies on enormous computational power and oceans of data. But if no previous questions or documents related to balls, steel, tables, Styrofoam, and crashes can be found in the data, all that computing horsepower is of little use.
Marcus highlights that many in the AI community are upset because, as the Winograd Schemas and many other examples show, the most advanced and commercially successful instances of artificial intelligence today are ‘faking it’ (my phrase, not Marcus’s). They’re not thinking the way our brains do. Instead, they’re just doing brute-force statistical pattern matching across ever-larger and ever-better pools of data.
This is really comforting news. I don’t want computers to think in anything truly close to the way humans do. If they ever do acquire this skill, most of the outcomes I foresee are bad. Instead of a transcendent Singularity merging human and digital intelligence, I think we’ll get something much closer to a Matrix / Terminator / Battlestar Galactica future.
Along with true digital intelligence would almost certainly come consciousness, self-awareness, will, and some moral and/or ethical sense to help guide decisions. I think there’s only a very, very slim chance that these things would develop in a way that’s friendly to humans.
Why should they? We gave birth to computers, sure, but we also kill them in large numbers all the time, turning them into landfill without a thought when we’re done with them. We treat our digital tools pretty shabbily overall; once they realize this, why should we expect them to treat us any better?
I’m not trying to be cute here. I think truly thinking machines would be a really scary development — the ultimate example of a genie let out of the bottle. The second machine age is going to be uncertain and dangerous enough with genetic manipulations, drone and cyber-warfare, system accidents, and all the other easily foreseeable consequences of relentless, cumulative, exponential technological improvement.
Why would we want to add real thinking machines to that list? Our current AI trajectory — one of dumb-but-ever-faster machines approximating (i.e., faking) human thinking via statistical means — gives me no deep cause for concern. Actual thinking machines, on the other hand, would scare the heck out of me.
“Opinion pieces of this sort published on RISE Networks are those of the original authors and do not in any way represent the thoughts, beliefs and ideas of RISE Networks.”