Okay, so maybe they didn’t… but they do have one other thing in common (according to an article in the May 9th Economist): all have warned against the potential threats of artificial intelligence (or “AI”), and none of them are Luddites. From Hawking’s warning that AI “could spell the end of the human race” to Musk’s fear that AI may be “the biggest existential threat humanity faces,” all have voiced concern about the vast investments in AI by firms like Google and Microsoft.
And with most people today carrying supercomputers in their pockets and robots deployed on battlefields, their comments may be worth heeding. As the article points out, the question is how to worry wisely.
AI has been around a long time. Lately it has gotten a boost as “deep learning” systems mimic the neuronal patterns of the human brain while crunching vast amounts of data, teaching themselves to perform tasks such as pattern recognition. (In case you ever wondered why Facebook has lately been able to identify your friends’ faces automatically: thanks to something called the DeepFace algorithm, it now does so accurately 97% of the time.)
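To make “teaching themselves” a little more concrete, here is a deliberately tiny sketch: a single perceptron (the simplest ancestor of today’s deep networks) that learns the logical OR pattern purely from labeled examples. This is an illustration only, not how DeepFace works; real deep-learning systems stack many such units in layers with millions of learned parameters.

```python
# Toy illustration of a system "teaching itself" a pattern:
# a single perceptron learns logical OR from labeled examples.
# (Illustrative only; real deep-learning systems are vastly larger.)

def train_perceptron(samples, targets, epochs=10, lr=1.0):
    """Learn weights for a binary classifier from example data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = t - y                   # how wrong was the guess?
            w[0] += lr * err * x[0]       # nudge the weights toward
            w[1] += lr * err * x[1]       # the correct answer
            b += lr * err
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                    # the OR pattern to learn
w, b = train_perceptron(samples, targets)
preds = [1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
         for x in samples]
print(preds)  # → [0, 1, 1, 1]: the pattern has been learned
```

No one told the program the rule for OR; it discovered the right weights by repeatedly correcting its own mistakes, which is the core idea behind systems that learn pattern recognition from data.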
But today’s AI capacity is still narrow and specific. It is basically brute-force number crunching, largely blind to the human mind’s interests, desires and nuance. In other words, computers still cannot truly “infer, judge and decide” in the conventional human sense.
But AI is already powerful enough to make a real difference in human life. With AI, doctors can spot cancers in medical images better than ever before. Speech recognition on smartphones will bring the Internet to the illiterate in developing countries. Digital assistants can parse data, from the academic to the financial, finding nuggets of value. Wearable computers will change how we view the world.
On the other side of things, AI brings new power to the apparatus of state security, in autocracies and democracies alike. Billions of conversations are regularly monitored using AI (see: the NSA), and faces can be scanned and identified quickly for security risks.
But these are not the things the luminaries in our title worry about. No, they’re worried about a more apocalyptic threat: that of “autonomous machines with superhuman cognitive capacity and interests that conflict with those” of people today. For example: a car that drives itself is a boon to society; a car with its own ideas about where to go, perhaps not so much. Shades of 2001’s HAL, perhaps.
But AI can be developed safely, as the Economist article argues. It notes that “just as armies need oversight, markets are regulated and bureaucracies must be transparent and accountable,” so AI systems must be open to scrutiny. Because designers cannot foresee every contingency, there must also be an on-off switch. This can be done without compromising progress. From nuclear weapons to traffic laws, humans have “used technical ingenuity and legal strictures to constrain other such powerful innovations.” The same diligence must be applied, every bit as carefully and judiciously, in the burgeoning world of AI.
Autonomous, non-human intelligence is so compelling and potentially so rewarding that its perils, though real, should not obscure the huge benefits AI will bring. But those perils must be managed.