What Could Possibly Go Wrong?
It turns out everything, when it comes to artificial intelligence. The media is filled with dire warnings that the dystopian future is here now. Here in San Francisco, human drivers, pedestrians, firefighters, and even police officers are forced to dodge errant automated vehicles operated by Waymo, a subsidiary of tech giant Google (company motto: "Don't be evil.").
Meanwhile, today's New York Times is awash in terrifying AI stories. Dr. Geoffrey Hinton, the 75-year-old "godfather" of AI, just quit Google, saying the company is, well, doing evil by frantically trying to keep up with Microsoft -- which recently unveiled its new AI-enhanced search engine -- in the new scientific arms race. Hinton, who was driven to explore technology's brave new frontier much as the physicist J. Robert Oppenheimer was decades ago, now believes that AI could be even more destructive than nuclear weapons.
International treaties and wise diplomacy have so far prevented a nuclear world war. But AI research, Hinton points out, can be undertaken secretly by companies or individual scientists. According to Hinton, AI, which is progressing at a truly chilling speed, can eliminate human jobs -- even high-end, creative ones (an artificially produced song imitating Drake and The Weeknd just went viral, the NYT reported); exponentially escalate surveillance of human activity; and even someday start world wars.
According to today's NYT, that scary future is partly here now: the authoritarian Israeli government is using AI technology to "automate apartheid" and track Palestinians. Scientists at the University of Texas just disclosed that they found a way -- with the help of AI -- to read minds by analyzing the flow of blood to regions of the brain.
What's the solution? Dr. Hinton says socially minded scientists around the world -- like him -- should place controls on AI research. Right. That self-regulation has worked so well. Hinton himself declined to sign two recent letters from fellow computer scientists protesting AI research because he didn't want to publicly shame Google.
Self-regulation -- or shame -- doesn't work on tech giants. Their research is driven by capitalist imperatives and overseen by men who already seem well along the evolutionary path toward universal automation.
Recently, the New York Review of Books covered a spate of new books that urge the elimination of the human race for the good of the planet. These books, which propose, among other "solutions," that we stop reproducing, did make me think. Climate catastrophes, wars that threaten to go nuclear, and now the rapid advance of AI... it does seem that we humans are on a death trip, and we're determined to take the rest of Earth with us.
But maybe, just maybe, we can still save ourselves?