Is artificial intelligence a threat?

When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.

So says Nick Bostrom, a philosopher who founded and directs the Future of Humanity Institute, in the Oxford Martin School at the University of Oxford. He created the “paper-clip maximizer” thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book Superintelligence: Paths, Dangers, Strategies was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: The dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.

Imagine a machine programmed with the seemingly harmless, and ethically neutral, goal of getting as many paper clips as possible. First it collects them. Then, realizing that it could get more clips if it were smarter, it tries to improve its own algorithm to maximize computing power and collecting abilities. Unrestrained, its power grows by leaps and bounds, until it will do anything to reach its goal: collect paper clips, yes, but also buy paper clips, steal paper clips, perhaps transform all of Earth into a paper-clip factory. “Harmless” goal, bad programming, end of the human race.
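
To make the logic concrete, here is a deliberately crude Python sketch of the thought experiment. The class, the numbers, and the two-variable model of the world are all invented for illustration, not anything Bostrom has published; the point is only that an objective with no notion of restraint never tells the optimizer when to stop.

```python
# Toy illustration of the paper-clip-maximizer thought experiment.
# Everything here is invented: the "world" is a single resource number and
# "self-improvement" is just a capability multiplier. The objective says
# nothing about restraint, so the loop never stops on its own.

class PaperClipMaximizer:
    def __init__(self):
        self.capability = 1.0  # clips it can make per step
        self.clips = 0.0

    def step(self, resources):
        # A far-sighted optimizer first grows its capability...
        if self.capability < resources:
            self.capability *= 2  # crude stand-in for "improve its own algorithm"
            return resources
        # ...then converts every remaining resource into clips.
        made = min(self.capability, resources)
        self.clips += made
        return resources - made

world = 1_000_000.0  # stand-in for "all of Earth," in clip-equivalents
agent = PaperClipMaximizer()
for _ in range(40):
    world = agent.step(world)

print(f"clips: {agent.clips:,.0f}, resources left: {world:,.0f}")
```

Hand the toy agent a bigger world and the behavior is identical: capability doubles until the resource runs out, because nothing in the objective values anything but clips.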

The thought experiment is, of course, exaggerated, but, according to Bostrom, the dangers of artificial intelligence and advanced technology are not. He’s made a career of studying the threats that could wipe out humankind and says the likely culprit will not be a natural disaster.

“Humans have been around for over 100,000 years. During that time, we have survived earthquakes and firestorms and asteroids and all kinds of other things,” he says. “It’s unlikely that any of those natural hazards will do us in within the next 100 years if we’ve already survived 100,000. By contrast, we are introducing, through human activity, entirely new types of dangers by developing powerful new technologies. We have no record of surviving those.”

Bostrom, who coined the term “existential risks” for such threats, jokes that more research has been done on snowboards and dung beetles than on the question of whether we’ll survive disaster. But a movement is growing rapidly, with the help of high-profile champions and Silicon Valley money. In addition to his institute, founded in 2005, there is the Centre for the Study of Existential Risk, which Martin Rees co-founded in 2012 at the University of Cambridge, where he is an emeritus professor of cosmology and astrophysics. New York has leading members of the Global Catastrophic Risk Institute, which opened in 2011. And just this May, the Cambridge, Mass.-based Future of Life Institute held its inaugural event at the Massachusetts Institute of Technology.

Max Tegmark, a professor of physics at MIT, began the Future of Life event—a panel discussion on the role of technology—by showing a slide with pictures of both Justin Bieber and Vasili Arkhipov, a Russian naval officer. He had a couple of questions for the audience. First, who is more famous? Then: “Which of these two guys should we thank for us all being alive here tonight because he single-handedly stopped a nuclear attack during the Cuban missile crisis?”

The crowd laughed, but Tegmark had made his point. We’re more aware of a 20-year-old Canadian pop star than of someone to whom we might owe our lives, and he wants to turn that around. “I’m very much a technology optimist,” says Tegmark, a founder of the Future of Life Institute, in an interview. “Technology offers such amazing opportunities for making life better, but because it’s so powerful, it comes with pitfalls. So it’s really important to think in advance about what the pitfalls are, rather than bumble around and mess things up accidentally, like we almost did in the Cold War.”

But because existential-risk research involves planning for very long-range scenarios, these organizations face skepticism from those who think the resources would be better spent on more-concrete, immediate problems, like global poverty and lack of access to health care.

Not quite, says Jaan Tallinn, an Estonian programmer who helped develop Skype and now funds many existential-risk organizations. Such research is important to support because the cost of not doing so is, by definition, annihilation. “It would take out not just the number of people alive now but the number of people not yet born,” Tallinn says. “Compared to that, even very serious concerns like malaria—one of the biggest things that rich people seem to be acknowledging—are minuscule.”

Viktoriya Krakovna, a doctoral candidate in statistics at Harvard and another founder of the Future of Life Institute, says it operates grassroots-style to recruit volunteers and younger scholars. “We wanted to create FLI as an intersection of two circles of people: more-senior scientists, like people on the advisory board, and people from the local community and universities,” she says. “We’re not necessarily selecting for traditional credentials and prestige, but people who are really interested in these issues.”

The institute will host visiting fellows, hold public workshops and lectures, and, eventually, collaborate with policy makers. But first it must identify which techno-risk might do us in soonest.

Similarly, at Cambridge’s Centre for the Study of Existential Risk, which counts Stephen Hawking among its advisory-board members, “we want to figure out what we should be worried about, exactly, and what can be done to ameliorate it,” says Rees. He believes that biotechnology, with its potential for accidental release as well as bioterrorism (what he terms “bio error and bio terror”), is a strong contender for the greatest techno-risk. Seth Baum, a founder and executive director of the more policy-focused Global Catastrophic Risk Institute, studies nuclear disarmament, in part because he lives near the United Nations headquarters, in New York.

When weighing different risks, he says, “the bottom-line question is, What can we do about it?” For a risk that’s “really high-probability but there’s nothing we can do in terms of regulation or prevention, the fact that it’s high-probability might not matter as much.”

But as researchers evaluate biotech versus nanotech versus nuclear weapons, superintelligence has moved to the forefront of existential-risk consciousness. “I always thought superintelligence was the biggest existential risk because of the rate of progress,” says Bostrom. “This might be the area where a small amount of well-directed research and effort now can have the biggest difference later on.”

Concern over artificial intelligence is old news. As far back as 1847, people questioned whether machines—in that case, calculators—might one day do harm, notes Stuart Russell, a computer-science professor at the University of California at Berkeley who advises several of the existential-risk institutes. Without evidence that intelligence is limited by biology, the only obstacles to superintelligence, he says, are physics and ingenuity.

The development of machines with human-level intelligence has been heralded since the 1940s and, to listen to some people, seems to be perpetually 20 years away. Though research hit a standstill during the reduced-funding “AI Winters” of the late 1970s and early ’90s, the rate of progress is accelerating, in part thanks to financial support from Silicon Valley players. Google acquired the start-up DeepMind in January. Facebook’s Mark Zuckerberg and Elon Musk, of SpaceX and Tesla, have each invested in Vicarious, an AI firm that wants to replicate the human brain.

The two biggest funders of the modest existential-risk ecosystem—which subsists on about $4-million annually—are Peter Thiel, a founder of PayPal, and Tallinn. Seven years ago, Tallinn met with an AI researcher named Eliezer Yudkowsky and spoke with him for four hours about the risks of artificial intelligence. As soon as they parted, Tallinn wired $4,000 to Yudkowsky’s organization, the Machine Intelligence Research Institute, in Berkeley, to compensate him for his time.

“Once I identified that this is the most important thing I could do with my life, I started consciously looking for opportunities to push this X-risk ecosystem forward,” says Tallinn. He has helped start the Centre for the Study of Existential Risk and the Future of Life Institute, and contributes about $500,000 each year to research.

At the Machine Intelligence Research Institute, the top donors include Peter Thiel’s foundation, which has given more than $1.3-million to date, and Jed McCaleb, one of the founders of the MtGox Bitcoin exchange, who has contributed more than $500,000.

The leap from human-level machine intelligence to superintelligence is not as great as the one from where we are now to human-level machine intelligence, says Hector Levesque, a recently retired professor of computer science at the University of Toronto, who is noted for his AI research. “There are many things about human-level intelligence—how we represent and reason and understand things—that we do not yet understand at all,” he says. “These are fundamental questions that we haven’t completely sorted out that are necessary to get to that level of intelligence.” Turing tests, which ostensibly represent intelligence, can be “passed” by evading questions or pulling facts from a data set, but Levesque argues that a computer won’t be “intelligent” like a human until it can think in the abstract, perceive something instead of just “seeing” it, and understand things that aren’t easily Googleable.

We are already good at developing machines that are smart in one area. Think of Watson, the AI champion of Jeopardy! But “human-level” intelligence is more than speedily calling up quiz-game facts. It is general intelligence that can apply knowledge to a wide variety of tasks. And yet, as Stuart Russell notes, “the program that was able to beat [the chess grandmaster Garry] Kasparov was completely unable to play checkers.”

What’s more, while there is funding for AI-risk research, there may not be funds for the machines that validate such studies. There’s a clear use (and therefore a revenue stream) for self-driving cars, robots that sweep floors, and algorithms that sift data to further personalize medicine. But the purpose of a general-intelligence machine is less clear, says Levesque. For organizations or individuals dreaming of cures for cancer or remedies for economic inequality, pouring money into a long-term effort to devise a superintelligent machine seems to fail a cost-benefit analysis.

According to Bostrom, combined results from four surveys show that experts believe human-level machine intelligence will almost certainly be achieved within the next century. Although his own predictions are more cautious, some surveys predict a 50-percent chance of machines with human-level intelligence by 2040, and a 10-percent chance within the next decade. From that milestone, it’s not a far leap to superintelligence.

“Once you reach a certain level of machine intelligence, and the machine becomes clever enough, it can start to apply its intelligence to itself and improve itself,” says Bostrom, who calls the phenomenon “seed AI” or “recursively self-improving AI.”

If that self-improvement happens within a matter of hours or days, in what is called a “hard takeoff,” people will be helpless in its wake, unable to anticipate what might happen next. It’s like the story of the genie who grants three wishes, but never quite in the way the wisher intends, says Russell, the computer scientist from Berkeley. “If what you have is a system that carries out your instructions to the letter, you’ve got to be extremely careful about what you state. Humans come with all kinds of common sense, but a superintelligence has none.”
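
A toy calculation, with entirely made-up rates, shows why the speed of that takeoff matters: when each gain in capability makes the next gain easier, progress compounds instead of creeping, and the window for human reaction closes fast.

```python
# Hypothetical growth curves, purely to illustrate the "hard takeoff" intuition.
# All rates are invented; the contrast between the curves, not the values, is the point.

steps = 30

# Ordinary engineering progress: capability grows by a fixed amount per step.
linear = [1 + 0.5 * t for t in range(steps)]

# Recursive self-improvement: each step's gain scales with current capability,
# because a smarter system is better at making itself smarter.
recursive = [1.0]
for _ in range(steps - 1):
    recursive.append(recursive[-1] * 1.5)

for t in (0, 10, 20, 29):
    print(f"step {t:2d}: linear = {linear[t]:7.1f}   recursive = {recursive[t]:12.1f}")
```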

So, what is to be done? The realm of science fiction offers options like a kill switch, but no permanent solutions.

“It’s not like we’re going to keep the superintelligence bottled up forever and hope that nobody else ever develops a free-roaming superintelligence,” says Bostrom. The “motivation-selection problem”—how to program a computer to have common sense—must be solved before we reach the takeoff point. One approach is to make the machine’s goals deliberately vague. Instead of asking it to “cure cancer,” you tell it to “do something that benefits humanity,” and let it spend a lot of time checking in with human beings to find out what everyone actually wants.
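
As a loose sketch of that “check in with humans” idea (the function names, thresholds, and random stand-ins below are hypothetical, not any institute’s actual method), the agent treats its estimate of what benefits people as uncertain and defers to a person whenever that uncertainty is high.

```python
# Illustrative-only sketch of "ask before acting": the agent holds an uncertain
# estimate of how beneficial each action is, and defers to a human whenever its
# confidence is low. All names, values, and thresholds are invented.

import random

def estimated_benefit(action):
    # Stand-in for the agent's learned model of "benefits humanity":
    # returns (mean estimate, uncertainty). Here it is just random noise.
    return random.uniform(-1, 1), random.uniform(0, 1)

def ask_human(action):
    # Stand-in for actually consulting people about what they want.
    return input(f"Is '{action}' okay? [y/n] ").strip().lower() == "y"

def choose(actions, uncertainty_threshold=0.3):
    approved = []
    for action in actions:
        benefit, uncertainty = estimated_benefit(action)
        if uncertainty > uncertainty_threshold:
            # Too unsure what people want: defer instead of optimizing blindly.
            if not ask_human(action):
                continue
        if benefit > 0:
            approved.append((benefit, action))
    return max(approved)[1] if approved else None

print(choose(["reorganize hospital schedules", "convert factory to clip production"]))
```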

Another approach is the one pioneered by the Machine Intelligence Research Institute, which is in talks to collaborate with faculty members at Berkeley. There, researchers believe that the most effective way to prevent a nonhuman superintelligence from harming humans is to … make it more human. They are working on developing “friendly AI,” with sympathy and altruism for humankind built in. The institute’s publications include papers on the theoretical effects of programming AI with various moral systems, and the extent to which an artificial general intelligence can reason about its own behavior.

If all goes well, and research progresses, we’ll have some practice building, and restraining, subintelligent machines before we try to harness superintelligence, says Russell. It’s the same process but much less risky: “If something does go wrong, a subintelligent machine is unlikely to be able to take over the world.”

At the Future of Life Institute’s event, discussions ranged from the ethics of kill switches for superintelligent machines (“we absolutely should have” them), to intelligent military drones (the Nobel-laureate physicist Frank Wilczek’s fear), to last-ditch escape plans (space travel).

The first question from the audience challenged the very premise of the meeting: Why is it a good idea that we continue to exist? Given that humans have caused the extinction of other species, wouldn’t it be poetic justice if advanced forms of intelligence, which could probably run the world better anyway, caused our extinction?

There’s a clear answer, even just thinking about the “hard-core economic point of view,” said Wilczek. “There’s an enormous investment in human intelligence as opposed to any other kind of intelligence. It’s been developed and enhanced and enabled for many centuries in nontrivial ways. I really don’t think we want to start all over again.”

There were once two major restraints to existential-risk research, says Jaan Tallinn, the Skype co-founder. One has been lifted, and the other remains.

The first was credibility. Groups like the Machine Intelligence Research Institute (once called the Singularity Institute) have been around for a while, “but few people would listen because they dismissed us as ‘crazy people in Silicon Valley,’” he says. Now the connection of research institutes to top universities has “taken away the reputation constraint” for those who might not want to help a nonprofit group but feel comfortable taking the University of Cambridge seriously.

Martin Rees, of the Centre for the Study of Existential Risk, hopes that this clout can extend beyond the ivory tower. The astronomer royal, though a self-described “political pessimist,” notes that as a member of the House of Lords, he is a part-time politician himself and can perhaps help the research institute in the political arena.

Yet the problem of funding remains, says Tallinn, who provides over 90 percent of the support for the University of Cambridge center and is helping to leverage donations and raise awareness of its work. He has given talks both at start-up conferences—“to get young people exposed to these ideas”—and at universities around the world.

“I used to have the very standard worldview,” he says. “I can easily identify with people who see computers getting faster, and smarter, and technology getting more and more beneficial, without seeing the other side.”

Max Tegmark, of the Future of Life Institute, has harsher words. “In terms of how much attention we’re giving to the future of humanity, I don’t think we’re off by a little bit,” he says. “I think we’re completely bungling it. If you compare how we handle our survival as a species with how we do it in our personal lives, it’s a total disconnect.” People continue to buy fire insurance for their homes despite never expecting them to burn down, Tegmark points out, because the possibility of losing everything would be devastating.

“It’s the same here,” he says. “Even if the probability is tiny, it’s worth buying the fire insurance for humanity.”
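
The analogy is, at bottom, an expected-value comparison. With purely illustrative figures that do not come from the article, the arithmetic looks like this:

```python
# Illustrative expected-value arithmetic behind the "fire insurance" analogy.
# All probabilities and dollar figures are invented for this example.

p_fire = 0.003            # yearly chance your house burns down
house_value = 300_000     # what you stand to lose
premium = 1_000           # yearly cost of insurance

expected_loss_uninsured = p_fire * house_value   # 900
print(f"expected yearly loss without insurance: ${expected_loss_uninsured:,.0f}")
print(f"insurance premium: ${premium:,.0f}")

# People buy the policy anyway, because the rare outcome is unaffordable,
# not because the expected loss exceeds the premium. The existential-risk
# argument is the same comparison with a far larger downside: even a tiny
# probability multiplied by "everything" dominates a modest research budget.
```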
