
Science Group Wants to Prevent the Rise of the Machines

No, this isn't a movie: The Future of Life Institute, backed by prominent figures in science and technology, has released an open letter urging world governments to avoid building weaponry that relies on artificial intelligence.

From Terminator to Chappie, the idea of artificially intelligent weaponry has mostly remained in the realm of fiction.

But just in case it does, a group called the Future of Life Institute is working to keep potentially dangerous weaponry from spreading. This week the institute released an open letter outlining the challenge of balancing the benefits of artificial intelligence against the risk that it will be put to dangerous uses, specifically the development of autonomous weapons.

Instead, the institute emphasizes that artificial intelligence should be used to protect humankind, not to destroy it.

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity,” the letter states. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

The letter drew prominent signatories, including Tesla Motors CEO Elon Musk, Apple cofounder Steve Wozniak, and famed theoretical physicist Stephen Hawking. But also on the list were several members of the Association for the Advancement of Artificial Intelligence (AAAI), an organization that works to advance the scientific understanding of AI and to educate the public about this groundbreaking area of computer science.

The move drew positive responses from commentators. Benzinga staff writer Wayne Duggan argued that by getting in front of the issue now, tech companies may avoid a potential PR problem down the road, similar to the one that has dogged the rise of drones.

“The vast majority of AI research has nothing to do with weaponry, but a potential public backlash in response to AI weapon attacks could make other areas of AI development more difficult in the future,” he noted.

Ronald Arkin, director of the Mobile Robot Laboratory at Georgia Institute of Technology, tells Popular Mechanics that he’s not opposed to an AI weapons ban, though he thinks “we can do better by continuing to research ways into reducing noncombatant casualties with technology.”

“It does seem odd to me that people who fear that we can make machines ultimately exceed human intelligence balk at the fact that we can make them possibly more moral than we are, given the relative low bar humans have in adhering to ethical behavior,” he said.

Beyond the cinema, the issue has already surfaced several times. The Campaign to Stop Killer Robots, an international coalition of human rights groups and scientists, has pushed for a preemptive global ban on fully autonomous weapons. Officials in the U.K., however, demurred.

“At present, we do not see the need for a prohibition on the use of LAWS [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area,” the U.K. Foreign Office told The Guardian.


By Ernie Smith

Ernie Smith is a former senior editor for Associations Now.
