Hawking, Musk, Wozniak call for ban on autonomous weapons

Tech leaders from around the world have co-signed an open letter to regulatory bodies calling for a ban on autonomous weapons.

Autonomous weapons will become “the Kalashnikovs of tomorrow” if action is not taken soon, according to a letter released Tuesday at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.

The letter, drafted by the Future of Life Institute (FLI), was co-signed by leaders from across the science and technology spectrum, including Tesla and SpaceX CEO Elon Musk; Apple Inc. co-founder Steve Wozniak; renowned astrophysicist Stephen Hawking; and cognitive scientist Noam Chomsky. It comes days after police in Clinton, Connecticut, arrested a teenager who built a drone capable of firing a handgun, and has garnered signatures from 2,116 researchers in the artificial intelligence and robotics fields, as well as 11,583 from other tech professionals, executives, professors and students.

FLI defines a weapon as autonomous if it can “select and engage targets without human intervention.” A future with such tools, it argues, will be a bleak and dangerous one, not dissimilar to war-torn swathes of Africa where the proliferation of AK-47s has enabled widespread slaughter and conflict.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable … Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce,” states the letter, which members of the public can sign online. “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

The solution, the signatories argue, is legislative intervention. Autonomous weapons could be “practically feasible” within a few years; without strict regulation of potentially malicious AI applications, deadly machines could proliferate “beyond meaningful human control” and tarnish a field with the potential to contribute significantly to humanity.

“I’ve stressed that what really worries me about AI is the military application, where people are designing them so that the mindset will be paranoiac and hair-trigger as opposed to altruistic and humanistic,” MIT professor Frank Wilczek, a recipient of the 2004 Nobel Prize in physics who also signed the letter, said in an interview. “It’s not clear where the red line is — or how you draw the red line.”

The letter emphasizes that the purported benefits of autonomous military technology — for example, that human casualties in warfare would be drastically reduced — are largely a veneer that could make it easier to justify a rush to war, and potentially even genocide.

When asked about concerns that AI regulation might hinder positive scientific research, Wilczek expressed doubt.

“Regulation could conceivably impinge on research, but there are so many applications where the morality is straightforward,” Wilczek said. “AI can do a lot of good — we really need it to be able to cope with an aging population and to fulfill our potential, economically and socially.”

He went on to say that while the letter is a move in the right direction, it “doesn’t go far enough.”

“Autonomous weapons are very dangerous,” he said. “But if we can draw red lines and stick with them, we’ll be all right.”