Over 100 AI and robotics experts have signed a petition calling on the United Nations to ban lethal autonomous weapons and prevent an unmanageable third revolution in warfare.
Elon Musk, futurist and tech luminary, has ambitious plans for space and Earth alike. But of all his seemingly outlandish projects, we now know that at least one futuristic concept is not part of his game plan: lethal autonomous weapons.
Well-informed on AI issues, Musk is also actively working to raise awareness about its potential dangers.
Musk has spent millions of dollars to fund research in this direction, mainly through OpenAI and the Future of Life Institute.
The Future of Human Life in the Face of Existential Risks
Established in 2014 by cosmologist Max Tegmark and Skype co-founder Jaan Tallinn among others, the Future of Life Institute (FLI) seeks to prevent “existential risks” that threaten to wipe out humanity.
These risks may be of anthropogenic origins, mainly related to technology, such as nuclear weapons, biotech, and AI, or natural (global catastrophic risks) such as climate change, supervolcanoes and asteroid impacts.
The FLI considers natural existential risks less severe than those stemming from technological advances.
In 2015, the FLI released an open letter on the research directions needed to keep AI progress safe and beneficial.
The “Research Priorities for Robust and Beneficial Artificial Intelligence” letter was then signed by dozens of leading tech figures and scientists, such as Elon Musk and Stephen Hawking.
This time around, the FLI is targeting lethal autonomous weapons, aka “killer robots”.
UN Urged to Ban “Killer Robots” Before It’s Too Late
In an open letter to the UN Convention on the Prohibition of Certain Conventional Weapons (August 21), 116 experts in AI and robotics, led by Elon Musk, asked the UN to ban lethal autonomous weapons now and nip an AI arms race in the bud.
The signees were alarmed by the postponement of the first UN meeting on the risks associated with lethal autonomous weapons, originally scheduled for 21 August 2017.
“AI technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades,” the letter reads, “and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
Although some may argue that intelligent weapon systems could prove beneficial on the battlefield by reducing casualties, the signees believe that an AI arms race would end up perilous for humanity as a whole.
The letter warns that AI weapons could become the “Kalashnikovs of tomorrow”, readily available on the black market to dictators, warlords, and terrorists alike.
AI has real potential to help humans live better, safer lives, on battlefields and elsewhere, without AI systems becoming the killing machines themselves.