Artificial intelligence researchers from around the world have pledged not to take part in developing lethal autonomous AI weapons.

On Tuesday, the Future of Life Institute (FLI) announced that over 2,400 artificial intelligence researchers from 36 countries and 160 companies signed a pledge stating that they won’t participate in the development, trade, or manufacture of lethal autonomous AI weapons.

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” FLI, a Boston-based volunteer-run research and outreach organization, said in a statement.

“In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

Among the companies and organizations that signed the pledge are Google DeepMind, Element AI, Lucid.ai, and the European Association for Artificial Intelligence. Tesla CEO Elon Musk, DeepMind co-founder Shane Legg, GoodAI CEO Marek Rosa, and UC Berkeley Center for Intelligent Systems director Stuart Russell are among the prominent individual signatories.

“We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody,” Anthony Aguirre, a UC Santa Cruz professor and pledge signatory, said in a statement.

Back in 2015, Musk reportedly donated $10 million to an FLI research program centered on ensuring that artificial intelligence will be beneficial to people. Last year, Musk, together with Hassabis and Suleyman, called on the United Nations through FLI to regulate the development of autonomous AI weapons systems.

Do you believe that autonomous AI weapons pose a threat to humanity?
