AI researchers and expert critics are calling for a boycott of a South Korean university over the opening of its AI weapons research lab.
One of South Korea’s most prestigious universities has drawn the ire of AI researchers around the world for opening an AI weapons research facility that critics say could be used to develop “killer robots.”
The AI weapons lab opened on February 20 in collaboration with arms manufacturer Hanwha Systems.
According to reports, more than 50 artificial intelligence and robotics researchers from 30 countries said they would boycott the Korea Advanced Institute of Science and Technology (KAIST).
The group reportedly includes researchers from world-renowned universities such as Cornell University, the University of Cambridge, and UC Berkeley. In an open letter, the AI researchers said they would cease “all contact” with the South Korean university.
“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons,” the letter reads.
After news of the boycott broke, KAIST president Shin Sung-Chul said in a statement that the academic institution has no plans to develop killer robots.
In an article published by the Financial Times, Shin said KAIST greatly values human rights and ethical standards, and that its research is conducted to better serve the world.
The boycott was organized by Toby Walsh, a University of New South Wales professor and one of Australia’s leading AI experts.
The open letter states: “We publicly declare that we will boycott all collaborations with any part of KAIST until such time as the president of KAIST provides assurances – which we have sought but not received – that the centre will not develop autonomous weapons lacking meaningful human control.”
While Walsh believes that artificial intelligence may have many useful applications in the military, especially in dangerous missions, he said that the decision over who lives or dies should never be handed over to a machine.
“This crosses a clear moral line,” he said. “We should not let robots decide who lives and who dies.”