The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity. "The development of full artificial intelligence (AI) could spell the end of the human race," he has said.
Artificial intelligence researchers should focus on ensuring that AI systems "do what we want them to do," rather than simply advancing the technology's capabilities, a group of researchers and experts says.
An open letter from the Future of Life Institute this week called for expanded research to ensure AI systems are "robust" and benefit society.
It says: "Our AI systems must do what we want them to do."
Max Tegmark, co-founder of the institute, said in an email that the letter is mainly directed at the AI research community and anyone who funds it.
It has been signed by hundreds of researchers and other AI experts, along with physicist Stephen Hawking and Elon Musk, founder of Tesla Motors and SpaceX, who have both spoken publicly about the risks of AI.
In December, Hawking told reporters that "the development of full artificial intelligence could spell the end of the human race." Musk has said that AI is probably "our biggest existential threat" and "potentially more dangerous than nukes."
Both Hawking and Musk sit on the scientific advisory board for the Future of Life Institute, which describes itself as a "volunteer-based research and outreach organization working to mitigate existential risks facing humanity," especially human-level artificial intelligence.