A.I. poses existential risk of people being 'harmed or killed,' ex-Google CEO Eric Schmidt says

Key Points
  • Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not "misused by evil people," former Google CEO Eric Schmidt warned.
  • "And existential risk is defined as many, many, many, many people harmed or killed," Schmidt said.
  • The future of AI has been thrust into the center of conversations among technologists and policymakers grappling with what the technology looks like going forward and how it should be regulated.
Former Google CEO Eric Schmidt said he sees "existential risks" with artificial intelligence as the technology gets more advanced.

Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not "misused by evil people," former Google CEO Eric Schmidt warned Wednesday.

The future of AI has been thrust into the center of conversations among technologists and policymakers grappling with what the technology looks like going forward and how it should be regulated.

ChatGPT, the chatbot that went viral last year, has arguably heightened public awareness of artificial intelligence, as major firms around the world look to launch rival products and talk up their AI capabilities.

Speaking at The Wall Street Journal's CEO Council Summit in London, Schmidt said his concern is that AI is an "existential risk."

"And existential risk is defined as many, many, many, many people harmed or killed," Schmidt said.

"There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people."

Zero-day exploits take advantage of security vulnerabilities in software and systems that are unknown to the vendor, leaving defenders no time to patch them before attackers strike.

Schmidt, who was CEO of Google from 2001 to 2011, did not have a clear view on how AI should be regulated but said that it is a "broader question for society." However, he said there is unlikely to be a new regulatory agency set up in the U.S. dedicated to regulating AI.

Schmidt is not the first major technology figure to warn about the risks of AI.

Sam Altman, CEO of OpenAI, which developed ChatGPT, admitted in March that he is a "little bit scared" of artificial intelligence, saying he worries about authoritarian governments developing the technology.

Tesla CEO Elon Musk has said he thinks AI represents one of the "biggest risks" to civilization.

Even current Google and Alphabet CEO Sundar Pichai, who recently oversaw the company's launch of its own chatbot, Bard, said the technology will "impact every product across every company," adding that society needs to prepare for the changes.

Schmidt was part of the National Security Commission on AI in the U.S., which in 2019 began a review of the technology, including a potential regulatory framework. The commission published its review in 2021, warning that the U.S. was underprepared for the age of AI.
