Henry Kissinger says he wants to call attention to the dangers of A.I. the same way he did for nuclear weapons but warns it’s a ‘totally new problem’

Former Secretary of State and National Security Advisor Henry Kissinger, pictured in 2018.
Thomas Peter - Pool/Getty Images

At 99 years old, an elder statesman like Henry Kissinger could be forgiven for not being up to speed on artificial intelligence. But the former diplomat, who served under two presidents and played a key role in defining American foreign policy during the Cold War, has become a frequent commentator on A.I.'s latest developments, and his campaign to acknowledge the technology's dangers may be one of the final puzzle pieces of his legacy.

Talk of A.I. has hit a fever pitch in recent months, since the debut of OpenAI's ChatGPT in November pushed Microsoft, Google, and other tech companies to kick off an A.I. arms race. People and businesses are now using A.I. in record numbers, and companies could be inching closer to cracking the code on human-like artificial intelligence.

But Kissinger, the former Secretary of State and National Security Advisor who will become a centenarian on May 27, was preoccupied with A.I. years before intelligent chatbots entered the cultural zeitgeist. He is now calling on governments to account for the technology's hazards, much as he spent years championing an end to nuclear weapon proliferation.

“The speed with which artificial intelligence acts will make it problematical in crisis situations,” Kissinger said in an interview with CBS aired Sunday. “I am now trying to do what I did with respect to nuclear weapons, to call attention to the importance of the impact of this evolution.”

A.I.’s existential risks

Kissinger's interest in the ramifications of A.I. dates back to 2016, when he attended that year's Bilderberg Conference, a forum held since the 1950s for the alignment of U.S. and European interests.

He attended the conference at the invitation of Google's then-Executive Chairman Eric Schmidt, according to a 2021 Time article. The two went on to co-write a 2021 book with computer scientist Daniel Huttenlocher, The Age of A.I., which argued that A.I. was on the precipice of sparking widespread revolutions in human society, while questioning whether we were ready for it.

That moment may have already arrived, and it is still unclear whether society is prepared. Geoffrey Hinton, a former Google employee who is often referred to as a “Godfather of A.I.,” has recently issued a series of warnings about A.I.’s dangers after leaving Google in part to talk openly about the subject. 

Current A.I. capabilities are “quite scary,” Hinton told the BBC last week, and as machines become increasingly adept at a larger number of tasks, the opportunities for “bad actors” to use them for “bad things” also grow, he told the New York Times earlier this month. In another interview with Reuters last week, Hinton warned that the existential risk of A.I. could even “end up being more urgent” than climate change.

In an open letter in March, over 1,000 technologists, historians, and computer scientists called for a moratorium on the development of advanced A.I. systems so that the technology's capabilities and risks could be better understood, especially as companies work on A.I. that could potentially match or surpass human intelligence. Other experts, including Hinton, have argued that such a pause may be impossible to enforce, as the U.S. and China are already competing internationally on the A.I. front.

In a February op-ed for the Wall Street Journal, Kissinger, Schmidt, and Huttenlocher warned that A.I.'s capacities can "expand exponentially as the technology advances." A.I.'s growing complexity with each new iteration means even its creators are not fully aware of what it can do, the co-authors cautioned. "As a result, our future now holds an entirely novel element of mystery, risk and surprise," they wrote.

Calls to regulate

The situation with A.I. has been compared to the development of nuclear weapons during the second half of the 20th century, a crisis of unknown risks that required international coordination to rein in. Berkshire Hathaway CEO Warren Buffett said during the company's shareholder meeting last week that A.I., while "amazing," could be compared to the development of the atomic bomb due to its potential dangers and because "we won't be able to un-invent it."

Hinton also compared the existential threat of A.I. to that posed by nuclear weapons in an interview with CNN last week, calling it a possible area where the U.S. and China could cooperate on A.I. regulation.

“If there’s a nuclear war we all lose, and it’s the same if these things take over,” he said, although he did note in his New York Times interview that the situation with A.I. is completely different, as it is much easier for companies and countries to develop the technology behind closed doors than to create nuclear weapons.

Michael Osborne, a machine learning researcher at Oxford University, called for a non-proliferation agreement similar to that governing nuclear weapons to rein in A.I. during an interview with the Daily Telegraph in January. "If we were able to gain an understanding that advanced AI is as comparable a danger as nuclear weapons, then perhaps we could arrive at similar frameworks for governing it," he said.

But in his interview with CBS, Kissinger acknowledged that an A.I. arms race represented a completely different ballgame from the race to develop nuclear weapons, given the vast unknowns.

“[I]t’s going to be different. Because in the previous arms races, you could develop plausible theories about how you might prevail. It’s a totally new problem intellectually,” he said.
