dailyO
Technology

OpenAI CEO Sam Altman warns of 'risk of extinction from AI', critics ask if the threat is just another hype

Amrutha Pagad
May 31, 2023 | 10:34


Once again, the creators of artificial intelligence have come out with a dire warning about the very thing they are building; this time in a succinct 22-word statement that minces no words.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
- Center for AI Safety

The statement was published on Tuesday, May 30, by a San Francisco-based NGO, the Center for AI Safety, and has been signed by over 350 people from the industry, including bigwigs such as OpenAI CEO Sam Altman (the company behind ChatGPT), Google DeepMind CEO Demis Hassabis, two 2018 Turing Awardees and more.


The tech industry bigwigs have been relentlessly talking about the potential dangers and Terminator-style future of AI while racing ahead with its development. It all started with ChatGPT, a system built on a Large Language Model (LLM), which became popular with the general public, amassing millions of users in no time.

ChatGPT charmed the pants off most people with its ability to generate eloquent, human-like text. But the repeated Hollywood-style doomsday warnings have some asking whether this is all hype created around AI to boost business.

After all, tech companies like Microsoft and Google are clearly riding the wave of AI hype. And for the first time in history, a computer chipmaker, Nvidia, became a trillion-dollar business on May 30 (the same day the 22-word statement was published), as sales surged on demand for chips amid the generative AI race.

What does the latest statement mean?

  • The executive director of the Center for AI Safety, Dan Hendrycks, compared the warning to that of atomic scientists sounding the alarm on the very thing they were building. 
  • Hendrycks told The New York Times that the briefness of the statement was intentional, as they wanted it to serve as a clear message. 
  • The statement did not include any suggestions on mitigating the risks to avoid disagreements and diluting the essence of what they were trying to say. 
  • While the current statement doesn't elaborate on how AI could end the human race, the Center for AI Safety website suggests a number of apocalyptic possibilities like weaponising AI, a rise in misinformation, the concentration of AI power into a few hands, and a WALL-E-style human dependence on machines.

  • OpenAI recently suggested superintelligence be regulated like nuclear energy. 
We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts...
- OpenAI

It comes just two months after a similar warning was sounded by the Elon Musk-funded Future of Life Institute, which asked for a 6-month pause on the AI arms race sparked by ChatGPT and joined by Microsoft and Google, to evaluate potential risks and build guardrails.

What are critics saying?

You know how people thought we would have had self-driving cars or self-flying cars by now? And it isn't here yet… 

This is the feeling critics have about generative AI. They believe the creators of AI are overselling it because, of course, it is good for business to generate buzz that their creations will be just like what our imaginations have been putting in movies for decades.

Scroll back a little to March, when Elon Musk signed the warning that asked for a 6-month pause on generative AI development. Well, less than a month later, he announced his own TruthGPT, a rival to ChatGPT.

The latest statement was signed by two of the 2018 Turing Awardees (the award is known as the Nobel Prize of computing), Geoffrey Hinton and Yoshua Bengio. The third joint awardee, Yann LeCun, did not sign the statement.


Gary N Smith (the Fletcher Jones Professor of Economics at Pomona College) and Jeffrey Lee Funk, an independent technology consultant, wrote in February that a false narrative is spreading that computers are somehow smarter than humans, and that an AI bubble is inflating.

Their argument is that the generative AI (ChatGPT) behind the whole hype is misunderstood. ChatGPT is like a calculator for languages, throwing up the best possible word after word based on the large amounts of human-written data it was trained on.

ChatGPT and similar technologies do not understand the meaning of the words they produce. Hence, they are unlikely to take over the world.
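The "calculator for languages" idea can be illustrated with a toy sketch: a tiny bigram model that counts which word most often follows which in its training text, then "predicts" the likeliest next word. This is a drastic simplification of how an LLM actually works (real systems use neural networks over vast corpora), and the corpus and function names here are invented for illustration; but it captures the critics' point that the machine picks statistically likely words without grasping their meaning.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the vast human-written text a real model trains on.
CORPUS = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

def train_bigrams(text):
    """For each word, count which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def next_word(counts, word):
    """Return the statistically most likely next word; no understanding involved."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigrams(CORPUS)
print(next_word(model, "the"))   # "cat" follows "the" more often than any other word here
print(next_word(model, "sat"))   # "on" is the only word seen after "sat"
```

Scaled up by many orders of magnitude, with context windows far longer than one word, this word-after-word guessing is what produces ChatGPT's fluent-looking prose.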

Critics point out that the doomsday-style warnings are actually distracting us from zeroing in on the real present-day problems of AI - like misinformation, the inability to tell what is fake and what is not, the use of such technology for propaganda, surveillance and oppression, and more. On the other hand, many countries still lack strong policy frameworks when it comes to the present-day Internet.

So, is it a doomed AI bubble, or are we ignoring the warnings of industry insiders like we did with the Covid-19 pandemic?

Last updated: May 31, 2023 | 11:24