Once again, the creators of artificial intelligence have come out with a dire warning about the very thing they are creating; this time in a succinct, 22-word statement that minces no words.
The statement was published on Tuesday, May 30, by the Center for AI Safety, a San Francisco-based NGO, and has been signed by over 350 people from the industry, including bigwigs such as OpenAI CEO Sam Altman (the company behind ChatGPT), Google DeepMind CEO Demis Hassabis, two 2018 Turing Award winners and more.
The tech industry bigwigs have been relentlessly talking about the potential dangers and Terminator-style future of AI while racing ahead with its development. It all started with ChatGPT, a system based on a Large Language Model (LLM), which became popular with the general public and amassed millions of users in no time.
ChatGPT charmed the pants off most people with its ability to generate eloquent, human-like written text. But the repeated Hollywood-style doomsday warnings have some asking whether this is all hype created around AI to boost business.
After all, tech companies like Microsoft and Google are clearly riding the wave of AI hype. And for the first time in history, a computer chipmaking company, Nvidia, became a trillion-dollar business on May 30 (the same day the 22-word statement was published) as its sales rose on the back of demand for chips amid the generative AI race.
It comes just two months after a similar warning was sounded by the Elon Musk-funded Future of Life Institute, which called for a 6-month pause in the AI arms race led by ChatGPT, Microsoft and Google, so that potential risks could be evaluated and guardrails built.
You know how people thought we would have self-driving cars or flying cars by now? And they still aren't here…
That is how critics feel about generative AI. They believe the creators of AI are overselling it because, of course, it is good for their business to generate buzz that their creations will be just like the ones our imaginations have been putting in movies for decades.
Scroll back a little to when Elon Musk signed the March warning that asked for a 6-month pause on generative AI development. Well, he announced his own TruthGPT, a rival to ChatGPT, less than a month later.
The latest statement was signed by two winners of the 2018 Turing Award (often called the Nobel Prize of computing): Geoffrey Hinton and Yoshua Bengio. The third joint awardee did not sign the statement and instead had this to say:
"This is absolutely correct. The most common reaction by AI researchers to these prophecies of doom is face palming. https://t.co/2561GwUvmh" — Yann LeCun (@ylecun), May 4, 2023
Economist Gary N. Smith (the Fletcher Jones Professor of Economics at Pomona College) and independent technology consultant Jeffrey Lee Funk wrote in February that people are spreading a false narrative that computers are somehow smarter than humans, and that this narrative is inflating the AI bubble.
Their argument is that the generative AI behind the whole hype, ChatGPT, is misunderstood. ChatGPT is like a calculator for language, churning out the most likely next word, one word after another, based on the large amounts of human-written text it was trained on.
ChatGPT and similar technologies do not understand the meaning of the words. Hence, they are unlikely to take over the world.
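To make that "calculator for language" framing concrete, here is a minimal, purely illustrative sketch in Python. It is not how ChatGPT actually works (ChatGPT uses a transformer neural network trained on vastly more data), and the tiny corpus is invented for the example; but it shows the basic idea the critics describe: a model that simply keeps picking the statistically most likely next word, with no grasp of meaning.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the web-scale text an LLM learns from.
corpus = (
    "the robot wrote a poem . the robot wrote a story . "
    "the human read the story . the human liked the poem ."
).split()

# Count how often each word follows each other word (a simple bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Repeatedly pick the most frequent next word: pure statistics, no understanding."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the robot wrote a poem . the robot wrote"
```

The output can look superficially fluent, yet the program has no idea what a robot or a poem is; it is only echoing patterns in its training text, which is the critics' point about LLMs writ small.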
Critics point out that the doomsday-style warnings are actually distracting us from zeroing in on the real present-day problems of AI - misinformation, the inability to tell what is fake and what is not, the use of such technology for propaganda, surveillance and oppression, and more. Meanwhile, many countries still lack strong policy frameworks even for the present-day Internet.
So, is this a doomed AI bubble, or are we ignoring the warnings of industry insiders like we did with the Covid-19 pandemic?