
Elon Musk is asking for a 6-month halt after GPT-4. He is not alone

Amrutha Pagad | Mar 30, 2023 | 10:17


Elon Musk and others are urging the world to slow down generative AI progress for humanity's sake. Photo: DailyO

It isn't every day we hear the upper echelons of the tech world urging everyone to "slow down" technological progress. Slowing down tech is taboo. But the tide is changing. The likes of Elon Musk and Apple co-founder Steve Wozniak have signed an open letter urging a halt to generative AI development (read: ChatGPT) over "profound risk to humanity".


What does the open letter say?

An open letter titled "Pause Giant AI Experiments", written by the Future of Life Institute, an organisation focused on technological risks to humanity, has been signed by over 1,300 people so far, including big names in the tech world.

Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.
- Open Letter 
  • It argues that the creators of these advanced generative AI models do not fully understand the risks the new technology poses, and are racing only to conquer the market and build a money-making machine. 
  • It also says that the creators cannot predict how these AI systems will evolve, nor do they know how to reliably control them.
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control.
- Open Letter

Suggestion:

Signatories to the open letter want the creators to take a step back and build a safety foundation before racing ahead. The open letter recommends adopting the Asilomar AI Principles to avoid an AI arms race and focus on working towards the common good.


What are the Asilomar AI Principles?

  • The principles were developed at the 2017 Asilomar Conference on Beneficial AI organised by the authors of the open letter.
  • They consist of 23 guidelines created to direct AI research and development, grouped into three sections - research issues, ethics and values, and longer-term issues.
  • Topics such as privacy, avoiding the AI arms race, and more are covered.
  • The principles have signatories such as Stephen Hawking, Elon Musk, and even Ilya Sutskever, co-founder and research director of OpenAI, the company behind ChatGPT.

Why is ChatGPT setting off alarm bells?

Whether it is the latest ChatGPT model giving medical advice and listing possible health issues based on test results and other information, or text-to-image generators churning out art, there are concerns big and small about the advancement of generative AI.

  • Many are concerned about whether generative AI will take away their jobs (like that of this author) or will render tests, exams, and our education system meaningless. 
  • There is a concern regarding how it will be used to spread propaganda and misinformation. 
  • We don't have to look far for examples: deepfakes have already shown world leaders making hateful statements they never made.
  • We don't know yet how generative AI will impact the economy and banking system as we know it.
  • There have already been examples of AI systems making biased decisions based on gender, race, or colour and spitting out hateful comments.
  • We also don't need to look too far to find an example of the last time we developed a disruptive tech without any safety harness - social media. 
  • Modern AI systems, including the current generative AI systems, are based on neural networks, loosely inspired by the human brain.
  • Neural networks teach themselves to produce novel outputs from the data fed to them, unlike conventional computer programming, which relies on explicit instructions and predictable outcomes (see the sketch after this list).
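
To make that difference concrete, here is a minimal sketch of a neural network "teaching itself" from data: a toy network that learns the XOR function from four examples, with no hand-written rules. It is an illustration in Python with NumPy, not code from OpenAI or the open letter, and it is incomparably smaller than anything like GPT-4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the four XOR input pairs and the outputs we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights to shrink the prediction error.
for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)      # forward pass
    output = sigmoid(hidden @ W2 + b2)
    error = output - y                 # how wrong the network currently is

    # Backpropagation: push the error back through each layer.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

print(output.round(2))  # approaches [[0], [1], [1], [0]]: learned, not programmed
```

No rule in this code says "output 1 when the inputs differ"; that behaviour emerges from adjusting a few dozen numbers. Models like GPT-4 do the same with hundreds of billions of such numbers, which is why even their creators struggle to predict or fully explain them.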

Does this mean that one day, AI systems built on vast data and neural networks will be able to think for themselves?

We don't know whether the current AI arms race will unknowingly create an AI system so powerful that it wants to end humanity and take over, or whether such a system will end humanity simply because its values differ from ours while it pursues the goals we set it.

The AI dystopia that brings an end to humanity doesn't seem so impossible anymore. It also seems more likely to happen in the near future than in the distant one. Like Marie Curie, who died after years of exposure to radiation while studying radioactivity, our experimenting scientists may not realise the dangers of the substance they are holding in their hands right now.

Last updated: March 30, 2023 | 10:20