‘Stop building AI now’: Elon Musk warning letter calls for breathing space

A group of high-profile technologists, including Steve Wozniak and Elon Musk, has demanded that all AI labs cease work on systems more powerful than OpenAI’s language generator GPT-4.

The demand came in the form of an open letter, signed by more than 1,000 luminaries, which said that AI posed “profound risks to society and humanity”.

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter was posted on the website of the Future of Life Institute (FLI) over a week ago, but only came to wider media attention in the past 24 hours.

The institute is a non-profit organization founded in 2014 to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life”.

Its main address is in Pennsylvania in the US, but it also has team members based across the US and Europe. 

In the open letter, the group said that AI labs should stop building ever more powerful machine learning models and instead focus on making current systems “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

The trouble with GPTs

GPT stands for “Generative Pre-trained Transformer”, and refers to a type of machine learning model: a virtual neural network trained on massive amounts of data to reproduce desired real-world outputs.

In the case of OpenAI’s GPT series, the network was tasked with predicting the next word in any sequence of text. Billions of words from the internet and from books were used as training data, with feedback loops adjusting the network’s connections after each prediction.
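
That training loop can be illustrated in miniature. The sketch below is an assumption-laden toy, not OpenAI’s code: it uses PyTorch, a twelve-word corpus in place of billions of words, and a tiny embedding-plus-linear network in place of a transformer, but it trains on exactly the next-word objective described above.

```python
# Minimal sketch of next-word-prediction training (toy stand-in, not OpenAI's code).
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

# Training pairs: each word is used to predict the word that follows it.
xs = torch.tensor([stoi[w] for w in corpus[:-1]])
ys = torch.tensor([stoi[w] for w in corpus[1:]])

# Tiny network: an embedding followed by a linear layer that scores
# every vocabulary word as a candidate next word.
model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)  # how wrong were the predictions?
    loss.backward()                # the "feedback loop"...
    opt.step()                     # ...that adjusts the network's connections

# After training, the model assigns high scores to plausible next words.
next_id = model(torch.tensor([stoi["the"]])).argmax().item()
print("after 'the' the model predicts:", vocab[next_id])
```

GPT-4 differs from this sketch mainly in scale and architecture, not in the basic objective: the same predict-the-next-word loop, run over vastly more data and parameters.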

In GPT-4, this has resulted in an AI that can pass legal exams and hold apparently meaningful conversations, and that is set to revolutionise games.

One concern for researchers is that the networks are opaque: no one knows exactly how GPT-4 has internally represented the training data.

The FLI open letter asks for a six-month pause so that policies and processes capable of controlling powerful systems can be developed.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

As well as Apple co-founder Wozniak and Tesla CEO Musk, the letter was signed by Sapiens author Yuval Noah Harari, entrepreneur Andrew Yang and numerous AI researchers. Employees of San Francisco-based OpenAI were apparently absent from the list.

Hal Crawford

Hal Crawford is an experienced journalist and newsroom manager, and the head of content at Polemos.