After the rise of ChatGPT, Elon Musk and a group of tech experts have called for a six-month freeze in the training of advanced artificial intelligence models, claiming the systems could pose “deep threats to society and civilization.”
Musk, who is chief executive of both Twitter and Tesla, signed an open letter organized by the nonprofit Future of Life Institute, which is primarily funded by the Musk Foundation.
According to the European Union’s transparency register, the group also receives donations from the Silicon Valley Community Foundation and the effective altruism group Founders Pledge.
The letter outlines the threats advanced AI could pose in the absence of competent oversight and urges an industry-wide halt until proper safety measures have been devised and validated by independent experts.
The risks it cites include the spread of “propaganda and untruth,” job losses, the emergence of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us,” and the potential “loss of control of our civilization.”
The experts pointed out that OpenAI recently acknowledged that “independent review before beginning to train future systems” may soon be required.
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the letter says. “This pause should be public and verifiable, and include all key actors.”
Musk was a co-founder and early investor in OpenAI, the company that developed ChatGPT. He has since left OpenAI’s board of directors and is no longer involved in its operations.
Shivon Zilis, an AI expert who gave birth to twins fathered by Musk through in vitro fertilization, recently resigned from OpenAI’s board of directors. She had been an OpenAI adviser since 2016. Zilis, 37, works for Neuralink, Musk’s brain chip firm.
Despite his public reservations about AI, Musk is apparently considering constructing a ChatGPT competitor, according to the New York Post.
GPT-4, the latest version of the AI model powering Microsoft-backed OpenAI’s chatbot ChatGPT, has both stunned the public with its ability to generate lifelike responses to a huge variety of prompts and stoked fears that AI will put many jobs at risk and ease the spread of misinformation.
Other notable signers of the letter include Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp and at least three employees affiliated with DeepMind, an AI research lab owned by Google parent Alphabet.
OpenAI’s CEO Sam Altman has not signed the letter.
AI labs and independent experts “should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” to ensure the systems are “safe beyond a reasonable doubt,” the letter adds.
“Such decisions must not be delegated to unelected tech leaders,” the letter says. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter adds.
Elon Musk has repeatedly warned about the danger posed by the unrestrained development of AI technology – describing it last month as “one of the biggest risks to the future of civilization.”
Musk likened AI to the discovery of nuclear physics, which led to “nuclear power generation but also nuclear bombs.”
“I think we need to regulate AI safety, frankly,” Musk said. “Think of any technology which is potentially a risk to people, like if it’s aircraft or cars or medicine, we have regulatory bodies that oversee the public safety of cars and planes and medicine.”
“I think we should have a similar set of regulatory oversight for artificial intelligence, because I think it is actually a bigger risk to society,” he added.