A number of well-known tech leaders have signed an open letter calling for a six-month pause on large AI experiments, citing “profound risks” to society and humanity. The letter, signed by hundreds of AI experts, tech entrepreneurs, and scientists, calls for a halt to the development and testing of AI systems more powerful than OpenAI’s GPT-4.
Potential Threat To Civilization
It warns that language models such as GPT-4 can now match humans at a growing range of tasks, raising the prospect of widespread job automation and the proliferation of misinformation. The letter also raises the longer-term possibility of AI systems that could replace humans and fundamentally reshape civilization.
Published by the nonprofit Future of Life Institute on Wednesday, the open letter features several high-profile signatories, including Tesla’s Elon Musk and Apple’s co-founder Steve Wozniak. The statement also urges AI developers to collaborate with policymakers to hasten the implementation of effective governance systems, including a dedicated AI regulatory authority.
“Contemporary AI systems are becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter reads. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?” You can read the letter in full here.
A Highly Competitive AI Market
The letter’s central concern is the profit-driven race between OpenAI, Microsoft, and Google to build and launch AI models as quickly as possible. It contends that progress at this pace is outstripping the ability of society and regulators to adapt.

Ultimately, there is truth to these claims, particularly regarding the scale of investment. Microsoft has invested $10 billion in OpenAI and has incorporated AI into its Bing search engine and other products.
Google, for its part, had previously developed much of the underlying technology behind GPT-4 and other powerful language models. Until this year, however, it chose not to release comparable systems, citing ethical concerns. Whether those concerns have been resolved remains to be seen.
The Way Forward
These recent advancements in AI have strengthened the view that additional measures may be needed to regulate its use. Even OpenAI has publicly acknowledged the possible need for an “independent review” of upcoming AI systems to ensure they meet safety standards. And according to the signatories, the time to put such measures in place is now.
However, with tech giants like Google and Microsoft rapidly launching new products, it is doubtful the letter will change the current AI research landscape. Nonetheless, it signals growing resistance to a nonchalant attitude towards safety and ethics in AI development.

The letter concludes on an optimistic note, saying that “humanity can enjoy a flourishing future with AI.” It also notes that AI research has progressed to the point where we can now enjoy what it terms an “AI summer.” The signatories argue, however, that rushing ahead with further development and testing unprepared could prove detrimental to civilization.
“Society has hit pause on other technologies with potentially catastrophic effects on society,” they concluded. “We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”