Former Google CEO Wants AI Industry To Regulate Itself

Tanya Taylor
AI is advancing at an unprecedented rate and there’s a clear need for legislation, but is it good for the industry to regulate itself?
Former Google CEO calls for AI self-regulation
Tech workers taking notes. They have the best understanding of the industry, but should they create their own legislation? Photo: Scott Graham on Unsplash

According to a recent report, former Google CEO Eric Schmidt wants the AI industry to regulate itself. He believes governments don't understand the industry well enough to create regulations.

Schmidt is also concerned that governments aren't as enthusiastic about the global AI race and would stifle innovation with tight regulations. But he has many investments in AI start-ups, and many believe his comments are born of self-interest.

Self Regulation

Eric Schmidt is known for transforming Google from a Silicon Valley start-up into a global technology leader. According to Times Now News, he left Google in 2020 after 19 years of service. 

Futurism reports that in a recent interview on NBC's "Meet the Press", Schmidt declared that the AI industry should regulate itself. He is pessimistic about government regulation because he feels it would stifle innovation.

In an interview with the MIT Sloan School last year, Schmidt said, “We don’t know what role we want AI to fill in society. If you don’t know what you want, it’s hard to regulate it”. 

He thinks governments over-regulate and under-promote AI technology, and that they should also have a "how are we going to win AI" plan.

According to CNBC, Schmidt was in the headlines last year for not declaring his investments in AI start-ups while he helped write AI legislation. 

As chairman of the National Security Commission on Artificial Intelligence, he helped write legislation for the industry and secured taxpayer funding, all without declaring his investments in AI start-ups.

Not declaring his financial interests isn’t illegal, but many politicians say it’s highly unethical and shows a profound conflict of interest.

The AI Race

Former Google CEO Eric Schmidt feels that government regulation could lead to the US falling behind in the AI race. Photo: Nathana Rebouças | Unsplash

Earlier this year, industry leaders, including Elon Musk, signed an open letter calling for a pause in AI development until the technology can be regulated properly. Dr Geoffrey Hinton, "The Godfather of AI", also quit Google recently over his concerns about the speed of AI development.

Eric Schmidt doesn't support a pause in development, fearing that the US would be left behind in the AI race and allow China to overtake it.

According to Yahoo Finance, Schmidt said, “Frankly, this technology was invented in the United States. I don’t want us to give our lead up to the Europeans, Chinese or anybody else, and I want us to now address the shortcomings while we harness this incredible stuff.”

The popularity and efficiency of AI have surprised developers, and companies are scrambling to release technology before their competitors. 

AI can radically change our society, and many professionals believe we need to thoroughly assess risks before releasing new technology – rather than releasing it for profit and notoriety.

Safeguarding AI 

ChatGPT has had a huge impact on society since its release last year. Photo: BoliviaInteligente | Unsplash

Since its release last year, OpenAI's ChatGPT has taken the world by storm, and even its developers were surprised at its impact on society. Though AI has many benefits, there are concerns about potential risks to personal privacy, safety, and national security.

In the USA, the White House introduced the Blueprint for an AI Bill of Rights last year. It highlights the drawbacks of AI technology and the ways it can threaten the rights of Americans. Algorithms, in particular, can be biased, limit opportunities, and prevent access to resources.

The European Commission also recently introduced AI legislation, the Artificial Intelligence Act. The act assesses and categorizes the risks of AI systems: those posing an unacceptable risk will be banned outright, and high-risk systems will be subject to strict regulation.

