It seems clear now that Artificial Intelligence is not only here to stay, but to further grow and expand into uncharted territories. This is increasingly worrying Tech leaders, government officials, and academics, who are asking for regulation that they consider overdue and urgently needed.
Such is the case of Google’s CEO Sundar Pichai, who in a recent interview with CBS’s 60 Minutes, said that AI must “align with human values,” and warned that it will “impact every product across every company.”
“Does that keep me up at night? Absolutely,” he told 60 Minutes correspondent Scott Pelley. “We need to adapt as a society for it.”
But exactly how to adapt remains unclear. Different paths are being explored, from outright bans on AI technologies such as OpenAI's ChatGPT, to self-regulation efforts by tech companies, to incipient legislative initiatives by some governments.
At some point, a global framework to regulate AI technology could be universally set in place.
Why AI Needs Regulation

As a technology, AI has unprecedented potential for disruption in virtually every area of human activity, and its beneficial and harmful uses are developing in parallel at a rapid pace.
On the one hand, beyond optimizing everyday productivity, AI is set to revolutionize fields like energy-efficiency modeling and healthcare applications such as disease detection and the development of medicines and vaccines. On the other hand, its potential for misuse ranges from surveillance to spreading disinformation and impersonating or faking real people.
In the 60 Minutes interview, Pichai warned of two broad areas where AI is potentially harmful: impacts on the workforce and the spread of disinformation.
According to Pichai, AI will soon impact the jobs of “knowledge workers” such as writers, accountants, architects, and software developers. In fact, this is already happening: as reported by The Byte, a great deal of the companies already using AI technologies like ChatGPT have replaced human workers with AI.
The spread of disinformation and deepfake videos are clear AI dangers that require legal regulation, according to Google’s CEO. “Anybody who has worked with AI for a while… realize[s] this is something so different and so deep that we would need societal regulations to think about how to adapt,” said Pichai.
There is currently a race among tech companies to achieve AI breakthroughs, which is another problem that requires reflection and regulation, according to Pichai.
Calls For AI Regulation

As AI’s expansion now seems unstoppable, banning it altogether is not a realistic long-term solution for governments. Regulation will be the chosen path, but the question remains: how to regulate, and who gets to do it?
According to Sundar Pichai, it has to come from an open conversation involving society as a whole. Beyond engineers, it has to include “social scientists, ethicists, philosophers, and so on,” he said to 60 Minutes.
Self-regulation efforts are underway at top tech companies, such as Google’s recommendations for regulating AI and OpenAI’s calls to plan for a responsible AI future. But self-regulation alone is certainly not enough.
That is why the EU AI Act has been proposed in the European Parliament. It would be the first law of its kind to regulate AI use and development at such a high governmental level. It is based on a risk-tier system: unacceptable-risk uses would be banned, high-risk uses regulated, and low-risk uses left unregulated. The act was first proposed in 2021 and is currently under discussion in the European Parliament.
In the grand scheme of things, AI is still in its infancy, so, as Google’s CEO noted in the 60 Minutes interview, now is the right time for governments to get involved and regulate.
Eventually, as the technology is adopted by the majority of countries, global frameworks and treaties could regulate AI at an international level, much like the nuclear policies in place to govern limitation and non-proliferation.
That scenario seems like a realistic possibility in the near future.