Since AI exploded into general use following the release of publicly accessible tools such as ChatGPT, Midjourney, and Stable Diffusion, public opinion has been divided between the technology’s potential benefits and its risks to online security, jobs, and privacy. Governments have been actively taking part in this discussion.
After initial reactions in which several governments banned the technology outright, many countries now seem to be leaning toward a softer approach, developing frameworks that attempt to balance the regulation of risks with the encouragement of beneficial development.
This nuanced approach also leaves room for stronger regulation to be put in place later, as AI uses, including harmful ones, continue to be developed and explored.
The question no longer seems to be whether AI should be regulated, but how it can be regulated effectively. How are governments attempting to ensure the safe use of a nascent technology that sometimes outpaces our understanding of it?
Setting Guidelines For Balance

Governments, especially those of countries where AI technologies are being developed, are looking to mitigate the technology’s potential risks while providing a favorable environment for its development and experimentation.
Such is the case in the UK, where the government has spread AI regulation responsibilities among existing bodies overseeing human rights, health and safety, and competition. It wants to “avoid heavy-handed legislation that could stifle innovation” and instead take an “adaptable approach to regulation based on broad principles such as safety, transparency, fairness and accountability,” according to Reuters.
More recently, the technology and digital ministers of the G7 countries (Britain, Canada, France, Germany, Italy, Japan, and the United States) released a joint statement at the Hiroshima Summit, calling for an approach to AI that is “risk-based and forward-looking to preserve an open and enabling environment for AI development and deployment that maximises the benefits of the technology for people and the planet while mitigating its risks.”
Similarly, the US National Institute of Standards and Technology released its Artificial Intelligence Risk Management Framework in January, establishing four key functions to organize AI risk management: Govern, Map, Measure, and Manage.
The US framework is designed to give AI actors “approaches that increase the trustworthiness of AI systems,” and to help them “foster the responsible design, development, deployment, and use of AI systems over time.”
Towards An International Framework?

These initial regulatory steps could eventually evolve into a cooperative framework governing AI at an international level.
In a recent interview with 60 Minutes, Google CEO Sundar Pichai said that regulation should arise from an open conversation between states, tech companies, and society as a whole, and that as the technology becomes available in more countries, a global framework will eventually be needed to regulate it internationally.
The EU has taken initial steps in this regard with the EU AI Act, initially proposed in 2021 and currently under discussion in the European Parliament. It takes a risk-based approach, creating three tiers of risk with corresponding regulation: unacceptable risks are to be banned, high risks are to be regulated, and low risks are to be left unregulated.
International AI regulation based on treaties could also prevent governments from abusing the technology. Such abuse is not merely hypothetical; it is already happening. China recently released regulations that force AI products to “reflect the core values of socialism” and warn that they “should not subvert state power.” The rules also require AI services to verify the identities of their users.
Similarly, France’s National Assembly approved in March the use of AI-powered video surveillance at the 2024 Paris Olympics, a move that has drawn concern from civil rights groups, which worry the technology will be used to threaten privacy and civil liberties.