OpenAI has backtracked on its threat to quit Europe over AI laws. CEO Sam Altman said on Wednesday that OpenAI would cease operating in Europe if regulations became too burdensome, but he has since released a statement retracting his comments.
The EU is currently creating rules regarding AI which will potentially set a global standard. However, there’s still a long way to go before the legislation becomes law.
The EU Artificial Intelligence Legislation
Time reported yesterday that Altman made the statement at University College London on Wednesday, following a panel discussion during a European tour.
He said, “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”
According to ITPro, he has since backtracked on his original statement in a tweet yesterday, which read:
“Very productive week of conversations in Europe about how to best regulate AI. We are excited to continue to operate here, and of course have no plans to leave.”
Members of the European Union first proposed the draft Artificial Intelligence legislation two years ago to protect citizens from the rapidly developing AI industry. Europe wants to encourage AI development while also protecting its citizens' rights.
The EU AI Act also assesses the risks of AI systems – and categorizes them. Systems with an unacceptable risk will be banned outright, and those classed as high-risk will have strict regulations.
Reuters reports that the draft is close to its final stages, but it must still pass through complex negotiations: each EU member state must debate and fine-tune it before it becomes law.
Under the draft legislation, companies must disclose whether they used copyrighted material in developing their systems. This was one of Altman's main concerns; he originally said it felt like "over-regulation".
The Pushback

In addition to the copyright disclosure requirement, Altman was concerned that OpenAI's ChatGPT would fall under the high-risk category of AI, meaning OpenAI would have to comply with extensive safety regulations.
Under the draft legislation, ChatGPT would be classified as a General Purpose AI System (GPAIS). The EU considers GPAIS high risk because they are multifunctional tools that people can use in ways the developers never intended.
EU industry chief Thierry Breton responded for the EU, telling Reuters: "Let's be clear, our rules are put in place for the security and well-being of our citizens, and this cannot be bargained".
Should We Regulate AI?

There are many genuine safety concerns about AI, and even industry leaders have called for a halt in development until we can fully understand it and its capabilities. AI clearly needs regulation, but who should make the rules?
Many major corporations, including Apple, have banned staff from using ChatGPT over concerns that confidential material could be stored by a third-party company. Countries such as Italy have also banned the app over similar privacy concerns.
The industry is moving so rapidly that it is almost impossible to govern. According to Telcom, while the pending EU legislation will set a benchmark for global regulation, it is China and the US that are leading AI innovation.
Former Google CEO Eric Schmidt recently called for the industry to regulate itself, arguing that government rules don't support innovation. However, many experts believe that inclusive talks between industry and governments are needed to achieve the best results for everyone.