The Italian data protection authority opened an investigation into ChatGPT on March 31st, prompting OpenAI, the company behind the chatbot, to take it offline in Italy. The probe was launched over the application’s alleged breach of privacy regulations, and it marks the first instance of a government order blocking the chatbot.
ChatGPT has gained widespread popularity since its launch in November last year, owing to its impressive capacity to generate plausible responses to questions as well as its ability to produce scripts, academic essays, and code corrections upon request. It operates using a cutting-edge artificial intelligence system that has been trained on an extensive pool of information sourced from the internet.
The data protection authority accused OpenAI of illegally collecting users’ personal data and failing to implement an age verification system to prevent minors from accessing inappropriate content. Although OpenAI claimed that the chatbot was designed for users aged 13 and over, the authority stressed the need for a filter that can verify the user’s age to protect minors from exposure to content beyond their developmental level.
The investigation was prompted by a data breach on March 20th that compromised the personal data of some users, including their full names, chat histories and payment information.
The ban follows an open letter signed by Elon Musk and hundreds of global experts, who raised concerns about the significant risks AI systems pose to society and humanity and called on OpenAI and other AI companies to pause the development of such systems for at least six months.
The Italian watchdog has given OpenAI 20 days to report the measures taken to address its concerns or face a fine of up to €20m (about $21m) or 4 percent of the company’s annual global turnover.
The Implications Of The Ban
While ChatGPT has been lauded for the numerous opportunities and benefits it provides, the anxiety triggered by the chatbot’s data breach has increasingly spread to other countries, including the United States. The Center for A.I. and Digital Policy, an advocacy group promoting the ethical use of technology, has urged the U.S. Federal Trade Commission to prevent OpenAI from releasing new commercial versions of ChatGPT.
The data breach has raised serious privacy concerns and red flags over the use of the chatbot. The potential violation of third-party intellectual property rights during the machine learning process is another pressing concern that has yet to be resolved. In a statement, OpenAI said it had addressed the issues, implemented the necessary checks, and investigated the incident thoroughly. “Our top priority is to support and inform our users,” the company stated.
Moreover, OpenAI said it is actively working to reduce the amount of personal data used in training AI systems like ChatGPT, because the AI should learn about the world, not about individuals’ private lives. ChatGPT remains inaccessible in countries such as Russia, China, Iran, and North Korea, and Italian users do not know whether the ban on the tool will be lifted anytime soon.
Do you have any concerns regarding the privacy implications of using ChatGPT? Feel free to share your thoughts in the comment section below!