In a report published on May 18, 2023, the Wall Street Journal revealed that Apple has implemented a new policy restricting its employees from using external artificial intelligence tools, including ChatGPT. The decision was prompted by Apple’s concern that using tools like ChatGPT could lead to data leaks and the exposure of confidential information. Notably, the report also disclosed that Apple is actively developing its own language-generating AI technology.
By taking these measures, Apple aims to strengthen its data security and privacy standards while also competing with existing artificial intelligence tools. Apple’s foray into artificial intelligence began with the introduction of Siri, its virtual assistant, in 2011. Continuing those efforts, Apple made a significant move in March by acquiring WaveOne, a California-based startup specializing in AI algorithms for video compression. This acquisition adds to the list of AI-focused companies Apple has strategically brought under its wing over the years.
The Use Of AI In Tech Companies
In addition to prohibiting the use of ChatGPT, Apple has reportedly advised its employees against using GitHub Copilot, a Microsoft-owned tool that leverages OpenAI’s Codex model to automate the writing of software code. It is worth noting that Apple is not the only company to place such restrictions on the use of AI within its organization.
Samsung, for instance, has also banned the use of ChatGPT by its employees following an incident where sensitive information was inadvertently leaked through the platform. Several prominent institutions, including Citigroup, JPMorgan, and Bank of America, have joined Apple in prohibiting the use of ChatGPT to safeguard their confidential information.
The decision to ban AI tools like ChatGPT stems from the significant risks they pose to companies: the exposure of sensitive information could result in substantial damage. Regarding the use of AI, Apple CEO Tim Cook emphasized the need for an intentional approach, stating, “I do think it’s very important to be deliberate and thoughtful in how you approach these things.”
The Future Of AI
In April, the use of AI tools sparked various concerns and issues. Italy took a significant step by temporarily banning ChatGPT across the country, with its data protection authority explicitly outlining the updates and fixes that had to be implemented before the service could be permitted again. Responding to the situation, OpenAI promptly released a series of updates to ChatGPT, incorporating enhanced privacy controls.
In a related development, U.S. Senators Michael Bennet and Peter Welch introduced the Digital Platform Commission Act, which aimed to establish a federal agency responsible for regulating artificial intelligence and digital platforms. This action followed a recommendation from a panel of AI experts, including OpenAI CEO Sam Altman, who called for regulations and federally mandated safety standards in the field.
The rapid advancement of artificial intelligence means companies must implement robust safety guidelines to mitigate data security breaches. Major industry players like Apple, Microsoft, and Samsung are actively developing their own AI solutions while working to address these challenges. As time progresses, it becomes increasingly apparent that privacy concerns must be taken seriously by companies worldwide.
Do you think Apple’s ban against the use of ChatGPT will be lifted? Let us know in the comments section below.