In a recent interview with Futurism, Blake Lemoine, a former Google employee, shared his thoughts on the future of artificial intelligence (AI) and hinted that Google isn’t exactly transparent with how far they’ve developed their AI models.
The software engineer and AI ethicist became known for speaking out about Google's rapid AI advancements and for claiming that LaMDA, Google's powerful large language model (LLM), was a sentient being.
Lemoine was fired swiftly after going public with his AI concerns.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.
Is Blake Lemoine trying to tell us something, or is Google simply following standard corporate privacy and security practices regarding what it discloses about products in development?
Google’s Hidden Progress In AI Raises Questions Of Its Use
In the interview, Lemoine emphasized the importance of transparency in AI development, stating that “transparency is critical in order to build trust.” He explained that AI algorithms are often complex and difficult to understand, which can lead to concerns about bias and unfairness.
By being transparent about the data used to train algorithms and how they work, companies can address these concerns and build trust with users.
Lemoine went on to discuss some of the transparency improvements made in AI development over the last couple of years, but was clear that a significant lag remains between when companies build AI systems and when the public learns about them.
He went on to say “[by] the time the public learns about an AI product, the companies who built it have vetted their PR story, have consulted with their lawyers, and have potentially lobbied regulators to get preferential legislation passed. That’s one of the things I always dislike — tech companies will try to get legislation passed that will govern technology that regulators do not yet know exists.
They’re making bargains around what clauses to include in regulations, and the regulators legitimately have no idea how those things will work out in practice because they don’t yet know what technology exists. The company hasn’t revealed it.”
Is Google’s Use Of AI Ethical?
If Google is withholding both the progress of its AI and the technology itself from the public, transparency and accountability are very hard to achieve. It raises questions about what companies like Google are doing with advanced AI and whether it is being used ethically.
What makes this topic even more intriguing is that Blake Lemoine isn't the only person to have gotten into hot water with Google over its AI ethics practices. In 2021, Margaret Mitchell, former co-lead of Google's Ethical AI team, was fired after she warned Google that people could come to believe the technology is sentient.
Is Google using AI to gain a competitive advantage over other companies? Is it being used for surveillance, or for some other purpose that may be unethical or illegal?
Lemoine did not provide any specific details about the advanced AI that Google is holding back, so it is difficult to say what exactly is being hidden from the public. However, he did discuss the potential impact of AI on the job market.
He acknowledged that AI has the potential to automate jobs and displace workers, but also pointed out that it can create new job opportunities in fields related to AI development and implementation.
This brings up another ethical consideration: how can companies like Google ensure that AI development does not lead to widespread job loss and economic inequality? As AI becomes more advanced and widespread, it is likely that it will have a significant impact on the job market.
Does holding back information on the development of AI from the public create an unfair economic advantage for those in the know?