In recent years, there has been growing concern about the use of deepfake technology to create highly realistic but fabricated videos of individuals for malicious purposes. Now, it seems that AI-generated images are being used to catfish unsuspecting victims as well.
Catfishing is the act of creating a fake online persona to deceive someone into a romantic or emotional relationship, usually to exploit them for money.
In the past, catfishers commonly used stolen or manipulated photos to create their fake identities. With the advent of advanced AI imaging technology, however, it has become far easier to generate highly convincing images of people who don’t actually exist.
The Growing Use Of AI-Generated Images In Digital Deception

According to a recent report by The Washington Post, there has been a surge in the use of AI-generated images in online dating scams and other forms of digital deception. In many cases, these images are so realistic that they can easily fool even the most discerning viewer.
The report highlights one case in particular: a woman calling herself ‘Claudia’, whose photo appeared on Reddit alongside the caption “Feeling pretty today :)”.
Claudia caused quite a stir among experts and Reddit users, who began calling out the image as fake.
The problem? She was selling nude photos for cash.
It turns out the experts weren’t wrong. Rolling Stone confirmed that the image was fake, created by two computer programmers, and that it had made its creators $100 before the money dried up once the AI accusations started flooding the photo’s thread.
There is also growing concern that AI-generated images are being used to create fake pornographic content, essentially catfishing people. This is a particularly disturbing trend, as it allows anyone with the right tools to create realistic pornographic content without the consent of the individuals depicted.
By creating fake online personas using these images, scammers can trick unsuspecting victims into sharing personal information, sending money, or even engaging in romantic or sexual relationships.
The Implications And The Need For Action

As AI imaging tools become more user-friendly, affordable and accessible to the average person, it is likely that we will see an even greater upsurge in fake images and videos.
The implications of this trend are far-reaching. Not only does it threaten the privacy and security of individuals, but it also raises significant ethical questions about the use of AI in creating fake content.
As we continue to grapple with the rapid advancement of AI imaging technology and its complications, it is clear that we need to take steps to address this issue, and evidently, the U.S. government agrees. In 2021, lawmakers introduced the Deepfake Task Force Act in the hope of curbing the concerning trend.
Some companies are rushing to develop tools that can filter out deepfakes from real content, but there is concern that the technology is moving so fast that tools developed now will quickly become outdated. Microsoft’s Video Authenticator is just one example of such an effort.
Do you think companies and governments should be doing more to tackle the issue? Let us know in the comments.