Deepfakes: the New Misogyny on the Internet

Photo courtesy of wired.com

A year ago, an app used artificial intelligence neural networks to strip photos of women. A few weeks ago, a deepfake bot on Telegram was found distributing fake nude images of women and even minors. It took only a year for advanced technology to open up a whole world of possibilities for exploiting women.

The first fake-nude software appeared in 2019. DeepNude took a photo of a clothed person and created a new image in which the same person appeared undressed. Shirts were exchanged for breasts, pants for vulvas, and, as you can probably guess, it only worked on pictures of women.

DeepNude let users upload a photo of a dressed woman and, for $50, download the same image with her apparently naked. Under the hood, the software used generative adversarial networks (GANs), the key algorithm behind deepfakes, the fake news of imagery, to swap women’s clothes for naked bodies. DeepNude worked much better when the victim was showing more skin and wearing less clothing in the picture. It was the new way of saying: better dressed than showing cleavage.

The app went viral and generated controversy. After the backlash, its developer decided to shut it down. In an interview, the creator said he had reflected on the ethics of the app and its moral implications, but argued that hours of Photoshop work could achieve the same result. In his view, “if the technology were within reach, someone would eventually create this,” though he also warned that “the world is not yet ready for DeepNude.”

Beyond the DeepNude app, deepfakes have been used as weapons against women since the moment they were created.

The term “deepfake” was coined in 2017 by an anonymous Reddit user who shared manipulated porn videos, such as one featuring Gal Gadot, the star of Wonder Woman. These videos spread through Reddit and online porn platforms such as Pornhub, where, by 2018, there were more than 70 videos created with this technology showing fake scenes with Hollywood actresses such as Emma Watson and Scarlett Johansson. You only had to type “deepfake” into the search bar.

An even murkier technology

A few days ago, a new deepfake bot came to light. It uses Telegram to let sexual predators and pedophiles “enjoy” themselves by generating stripped images of women, including underage girls.

According to MIT Technology Review, the deepfake bot works in a remarkably simple way: users send any image of a woman through Telegram’s desktop or mobile application, and the bot returns a fake naked picture within a few minutes. Unlike DeepNude, which charged $50 per nude, the Telegram bot only asks for $1.50 to remove the watermark or speed up the process. Otherwise, the conversion is completely free.

The MIT Technology Review article draws on a study by Sensity, the first visual threat intelligence company on the Internet, which analyzed the threat posed by the Telegram bot in its latest report.

The research showed that at least 104,852 women had been targeted. The “naked” images of these women had been shared publicly since late July 2020, and their dissemination grew by 198% through October.

In turn, the limited number of bot-generated images shared publicly through Telegram distribution channels showed that minors were also being targeted. In addition, 70% of the bot’s users reportedly worked from images of women taken from social networks or from private material. A large share of the victims (63%) are girls and women the users know in real life, while the rest are celebrities, Hollywood stars, or Instagram models.

“They are usually young girls,” says Giorgio Patrini, CEO and chief scientist at Sensity. “Unfortunately, sometimes it is also quite obvious that some of these people are underage.”

The report also reveals that the bot and its affiliated Telegram channels have at least 101,000 members worldwide, around 70% of them from Russia and former Soviet countries, partly because the bot advertised on the Russian social network VK. More than 380 pages dedicated to creating and sharing these deceptive explicit images were found on that network.

How to detect deepfakes

Deepfakes could be terrible for everyone, not only for women. In 2019 we saw how they could fracture democracies and personal identities; the manipulated videos of Mark Zuckerberg and Nancy Pelosi are proof of that.

If someone becomes the victim of a deepfake, there is very little they can do, since the law is years behind the technology. Likewise, it is nearly impossible to prevent becoming the target of one. As Mary Anne Franks, president of the Cyber Civil Rights Initiative, put it: “There is nothing you can really do to protect yourself, except not exist online.”

Without realizing it, we also live in a world plagued by deepfakes. Social networks such as Instagram and Snapchat compete to build the most interactive face filters, technology that arguably counts as a form of deepfake. Others, like Pornhub, have tried to ban the practice, although they have not yet thoroughly cleaned up their platforms.

But how can we spot one with the naked eye? It is often simpler than it sounds: in videos, the lip sync is usually sloppy and deepfake faces don’t blink naturally; in still images, low quality and strange lighting effects can give the fake away.

It can also be interesting to test our eyes in reverse. The same GAN technology powers This Person Does Not Exist, a web experiment that generates an ultra-realistic portrait of a non-existent person with a single click, putting the algorithm to work on something other than stripping women.

Could you identify the montage on this page? If the answer is yes, we have achieved the goal. You have understood that the Internet can be an incredibly toxic place for women, and maybe it is time to take online misogyny seriously.