Rejecting the Compromise: Deepfake Porn Shouldn’t Be My Reality

Recently, a Google Alert informed me that deepfake pornography of me was circulating online. While this may shock some, I wasn’t surprised. Over the past year, I have been the target of extensive online harassment, and deepfake porn has become a favored weapon for misogynists aiming to silence women in public life. These videos, created using artificial intelligence, manipulate existing explicit clips to make it seem as if real people are engaging in sexual acts that never occurred.

As I informed my lawyers about this invasion of my privacy, my prevailing emotion was deep disappointment: in the technology itself, and in the lawmakers and regulators who offer no justice to those who appear in porn clips against their will. Many people worry about the potential threats of artificial intelligence, such as deepfake videos influencing elections or sparking conflicts, or generative technologies like ChatGPT threatening job security. But policymakers have largely ignored an urgent AI-related problem that is already affecting countless lives, including mine.

Last year, I stepped down as head of the Department of Homeland Security’s Disinformation Governance Board, a policy-coordination body that the Biden administration allowed to dissolve amid criticism, mainly from the right. In the months that followed, several deepfake videos emerged that purported to show me engaging in sexual acts. The videos bear little resemblance to me; the generative AI models behind them seem to have been trained on my official U.S. government portrait, taken while I was pregnant. The creators likely used free “face swap” tools to graft my photo onto an existing porn clip, and in places the original performer’s mouth is still visible beneath the swapped-in face.

These videos were never intended to be convincing; every website hosting them clearly labels them as fakes. While they may provide cheap thrills to some viewers, their main purpose is to humiliate, shame, and objectify women, particularly women who dare to speak out. After years of researching and writing about this kind of abuse, I have grown somewhat desensitized to it. But for many women, especially those in more conservative or patriarchal environments, appearing in a deepfake porn video can be profoundly stigmatizing, jeopardizing their careers or even their lives.

As if to underscore the video makers’ desire to punish outspoken women, one of the videos Google brought to my attention depicted me alongside Hillary Clinton and Greta Thunberg. Deepfakes featuring these global celebrities are far more numerous and explicit than those involving me. It’s also easy to find deepfake porn videos of prominent women such as Taylor Swift, Emma Watson, Megyn Kelly, Kamala Harris, Nancy Pelosi, Alexandria Ocasio-Cortez, Nikki Haley, Elise Stefanik, and countless others. By simply being women in the public eye, we have all become targets, stripped of our achievements, intelligence, and activism, and reduced to sex objects for the voyeuristic pleasure of anonymous viewers.

Men, in contrast, face this type of abuse far less frequently. In my research for this article, I searched a major deepfake porn website for videos of Donald Trump and found only one featuring the former president, while three full pages were devoted to his wife, Melania, and daughter, Ivanka. According to a 2019 study by Sensity, a company that monitors synthetic media, 96 percent of existing deepfakes at that time were nonconsensual pornography featuring women. The reasons for this stark disparity are both technical and motivational. Those creating these videos are presumably heterosexual men who prioritize their own gratification over recognizing women’s personhood. And because AI systems are trained on an internet saturated with images of women’s bodies, the nonconsensual porn these systems generate is more believable than, say, computer-generated clips of playful animals.

During my investigation into the origins of the videos featuring me – as a disinformation researcher, I couldn’t help but dig into the details – I stumbled upon deepfake porn forums whose users seemed disturbingly indifferent to the invasion of privacy they perpetrate. Some believe that because a woman’s photo is publicly available, they have the right to feed it into an app built to generate pornography and to distribute the result as art or legitimate parody. Others seem to think that labeling their videos as fake shields them from any legal consequences. These purveyors argue that the videos are entertaining and even educational. But by describing videos that “humiliate” or “pound” well-known women in those terms, they reveal a great deal about their own notions of pleasure and what they find informative.

Ironically, some creators who participate in deepfake forums show great concern for their own safety and privacy. In one forum thread I discovered, a man was mocked for using a face-swapping app that failed to safeguard user data. Yet these same individuals insist that the women depicted in their videos forfeited any such rights by choosing public careers. I came across one chilling page listing women who would turn 18 this year; until then, they had been kept on “blacklists” that deepfake-forum hosts maintain to avoid violating child pornography laws.

What the victims of deepfake porn truly need are robust laws to protect them. Several states, including Virginia and California, have already criminalized the distribution of deepfake porn. However, these laws have limited impact for victims residing outside these jurisdictions or seeking justice against perpetrators in different locations. In my case, discovering the identity of the video creators isn’t likely to be worth the time and money. I could attempt to subpoena platforms for information about the users who uploaded the videos, but even if the sites possess the necessary details and cooperate, there is little I can do to bring my abusers to justice if they live in another state or country.

Representative Joseph Morelle of New York is working to close this jurisdictional loophole by reintroducing the Preventing Deepfakes of Intimate Images Act as an amendment to the 2022 reauthorization of the Violence Against Women Act. Morelle’s bill would establish a nationwide ban on the distribution of deepfakes without the explicit consent of those depicted in the images or videos. It would also give victims easier recourse when they are unwittingly featured in nonconsensual porn.

Given the absence of strong federal legislation, the options available to mitigate the harm caused by the deepfakes of me are not particularly encouraging. I can request that Google delist the web addresses of the videos from its search results and instruct my attorneys to ask the platforms hosting them to remove the videos altogether, although the legal basis for such demands is tenuous at best. And even if these websites comply, the videos are extremely likely to resurface elsewhere. Women targeted by deepfake porn find themselves trapped in an exhausting, expensive, and endless game of trying to beat back trolls.

The Preventing Deepfakes of Intimate Images Act will not single-handedly solve the deepfake problem. The internet is forever, and deepfake technology is only becoming more prevalent and convincing. Nevertheless, as AI grows increasingly powerful, it becomes all the more crucial to adapt the law to address this emergent form of misogynistic abuse and to protect women’s privacy and safety. While policymakers worry about whether AI will bring about the downfall of the world, I implore them to prioritize stopping those who are using it right now to discredit and humiliate women.
