Image-based sexual abuse: that is what AI porn and deepfakes are all about.
It happened to Taylor Swift, and it can happen to your daughter.
And while I stand behind Taylor Swift, should we expect AI tech companies like OpenAI and social media platforms like X (formerly Twitter) to take the same action for ordinary people like us, who could one day find our children's faces in deepfake porn?
Sadly, the answer to that question is a big No.
Taylor Swift AI
It became a trending topic on X (formerly Twitter), and one Taylor Swift AI photo stayed on the platform for hours, racking up 45 million views before the beleaguered social media company took action.
If not for the Swifties, it could have been worse. And yet those deepfake porn photos are still out there, and they can resurface anytime.
Some say, what is the big deal?
It isn’t her.
First, Taylor Swift didn’t consent to the use of her photos or likeness.
It is non-consensual.
We live in the digital age where innovation knows no bounds, and what happened to Taylor Swift underscores a sinister undercurrent, as it challenges our understanding of consent and privacy.
Sexual deepfakes represent a chilling intersection of technology and sexual violence.
Victoria Rousay’s groundbreaking thesis peels back the layers on this phenomenon, revealing not just the technological prowess behind these creations but the deeply gendered nature of this form of abuse. Through a meticulous phenomenological study, Rousay exposes the scars left on victim-survivors, scars that mar not just the digital persona but the very soul of those targeted.
At the heart of this issue lies the exploitation of digital personas, transforming innocent photos and videos into tools of sexual violence.
It not only violates privacy but also assaults the identity and dignity of individuals. Its victims are mostly women who, through generative AI, find themselves the unwilling stars of AI porn content that not only spreads across the dark corners of the Internet but also finds its way onto social media platforms.
Rousay’s research paints a harrowing picture of the impact: physical, emotional, and social, extending beyond the initial shock of discovery.
A new analysis of nonconsensual deepfake porn videos, conducted by an independent researcher and shared with WIRED, shows how pervasive the videos have become. At least 244,625 videos have been uploaded to the top 35 websites set up either exclusively or partially to host deepfake porn videos in the past seven years, according to the researcher, who requested anonymity to avoid being targeted online.
Over the first nine months of this year, 113,000 videos were uploaded to the websites — a 54 percent increase on the 73,000 videos uploaded in all of 2022. By the end of this year, the analysis forecasts, more videos will have been produced in 2023 than the total number of every other year combined. — Wired
Taylor Swift AI Situation
What happened to Taylor Swift shows how advances in artificial intelligence have produced generative AI tools capable of creating or recreating sexual deepfakes, a clear manifestation of image-based sexual abuse.
The canvas of this discussion extends beyond the victims to encompass a broader critique of heteronormativity and the pervasive culture of sexual violence, laying the groundwork for a critical examination of the ethical and legal challenges that lie ahead.
Rise of Sexual Deepfakes
Unlike deepfakes as we first knew them, where real porn videos were edited to splice in images or footage of celebrities and real people, AI deepfakes use generative AI tools to create new images with the intent of producing sexualized content of celebrities and real people.
These tools need only a few words as a prompt. One of them, Microsoft Designer, has reportedly been the tool of choice of the people, mostly men, who created the Taylor Swift AI deepfakes.
Microsoft has since closed the loopholes in Microsoft Designer that have been misused by people with bad intent.
“We are investigating these reports and are taking appropriate action to address them,” a Microsoft spokesperson told 404Media in an email on Friday. “Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service. We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.” — 404Media
At its core, this is a technology that leverages sophisticated AI algorithms capable of hijacking personal images to fabricate non-consensual pornographic material.
What’s alarming is not just the technological capability to create these deepfakes but the accessibility of these tools, placing powerful means to inflict harm in the hands of virtually anyone with internet access.
These generative tools are available to everyone. Unlike before, when one needed to dig deeper and faced a steep learning curve, today a simple search on the web will turn up an AI tool that generates AI porn, and with a few words as a prompt, anyone can create deepfakes.
This democratization of digital deception ushers in a new era of privacy invasion, where individuals’ consent is flagrantly violated.
The implications of this are profound and far-reaching.
As Rousay’s thesis underscores, the gendered nature of this abuse is no accident. It reflects a broader societal malaise, where women’s bodies are viewed as commodities, and their digital personas are fair game for exploitation. The ease with which someone can create and disseminate sexual deepfakes is a stark reminder of the fragility of digital identity in an age where consent can be bypassed with a few clicks.
This abuse is underpinned by heteronormative assumptions that prioritize male desire and entitlement, further marginalizing women and LGBTQ+ communities by using their images as fodder for non-consensual fantasies.
Let’s have a conversation
Imagine, for a moment, discovering that your image has been manipulated to star in explicit and non-consensual scenarios that can easily be found on the Internet, or that you have become the victim of sextortion.
Many of these cases targeted children, young adults, and mostly women. Some victims have even died by suicide, all because of sextortion. While some of these cases aren't related to AI-generated images and videos, the pattern of abuse is the same.
The abusers find their victim and bully them into submission; otherwise, with a simple click, an image or video is released on the Internet for everyone to see.
The psychological toll is insidious, leaving victims haunted by the constant specter of their digital doppelgängers, performing acts they never consented to. The emotional trauma seeps into their daily lives, affecting relationships, self-esteem, and mental health.
Moreover, the social consequences are equally devastating.
The stigma and shame associated with being the target of sexual deepfakes can lead to isolation, further exacerbating the pain. As Rousay’s research reveals, the scars of this abuse aren’t limited to the digital realm; they leave indelible marks on the lives of those targeted.
Combating sexual deepfakes is not the sole responsibility of victims and communities. Policymakers must take swift and decisive action to enact laws that specifically address this form of abuse. Technology companies, too, bear a significant burden, as they must develop robust mechanisms for detecting and removing sexual deepfake content from their platforms.
Collective action is paramount.
Public-private partnerships can drive innovation in authentication and verification technologies, safeguarding the integrity of digital content. By working together, we can create a safer digital space where individuals are protected from the insidious harm of sexual deepfakes.
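One common building block of the detection systems platforms deploy is perceptual hash matching: the platform keeps hashes of known abusive images and compares the hash of each new upload, flagging near-duplicates even after re-encoding or resizing. The sketch below is illustrative only; it assumes 64-bit perceptual hashes have already been computed by a pHash-style library, and the function names and threshold are my own, not any specific platform's implementation.

```python
# Minimal sketch of the matching step in a hash-based takedown pipeline.
# Assumes 64-bit perceptual hashes (e.g., from a pHash-style algorithm)
# have already been computed for each image; values here are illustrative.

def hamming_distance(h1: int, h2: int) -> int:
    """Count the differing bits between two 64-bit hashes."""
    return bin(h1 ^ h2).count("1")

def is_match(upload_hash: int, known_hashes: list[int], threshold: int = 10) -> bool:
    """Flag an upload whose hash is within `threshold` bits of a known abusive image."""
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)

# A re-encoded copy typically changes only a few bits of the hash,
# while an unrelated image differs in roughly half of them.
known = [0xF0F0F0F0F0F0F0F0]
slightly_altered = 0xF0F0F0F0F0F0F0F1  # 1 bit different -> flagged
unrelated = 0x0F0F0F0F0F0F0F0F         # 64 bits different -> not flagged

print(is_match(slightly_altered, known))  # True
print(is_match(unrelated, known))         # False
```

The distance threshold is the key design trade-off: too strict and trivially altered re-uploads slip through, too loose and innocent images are flagged.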
Final words
We all know this has been happening, and yet it was only when it happened to Taylor Swift that the White House was alarmed.
The only silver lining is that everyone is still talking about it. The companies behind these generative AI tools should take this as a cue and a warning that their tools need to be regulated.
Even if they say that their AI tools aren't sentient and that we are in the very early stages of artificial intelligence, just imagine the harm these tools could pose to our society if we don't act today.
Thank you for reading.
—
This post was previously published on MEDIUM.COM.
—
Photo credit: iStock.com