@Unfiltered/AI: Bridging Bytes and Ballots | Edition 15
By Lacy Crawford, Cara Freibaum Homick, and Jayne Fagan of Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week we’re discussing one of the biggest concerns many of us have about AI right now—its ability to exploit our likenesses and our voices for abuse and harm. This newsletter comes in the wake of fury over both a fake Biden robocall in New Hampshire and a surge of fake, explicit AI images of Taylor Swift circulating online, which prompted X (formerly Twitter) to pause some searches of her name on the platform and the White House to speak out about the need to curb AI abuses. This week we are joined by Cara Homick and Jayne Fagan of Unfiltered Media.
Content warning: This newsletter will discuss nonconsensual deepfake images and online sexual harassment.
Let’s get to it.
Lacy: The advent of generative artificial intelligence heralds a new dawn which—if properly leveraged—can give women, Black people, communities of color, and others the tools to fight back against entrenched power. But I harbor no illusions: the word “if” is doing some heavy lifting in that sentence, and any technological advancement will also be exploited for nefarious purposes. Which brings us to our first article this week, concerning the pornographic deepfake images of Taylor Swift that have spread online. Cara and Jayne, what do you think the deepfakes mean for our social discourse?
Jayne: Imagine someone taking your personal photos and using them without your consent. It's a serious invasion of privacy! The effect it can have on the mental health of victims cannot be overstated, and the rise in deepfake content is making it hard for all of us to trust what we see online. The more we hear about manipulated content, the more we doubt the authenticity of everything we come across, and it's eroding the trust we have in digital media. It's not just a matter of feeling unsure—it's affecting how we express ourselves online. Women especially are holding back or limiting ourselves because we’re scared of being targeted. It's a dark cloud looming over the free exchange of ideas, and it's concerning.
Cara: If this is the first time you’re hearing about deepfake pornographic images that are “overwhelmingly harming women and children at an unprecedented rate,” it’s yet another sign of not only the power of Taylor Swift, but the magnified misogyny directed towards her. This is the latest, most high-profile example of this deeply concerning issue, but it’s one that needs to be on everyone's radar. Not only did this example expose the rapid explosion of AI-generated pornography, it also raises the challenge of stopping it from spreading and the overall lack of rules in place to prevent it. There are two major issues at hand, and they are quite Delicate. One is the lack of content moderation on X and social media in general, which contributes to its spread. Polling shows that more than 80% of child sex crimes start on social media, and 64% of teenagers say they’ve seen porn accidentally while scrolling social media. The other major issue is the fact that AI software developers are allowing this kind of content to be produced with their tools in the first place! In all likelihood, the images of Taylor Swift were created with Stable Diffusion, Midjourney, OpenAI’s DALL-E, or a similar diffusion model. This is why there have to be safeguards. I find it hard to believe that it’s impossible for these companies to track down exactly who is responsible for creating this content with their tools (they would first have to admit it was created by their software, of course). Rep. Joe Morelle recently introduced a bill that would criminalize deepfake porn, and in my opinion lawmakers can’t act swift-ly enough to enact it. Otherwise, when it comes to AI, we will all soon be singing along to “This Is Why We Can’t Have Nice Things.”
This article discusses the increase in fake pornographic images and videos of women and teens generated by AI, with a 290% increase in such content since 2018. Victims, like YouTube influencer Gabi Belle, experience significant distress and violation, as these AI tools can create realistic photos or superimpose faces onto bodies in explicit videos, often with little legal recourse due to inadequate federal and state regulations.
Lacy: I think it’s clear that we inhabit a patriarchal society which sees anyone who is not white and male as a threat. In my view, this explains why we’re seeing such an increase in explicit deepfakes with the adoption of AI. But what am I missing?
Cara: There have already been countless examples of nonconsensual AI pornography attacking celebrities, political figures like Rep. Alexandria Ocasio-Cortez, and teenage girls. One study found that 96% of deepfake images are pornography, with 99% of those targeting women. The hope I hold onto is that right now, researchers are racing to solve the problems of detecting AI-generated imagery and blocking nonconsensual AI porn. What concerns me is that the AI will only get smarter, and the issues with it will only get harder for us to solve. Some experts even think it may become impossible to fix. Companies like Google are trying to address the issue internally by blocking the explicit content in their search engines, but the AI-generated content still has a way of slipping through the cracks. It’s hard for people to request removal of nonconsensual AI imagery because companies have claimed the content wasn’t “solely derived from their likeness.” Nine states have already passed legislation, but it’s time for federal legislation. Lives depend on it.
Jayne: The advancing sophistication of AI means that virtually anyone can become a target. Despite the widespread apprehensions regarding the broader consequences of deepfakes, it is crucial to recognize that the technology itself is not the root cause of abuse. Rather, it functions as a tool employed to carry out such abuses. It is imperative that we work to address the problematic gender norms and beliefs that frequently serve as the foundation for these forms of abuse, as well as pass federal legislation to hold those who create this harm accountable.
A robocall, seemingly using a digitally manipulated voice of President Joe Biden, told New Hampshire Democrats not to vote in the presidential primary, falsely stating that voting would aid Republicans and Donald Trump. The New Hampshire attorney general is investigating the incident, which appears to be an illegal attempt to disrupt the primary and suppress voter turnout.
Lacy: Unfortunately this isn’t the first or the last time bad actors will use deceit to confuse voters. What can we do to prevent these incidents of deception?
Cara: This is a pretty frightening attack on democracy, and yet another example of people using AI for evil. Unfortunately, this feels like it was probably a test run, and it sheds light on the increased security we will need to protect our elections. I’d be very curious to follow this story to see if there are any consequences for impersonating a sitting President to suppress voter turnout. If people can hide behind what they create using AI, or distribute it through AI, that becomes a major problem when it comes time to hold someone accountable.
Jayne: Addressing deepfakes will involve collaboration between regulatory bodies, technology companies, and other stakeholders. As of now, organizations like Public Citizen are demanding the Federal Election Commission (FEC) clarify that the law against “fraudulent misrepresentation” applies to deceptive campaign communications. According to The Washington Post, FEC Chairman Cooksey said the agency would resolve the issue by early summer 2024. We certainly hope they’re working to implement solutions at least by then, since this election will surely be rocked by AI bad actors.
Thank you for taking the time to read Unfiltered.Media’s Bridging Bytes and Ballots. Be sure you’re subscribed to our Substack, and share the post with your network.