@Unfiltered/AI: Bridging Bytes and Ballots | Edition 8
By Lacy Crawford, Craig Johnson, and Nicolai Haddal, Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics.
This week we welcome back Nicolai Haddal, the Director of Artificial Intelligence Research at Unfiltered.Media. With a background in public policy advocacy and organizing, Nicolai brings a distinctive viewpoint to the role, drawing on his extensive experience with digital activism tools to develop automation and machine learning solutions for progressive policy groups. We'll be discussing restrictions on, and expansions of, AI use in politics.
Let’s get to it.
AI Tailoring Comms for Audiences?
NYC Mayor Eric Adams uses AI to make robocalls in languages he doesn’t speak
The Verge
Lacy: Artificial intelligence, used ethically, has the potential to help society become more efficient, effective, and productive, especially when it lowers barriers like language translation. But this application of AI is fraught with challenges when clear disclaimers aren't used, as in New York, where Mayor Eric Adams' robocalls fail to mention that they're AI-generated and that he doesn't actually speak the languages in the calls sent to New Yorkers. Nicolai, could you explain how this is possible? I want to know the technical side. For example, I've been testing AI-powered reading apps that use celebrity voices to read news articles aloud, and I'd like to know what's behind them. And Craig, what are your thoughts on the ethical concerns? How can this be used for good, especially in political campaigns?
Nicolai: So, let’s take a step back and think about how the human voice used to be imitated. Before the AI revolution, dedicated electronic devices called voice synthesizers were used to imitate the human voice. The most famous example was the voice of the late physicist Stephen Hawking, who used a speech machine to communicate with the outside world. These machines imitated the resonant frequencies (technically, ‘formants’) of the human vocal tract. But the resulting electronic voice was so different from a real one that it became a distinct pop-culture hallmark for Hawking. It wouldn’t have fooled anyone over the phone.
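For the curious, formant synthesis can be caricatured in a few lines of Python: summing sine waves at a vowel's formant frequencies produces a crude, robotic vowel-like tone. This is a toy sketch, not how any real synthesizer worked; the frequencies and amplitudes below are textbook ballpark values for the vowel /a/, chosen only for illustration, and real formant synthesizers filter a source waveform through resonant filters rather than summing pure tones.

```python
import math

# Toy formant synthesis: approximate the vowel /a/ by summing sine waves
# at its first three formant frequencies. The (Hz, relative amplitude)
# pairs are rough textbook values, used here purely for illustration.
SAMPLE_RATE = 16000
FORMANTS = [(700.0, 1.0), (1220.0, 0.5), (2600.0, 0.25)]

def vowel_sample(n):
    """One audio sample of the synthetic vowel at sample index n."""
    t = n / SAMPLE_RATE
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in FORMANTS)

# Half a second of audio as floats; amplitudes stay within +/-1.75,
# the sum of the three component amplitudes.
signal = [vowel_sample(n) for n in range(SAMPLE_RATE // 2)]
print(len(signal))  # 8000 samples for half a second at 16 kHz
```

Played back through a speaker, a signal like this sounds unmistakably electronic, which is exactly why the old synthesizers never passed for a human on the phone.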
Now, software can almost perfectly clone the human voice. The generative voice AI Mayor Adams is using is meant to imitate a ‘real’ human voice down to the most minute details. The tech behind AI voice generation is very similar to the tech behind tools like ChatGPT: it’s neural networks and deep learning all the way down! You might know that ChatGPT analyzes vast amounts of text to ‘learn’ what kind of response users expect when they ask it questions. Behind the scenes, it does so by breaking text into numbers. Similarly, in the case of voice AI, a voice recording is sampled into a stream of numbers and used to fine-tune a voice model. The results are eerily similar to the real human voice.
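The digitization step Nicolai describes can be shown concretely. The minimal, standard-library-only Python sketch below uses a generated sine tone as a stand-in for a real voice recording: it packs the samples into a WAV container and reads them back out as plain integers, which is the numeric stream a voice model would actually be trained on.

```python
import io
import math
import struct
import wave

# One second of a 220 Hz sine tone (a stand-in for a voice recording)
# at a 16 kHz sample rate, 16-bit depth.
SAMPLE_RATE = 16000
FREQ = 220.0
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

# Pack the samples into raw little-endian 16-bit bytes and write a
# WAV file into an in-memory buffer.
raw = struct.pack("<%dh" % len(samples), *samples)
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)           # mono
    w.setsampwidth(2)           # 16-bit
    w.setframerate(SAMPLE_RATE)
    w.writeframes(raw)

# Read it back: the "voice" is now just a sequence of integers,
# the numeric data a neural network consumes during training.
buf.seek(0)
with wave.open(buf, "rb") as w:
    frames = w.readframes(w.getnframes())
decoded = struct.unpack("<%dh" % (len(frames) // 2), frames)
print(len(decoded))  # 16000 samples for one second of audio
```

A real voice-cloning pipeline adds many stages on top of this (spectrogram features, a neural acoustic model, a vocoder), but every one of them starts from a number stream like `decoded`.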
Craig: There are two important parts to this story as far as ethics goes. The first is the clear need for people to be transparent whenever they use AI in any type of project, whether it’s city-sponsored robocalls for public engagement or campaign outreach to target voters in-language. People deserve to know if you used AI in any form of outreach: visually through ads, audibly through voice generation, or through translation, as shown here. It doesn’t make the outreach any less effective or useful; it simply clears up the implication that your candidate speaks a language they do not.

Secondly, language democratization is coming the world over, and translators should be nervous. What concerns me most about AI’s potential to disrupt elections is its ability to let out-language groups communicate effectively in-language. The best way to convince someone to do anything is to look like, act like, and talk like them. AI allows you to do this. And for ethical purposes, AI use (with disclaimers) should be normalized, as it will make the world more connected and help erase cultural and language barriers.
However, there is an extremely fine line between providing voting, public services, or emergency information in-language through on-the-fly AI translation and manipulating the public through government means for the good of a single politician. And often the line is far blurrier than we think.
Devolution of ChatGPT’s Political Restrictions
ChatGPT breaks its own rules on political messages
Washington Post
Lacy: Ah, the elephant in the room: AI chatbots and political campaigns. Our previous article touched on a somewhat local political use (yes, I know New York is the country’s largest city by population), but that use case, with its potentially deleterious effects, only scratches the surface of how generative AI can be a hindrance to the project of democracy. However, the OpenAI ban on political campaigns using ChatGPT to create content tailored to specific demographics gets to the heart of the confluence of profit, ethics, and governance.
As the Washington Post revealed, OpenAI’s ban isn’t actually enforced, meaning that just about anyone can use the chatbot to create materials targeting specific voting demographics. If generative AI is like electricity, how do we push it to help everyone and not the bad actors?
Nicolai: Let’s start by acknowledging that tools like ChatGPT are so powerful precisely because they are so open-ended. Generative AI is useful insofar as it can respond to novel use cases that weren’t anticipated in advance. That’s the power of the human mind: it can respond to novel information dynamically without shutting down, and that’s precisely the experience AI chatbots are trying to replicate. Now, that’s not an argument against implementing guardrails against undesired output. As an industry leader, OpenAI has a responsibility not to let ChatGPT generate content that goes against its own policies, and while there are guardrails in place, clearly they’re not enough at the moment. I’m curious how they’ll improve compliance in the future. Will self-regulation of large AI companies continue to be enough?
Craig: This is the same problem we face in the social media arena. Theoretically, TikTok has a ban on political ads—a ban which I’ve seen violated over and over again. Facebook and X have bans against hate speech, yet are the drivers of most hate speech around the world. There is no incentive for these platforms to alleviate these concerns or problems because that would cost them money. They have shareholders who expect profits over anything else, and removing every single piece of hate speech—or in this case coming up with every single prompt someone could use to do political work—just isn’t in the interest of these groups. That’s why industry self-regulation will never work. We have an important and urgent need for government regulation of these industries. They have no reason to stop the societal damage they are causing.
Fake-it-till-you-make-it Rules on Deepfakes
Drawing the line on AI-based deepfakes proves tricky for Congress
Roll Call
Lacy: We’ve touched on deepfakes, albeit cursorily, in this newsletter before. To expand on the subject a bit: images can, unfortunately, elicit action faster than the 118th Congress can produce a speaker. However, there appears to be some bipartisan support for regulating AI. What are your thoughts on the history of deceptive political messaging? What about the idea of giving people who are the subject of fake ads the ability to file lawsuits, while still allowing deepfakes to be distributed as long as there is a disclaimer? Speaking of disclaimers, we aren’t lawyers, so keep that in mind.
Nicolai: Deepfakes can be frightening because they convincingly portray scenes of people and places that never happened. But before we get caught up in that, consider a famous example of deceptive images from the past: the Soviet Union’s doctoring of photos to remove political figures who had been imprisoned or executed. Already in the early 20th century, photographs could be manipulated in darkrooms relatively easily to take people out; now, AI can put people in. My point is that we’re really dealing with a new and more powerful twist on an old problem. It will be interesting to see how proposed legislation on the use of deepfakes to portray federal lawmakers will be enforced. I think it really comes down to intent: are people creating media to enlighten, or to deceive? This has always been a tricky problem, and now that generative AI is making it even harder, the credibility and integrity of people using AI will need to be considered when sorting the good actors from the bad.
Craig: Nicolai rightly pointed out that the problem of deepfakes, or even the altering of photos, has been around since the time of the photograph itself. For the last thirty years, we have had the ability to shape and alter images through Photoshop or GIMP. The problem isn’t doctored images; it is the ease of manipulation and how many you can quickly produce. You can use generative AI to create a whole backstory for someone: fake events, with fake friends in fake places. It’s certainly a useful technology for spies trying to create convincing backstories, as well as for those who want to paint their political opponents in a negative light. Taken together with the ever-increasing levels of non-consensual NSFW images being generated and spread online, images, and soon video, present the largest danger of AI in society.
What We Are Reading
Wall Street Journal: AI Gold Rush Prompts Some College Students to Drop Out: “Someone is going to get their jobs automated away. I’d rather be doing the automating,” said Govind Gnanakumar, 19, the CEO and a co-founder of Automorphic and recent college dropout.
Washington Post: FEC taking up AI in political ads again, but rules still in limbo