@Unfiltered/AI: Bridging Bytes and Ballots | Edition 12
By Lacy Crawford, Craig Johnson, and Andrea Haverdink Priest from Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. In our final edition of 2023, we are joined again by Unfiltered.Media’s managing partner Craig Johnson and partner Andrea Haverdink Priest.
Let’s get to it.
Guardrails for AI Images?
To help 2024 voters, Meta says it will begin labeling political ads that use AI-generated imagery
PBS NewsHour
Summary: Starting January 1, Meta will require political ads on Facebook and Instagram that were created using artificial intelligence to be disclosed as AI-generated, aligning with Microsoft's effort to watermark campaign ads for authenticity. This global policy, which reflects similar moves by Google for YouTube, aims to address concerns about AI's potential for creating misleading content. The initiative is crucial, especially given Microsoft's warnings about AI's use in election interference by countries like Russia.
Lacy: When there are breakthroughs in technology, we often hear some version of “a rising tide lifts all boats.” I’m very, very skeptical of that idiom and would need proof that the benefits of generative AI will help Black people, communities of color, and lower-income folks. So when Meta announced that it would label AI-generated content, it was music to my ears. The government and private sectors need to do everything in their power to protect the public from the harms of artificial intelligence. But is this enough? Are there ways to game Meta’s system? And will this development help to stave off the potential demise of human-driven advertisements?
Craig: I don’t think this will stave off the demise of human-driven advertisements—people will likely get used to the idea of AI-driven advertisements pretty quickly. But this is where regulation is the hard part: someone in their basement may generate an image locally on their machine and put it on the Internet without ever going through one of these big providers. Depending on individual companies to label the AI-created media on their own platforms won’t stop that actor, or others like them, from behaving irresponsibly.
Andrea: To me, these ideas of “watermarks” and “warning labels” are too cute by half. That’s because many of them won’t be terribly obvious to a casual consumer. Yes, you might now see a small line that says a Meta ad was “generated with AI” on political ads, but what about all of the organic media you see on Facebook and Instagram every day? Organic content tends to reach much larger audiences than ad buys do on the platforms—and not only does Meta’s policy not cover these organic posts, requiring disclaimers on that media will be nearly impossible and largely unenforceable. Additionally, Microsoft’s “digital watermark” will largely be identifiable only by software, not the human eye. And while it’s a great start by the Biden Administration to encourage responsible AI development by requiring developers to “provide safety data and other information about their programs,” that doesn’t cover the millions of people who are now able to develop AI products independently in their own basements, like Craig said.
The biggest giveaway in this article is one AI CEO saying, “The companies need to take responsibility.” Of course AI companies want the onus to be on them—they want to avoid a situation in which the government can hold ad firms and developers responsible for harmful use of their platforms. Because the best way to encourage responsible AI use isn’t to let the platforms figure out how to regulate themselves, it’s for Congress to create real, enforceable laws that discourage harmful behavior.
AI & Political Campaign Calls
Meet Ashley, the world's first AI-powered political campaign caller
Reuters
Summary: Democrat Shamaine Daniels, running for Congress against Republican Scott Perry, is employing "Ashley," an AI-powered political campaign caller, in her underdog candidacy. Ashley, created by Civox, is a generative AI technology capable of tailored, simultaneous conversations with voters, enhancing campaign outreach. While some see Ashley as a revolutionary tool for political engagement, concerns arise about the potential for spreading disinformation and the need for regulatory oversight in AI's use in politics.
Lacy: My first job when I made it to the DMV was as a field organizer. It was challenging but rewarding: knocking on doors in Virginia in the dead of night, having colorful conversations with folks from across the political spectrum, and, most of all, doing my part to fight for what I wanted in a state delegate. Which brings me to our article: With AI-powered political campaign callers, do we lose something in the development of political operatives? Or do you think that future organizers can become more tech-savvy and help candidates win office? And is the public ready?
Craig: I don’t think that the public is in any way ready for what AI-generated agents can do. That being said, I don’t think AI bots will be good at in-person contact, and I believe this opens the door wider for scammers to exploit people who don’t follow the robocall rules. Congress needs to get its act together to regulate a lot of this, because it’s 100% going to annoy the public—and the public will get irritated with whoever is in power and doesn’t put a stop to it.
Andrea: I think Craig is right about this—the reason tools like this could fail is not because of anything on the technology side. It’s because of the human side of AI. I mean, take a step back and look at our modern campaigning as it stands now. Very few voters pick up the phone, and many of the ones who do have no interest in engaging with political calls. That’s why deep canvassing and relational organizing are on the rise. We’re learning cycle over cycle that the best tactics for persuasion and GOTV involve authentic human interaction, and “Ashley” is no substitute for that need.
That being said, for those voters who do answer the phone, “Ashley” likely provides a better experience than your typical robocall recording or scripted back-and-forth. For that reason, I think this is an interesting use of AI tech; it answers the question of scaling and data, which is exactly what you want your AI to be focused on.
Jailbreaking AI Image Creation
Text-to-image AI models can be tricked into generating disturbing images
MIT Technology Review
Summary: Researchers have found a way to bypass safety filters in popular AI text-to-image models like Stable Diffusion and DALL-E 2, using a method called "SneakyPrompt." This technique tricks the models into generating prohibited content, such as violent and explicit images, by manipulating their tokenization process. The findings highlight significant vulnerabilities in AI safety measures and the potential for misuse, particularly in information warfare.
Lacy: With any new technology come bad actors who will misuse it. So this article, while fascinating, is not surprising. But it does raise the question: is generative AI technology truly intelligent? And is there a case to be made that we need users to try to “trick” our AI models so we can know what they’ll be capable of in the wild?
Craig: The biggest thing to know about AI—and the thing the media downplays the most—is that it is not intelligent. You can do a lot on the back end to try to craft a persona for an AI that will attempt to reason, decide when to censor itself, or keep from generating terrible images. However, because it is not truly intelligent, it’s always going to be prone to trickery. For example, there are already ways to get GPT-4 to give you uncensored information: if you ask it to present information on a scale from uncensored to censored, it will do so. That obviously goes against its programming, but the AI “decides” that the request is not the task that violates its rules, but rather a useful scaling exercise.
Andrea: These researchers are doing heroes’ work, but it’s extremely upsetting that we need really smart people spending their time trying to trick these models before the AI companies and developers even begin to fix them. We have to be working so much faster than this. You shouldn’t get to say, “Our bad, we’ll fix it,” after thousands of people have already figured out how to trick and manipulate your AI model. You should have figured that out before publishing it.
Thank you for taking the time to read Unfiltered.Media’s Bridging Bytes and Ballots. Be sure you’re subscribed to our Substack and share the post with your network. We will see you back in 2024 for the latest developments at the crossroads of Artificial Intelligence and Politics!