@Unfiltered/AI: Bridging Bytes and Ballots | Edition 6
By Lacy Crawford, Alan Rosenblatt, PhD, and Craig Johnson, Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week we’re welcoming back Alan, a partner at Unfiltered.Media and a digital and social media strategist, organizer, professor, and thought leader with over 30 years of experience in the field. You’ll notice that this week's issue is a bit late. Unfortunately, life happens to us all, and I had to deal with a family emergency.
Let’s get to it.
Progressive Money, Where Are You?
Opinion | Where Has All the Left-Wing Money Gone?
The New York Times
Lacy: This recent New York Times article highlighted the 2023 funding decline for progressive groups, possibly due to liberal apathy. This decline challenges their election preparedness and reflects a trend of initial crisis engagement followed by disengagement. Notably, the focus on averting disasters without a positive vision may worsen funding issues. This raises a question: Should AI ethics and policy advocacy prioritize long-term visions over immediate challenges?
Alan: As the pool of donor money shrinks, an organization’s ability to connect its message to the way donors talk about the world becomes more important. At the same time, the need to turn your existing audience into a platform—one that grows your capacity to get your message out, further expands your audience, and raises more long-tail funds—becomes more pressing than ever. With less money coming in from donors, we have to use more scalable tactics that reinforce each other, and use them much more efficiently.
To the extent that AI technology helps us scale many of these activities, there will be growing pressure to divert even more money from the activities we need to scale into the development of AI tools. That pressure will make demand for those tools even more intense, pushing us to build better AI to tackle each successive scaling problem without necessarily thinking through all of the ethical implications. Demand might very well overwhelm the thoughtful development of AI technology.
Craig: Lacy, our trusted guide in this newsletter journey, gave a wonderful synopsis that I thought really got at the reason today’s AIs are poor replicators of, and replacements for, human innovation and messaging. If we train AI on poorly moderated public sources for how to talk about issues, the output will inevitably reproduce the same problems we face now.
Lazy email practitioners use short-term fear tactics to maximize short-term financial gains for their partners, often at the cost of their clients’ long-term fundraising health. AIs trained on this content will continue down the same unsustainable path.
Generating Image Generators
OpenAI unveils better image generator, DALL-E 3, as AI arms race deepens
The Washington Post
Lacy: OpenAI's latest technological marvel, DALL-E 3, promises enhanced image generation from text prompts. The new version, slated for release this October, boasts an improved ability to turn text into lifelike images. However, with the line between AI-created imagery and reality becoming increasingly indistinct, concerns about misuse, particularly deepfakes, have taken center stage. In collaboration with the White House, OpenAI is taking steps to label AI-generated content. Yet many wonder: are these measures enough to curb the potential risks?
Alan: A doomsday machine only works if people know it exists, and watermarks on AI-generated images only work if the technology people use actually applies them. Just because Google creates the best watermark ever does not mean OpenAI will incorporate it. And it certainly does not mean anyone building their own private AI in their basement will use it. Isn’t that the point of having your own AI? Meanwhile, AI image generation just keeps getting better, real fast.
Craig: The real dis- and misinformation game is with these image generators. People have been pushing mis- and disinformation since the dawn of politics, but there has always been a gate on production: you needed a willing and able creative person to make the visual content. With that gate effectively removed, think of all the evil ways someone can make visually deceptive pictures. The news is already rife with photoshopped fake images of public figures. Take away the need for Photoshop skills, add the ability to generate any situation you can imagine, and the ways people lie about where they were and what they were doing will explode.
Google: Disclose AI Use
Google to require disclosure of AI use in political ads
Politico
Lacy: Come November, Google is stepping up its game in political ad transparency. It will require ads to disclose any use of AI or synthetic content, a move that resonates globally and extends to platforms like YouTube. While Congress mulls comprehensive AI rules and the Federal Election Commission examines guidelines, Google's stance is clear: distinct labels for AI-heavy content. Basic edits, though, are in the clear. Notably, while Facebook eschews AI disclosure, it does curtail certain manipulated media.
Alan, in light of Google's imminent policies on AI in political ads, can you weigh in on its value and challenges?
Alan: Google requiring political ads to disclose whether they use AI is a good thing, in principle. The rule can be policed during the ad-approval process, and violations can be reported and pulled after launch. Even so, that leaves the policy playing catch-up a bit too much.
As for drawing up rules to prohibit AI-generated political ads, that can be a slippery slope. If we use a classifier AI to analyze content data and derive ad message ideas from it, is the result AI-generated? I realize this line can be easily drawn now, but I can imagine we will find ways to blur it again.
Just as we see in the larger war against disinformation, the imperfect solution is always three-pronged: 1) legal and regulatory, 2) technological, and 3) media and information literacy. No one of these will solve the problem alone, and even all together it will be like playing Whac-A-Mole.