@Unfiltered/AI: Bridging Bytes and Ballots | Edition 16
By Lacy Crawford, Craig Johnson, and Nicolai Haddal of Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week, we delve into cutting-edge AI tools like Sora and what they mean for the erosion of truth. As always, we’re committed to bridging the gap between AI advancements and political integrity.
Let’s get to it.
OpenAI launched Sora, a groundbreaking AI model capable of generating video. The model can produce content that rivals the output of startups and tech giants alike, marking a significant step forward in generative AI capabilities.
Lacy: In my opinion, OpenAI’s Sora is pretty impressive—especially when compared to other text-to-video software like RunwayML. Sora’s videos seem richer and more realistic. Craig and Nicolai, could you explain why Sora appears so much more realistic than other AI video generators?
Nicolai: On the surface, OpenAI might seem to have the edge in video generation because they possess more raw computing power, more training content, and more in-house expertise than the competition. Of course, I don’t have direct information on who’s planning what at Google, Meta, or any other competitor in this space, so who knows—maybe Sora will soon be yesterday’s news. And the general public doesn’t have access to Sora yet either, so the model might disappoint for many common use cases.
But here’s one thing I’m confident about: OpenAI has always had the edge in audacity. One reason ChatGPT was such a huge success is that they bet big on a ‘more means more’ approach to generative AI. That approach is clearly paying off for them again.
Craig: Did you know that predicting the next pixel in an image isn’t really any different from predicting the next word in a sentence? Basically, if you give OpenAI a gigantic dataset that’s already been labeled, creating Sora becomes fairly trivial. That’s why a bunch of first-party content sites, like Reddit, have locked down their APIs and are now signing exclusive content deals with AI providers. The secret sauce isn’t the computing power, or even the AI itself. The value is in how well-structured your dataset is and how well that dataset gets communicated into the weights of the model.
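To make that concrete, here’s a deliberately toy sketch in Python. The bigram counts below stand in for a trained model’s weights, and nothing here resembles Sora’s actual architecture; the point is just that ‘next word’ and ‘next pixel’ are the same sampling loop run over different kinds of tokens.

```python
import random

def train_bigram(tokens):
    """Count which token tends to follow which -- a stand-in for learned weights."""
    model = {}
    for prev, nxt in zip(tokens, tokens[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length):
    """Autoregression: sample one token at a time, each conditioned on the last."""
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return out

words = "the cat sat on the mat the cat ran".split()
pixels = [0, 0, 255, 255, 0, 0, 255, 255, 0]  # a 1-D "image" as brightness values

print(generate(train_bigram(words), "the", 6))  # same loop, word tokens
print(generate(train_bigram(pixels), 0, 6))     # same loop, pixel tokens
```

Swap the bigram table for a transformer with billions of well-trained weights and the loop stays the same; that’s where the labeled data Craig mentions comes in.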
This election year, top AI companies including Anthropic, OpenAI, Google, and Meta are taking steps to prevent the misuse of AI in politics—such as banning the use of their tools in campaigning and enhancing content labeling. Despite these efforts, AI’s potential to distort democratic processes remains a concern.
Lacy: My question is whether we, as the general public, believe that AI companies have the credibility, capability, and commitment required to regulate misinformation, prevent algorithmic bias, guard against deceptive digital content, and protect democracy from the potential threats posed by their technologies. Is the private sector truly equipped to address these significant challenges?
Given the intricate balance between profit motives and the imperative of truth-telling—a balance with which, for instance, mainstream media has notably struggled, especially leading up to the Trump presidency and afterwards—there exists a legitimate concern about the private sector's capacity to uphold democratic values. Yet, there are glimmers of hope, such as the pivotal role of Black media in maintaining journalistic integrity and shining a light on racial injustices.
So what do we make of this effort? And from a technical point of view, can this be done?
Nicolai: Let’s start with the technical: Yes, safeguards can be and are put in place by every responsible generative AI provider to screen out hateful, manipulative, or otherwise objectionable content. Note the use of the term responsible; it makes the question of self-regulation by private companies more complicated than it may seem.
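For a sense of what that screening looks like, here’s a minimal sketch, with a keyword scorer standing in for the trained safety classifiers real providers use; the topic list and threshold are hypothetical.

```python
# Toy screening layer between model and user. Real providers use trained
# safety classifiers; this keyword scorer is a hypothetical stand-in.
BLOCKED_TOPICS = ("targeted harassment", "election disinformation")  # illustrative

def toy_safety_score(text: str) -> float:
    """Stand-in for a safety classifier: returns a risk score in [0, 1]."""
    return 1.0 if any(topic in text.lower() for topic in BLOCKED_TOPICS) else 0.0

def screen_output(generated_text: str, threshold: float = 0.5) -> str:
    """Withhold any generation the (toy) classifier flags as risky."""
    if toy_safety_score(generated_text) >= threshold:
        return "[withheld by safety filter]"
    return generated_text

print(screen_output("A sunny afternoon in the park."))        # passes through
print(screen_output("A how-to on election disinformation."))  # withheld
```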
Indulge me a moment: In my mind, there’s a (somewhat dramatic) parallel between the development of the neural networks we use today and the development of atomic weapons last century. As anyone who just saw ‘Oppenheimer’ could tell you, during World War II there was a race between the Allied and Axis powers to build an atomic bomb. Both knew that atomic weapons were theoretically possible—so it was just a matter of time until someone succeeded. Why? Because the theoretical framework underpinning nuclear fission was surprisingly simple. The math and programming behind the neural networks required for generative AI are simple and elegant, too (see the sketch below), and now that Pandora’s Box has been opened, there’s a whole world of open generative models that can be bent to pretty much any end.
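To back up the ‘simple and elegant’ claim, here’s a complete two-layer neural network trained from scratch in about twenty lines of NumPy. The toy task and layer sizes are ours for illustration; the point is how little machinery the core math needs: matrix multiplies, a nonlinearity, and gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))           # 16 toy examples, 4 features each
y = (X.sum(axis=1) > 0).astype(float)  # toy target: is the feature sum positive?

W1 = rng.normal(size=(4, 8)) * 0.5     # hidden-layer weights
W2 = rng.normal(size=(8, 1)) * 0.5     # output weights
lr = 0.5

for _ in range(500):
    h = np.tanh(X @ W1)                        # hidden layer: multiply, then squash
    p = 1 / (1 + np.exp(-(h @ W2)))            # sigmoid output, shape (16, 1)
    grad_out = (p - y[:, None]) / len(y)       # log-loss gradient w.r.t. the logit
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * h.T @ grad_out                  # gradient descent step
    W1 -= lr * X.T @ grad_h

print("training accuracy:", ((p.ravel() > 0.5) == (y > 0.5)).mean())
```

That’s essentially the whole trick: scale the matrices up and feed in enough data, and you get the generative models we’ve been discussing.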
I know, so why bother regulating anyone? I maintain that this isn’t a fatalistic argument against regulation of the private sector: The major AI players can and should be required to do their part to ensure the most accessible AI tools are safeguarded, so people without malicious intent don’t unwittingly cause harm. But we’ve got to come to terms with the reality that the means for malicious actors to deepfake misinformation are already out there, and they have been and will be exploited.
As India prepares for its 2024 general elections, political parties are increasingly leveraging artificial intelligence, including deepfakes, to sway voters. This trend, highlighted by a deceptive video from the Congress party, underscores the growing challenge of distinguishing real from AI-generated content. Amid this, the government is urging tech companies to curb such manipulation, though regulations remain vague.
Lacy: In the world’s largest democracy, political skullduggery is supercharged in the age of AI. This is by no means an indictment of India; America has a long history not only of political gamesmanship but of the total disenfranchisement of Black people, women, and others. Can we learn any lessons from what other democracies around the world are experiencing with AI? Are there things we can do, like outlawing political ads during a certain window, to prevent the spread of misinformation?
Craig: I think banning political ads is a start, but the real danger—the crux of the problem—is grasstops-oriented content, in which individual actors, domestic or otherwise, use standard propaganda techniques in more efficient and effective ways. Look at what happened to Taylor Swift: the creation and spread of NSFW content on a platform designed to spread disinformation was a first taste of what is only going to become a nastier affair. AI manipulation of the spoken word has been weaponized to a level humanity is likely poorly equipped to handle. That’s why we think it’s so important to fight back with our own AI.
Nicolai: I feel like a real doomsayer today! But again, the hard reality we have to live with is that, given the kind of internet we have now and the ready availability of generative AI tools, I don’t think any law can prevent the spread of malicious political advertising. Let’s be honest: Without incredibly invasive mass surveillance—yes, more invasive than what we’ve got now—we can’t just pass a law and hope for the best. Here’s the lesson I take away from the malicious use of AI by political actors in India, here, and anywhere else in the world: The electorate needs to reward honest candidates who don’t use malicious AI at the ballot box. And how can they assess that? To paraphrase the duck test: If something looks like malicious AI content and surfaces at a time that’s very bad for the opposition, it’s probably malicious AI content. Voters have never been off the hook for evaluating misinformation, and unfortunately this is a new level of sophistication we’re going to have to develop if there’s any hope for representative democracy in the future. No one said it’d be easy!
Thank you for taking the time to read Unfiltered.Media’s Bridging Bytes and Ballots. Be sure you’re subscribed to our Substack, and share the post with your network.