@Unfiltered/AI: Bridging Bytes and Ballots | Edition 14
By Lacy Crawford, Craig Johnson, and Nicolai Haddal from Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week we’re changing it up a little bit and zooming out to some of the biggest questions we get from our progressive allies about AI. We’ll bust some myths, set a few records straight, and talk about areas in which our fears are justified—and where we’re perhaps not fearful enough. As usual, joining us will be Lacy Crawford, Nicolai Haddal, and Craig Johnson.
Rhett Martino, Andrea Haverdink Priest and Cara Homick helped with this newsletter.
Let’s get to it.
Exploring AI’s Destiny
Lacy: One of the biggest questions we get as AI practitioners is existential: Are we in control of AI’s destiny, or is AI in control? How do we know it’s not already controlling our consciousness and societal norms?
Craig: It’s always important to understand that right now, AI is merely a well-trained reflection of our society through the lens of the internet. So the more realistic question is: How much more control will propagandists grab now that they can amplify their messaging far more efficiently? I recently launched a new company called Change Agent, which customizes AI models to change their tone and language. Why? Because the best way to persuade someone or effect change is to look, act, and talk like them. AI makes today’s political process faster, more believable, and more culturally competent, and, perhaps most importantly, it works in whichever language you need to communicate in.
Nicolai: I think it’s instructive to zoom out for a second and consider how governments, corporations, and other powerful actors have already been farming out their decision-making process to procedures instead of people, even before the recent AI wave. Algorithms have long been used to allocate who gets what in a litany of different contexts. With this point in mind, AI will indeed “control” our societal norms and consciousness in the form of what we see online, the prices and items we see at the store, and even what medicines we receive. But that’s due to human choice—more precisely, the choice of a few powerful humans to use procedural decision-making that too often puts their own priorities ahead of the common good.
What AI’s Limits Should Be
Lacy: As assistive and autonomous AI tools become more prevalent, what should be off limits? For example, should we ever be OK with assistive or autonomous AI weapons of war? Should we entrust autonomous AI to perform surgeries? Where is the “do not cross” line? How do we decide when it is “safe” to trust AI with tasks that require some level of “judgment”?
Craig: I hate to say it, but we already have AI that assists weapons of war—we’ve had it for decades. Modern enemy-recognition technology is a form of image-recognition AI, for example. My biggest fear is the use of AI in medicine. I recognize that it has already provided us with great utility—the Covid vaccines were developed with the help of AI, for example—but the personal experiences nurses are sharing and the awful headlines in the news every day tell horror stories that need fixing. Major insurance companies are using AI models with a 90% failure rate in approving care. Hospitals are using AI to set staffing levels, often leaving rural hospitals understaffed. The use of AI to make life-saving decisions in every aspect of medicine needs to be far more tightly regulated.
Nicolai: In my opinion, autonomous weaponry must be an absolute red line. If the sole purpose of a particular technology is to destroy or kill, its use must be guided by human beings. Sadly, I don’t think an international agreement on this issue is forthcoming, given the intensely polarized world we’re living in, with active hostilities between major powers. As for medical surgeries and other everyday tasks (I’ll toss in driving, for example): where the evidence shows that AI can reach parity with a human being or even outperform one, we should embrace its use. I have no problem with a future where certain medical procedures are made cheaper and more readily available through AI and automation. The same goes for driving: if (and it’s a big if) autonomous driving can be shown to be safer than a human driver overall, I support it unequivocally. After all, a self-driving car could never drive drunk.
AI and the Job Market
Lacy: Three in four Americans (75%) think AI will decrease the total number of jobs over the next 10 years. About one in five (19%) believe AI will not affect the number of jobs, and 6% say it will result in an increase in jobs. Are these fears justified? Do you think it is essential that people learn how to use AI in order to keep their job?
Craig: I think there are two parts to this question. First, are the fears justified? Absolutely: big business in America will lay off thousands of workers at the hint of an economic downturn; they barely need an excuse now. However, and I may sound like a broken record at this point, I believe this technology as it stands today is simply a great efficiency amplifier. It makes humans more efficient; it doesn’t replace them, especially in politics. Second, do you need to learn the tools to keep your job? If you are a low- to mid-level communications professional in a business outside of politics, good luck, and you had better be at the forefront of using the technology. The reason? In politics there are three finite resources that go into every campaign no matter what (fingers crossed): time, people, and money. AI helps campaigns save money by making people more efficient, saving enormous amounts of time, and enabling more of the actions that create change.
Nicolai: Let’s face it: employers come up with all kinds of excuses for making unpopular decisions, especially laying off their own staff—taxes, crime, the direction of the wind, and so on. AI will definitely join that list as a frequent scapegoat. I could see corporations using AI pilot programs as a sneaky union-busting tactic, for instance—perhaps even pilots that are doomed to fail eventually, so that another non-union workforce can be set up from scratch. Given this reality, the best way to ensure AI benefits workers instead of just bosses is to organize. I think the recent Hollywood strike was a good example of that. I’ll pivot a little from the question to urge organizers to get smart about the risks of AI in their workplace, and also to get smart about using AI tools to augment their own organizing efforts! Use chatbots to draft the first versions of your engagement conversations, so you can spend more time getting to know the people you’re trying to organize.
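For organizers who want to try that, here’s a minimal sketch of what a chatbot-drafted first pass could look like. It assumes the openai Python package (v1+) and an API key in your environment; the model name, prompt, and function name are purely illustrative, and any comparable chat API would work the same way.

```python
# Minimal sketch: asking a chatbot for a first-draft organizing conversation.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_engagement_script(issue: str, audience: str) -> str:
    """Get a rough first draft; a human organizer edits and personalizes from here."""
    prompt = (
        f"Draft a short, friendly door-knocking conversation guide about {issue} "
        f"for organizers talking with {audience}. Keep it under 200 words and end "
        "with an open-ended question that invites the person to share their own story."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whichever chat model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_engagement_script("hospital understaffing", "nurses in rural counties"))
```

Nothing here contacts anyone on its own; it just gets a rough draft on the page faster, so you can spend the saved time in actual conversations.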
Garbage In, Garbage Out
Lacy: There is growing focus on the possibility of bias and other ills in AI systems and the decisions or outcomes they lead to. For example, if a company spent decades promoting white males with Ivy League degrees into positions of authority, then an algorithm trained to identify future leadership talent might focus on that same type of individual, and ignore people who don’t belong to that group. What are the realities of AI bias and how does Unfiltered.Media recommend acting on this?
Craig: Modern AI systems talk like a college professor with the intelligence of a 6th grader. That is a product of the materials used to build their training datasets, and it is true of Meta’s Llama, OpenAI’s ChatGPT, and Google’s Bard alike. The good news about a model biased toward sounding like a center-left college professor is that the bias is malleable: there is technology to bend it. That is what we do here at Unfiltered.Media; we create custom datasets that we believe reflect a much wider set of voices, and then train the models to have the desired “bias” and a tone that actually reflects the diversity of our country.
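To make the “custom dataset, then retrain” idea a bit more concrete, here is a minimal sketch of one common way to do it: collect example exchanges written in the voice you want, then fine-tune a base model on them. This uses OpenAI’s fine-tuning API purely as an illustration; the examples, filename, and model name are made up, and this is not a description of Unfiltered.Media’s actual pipeline.

```python
# Sketch of tone/voice fine-tuning: pair prompts with replies written in the
# voice you want the tuned model to adopt, then train on those pairs.
# Assumes the `openai` Python package (v1+); all example data is invented.
import json
from openai import OpenAI

client = OpenAI()

examples = [
    {
        "messages": [
            {"role": "user", "content": "Explain why the school bond matters."},
            {"role": "assistant", "content": "Our kids are learning in trailers while the roof leaks. The bond fixes that, plain and simple."},
        ]
    },
    # ...hundreds more examples, drawn from the voices you actually want reflected
]

# Fine-tuning data is uploaded as JSONL: one {"messages": [...]} object per line.
with open("tone_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(file=open("tone_dataset.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative; use any base model that supports fine-tuning
)
print("fine-tuning job started:", job.id)
```

The real work is in the dataset: whose voices get collected, in which languages, and who signs off on them.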
Nicolai: We say this all the time, and it’s worth repeating: “Garbage in, garbage out!” AI is, by definition, a tool that makes inferences based on past information. So sure, a hiring algorithm trained on data that favored white males with Ivy League degrees would keep favoring Ivy League degrees. But that would be a deliberate choice on the part of the algorithm-maker. It’s also possible to build algorithms that don’t favor these kinds of candidates. As usual, the buck stops at the top.
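Here is a toy illustration of that dynamic: a simple model trained on synthetic promotion history that skews toward Ivy League graduates ends up reproducing exactly that skew. Every number and feature name below is invented for the example.

```python
# Toy "garbage in, garbage out" demo: a model trained on biased promotion
# history learns the bias. All data here is synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
history = pd.DataFrame({
    "ivy_league": rng.integers(0, 2, n),
    "years_experience": rng.integers(1, 20, n),
    "performance_score": rng.normal(3.0, 0.7, n),
})

# Past promotions skewed heavily toward Ivy League graduates, largely
# regardless of performance: that skew is the "garbage" going in.
promoted = (history["ivy_league"] == 1) & (rng.random(n) < 0.7)
promoted |= (history["ivy_league"] == 0) & (rng.random(n) < 0.1)

model = LogisticRegression(max_iter=1000).fit(history, promoted)
print(dict(zip(history.columns, model.coef_[0])))
# The ivy_league coefficient dwarfs the others: the model simply
# reproduces the old pattern instead of finding "talent."
```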
Thank you for taking the time to read Unfiltered.Media’s Bridging Bytes and Ballots. Be sure you’re subscribed to our Substack, and share the post with your network.