@Unfiltered/AI: Bridging Bytes and Ballots | Edition 5
By Lacy Crawford and Nicolai Haddal, Unfiltered.Media
Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics.
This week, Nicolai Haddal, the Director of Artificial Intelligence Research at Unfiltered.Media, joins us. With a background in public policy advocacy and organizing, Nicolai offers a distinctive viewpoint in his role. He’s utilizing his extensive expertise in digital activism tools to develop automation and machine learning solutions for progressive policy groups. We'll be discussing best practices for using chatbots and exploring the global political implications of artificial intelligence.
Let’s get to it.
More Intelligent Approaches to Artificial Intelligence
The do’s and don’ts of using ChatGPT in your daily life
The Washington Post
Lacy: This article recommends some best practices for using ChatGPT. Although ChatGPT’s usage has gone down since its height earlier this year, I see many people, businesses, and organizations using large language models to help with everyday tasks. Nicolai, what's your perspective on these chatbots? Are they reliable?
Nicolai: This piece really drives home the fact that chatbots are not people—they're chatbots! So keep that in mind when assigning them tasks that involve creativity or gathering empirical information.
It's true that chatbots shouldn't be fully trusted to produce factual information. But they're still great for interacting with information—for instance, summarizing the contents of articles. In most cases you should take a "trust but verify" approach: whenever an important fact comes up, find a primary source of some kind that confirms the claim. But that's not much different from working with something like Wikipedia, right? Or even another human being.
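For readers who work with chatbot output programmatically, the "trust but verify" habit can even be automated in a rough way. The sketch below is purely illustrative: `ask_chatbot` is a hypothetical stand-in for whatever LLM call you use (stubbed here so the example runs on its own), and the claim-flagging heuristic simply marks sentences containing numbers or dates as statements worth checking against a primary source.

```python
import re

def ask_chatbot(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # This canned reply stands in for a chatbot-generated summary.
    return ("The report was published in 2021. "
            "It argues that turnout rose sharply. "
            "Roughly 60% of respondents agreed.")

def flag_claims_for_verification(text: str) -> list[str]:
    """Flag sentences containing digits (numbers, dates, percentages)
    as empirical claims that should be checked against a primary source."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if re.search(r"\d", s)]

summary = ask_chatbot("Summarize the attached article.")
for claim in flag_claims_for_verification(summary):
    print("VERIFY:", claim)
```

A digit-based heuristic obviously misses plenty of checkable claims; the point is only that verification can be a deliberate step in the workflow rather than an afterthought.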
AI in Global Politics: Revolution or Risk?
Artificial Intelligence and Digital Diplomacy
E-International Relations
Lacy: The COVID-19 pandemic hastened digital adoption in areas like healthcare and politics. This rapid shift has prompted concerns over cyber threats and misinformation. Additionally, the growing role of AI underscores the importance of balancing its potential with ethical considerations and the need for new international regulations. What are your thoughts on the multifaceted impact of AI in politics, particularly in terms of information gathering, transparency, and its potential for internal political goals?
Nicolai: This is a fairly wide-ranging piece reflecting on the role AI will play in the political process at many levels: in how people gather empirical information in the first place, in the lack of transparency regarding the primary sources used to train models, and in the possible uses of AI for achieving internal political goals. Personally, I'd return to the theme of the last article and emphasize that any AI technology needs to be seen as a tool in its own right, not as a stand-in for human judgment. From this point of view, AI can augment values-based processes that are steered by human beings with good aspirations and intentions.
It’s not you hallucinating, it’s your AI
Are AI models doomed to always hallucinate?
TechCrunch
Lacy: Large language models can produce misleading content, known as hallucination, that stems from the statistical patterns in their training data. While some techniques mitigate this, the balance between creativity and accuracy remains a crucial tension. What are your thoughts on the ethical implications of using AI as a source of information, given the potential for misinformation?
Nicolai: We return to the problem of using AI chatbots to report empirical information. Let's keep in mind that there is already a sea of misinformation online—there are innumerable articles written by real people that make false or misleading claims. None of us possesses a knowledge and belief system that is entirely accurate in an empirical sense, and chasing total empirical accuracy would be a waste of time. What counts is that our system of knowledge and beliefs hangs together and is built on a foundation of good intentions, guided by ethical principles. We should hold LLMs to that same standard and use them the same way we would any other knowledge-gathering tool.