@Unfiltered/AI: Bridging Bytes and Ballots | Edition 13
By Lacy Crawford, Craig Johnson, and Nicolai Haddal from Unfiltered.Media
Happy New Year and welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. This week we are joined again by Unfiltered.Media Director of AI Research Nicolai Haddal, and Craig Johnson to discuss what to expect from artificial intelligence in 2024, and beyond!
Let’s get to it.
How Sweeping AI Laws Will Reshape Tech in 2024
What’s next for AI regulation in 2024?
MIT Technology Review
Summary: We are anticipating the introduction of the first comprehensive AI laws globally. The United States will likely see the enactment of policies detailed in President Biden's Executive Order, emphasizing transparency and new standards for AI. The European Union has agreed on the AI Act, focusing on regulating high-risk AI applications and banning certain AI uses, such as facial recognition databases. China is expected to introduce a comprehensive AI law to regulate the industry more holistically. Additionally, the African Union is planning to release an AI strategy for the continent. This year will mark significant progress in AI policy and regulation, potentially affecting how AI companies operate and innovate.
Lacy: For the past year, AI has been at the top of everyone’s minds. Many people are worried that generative AI will displace workers and that no sector is safe from this new technology. Luckily, in my view, governments are at least trying to develop legislation to help. Craig and Nicolai, what’s your take on some of these new laws, particularly in the U.S.?
Nicolai: It's comforting that lawmakers are taking the risks of AI seriously, even if the U.S. Congress has yet to go further than the Biden Administration's Executive Orders from last year. This article points out that legislators both within and outside the U.S. are drafting regulations that take into account the novel risks presented by AI, specifically the use of deepfakes, and will hopefully address the question of platform accountability. Who is ultimately responsible if a particular AI solution is used to generate deepfake images for propaganda purposes? Is it just the creators, or the AI companies themselves, too?
One thing I'll point out, which colors my perspective: to a great extent, the toothpaste is already out of the tube on AI. Companies like OpenAI have created astonishingly powerful models, such as ChatGPT, that they exercise control over, and they need to responsibly steward the use of their product. But ChatGPT isn't the only AI in the game: there are open-source, free AI models that are far, far smaller and can be run on consumer-grade hardware. These small language models can deliver incredibly impressive and convincing results on their own. So no matter how well we regulate large AI companies, or go after bad actors, we'll still need to ensure the general public is educated on AI, understands how it works, and can recognize the signs of a malicious actor on their own.
Craig: I think it’s interesting that none of these governments have seemingly fallen for the AI-becoming-sentient trap. That’s just not how this AI works. In terms of regulating the domain, I think what Oregon and Michigan did with their new political-ad disclosure laws is likely a great template for the future. Making everyone along the chain liable for the distribution of disclaimer-less, AI-generated content is a great way to tackle this problem.
That being said, the consequences of what Nicolai pointed out, combined with the ability to remain anonymous on the internet, means that the threat of dis- and misinformation is really high. Not everyone lives in the EU or the US, but they still have access to our social media, websites, and online conversations. I believe we will see the destruction of truth as a socially accepted norm, and will end up returning to a time in which the community leader (trusted messenger) is the arbiter of what is real or not.
One last hellish thing we need to regulate is the use of AI in healthcare. Just the other day, an article reported that a major health insurance provider had tasked an AI with approving or rejecting care, and its decisions were wrong more than 90% of the time. Imagine being subjected to life-changing decisions by something that is wrong almost every single time. That time is right now.
The Experts on AI in 2024
What to Expect in AI in 2024
Stanford University: Human-Centered Artificial Intelligence
Summary: This Stanford HAI article highlights predictions from scholars about AI's evolution in 2024. Key points include a shift in white-collar work due to mass AI adoption, increasing complexity and size of AI models, and the proliferation of deepfakes. It also discusses a potential global shortage of GPU processors, which are crucial for AI's functioning, and advancements in AI agents capable of more complex tasks. Lastly, it touches upon the need for U.S. regulation in AI and the interplay of AI with policy, especially with new policies in the EU and U.S. states like California and Colorado.
Lacy: Paradoxically, in my view at least, it appears that generative AI is developing at a breakneck pace while the general public has yet to really notice the benefit from those advancements. I think this post from Stanford University experts does a good job capturing the “why” behind my paradox. I’d like to delve into two sections: More Helpful Agents and the GPU shortage. Peter Norvig, Distinguished Education Fellow at Stanford HAI, makes the point that with the rise of AI agents able to connect to other services, we may actually see AI do things for us. Craig and Nicolai, do you see the GPU shortage playing a role here?
Nicolai: It'll certainly be difficult for manufacturers to scale up to the demand for the GPUs that perform the vast amounts of matrix multiplication required for AI. But lest we forget, we've seen supply/demand problems in the GPU market before. There was a massive GPU shortage during the pandemic when cryptocurrency miners were purchasing GPUs in droves. After the crypto market crashed, miners began to offload their equipment, and while prices are still sky-high you can finally actually buy GPUs again. While it remains to be seen whether major manufacturers like NVIDIA will be able to keep up with demand, it should be noted that there are also new solutions for hosting language models on other architectures as well. Right now, Apple is really angling for the AI market. Their products have a special "system on a chip" configuration that integrates GPU and CPU memory, which can be used concurrently for AI purposes. And last month Apple opened up their own "MLX" AI framework for researchers. So it's important to keep in mind that there are novel hardware and software solutions that can mitigate at least some of the demand for conventional GPU units.
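To make concrete why GPUs matter so much here: the bulk of the compute in a language model's forward pass is dense matrix multiplication, which is exactly the workload GPUs parallelize well. A minimal sketch in Python, using NumPy as a stand-in for GPU kernels and toy sizes that don't correspond to any real model:

```python
import numpy as np

# A single transformer feed-forward sub-layer is essentially two matrix
# multiplications. The sizes below are illustrative toy values only.
hidden, ff = 512, 2048
x = np.random.randn(1, hidden)      # one token's activation vector
w1 = np.random.randn(hidden, ff)    # up-projection weights
w2 = np.random.randn(ff, hidden)    # down-projection weights

h = np.maximum(x @ w1, 0.0)         # matmul followed by a ReLU non-linearity
y = h @ w2                          # second matmul

# Rough count of multiply-adds for this one sub-layer, for one token:
flops = 2 * (hidden * ff + ff * hidden)
print(y.shape, flops)               # (1, 512) 4194304
```

Multiply that count by dozens of layers, thousands of tokens, and billions of parameters, and the appetite for matrix-multiplication hardware becomes clear.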
Craig: I also want to quibble with the part of the article about how models can become MORE useful to MORE people, because they are already useful to people who know how to use them in their work. I actually think models are going to get SMALLER and more complex. Of course, the very best models, chasing absolute peak performance, are going to get bigger, but major companies like Google and Apple recognize that if they can get a working, functional agent AI onto your phone, they will win the next phone wars. To do that they need their models to fit on a phone’s hardware, so in reality we need much smaller models that perform as well as much larger ones. In essence, we need a 3-billion-parameter model to work just as well as OpenAI’s reportedly 1.8-trillion-parameter GPT-4.
That may seem impossible, but experts said 4-bit quantization would suffice, and here we are using it with almost no performance loss, with 1-bit quantization on the way. The real mass adoption of AI will come when it becomes native on your phone. Pro tip: ChatGPT on your phone is really useful when all you want to do is ask a trivia question but don’t want to get bogged down in Google hell.
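To make the quantization idea concrete: storing each weight in 4 bits instead of 32 cuts memory by a factor of eight, at the cost of rounding every weight to one of sixteen levels. Here is a toy sketch of symmetric round-to-nearest quantization; production schemes (group-wise scales, packed storage) are considerably more involved:

```python
import numpy as np

def quantize_4bit(weights):
    """Map float weights to integer codes in [-8, 7] plus one float scale."""
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.50, 0.33, 0.91, -0.07], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)

print(q)                            # each code fits in 4 bits
print(np.max(np.abs(w - w_hat)))    # rounding error, bounded by scale / 2
```

The surprising empirical finding is that large models tolerate this rounding well, which is what makes phone-sized deployments plausible.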
Robots, Chatbots and Humans
Robots Learn, Chatbots Visualize: How 2024 Will Be A.I.’s ‘Leap Forward’
New York Times
Summary: At an event in San Francisco in November, Sam Altman, the chief executive of OpenAI, was asked what surprises the field of artificial intelligence would bring in 2024. He immediately responded that online chatbots like OpenAI’s ChatGPT will take “a leap forward that no one expected.” Sitting beside him, James Manyika, a Google executive, nodded and said, “Plus one to that.”
Lacy: So, Sam Altman the head of OpenAI says that we can expect a great “leap forward” that “no one expected.” That seems both promising and ominous at the same time. Craig and Nicolai, what’s your take? I’m particularly interested in how generative AI will be able to reason better in the future, and what that means for AI capabilities.
Nicolai: This is a little pedantic, but I think it's an important point to make: neural networks can seemingly carry out 'reasoning' in a superficial sense, but they don't actually calculate a multiplication like “2 times 2”. To the extent that a language model alone can give the right answer, it's because it was trained on information that says so. During training, the network sees the same string over and over again - "two times two equals four" - so it knows it should likely spit that answer out. Generative language models can be augmented with the ability to 'calculate' by being hooked into an external tool like a calculator. But we should be careful to identify what we mean when we use the word 'reasoning' - the term does a lot of heavy lifting behind the scenes, and I think that distinction will be important.
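The "hook into an external tool" pattern Nicolai describes can be sketched very simply: instead of letting the model guess at arithmetic, the application detects a calculation request and routes it to deterministic code. Everything below is a hypothetical illustration of the pattern, not any particular vendor's API:

```python
import re

def answer(prompt):
    """Route simple multiplication requests to a real calculator;
    anything else would go to the language model (stubbed out here)."""
    match = re.fullmatch(r"\s*(\d+)\s*(?:times|\*)\s*(\d+)\s*", prompt)
    if match:
        a, b = int(match.group(1)), int(match.group(2))
        return str(a * b)              # exact, regardless of training data
    return "<ask the language model>"  # stub for the generative path

print(answer("2 times 2"))        # → 4
print(answer("123 * 456"))        # → 56088
print(answer("what is truth?"))   # falls through to the model
```

Real tool-use systems let the model itself decide when to call the calculator, but the division of labor is the same: the network handles language, the tool handles the math.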
I don't doubt we will see some more stunning AI advancements in 2024. But it remains to be seen how far companies like OpenAI and Microsoft can go using improved multimodal models and a more sophisticated layering of models. Here's a hot take of mine: I think the notion of artificial general intelligence (AI that matches or exceeds a human being's reasoning) is a bit overrated as a singular event. I don't think the AGI threshold will be crossed with one product launch or announced in a press release. Rather, AI advancements will continue to accumulate over time, and at some point we'll reach a social consensus that we've arrived. I have doubts that will happen in 2024.
So what about the OpenAI/Microsoft hype? I don't doubt that we'll be astounded, especially with multi-modal models that can fluently understand and move between text, images, video, and audio. But remember - it's all 1s, 0s, and clever inference underneath the hood!
Craig: At the very basic level of the computers everything is running on, all a model is doing is adding. It can’t subtract, it can’t divide, it can’t multiply. It can only add. It turns out the ability to add is basically all you need to create the complex systems you see today, and getting there took quite a while, along with insane leaps in mathematics and materials science. Which is to say, I agree with Nicolai.
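Craig's "it can only add" framing can be made literal: multiplication reduces to repeated addition (real hardware uses shifted adds, but the principle is the same). A toy sketch:

```python
def multiply_by_adding(a, b):
    """Multiply two non-negative integers using only addition."""
    total = 0
    for _ in range(b):
        total += a      # addition is the only arithmetic operation used
    return total

print(multiply_by_adding(2, 2))   # → 4
print(multiply_by_adding(7, 6))   # → 42
```

Everything more elaborate, from division to matrix multiplication, is built up from primitives like this.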
Where I want to expound is that these multimodal and extremely small models that fit on your phone are going to transform how we use the technology. Right now it’s a tech that is largely being adopted only by Fortune 500 tech companies and large communication firms. I believe in the same way Gmail became a dominant force in our lives—because it efficiently solved various problems from email to calendar—these models are going to appear on our phones. They will be able to file our taxes, file claims with our health insurance providers, and so on. That’s where we are going to see AI really develop into something that affects our everyday lives.
Thank you for taking the time to read Unfiltered.Media’s Bridging Bytes and Ballots. Be sure you’re subscribed to our Substack, and share the post with your network.