@Unfiltered/AI: Bridging Bytes and Ballots | Edition 9
Happy Halloween to those who celebrate! Welcome back to the Bridging Bytes and Ballots newsletter—helping you navigate the intersection of AI and politics. Truly spooky stuff.
Lacy: Since our last correspondence, there have been important developments in AI from the White House with the release of an over 100-page executive order that the Associated Press says “seeks to balance the needs of cutting-edge technology companies with national security and consumer rights.” I want to highlight that the Biden executive order includes many provisions that civil rights organizations support. (Full disclosure: my day job is at a civil rights organization!) Check out the Administration’s Blueprint for an AI Bill of Rights for more civil rights context.
So, my AI expert friends, what else is actually in this announcement?
Nicolai: This announcement introduces the nation’s first-ever regulatory action on artificial intelligence. These regulations address the growing commercial, privacy, and safety concerns that have followed the introduction of revolutionary new tools like ChatGPT and Stable Diffusion. AI is a complex topic, and accordingly the list of actions is pretty long. But some major actions include:
Setting safety measures that require the largest players in the AI space to submit safety test results to the U.S. government
Establishing an AI Safety and Security Board to develop AI safety standards
Establishing consumer protections against AI-engineered fraud and deception and the misuse of private information by large AI companies
Setting standards for how the government itself can use AI
We’ve long believed that government action on AI was inevitable. With this announcement, I think we can comfortably declare that AI has fully arrived as a force in nearly everyone’s life and is here to stay.
Lacy: Very helpful overview, Nicolai. I also want to draw our readers’ attention to the news about the National AI Research Resource, which the order pilots and which could become the topic of another newsletter entirely. If properly implemented, it could help address some of the issues of scale and consolidation, such as Big Tech controlling most of the resources and leaving little room for competition.
My next question is: what are the limitations to the White House’s regulations? Do you have any ideas for regulations that would be more effective?
Nicolai: For starters, the number of companies that create models large enough to be heavily regulated is quite small—think OpenAI and Meta. So this regulation doesn’t spell the end of self-regulation in the AI space. While massive tools like ChatGPT are being adopted by the general public, highly motivated businesses are adopting smaller and more bespoke custom language models for their needs. I’m not sure how these regulations will affect that space.
I think these regulations will be sufficient to stop many of the most immediately dangerous threats of AI, including the use of AI to engineer biological materials, but there’s still a long way to go before the government can fully adapt to the dangers of our new reality. Take the big question on everyone’s mind: How will generative AI affect the 2024 election? We already know major tools like ChatGPT haven’t been totally consistent in enforcing their own policies banning the output of electoral materials targeting specific demographics of voters. We’ll have to see if these regulatory actions are enough to ensure big companies are complying with even their own requirements.
Craig: While I think the executive order was a good start, I’m here to throw some cold water on the regulation celebration. There are two criteria for regulation that stand out to me as being somewhat out of touch with the real danger AI presents.
Any computing clusters with a processing capability of 10^20 FLOPS
Any models costing $50 million or more to create
These two criteria leave a gigantic hole for small- to medium-sized AI companies to enter the space. That, combined with hardware gains and the realization that models at current sizes are not being used to their greatest efficiency, means plenty of nefarious actors can fall through the regulatory cracks. I believe that until there are significant penalties and strong enforcement of disclaimer laws, some of the most immediate dangers of AI will not truly be addressed. That’s not to mention that the moment society creates a general AI, regulations will be too late. When failure could mean “game over,” it sorta makes you pause at the effectiveness of safety tests.
In our next issue we’ll also touch on the OMB guidance that was released on Wednesday.