@Unfiltered/AI: Bridging Bytes and Ballots | Edition 11
By Lacy Crawford, Nicolai Haddal, and Craig Johnson from Unfiltered.Media
Welcome back to Bridging Bytes and Ballots, your guide at the crossroads of AI and politics. This edition features insights from Nicolai Haddal, Unfiltered.Media’s Director of AI Research, and Craig Johnson, as we delve into the recent developments involving Sam Altman's departure and return at OpenAI.
Let’s get to it.
Billionaires vs. Altruists
Inside OpenAI, a rift between billionaires and altruistic researchers unravelled over the future of artificial intelligence
Australian Broadcasting Corporation
Lacy: Since OpenAI released ChatGPT a year ago, many experts in the field have expressed concerns about the advancement of generative AI. Some are worried about the existential threats to humanity, while others are concerned about how these systems amplify bias and harm communities of color and low-wage workers. In my view, the main concern has always been whether the people in control of OpenAI would prioritize profit over people, and the recent rift at the company did not alleviate any of those concerns. Craig and Nicolai, can we find a way to balance the need for a company to make a profit with the importance of safety?
Nicolai: When OpenAI was founded, it was a mission-driven nonprofit dedicated to advancing generative AI technology in a responsible manner. But here’s the thing: any artificial intelligence tool as complex as ChatGPT requires vast computing resources and power consumption. This isn’t cheap, so OpenAI established a capped-profit side of the business. Altman became the face of the pro-business side of the company, which prioritized getting product out to market quickly, to the chagrin of some on the nonprofit side. Now Altman is back at OpenAI and his opponents are gone. So business has, of course, triumphed.
So to put your question into sharper focus: is it actually possible to pay for the incredibly expensive upkeep of an AI business while running it ethically? That’s now up to Altman to deliver. This is just my take, but the nonprofit/for-profit split model might conceivably have worked responsibly without regulation. After this showdown, I think governments will be even more motivated to regulate the AI space heavily.
Craig: Yes, we could find that balance, and to a certain degree OpenAI was trying to find it themselves. Anthropic, a noted competitor widely considered the #2 firm outside of Big Tech, also has an unusual governance system, with a specific focus on social consciousness and a nonprofit governing board. However, I think we need perspective on a couple of things. First is understanding how the investors and stakeholders around OpenAI view their main product. They view it as the next big thing, and that may be obvious, but the consequence is not. This is like the microchip. People point to the transformation that cell phones, computers, and tablets have brought to our society; all of that is driven by one singular technology, the field-effect transistor, which powers every single microchip. California is the 5th-largest economy in the world because of it. OpenAI represents the next Intel, the next Apple, and the next Microsoft, all in one. The country that dominates AI will dominate the future, and every single stakeholder and investor is looking at OpenAI as their golden ticket to a future in which they are in control.
This is why President Biden doubled down on Trump’s rhetoric around Chinese trade barriers and restricted the export of Nvidia’s industry-leading AI chips. It takes at least eight of the most advanced GPUs connected together to even start to think about training a model, and that process takes days. Clearing even the minimum barrier to entry is just not possible without Nvidia’s chips. If we view the events that transpired in the context that OpenAI could be worth Apple plus Intel plus Microsoft combined, it all makes more sense.
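To make that hardware barrier concrete, here is a rough back-of-envelope sketch of training compute using the common "6 × parameters × tokens" FLOPs approximation. The model size, token count, and per-GPU throughput below are illustrative assumptions of ours, not figures from OpenAI or Nvidia:

```python
# Back-of-envelope training-compute estimate (illustrative assumptions only).
# Uses the common approximation: training FLOPs ~ 6 * parameters * tokens.

params = 70e9   # assume a 70B-parameter model (roughly Llama-2-70B scale)
tokens = 2e12   # assume ~2 trillion training tokens

train_flops = 6 * params * tokens  # ~8.4e23 FLOPs

# Assume one top-end GPU sustains ~5e14 FLOP/s (500 TFLOP/s) in practice.
gpu_flops_per_sec = 5e14
gpu_seconds = train_flops / gpu_flops_per_sec
gpu_days = gpu_seconds / 86_400

print(f"Total training compute: {train_flops:.2e} FLOPs")
print(f"Single-GPU time: {gpu_days:,.0f} GPU-days")
print(f"With 8 GPUs: {gpu_days / 8:,.0f} days")  # why real labs use thousands of GPUs
```

Under these assumptions, even an eight-GPU cluster would need years for a model of this class, which is why serious training runs use thousands of interconnected accelerators and why chip export controls bite so hard.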
Altman's Return Shakes OpenAI Board
Sam Altman's back. Here's who's on the new OpenAI board and who's out
CNBC News
Lacy: OpenAI is planning to announce new board members soon. With the expansion of the board comes a great opportunity to benefit from diverse insights and experiences from communities that are often underrepresented in corporate governance. A more diverse board can help address some of the AI safety and ethical concerns. As Black people and other communities of color are often excluded from the development of these systems, the board shake-up presents an opportunity to bring their voices to the table and ensure that AI is developed in an ethical and responsible manner. But to the news at hand: what are your thoughts on the new board members and the old? Does this new board mean that the altruists have lost?
Nicolai: Some of the board members who were ousted were adherents of effective altruism - a movement, popular in Silicon Valley, focused on evidence-based approaches to philanthropy that invariably involve business and high-tech solutions to social problems. Honestly, from a progressive perspective, I’ve never been an adherent myself, and I don’t see a massive difference between those supposed altruists and someone like Larry Summers. One thing is for sure: this board, and probably nearly every other board in Silicon Valley, can and must diversify.
Craig: I once took a 400-level philosophy course on the game theory of the evolution of altruism. The game-theoretic rules that underpin why altruism exists, if it does at all, are complex and involve a multi-role society: instead of just hawks and doves, you have hawks, doves, and eagles, plus an artificial distribution of resources. That is all to say, none of these people have done half the thought necessary to understand what “effective altruism” even is, and their label is nothing more than guilt-washing the fact that they are creating a tool that will cause millions of job losses and amplify inequality to scales we’ve never seen before, all while eroding the concept of truth. I own an AI company not named Unfiltered.Media, and I am building a progressive ChatGPT right now. I know that at some point this technology, if I develop it properly, will cost people jobs, even though my goal is the opposite. I just don’t hide the emotional turmoil these crosswise goals cause me under the banner of “effective altruism.”
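For readers who haven’t met the reference, here is a minimal sketch of the classic two-strategy hawk-dove game that such courses build on. The payoff values are our own toy numbers; the hawk-dove-eagle variant Craig describes layers a third role and resource distributions on top of this basic model:

```python
# Classic hawk-dove game: expected payoffs and the stable mixed strategy.
# V = value of the contested resource, C = cost of an escalated fight.
V, C = 4.0, 10.0  # toy values; C > V is what makes pure Hawk unstable

# Payoff to the row strategy when facing the column strategy.
payoff = {
    ("hawk", "hawk"): (V - C) / 2,  # both escalate: split value minus injury cost
    ("hawk", "dove"): V,            # hawk takes everything
    ("dove", "hawk"): 0.0,          # dove retreats empty-handed
    ("dove", "dove"): V / 2,        # share peacefully
}

def expected_payoff(strategy: str, p_hawk: float) -> float:
    """Expected payoff in a population that plays hawk with probability p_hawk."""
    return p_hawk * payoff[(strategy, "hawk")] + (1 - p_hawk) * payoff[(strategy, "dove")]

# At the evolutionarily stable mix, hawks and doves earn equal payoffs: p* = V / C.
p_star = V / C
print(f"Stable hawk frequency: {p_star:.2f}")
print(f"Hawk payoff at p*: {expected_payoff('hawk', p_star):.2f}")
print(f"Dove payoff at p*: {expected_payoff('dove', p_star):.2f}")
```

Even this stripped-down version shows why "altruistic" restraint can persist in a population of self-interested players, and why adding roles and unequal resources makes the analysis genuinely hard.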
The Power Struggle Between Wealth and Ethics
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Reuters
Lacy: I don’t know about you, but “Q Star” sounds a little foreboding. Since ChatGPT arrived on the scene, the general public, myself included, has been introduced to the idea of artificial general intelligence (AGI): in short, AI that can surpass human intelligence. As a child of the ’90s, of course I thought of Skynet, but Craig and Nicolai have assured me that a sentient AI is years away. So my question is: can we as a society develop AI safely, considering that we could create something more intelligent than ourselves, when profit drives much of this work?
Nicolai: “Q Star” sounds a bit ominous, doesn’t it? A lot of the doomsayers about AI have been warning that a convincing artificial general intelligence that could outperform human beings could be the beginning of the end for humanity. If it’s real, I’m sure Q* is impressive tech - but we’d need to know more. If it’s just another incremental improvement on the fundamental tech behind ChatGPT, then we’re still dealing with disarmingly simple tech under the hood: ChatGPT remains, at its core, an excellent predictor of which word should come next when it’s presented with a query.
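To see what “predicting the next word” means in practice, here is a minimal sketch using the small, open GPT-2 model via Hugging Face’s transformers library. This is our own illustration, not OpenAI’s internals; ChatGPT’s models are vastly larger, but they rest on the same next-token principle:

```python
# Minimal next-token prediction demo with GPT-2 (the same core idea as ChatGPT,
# at a much smaller scale). Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The board of OpenAI voted to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's entire job: a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12s}  p={prob:.3f}")
```

Everything a chatbot produces is built by repeating this one step, appending the chosen token, and predicting again.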
Back to the original question: safety and profit need to be coequal goals for corporate actors. AI in this form is new on the scene, but the tension of profit vs safety has been an ongoing problem in industrialized societies. As a political progressive, I think there are times the government can effectively step in as an arbiter. This is why we have safer cars, universal K-12 education, indoor smoking bans, and smoke detectors in our houses. Obviously businesses can continue to profit in these sectors, whether they do so voluntarily or under regulation. I absolutely think it’s possible to balance these two priorities.
Craig: Can humans teach an object that does not think like us to think better than we do? I think that is the more fundamental question. LLMs are just giant sets of math, and to say that is not how humans process and understand information couldn’t be more of an understatement.
Ask ChatGPT to synthesize and combine two seemingly different subjects into one coherent text and it will fail. ChatGPT doesn’t have true understanding; it has the world’s most complicated prediction algorithm behind it. I still can’t get GPT-4, Llama 2, you name it, to regularly follow directions given in a system or user prompt. We are also facing some interesting limits on computational power: the amount of power manufacturers are pushing through cards is causing interference problems, and we are approaching the physical boundaries of how small a transistor can be made. In my opinion, until quantum computing comes online and we start listening to the universe to understand the metaphysics running our world, we are likely safe.
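For context, “system” and “user” prompts are the two main instruction channels in chat-style APIs: the system prompt sets standing rules, the user prompt carries the actual request. Here is a minimal sketch of the kind of instruction-following test Craig describes, using OpenAI’s Python client; the model name and the three-bullet rule are illustrative assumptions of ours:

```python
# A minimal instruction-following test against a chat-style API.
# Requires: pip install openai (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name; swap in whatever you are testing
    messages=[
        {"role": "system",
         "content": "Answer in exactly three bullet points. Never exceed three."},
        {"role": "user",
         "content": "Summarize the OpenAI board shake-up."},
    ],
)

reply = response.choices[0].message.content
bullets = [line for line in reply.splitlines()
           if line.strip().startswith(("-", "*", "•"))]
print(reply)
# The kind of hard constraint models sometimes drift from, per Craig's experience:
print(f"Followed the three-bullet rule: {len(bullets) == 3}")
```

Running a check like this repeatedly is an easy way to see for yourself how often a model honors, bends, or ignores the rules it was given.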