Opinion - Move fast and break things? Not again, and not with AI.
It was only 12 years ago that Mark Zuckerberg, CEO of Facebook, declared that the company’s culture was to “move fast and break things.” Perhaps he was thinking narrowly about software, telling engineers that they shouldn’t be afraid to try new things just because doing so might break some old code or existing functionality.
But in practice, we now know it’s not just software. What this motto gave birth to is an industry culture that systematically privatizes the upside benefits of technology (more revenues and higher stock price) while socializing the downside human risks (to privacy, mental health, civil discourse and culture).
The problem with “move fast and break things” is that the company now named Meta, along with the other tech giants that adopted this worldview, keeps the money and power for itself while letting everyone else — users, societies, communities and cultures — bear the costs of what gets broken along the way.
It’s painful for me to see Meta trying to do the same thing today with a new generation of technology: artificial intelligence, specifically large language models. And in a new twist of cynical self-interest, this time Meta is trying to position itself as the champion of open source, a community that has done so much to advance and spread the benefits of digital technology more equitably.
Don’t buy what Zuckerberg is selling. When you hear, as we all do almost every day now, that artificial intelligence has enormous promise but also significant risks, ask yourself this question: Who is going to benefit from the promise, and who is going to suffer and pay for the risks?
Consider Zuckerberg’s eloquent explanation for why Meta is releasing its Llama AI models as “open source.” He claims that open-source AI is good for developers because it gives them more control and freedom; good for Meta because it will allow Llama to develop more quickly into a diverse ecosystem of tools; good for America because it will support competition with China; and good for the world because “open source is safer than the alternatives.”
But the most important parts of this story are false. The societal risks of open-source AI models are higher, and the benefits smaller, than those of standard open-source software. Giving full access to a model’s weights significantly lowers the barriers for bad actors to remove safeguards and “jailbreak” the model. Once the weights are public, there is no way to rescind the release or to control what a criminal or hostile nation does with them. Meta even gives up visibility into what end users are doing with the models it releases.
For Meta, open-source AI means taking no responsibility for what goes wrong. If that sounds like a familiar pattern for this company, it should.
Who is this good for? The answer, of course, is Meta. Who bears the risk of misuse? All of us. That is why I think Meta’s concerted PR push around open source as the path forward for AI is a cynical mask. It doesn’t serve the public interest. What Meta wants is a “corporate capture” of the open-source ethos, to once again benefit its own business model and bottom line.
Governments shouldn’t be fooled. Thankfully, the California state legislature isn’t. It is considering SB 1047, a first-in-the-nation AI safety bill designed as a light-touch regulatory regime to rebalance the scales of benefit and risk between companies and society.
The legislation protects the public interest while equally preserving the scientific and commercial upsides of AI technology, particularly open-source development. The bill makes sense because the public interest should matter as much as the stock prices of Meta and the other AI companies.
But Meta, along with several other tech giants, opposes the bill. Big Tech would prefer the old arrangement: privatized benefits and socialized risks. Many top AI labs acknowledge the possibility of catastrophic risk from this technology and have committed to voluntary safety testing to reduce those risks. Yet many oppose even light-touch regulation that would make reasonable safety testing mandatory. That’s not a defensible position, particularly for Meta, given its history of systematically shirking responsibility for the harms its products have caused.
Let’s not make the same mistake with generative AI that we did with social media. The public interest in technology shouldn’t be an afterthought — it should be our first thought. If tech titans are able to successfully leverage their money and power to defeat SB 1047, we’re headed back to a world where they get to define “innovation” as a blank check for tech companies to keep the winnings — and make the rest of us pay for what’s broken.
Jonathan Taplin is a writer, film producer and scholar. He is the director emeritus of the Annenberg Innovation Lab at the University of Southern California. His recent books on technology include “The End of Reality” and “Move Fast and Break Things.”