Major AI players are getting in sync, but it’s what comes next that really matters
AI is here. It is transformational, and it is changing the world. As a result, Silicon Valley’s mojo is back.
On the other side of the country, in Washington, D.C., an equally momentous sea change is taking place: The AI industry’s weightiest players are taking a public policy approach almost as unexpected as the technology itself.
Today’s leading AI companies are shrewdly engaging policymakers early. They are offering members of Congress and their staffs briefings to better understand the technology, and they have shown a willingness to appear before committees publicly and privately. Moreover, they are organizing multi-stakeholder forums and are even signing joint agreements with the White House.
As someone who has worked on numerous public policy efforts straddling technology and the public sector, I have seen firsthand just how difficult it is to get the private sector to agree among themselves, let alone with the government.
Some argue that the AI industry’s public pronouncements are simply a facade. These companies know Congress moves at a glacial pace — if at all.
They know the time required for Congress to establish a new regulatory and oversight agency, fund it, staff it, and arm it with the teeth needed for meaningful enforcement could take years. For context, social media companies remain almost entirely unregulated decades after first taking the world by storm.
Regardless of their true motivations, the fact that all of the large AI model players are coming together so quickly and agreeing on broad safety principles and regulatory guardrails demonstrates just how seriously they view AI’s potential risks, as well as its unprecedented opportunities.
There has never before been a technology that has so quickly rallied the private sector to proactively seek government oversight. While we should welcome their steps to date, it is what comes next that really matters.
It is clear that AI executives, along with their public policy teams, learned from the backlash against the industry's handling of previous transformational technologies such as social media and ride-sharing. At best, Silicon Valley ignored Congress. At worst, it mocked Congress outright.
Moreover, when asked to appear before legislative bodies, industry leaders displayed their disdain, sometimes clumsily and sometimes seemingly deliberately. Their relationships with policymakers, along with the public's opinion of those companies, soured as a result.
So far, we are seeing the opposite approach with AI. CEOs are appearing before Congress, answering even the most trivial questions with what appears to be their utmost deference. They are speaking straightforwardly, and they are neither overpromising the benefits nor minimizing the downsides. These leaders have come across as thoughtful, responsible, and genuine.
As we move from the initial phase, where simply showing up curries favor, to the sausage-making phase of drafting a regulatory framework, their policy and legislative strategies will be pressure tested.
AI companies would be wise to stay the course. After all, goodwill and trust are extremely difficult to gain and all too easy to lose.
To continue down the path of engagement, consultation, and action, AI industry leaders must build upon their initial efforts. Here are several steps they should consider implementing:
Increase transparency: Find new ways to educate stakeholders on key aspects of the current models — what goes into them, how they are deployed, existing and future safety measures — and pull the curtains back on the teams building them. Furthermore, quickly share new research, as well as newly uncovered risks.
Agree and commit: Companies shouldn't sign any joint agreements they cannot or will not satisfy. They should avoid vague language designed to let them wriggle out of pledges. The short-term bump in positive media coverage is not worth the long-term reputational harm of failing to meet their commitments.
Greater member inclusion: Personal outreach softens the hardest edges. Expand outreach on Capitol Hill beyond the members who sit on the relevant oversight committees and connect with every House and Senate office. Hold group briefings followed by individual meetings. The same should be done with the think tank community and advocacy groups, especially those sounding the loudest alarms about AI.
Congressional strike force: Offer dedicated employees to help congressional staff with technical questions so they can better prepare their members for hearings and events in their home districts. Helping members answer constituent questions will further build trust and goodwill.
State government outreach: Activate an equally robust state government strategy. The laboratories of democracy could create a regulatory nightmare for AI companies. Getting ahead of that now, just as they are with Congress, is essential to reducing the compliance risk later.
Political red team: Add a policymaker component to red team exercises. Bring in lawmakers from both sides of the aisle to demonstrate how red teaming works both technically and substantively. And get their participation. It is much harder to throw blame at a company when you are part of the solution or, at the very least, were invited to help.
Explain regulatory pushback: Do not publicly talk about welcoming regulatory reform and speak in generalities about safety while quietly lobbying governments to kill aspects of bills in the U.S. or Europe. That does not mean accepting all regulation as written, but companies should be clear and should communicate why they are fighting certain provisions. Better to receive criticism for disagreeing with a specific policy than to be seen as lying or harboring false motivations.
Bounty programs for safety: Beyond specialized hackathons, create safety-focused bounty programs modeled on traditional software bug bounty programs that incentivize users to report safety exploits. The commercial imperative to develop new AI products means that even the best safety and security measures will likely lag behind innovation. Traditionally, when there is a problem with a high-risk product or service, such as airplanes or cars, industry pauses operations with a grounding or recall to assess and fix the issue. With software, however, companies tend to patch while the platform keeps running. That makes it more important than ever to narrow the time between identifying and fixing a safety breach.
Time will tell whether this new and radically different approach to public policy is here to stay or whether it is a flash in the pan. Ultimately, companies will have to chart their own public policy course. While there is no one-size-fits-all solution, anyone who thinks they have already done enough is in for a rude awakening.