OpenAI CEO Sam Altman testifies before the Senate: 4 key takeaways

The ChatGPT creator and others spoke to the Senate Judiciary Committee about the future of artificial intelligence.

If the burgeoning artificial intelligence industry has a spokesman, then it is Sam Altman, the CEO of OpenAI and creator of ChatGPT. On Tuesday, Altman testified before the Senate Judiciary Committee in what turned out to be a wide-ranging, big-picture conversation about the future of artificial intelligence.

The stunning speed with which artificial intelligence has advanced, in only a matter of months, has inspired Congress with a rare bipartisan zeal to keep Silicon Valley's innovations from outpacing Washington again.

Sam Altman, chief executive officer and co-founder of OpenAI, at a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday. (Eric Lee/Bloomberg via Getty Images)

For decades, policymakers had deferred to the promises coming out of Palo Alto and Mountain View. But as evidence has grown that reliance on digital technology comes with serious social, cultural and economic downsides, both parties have shown willingness — for different reasons — to regulate technology.

Integrating artificial intelligence into American society presents a major test of whether lawmakers can show the tech sector that it cannot escape the scrutiny other industries have long faced.

Testifying alongside Altman were IBM’s Christina Montgomery, chair of the company’s AI ethics board, and Gary Marcus, a critic of artificial intelligence who teaches at New York University.

Here are some key moments from the hearing.

'Congress failed to meet the moment on social media.'

Sen. Richard Blumenthal, D-Conn., asks Altman questions at the hearing Tuesday. (Win McNamee/Getty Images)

In his opening remarks, which were partially generated by artificial intelligence, Sen. Richard Blumenthal, D-Conn., acknowledged Congress’s struggle to impose meaningful regulations on social media.

“Congress failed to meet the moment on social media,” he said. “Now we have an obligation to do it on AI before the threats and risks become real.”

After the 2016 election, many Democrats charged platforms like Twitter and Facebook with disseminating misinformation that they claimed helped Donald Trump defeat Hillary Clinton for the presidency. Republicans, meanwhile, accused those same platforms of suppressing right-leaning content or “shadow banning” conservatives.

Political grievances aside, it has become clear that social media is harmful to teens, facilitates the spread of bigoted views and leaves people more anxious and isolated. It may be too late to address those concerns in the case of social media companies, which have become mainstays of the corporate and cultural landscape. But there is still time, members of Congress agreed, to make sure that AI does not generate the same social ills.


'Some jobs will transition away.'

Christina Montgomery, chief privacy and trust officer at IBM, at the hearing Tuesday. (Andrew Caballero-Reynolds/AFP via Getty Images)

IBM’s Montgomery acknowledged the obvious risks that artificial intelligence poses to workers in a variety of industries, including those whose jobs had previously been seen as safe from automation.

“Some jobs will transition away,” she said.

Altman, who has emerged as a kind of industry elder statesman and seemed eager to embrace that role on Capitol Hill on Tuesday, offered a different take.

“I think it’s important to understand and think about GPT-4 as a tool, not a creature,” he said, referencing OpenAI’s latest generative AI model. Such models, he said, were “good at doing tasks, not jobs” and would therefore make work easier for people, without replacing them altogether.


'This is not social media. This is different.'

Altman speaks at the hearing Tuesday. (Patrick Semansky/AP)

Anticipating the senators’ regrets about having missed the chance to regulate social media, Altman presented artificial intelligence as an altogether different development, one likely to be far more transformative and beneficial than a feed of cat memes (or, for example, racist messages).

“This is not social media,” he said. “This is different.”

Altman and Montgomery agreed that regulation was required, but neither they nor lawmakers could say, at this relatively early stage in the policy conversation, what such regulation should look like.

“The era of AI cannot be another era of ‘move fast and break things,’” Montgomery said, alluding to the outworn Silicon Valley mantra. “But we don’t have to slam the brakes on innovations, either.”

Last year, the White House released a proposed AI Bill of Rights aimed at targeting misinformation, discrimination and other forms of harm. Vice President Kamala Harris recently met with leaders in AI innovation, including Altman, at the White House.

But so far, no regulatory framework has emerged, despite the consensus that one is badly needed.


'Humanity has taken a back seat.'

New York University Professor Emeritus Gary Marcus at the hearing Tuesday. (Patrick Semansky/AP)

Marcus, the NYU professor, revealed himself as the panel’s lone AI skeptic, arguing that “humanity has taken a back seat” as corporations race to develop ever more sophisticated AI models with too little regard for the potential dangers.

Altman also acknowledged that those dangers could be significant. “I think if this technology goes wrong, it can go quite wrong,” he said. It was a bracing admission from one of the technology’s most vociferous proponents — but also a welcome break from Silicon Valley’s possibly deceptive facade of optimism.
