The tone of congressional hearings involving tech industry executives in recent years can best be described as hostile. Mark Zuckerberg, Jeff Bezos, and other tech luminaries have all been dressed down on Capitol Hill by lawmakers displeased with their companies.
But on Tuesday, Sam Altman, CEO of the San Francisco startup OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being built within his company and others like Google and Microsoft.
In his first congressional testimony, Mr. Altman implored lawmakers to regulate artificial intelligence as committee members displayed a budding understanding of the technology. The hearing underscored the deep concern that technologists and the government feel about the potential harms of AI. But that concern did not extend to Mr. Altman, who had a friendly audience in the members of the subcommittee.
The appearance of Mr. Altman, a 38-year-old tech entrepreneur, marked his christening as a leading AI figure during the three-hour hearing.
Mr. Altman also discussed his company’s technology at a dinner with dozens of House members on Monday night, and he met privately with several senators before the hearing. He offered a loose framework for managing what happens next with the rapidly evolving systems that some believe could fundamentally change the economy.
“I think if this technology goes wrong, it can go quite wrong. And we want to be upfront about that,” he said. “We want to work with the government to prevent that from happening.”
Mr. Altman made his first public appearance on Capitol Hill as interest in artificial intelligence has surged. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid growing concerns about the role of artificial intelligence in spreading disinformation, killing jobs and one day matching human intelligence.
That has thrust the technology into the spotlight in Washington. “What you’re doing has tremendous potential and enormous danger,” President Biden said this month in a meeting with a group of chief executives of artificial intelligence companies. Senior leaders in Congress have also promised to craft AI regulations.
It was clear that the members of the Senate Subcommittee on Privacy, Technology, and the Law were not planning a harsh cross-examination of Mr. Altman, as they thanked him for his private meetings with them and for agreeing to appear at the hearing. Cory Booker, Democrat of New Jersey, repeatedly referred to Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.
Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important “for the government to know how we want to mitigate that.” He proposed creating an agency that would issue licenses for the development of large-scale AI models, along with regulations and safety tests that AI models would have to pass before being released to the public.
“We believe the benefits of the tools we have deployed so far greatly outweigh the risks, but ensuring their safety is vital to our work,” Mr. Altman said.
But it was not clear how lawmakers would respond to the call to regulate AI. Dozens of privacy, speech, and safety bills have failed over the past decade due to partisan bickering and fierce opposition from tech giants.
The United States trails much of the world on regulations covering privacy, speech, and protections for children. It is also behind on AI regulations: lawmakers in the European Union are set to introduce rules for the technology later this year, and China has created AI laws that align with its censorship laws.
Sen. Richard Blumenthal, Democrat of Connecticut and chairman of the subcommittee, said the hearing was the first in a series meant to learn more about the potential benefits and harms of AI and, eventually, to “write the rules” for it.
He also acknowledged Congress’s failure to keep pace with the introduction of new technologies in the past. “Congress failed to catch up to the moment on social media,” Mr. Blumenthal said. “Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past.”
Subcommittee members proposed an independent agency to oversee AI; rules that force companies to disclose how their models work and the datasets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the emerging market.
“The devil is in the details,” said Sarah Myers West, managing director of the AI Now Institute, a policy think tank. She said Mr. Altman’s proposals for regulation do not go far enough and should include limits on how AI is used in policing and on the use of biometric data. She noted that Mr. Altman showed no sign of slowing the development of OpenAI’s ChatGPT tool.
“It is ironic to see a posture of concern about harms by people who are rapidly releasing into commercial use the system responsible for those very harms,” Ms. West said.
Still, some lawmakers at the hearing revealed the persistent gap in technological knowledge between Washington and Silicon Valley. Lindsey Graham, Republican of South Carolina, repeatedly asked witnesses whether the speech liability shield for online platforms like Facebook and Google also applies to AI.
Calm and unruffled, Mr. Altman tried several times to draw a distinction between artificial intelligence and social media. “We need to work together to find a whole new approach,” he said.
Some members of the subcommittee also showed a reluctance to impose severe restrictions on an industry that holds great economic promise for the United States and competes directly with adversaries such as China.
Chris Coons, Democrat of Delaware, said China is creating artificial intelligence that “promotes the core values of the Chinese Communist Party and the Chinese system.” “And I worry about how we can foster AI that advances and strengthens open markets, open societies, and democracy,” he added.
Some of the toughest questions and comments directed at Mr. Altman came from Dr. Marcus, who pointed out that OpenAI has not been transparent about the data it uses to develop its systems. He was skeptical of Mr. Altman’s prediction that new jobs would replace those killed by artificial intelligence.
“We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said.
The technology companies have argued that Congress should be careful with broad rules that lump different types of AI together. In Tuesday’s hearing, IBM’s Ms. Montgomery called for an AI law similar to Europe’s proposed regulations, which outline different levels of risk. She called for rules that focus on specific uses, not on regulating the technology itself.
“At its core, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should take a “precision regulation approach to AI.”