Lawmakers suggest agency to supervise artificial intelligence

May 24, 2023

WASHINGTON -- Lawmakers are floating ideas about guardrails for artificial intelligence as Congress considers how to confront the fast-moving technology that experts say could have profound implications.

Hearings last week offered a glimpse into lawmakers’ approach to artificial intelligence. The spurt of hearings came after the office of Senate Majority Leader Charles E. Schumer, D-N.Y., said last month that the senator had circulated a high-level framework sketching out a regulatory regime for the technology.

The early indications are that both lawmakers and some industry representatives don’t want the government to stand on the sidelines as artificial intelligence advances. They want government action, potentially including a federal agency to supervise artificial intelligence. Some lawmakers are at least partially motivated by liability protections afforded to internet companies, an approach some now consider a mistake.

“I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them,” said Senate Judiciary Chair Richard J. Durbin, D-Ill., at a Privacy, Technology and the Law Subcommittee hearing. “In fact, many people in the Senate have based their careers on the opposite, that the economy will thrive if government gets the hell out of the way.”

Lawmakers are also grappling with the potential magnitude of AI, and the CEO of the artificial intelligence company OpenAI told the senators about his fear that the technology and the industry could “cause significant harm to the world.”

The Judiciary subcommittee was one of several panels to hold hearings on AI last week. The Senate Homeland Security and Governmental Affairs Committee held one on AI in government and the House Judiciary Subcommittee on Courts, Intellectual Property and the Internet held a hearing on AI and copyright law.

Senate Judiciary members expressed support for a new agency that would oversee the technology, inquired about international coordination and showed a keen interest in making sure AI companies can be held accountable in court.

“I want to make sure that it’s abundantly clear that the corporations that develop AI can be sued,” Sen. Josh Hawley, R-Mo., said in an interview.

Some lawmakers have already put forward legislation on AI. One measure from Rep. Yvette D. Clarke, D-N.Y., would require a disclaimer for political advertisements that use images or video generated by artificial intelligence.

Another proposal from Senate Homeland Security Chairman Gary Peters, D-Mich., and Sen. Mike Braun, R-Ind., would create an artificial intelligence training program for federal supervisors and management officials.

The AI interest in Washington dovetails with growing public awareness of the technology, driven by tools such as ChatGPT. The attention in part stems from deep concerns raised by lawmakers that the technology could eliminate jobs, compromise privacy, manipulate personal behavior and spread misinformation.

OpenAI response

Sam Altman, the CEO of OpenAI, the company that released ChatGPT, told Senate Judiciary subcommittee members that regulatory action will be important to reduce the risks of powerful models. Safety standards could be set and independent audits could be required to show whether a model is in compliance with safety thresholds, Altman testified.

He also suggested a new agency that “licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards.”

Altman said companies should put forward test results of their models before they release them. And he endorsed an idea raised by Sen. Richard Blumenthal, D-Conn., that there be so-called nutrition labels or scorecards for AI, giving people a sense of the content’s trustworthiness.

Sen. Marsha Blackburn, a Tennessee Republican whose state is home to the music town of Nashville, pressed Altman on AI companies using the work of artists to train their models. She said she tried out an OpenAI research release called Jukebox and came away with questions about creative rights and how artists could get compensation for use of their work.

“I went in this weekend and I said, ‘Write me a song that sounds like Garth Brooks,’ and it gave me a different version of ‘Simple Man,’” Blackburn said. “So it’s interesting that it would do that.”

Altman said his company has been talking to artists and other content creators about how to give them benefits from AI, adding that “there’s a lot of ways this can happen.” And he said people should be able to say they don’t want personal data used to train AI.

Blackburn said the issue pointed to the need for a national privacy law. “Many of us here on the dais are working toward getting something that we can use,” she said.

Christina Montgomery, vice president and chief privacy and trust officer for IBM, said the company urges Congress to adopt a “precision regulation” approach to AI, setting “rules to govern the deployment of AI in specific use cases, not regulating the technology itself.”

There must be clear guidance on AI uses that are inherently high risk, and the strongest regulation should be applied to cases with the most risk to society and people, Montgomery said.

People should know when they are interacting with an AI system and should have the option to engage with a real person, she said, adding that nobody should be tricked into interacting with one.

Companies should also be required to do “impact assessments” in higher-risk situations. Those assessments should “show how their systems perform against tests for bias and other ways that they could potentially impact the public,” Montgomery said.

Gary Marcus, a New York University emeritus professor, told lawmakers that many agencies, such as the Federal Trade Commission, could respond to AI challenges in some ways.

“But my view is that we probably need a Cabinet-level organization within the United States in order to address this,” he said. “And my reasoning for that is that the number of risks is large.”

Legacy of internet companies

The legacy of the protections Congress provided to internet companies in the 1990s looms over the current discussion about AI. The 1996 law, known as Section 230, which generally prevents providers from being held liable for information originating from a third party, gave internet companies sweeping immunity from lawsuits.

Some members are blunt in calling the approach a mistake.

“Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real,” Blumenthal said. “Sensible safeguards are not in opposition to innovation. Accountability is not a burden, far from it.”

Altman told lawmakers he doesn’t think his industry can claim protections under Section 230.

Schumer met last week with Republican Sens. Mike Rounds of South Dakota and Todd Young of Indiana, along with Democratic Sen. Martin Heinrich of New Mexico, to talk about what a source familiar with the discussions described as “their emerging bipartisan group focused on comprehensive AI legislation.”

Heinrich on Friday didn’t provide a timeline for introducing any legislation.

“There’s a tendency to want to move really fast, but there’s risk in that too. And I think people need to understand AI, play around with it, see what generative AI can do, see what the risk factors are,” Heinrich said. “We should move in this Congress, but I don’t think we should just rush headlong into solutions before we really understand the lay of the land.”

Niels Lesniewski contributed to this report.