AI rush prompts raft of guardrail proposals
WASHINGTON — As artificial intelligence companies race to change workplaces, energy use and the economy, concerns over the impact of the technology have only grown on Capitol Hill.
A number of bills and proposals have emerged this year that would impose safety regulations on the technology or try to offset job losses. Some have come out since the Senate this summer voted nearly unanimously to strip a provision from the Republican budget reconciliation bill, offered by Sen. Ted Cruz, R-Texas, that would have imposed a moratorium on state AI regulations.
Sen. Josh Hawley, R-Mo., who has sponsored or co-sponsored bills seeking to hold companies responsible for their AI systems, said that “the status quo is not acceptable.”
“I think that the corporations are getting a great deal right now, the AI corporations,” he said. “I think that it’s the American people, it’s individuals and workers, who are really endangered by our current regime, which is basically where the corporations do whatever they want.”
The efforts stand in contrast to the push by Cruz, the Senate Commerce chair, who has issued his own plan to treat AI with a “light” regulatory touch and introduced a “sandbox” bill that would let companies seek decadelong waivers from federal regulations.
At a hearing shortly after he released the bill and framework, Cruz called AI “transformative” and said that “America is in an AI race with China.”
In those remarks, Cruz offered strong support for the Trump administration’s “AI Action Plan,” which seeks to “remove red tape and onerous regulation.” Cruz’s sandbox proposal would do just that, and it has won support from the technology industry.
Cruz has also indicated he’s still interested in a moratorium on state AI regulations, although he hasn’t announced a plan for how the provision could move, whether as part of a must-pass bill or on its own.
Still, it’s not yet clear whether he can win over those concerned about the potential harms of AI or gather enough backing to move a bill into law. Meanwhile, lawmakers from both parties are seeking to raise awareness of the pitfalls of AI.
Taxing AI companies
A week after Cruz released his AI framework, Sen. Mark Kelly, D-Ariz., offered his own plan, “AI for America,” which would use the financial gains of AI to fund job programs and energy infrastructure.
Kelly’s plan, which does not yet include introduced legislation, would create a fund “fueled by contributions from leading AI companies” to pay for a variety of solutions to AI-driven problems, including programs to retrain workers who lost their jobs due to AI, as well as higher unemployment payments.
Kelly said his plan addresses how the country could handle disruption in the workforce.
“We’ve got to have the resources available to retrain these people, or upskill them, or even … train them for other jobs, but also maybe train them for jobs that we don’t even know will exist in the future,” he told reporters.
Mark Muro, a senior fellow at Brookings Metro who focuses on the “interplay of technology, people, and place,” says the skills in demand in the workforce are changing and that training will need to shift as well.
“We’re talking about a massive change here,” he said. “And I think that the senator deserves a lot of credit for highlighting these issues, which … are not getting that much attention yet, but it’s actually … not a moment too soon.”
Muro said that labor displacement from AI could be “potentially more disruptive” than previous shifts in the economy, including from industrial automation in manufacturing. In his view, that means the need for action is greater too.
“We have not really made much of an effort even in addressing these issues in the past,” he said. “We need to do much more and much better.”
The plan is not popular with those representing AI companies. An industry lobbyist, speaking on condition of anonymity, called its tone “hostile” toward tech.
“When you’re coming in hostile, why does anyone want to partner with you?” they said.
Amy Bos, director of state and federal affairs for NetChoice, which counts Amazon.com Inc. and Google LLC as members and works for “free enterprise and free expression” online, said that while she supports public-private partnerships, she would prefer that they be voluntary.
“The private sector is investing heavily in worker retraining, so I’m not sure that we need government mandates there.”
Muro acknowledged the importance of private sector training but said he doesn’t expect it to be enough.
“It’s hard for a firm to justify spending, investing in skills. So a lot of it comes down to the community colleges, it comes down to local universities, and in other regional programs, and those things will be much more important,” he said.
The lobbyist called the fund “not a fully baked idea” and questioned how it would work.
“There is a recommendation for taxing companies for social destruction … how do you quantify and put a tax on that?” they said.
While Muro is supportive of Kelly’s plan to fund job programs through the profits of AI companies, he said the fund itself “looms over all of this.”
“It’s not … made clear exactly how that would work, and … that is something, I think, that … will require … private sector as well as public sector collaboration,” he said.
Under his plan, Kelly would put the funding of new energy infrastructure for data centers onto companies, rather than local utilities. The plan also recommends the use of “clean energy” and “environmental stewardship.”
At the same time, the plan addresses what could be a bipartisan priority: supporting AI infrastructure. It calls for “consistent standards” for permitting to avoid “unnecessary delays.”
“Achieving this vision will require smarter permitting tools that enable governments to act both quickly and responsibly, while cutting red tape,” the plan said.
But Bos still prefers the Republican plan.
“Sen. Kelly’s proposal just throws tax money at them while creating new regulatory roadblocks,” she said.
Kelly’s plan also includes a section on “earning public trust,” including potentially by requiring “red-teaming” of products by government agencies prior to release, as well as “combatting bias and exploitation.”
Cody Venzke, a senior policy counsel for the American Civil Liberties Union, expressed a desire for more specifics in the plan.
“I wish his plan had further underscored that discriminatory harms are part of public trust,” Venzke said.
Civil rights
Venzke is particularly worried about AI being used in areas where people’s civil rights could be at risk.
“One of the things that’s missing from Senator Kelly’s plan is a really robust view of when AI is being used in these protected areas of life — banking, education, employment, housing — how do we ensure that everyone has a fair opportunity to access those opportunities?” he said.
For a road map to protecting those areas, Venzke pointed to a bill introduced in the Senate last year by Edward J. Markey, D-Mass., dubbed the AI Civil Rights Act, that would have banned the sale or use of an algorithm, including artificial intelligence, to discriminate in opportunities, goods or services. It also would have required testing of an algorithm prior to deployment, and annually while the system is in use, to look for possible harms.
Venzke said he expects the bill to be reintroduced “in the very near future.”
Also worried about the impact of AI on civil rights is Rep. Yvette D. Clarke, D-N.Y., who introduced a bill last month that would require large companies using algorithms to assess the impact of those algorithms on critical decisions like employment and housing, both before and after deployment.
Efforts from both parties
Republicans aren’t united behind Cruz’s push for light regulation.
Sen. Jon Husted, R-Ohio, sponsored a bill that would require operators of companion AI chatbots to verify users’ ages and provide protections for children’s accounts, including requiring parental consent and blocking access to chatbots that engage in “sexually explicit communication.”
Last month, Hawley co-sponsored a bill introduced by Sen. Richard J. Durbin, D-Ill., that would make AI companies liable under product safety laws. Hawley previously led a subcommittee hearing on the harms of AI chatbots, and he expressed particular concern about risks to children.
“We’ve got chatbots that are literally telling kids how to kill themselves. There’s no repercussions for that,” he said.
Hawley said he hadn’t yet talked to Senate Judiciary Chair Charles E. Grassley, R-Iowa, about the bill.
“I certainly hope we can move forward on it,” he said.
The bill appears unlikely to meet with approval from the Trump administration. The AI Action Plan includes a direction to review Federal Trade Commission investigations to “ensure that they do not advance theories of liability that unduly burden AI innovation.”
Hawley also sponsored a bill, co-sponsored by Sen. Richard Blumenthal, D-Conn., that would require the Department of Energy to establish a program to test advanced AI and evaluate the risk of “adverse AI incidents.” Those incidents could include loss of control, weaponization of a system by foreign adversaries, or “a significant erosion of civil liberties, economic competition, and healthy labor markets.”
Hawley said he’s concerned about protecting workers and property rights in an AI-driven world.
“Protecting individual rights is something that we do well as Americans. It’s at the core of our Constitution. And just because we’re in a new technological age doesn’t mean the Constitution is outdated, and we need to make sure that it remains vital and in effect for this new technological era,” he said.
That bill was referred to the Senate Commerce Committee.
_____
©2025 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.