WASHINGTON — Major technology companies are facing intensified scrutiny from regulators on both sides of the Atlantic, as lawmakers push for comprehensive legislation governing the development and deployment of artificial intelligence — systems that have rapidly transformed industries from healthcare to finance.

In concurrent hearings held in Washington and Brussels on Monday, senior executives from leading AI companies were pressed on issues ranging from data privacy and algorithmic bias to the environmental impact of training large language models. The coordinated approach signals a new phase of international cooperation on technology regulation.

"The era of self-regulation is over," declared Senator Elizabeth Warren during a Commerce Committee hearing. "These companies have demonstrated that they cannot be trusted to police themselves. It is time for Congress to act."

The hearings focused particularly on concerns about the opacity of AI training processes. Several lawmakers expressed alarm that companies have been reluctant to disclose the datasets used to train their most powerful models, making it difficult to assess potential biases or copyright violations.

"We are being asked to trust systems that we cannot audit, trained on data that we cannot inspect, by companies that refuse to answer basic questions," said Representative Ro Khanna, who chairs the House Subcommittee on Innovation, Data, and Commerce. "That is not acceptable in a democracy."

Tech executives defended their practices, arguing that excessive regulation could stifle innovation and cede competitive advantage to foreign rivals. Dr. Jennifer Martinez, chief technology officer at Anthropic, testified that her company has implemented "robust internal safeguards" and expressed openness to reasonable transparency requirements.

"We share the committee's commitment to responsible AI development," Dr. Martinez said. "We believe there is a path forward that protects both innovation and public interest."

The European Union, which has already enacted the world's most comprehensive AI legislation through its AI Act, is now considering additional measures targeting generative AI systems specifically. European Commissioner Thierry Breton warned that companies failing to comply with EU standards would face "significant consequences."

Industry analysts suggest that the regulatory push could have significant implications for the business models of major tech companies. Research from Morgan Stanley estimates that compliance costs could reduce profit margins at leading AI companies by 8 to 15 percent, depending on the stringency of final regulations.

"The regulatory environment is clearly evolving," said Michael Robinson, technology analyst at Barclays. "Companies that get ahead of this curve by building transparency into their processes will be better positioned than those that resist."

Consumer advocacy groups have largely welcomed the increased scrutiny. The Electronic Frontier Foundation called the hearings "a long-overdue recognition that AI systems deployed at scale have profound implications for civil liberties and democratic governance."

Legislation requiring AI transparency and independent auditing is expected to advance through committee in the coming weeks, though the timeline for final passage remains uncertain given the complexity of the issues involved and the approaching election cycle.