Artificial intelligence is increasingly influencing critical aspects of American life, helping determine who gets a job interview, who qualifies for housing, and who receives medical care. Yet the first significant proposals aimed at addressing AI bias are encountering resistance on multiple fronts.
On Thursday, US lawmakers from states including Colorado, Connecticut, and Texas convened to advocate for legislation designed to mitigate discrimination in AI decision-making. Although more than 400 AI-related bills are under consideration in state legislatures nationwide this year, most target a specific industry or facet of the technology, such as deepfakes in elections or the unauthorized use of AI to create explicit content.
The comprehensive bills proposed by these lawmakers seek to establish a broad framework for oversight, particularly targeting the pervasive issue of AI discrimination. Notable examples of such bias include AI systems that misdiagnosed Black medical patients and algorithms that downgraded women’s resumes during job application screenings.
According to the Equal Employment Opportunity Commission, up to 83 percent of employers use algorithms in the hiring process. Suresh Venkatasubramanian, a professor of computer and data science at Brown University, argues that bias in AI systems is inevitable unless developers intervene deliberately. “You have to do something explicit to not be biased in the first place,” he explained.
The proposed legislation, primarily in Colorado and Connecticut, would require companies to perform “impact assessments” for AI systems significantly influencing decisions affecting individuals in the US. These assessments would detail how AI contributes to decision-making, the data collected, analyses of discrimination risks, and the safeguards implemented by the company.
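The bills do not prescribe any particular statistical test, but one long-standing screen that an “analysis of discrimination risks” might include is the EEOC’s four-fifths rule: a selection rate for any group below 80 percent of the highest group’s rate is treated as evidence of adverse impact. The sketch below is a minimal illustration of that check, not anything drawn from the bill texts; the group labels, sample data, and function names are invented for this example.

```python
# Hypothetical sketch of one disparate-impact check an AI "impact
# assessment" might contain. The four-fifths rule comes from the EEOC's
# Uniform Guidelines on Employee Selection Procedures; everything else
# here (data, names) is assumed for illustration.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(records, threshold=0.8):
    """Flag any group whose rate falls below 80% of the best-off group's."""
    rates = selection_rates(records)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

# Example: screening outcomes from a hypothetical resume-ranking model.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)

for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(f"group {group}: selection rate {rate:.0%}, "
          f"{'passes' if passes else 'fails'} the four-fifths rule")
```

In this invented example, group B is selected at 25 percent against group A’s 40 percent, a ratio of 0.625, so the check fails; under the proposed legislation, an assessment would pair a screen like this with documentation of the data collected and the safeguards in place.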
While increased transparency promises greater accountability and public safety, companies express concerns that such measures could heighten the risk of lawsuits and expose trade secrets. David Edmonson of TechNet, a bipartisan network of technology CEOs and senior executives lobbying on AI bills, stated that the organization collaborates with lawmakers to “ensure any legislation addresses AI’s risk while allowing innovation to flourish.”
Under the bills in Colorado and Connecticut, companies using AI would not be obligated to routinely submit impact assessments to the government. Instead, they would only need to disclose to the attorney general if they identify instances of discrimination. This self-reporting mechanism has raised concerns among labor unions and academics, who fear it limits the ability of the public and government to detect AI discrimination before harm occurs.
“It’s already hard when you have these huge companies with billions of dollars,” said Kjersten Forseth, representing Colorado’s AFL-CIO, a federation of labor unions opposing Colorado’s bill. “Essentially, you are giving them an extra boot to push down on a worker or consumer.”
Another point of contention is the limitation on who can file lawsuits under the proposed legislation. The bills generally restrict this right to state attorneys general and other public attorneys, excluding individual citizens. After a provision allowing citizens to sue was removed from a California bill, software company Workday endorsed the proposal. Workday argues that allowing civil actions from citizens could lead to inconsistent regulation, as decisions would be left to judges who may lack technical expertise.
However, Sorelle Friedler, a Haverford College professor who studies AI bias, disputes this perspective. “That’s generally how American society asserts our rights, is by suing,” she said.
Reference(s):
“U.S. first major attempts to regulate AI face headwinds from all sides,” cgtn.com