Analysis


AI Regulatory Landscape In Flux As Calendar Turns To 2025

By Vin Gurrieri · January 3, 2025, 3:14 PM EST

As products that incorporate artificial intelligence and machine learning increasingly permeate the workplace and other aspects of society, lawmakers and regulators are steadily setting guardrails around the use of those tools to ward off potential discrimination.


Lawmakers and regulators are trying to keep pace with the increasing growth of artificial intelligence in the workplace and other areas of society by forming regulations to combat potential discrimination from AI-fueled systems. (ArtemisDiana / Alamy Stock Photo)

Colorado and Illinois recently moved to the forefront of states seeking to combat workplace discrimination caused by the use of AI-fueled systems, but the laws they've adopted are still a year away from taking effect and could be revised before then. Meanwhile, regulators in California are moving closer to finalizing sweeping rules of their own that would govern AI usage.

Tracey Diamond, a partner at Troutman Pepper Locke LLP, said AI regulation is still "in its infancy," and she believes it is "a little surprising [that] it's taking this long to kind of get these laws on the books."

But in analyzing the approach that lawmakers and regulators are taking in different jurisdictions, common elements are starting to emerge.

"I think it's interesting that when you take all these laws and look at them together, what you're really seeing is this trend towards, 'We don't like the fact that there's this potential for bias, and we don't like the fact that candidates don't know that it's being used, so we're going to require you to notify candidates about its use,'" Diamond said. "And some of these statutes are requiring an actual bias audit, whatever that means, and then making sure that ... you understand that you're liable if there's a potential algorithmic discrimination."

Here, experts discuss three ways AI-related laws and regulations may evolve in the year ahead that employers should watch.

Changes to State AI Laws May Be Forthcoming

Of the few statewide laws already on the books that regulate AI, the statutes enacted in Colorado and Illinois have drawn employment law observers' attention — particularly given the possibility that one or both could be modified before they take effect in 2026.

Colorado enacted its landmark AI-related law in May. The statute, S.B. 205, titled "Concerning Consumer Protections in Interactions With Artificial Intelligence Systems," directs companies that develop or use a "high-risk artificial intelligence system" to take reasonable measures to avoid algorithmic discrimination.

The law is slated to take effect in February 2026. But Gov. Jared Polis said in a signing statement that he had some "reservations" and encouraged state lawmakers to reconsider some aspects before its effective date.

Danielle Ochs, co-chair of the technology practice group at Ogletree Deakins Nash Smoak & Stewart PC, said that rules governing AI in the workplace have converged to include both procedural components, such as notice to people that a tool is being used and mandatory bias audits, and substantive restrictions that bar AI tools from being used in circumstances where they would violate existing laws.

"Probably the most comprehensive example of that would be the Colorado law, which is by far the most comprehensive AI law out there that impacts employment," Ochs said. "Of course, it's broader than just employment, but it applies to the employment workplace."

Illinois Gov. J.B. Pritzker in August signed H.B. 3773, which amends the Illinois Human Rights Act to clarify that the law is implicated when discrimination stems from an employer's use of AI to make decisions about things like hiring, firing and discipline.

Like Colorado's statute, the new law in Illinois won't take effect until 2026. The state has had a different law on the books since 2020, the Artificial Intelligence Video Interview Act, that regulates the use of AI to analyze videos submitted by job applicants.

When laws are passed with a lengthy on-ramp before they take effect, it may imply "that some fixing up might be needed," giving lawmakers and regulators time to refine the statutes, according to Ochs.

"I don't think that what you see now is what will actually go into effect," Ochs said. "I think you will see amendments in Colorado."

Aside from Colorado and Illinois, it's likely that other states will soon adopt AI governance frameworks of their own. Ochs noted that numerous states, while they haven't yet advanced AI legislation, have set up task forces to study the issue, and their progress bears watching in the year ahead.

As more states move forward with their own proposals, Ochs said auditing of AI products by both creators and users, along with requirements for human oversight, will likely be central to future legislation.

"I also think you're going to see legislatures grappling with who should be held responsible for the outcome," Ochs said. "You see the growth of this concept that nonemployers potentially could be held responsible for the violation of employment laws: The idea that they have some responsibility for the outcomes of the products that they provide to employers, and then you also see employers being held responsible for products that they use."

Calif. Regulators Zero In on AI Rules 

While lawmakers in Colorado and Illinois were successful in getting AI-related legislation across the finish line, California legislators fell short of adopting sweeping legislation of their own.

Though lawmakers during the Golden State's most recent legislative session succeeded in adopting several narrowly targeted laws related to AI, a broader, groundbreaking proposal stalled before passage.

"California usually leads in the space of regulating the workplace; it has in almost every area except for this area," Ochs said. "And I think the reason is because it also supports the industry. It's constantly weighing the needs of the industry and the impact on California, if the industry is impacted versus its tendency to be in favor of regulation of the workplace."

While Golden State lawmakers are likely to try again, several regulatory bodies are stepping up in the meantime.

In November, the California Privacy Protection Agency opened the public comment period for an expansive set of draft regulations governing the use of AI-infused technologies in employment, consumer protection, healthcare and other contexts.

The regulations, promulgated under the California Consumer Privacy Act, seek to regulate businesses that use automated decision-making technology for "significant decisions," and set requirements for cybersecurity audits and risk assessments, among other things. The public comment period on the regulatory package expires later this month. 

Separately, the California Civil Rights Council, the rulemaking body of the state's Civil Rights Department, has proposed a sweeping set of regulations of its own under the state's Fair Employment and Housing Act that would govern the use of artificial intelligence tools in employment.

The council's proposed regulations, if finalized absent major changes, make clear that using a tool to discriminate against people based on their protected characteristics is illegal under FEHA. They also list examples to help clarify what AI bias can look like. The comment period for the most recent iteration of the proposed regulations expired in November.

"When you look at the draft regulations, [they] actually will have quite a significant impact on [employers], which is why I think that folks need to be paying attention to it now," said Angelina Evans, a Los Angeles-based partner at Seyfarth Shaw LLP,

Evans added that the regulations would have a broad scope in terms of which employers can be held liable for AI-related discrimination under state law. That's partly because of the way the term "agent" is defined in the draft regulations to include employers as well as third parties that provide services related to hiring or employment decisions. Recruiters, applicant screening services, or payroll and benefit administrators can fall under that umbrella, Evans said.

"Any entity that evaluates or makes decisions regarding requests for workplace leave and accommodations, all of these third parties that ordinarily would not be considered 'employers' under the FEHA … can be liable for employment discrimination not only for their own employees, but also for employees and applicants of the companies that they service," she said. "So this is a really broad application and I think it's … concerning or at least something to be aware of."

The Civil Rights Council's draft regulations are also notable for their approach toward medical and psychological inquiries, such as AI tools that ask about or are intended to measure personality-based characteristics, Evans said.

Under FEHA, it's unlawful to conduct medical or psychological examinations or inquiries of job applicants, a longstanding principle that is "clear," she said. 

But the council "wants employers to know that certain AI tools that you may not have really considered to be psychological examinations are, in fact, psychological examinations," Evans said.

"Certain [games] or challenges that are like gamified systems that are often used in these AI recruiting tools — those might be considered quote-unquote psychological examinations under the proposed rule, and they may be violative of the FEHA," Evans said. "So that's another piece of the [proposed rule] that I think to the extent that it gets [finalized] in the form that it's in now will have an impact on folks."

In the year ahead, the regulations pending before both the California Privacy Protection Agency and the Civil Rights Council are likely to move toward finalization, according to Evans.

"The California Civil Rights Council and the California Privacy Protection Agency — it feels very much like they are no longer waiting for … broad brush AI legislation to get passed, and they want to make sure that their employees and their consumers are protected," Evans said.

Eyes on NYC AI Law's Enforcement

While states may be increasingly throwing their hats in the ring on AI regulation, cities are also part of the mix. A notable example is New York City, which broke new ground several years ago when it enacted Local Law 144 to combat algorithmic discrimination.

The closely watched law, which took effect citywide in 2023, required employers that use automated employment decision tools to audit them for potential bias, publicize the results of those audits and alert workers and job applicants that such tools are being used.

However, despite the law's closely monitored rollout, attorneys said that enforcement activity has seemed minimal so far. Whether that changes and enforcement picks up in the year ahead remains an open question worth monitoring, according to Diamond of Troutman Pepper Locke.

"I think that it might be a question of whether it's going to be enforced more vigorously than it has been," Diamond said. "I haven't seen really any case law coming out of it yet. I'm not sure if the law itself needs to be strengthened, or that it's just not really a law that's really being enforced. It's just sort of a law on the books right now."

--Additional reporting by Amanda Ottaway, Dorothy Atkins, Patrick Hoff and Allison Grande. Editing by Amy Rowe.

For a reprint of this article, please contact reprints@law360.com.