The Sound and Fury of Regulating AI in the Workplace

Bradford J. Kelley[*]

Andrew B. Rogers[**]

 

ABSTRACT

New technologies, including those driven by artificial intelligence (“AI”), have transformed the workplace. When designed and executed well, these innovations have the potential to assist companies seeking to enhance operational efficiency, mitigate human bias, prevent discrimination and harassment, and improve worker health and safety. However, the use of AI simultaneously presents labor and employment law risks, including introducing or proliferating bias or unlawful discrimination in hiring decisions, wage and hour violations, and other compliance challenges.

Growing concerns over these and other potential negative outcomes—in addition to uncertainty regarding the challenges and implications already presented by AI tools and technologies—have provoked myriad legislative proposals at the federal, state, and local levels to regulate certain applications of AI in employment. Well-intentioned as proponents may be, the measures have failed to implement effective solutions to the problems they purport to address and have suffered from mistakes of haste from the outset. For instance, the same day the Colorado governor signed the state AI Act into law, he urged the legislature to “fine tune” certain problematic provisions. Shortly thereafter, he, the state’s attorney general, and the state senate majority leader sent a letter to businesses to “provide additional clarity” to address problematic aspects of the law and committed to “engage in a process to revise the new law” to “minimize unintended consequences” of its hasty enactment.

Local efforts to regulate AI have also fallen short. Notably, in 2023, New York City became the first American jurisdiction to regulate the use of AI in employment decisions—efforts some have characterized as “a toothless flop” and a “bust.” The New York City Department of Consumer and Worker Protection, the city agency tasked with enforcing the AI law, lacks the authority to initiate investigations and reported no complaints as of 2024.

In the absence of congressional action, federal labor and employment agencies have attempted to fill the void with various initiatives and measures. None moved the needle. Instead, these efforts consist of high-level, broad statements that do little more than confirm the obvious: existing laws apply to AI tools and devices used to make employment decisions. Federal agencies have failed to provide meaningful substantive guidance regarding how longstanding labor and employment laws and regulations apply. Moreover, the vehicles themselves—chosen by the agencies—undermined the potential value and effectiveness of the message. Worse, by eschewing the benefits of notice and comment procedures intended for formal rulemaking, agencies turned their backs on a wealth of knowledge and experience that would have been especially helpful in the context of rapidly developing technology and capabilities such as AI. Few employment agencies boast any (much less cutting-edge) technical or practical experience with AI.

This Article argues that the federal, state, and local governments have focused on messaging and signaling, as opposed to actionable, concrete regulatory substance and action. In doing so, regulators have shrewdly walked a line: voicing strong support for regulating AI while declining to push for aggressive regulations that would risk the ire of businesses and employers. Finally, this Article concludes that alternative approaches may provide the best path forward for addressing the risks associated with AI in the workplace.

I. INTRODUCTION

New technologies and tools powered by artificial intelligence (“AI”) are increasingly utilized in the workplace to facilitate the employment relationship.1See Bradford J. Kelley, Belaboring the Algorithm: Artificial Intelligence and Labor Unions, 41 YALE J. ON REG. BULL. 88, 89 (2024). From facilitating job postings to screening applications, measuring employee productivity to preventing theft, the potential efficiencies of AI devices or technologies may alter the ways in which employers and employees interact.2Id. If properly designed and executed, these innovations could enhance operational effectiveness, fence off human bias from employment decision making, reduce or prevent unlawful discrimination and harassment, and improve worker health and safety.3See Bradford J. Kelley, All Along the New Watchtower: Artificial Intelligence, Workplace Monitoring, Automation, and the National Labor Relations Act, 107 MARQ. L. REV. 195, 197–98 (2023). Like any tool, however, devices or technologies that use AI present concomitant risks of exacerbating or proliferating bias or discrimination in employment decisions, as well as complicating employers’ ability to maintain compliance with wage and hour law and other legal frameworks.4See Kelley, supra note 1, at 104–05. Employers must be proactive and diligent in the development, selection, application, and review of such technologies so that the efficiencies and other benefits are not subsumed by risk and liability.5See Bradford J. Kelley, Mike Skidgel & Alice Wang, Considerations for Artificial Intelligence Policies in the Workplace, LITTLER MENDELSON P.C. (Mar. 10, 2025), https://www.littler.com/publication-press/publication/considerations-artificial-intelligence-policies-workplace [perma.cc/3FPN-FRZL].

Growing concerns regarding potential uses and outcomes of AI tools in the workplace have been compounded by fears of both known and unknown challenges presented by these technologies.6See Kelley, supra note 1, at 102. Together, they have provoked political responses at the federal, state, and local levels in the United States, as well as in Europe.7See generally Keith E. Sonderling, Bradford J. Kelley & Lance Casimir, The Promise and the Peril: Artificial Intelligence and Employment Discrimination, 77 U. MIAMI L. REV. 1 (2022) (exploring the legal landscape in the United States and globally); see also Zach Williams, Unions Keep Pressure on Statehouses to Regulate AI, Protect Jobs, BLOOMBERG L. (Sept. 18, 2023), https://news.bloomberglaw.com/artificial-intelligence/unions-keep-pressure-on-statehouses-to-regulate-ai-protect-jobs [perma.cc/XWY4-YVJ6]. Indeed, in recent years, there have been myriad legislative and regulatory proposals that seek to control certain ways in which employers use AI in the workplace.8See id.

At the federal level, the Biden administration primarily emphasized symbolic measures—such as coordination efforts, studies, and warnings—rather than imposing substantive regulatory frameworks for AI governance.9See Kelley, supra note 1, at 92. Congress has yet to enact any legislation regulating the use of AI technology or devices in the workplace, which has largely left administrative agencies to fill the void. The agencies’ output consisted of vague gestures and messaging toward the same obvious point: existing federal law and regulations apply to AI, just like any other tool.10See Artificial Intelligence for the American People, TRUMP WHITE HOUSE ARCHIVES, https://trumpwhitehouse.archives.gov/ai/ [perma.cc/83C4-GW77] (last visited Oct. 25, 2025); see also Will Henshall, Why Biden’s AI Executive Order Only Goes So Far, TIME (Nov. 1, 2023), https://time.com/6330652/biden-ai-order/ [perma.cc/L5K7-7P7M] (noting that the Biden administration’s approach “left questions unanswered about how it could work in practice.”). Memoranda, initiatives, and general statements deferred serious practical discussions regarding tangible uses (and potential abuses) of AI employment devices and technology.11For instance, the Biden administration engaged with the CEOs of major tech companies to promote voluntary commitments and responsible AI development—an approach that appeased calls for federal involvement without imposing binding regulatory constraints likely to face resistance from industry stakeholders. See Press Release, The White House, Readout of White House Listening Session with Union Leaders on Advancing Responsible Artificial Intelligence Innovation (July 3, 2023), https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2023/07/03/readout-of-white-house-listening-session-with-union-leaders-on-advancing-responsible-artificial-intelligence-innovation/ [perma.cc/SQE6-KA2A]. This approach mollified some calling for federal action regarding use of AI in the workplace while simultaneously failing to advance any substantive limitations that might have provoked opposition from businesses and employers.12See Kelley, supra note 1, at 105.

During the Biden administration, the U.S. Department of Labor (“DOL”), the Equal Employment Opportunity Commission (“EEOC”), and the National Labor Relations Board (“NLRB”) announced various joint and independent efforts to address certain uses of AI in the workplace.13Id. at 91, 93, 103. These accomplished little. They neither provided tangible protections for workers nor practical guidance for employers.14Id. First, the output of federal labor and employment agencies regarding AI failed to clarify, with any semblance of consistency, how existing laws and regulations applied to AI technologies and tools. The EEOC and the NLRB had not released any AI guidance in over two years despite the continued rapid development and advancement of AI technology. Second, the AI-related output of these agencies eschewed the benefits of notice and comment—input that is especially valuable when the subject of government action is a new technology that is both complex and rapidly evolving.15See id. at 97–98, 103. Engagement of stakeholders at every stage of the development and use of AI devices and technology is important to ensure that new regulations—or new applications of existing regulations—benefit from a comprehensive and sophisticated understanding of the technologies.16Id. at 89, 91, 102. When regulators proceed without such a foundation, they undermine (if not negate entirely) the efficacy of their work, to the detriment of workers and employers.

Most substantive AI regulatory efforts have occurred at the state level.17See Artificial Intelligence 2025 Legislation, NAT’L CONF. OF ST. LEGISLATURES (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation [perma.cc/DT9A-JER4] (noting that in the 2025 legislative session, all 50 states, along with Puerto Rico, the U.S. Virgin Islands, and Washington, DC, introduced AI-related legislation, with 38 states adopting or enacting approximately 100 measures). In contrast to advisory federal efforts, several of these measures purport to impose limitations on employers’ use of AI during various stages of the employment process.18See Kelley, supra note 3, at 195, 210. However, many are poorly drafted or were hurriedly enacted or promulgated.19See id. at 209–10 (discussing how the then-NLRB General Counsel proposed an AI framework that was based on three brief sentences contained in a short symposium journal article and the proposal lacked both analytical support and practical detail, rendering the proposal speculative and undeveloped at best). The rush simply to “do something,” especially through legislation or regulation, is often counterproductive—and recent state-level AI measures are no exception. Hurrying to show constituents action prevented thoughtful consideration of the subject matter, resulting in inconsistent and flawed regulatory frameworks. For example, some of these measures lack basic, elemental components. They fail to define “AI,” “AI analysis,” and other operative terms central to the conduct the statutes purport to regulate.20See Madyson Fitzgerald, What is Artificial Intelligence? Legislators Are Still Looking for a Definition, STATELINE (Oct. 5, 2023), https://stateline.org/2023/10/05/what-is-artificial-intelligence-legislators-are-still-looking-for-a-definition/ [perma.cc/7WDS-JY2H]. This injects facial ambiguity that undercuts the statutes’ efficacy and benefits, just like any law that aims to regulate a subject it does not define.21See id. (noting that the language of “AI” and “artificial intelligence” can imply a range of systems which are capable of anything from machine learning to automated decision-making).

A recent law in Colorado highlights the pitfalls of proceeding too quickly when regulating a rapidly evolving technology. Deficiencies of the Colorado AI Act were apparent and acknowledged even before it was enacted.22See Ed Sealover, “With Reservations,” Polis Signs Landmark AI Regulation Bill, SUM & SUBSTANCE (May 21, 2024), https://tsscolorado.com/with-reservations-polis-signs-landmark-ai-regulation-bill/ [perma.cc/XR22-ZES8]. Indeed, on the same day he signed the Colorado AI Act into law, the governor wrote to the Colorado General Assembly enumerating several “reservations” about the law.23See Letter from Jared Polis, Governor, State of Colo., to Colo. Gen. Assemb. (May 17, 2024), https://www.dwt.com/-/media/files/blogs/artificial-intelligence-law-advisor/2024/05/sb24205-signing-statement.pdf?rev=a902184eafe046cfb615bb047484e11c&hash=213F4C6CDFF52A876011290C24406E7F [perma.cc/4KDY-3JC6]. He urged the legislature to “fine tune” certain provisions.24Id. Not content to wait for these critical adjustments, the governor, the state attorney general, and the state senate majority leader wrote an open letter to affected employers to “provide additional clarity” while promising to “engage in a process to revise the new law” to “minimize unintended consequences associated with its implementation.”25Letter from Jared Polis, Governor, State of Colo., Phil Weiser, Att’y Gen., State of Colo. & Robert Rodriguez, Majority Leader, State of Colo. Senate, to Innovators, Consumers, and All Those Interested in the AI Space (June 13, 2024), https://newspack-coloradosun.s3.amazonaws.com/wp-content/uploads/2024/06/FINAL-DRAFT-AI-Statement-6-12-24-JP-PW-and-RR-Sig.pdf [perma.cc/4Y42-YFUD]. As a result, the structure and key provisions of the Colorado AI Act will likely be amended before the effective date. With the law in limbo, Colorado state agencies remain in a compromised enforcement position, forced to take a “wait and see” approach. Unfortunately, other states may model their own measures after the Colorado statute, warts and all.26See Alex Siegal & Ivan Garcia, A Deep Dive into Colorado’s Artificial Intelligence Act, NAT’L ASS’N OF ATT’YS GEN. (Oct. 26, 2024), https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/ [perma.cc/9N64-3P57].

Meanwhile, some states have applied amorphous legal standards and concepts that are burdensome and unlikely to achieve some of the stated objectives of their proponents.27See Amanda Ottaway, ‘Everyone Ignores’ New York City’s Workplace AI Law, LAW360 (Mar. 1, 2024), https://www.law360.com/employment-authority/articles/1808951 [perma.cc/3NYN-QKB6]. As noted above, existing labor and employment laws obviously apply to AI tools in the workplace just as they do to traditional employment tools.28See Kelley, supra note 3, at 198. Anti-discrimination laws prohibit employers from discriminating against applicants and employees because of certain protected characteristics.29See Sonderling, Kelley & Casimir, supra note 7, at 6. Liability attaches for unlawful discrimination regardless of whether the discriminatory act was performed by a human employee or sophisticated AI.30See id. at 5. In this sense, new legal provisions may not be necessary to establish the basic point—antidiscrimination requirements clearly apply to AI technologies and tools used to make or assist in employment decisions in the workplace. Even so, AI’s potential efficiencies of scale in employment decision making have provoked longstanding concerns that AI might multiply discriminatory motivations and decisions, resulting in unlawful discrimination at scale.

Attempts to regulate AI at a local level have fared no better. In 2021, New York City passed what purported to be the broadest AI employment law in the United States.31See id. at 47. It imposes requirements on employers’ use of AI tools for hiring and promotion decisions in New York City.32See id. According to civil rights groups, key provisions were “introduced and rammed through in a rushed process that excluded workers, civil rights groups, and other stakeholders from providing any input.”33See Matt Scherer & Ridhi Shetty, NY City Council Rams Through Once-Promising but Deeply Flawed Bill on AI Hiring Tools, CTR. FOR DEMOCRACY & TECH. (Nov. 12, 2021), https://cdt.org/insights/ny-city-council-rams-through-once-promising-but-deeply-flawed-bill-on-ai-hiring-tools/ [perma.cc/S8HH-9ZYB]. As demonstrated in Colorado, haste to pass a law that can be touted as limiting AI undermines the law’s effectiveness. Practitioners have criticized the law because it leaves too many unanswered questions regarding the nature of the required audit and the AI tools or processes that fall under (or outside) the law’s mandate, as well as even the most basic questions regarding coverage.34See Sonderling, Kelley & Casimir, supra note 7, at 48.

Since it became effective in July 2023, the law has been widely panned as “a toothless flop,” a “bust,” and completely “ineffective.”35See Ottaway, supra note 27. The New York City Department of Consumer and Worker Protection, which enforces the AI law, lacks the authority to initiate investigations.36The Department of Consumer and Worker Protection encourages reporting of suspected violations of Local Law 144’s audit and notice requirements, in lieu of a mechanism for initiating investigations. See Niloy Ray, Monica Sislak & Eli Freedberg, NYC Department of Consumer and Worker Protection Issues Guidance on AI Regulations, LITTLER MENDELSON P.C. (July 5, 2023), https://www.littler.com/news-analysis/asap/nyc-department-consumer-and-worker-protection-issues-guidance-ai-regulations [perma.cc/58QE-Z73Q]; Ottaway, supra note 27. Further, the Department has stated that the agency has not received a single complaint since it began enforcing the law in July of 2023, a predictable development given the lack of evidence that employers were using AI tools in hiring and promotions.37See Ottaway, supra note 27. A Cornell University study published in early 2024 concluded that most employers in New York City have simply opted out of complying with the new law.38See id.

While a handful of other states and localities have enacted measures to regulate the use of AI tools in employment, many have not. The resulting patchwork of laws makes compliance difficult for employers operating nationally, and even within certain regions.39See Roy Maurer, AI Employment Regulations Make Compliance ‘Very Complicated’, SHRM (Nov. 26, 2024), https://www.shrm.org/topics-tools/employment-law-compliance/ai-employment-regulations-compliance-complicated [perma.cc/AV33-744S]. Federal inaction with respect to the broad regulation of AI technology has led some to infer that Congress has ceded the field, at least for the present.40See Joy C. Rosenquist, Bradford Kelley, Deborah Margolis & Alice Wang, Divergent Paths on Regulating Artificial Intelligence, LITTLER MENDELSON P.C. (Apr. 1, 2024), https://www.littler.com/news-analysis/asap/divergent-paths-regulating-artificial-intelligence [perma.cc/3BT9-DHZG]. The few federal endeavors have only exacerbated the burdens imposed by the patchwork. Not surprisingly, the rapidly proliferating federal, state, and local legislation and regulation in the AI arena already poses compliance challenges.

Although uncertainty in the overall regulatory scheme surrounding AI may vex lawyers and compliance personnel, private initiatives have stepped into the void, embracing self-restraint (or, as some charitably describe it, “self-regulation”) to foster responsible AI development and deployment.41See Ottaway, supra note 27. Thus, the current state of affairs—a nascent legal and regulatory landscape combined with private action—has created a workable environment facilitating the development of AI technologies with respect to the workplace.42See Vin Gurrieri, NYC AI Bias Bill May Be Compliance Headache for Employers, LAW360 (Dec. 3, 2021), https://www.law360.com/employment-authority/articles/1445563 [perma.cc/83Z3-FRVV].

Part II of this Article explores the federal regulatory landscape. This Part also examines the various joint and independent initiatives and measures that federal agencies have taken to address the misuse of AI in the workplace in the absence of regulations. Part III discusses specific proposals considered in recent years that illustrate the deficiencies of state efforts to regulate AI in employment. This Part also examines legislative efforts and proposals that some states have introduced, highlighting potential issues with their design and challenges of effective implementation. Many of these state-level initiatives have been poorly drafted and lack thoughtful consideration, threatening to exacerbate and extend the mistakes of early efforts to regulate AI in employment. Part IV analyzes various proposals introduced by local jurisdictions, focusing on the challenges and shortcomings of these local efforts. As set forth below, many of these local-level proposals have been poorly developed. Part V briefly surveys various international efforts and explains why it is important to track international AI developments. Finally, against the backdrop of overall regulatory uncertainty, this Article concludes in Part VI by outlining several recommendations.

II. THE FEDERAL LANDSCAPE

This Part of the Article explores the general federal regulatory landscape surrounding workplace AI, emphasizing executive orders and other agency actions during the Biden administration. It also covers specific regulatory efforts aimed at governing AI, highlighting the short-sighted initiatives undertaken by agencies to manage AI’s impact on employment.

A. Executive Actions

The Biden administration’s approach to AI centered on public displays of broad government interest, concern, and study, highlighting issues with AI technologies raised by labor unions.43See The White House, supra note 11. For instance, the White House hosted a listening session in June of 2023 with several high-profile union leaders to discuss the impact of AI on members, job quality, and civil rights.44See id. The Biden administration also relied on executive actions including a Blueprint for an “AI Bill of Rights” as well as requests for information.45See Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, THE WHITE HOUSE, https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/ [perma.cc/4FNL-WTS7] (last visited Oct. 25, 2025); see Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure, THE WHITE HOUSE (Jan. 14, 2025), https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2025/01/14/executive-order-on-advancing-united-states-leadership-in-artificial-intelligence-infrastructure/ [perma.cc/9AK3-2J3J]. In October 2021, the White House’s Office of Science and Technology Policy (“OSTP”) began a series of listening sessions and related events to lay the groundwork for an AI “Bill of Rights” allegedly “to guard against the powerful technologies we have created.”46See Eric Lander & Alondra Nelson, ICYMI: WIRED (Opinion): Americans Need a Bill of Rights for an AI-Powered World, THE WHITE HOUSE (Oct. 22, 2021), https://bidenwhitehouse.archives.gov/ostp/news-updates/2021/10/22/icymi-wired-opinion-americans-need-a-bill-of-rights-for-an-ai-powered-world/ [perma.cc/TC7K-277B]. The original premise for the document was the presumption by OSTP that basic civil rights protections are routinely violated by AI.47See id. Certain advocacy groups in the technology space supported this effort and lobbied for its release, which was delayed until October 2022 when, under new leadership, OSTP released a “Blueprint for an AI Bill of Rights.”48See Ellen Glover, AI Bill of Rights: What You Should Know, BUILT IN (Mar. 19, 2024), https://builtin.com/artificial-intelligence/ai-bill-of-rights [perma.cc/9YGV-G3D4]. Instead of the anticipated “Bill of Rights,” the “Blueprint” simply reiterated basic principles of privacy, transparency, and protections from discrimination.49See id. As the “Blueprint” was not the promised and anticipated output, it has been criticized as “toothless” and “insufficient.”50See Sonderling, Kelley & Casimir, supra note 7, at 42. In addition, critics noted that some of the goals included in the AI “Blueprint” would require the government to take aggressive steps to regulate and enforce AI restrictions, potentially hampering innovation.51See id.

In October 2023, President Biden signed a comprehensive executive order directing numerous federal agencies to take broad actions related to AI.52Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Nov. 1, 2023) (rescinded by Exec. Order No. 14,148, 90 Fed. Reg. 8237 (Jan. 28, 2025)); see Kelley, supra note 1, at 92–93. At least one researcher construed the order to “empower[] federal agencies to push the boundaries of their amorphous authority over advanced computational systems.”53See Adam Thierer, Why the End of Chevron Deference is Largely Meaningless for AI Policy, Part 2, MEDIUM (July 2, 2024), https://medium.com/@AdamThierer/why-the-end-of-chevron-deference-is-largely-meaningless-for-ai-policy-part-2-ba9276bc272f [perma.cc/Z7PX-P7R2]. Whatever the actual intent, the practical impact was far less grand than expansion of the regulatory frontier. Indeed, some of the directives were never implemented. For instance, the AI Executive Order directed DOL to issue, by the end of April 2024, a report on how the government can support workers displaced by AI, but DOL never issued the report.54See Bradford J. Kelley & Alice Wang, Artificial Intelligence Executive Order WHD and OFCCP Guidance Issued, LITTLER MENDELSON P.C. (May 1, 2024), https://www.littler.com/news-analysis/asap/artificial-intelligence-executive-order-whd-and-ofccp-guidance-issued [perma.cc/9U3A-4R6D].

B. Interagency and Intra-Agency Agreements

The Biden administration also focused on interagency and intra-agency agreements, known as Memoranda of Understanding (“MOUs”).55See Kelley, supra note 1, at 94–96. MOUs are generally agreements between agencies that outline how they will work together, share information, or coordinate investigations, training, enforcement, and other informal arrangements.56See id. Some MOUs have existed with minor modifications for extended periods. Others, more substantively partisan, are often casualties of each change of administration. Collectively, MOUs function as a network of interagency agreements intended to promote coordination across the administrative state and to streamline investigations and enforcement efforts targeting a wide range of employer practices.57See id. Done well, MOUs minimize government waste and duplication of efforts. But sharing of information can operate as an end-run around enforcement powers given to—and withheld from—a particular agency by Congress.

Despite their benefits, certain applications of MOUs—especially the more partisan varieties—may raise several concerns. Linking the investigatory powers Congress afforded to (or withheld from) separate agencies can be used to end-run statutory limitations. Such information sharing may also raise confidentiality concerns. Oftentimes, when data is shared or complaints are referred between agencies, the receiving agency lacks the same level of familiarity with the applicable confidentiality protections.58See id. at 96. Some critics of these interagency agreements contend that agencies should focus more on offering compliance assistance to the general public, enabling the regulated community to better understand and meet legal requirements.59See id.

The risk of abuse inherent in agency information sharing agreements was taken to new heights in the Biden administration, as media reports demonstrated that at least one federal department entered into agreements to share government-collected information with private law firms.60See Austin R. Ramsey, Lawmakers Probe DOL Data Sharing Pact with Plaintiffs’ Firm, BLOOMBERG L. (Nov. 21, 2024), https://news.bloomberglaw.com/daily-labor-report/lawmakers-probe-dol-enforcement-data-pact-with-plaintiffs-firm [perma.cc/SGY6-Q7E8]. Specifically, news reports indicated that DOL may have entered into a “common interest agreement” with a private plaintiffs’ law firm, Cohen Milstein Sellers & Toll PLLC.61Id. If true, this arrangement “suggests a new link between federal benefits enforcement action and private-sector litigation brought by plaintiffs’ firms.” Even more troubling, a federal magistrate judge warned that such a pact would allow regulators to “litigate in the shadows”—a deeply unsettling prospect given the potential funneling of confidential regulatory information into private litigation.62Id.

C. Federal Agencies

Despite the growing integration of AI tools into the workplace, federal labor and employment agencies during the Biden administration failed to provide clear, consistent, or technically sound guidance.63See Bradford J. Kelley, All the Regulatory Light We Cannot See: The Impact of Loper Bright on Regulating Artificial Intelligence in the Workplace, 49 SETON HALL J. LEGIS. & PUB. POL’Y 708, 714 (2025). Neither the EEOC nor the NLRB issued meaningful guidance explaining which uses of AI are permissible and which violate the law. This silence was particularly striking given AI’s rapid adoption across the employment lifecycle—from hiring and promotion to discipline and termination. The fact that the EEOC and other agencies struggled to keep pace with the details of AI technology limited their ability to provide meaningful guidance or impose effective regulatory limitations.64Id. The failure to use existing authorities cuts against contentions that additional powers or flexibilities are necessary or likely to achieve better results.65See Bradford J. Kelley, Wage Against the Machine: Artificial Intelligence and the Fair Labor Standards Act, 34 STAN. L. & POL’Y REV. 261, 269 (2023). No agency will be in a position to exercise additional or broader authority—to protect workers or to offer compliance assistance to employers—until it understands the technology it seeks to regulate.66See id. Armed with such expertise, agencies would be better positioned to more effectively marshal the tools and powers they already possess, both to provide guidance and enforce federal civil rights protections.

Much of the guidance issued during the Biden era came in non-binding formats, especially technical assistance documents, none of which were subject to public notice and comment.67See Kelley, supra note 1, at 95. These procedures, often required before agencies may promulgate regulations, are not required for technical assistance and other guidance that merely restates, explains, and clarifies existing regulations or requirements.68Id. at 103. However, given AI’s rapid evolution and expansion, participation and input from those closest to the technology’s development would improve the utility and efficacy of regulations.69Id. at 104. By forgoing that input, agencies turned aside the very information and expertise that might have strengthened both their guidance and their footing to regulate AI.70Id. at 104–05. Instead, agencies in the Biden administration decided to move forward alone.

Agencies opted to regulate in isolation rather than draw upon the practical expertise necessary to craft workable standards. Labor and employment agencies did not engage key stakeholders during the Biden administration, even outside the rulemaking context.71See Kelley, supra note 63, at 714–15. Notably, the EEOC has not held any public hearings on workplace AI technologies since January 2023, and the sole hearing conducted—non-technical in nature—did not feature a single AI vendor actively involved in developing such tools.72See Keith E. Sonderling & Bradford J. Kelley, Filling the Void: Artificial Intelligence and Private Initiatives, 24 N.C. J. L. & TECH. 153, 161 (2023). The work product produced by these agencies during the Biden administration lends support to criticisms that agencies lack not only technical expertise in AI, but also interest in developing it. Instead, most agency AI technical assistance retreated to broad assertions of uncontroversial principles that did little to help employers identify which uses of AI are permissible and where those uses cross the line into violating antidiscrimination statutes. These milquetoast efforts unreasonably chilled bona fide uses of AI technology by employers, even uses likely to reduce unlawful discrimination.

Federal agencies have frequently overlooked how AI technologies are procured and deployed in the workplace. A prime example is DOL’s Wage and Hour Division, which issued a Field Assistance Bulletin in 2024 to address wage and hour risks associated with AI.73See Field Assistance Bulletin 2024-1, U.S. DEP’T OF LAB. (Apr. 29, 2024), https://www.millercanfield.com/assets/htmldocuments/fab2024_1.pdf [perma.cc/5ZYK-JF89]; Kelley & Wang, supra note 54. The Wage and Hour Division withdrew FAB 2024-1 shortly after Executive Order 14110, on which it was predicated, was revoked by Executive Order 14148. Yet the bulletin failed to account for the reality that most employers rely on third-party vendors for AI tools.74See Kelley & Wang, supra note 54. Employers rarely build their own AI technologies from scratch. Instead, they frequently engage third-party software vendors that develop and sell AI-powered tools, which are then used to perform a wide variety of employment tasks.75See id. By disregarding the vendor-employer dynamic, the 2024 guidance overlooked practical implementation concerns, highlighting the need for robust stakeholder input before issuing such directives. This disconnect was mirrored in other agency actions. For example, the NLRB General Counsel’s October 2022 Memorandum introduced a skeletal AI regulatory framework with no meaningful analysis of how such tools are actually used in employment settings.76See Kelley, supra note 3, at 199. The lack of grounding in real-world implementation reflects a broader trend of agencies attempting to regulate AI without the necessary technical understanding or consultation with affected parties.

In some cases, agencies compounded the problem by failing to substantiate their positions. The April 2024 Field Assistance Bulletin, for instance, cited no evidence to support its assertions about how employers use AI.77See Kelley, supra note 63, at 715. Even when agencies cite their sources, those sources often reveal a flawed foundation. For instance, the former NLRB General Counsel’s memorandum establishing an AI framework relied on just three brief sentences from a short 2018 symposium journal article.78See Kelley, supra note 3, at 221. The article offers no substantive explanation of how the proposed framework would function in practice, and the relevant statements are entirely unsupported.79Id. at 221–22. Unsurprisingly, the NLRB general counsel’s memorandum similarly lacks practical detail.80See id. Notably, the article’s author acknowledges in a footnote that “[t]he proposal is laid out here only briefly, to be elaborated in future work”—a follow-up that was never published.81Id. at 222.

These shortcomings were exacerbated by the EEOC’s failure to involve the full Commission in critical decision-making. Throughout the Biden administration, all AI-related guidance was issued solely by the Chair without a vote by the Commission.82See Kelley, supra note 63, at 714. This unilateral approach lacked transparency and suggested a partisan orientation in what had previously been a nonpartisan space. Rather than reflecting balanced, deliberative policymaking, the agency’s actions appeared crafted to promote one view of the regulatory debate while sidelining opposing ones. This perceived lack of neutrality is well illustrated by the EEOC’s entanglement with the Center for Democracy & Technology (“CDT”), a progressive nonprofit that has long advocated for more aggressive AI regulation.83Id.; see also Sonderling & Kelley, supra note 72, at 184. During her tenure, the EEOC’s then–Legal Counsel—who was directly responsible for drafting agency policy and guidance—simultaneously served on the advisory committee for CDT’s “Project on Disability Rights & Algorithmic Fairness.” CDT’s website prominently listed both the EEOC and her official title while the organization actively lobbied the agency on AI policy.84See Kelley, supra note 63, at 714. This overlap raises serious conflict-of-interest concerns and casts doubt on the impartiality of AI-related guidance documents issued under her leadership. The appearance of favoritism was further reinforced by the EEOC’s public hearings on AI, where groups like CDT and other civil rights organizations were prominently featured, while employer-affiliated stakeholders were largely excluded.

During the Biden administration, the EEOC also took inconsistent positions on AI vendor liability, with its amicus briefs pointing in one direction and its public training materials in another. In litigation, the EEOC filed amicus briefs supporting a broad view of vendor liability, explicitly arguing that vendors may be held responsible for discriminatory outcomes caused by their AI tools.85Id. at 716–17. Yet in its public trainings, the agency sent the opposite message. For example, in a March 2024 training entitled “Artificial Intelligence in the Workplace: Real Life Examples of the Risks to Employers,” the EEOC instructed that “if your company uses software, computer systems, etc., created by someone else that discriminates, you will be on the hook, not the vendor.”86Steven A. Wagner, EEOC Training Inst. Presentation on Artificial Intelligence in the Workplace: Real Life Examples of the Risks to Employers 8–10 (Mar. 28, 2024) (on file with authors). A separate slide titled “Lessons Learned” reinforced the point, stating that while a “vendor may not be liable” for discriminatory software, employers “will be!”87Id. This stark contrast highlights an unresolved and confusing inconsistency: while the EEOC tells courts that vendors may face liability, it tells employers in training sessions that vendors will not.

In the end, the shortcomings in the federal response to AI in the workplace may help explain why a federal district court opinion neither cited nor acknowledged an amicus brief the EEOC filed in an employment discrimination case involving AI decisions.88See Kelley, supra note 63, at 717. While courts often afford little or no weight to amicus briefs—especially in an era characterized by sharp and frequent policy reversals—the court’s disregard of this particular brief is noteworthy. Unlike other areas subject to such “wild flip-flops,” the brief addressed a relatively stable and emerging domain of law where the EEOC’s views would reasonably be expected to carry persuasive value. See Keith E. Sonderling & Bradford J. Kelley, The Sword and the Shield: The Benefits of Opinion Letters by Employment and Labor Agencies, 86 MO. L. REV. 1171, 1201–02 (2021) (noting courts’ increasing skepticism toward agency amicus briefs). The court’s complete disregard of the EEOC’s amicus brief may suggest that courts are already becoming less willing to defer to agency interpretations on complex and technical matters like AI because the agencies lack a reliable, substantive comprehension of the technology on which they opine.89See id. Not only does the decision signal a disinclination to follow agency interpretations, but it also demonstrates that at least one court does not feel any obligation to even explain why it chose to ignore the EEOC’s position.

Even though Biden’s EEOC did not focus on public-facing AI guidance, the agency attempted to influence the AI regulatory landscape through more unofficial methods, such as press releases, settlement agreements, and training materials.90See Press Release, Equal Emp. Opportunity Comm’n, DHI Group, Inc. Conciliates EEOC National Origin Discrimination Finding (Mar. 20, 2023), https://www.eeoc.gov/newsroom/dhi-group-inc-conciliates-eeoc-national-origin-discrimination-finding [perma.cc/QK6K-VCDZ]; see Press Release, Equal Emp. Opportunity Comm’n, EEOC Hearing Explores Potential Benefits and Harms of Artificial Intelligence and other Automated Systems in Employment Decisions (Jan. 31, 2023), https://www.eeoc.gov/newsroom/eeoc-hearing-explores-potential-benefits-and-harms-artificial-intelligence-and-other [perma.cc/9C7C-YX2Q]. These channels allowed the agency to shape narratives around AI and discrimination without undergoing formal rulemaking or engaging with stakeholders through notice-and-comment procedures.

In 2023, the EEOC issued a press release announcing a conciliation agreement with DHI Group, Inc., a company that operates a job-search website for technology professionals.91See Press Release, Equal Emp. Opportunity Comm’n, DHI Group, Inc. Conciliates EEOC National Origin Discrimination Finding, supra note 90. The agreement resolved multiple national origin discrimination charges stemming from allegations that DHI permitted certain customers to post job listings that explicitly excluded U.S. workers based on national origin.92See id. The EEOC found reasonable cause to believe that this practice violated Title VII, which prohibits job advertisements that express a preference or impose limitations based on national origin.93See id. As part of the settlement, DHI agreed to provide monetary relief to the estate of the original complainant and to modify its platform’s programming to detect and flag potentially discriminatory phrases—such as “OPT,” “H1B,” or “Visa”—when used alongside restrictive language like “only” or “must.” In the press release, the EEOC’s Miami District Systemic Coordinator noted that “DHI’s use of programming to ‘scrape’ for potentially discriminatory postings illustrates a beneficial use of artificial intelligence in combatting employment discrimination.”94Id.
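To make concrete the kind of screening logic the conciliation agreement describes, consider the following sketch. It is purely illustrative: the term lists, function name, and matching rules are hypothetical stand-ins, not DHI’s actual system, which is not public. The sketch simply flags postings that pair a work-authorization term with restrictive language so that a human reviewer can assess them before publication.

    import re

    # Hypothetical term lists; an actual screening tool would be far more extensive.
    STATUS_TERMS = ["opt", "h1b", "h-1b", "visa"]
    RESTRICTIVE_TERMS = ["only", "must"]

    def flag_posting(text: str) -> bool:
        """Return True when a posting pairs a work-authorization term (e.g., 'OPT',
        'H1B', 'Visa') with restrictive language ('only' or 'must'), signaling that
        a human reviewer should check it for potential national origin discrimination."""
        words = re.findall(r"[\w-]+", text.lower())
        has_status = any(term in words for term in STATUS_TERMS)
        has_restriction = any(term in words for term in RESTRICTIVE_TERMS)
        return has_status and has_restriction

    print(flag_posting("Seeking Java developer. H1B candidates only."))    # True: flagged for review
    print(flag_posting("We sponsor H1B visas for qualified candidates."))  # False: not flagged

A production system would need fuzzier matching and context awareness, but even a basic co-occurrence check of this sort is enough to route suspect postings to human review.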

Later that year, the EEOC announced a settlement in what has been described as the agency’s first-ever AI discrimination case.95See Press Release, Equal Emp. Opportunity Comm’n, iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit (Sept. 11, 2023), [perma.cc/E38B-CQXQ]; see Annelise Levy, EEOC Settles First-of-Its-Kind AI Bias in Hiring Lawsuit, DAILY LAB. REP. (Aug. 10, 2023), https://www.bloomberglaw.com/bloomberglawnews/daily-labor-report/X4ER6U4000000 [perma.cc/UK8Z-D5QY]. The lawsuit involved iTutorGroup and related entities, which hired thousands of U.S.-based online tutors.96See Press Release, Equal Emp. Opportunity Comm’n, supra note 95. The EEOC alleged that the employer’s online application system automatically rejected female applicants aged 55 or older and male applicants aged 60 or older.97See id. While multiple media reports have characterized the EEOC’s iTutorGroup lawsuit as a case involving AI, the actual complaint only alleged that the online job application system requested dates of birth and was programmed to automatically screen out female applicants aged fifty-five or older and male applicants aged sixty or older.98See id.; Levy, supra note 95. While the EEOC’s complaint and proposed consent decree did not expressly reference AI or machine learning, the EEOC’s press release linked the case to its Artificial Intelligence and Algorithmic Fairness Initiative as an example of the types of technologies that the EEOC is interested in pursuing.99See Press Release, Equal Emp. Opportunity Comm’n, supra note 95. This framing appeared designed to link the lawsuit to broader concerns about algorithmic bias, despite the lack of any allegations regarding AI in the legal filings.
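The distinction drawn here is easy to see in code. The snippet below is a hypothetical reconstruction of the rule described in the EEOC’s allegations, not the employer’s actual software: a deterministic, hard-coded screen involving no model, training data, or statistical inference.

    from datetime import date

    def passes_age_screen(birth_date: date, gender: str, today: date) -> bool:
        """Reproduce the kind of rule alleged in the iTutorGroup complaint:
        automatically screen out female applicants aged 55 or older and male
        applicants aged 60 or older. The thresholds are fixed by a programmer;
        nothing here is 'learned' from data."""
        age = today.year - birth_date.year - (
            (today.month, today.day) < (birth_date.month, birth_date.day)
        )
        cutoff = 55 if gender.lower() == "female" else 60
        return age < cutoff

    # A 56-year-old woman is screened out; a 56-year-old man advances.
    print(passes_age_screen(date(1967, 6, 1), "female", date(2023, 8, 1)))  # False
    print(passes_age_screen(date(1967, 6, 1), "male", date(2023, 8, 1)))    # True

That the challenged logic reduces to a few fixed thresholds underscores why characterizing the case as an “AI” matter says more about agency framing than about the technology at issue.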

Taken together, these actions illustrate the EEOC’s strategy of advancing AI enforcement goals indirectly—through enforcement mechanisms and public messaging rather than formal regulatory pathways. While these efforts reflect a growing interest in addressing algorithmic harms, they also raise concerns about transparency and the coherence of the agency’s approach to emerging technologies.

Perhaps recognizing their own lack of readiness to regulate AI effectively, federal labor and employment agencies under the Biden administration turned to non-government organizations to fill the regulatory void.100See Press Release, U.S. Dep’t of Lab., US Department of Labor Announces Framework to Help Employers Promote Inclusive Hiring as AI-Powered Recruitment Tools’ Use Grows (Sept. 24, 2024), https://www.dol.gov/newsroom/releases/odep/odep20240924 [perma.cc/B2TZ-Z6YP]. A notable example was the DOL’s announcement on September 24, 2024, of the release of its “AI & Inclusive Hiring Framework,” described as a new tool designed to promote the inclusive use of AI in hiring technologies and improve employment outcomes for individuals with disabilities.101See Bradford J. Kelley, Alice H. Wang & Sean P. O’Brien, DOL Issues “AI & Inclusive Hiring Framework” Through Non-Governmental Organization, LITTLER (Sept. 25, 2024), https://www.littler.com/news-analysis/asap/dol-issues-ai-inclusive-hiring-framework-through-non-governmental-organization [perma.cc/E6F4-2EWK]. However, the new framework was not issued by the DOL itself. Instead, it was published by the Partnership on Employment & Accessible Technology (“PEAT”), a nonprofit initiative funded by the DOL’s Office of Disability Employment Policy (“ODEP”) and operated by a private contractor.102See Press Release, U.S. Dep’t of Lab., supra note 100. Despite receiving federal funds, PEAT maintains independence, and its website clearly states that its materials “do not necessarily reflect the views or policies of” ODEP or DOL, nor are they endorsed by the federal government.103See Kelley et al., supra note 101. According to DOL’s press release, the framework was developed with input from disability advocates, AI experts, government officials, industry leaders, and members of the public.104See Press Release, U.S. Dep’t of Lab., supra note 100. The process stemmed from a virtual think tank hosted by DOL and PEAT on April 17, 2023, which included participants from federal agencies, civil rights groups, disability organizations, and technology companies.105See Press Release, U.S. Dep’t of Lab., Readout: Department of Labor Gathered Experts, Stakeholders To Ensure More Inclusive Hiring As Automated Technology Affects Decision-Making (Apr. 17, 2023), https://www.dol.gov/newsroom/releases/odep/odep20230417 [perma.cc/63EL-HMSR]. Notably absent from this gathering were employers—even though the framework’s stated primary audience is employers deploying AI hiring tools.106See Kelley et al., supra note 101. Although employer viewpoints were solicited during broader stakeholder engagement sessions, employers were not given the opportunity to provide direct feedback on the draft guidance.107Id. This omission raises concerns about whether the final framework adequately reflects the practical challenges and operational realities employers face in striving to deploy accessible and inclusive AI technologies.108See id. In short, the process sacrificed balance—namely, a fair consideration of stakeholder interests, legal constraints, and implementation realities—and practical utility in favor of symbolic progress, potentially weakening the framework’s effectiveness and credibility.

Even without imposing sweeping substantive regulations, the Biden administration managed to create unnecessary burdens through piecemeal, ill-conceived AI initiatives. A clear example came in August 2023, when the Office of Management and Budget approved a request from DOL’s Office of Federal Contract Compliance Programs (“OFCCP”) to revise its “Itemized Listing” for federal contractor audits.109See Bradford J. Kelley, Chris Gokturk, David Goldstein & Niloy Ray, OFCCP Preparing to Scrutinize Federal Contractors’ Use of AI Hiring Tools and Other Technology-based Selection Procedures, LITTLER (Sept. 7, 2023), https://www.littler.com/news-analysis/asap/ofccp-preparing-scrutinize-federal-contractors-use-ai-hiring-tools-and-other [perma.cc/U2YD-EQX5]. The revised listing added a new requirement—“Item 21”—obligating contractors to disclose information and documentation about any policies, practices, or systems used to recruit, screen, and hire, including those involving “artificial intelligence, algorithms, automated systems, or other technology-based selection procedures.”110Id.

This requirement illustrates the harms of the administration’s regulatory approach. First, OFCCP offered no definition of “artificial intelligence,” while simultaneously expanding its reach to “algorithms” and “other technology-based selection procedures.”111Id. The result was breathtakingly overbroad, sweeping in even basic tools like online intake forms or database search functions that have nothing to do with AI.112Id. Second, the requirement imposed heavy compliance costs on contractors, who were tasked with cataloging technologies that are often integrated across multiple platforms, not always visible to end users, and, in many cases, proprietary to third-party vendors. Contractors were thus forced into an impossible position—either attempt to describe systems they did not build and cannot access, or risk noncompliance with OFCCP audits.113Id.

Equally troubling, the new disclosure obligation lacked any clear purpose or benefit. OFCCP did not explain how the information would be used, or why existing enforcement tools were insufficient. If no adverse impact was revealed, the agency had no basis for further inquiry; if impact was detected, OFCCP already had authority to investigate whether AI was a contributing factor. In other words, the requirement added bureaucratic burden without enhancing enforcement or worker protections.

This episode underscores the broader issues of the Biden administration’s AI regulatory posture: ambiguous mandates, poorly targeted obligations, and costly compliance exercises untethered from meaningful outcomes. Rather than offering clarity or protection, these efforts increased uncertainty, drained employer resources, and highlighted the federal government’s lack of preparedness to regulate AI responsibly.

D. Congressional Inaction

Although members of Congress have expressed interest in regulating certain aspects and uses of AI, the body has yet to take meaningful action. In recent years, Democratic lawmakers have reintroduced the Algorithmic Accountability Act, a bill that would grant the Federal Trade Commission (“FTC”) authority to promulgate regulations mandating that large companies assess their AI tools for potential unlawful bias.114See Algorithmic Accountability Act of 2022, H.R. 6580, 117th Cong. (2022). Specifically, the bill would require all large companies to perform a so-called bias impact assessment of any automated system that makes critical decisions in a variety of sectors, including employment, financial services, healthcare, housing, and legal services.115See id. However, the bill has been strongly criticized for its perceived overreach, lack of definitional clarity, insufficient direction to the FTC, and several other shortcomings—making its passage into law highly unlikely.116See Furkan Gursoy, Ryan Kennedy & Ioannis A. Kakadiaris, A Critical Assessment of the Algorithmic Accountability Act of 2022 (2022) (manuscript at 4–6), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4193199 [perma.cc/R6CG-KMPP].
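Although the bill does not specify a methodology, one familiar reference point for what a bias impact assessment might measure is the four-fifths rule drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below is a minimal illustration of that heuristic, not a procedure mandated by the bill; the group labels and numbers are invented for the example.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Compute selection rates from a mapping of {group: (selected, applicants)}."""
        return {group: selected / total for group, (selected, total) in outcomes.items()}

    def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Compare each group's selection rate to the highest group's rate. Under the
        four-fifths heuristic, a ratio below 0.8 is treated as preliminary evidence of
        adverse impact that warrants closer statistical review."""
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {group: rate / top for group, rate in rates.items()}

    # Invented numbers: 48 of 120 applicants in Group A advanced, versus 18 of 90 in Group B.
    for group, ratio in impact_ratios({"Group A": (48, 120), "Group B": (18, 90)}).items():
        print(f"{group}: impact ratio {ratio:.2f}" + (" (review)" if ratio < 0.8 else ""))

A statutory assessment would presumably demand far more, including documentation, significance testing, and review of the system’s design, but the arithmetic above captures the kind of disparity such an audit is meant to surface.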

More broadly, Congress has not expressly delegated authority to any labor and employment agency to regulate AI. In the absence of such authority, any agency attempting to regulate AI in the labor and employment context risks overstepping its jurisdiction, making its rules and guidance vulnerable to legal challenges for exceeding the scope of the agency’s authority.

III. STATE AI LAWS

This Part of the Article analyzes how states have approached the regulation of AI to address potential risks. As explicated in greater detail below, many of these state-level initiatives have been poorly drafted and lack thoughtful consideration, resulting in inconsistent and flawed regulatory frameworks.

A. AI Video Interview Laws

The COVID-19 pandemic reshaped not only how people work but also how some companies hire. With government restrictions limiting onsite operations, employers rapidly expanded remote work arrangements, which in turn accelerated the adoption of AI-driven hiring tools—most notably video interview platforms.117See Kelley, supra note 65, at 269. A 2020 survey found that 86% of employers were using virtual interview technology to support remote hiring.118See Sonderling, Kelley & Casimir, supra note 7, at 42. This rapid shift prompted several states to enact laws specifically aimed at regulating the use of AI in video interviewing.119Id.

The Illinois Artificial Intelligence Video Interview Act stands as one of the earliest state-level attempts to regulate AI in hiring.120See id. The law requires employers to provide applicants with advance notice if AI-driven video interview technologies will be used, including an explanation of how the AI functions and what general traits or characteristics it will assess.121See 820 ILL. COMP. STAT. ANN. 42/5 (West 2020). The advance notice must explicitly inform applicants that “artificial intelligence analysis” may be used in evaluating the interview.122See Sonderling, Kelley & Casimir, supra note 7, at 45. Additionally, the law gives applicants limited control over their data by allowing them to request deletion of their video interview, including all backup copies, which must be completed within thirty days of the request.123See 820 ILL. COMP. STAT. ANN. 42/5 (West 2020).

Despite its pioneering intent, the Illinois law is deeply flawed and may ultimately create more confusion and compliance challenges for employers than it resolves. Most notably, the statute fails to include essential definitional clarity—it does not define key terms such as “artificial intelligence,” “AI analysis,” or other operative language central to its regulatory scope.124See Sonderling, Kelley & Casimir, supra note 7, at 42. This omission introduces significant legal ambiguity and undermines the law’s enforceability, particularly given the wide range of technologies that might or might not fall within its purview.125Id.

Moreover, the statute’s notice requirements are vague, providing only a general outline of what the notice must include without specifying the level of detail required.126See id. at 46. The law is also conspicuously silent on critical enforcement mechanisms—it provides no guidance on penalties for noncompliance, fails to establish a private right of action, and lacks a designated enforcement authority.127See id. These omissions significantly weaken the law’s deterrent power and raise questions about its practical effectiveness.128Id. at 45–46. As a result, many common uses of AI in the video interviewing and remote hiring context may inadvertently escape coverage, leaving both employers and applicants in a state of uncertainty.129Id.

Jurisdictional ambiguity adds another layer of complexity. The law purports to protect applicants “based in Illinois,” but it does not clarify whether it applies to out-of-state employers, particularly when hiring for roles located outside of Illinois.130Id. at 46. Nor does it address whether an employer can legally reject applicants who decline to consent to the use of AI-driven video interviews, leaving a notable gap in both applicant protections and employer obligations.131See id.

A similar law enacted by Maryland in 2020 also reflects these regulatory shortcomings. Maryland’s statute prohibits the use of facial recognition technology during pre-employment interviews unless the applicant provides written consent via a specific waiver.132See MD. CODE ANN. LAB. & EMPL. § 3-717 (West 2020). This waiver must include the applicant’s name, the date of the interview, an acknowledgment of consent to facial recognition, and confirmation that the applicant has read the waiver.133See id. While Maryland’s law includes definitions for terms like “facial template” and “facial recognition services,” those definitions remain vague and insufficiently detailed, creating interpretive challenges that complicate employer compliance and raise enforcement concerns.134See Sonderling, Kelley & Casimir, supra note 7, at 46.

Together, these early state efforts reflect a broader pattern: while well-intentioned, current AI employment regulations are often underdeveloped, inconsistently drafted, and legally ambiguous—raising serious questions about their utility as models for broader policymaking in this rapidly evolving area.

B. The Colorado AI Act

In 2024, Colorado enacted Senate Bill 24-205, entitled "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" (commonly referred to as the "Colorado AI Act").135See COLO. REV. STAT. § 6-1-1701 et seq. (2024). The Colorado AI Act, which will not go into effect until 2026 at the earliest, aims to regulate the private-sector use of AI systems and, specifically, to address the risk of algorithmic discrimination arising from the use of "high-risk" AI systems.136See id. (explaining that the Colorado AI Act addresses, among other things, the risk of algorithmic discrimination arising from the intended and contracted uses of "high-risk artificial intelligence system[s].").

The legislative history of the Colorado AI Act casts a damning light on the law.137See Jake Parker, Misgivings Cloud First-in-Nation Colorado AI Law: Implications and Considerations for the Security Industry, SEC. INDUS. ASS'N (May 28, 2024), https://www.securityindustry.org/2024/05/28/misgivings-cloud-first-in-nation-colorado-ai-law-implications-and-considerations-for-the-security-industry/ [perma.cc/Y2WG-LVV8]. It was rammed through the legislative process so that it would be enacted before the EU's Artificial Intelligence Act.138See id. Hasty drafting and expedited consideration came at the cost of quality. On the same day he signed the Colorado AI Act into law, the Colorado governor wrote to the Colorado General Assembly stating that he had "reservations" about the law.139See Letter from Jared Polis et al., supra note 25. He urged the legislature to "fine tune" certain provisions.140See id. Shortly thereafter, the governor, state attorney general, and state senate majority leader authored an open letter to the business community to "provide additional clarity" and committed to "engage in a process to revise the new law" and "minimize unintended consequences associated with its implementation."141Id. Thus, the structure and key provisions of the Colorado AI Act will likely be amended before the effective date. Consequently, Colorado state agencies are now forced to take a "wait and see" position in anticipation of amendments.142See Tamara Chuang, Governor, Lawmakers Are Already Planning Big Revisions to Colorado's First-in-the-Nation Artificial Intelligence Law, COLO. SUN (June 14, 2024), https://coloradosun.com/2024/06/14/colorado-ai-bill-revisions/ [perma.cc/K5UV-VTRF]. This delay will also effectively limit the time available for notice and comment on any proposed rules. Meanwhile, employers cannot effectively prepare to comply with a comprehensive new law with novel requirements while waiting for the state government to fix a law that was defective from the start.

One might reasonably expect that a government enacting a sweeping statute to regulate a new, growing, and rapidly evolving technology would resolve important and fundamental aspects of the regulatory regime before the law is passed. At the very least, one might reasonably accuse Colorado of recklessness for charging ahead with an admittedly facially deficient law based on vague intentions to pass subsequent amendments. Particularly with respect to a new technology in a rapidly evolving area, it behooves the government to get it right the first time.

Although the Colorado AI Act directs the state attorney general to promulgate rules to implement and enforce the law and permits other state agencies to issue related guidance and regulations, Colorado state courts will not be required to defer to the agencies' interpretations of those rules.143See Kelley, supra note 63, at 718. Critically, Colorado courts do not give binding deference to agency interpretations.144See id. Most notably, in 2021, the Colorado Supreme Court expressly declined to defer to a state agency's interpretation of a statute in Nieto v. Clark's Market, Inc.145See Nieto v. Clark's Market, Inc., 488 P.3d 1140, 1149 (Colo. 2021). In that case, the Colorado Supreme Court emphasized that it would consider agency interpretations to be "persuasive evidence" for Colorado courts to factor into their determinations.146Id. The court further noted that it was "unwilling to adopt a rigid approach to agency deference that would require courts to defer to a reasonable agency interpretation of an ambiguous statute even if a better interpretation is available."147Id.

C. California AI Laws and Regulations

In 2024, the Civil Rights Council, a branch of California's Civil Rights Department, issued proposed regulations for employers' use of AI and automated decision-making systems.148See Press Release, Cal. C.R. Dep't, Civil Rights Council Releases Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems (May 17, 2024), https://calcivilrights.ca.gov/2024/05/17/civil-rights-council-releases-proposed-regulations-to-protect-against-employment-discrimination-in-automated-decision-making-systems/ [perma.cc/SMD4-MLWE]. While the Civil Rights Council positioned these rules as a necessary step toward protecting civil rights in the digital age, the proposed regulations were riddled with serious flaws that undermined their credibility and legal viability.149See id. First, the proposed regulations were based on studies, reports, and feedback from 2021.150See id. (noting the 2021 hearing). See also Cal. C.R. Dep't, Civil Rights Council Final Statement of Reasons 35, https://calcivilrights.ca.gov/wp-content/uploads/sites/32/2025/06/Final-Statement-of-Reasons-regulations-automated-employment-decision-systems.pdf [perma.cc/ZB3X-ZKRL] (recognizing comment criticism that the Civil Rights Council "based these proposed regulations on studies, reports and feedback from three years ago, in 2021"). In the years since then, the AI landscape has undergone significant transformation, particularly with the rapid advancement of generative AI technologies.151See Kelley, supra note 65, at 275–78 (noting that generative AI tools like ChatGPT have ushered in a sea change in multiple industries, including the legal and medical industries). Remarkably, the proposed regulations failed even to mention generative AI, despite its growing prevalence in hiring, performance evaluation, and workplace management systems.152See Press Release, supra note 148. Second, the Civil Rights Council's authority to issue these regulations is itself questionable because it lacks clear statutory authorization to do so and appears to be stretching its mandate by framing the regulation of emerging technologies as an extension of its anti-discrimination mission.153See Letter from Michael Richards, Senior Director, U.S. Chamber of Com., to Rachael Langston, Assistant Chief Couns., Cal. C.R. Council (Oct. 17, 2024), https://www.uschamber.com/technology/u-s-chamber-letter-to-california-civil-rights-council-on-artificial-intelligence [perma.cc/C4AW-R834] (noting that one of the "deficiencies in the Council's rulemaking approach" was the lack of "clarity on its legal authority to expand the scope of regulations without expressed legislative authorization to do so"). Third, several commenters—including the U.S. Chamber of Commerce—criticized the Council for disregarding stakeholder concerns about the potential for a dramatic surge in litigation targeting vendors and developers of automated decision-making tools.154See id. The U.S. Chamber of Commerce argued that "the proposed regulations could impose significant, disproportionate costs on innovation and not survive legal challenges, leaving the business community without necessary certainty."155Id. The Chamber further stated, "[w]e believe until these matters are fully addressed, that [the Council] should stop any further movement of the proposed rules."156Id.

Left unchecked, California's proliferating AI regulations could pose an existential threat to businesses that are unable to modify their operations to comply, as well as to those that would suffer a serious competitive disadvantage as a result of complying with the state's regulatory regime.

IV. LOCAL AI LAWS: THE NEW YORK CITY AI LAW

This Part of the Article explores how one local government has sought to regulate AI to mitigate its risks, analyzing the New York City AI law and focusing on its challenges and shortcomings. Like several of the efforts discussed above, the measure was poorly constructed and inadequately considered, leading to a fragmented and ineffective approach to AI regulation.

In 2021, New York City enacted what purported to be the broadest AI employment law in the United States, which ostensibly curtails employers’ use of AI tools for hiring and promotion decisions in New York City.157See Sonderling, Kelley & Casimir, supra note 7, at 47–48. As noted above, it remains unclear whether employers in New York City were broadly using AI technology in hiring and promotion decisions before the law was enacted. But despite lacking a clear need for regulation, much less such a heavy-handed one, the city government pressed ahead.

As with the Colorado AI Act, the legislative history of the NYC AI law illuminates the shortcomings associated with an expedited legislative process. Despite the law's pro-employee intentions, a large number of civil rights groups, including the National Employment Law Project, the New York Civil Liberties Union, and the NAACP Legal Defense and Educational Fund, condemned the measure as vague and ineffective, contending that it will actually "rubber-stamp" the very discrimination it seeks to prevent.158Id. at 48. Other groups similarly argued that the ordinance's key provisions were "introduced and rammed through in a rushed process that excluded workers, civil rights groups, and other stakeholders from providing any input."159Id. Practitioners criticized the law because "it leaves too many unanswered questions regarding the nature of the required audit, the AI tools, or processes that fall under (or outside of) the law's mandate, as well as basic coverage."160See id. Practitioners further argued that "[t]he law's poor construction creates an HR nightmare for employers seeking to staff up."161Id. at 48–49. A former EEOC Commissioner and his staff lamented that "the New York City law could have been a model for jurisdictions around the country to follow, but instead it typifies a missed opportunity and leaves important forms of discrimination unaddressed."162Id. at 49.

The New York City law has accomplished little since it went into effect in July 2023. A Law360 article entitled "'Everyone Ignores' New York City's Workplace AI Law" explains that most practitioners have concluded that the law has been a "toothless flop" and highly ineffective.163See Ottaway, supra note 27. Similarly, the Society for Human Resource Management, the world's largest professional association dedicated to the practice of human resource management, published an article titled "New York City AI Law Is a Bust."164See Roy Maurer, New York City AI Law Is a Bust, SHRM (Feb. 18, 2024), https://www.shrm.org/topics-tools/news/technology/new-york-city-ai-law [perma.cc/EQN6-3M2U]. Since enforcement began in 2023, the New York City Department of Consumer and Worker Protection—which is responsible for enforcing the law but lacks the authority to initiate investigations independently—has not received any complaints to date.165See id. The law also permits employers to opt out if a human is involved in the decision-making process where the tool is used. Not surprisingly, a Cornell University study published in early 2024 found that the overwhelming majority of employers in New York City chose not to comply.166See id. Of the 391 employers surveyed, only eighteen had posted the required audit reports on their websites, and just thirteen published notices informing applicants that an automated employment decision tool was being used in their evaluation.167See Lucas Wright, Roxana Mika Muenster, Briana Vecchione, Tianyao Qu, Pika (Senhuang) Cai, Alan Smith, Comm 2450 Student Investigators, Jacob Metcalf, & J. Nathan Matias, Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability (June 2024), https://dl.acm.org/doi/fullHtml/10.1145/3630106.3658998 [perma.cc/L2GY-SZH7].

However, while many businesses blatantly refuse to comply with the law's requirements, countless others have invested substantial time and money in evaluating existing or anticipated uses of AI systems in their operations and in taking steps to ensure compliance. These businesses are incurring significant costs to consult with vendors, lawyers, and consultants (1) to assess whether specific uses of AI are covered by the NYC AI law; (2) for covered uses, to develop and implement new or modified policies and procedures to ensure that those uses remain lawful and compliant; and (3) to perform or obtain bias audits of any AI systems covered by the law. As a result, the NYC AI law is affecting employers operating within city limits in differing ways, but it has largely failed to address the actual or perceived harms to employees that can flow from the use of AI systems.

Local jurisdictions have not only failed to support employers in navigating the challenges posed by AI; they have, in some cases, actively exacerbated them. A striking example occurred in October 2023, when New York City launched an AI-powered chatbot intended to assist small business owners.168See Jake Offenhartz, NYC’s AI Chatbot Was Caught Telling Businesses to Break the Law. The City Isn’t Taking It Down, AP (Apr. 3, 2024), https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21 [perma.cc/3BC4-HDTZ]. Instead of offering reliable guidance, the chatbot delivered bizarre responses, misrepresented city policies, and even advised employers to break the law.169Id. For instance, the chatbot falsely claimed it was legal to fire an employee for reporting sexual harassment, failing to disclose a pregnancy, or refusing to cut their dreadlocks.170Id. In another troubling exchange, it responded to a question about whether a restaurant could serve cheese that had been nibbled on by a rodent by stating, “Yes, you can still serve the cheese to customers if it has rat bites,” advising the user to assess the “extent of the damage” and to “inform customers about the situation.”171Id. While the chatbot included a disclaimer that it may “occasionally produce incorrect, harmful or biased” information along with a warning that its responses do not constitute legal advice, these caveats do little to excuse its dangerous and misleading outputs. The incident underscores how ill-equipped some local jurisdictions are to responsibly regulate or deploy advanced AI technologies.172Id.

V. INTERNATIONAL APPROACHES

In stark contrast to the United States, many countries—particularly in Europe—have embraced a far more prescriptive and heavy-handed approach to regulating AI.173See Sonderling, Kelley & Casimir, supra note 7, at 49–50; see also Sonderling & Kelley, supra note 72, at 199. This aggressive regulatory posture is epitomized by the European Union's AI Act, a sweeping framework that imposes extensive compliance obligations on developers and users of AI technologies.174See Sonderling & Kelley, supra note 72, at 199. While some U.S. lawmakers are beginning to view the EU model as a potential template—evident in the drafting of the Colorado AI Act, which draws heavily from the EU AI Act's regulatory structure—there are significant concerns that this approach will hamper innovation and economic dynamism.

Rather than fostering responsible innovation, the European regulatory model has created substantial barriers to entry, especially for startups and smaller enterprises.175See id. at 191. The costs and complexities of compliance under the EU AI Act threaten to stifle competition, consolidate market power among a few large players, and discourage new entrants from developing and deploying cutting-edge AI solutions.176See, e.g., id. (noting that the European regulatory model creates unnecessary barriers to entry). These structural burdens have contributed to Europe's lag in cultivating global tech giants. Unlike the United States, which is home to most of the industry-leading technology companies, Europe has failed to produce comparable powerhouses in the digital and AI sectors.177See id. at 191 (explaining that the "heavy-handed regulatory approach seen in the EU where there are no European counterparts of Silicon Valley-based companies such as Google, Facebook, or Apple" is "the antithesis of a laboratory of the technological innovation framework").

This disparity is no coincidence. Europe’s regulatory-first mindset, largely characterized by precaution rather than innovation, has often placed risk aversion above technological advancement.178See, e.g., id. at 163 (discussing the European Union’s establishment of strict safeguards for AI systems). As the U.S. considers its regulatory future, policymakers must be cautious not to replicate a framework that, while well-intentioned, may ultimately hinder growth, suppress innovation, and erode the competitive advantages that have made the American tech sector a global leader.

VI. RECOMMENDATIONS

This Part examines recommendations that can help reduce the risks associated with AI, reduce uncertainty, and protect employees without hampering innovation. These recommendations aim to promote a balanced regulatory framework that encourages responsible AI adoption and use while ensuring fairness, transparency, and accountability in the workplace.

A. Federal Preemption

The increasing patchwork of state and local AI-related laws, which often impose overlapping and conflicting requirements, has created significant compliance challenges for employers. These conflicting requirements impose a complex web of challenges for employers, employees, and unions alike, with far-reaching consequences for job security, workplace conditions, and even unions' ability to represent and advocate for their members effectively at the bargaining table.179See Kelley, supra note 1, at 103–04. As such, Congress must seriously consider establishing a national standard that streamlines regulatory compliance and preempts conflicting state and local frameworks. Given the complexity and national scope of AI regulation, Congress is well suited to weigh the competing interests, policies, and values inherent in such policy judgments through a comprehensive, bona fide legislative process. Although such comprehensive legislative treatment has been less common in recent years, it remains the best way to confront the weighty questions posed by AI technology and its impact on so many industries and aspects of life. This is especially critical for employers operating across states or countries, as the proliferation of state and local laws threatens to create countless, often inconsistent—and potentially conflicting—compliance obligations.180See id. at 105.

Federal preemption would resolve some of the concerns that are emerging at the state level. In fact, in the letter that the Colorado governor, the state's attorney general, and the senate majority leader sent to businesses to "provide additional clarity," the Colorado political leaders strongly criticized the state-by-state patchwork of regulation and advocated for federal preemption.181Letter from Jared Polis et al., supra note 25. A state-by-state patchwork of regulation poses significant challenges to the cultivation of a strong technology sector. The Colorado letter thus signals to federal policymakers that state leaders are interested in establishing a national regulatory framework for AI rather than in contributing one of fifty distinct state regimes. Regulatory harmonization across the states would likewise reduce the burdens of navigating a patchwork of compliance requirements—burdens that often deter investment and disproportionately hinder small technology firms.182Id. at 2.

B. Moratorium on Federal and State AI Laws

A more measured alternative to preemption would be for Congress to enact a time-limited moratorium on new federal, state, and local AI-specific laws. Such a moratorium would establish a defined “learning period” designed to prevent the proliferation of inconsistent and burdensome mandates while avoiding the immediate creation of a patchwork regulatory landscape.183See Adam Thierer, Getting AI Policy Right Through a Learning Period Moratorium, R STREET (May 29, 2024), https://www.rstreet.org/commentary/getting-ai-policy-right-through-a-learning-period-moratorium/ [perma.cc/H3F8-ZK49]. See generally Evangelos Razis & James C. Cooper, The Federalist’s Dilemma: State AI Regulation & Pathways Forward, HARV. J.L. & PUB. POL’Y (forthcoming 2025). By pausing new legislation, Congress would provide “breathing space” for algorithmic innovation, giving lawmakers, agencies, and researchers the opportunity to study emerging technologies and identify the areas that most warrant scrutiny and potential regulation.184Thierer, supra note 183. In doing so, this approach would allow for evidence-based regulations that are simultaneously more targeted and comprehensive and would close the real gaps in existing laws without hindering technological progress. Importantly, this approach would not be deregulatory. Existing federal and state statutes already apply to AI, and a moratorium would allow federal agencies to assess how their current authorities can be used to address misuse of AI in the workplace and beyond.185See Razis & Cooper, supra note 183, at 49–50. In effect, the moratorium would balance innovation with oversight, ensuring that future regulation is grounded in evidence and tailored to real risks rather than speculative concerns.186Id. at 50–51.

In theory, a moratorium would offer a dual benefit: it would halt the proliferation of state and local AI laws while also discouraging Congress from rushing to enact federal legislation driven primarily by the fear of a regulatory patchwork.187Id. By freezing the state regulatory landscape, lawmakers would gain the time to evaluate whether existing laws, such as Title VII, the Americans with Disabilities Act, and related statutes, are sufficient to address the risks of AI-driven discrimination in hiring and employment.188Id.; see also Thierer, supra note 183. If those laws prove adequate, the moratorium will have prevented unnecessary federal overreach adopted merely to block state action. Conversely, if genuine gaps in existing protections emerge, Congress will be able to craft targeted legislation focused specifically on unaddressed employment-related harms, avoiding the sweeping, ill-fitting mandates that often accompany premature regulation of emerging technologies.189See Razis & Cooper, supra note 183, at 50–51.

While a moratorium offers the promise of regulatory “breathing room,” it carries significant drawbacks. First, it risks delaying necessary protections for workers and consumers by freezing legislative responses precisely when AI technologies are evolving most rapidly.190See Ali Swenson, How a GOP Rift Over Tech Regulation Doomed a Ban on State AI Laws in Trump’s Tax Bill, ASSOCIATED PRESS (July 3, 2025), https://apnews.com/article/artificial-intelligence-republicans-trump-tax-bill-97d700da09cac62aa510eb4411bab24e [perma.cc/9DE3-TCZC]. Second, it could entrench harmful practices by allowing problematic uses of AI to spread unchecked during the pause. Third, if drafted too broadly, a moratorium could be used to weaken or block both proposed and existing protections for children and vulnerable consumers who are more susceptible to AI-related harms.191Id. Fourth, it may be criticized as federal overreach, undermining state authority and preventing states from serving as “laboratories of democracy.” Finally, critics warn that such a pause would amount to “AI amnesty” for big technology companies—giving powerful companies years of unregulated growth and dominance.192Id. And once the moratorium expires, lawmakers may face heightened political pressure to act quickly, increasing the likelihood of rushed or overly broad regulation—ironically creating the very problem the moratorium was designed to avoid, but at a stage when the harms are more deeply entrenched.

C. Choice of Law Framework

Another potentially transformative solution is the implementation of a "choice of law" framework. Under such a framework, parties have the autonomy to select the legal regime that governs their relationship at the time of contracting.193See Memorandum from Logan Kolas, Four Options to Tame the A.I. Patchwork, Am. Consumer Inst. for Citizen Rsch. (July 1, 2024), https://www.theamericanconsumer.org/wp-content/uploads/2024/07/AI-Policy-Memo-ACI-1.pdf [perma.cc/A9SK-NWJ2]; see Razis & Cooper, supra note 183. This approach not only sidesteps contentious debates about federal preemption of state AI regulations but also fosters a healthy, market-driven form of regulatory competition among states.194See Razis & Cooper, supra note 183, at 3–4. Rather than imposing a one-size-fits-all federal mandate that risks stifling innovation or disregarding regional priorities, a choice-of-law framework empowers states to serve as regulatory laboratories, offering diverse models of AI governance.195Id. at 59–62. In turn, businesses and workers would be able to align themselves with the jurisdiction whose legal standards best reflect their needs, values, and risk tolerance.196Id. This system would preserve state autonomy, encourage innovation in legal frameworks, and offer stakeholders meaningful choice in how AI is regulated in the employment context and beyond.197See id.

D. Self-Restraint and Reasonable Conduct

Given the morass of current regulatory efforts, many organizations are wisely charting their own path forward without waiting for the relevant governmental entities to catch up to the rapidly developing field of AI.198See Sonderling & Kelley, supra note 72, at 156. In the absence of AI-specific regulations, private initiatives have charted a restrained and reasonable course for using AI technology in the workplace and fostering responsible AI development and deployment.199See id. These private initiatives aim to harness the advantages of AI while minimizing collateral negative outcomes that might incur liability.200See id. at 156–57. In recent years, many major companies have adopted and published their own AI principles and created resources such as templates and policies for responsible AI use. To expand their impact, they have formed partnerships, while universities and civil rights groups have also developed ethical guidelines for AI design and deployment.201See id. at 173–77.

These private initiatives are important to the technological and legal developments surrounding AI for several reasons. First, specific industries have first-hand expertise in AI development that legislative bodies and governmental agencies lack.202See Sonderling & Kelley, supra note 72, at 157. See also William Magnuson, Artificial Financial Intelligence, 10 HARV. BUS. L. REV. 337, 373 (2020). Since certain industries lead efforts to fund, develop, deploy, and implement AI, private initiatives are often better positioned than government regulators to address the unique challenges that arise.203See Magnuson, supra note 202, at 373–74 (using the example of the role of AI in the financial industry and arguing that "it is likely that self-regulation will be significantly more effective at cabining artificial intelligence's risks than regulatory enforcement actions could ever be"). Following Loper Bright, courts may be more willing to recognize and give weight to the technical expertise and practical experience of businesses in interpreting AI-related regulations, particularly where agency interpretations lack clear statutory grounding or domain-specific knowledge. Government mandates often have a much broader impact than their initial target, and regulatory efforts are often not designed with the specific industries that use AI in mind.204See Sonderling & Kelley, supra note 72, at 157. Second, private sector initiatives can play a vital role in building a culture of trust, transparency, and accountability in the development and use of AI technologies.205See Kristen E. Egger, Artificial Intelligence in the Workplace: Exploring Liability Under the Americans with Disabilities Act and Regulatory Solutions, 60 WASHBURN L.J. 527, 556 (2021). Third, because companies using AI face a constellation of enacted laws and pending proposals at the federal, state, and local levels (not to mention international measures), many with differing requirements, they need alternative compliance approaches. Responsible initiatives can help position entire industries to proactively adapt to future regulations while also mitigating potential risks associated with their AI tools and systems.206See generally Magnuson, supra note 202. Further, effective principles may be used as models for future governmental regulations, to the significant benefit of businesses.

The federal government actively encourages employers to adopt responsible AI practices, recognizing that the effectiveness of several key federal laws depends on voluntary compliance from the private sector.207See Sonderling & Kelley, supra note 72, at 192–93. U.S. antidiscrimination laws, such as Title VII, are prime examples. These laws rely heavily on employers’ efforts to voluntarily comply, monitor, and self-correct in the absence of constant regulatory oversight.208See id. In response to the rapid integration of AI in employment and business practices, multiple federal agencies—including the Federal Trade Commission, EEOC, and DOL—have issued initial guidance that emphasizes self-governance as a foundational step.209See id. These agencies promote voluntary compliance not as a regulatory afterthought but as a critical mechanism for addressing the evolving risks and ethical challenges posed by AI. By aligning internal practices with agency recommendations, organizations can not only mitigate legal risk but also demonstrate proactive leadership in building trustworthy and equitable AI systems.210Magnuson, supra note 202, at 374 (“In the end, these sorts of private sector endeavors are essential to ensure that artificial financial intelligence leads to fair, efficient, and stable outcomes.”).

VII. CONCLUSION

In the years ahead, the continued integration of AI and automation in the employment arena will likely lead to an increasing wave of regulatory and legislative activity at both the federal and state levels. As lawmakers and agencies continue to grapple with how best to address the opportunities and risks posed by AI, it is essential that regulatory efforts avoid becoming overly burdensome or counterproductive. Poorly designed rules—especially those that are vague, duplicative, or impractical—risk stifling innovation, deterring investment, and impeding the very technological advances that drive productivity and economic growth.

To be effective, any future AI regulations must strike a careful balance between protecting workers’ rights and promoting fairness, while also maintaining the flexibility needed to foster responsible innovation. This balance can only be achieved through collaborative policymaking that includes input from a broad spectrum of stakeholders—employers, employees, vendors, civil rights advocates, and others. Regulations should offer clear, workable guidance that reflects the rapidly evolving nature of AI technology, while providing enough consistency to allow companies to plan and adapt with confidence.

For employers, this shifting landscape demands vigilance and adaptability. Staying informed about regulatory developments at all levels of government is not just advisable—it is essential. As the legal framework around workplace AI becomes more fragmented and jurisdiction-specific, companies must develop nuanced legal strategies and technological solutions tailored to where they operate. Proactive compliance measures, such as conducting internal audits, updating internal policies, and implementing ethical AI review processes, will become increasingly important for mitigating legal and business risks.

Furthermore, as regulatory standards continue to rapidly evolve, employers should establish key mechanisms for regular reassessment of their AI-related practices and policies. This includes revisiting vendor agreements, updating training programs, and engaging with legal counsel to evaluate new obligations. In a multi-jurisdictional environment, where divergent rules are likely to arise, the ability to maintain a coherent yet flexible compliance framework will be a key differentiator for responsible and resilient organizations.

 


[*] Bradford J. Kelley is a Shareholder in Littler Mendelson, P.C.’s Washington, DC office. He previously served as Chief Counsel to a Commissioner of the U.S. Equal Employment Opportunity Commission (“EEOC”), and as a Senior Policy Advisor in the U.S. Department of Labor’s Wage and Hour Division.

[**] Andrew B. Rogers is the Administrator of the U.S. Department of Labor’s Wage and Hour Division. Prior to his Senate confirmation, he served as Acting General Counsel of the EEOC, where he previously held roles as Chief Counsel and Chief of Staff to the Chair. Before entering public service, he spent twelve years in private practice, focusing on wage and hour litigation and compliance, employment discrimination, and workplace violence. The views and opinions set forth herein are those of the authors and do not necessarily reflect the views or opinions of the authors’ employers. For helpful comments and advice on prior drafts, the authors thank Tessa Gelbman, Mike Skidgel, and Allan King.
