Section 8

Building Worker Power in Cities & States:
Regulating AI in the Workplace

09/01/2024

Background

Across different sectors of the economy, the integration of artificial intelligence (AI) and algorithmic management tools is changing the experience of work. At present, these technologies’ impacts on workers’ autonomy, health, and safety are still being assessed and understood. They have increased employers’ capacity to surveil and collect data on their workers, with a growing number of unfair labor practice charges and worker complaints revealing how employers are leveraging these tools in ways that affect organizing and collective bargaining. 

Although there is currently no legislation at the federal level explicitly regulating the use of AI in workplaces, policymakers and regulatory agencies have recognized the risks such technologies can pose to workers’ rights.1 National Labor Relations Board General Counsel Jennifer Abruzzo has called for more robust enforcement of existing labor law to protect workers from intrusive surveillance practices, citing concern over how such practices could interfere with workers’ Section 7 rights. In October 2023, the Biden administration issued Executive Order 14110 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), directing federal agencies to guide the responsible development and use of AI. Accordingly, the Department of Labor released a set of guidelines on the ethical development and implementation of AI systems in the workplace, outlining principles that center worker empowerment for developers and employers.2

Objective of State Intervention

A number of cities and states have proposed or passed legislation to regulate the use of algorithmic tools in the workplace, primarily to mitigate bias and discrimination in hiring and decision-making. However, states and localities can go further by implementing a framework for joint decision-making, oversight, and accountability in the deployment of algorithmic technologies, which can help protect and strengthen worker voice and agency.

Cities and states can regulate the use of AI in the workplace in order to protect workers’ right to engage in protected concerted activity without employer interference. At a minimum, policies can and should ensure that workers have access to secure channels of communication, information, and transparency around the use of AI-enabled technologies in workplaces. Moreover, states and localities can explore options that empower workers and their organizations to monitor and enforce AI governance laws.

Read:

Worker Power and Voice in the AI Response

by Center for Labor and a Just Economy

Preemption Risks

By focusing on states’ authority to protect other realms of workers’ rights (such as privacy or civil rights) at risk in AI-enabled workplace environments, states may be able to regulate the use of digital surveillance and algorithmic management tools to prevent anti-union surveillance and automated employment decision-making without running afoul of National Labor Relations Act preemption.

Options for State or Local Action

I. Regulating Electronic Monitoring and Automated Decision-Making Tools

A number of states, including California,3 Illinois,4 New Jersey,5 New York,6 and Vermont,7 have proposed laws regulating the use of automated tools in employment-related decisions such as hiring, firing, and compensation. A Massachusetts bill introduced in 2023 would regulate the use of automated decision-making tools and worker data collected through electronic surveillance tools; it also includes a private right of action for workers.8 In cases where automated systems result in consequential employment-related decisions, workers should have the opportunity to appeal and to submit corrections or relevant information as applicable.

Most bills regulating the use of AI tools and automated systems include provisions mandating impact assessments. States should require that these assessments be conducted by a certified, independent third party. Whenever AI or automated systems are involved in making material employment-related decisions, employers should provide the results of impact assessments to workers and their organizations in a timely manner. Information disclosed in an assessment could include: 

  • The design and functionality of the technology being used,
  • Types and sources of data being collected,
  • Possible risks of unlawful discrimination, and
  • Intended use of the data in decision-making processes in the workplace.

States may also regulate the use of automated decision-making systems that impact workers under other legal and regulatory frameworks, such as privacy and civil rights law. In 2008, Illinois enacted the first biometric data privacy law in the United States, the Biometric Information Privacy Act (BIPA), which applies to data collected using AI technologies and also includes a private right of action for those seeking recourse.9 In 2023, workers became entitled to data privacy protections under the California Consumer Privacy Act (CCPA), including the right to know when employers are collecting workers’ data; the right to access, correct, and delete data; and protections from retaliation for exercising these rights.10 The law also enables unions and worker organizations to file such requests on behalf of workers. California’s Civil Rights Department has also proposed regulations to protect against workplace discrimination based on protected characteristics resulting from automated decision-making systems.11 

II. AI Procurement Policy

In recent years, state and local governments have implemented AI systems operated by private companies to facilitate automated decision-making across a range of government service programs, from allocating public benefits to managing traffic to policing communities. While these tools have been touted for their potential to build service delivery capacity (particularly for under-resourced state and local agencies), reports suggest a critical lack of transparency and oversight around the use of such technologies when governments outsource data-handling and automated decision-making processes to private vendors.12 Moreover, where courts find governments’ use of AI systems violates civil liberties by impacting the lives of their residents, the costs of litigation may far outweigh the benefits of automated processes in government service delivery.13

Spotlight: Governors Take Action on AI Safety and Transparency


In a number of states, governors have issued executive orders to guide the procurement and use of AI in state governments in a safe and ethical manner. In California, Governor Gavin Newsom signed an executive order outlining a process for evaluating the responsible deployment of AI. It includes provisions for public procurement to address “safety, algorithmic discrimination, data privacy, and notice of when materials are generated by AI.”14 In 2024, Governor Glenn Youngkin issued a similar executive order in Virginia, promulgating a set of AI safety and transparency standards that state agencies must follow in their procurement policies.15

State and local AI procurement policy could act as a lever for greater oversight and accountability by mandating vendor standards that ensure transparency and auditability in the design and operation of AI products. These standards could be baked into multiple junctures within the procurement process, from setting high standards in the bidding process to mandating worker protections in contract language and implementing monitoring mechanisms to ensure contract terms are being met. From the outset, public sector workers could be empowered in initial procurement decision-making processes through contract provisions establishing joint labor management committees or similar mechanisms. In the auditing process, strategic enforcement partnerships between agencies and worker organizations could strengthen the capacity to conduct regular compliance checks as well as empower workers and their representatives to monitor and assess the impacts of AI-enabled public systems on working people and their families.

III. AI Standards-Setting Boards and Standards Enforcement

In recent years, a number of cities and states have established standards-setting boards, typically consisting of government, employer, and worker representatives. As discussed in Section III, such entities give workers a voice in the process of setting wages, benefits, and labor standards. AI standards-setting boards could set guardrails for the introduction of AI in workplaces, including impact assessment mandates, transparency and consent requirements, and ethics frameworks that state and local governments could apply to AI procurement. 

These AI standards boards could be established on a sectoral basis. For example, a city or state could create separate AI boards for each sector within the jurisdiction where the use of AI in the workplace is most prevalent or anticipated. As with the other labor standards boards addressed in Section III, the state or local government agency overseeing the board could be directed to include participation by representatives from unions and worker organizations.

These boards could not only establish initial guardrails and ethics frameworks for AI implementation by sector but also play an ongoing monitoring role for workplaces that use AI software. They would act as a mechanism for trained and well-informed worker committees to engage directly with employers and states to make decisions regarding AI integration into workflows and to evaluate impact over time. Additionally, they would provide a trusted third-party entity for individuals or groups of workers to seek recourse in cases where AI is used to evaluate performance, allocate tasks, or perform other roles traditionally held by managers and human resources employees.

IV. State OSHA Protections

Currently, 28 states and Puerto Rico have plans approved by the Occupational Safety and Health Administration (OSHA) that allow them to regulate job safety and health standards within their jurisdictions. States may set their own OSHA standards so long as the protections are at least as effective as those promulgated under federal law, which means they are also free to adopt standards more protective than the federal baseline. In addition, states can develop plans that cover state and local public sector workers, who are excluded from federal OSHA protections. 

States can clarify within their OSHA plans that the right to a safe workplace includes the right to be free from harm caused by AI in the workplace and establish specific standards to address those harms accordingly. Standards should ensure, at a minimum, that the conditions under which workers can organize are protected from AI-enabled interference. 

Cities and states could also create grant programs modeled on OSHA’s Susan Harwood Training Grant Program, which awards grants to qualifying organizations to provide training and education for employers and workers on workplace safety and health issues. To address AI’s potential harms to workers, grants could be awarded to qualifying unions and worker organizations to drive compliance with AI-related safety and health standards. These organizations could also serve as a direct point of reference for workers covered by AI-related standards, particularly marginalized workers.16 Through such a grant program, worker organizations could educate workers on identifying AI-related harms and provide know-your-rights trainings. 

V. State AI Bills of Rights

States could adopt AI bills of rights modeled after the White House Office of Science and Technology Policy’s (OSTP) “Blueprint for an AI Bill of Rights.” The OSTP blueprint puts forth principles and practical approaches to guide the regulation of AI-enabled systems in an equitable, safe, and responsible manner. Principles outlined in the blueprint include promoting safe and effective systems, protecting against algorithmic discrimination and abusive data practices, and providing clear notice and explanations as well as options for human intervention. Definitions of what constitutes “harm” caused by AI, informed by experiences and input from workers and their organizations, should be clearly articulated.

New York and Oklahoma have both proposed bills of rights patterned after the OSTP blueprint,17 providing residents within their jurisdictions certain rights and protections when interacting with AI-enabled decision-making systems. In the workplace context, such bills should deter employers’ use of electronic surveillance and AI tools in unlawfully interfering with labor organizing efforts. 

  1. In 2023, two federal bills regulating automated decision-making systems in the workplace were introduced: the No Robot Bosses Act and the Stop Spying Bosses Act. ↩︎
  2. U.S. Department of Labor, Artificial Intelligence and Worker Well-being: Principles for Developers and Employers, https://www.dol.gov/general/AI-Principles. ↩︎
  3. California State Legislature, AB 1651 (2023), https://legiscan.com/CA/bill/AB1651/2023. ↩︎
  4. Illinois General Assembly, HB 3773 (2024), https://www.ilga.gov/legislation/BillStatus.asp?DocNum=3773&GAID=17&DocTypeID=HB&SessionID=112&GA=103. ↩︎
  5. New Jersey Legislature, S. 1588 (2024), https://www.njleg.state.nj.us/bill-search/2024/S1588. ↩︎
  6. New York State Assembly, A. 7895 (2024), https://www.nysenate.gov/legislation/bills/2023/A7859. ↩︎
  7. Vermont Legislature, H. 114 (2023), https://legislature.vermont.gov/bill/status/2024/H.114. ↩︎
  8. Massachusetts Legislature, H. 1873 (2024), https://malegislature.gov/Bills/193/H1873. ↩︎
  9. Illinois General Assembly, 740 ILCS 14 (2008), https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57. ↩︎
  10. Kung Feng, Overview of New Rights for Workers under the California Consumer Privacy Act, UC Berkeley Labor Center (December 6, 2023), https://laborcenter.berkeley.edu/overview-of-new-rights-for-workers-under-the-california-consumer-privacy-act/. ↩︎
  11. Press Release, Civil Rights Council Releases Proposed Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems, California Civil Rights Department (May 17, 2024), https://calcivilrights.ca.gov/2024/05/17/civil-rights-council-releases-proposed-regulations-to-protect-against-employment-discrimination-in-automated-decision-making-systems/. ↩︎
  12. Grant Fergusson, Outsourced and Automated: How AI Companies Have Taken Over Government Decision-Making, Electronic Privacy Information Center (September 2023), https://epic.org/outsourced-automated/. ↩︎
  13. Michigan Supplemental Nutrition Assistance Program Terminations, Benefits Tech Advocacy Hub, https://www.btah.org/case-study/michigan-supplemental-nutrition-assistance-program-terminations.html. ↩︎
  14. California Exec. Order N-12-23 (2023), https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf. ↩︎
  15. Virginia Exec. Order 30 (2024), https://www.governor.virginia.gov/media/governorvirginiagov/governor-of-virginia/pdf/eo/EO-30.pdf. ↩︎
  16. Occupational Safety and Health Administration, Susan Harwood Training Grant Program, U.S. Department of Labor, https://www.osha.gov/harwoodgrants/. ↩︎
  17. New York State Assembly, A. 8129 (2024), https://www.nysenate.gov/legislation/bills/2023/A8129; Oklahoma Legislature, H.B. 3453 (2024), http://www.oklegislature.gov/BillInfo.aspx?Bill=hb3453&Session=2400. ↩︎
