
California’s Proposed AI Employment Decision Rules: October 2024 Update
In recent years, California has positioned itself at the forefront of regulating artificial intelligence (AI) and automated decision systems (ADS), particularly in employment practices. The California Civil Rights Department (CRD) has actively refined its proposed rules to address concerns about the use of these systems in hiring, promotions, and other employment decisions. Following public testimony at a July hearing, the CRD released revised rules in October 2024 that provide clearer definitions of key terms such as “automated decision system,” “agent,” and “employment agency.”

Focus on the revised definitions

One of the most significant updates in the October revision of the CRD’s proposed rules concerns definitions of key terms, which directly impact employers who rely on AI and ADS in their employment practices.

Automated Decision System (ADS)

In the revised rules, the definition of an ADS was expanded to include a “computer process that makes a decision or facilitates human decision-making regarding an employment benefit.” This broad definition encompasses systems using AI, machine learning, algorithms, statistics, and other data processing tools.

While the original definition provided a general framework, the October version goes further by specifying activities these systems may perform, such as screening, assessing, categorizing, and making recommendations. The rules specify that an ADS may facilitate decisions regarding hiring, promotions, compensation, benefits, and other employment-related matters. However, ambiguity remains around what it means to “facilitate” human decision-making, raising the question of whether systems that assist but do not fully automate decisions fall under the rules.

For example, an AI tool used to verify degrees may flag discrepancies between an applicant’s claimed degree and the degree they actually earned. Even if the tool does not make the final decision, it can influence the human decision maker. This raises the question of whether such a tool, which facilitates but does not complete the decision-making process, can be qualified as an ADS subject to regulation. Accordingly, it is crucial that employers recognize that automated systems used for even seemingly minor employment decisions may fall within this scope in the absence of additional clarification.

Notably, the proposed rule specifically excludes technologies such as word processing and spreadsheet software, website hosting, and cybersecurity tools. However, there is still room for interpretation as to whether more basic automation processes, such as simple if/then workflows, fall within the scope of these regulations.

Agent

The definition of the term “agent” has also been significantly clarified. The CRD now defines an agent as any person acting on behalf of an employer to carry out a function traditionally exercised by the employer, including key functions such as recruitment, selection, hiring, promotion, and decisions related to compensation, benefits, or leave. By framing the definition around these traditionally employer-led functions, the revised rule clarifies who can be considered an agent and ensures that third parties who step in to perform such activities, such as vendors using AI to screen resumes or assist with hiring, are subject to the same compliance standards as the employers themselves.

This expanded definition highlights that even when employers outsource administrative functions, such as payroll or benefits management, those vendors may still be considered agents if they perform tasks typically managed by the employer. Employers should therefore closely examine their partnerships with third-party providers, particularly those using AI or machine learning, to ensure compliance with the revised definition. This reinforces the need for employers to recognize that the delegation of traditionally employer-led functions to third parties does not relieve them of regulatory obligations.

Employment agency

The definition of an employment agency has been refined to include any entity that undertakes, for compensation, to provide services to identify, select, and recruit candidates or employees. The emphasis in the revised rules is on “selection” as a crucial step in recruiting a candidate, positioning it as a key function of employment agencies.

This revision sharpens the distinction between screening resumes for specific terms or patterns and the broader process of candidate selection, aligning more closely with the concept of candidate sourcing. However, the proposed rules lack a clear definition of what “screening” entails, creating potential uncertainty for employers, particularly those who rely on third-party vendors for background checks. Without explicit guidance as to whether background checks fall within the definition of screening, employers may have difficulty determining their compliance obligations.

Considerations for Employers Using AI in Criminal Screening

The CRD’s revised proposed rules place a strong emphasis on preventing discriminatory practices when AI and ADS are used in employment decisions, including the sensitive area of criminal background checks. Employers must ensure that their use of these systems meets the same legal standards as human decision-making, particularly the requirement that criminal history may be considered only after a conditional job offer has been made.

The rules require that any ADS used in criminal background checks operate transparently. Employers must provide applicants with the reports and decision-making criteria used by the system, ensuring compliance with anti-discrimination laws. Additionally, employers must conduct regular anti-bias testing of these systems and retain records of those tests, along with any data used, for at least four years.

The focus on transparency and fairness aligns with broader trends, such as the White House’s Blueprint for an AI Bill of Rights and the EEOC’s guidelines on algorithmic fairness. Employers should diligently audit their AI systems to avoid disparate impacts on protected classes, particularly when it comes to decisions involving criminal history. They must ensure that the criteria used by ADS are job-related and necessary for business purposes and consider less discriminatory alternatives where available.

CRD seeks public participation

The CRD continues to refine its proposed rules on automated employment decision systems, and now is the time for employers to engage in the process. The CRD is accepting written comments on the most recent changes until November 18, 2024. This is a critical opportunity for employers to help ensure that AI and ADS regulations are clear and practical and that they reflect modern hiring practices.

Comments may be submitted by email to [email protected]. For more information and to view the proposed changes, visit the CRD webpage at calcivilrights.ca.gov/civilrightscouncil.

Parting thoughts

The October revisions to the CRD’s proposed rules on automated decision systems represent an important step in California’s efforts to regulate AI in employment practices. For employers, this means taking a closer look at how AI and ADS are used, particularly in recruiting and criminal background checks. The expanded definitions of ADS, agent, and employment agency require careful consideration of how technology is used and of relationships with third-party vendors. As these rules continue to evolve, staying informed and proactively assessing compliance will be crucial to adapting to California’s forward-looking AI employment regulations.