A new report released earlier this year revealed that three out of four knowledge workers use AI for work purposes, but more than half are hiding it from their leaders because they fear it makes them look replaceable. The May 8, 2024 report from LinkedIn and Microsoft makes clear that the time is now for employers to start addressing AI use in the workplace.
White House Involvement
Even the White House is involved, having recently provided employers with a series of best practices to consider when using AI for workplace purposes. The May 16, 2024 Fact Sheet, which was drafted with the assistance of the Department of Labor (“DOL”), isn’t law, but make no mistake: courts and others could rely on the Fact Sheet in these early stages of AI risk management and workplace litigation. The Fact Sheet makes the following recommendations:
- Establish an AI Governance System – Employers should develop clear systems and procedures – including a human oversight component – before deploying AI in the workplace.
- Be Transparent – Employers should be transparent with workers and candidates about the AI systems being used in the workplace. This is an increasingly common theme, and one that states are looking to embrace.
- Use Worker Data Responsibly – Employers should limit the worker data collected and used by AI systems, and only use it to support legitimate business aims. In addition, all of the data needs to be protected and handled responsibly. The same is true for any data about workers created by AI systems.
- Protect Workers’ Rights – Employers should never violate or undermine workers’ inherent or statutory legal workplace rights. This includes equal employment opportunity laws (prohibiting discrimination, harassment, and retaliation). Employers need to make sure that any AI systems they are using aren’t biased.
- Be Ethical – Employers should make sure their AI systems are designed, developed, and trained in a way that protects workers. Employers need to conduct a thorough due diligence process at the outset, before putting new AI systems online.
- Empower Workers – The White House advises employers to offer workers the chance to have “genuine” input into the design, development, testing, training, use, and oversight of AI systems in the workplace.
- Use AI to Help Workers – Employers should ensure that AI systems used in the workplace assist, complement, and enable workers. There should be a focus on improving job quality for your employees, not just using AI to streamline tasks or create organizational efficiencies.
- Support Workers Impacted by AI – The White House urges all employers to work towards upskilling workers and gearing them up for the AI revolution – and supporting those whose jobs face a transition brought about by AI.
EEOC’s Enforcement Efforts
The Equal Employment Opportunity Commission (“EEOC”) has also thrown its hat into the ring when it comes to AI.
Under the Biden administration, the EEOC is stepping up its enforcement efforts around AI and machine learning-driven hiring tools. The agency’s efforts include the following:
- Designating the use of AI in employment as a top “subject matter priority.”
- Issuing guidance on the application of the Americans with Disabilities Act (“ADA”) to AI tools in employment.
- Launching an initiative to ensure that AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws.
- Appointing a chief AI officer.
- Pursuing investigations and complaints against employers related to their use of AI in employment.
States are Getting Involved
As I mentioned above, it’s not just the federal government that is getting involved in AI regulation.
In May 2024, Colorado became the first US state to enact comprehensive AI legislation. Effective February 1, 2026, the law applies to both developers and deployers (i.e., users) and requires the use of reasonable care to avoid algorithmic discrimination. The law targets “high-risk artificial intelligence systems,” defined as any AI system that “makes, or is a substantial factor in making, a consequential decision.” A “consequential decision” is one that has “a material legal or similarly significant effect” on the provision or denial to Colorado residents of services, including those related to employment.
To comply with the law, employers must:
- Implement a risk management policy and program;
- Complete an annual impact assessment;
- Notify employees or applicants about the employer’s use of AI where AI is used to make a decision about the employee or applicant;
- Make a publicly available statement summarizing the types of high-risk systems the employer currently deploys; and
- Disclose to the Colorado attorney general the discovery of algorithmic discrimination within 90 days of discovery.
The law establishes a rebuttable presumption that the employer has used “reasonable care” where the employer complies with the law’s requirements, meaning that a compliant employer will have a much stronger defense in the event it faces a discrimination claim.
What Should Employers Do?
The latest activity in Colorado is a good indication of where states are headed when it comes to the use of AI in employment. Employers should be aware of these laws and establish processes and procedures to ensure any AI technology is used in compliance with employment laws. Employers still have time, but they should begin now: conduct an AI system audit to identify any potential biases or discriminatory effects, develop processes to regularly assess AI use in employment, review agreements with AI vendors, and draft or update internal policies on AI use in employment.
Written By:
Scott M. Zurakowski, Esq.
330-497-0700
szurakowski@kwgd.com