Journal of Digital Technologies and Law

Using Artificial Intelligence in Employment: Problems and Prospects of Legal Regulation

https://doi.org/10.21202/jdtl.2024.31

EDN: chpesp


Abstract

Objective: to identify the legal problems of using artificial intelligence in hiring employees and the main directions of solving them.

Methods: formal-legal analysis, comparative-legal analysis, legal forecasting, legal modeling, synthesis, induction, deduction.

Results: a number of legal problems arising from the use of artificial intelligence in hiring were identified, among which are: protection of the applicant’s personal data obtained with the use of artificial intelligence; discrimination and unjustified refusal to hire due to the bias of artificial intelligence algorithms; and legal responsibility for a decision made by a generative algorithm during hiring. The author believes that the optimal solution of these problems requires looking at the best practices of foreign countries, first of all those that have adopted special laws regulating artificial intelligence in hiring and have developed guidelines for employers using generative algorithms for such purposes. The legislative work of the European Union and the USA on managing the risks arising from the use of artificial intelligence should also be taken into account.

Scientific novelty: the article comprehensively studies the legal problems arising from the use of artificial intelligence in hiring and the foreign experience in solving them, which allowed the author to develop recommendations for improving Russian legislation in this area. The author proposes to solve the problem of protecting applicants’ personal data when artificial intelligence is used for hiring by supplementing the labor legislation with norms that enshrine requirements for transparency and consistency in the collection, processing and storage of information when using generative algorithms. The list and scope of personal data allowed for collection should be reflected in a special state standard. The solution to the problem of discrimination due to biased algorithms is seen in the mandatory certification and annual monitoring of artificial intelligence software for hiring, as well as in the prohibition of scoring tools for evaluating applicants. The author adheres to the position that artificial intelligence cannot “decide the fate” of a job seeker: responsibility for the decisions made by the algorithm rests solely on the employer, including in cases when third parties are involved in the selection of employees.

Practical significance: the obtained results can be used to accelerate the development and adoption of legal norms, rules, tools and standards in the field of using artificial intelligence for hiring. The lack of adequate legal regulation in this area creates significant risks both for human rights and for the development of industries that use generative algorithms to hire employees.

For citations:


Novikov D.A. Using Artificial Intelligence in Employment: Problems and Prospects of Legal Regulation. Journal of Digital Technologies and Law. 2024;2(3):611-635. https://doi.org/10.21202/jdtl.2024.31. EDN: chpesp

Introduction

In recent years, artificial intelligence (hereinafter, AI) has become a major tool for implementing management processes and procedures in labor relations. Large companies and corporations are increasingly inclined to outsource hiring functions to AI technology. For example, the multinational corporation Unilever already processes 1.8 million job applications with AI and hires 30,000 new employees per year (Ginu & Anson, 2021); 99% of Fortune 500 companies (the 500 largest US companies by annual revenue) rely on AI to hire workers (Fuller et al., 2021). Employers abroad use platforms such as AllyO, Arya, BambooHR, Entelo, Ideal, Jibe, Talenture, Taleo, TextRecruit, Textio, Toptal, TurboHire, Turing, Paradox, Recruitee, Upwork, Zoom.ai, and ZohoRecruit.

Russian companies (Alfa-Bank, VTB, Dodo Pizza, Megafon, MTS, Russian Railways, Rostelecom, Sberbank, Yandex, etc.) are gradually integrating AI for hiring into their HR solutions1. According to HRlink research, 24% of Russian companies already use AI in their hiring processes, 6% plan to implement such solutions within a year, and 71% of HR professionals view the introduction of AI in their work positively2. AI hiring tools used in Russia include such platforms as AmazingHiring, FriendWork Recruiter, GoRecruit Hireman, HireVue, Hurma, My new job, PeopleForce, Playhunt, Recright, Talantix, uForce, Yva.ai, Robot Vera, SberPodbor, and others3. It should be taken into account that these AI hiring services are constantly being improved and supplemented with new functions, including those based on machine learning technology.

The rapid development of hiring software and its practical application by employers in the Russian Federation raises the question of developing a state policy in this area. The passport of the national program “Digital Economy of the Russian Federation” (approved by protocol No. 7 of the Presidium of the Presidential Council for Strategic Development and National Projects of 04.06.2019) notes the strengthening of digitalization processes in the sphere of employment and indicates the need to approve a concept of comprehensive legal regulation of relations arising in connection with the development of the digital economy. In 2020, the authors of this Concept pointed out that transforming legislation for the digital economy requires focusing on changes in labor legislation related to the legal protection of citizens amid “information technological innovations in the field of labor and remote employment”4.

In addition, given that AI has already changed, and will change even more, the ways in which data on potential employees are collected, processed and analyzed, additional risks of human rights violations arise in the labor sphere. Therefore, employees, employers, developers and the state face a logical question about the legal implications of AI in hiring. It should be taken into account that, along with the allegedly positive consequences of the global transformation of labor relations in the spheres using new information means of production, there are real adverse consequences associated with the redistribution of capital in society and the reduction of employees’ social protection (Novikov, 2023). Consequently, the legal problem of using AI in hiring is how the relations arising from such use should be regulated and how the decisions made by the algorithm should be evaluated from a legal viewpoint.

The problem of using AI for hiring has been discussed in Russian (Shcherbakova, 2021; Serova & Shcherbakova, 2022) and foreign legal science (De Stefano, 2019; Köchling & Wehner, 2020; Reddy, 2022; Hunkenschroer & Kriebitz, 2023; Basu & Dave, 2024). However, to date, most studies have been fragmentary, covering only some aspects of this scientific problem. Consequently, more attention should be paid to the legal problems of using AI in recruitment, and recommendations should be developed to solve them within the regulatory framework.

1. Legal problems of using artificial intelligence for hiring employees

The hiring procedure is a series of activities that can be categorized into four main stages: searching, screening, interviewing and selecting5. Accordingly, AI in hiring should be understood as an algorithm trained to make automatic hiring decisions at each of the stages (Haenlein & Kaplan, 2019). In hiring algorithms, AI is trained on data from previous candidates before and after hiring in order to make predictions about the employability of potential candidates (Kuncel et al., 2014). Technologies containing such algorithms include asynchronous video interviews, chatbots, and other automated platforms that interpret and evaluate a candidate’s response in real time and provide an interview score (Langer et al., 2019). AI algorithms define a set of rules used to transform input data into output decisions and can be trained to mimic human hiring decisions.
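To make the mechanics concrete, below is a minimal sketch of such an algorithm: a classifier trained on historical hiring decisions that converts an applicant’s input data into an output score. The features, data and library choice (Python with scikit-learn) are purely illustrative assumptions, not a description of any real system mentioned in this article.

```python
# Illustrative sketch only: a model trained to mimic past human hiring
# decisions, of the kind described above. All feature names and data
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, test_score, interview_score]
past_candidates = np.array([
    [2, 71, 3.5],
    [7, 88, 4.2],
    [1, 65, 2.9],
    [5, 90, 4.8],
    [3, 80, 3.9],
    [8, 60, 3.1],
])
# Historical human decisions: 1 = hired, 0 = rejected
past_decisions = np.array([0, 1, 0, 1, 1, 0])

# The learned rule transforms input data into an output decision,
# as the text describes.
model = LogisticRegression().fit(past_candidates, past_decisions)

new_applicant = np.array([[4, 85, 4.0]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated P(hire)
```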

Employers using such technologies assume that AI tools are objective and can therefore make decisions free from the biases that affect human judgment6, so that companies can improve employee selection, professional development, retention and performance management (Estrada et al., 2024). Accordingly, it seems logical to conclude that using AI reduces the risk of discrimination and unreasonable rejection of a job application, as well as the risk of hiring an underqualified worker.

In turn, as shown in research by M. K. Lee (2018), workers believe it is fair for humans to make the final decision when it comes to employee potential or career development. Even if people accept that an algorithmic system can perform analytical tasks (e.g., job scheduling), they believe that human tasks (e.g., hiring, job evaluation) should be performed by humans. M. Langer et al. (2023) note that the use of AI technology in hiring, coupled with a lack of knowledge and transparency about how algorithms work, increases emotional tension and diminishes interpersonal relations and social interaction. Thus, sociological research demonstrates that employees recognize the supportive role of AI in hiring while emphasizing the importance of the final decision being made by the employer.

On the other hand, a study by Y. Bigman et al. (2023) demonstrated that people are less morally outraged when a hiring decision is made by an AI algorithm rather than a human. However, this result does not prove that an algorithm is more impartial than a human decision-maker; rather, it confirms people’s loyalty to information technology, from which they are less likely to expect bias than from humans. Furthermore, this assumption implies that the developers of such algorithms, the data on which these technologies are built, and the organizations in which they are used are unbiased. As A. Köchling and M. C. Wehner (2020) point out, research on AI-based hiring technologies has found that algorithms can be discriminatory, while the question remains open whether algorithms are fairer than humans.

It can be stated that the use of AI algorithms to hire employees creates a foundation for social contradictions between the parties to labor relations, not to mention the legal issues discussed below.

1.1. Protection of applicant’s personal data obtained using artificial intelligence for hiring purposes

On the one hand, personal data can be part of the training data used to create new algorithmic models by identifying patterns. On the other hand, these mathematical models can be applied to personal data to make inferences or predictions about job applicants. AI allows automatic decision-making based on factors and criteria that are not predetermined but vary depending on the database “feeding” the algorithm (Lukács & Váradi, 2023). That is, the entire functioning of an AI-assisted hiring system is based on the processing of applicants’ personal data. Therefore, it is obvious that automated AI-based hiring decisions come into significant conflict with the requirements of personal data protection.

Current legislation establishes an exhaustive list of documents to be submitted by an employee upon hiring. However, the scope and types of information voluntarily submitted to the employer during selection and interviewing are not defined by law. To date, there is no unified list of personal data that may be used by AI in hiring, nor any legal mechanism to control their collection, processing and analysis. In addition, the legislation does not limit the employer in the methods and ways of checking business qualities (the wording “in particular” in the description of the “business qualities” concept in Resolution of the Plenum of the Supreme Court of the Russian Federation No. 2 of 17.03.2004 indicates that the list of attributes of business qualities is not exhaustive). A similar position is presented in judicial practice on cases of various, including psychological, testing during hiring to check business qualities (ruling of the Moscow City Court of 24.02.2016 No. 33-3692/16; decision of the Mytishchi City Court of the Moscow region of 21.01.2016 No. 2-396/2016; ruling of the Moscow City Court of 21.12.2017 No. 33-52746/2017).

Thus, the procedure for assessing a future employee’s business qualities during hiring is not normatively regulated; therefore, the employer is entitled to independently choose the form in which such an assessment is conducted (including with the use of AI) and to fix it in the organization’s local regulations. The transparency and consistency of applicant data collection using AI is also important and should be formalized in a separate agreement.

Another vector of this problem is that personal data about the job seeker, obtained by the employer as a result of its collection by AI, are confidential and should not be used in any way other than for making hiring decisions, nor stored by or transferred to third parties (e.g., developers). In this respect, the greatest risk lies in the use of “open” AI systems such as ChatGPT, Bard and other chatbots7. Information entered into an “open” AI system may be inadvertently transferred to another user and stored in the AI neural network for further training of the system. When using “closed” AI systems (i.e., dedicated developer programs), there is a risk of poor-quality data protection and storage protocols, which may provoke leakage and dissemination of job seekers’ personal data. One must also consider the legal consequences of unauthorized use of personal data that the employer obtained about the job seeker by means of AI and that were misused or transferred to third parties, whether intentionally or negligently (due to unreliable information protection protocols).

1.2. Discrimination and unjustified refusal to hire due to the bias of artificial intelligence algorithms for hiring employees

B. Sivathanu and R. Pillai (2018) point out that AI performs the necessary filtering of candidates based on various human characteristics such as experience, age, gender, and qualifications. Accordingly, the machine learning algorithms encoded in AI may find patterns or preferences for any of these characteristics that were not perceived by other people, including the data subject. This, in turn, sets the stage for discrimination in hiring and may increase the risk of unwarranted rejections. As a result, problems may arise when employers program an AI system not to hire a particular person or group of people for a particular position, and the system is subsequently trained not to hire that person or group for other positions. It should be noted that an AI system designed to hire workers can only do this if it has been programmed and trained in a certain way using previous hiring data. For example, since 2019 Sberbank has been using scoring AI to assess, when hiring an applicant, the likelihood of their quitting. Using the system, the bank assigns a score to a candidate and calculates how soon he or she may decide to quit. The system analyzes job applicants’ resumes, previous work experience and other parameters from public sources, the use of which the applicant consents to8.

The consequences of using scoring models for hiring are well illustrated by the case of Amazon. This multinational company not only actively used AI to recruit employees since 2015, but has already faced legal problems as a result. Amazon’s algorithm made discriminatory decisions, hiring exclusively men, and the HR department did not check these decisions (the system was trained on resumes submitted by applicants employed over a ten-year period, most of whom were men). The case resulted in lawsuits, and eventually Amazon had to stop using AI to hire employees9. The bias of AI scoring models for hiring is also confirmed by academic research. For example, L. Chen and colleagues (2018) confirmed that resume search engines rank women slightly lower than men.

Thus, depending on how AI systems are configured, they can discriminate and weed out those people who are not suitable for them, or rank resumes based on unfair criteria developed by machine learning.

Similarly, the use of Emotion AI technology, whereby emotions and intonations at the interview are read using video, audio and other biometric sensors, creates the risk of discrimination and unjustified refusal of employment. As O. V. Fedoseeva (2021) points out, AI performs emotion recognition using optical sensors that capture facial expressions in real time or in webcam recordings. The obtained data are processed by machine learning algorithms, which determine the type of micro-expressions, tone and emotionality of the vocal response. In a broad sense, “reading” facial micro-expressions and voice tones allows AI to detect the emotions of potential employees and perform occupational prediction.

Applying Emotion AI, an employer seeks not just to verify the professional competence of a potential employee, but to diagnose his or her emotional reactions to certain questions related to labor activity at a given employer (for example, mimic or intonation reactions to questions about willingness to work overtime, the reasons for leaving a previous job, etc.). For example, the VCV software by Moscow developers allows viewing video interviews and, prior to a face-to-face meeting, excluding obviously unsuitable candidates, as well as pre-assessing soft skills and compliance with the company’s values in order to score the applicants’ mood and behavior. The software products of another Moscow-based company, Sever.AI, make it possible to view videos with answers and to analyze the image (the candidate’s external behavior), the sound (the candidate’s speech and pitch), and the text (the content of answers).

Investigating the risks of using such software when hiring employees, employees of the Moscow Institute of Technology conducted an experiment with the MyInterview and Curious Thing software products in 2021. It was found that these products read applicants’ emotions differently depending on the camera and microphone used, the turn of the head, and the applicant’s position on the screen. They also poorly understood intonation in speech with a strong accent10. As T. Pradeep points out, network connectivity problems, attention deficit disorder, or lack of candidate concentration may negatively affect the applicant’s assessment in AI-conducted interviews, so human involvement is necessary to make the final employment decision (Pradeep, 2024).

As we can see, since AI tools are driven by data derived from objective reality, it is difficult, if not impossible, to avoid the risk that AI tools encode and exacerbate certain biases. Therefore, one of the biggest challenges in AI hiring is the presence of biased algorithms – those that lead to discriminatory, non-objective and illegal decisions. M. Jackson (2021) calls algorithms biased if the AI can replicate biases when making decisions.

The main characteristics of biased algorithms in AI-assisted hiring are:

1) sampling bias – the data on which AI learns do not accurately reflect the real-world picture. As J. Chen (2023) points out, almost every machine learning algorithm relies on biased databases;

2) algorithmic bias, which arises because of the algorithm rather than the data. In algorithm development, this bias can be due to several factors such as the depth of the neural network or the prior information required by the algorithm. As Yu. S. Kharitonova et al. (2021) noted, algorithmic bias exists even when the algorithm designer has no intention of discrimination, and even when the recommender system does not take demographic information as input;

3) representation bias, which occurs during data collection and is associated with uneven data collection that does not take into account outliers or anomalies. Representation bias can also occur when population diversity is not taken into account, for example, if not all demographic groups are included equally;

4) measurement bias manifests itself in unequal conclusions or errors in the construction of the training data set. These errors can lead to biased results for certain demographic groups11.

In general, if the data collected and processed by a generative algorithm lack sufficient quantity and quality on certain characteristics, the algorithm will not be able to objectively reflect reality, leading to inevitable bias in algorithmic decisions and, consequently, to an unfair and possibly illegal decision by an employer to reject a more deserving candidate or, conversely, to hire a less qualified applicant.
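The following synthetic sketch (an assumption-laden illustration, not data from any real system) shows how the sampling bias described above propagates: because the historical record over-represents hires from one group at the same skill level, the trained model scores two equally skilled applicants differently.

```python
# Synthetic demonstration of sampling bias in a hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # hypothetical demographic attribute (0/1)
skill = rng.normal(70, 10, n)      # the genuinely job-related feature

# Biased historical labels: at equal skill, group 1 was hired more often.
hired = (skill + 8 * group + rng.normal(0, 5, n)) > 75

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill receive different scores:
for g in (0, 1):
    p = model.predict_proba([[g, 75.0]])[0, 1]
    print(f"group={g}, skill=75: P(hire)={p:.2f}")
# The gap reflects only the biased sample, not job-related merit.
```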

1.3. Legal liability for the decision made by artificial intelligence to hire an employee

Current research contains opinions that applied AI management is already capable of determining whether the program will send its decisions to an employee (Ivanova et al., 2018). Another opinion is that current informational and social changes are affecting and transforming the nature of labor relationships in such a way that personal communication will decline, and person-to-person relationships will be replaced by relationships between workers in the digital environment (Lőrincz, 2018). These positions do not withstand criticism, because the very idea of recognizing an AI system as a subject of law contradicts such attributes of a legal subject as socio-legal value, dignity and autonomous legal will; it also conflicts with the elements of a legal relationship and of an offense, and is void within the institution of representation (Hisamova & Begishev, 2020).

AI cannot be a participant in social relations, as it is unable to establish interaction between subjects of law regarding the satisfaction of material or cultural needs. Nor is there any socially significant result that AI would seek to achieve. AI can solely perform datafication of the subjects of law for specific algorithmic tasks set during programming and improved by machine learning. Therefore, recognizing AI as a legal subject is not possible, given the programmed nature of its relationship with the external world. M. H. Jarrahi (2018) notes that AI and human decision-making should complement (not replace) each other and utilize their comparative advantages. AI is a means of automating the hiring of potential employees, a digital tool for interaction between the elements of the production system at the level of collecting, processing, analyzing and storing information.

Thus, AI can exist in the legal reality exclusively as an object of law. All decisions made by AI must be controlled and explained by a human (the employer), who is responsible for their consequences. The final decision to hire or reject an applicant based on information received from AI can only be made by the employer or its authorized body.

2. Foreign practice of legal regulation of the use of artificial intelligence for hiring employees

Using AI technologies to optimize hiring decisions is attractive for employers but, as we have seen, creates significant legal problems that need to be solved at the legislative level. In Russia, the adoption of regulations in this area is still at the stage of academic discussion and conceptual development, so it is relevant to study the best practices of foreign countries.

2.1. Legal regulation of the use of artificial intelligence for hiring employees in the USA

The greatest advance in regulating AI-assisted hiring relations is demonstrated by the USA, where relevant state-level legal acts have been adopted.

Illinois was the first state to pass a law specifically regulating the use of AI by employers interviewing potential employees. The Illinois Artificial Intelligence Video Interview Act went into effect in January 202012. The law requires employers who are “considering candidates for positions located in Illinois”13 to do all of the following: before asking candidates to submit video interviews, notify job applicants that the employer may use AI to analyze the applicant’s video interview and assess the applicant’s suitability for the job; provide the applicant with information about how the AI works and the general characteristics it uses to evaluate applicants; and obtain the applicant’s consent to be assessed by the AI. The law also stipulates that, within 30 days of receiving a request from an applicant, an employer must delete the applicant’s video interview and instruct any person who received a copy of it to do the same, including any electronically backed-up copies.

In addition, on August 9, 2024, the State of Illinois enacted the Artificial Intelligence Employment Act (HB3773)14. The Act, effective January 1, 2026, amends the Illinois Human Rights Act and aims to prevent discriminatory effects of the use of AI in employment decision-making. The Act requires employers to provide notice of AI use for the following employment-related purposes: recruitment, hiring, promotion, renewal, selection for training or internships, termination, disciplinary action, and setting the term of an employment contract.

A Maryland law enacted in March 202015 requires employers to meet certain requirements in order to use facial recognition technology to interview job applicants. The law requires employers to obtain signed consent from job applicants before they can use facial recognition technology “for the purpose of creating a facial template” during an interview.

On December 11, 2021, the New York City Council passed Local Law 144 on Automated Employment Decision Tools, which became effective on July 5, 2023. Under Local Law 144, an automated employment decision tool is any computational process based on machine learning, statistical modeling, data analysis, or AI that produces a simplified result, including a score, classification, or recommendation, used to substantially assist or replace discretionary hiring decisions that affect individuals. The law requires employers to conduct a “bias audit” of any automated employment decision tool prior to its use and to notify employees and candidates who reside in New York that the employer uses such tools in assessment or evaluation for hiring or promotion, as well as of the job qualifications and characteristics to be evaluated by AI. Employers are also obliged to notify applicants ten days prior to using AI to make hiring decisions.

On May 17, 2024, the California Civil Rights Board published the Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems16. The Regulations define an automated decision-making system as a computational process, including one based on machine learning, statistics or other data processing or AI techniques, that tests, evaluates, ranks, classifies, recommends or otherwise makes a decision or facilitates a human decision that affects employees or applicants. The Regulations emphasize that the use of an automated decision-making system does not replace the required individual assessment of an applicant.

The Regulations also introduce a definition of an employer’s agent, to include any person or third party who provides administration of automated decision-making systems used by an employer in making employment decisions that may result in denial of employment or otherwise adversely affect the terms, conditions, benefits, or privileges of employment. This means that employers are liable for the actions of third parties that the employer hires to operate decision-making systems if such systems have a discriminatory impact. In addition, the Regulations require employers and all other covered entities to retain any personnel or other employment records “related to any employment practice and affecting any employment benefits of any applicant or employee (including all applications, personnel, membership or referral records or files, and all machine learning data)” for four years.

On May 17, 2024, Colorado enacted a comprehensive AI regulation, the Consumer Protections for Artificial Intelligence Act17, which includes labor standards. The law, which takes effect on February 1, 2026, applies both to developers and to organizations implementing AI in their operations, and requires “reasonable care” to avoid discriminatory algorithms. The law targets “high-risk AI systems”, defined as any AI system that makes, or is a significant factor in making, a consequential decision, including in employment. To comply with the law, employers must implement a risk management policy and program, conduct an annual impact assessment, notify employees or job applicants of the employer’s use of AI if it is used to make a decision regarding an employee or applicant, and make a public statement summarizing the types of high-risk systems the employer uses. Employers must report a discovery of algorithmic discrimination to the Colorado Attorney General within 90 days of the discovery.

The Equal Employment Opportunity Commission (EEOC) has played an important role in promoting potential regulation of AI-assisted hiring in the USA. On October 28, 2021, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative18, pointing out the need to examine the use of AI in hiring practices and to develop specific guidance for employers, which should subsequently become the basis for legal regulation at the federal level.

On May 12, 2022, the EEOC issued “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees”19. In this guidance, the EEOC identifies the three most common ways in which employers’ use of AI may violate the rights of individuals with disabilities.

First, an employer may violate the rights of individuals with disabilities if it requires an applicant with a disability that prevents him or her from working with his or her hands to take a subject matter test that requires the use of a keyboard or trackpad without any accommodations or an alternative version of the test.

Second, an employer’s algorithm may intentionally or unintentionally screen out a person with a disability, even if he or she is able to perform the job with reasonable accommodations. This could happen, for example, if interview software designed to analyze an applicant’s problem-solving skills gives lower scores to a job applicant with a speech impediment that makes it difficult for the software to interpret his or her response according to the speech pattern that the software has been trained to recognize.

Third, the algorithmic decision-making tool that an employer uses to evaluate job candidates may violate the restrictions on disability-related inquiries and medical examinations. Such a violation could occur if the AI tool uses questions that either directly ask about the presence of a disability or could elicit a response containing information about the individual’s disability.

On May 18, 2023, the EEOC issued a document entitled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures”20, in which it outlined its vision for further regulating AI-assisted hiring.

First, an applicant selection process that uses AI may be found to be discriminatory if the selection rate of persons of a particular race, color, religion, sex or national origin, or a combination of such characteristics (e.g., race and sex), is less than 80% of the selection rate of the most favored group. This situation is similar to the above-mentioned Amazon case, where the AI made candidate selections based on previous experience and favored predominantly male candidates.
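A simplified numerical illustration of this 80% (“four-fifths”) comparison, with hypothetical applicant counts, looks as follows:

```python
# Hedged illustration of the EEOC four-fifths rule described above;
# the applicant and selection counts are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rate_men = selection_rate(48, 80)     # 0.60
rate_women = selection_rate(24, 60)   # 0.40

impact_ratio = rate_women / rate_men  # ~0.67
print(f"impact ratio: {impact_ratio:.2f}")

# A ratio below 0.80 signals possible adverse impact that the employer
# would have to investigate and justify.
if impact_ratio < 0.8:
    print("Selection rate below 80% of the most favored group: "
          "potential adverse impact.")
```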

Second, employers are responsible for any adverse impact caused by AI tools purchased or used by third-party AI vendors, and cannot rely on the AI vendors’ predictions or research about whether their AI tools will negatively impact job applicants. This supports the idea that AI lacks legal personality and places the responsibility for the algorithm’s decisions on the employer.

Third, employers should systematically review AI tools to ensure that they are not discriminatory. If a probability exists that an AI tool produces an unequal impact, the employer must demonstrate that the use of the tool is job-related and consistent with business necessity and that there are no less discriminatory alternatives that are equally effective. This recommendation by the EEOC should help identify biased algorithms in AI-assisted hiring software.

On April 25, 2023, the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) issued a joint statement on discrimination and bias21, which highlights three areas for regulating AI in hiring:

1) applying existing legal standards – the existing laws and regulations apply equally to the use of automated systems and new technologies, the agencies shall apply the existing legal frameworks to AI;

2) addressing harmful effects – AI can perpetuate unlawful bias, automate unlawful discrimination, and lead to other harmful effects, which highlights the need for vigilance in the use of AI in employment practices;

3) protection of individual rights – it is mandatory to protect individual rights from discriminatory AI practices.

On April 24, 2024, the U.S. Department of Labor (DOL) issued guidance on how federal contractor employers should act when using AI to hire workers (Artificial Intelligence and Equal Employment Opportunity for Federal Contractors)22. The Guidance obliges federal contractors to justify the need to use AI for hiring; to analyze the extent to which the AI-assisted selection process is job-related; to monitor the AI programs in use for biased algorithms; and to explore potentially less discriminatory alternative procedures for selecting applicants. The Guidance emphasizes that completely excluding humans from the process could result in violations of federal employment laws. A federal contractor remains responsible when using third-party AI-enabled products and services to hire workers. The Guidance also sets forth a list of “promising practices” recommended for federal contractors: to notify job applicants in advance about the use of AI in hiring; to transparently explain to job applicants the policy and procedure for using AI in hiring; to ensure that an AI system received from a vendor can be controlled and monitored; to test the AI system used for hiring with regard to protected groups; to monitor the use of AI in making hiring decisions; and to ensure that the AI system used for hiring is consistent with federal employment laws.

2.2. Legal regulation of the use of artificial intelligence for hiring employees in the European Union

Unlike the USA, where federal legislation still does not regulate the use of AI for hiring employees, the European Union adopted the EU Artificial Intelligence Act23 on May 21, 2024, which creates a common regulatory framework for the use of AI. This Regulation contains norms governing the use of AI in labor relations, in particular in hiring employees.

The Regulation establishes three categories of AI software products (systems), divided by risk, according to which their use is regulated: prohibited systems (with unacceptable risk); high-risk systems; and other AI systems (general-purpose systems, including general-purpose systems with systemic risks). The latter category of AI software products is not covered by the Regulation at this stage and is not specifically regulated.

Unacceptable risk implies the prohibition of the use of emotional AI in employment (except for medical and security reasons), the targeted use of AI software to exploit certain vulnerabilities (due to age, disability, or the specific social or economic situation of candidates), and the categorization of people based on biometric or personal data (by determining applicants’ race, political views, trade union membership, or religious or philosophical beliefs). In the context of hiring, Article 5 of the Regulation also classifies as prohibited those AI systems that use subconscious or manipulative techniques to distort a candidate’s behavior by significantly impairing his or her ability to make informed decisions; that perform scoring based on social behavior or known, perceived or predicted personal characteristics (e.g., predicting an employee’s possible dismissal based on their previous work experience); or that create or enhance facial recognition databases by indiscriminately extracting facial images from the Internet or CCTV footage.

The Regulation classifies as high-risk those AI software products used, inter alia, for recruiting and selecting people (placing targeted job advertisements, analyzing and filtering job applications, evaluating candidates), for making decisions affecting the terms and conditions of employment, promotion and termination of employment, for assigning tasks based on individual behavior, personality traits or characteristics, and for monitoring or evaluating people in employment relationships. According to the authors of the Regulation, these AI systems may have a significant impact on employees’ career prospects, earnings and rights; they may perpetuate historical patterns of discrimination against, for example, women, certain age groups, persons with disabilities, or persons of a certain racial or ethnic origin, or violate their fundamental rights to personal data protection and privacy24.

Under Article 6 of the Regulation, AI software products are not considered high-risk systems if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including because they do not significantly influence decision-making results, provided that one or more of the following criteria are met: a) the AI system is designed to perform a narrow procedural task; b) the AI system is designed to improve the result of an action previously performed by a human; c) the AI system is designed to identify decision-making patterns or deviations from previous decision-making patterns, and is not intended to replace or influence a previously performed human assessment without proper human validation; d) the AI system is designed to perform a preparatory task for an assessment consistent with the purposes listed in Annex III to the Regulation (for example, pre-cataloging candidates’ applications using an AI algorithm).

The Regulation contains risk management methods for high-risk AI software products. These include: testing the AI system (identification and analysis of foreseeable risks); risk assessment with and without the participation of a notified body (throughout the life cycle of the AI system); and the development and adoption of appropriate and targeted risk management measures. Risk management is entrusted to the deployer – the person using the AI system under its authority (unless the AI system is used for personal, non-professional activities). A deployer can be either an employer or a person who, on behalf of an employer, uses an AI system for selecting and recruiting employees.

The Regulation sets out the responsibilities of deployers of high-risk AI systems, which, among other things, should mitigate potential violations of applicants’ rights. For example, deployers are required to ensure sufficient transparency of the operation of a high-risk AI system (i.e., the system must be designed and used in a manner that allows its output to be interpreted and used appropriately); to inform applicants and employees that they will be subject to a high-risk AI system; and to ensure an appropriate level of accuracy, reliability and cybersecurity (high-risk AI systems must be resilient to unauthorized attempts by third parties to alter their algorithms, results or performance by exploiting system vulnerabilities). When high-risk AI systems are used, the Regulation recommends that automatically made decisions not be relied on alone; human beings should be involved in their final verification or evaluation.

The algorithmic results provided by high-risk AI systems in recruiting may be influenced by biases that tend to be progressively reinforced by machine learning, thus perpetuating and aggravating existing discrimination, in particular against persons belonging to certain vulnerable groups. The Regulation therefore draws attention to the inadmissibility of biased algorithms in high-risk AI systems. In particular, large data sets in AI systems should take into account, to the extent required by their intended purpose, the features, characteristics or elements specific to the particular geographical, contextual, behavioral or functional environment in which the AI system is intended to be used. High-risk AI systems that continue to learn after deployment should be designed to eliminate or minimize the risk of potentially biased results affecting the baseline for future operations.

Thus, foreign experience demonstrates the main directions in the legal regulation of the AI use for hiring, which correspond to the previously identified legal problems in this area: the provisions concerning the AI use for hiring should contain requirements for transparency and consistency of information collection, processing and storage, unbiased algorithms and their periodic monitoring, the employer’s responsibility for decisions made by AI when hiring.

Conclusions

The intensive introduction of AI in the field of labor management, in particular the hiring of employees, creates both potential opportunities and significant risks. On the one hand, AI can significantly optimize and improve the efficiency of hiring procedures; on the other hand, legal problems arise related to the violation of applicants’ rights and the employer’s responsibility for algorithm errors.

Accordingly, taking into account the problems highlighted and the foreign experience studied, it is relevant for the Russian legislator to develop and include in the labor legislation the following provisions, along the three main directions of regulating the use of AI for hiring employees.

I. Transparency and consistency in the collection, processing and storage of information when using AI to hire employees.

Chapter 14 of the Labor Code of the Russian Federation sets forth the norms related to the protection of employees’ personal data, including those ensuring transparency and consistency in the collection, processing, storage and use of such data. It seems reasonable to extend the provisions of this chapter to job applicants and job entrants and to supplement the relevant articles of Chapter 14 of the Labor Code of the Russian Federation with the following provisions:

1) employers must notify job applicants in advance in writing that AI may be used to collect, process and analyze their personal data;

2) employers must notify job applicants in advance in writing about the use of AI to conduct and analyze video interviews;

3) employers must explain what AI software is used, how it works and what characteristics of the data are used to assess job applicants;

4) job applicants must give their written consent to be assessed by AI software;

5) employers may not share video recordings of job applicants with other parties, including software developers;

6) employers must delete the data collected by AI about a job applicant, including during video interviews, within 15 days of receiving the applicant’s written request;

7) employers may not use AI technology when making hiring decisions concerning persons with disabilities.

The list and scope of personal data permissible for AI processing in hiring should be regulated through a standardization mechanism. It should be noted that in 2020 the Federal Agency for Technical Regulation and Metrology developed the Perspective Program of Standardization in the priority area “Artificial Intelligence” for 2021–2024. It provides for the development of 217 standards, among which there are no standards in the field of using AI for hiring. Here, the provisions of GOST R 59277-2020, the national standard “Artificial intelligence systems. Classification of artificial intelligence systems”, should be taken into account. The standard classifies information by the following confidentiality classes: class 0 – open information; class 1 – internal information; class 2 – confidential information; class 3 – secret information. This classification can help to encode a clear list and admissible scope of applicants’ information within AI-assisted hiring systems; a hypothetical illustration is given below.
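The sketch below shows how such an encoding might look; the mapping of applicant data fields to the GOST R 59277-2020 confidentiality classes is the author’s proposal made concrete, not an existing standard or regulation.

```python
# Hypothetical encoding of applicant data against the GOST R 59277-2020
# confidentiality classes cited above. The field-to-class mapping and the
# permitted cap are illustrative assumptions.
from enum import IntEnum

class Confidentiality(IntEnum):
    OPEN = 0          # class 0 - open information
    INTERNAL = 1      # class 1 - internal information
    CONFIDENTIAL = 2  # class 2 - confidential information
    SECRET = 3        # class 3 - secret information

APPLICANT_DATA_CLASSES = {
    "published_resume": Confidentiality.OPEN,
    "test_results": Confidentiality.INTERNAL,
    "passport_data": Confidentiality.CONFIDENTIAL,
    "medical_records": Confidentiality.SECRET,
}

def may_process(field: str, cap: Confidentiality) -> bool:
    """Allow AI processing only up to the legally permitted class."""
    return APPLICANT_DATA_CLASSES[field] <= cap

# E.g., if a standard capped AI hiring systems at class 1:
print(may_process("published_resume", Confidentiality.INTERNAL))  # True
print(may_process("passport_data", Confidentiality.INTERNAL))     # False
```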

II. Unbiased artificial intelligence algorithms for hiring employees.

It is crucial to code AI for hiring in a way that avoids biased algorithms and, as a result, discrimination and unjustified rejection. Mandatory certification of the relevant software can serve as a tool to ensure unbiased AI hiring algorithms.

Certification of software and AI algorithms is currently not mandatory in Russia under RF Government Resolution No. 982 of 01.12.2009 “On approval of the unified list of products subject to compulsory certification and the unified list of products whose conformity is confirmed in the form of a declaration of conformity”25. The Resolution states that software is subject to confirmation of conformity with the manufacturer’s declared specifications or state standards. However, the development of AI systems and the emergence of significant risks of violating citizens’ labor rights require including AI and machine learning software in the list of products subject to mandatory certification based on developed state standards, as well as to periodic monitoring. Therefore, employers should be required to conduct mandatory annual monitoring of the AI technology used for hiring and to send a monitoring report to the certification center where the software used by the employer was certified.

It is also relevant to consider prohibiting employers from using AI scoring models for hiring, even with the applicant’s consent, as these models have a significant risk of bias in predicting the applicant’s labor behavior.

III. Liability of the employer for the decision made by artificial intelligence to hire employees.

AI cannot have legal personality or be responsible for the results of collecting, processing and analyzing applicants’ data and making hiring decisions. Moreover, this position is already reflected in paragraph 6, part 1, of Article 86 of the current Labor Code of the Russian Federation, which stipulates that, when making decisions affecting an employee’s interests, the employer has no right to rely on the employee’s personal data obtained solely as a result of automated processing or electronic receipt. This norm should also be extended to job applicants and job entrants. That is, employers are liable for any negative impact caused by AI tools. In addition, employers are liable for the actions of third parties whom they engage to manage decision-making systems, including automated ones, if such systems have a discriminatory impact. It is also relevant to enshrine the latter provision in Article 90 of the Labor Code of the Russian Federation.

 

1 Artificial intelligence started to select personnel in Russia. (2023, 11 August). Ura.ru. https://clck.ru/3CVvmv

2 HRlink research: 71 % of HRs treat AI positively. (2023, 26 December). Artificial intelligence in the Russian Federation. https://clck.ru/3CVvqB

3 In August 2023, the Ministry of Digital Development, Communications and Mass Media of the Russian Federation announced the launch of the State Personnel experiment on the Gostech platform, which involves the use of AI for hiring in the civil service. By 2030, it is planned to create a new information HR system for the development of civil servants based on AI. See: Artificial intelligence will hire civil servants: will the technology replace a tender commission? (2023, August 23). RG.ru. https://clck.ru/3CVvvC

4 Fund for the Center for new technologies development and commercialization. (2020). Concept of comprehensive regulation (legal regulation) of relations arising in connection with the digital economy development. Moscow.

5 Bogen M., & Rieke A. (2018). Help wanted: an examination of hiring algorithms, equity, and bias. https://goo.su/wc44

6 UNESCO. Artificial Intelligence: Examples of ethical dilemmas. (2023, 21 April). https://clck.ru/3CVwHU

7 Markel, K. A., Mildner, A. R., & Lipson, J. L. (2023, September 29). AI and employee privacy: important considerations for employers. Reuters. https://clck.ru/3CVwcu

8 Sberbank taught artificial intelligence to predict quits. (2019, October 18). Forbes. https://clck.ru/3CVwoY

9 Oppenheim, M. (2018, 11 October). Amazon scraps “sexist AI” recruitment tool. Independent. https://clck.ru/3CVwqY

10 MTI: AI interview software doesn’t even understand what language a candidate speaks. (2021, 8 July). Habr. https://clck.ru/3CVxKF

11 Roller, A. (2023, September 8). AI hiring bias: How HR can understand and mitigate potential pitfalls. https://clck.ru/3CVxwT

12 Artificial Intelligence Video Interview Act (820 ILCS 42). https://clck.ru/3CVz5D

13 Ibid.

14 Illinois House Bill 3773 (2024, September 9). https://clck.ru/3DcoQm

15 Md. Code, Lab. & Empl. § 3-717. https://clck.ru/3CVzMD

16 Regulations to Protect Against Employment Discrimination in Automated Decision-Making Systems. (2024, May 17). https://goo.su/FwwU

17 Consumer Protections for Artificial Intelligence Act. (2024, May 17). https://clck.ru/3DcuBv

18 EEOC Artificial Intelligence and Algorithmic Fairness Initiative. (2021, October 28). https://clck.ru/3CVzfn

19 EEOC The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. (2022, May 12). https://clck.ru/3CVzk3

20 EEOC Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. (2023, May 18). https://clck.ru/3CVzoG

21 Joint statement on enforcement efforts against discrimination and bias in automated systems. (2023, April 25). https://clck.ru/3CVzxw

22 Artificial Intelligence and Equal Employment Opportunity for Federal Contractors (2024, April 24). https://clck.ru/3Dcv8d

23 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). https://clck.ru/3DdUr7

24 Ibid.

25 Resolution of the Government of the Russian Federation dated 01.12.2009 No. 982. KonsultantPlyus. https://clck.ru/3EPpur

References

1. Basu, N., & Dave, R. (2024). Artificial intelligence and job sector – need for laws. Educational Administration: Theory And Practice, 30(3), 690–701. https://doi.org/10.53555/kuey.v30i3.1337

2. Bigman, Y. E., Wilson, D., Arnestad, M. N., Waytz, A., & Gray, K. (2023). Algorithmic discrimination causes less moral outrage than human discrimination. Journal of Experimental Psychology: General, 152(1), 4–27. https://doi.org/10.1037/xge0001250

3. Chen, L., Ma, R., Hannák, A., & Wilson, C. (2018). Investigating the impact of gender on rank in resume search engines. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 651, 1–14. https://doi.org/10.1145/3173574.3174225

4. Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10. https://doi.org/10.1057/s41599-023-02079-x

5. De Stefano, V. (2019). “Negotiating the algorithm”: automation, artificial intelligence and labor protection. Comparative Labor Law & Policy Journal, 41(1), 15–46. https://doi.org/10.2139/ssrn.3178233

6. Estrada, G., Coronado, M., Soria, Y., Jiménez, S., Cristobal, J., Torres, E., Camargo, M., Taipe, M., Aparicio, S., Luis, J., & Briceño, B. (2024). Inteligencia artificial en la gestión de los recursos humanos [Artificial intelligence in human resources management]. Revista de Climatología Edición Especial Ciencias Sociales, 24, 2082–2092. (In Spanish).

7. Fedoseeva, O. V. (2021). On creating and developing emotional artificial intelligence. Rossiya: tendencii i perspektivy razvitiya, 16-1, 674–676. (In Russ.).

8. Fuller, J., Raman, M., Sage-Gavin, E., Hines, K., et al. (2021). Hidden workers: untapped talent. Harvard Business School Project on Managing the Future of Work and Accenture.

9. Ginu, G., Mary, T., Anusha, B., & Anson, M. (2021). A systematic review of artificial intelligence and hiring: present position and future research areas. Indian Journal of Economics and Business, 20(2), 57–70. http://dx.doi.org/10.5281/zenodo.5407602

10. Haenlein, M. & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review, 61(4), 5–14. https://doi.org/10.1177/0008125619864925

11. Hisamova, Z. I., & Begishev, I. R. (2020). The nature of artificial intelligence and the problem of legal personality determination. Moscow Juridical Journal, 2, 96–106. (In Russ.). https://doi.org/10.18384/2310-6794-2020-2-96-106

12. Hunkenschroer, A. L., & Kriebitz, A. (2023). Is AI recruiting (un)ethical? A human rights perspective on the use of AI for hiring. AI Ethics, 3, 199–213. https://doi.org/10.1007/s43681-022-00166-4

13. Ivanova, M., Bronowicka, J., Kocher, E., & Degner, A. (2018). The App as a Boss? Control and Autonomy in Application-Based Management: Working Paper. Europa-Universität Viadrina Frankfurt. https://doi.org/10.11584/arbeit-grenze-fluss.2

14. Jackson, M. (2021). Artificial intelligence & algorithmic bias: the issues with technology reflecting history & humans. Journal of Business & Technology Law, 16(2), 299–316.

15. Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007

16. Kharitonova, Yu. S., Savina, V. S., & Pagnini, F. (2021). Artificial intelligence’s algorithmic bias: ethical and legal issues. Perm University Herald. Juridical Sciences, 53, 488–515. (In Russ.). https://doi.org/10.17072/1995-4190-2021-53-488-515

17. Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13, 795–848. https://doi.org/10.1007/s40685-020-00134-w

18. Kuncel, N., Klieger, D., & Ones, D. (2014). In hiring, algorithms beat instinct. Harvard Business Review, 92(5), 32.

19. Langer, M., Cornelius, J. K., & Andromachi, F. (2023). Information as a double-edged sword: the role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Computers in Human Behavior, 81, 19–30. https://doi.org/10.1016/j.chb.2017.11.036

20. Langer, M., König, C. J., Sanchez, D. R.-P., & Samadi, S. (2019). Highly automated interviews: applicant reactions and the organizational context. Journal of Managerial Psychology, 35(4), 301–314. https://doi.org/10.1108/jmp-09-2018-0402

21. Lee, M. K. (2018). Understanding perception of algorithmic decisions: fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 1–16. https://doi.org/10.1177/2053951718756684

22. Lőrincz, G. (2018). Kommentár a munka törvénykönyvéről szóló 2032. évi I. törvényhez: Munkajogi sci-fi [Commentary on Act I of 2032 on the Labor Code: labor law sci-fi]. Pécsi Munkajogi Közlemények, 11(1–2), 7–34. (In Hungarian).

23. Lukács, A., & Váradi, S. (2023). GDPR-compliant AI-based automated decision-making in the world of work. Computer Law & Security Review, 50, 105848. https://doi.org/10.1016/j.clsr.2023.105848

24. Novikov, D. A. (2023). Critical remarks on the liberal understanding in sociological and legal studies of the phenomenon of labour in the information society. Russian Journal of Labour & Law, 13, 81–91. (In Russ.). https://doi.org/10.21638/spbu32.2023.105

25. Pradeep, T. (2024). Labour law in the era of artificial intelligence and automation. International Journal For Multidisciplinary Research, 6(2). https://doi.org/10.36948/ijfmr.2024.v06i02.16324

26. Reddy, S. (2022). The legal issues regarding the use of artificial intelligence to screen social media profiles for the hiring of prospective employees. Obiter, 43(2), 113–131. https://doi.org/10.17159/obiter.v43i2.14254

27. Serova, A. V., & Shcherbakova, O. V. (2022). The employee’s right to privacy transformation: digitalization challenges. Kutafin Law Review, 9(3), 437–465. https://doi.org/10.17803/2713-0525.2022.3.21.437-465

28. Shcherbakova, O. V. (2021). The use of artificial intelligence programs when recruiting employees. Electronic Supplement to the Russian Juridical Journal, 3, 72–76. (In Russ.). https://doi.org/10.34076/22196838_2021_3_72

29. Sivathanu, B., & Pillai, R. (2018). Smart HR 4.0 – how industry 4.0 is disrupting HR. Human Resource Management International Digest, 26(4), 7–11. https://doi.org/10.1108/hrmid-04-2018-0059


About the Author

D. A. Novikov
Saint Petersburg State University
Russian Federation

Denis A. Novikov – Cand. Sci. (Law), Associate Professor, Department of Labor and Social Law

7–9 Universitetskaya nab., 199034 Saint Petersburg

Scopus Author ID: https://www.scopus.com/authid/detail.uri?authorId=57218897105  

WoS Researcher ID: https://www.webofscience.com/wos/author/record/CAA-7871-2022

Google Scholar ID: https://scholar.google.com/citations?user=gEjH4S4AAAAJ 

RSCI Author ID: https://elibrary.ru/author_items.asp?authorid=1149154


Competing Interests:

The author declares no conflict of interests.





This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2949-2483 (Online)