AI Discrimination in Hiring, and What We Can Do About It
Thanks for your application! Our algorithm will be in touch.
Late last year, the U.S. Equal Employment Opportunity Commission launched an initiative to monitor the use of AI in employment decisions and ensure compliance with federal civil rights laws. It was about time. By 2020, many hiring managers in the U.S. were already using algorithmic software and AI tools in their recruitment process.
The pandemic profoundly shook the labor market. But the biggest transformation came in April 2021, when workers quit their jobs in record numbers, marking the beginning of the "Great Resignation." Fast forward to the present, and the future of work remains uncertain. The Bureau of Labor Statistics projects that the U.S. economy will add 8.3 million jobs from 2021 to 2031, making a streamlined hiring process all the more important. But the discriminatory hiring that can result from the use of AI inevitably hurts people of color, women, people with disabilities, and society as a whole. Improved employment outcomes for members of these groups would benefit the entire labor market.
Job seekers now often use digital spaces to find jobs and connect with recruiters, especially on internet platforms like Indeed, LinkedIn, and Monster. Companies like Workable, Taleo, and Recruitee offer applicant tracking systems (ATS) to help hiring managers streamline the recruiting process, while candidates use JobScan and VMock to help improve how their resume appears to standard screening algorithms.
Alongside efforts to close the gender gap via pay equity and investments in DEI initiatives, hiring managers are turning to digital recruitment to expand the pool of potential candidates. One commonly used tool is LinkedIn Recruiter, a product that connects recruiters with possible candidates. Like other major job platforms (ZipRecruiter and Glassdoor, to name two), LinkedIn collects explicit, implicit, and behavioral data to connect recruiters to candidates and present opportunities to job seekers. Explicit data comprises everything on a candidate's profile. Any inferences that can be drawn from a profile are categorized as implicit data; for example, a data analyst's profile could convey programming or data scraping skills, even if the analyst doesn't directly mention those skills. Behavioral data includes all of your actions on the platform, from the positions you search for to the types of posts you engage with.
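The three data categories above can be sketched as a simple data structure. This is a hypothetical illustration of how a platform might bucket candidate signals; the field names and example values are invented, not any platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateSignals:
    """Hypothetical grouping of the three data categories described above."""
    explicit: dict = field(default_factory=dict)    # stated directly on the profile
    implicit: dict = field(default_factory=dict)    # inferred from the profile
    behavioral: dict = field(default_factory=dict)  # actions taken on the platform

profile = CandidateSignals(
    explicit={"title": "Data Analyst", "skills": ["SQL", "Excel"]},
    implicit={"likely_skills": ["Python", "data scraping"]},  # inferred, never stated
    behavioral={"searched": ["senior analyst"], "posts_engaged": 14},
)
```

Note that only the first bucket is under the candidate's direct control; the other two are generated about them, which is part of why transparency obligations matter.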
So where does the bias come into play?
Bias in Algorithmic Screening
Research suggests that women often downplay their skills on resumes, while men often exaggerate and include phrases tailored to the position, making their resumes stand out to an algorithm. Applicants may also unconsciously use gendered language by including words that are associated with gender stereotypes. For example, men are more likely to use assertive words like "leader," "competitive," and "dominant," whereas women may use words like "support," "understand," and "interpersonal." This can put female applicants at a disadvantage by replicating the gendered ways in which hiring managers judge applicants: when the algorithm scans their resumes alongside those of male counterparts, it may read the men as more qualified based on the active language they use. Gendered language isn't exclusive to applicants, though. Job descriptions are flooded with it, enough to warrant the creation of dedicated decoder tools that check whether a job description is biased.
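A minimal sketch of what such a decoder tool does is below. The word lists are just the examples from this article, not a validated lexicon, and the function is an illustration rather than any real tool's implementation.

```python
import re

# Tiny example word lists, taken from the article; real decoders use
# research-derived lexicons with hundreds of coded terms.
MASCULINE_CODED = {"leader", "competitive", "dominant", "assertive"}
FEMININE_CODED = {"support", "understand", "interpersonal", "collaborative"}

def audit_language(text: str) -> dict:
    """Report which masculine- and feminine-coded words appear in a job description."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

report = audit_language("We need a dominant, competitive leader who can support a team.")
# report flags 'competitive', 'dominant', 'leader' as masculine-coded
# and 'support' as feminine-coded
```

A posting that skews heavily toward one column can then be reworded before it discourages qualified applicants.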
Nearly six decades ago, Title VII of the Civil Rights Act of 1964 made it illegal for firms to discriminate on the basis of race, sex, religion, and national origin. However, unregulated algorithmic screening tools don't always comply with this mandate. A large-scale field experiment found evidence of racial discrimination in present-day hiring processes. The researchers submitted around 84,000 fake applications to entry-level positions at companies across the U.S. They found that applications with distinctively Black names, like Antwan, Darnell, Kenya, and Tamika, were on average less likely to receive a response than applications with distinctively white names like Brad, Joshua, Erin, and Rebecca.
But racial and gender discrimination are only the beginning of the bias that AI perpetuates. Individuals with disabilities who are covered by the Americans with Disabilities Act have the right to request accommodations during the hiring process, rights enforced by the Equal Employment Opportunity Commission (EEOC). Earlier this year, the EEOC and the Department of Justice (DOJ) Civil Rights Division released guidance warning employers that the use of algorithmic screening tools could be a violation of the ADA. Certain hiring practices, like personality tests, AI-scored video interviews, and gamified assessments, fail to consider individuals who may need accommodations. If an individual with anxiety speaks at a rapid pace during a video interview, an algorithm that links a comfortable speaking pace with successful career outcomes would assign that candidate a low score.
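The video-interview failure mode can be made concrete with a toy scoring function. This is not any vendor's actual model; the "ideal" pace and tolerance are invented numbers, included only to show how a scorer trained on one notion of a comfortable pace penalizes candidates whose pace differs for reasons unrelated to job performance.

```python
def pace_score(words_per_minute: float,
               ideal: float = 150.0,
               tolerance: float = 30.0) -> float:
    """Toy scorer: returns 1.0 at the assumed 'ideal' speaking pace and
    decays linearly to 0.0 as the pace deviates from it."""
    deviation = abs(words_per_minute - ideal)
    return max(0.0, 1.0 - deviation / (2 * tolerance))

pace_score(150)  # 1.0: matches the model's baked-in notion of "comfortable"
pace_score(210)  # 0.0: a fast, anxious speaker is scored as unqualified
```

The problem is structural: whatever data defined "ideal" becomes a hidden requirement that has nothing to do with whether the candidate can do the job, which is exactly the accommodation gap the EEOC guidance targets.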
Regulating AI
Discriminatory screening algorithms across all sectors are drawing more attention from legislators and policymakers. The strongest effort comes from D.C. Attorney General Karl Racine, who announced a bill banning algorithmic discrimination at the end of 2021. Previously, U.S. Senators Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) joined Representative Yvette Clarke (D-N.Y.) to introduce the Algorithmic Accountability Act. The bill would require companies to conduct "impact assessments" that scan systems for bias, effectiveness, and other factors when AI is used to make key decisions related to employment, loans, and even housing applications. It also proposes the creation of a public repository at the Federal Trade Commission to track and monitor these systems.
In addition to local legislative efforts, New America's Open Technology Institute (OTI) recommends a number of steps that digital platforms should take to make their algorithms fair and transparent, like publishing detailed policies that allow consumers to understand how a company uses algorithmic systems and for what purposes. Companies should also be more transparent by describing how they use personal data to train and inform those systems, while providing users with the opportunity to opt out of algorithmic systems entirely.
What Can Job Seekers Do?
It begins with your resume. Companies use ATS to scan resumes for keywords that match the language in the job description, so list your skills, use action-oriented words pulled from the job description, and include quantitative details. If you're applying for positions that don't require design or visual skills, consider a basic format in lieu of a heavily designed resume; most career coaches recommend submitting a Microsoft Word or PDF file. You can often tell whether a company uses ATS by looking for branding on its website or checking the web domain of an online application for the name of an ATS program.
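The keyword matching described above can be approximated in a few lines. Real ATS products are proprietary and far more sophisticated; this rough sketch only illustrates why mirroring the job description's exact wording matters (note that "dashboards" fails to match "dashboard" below, a limitation candidates effectively work around by echoing the posting verbatim).

```python
import re

STOPWORDS = frozenset({"a", "and", "in", "of", "the", "with"})

def keyword_overlap(resume: str, job_description: str) -> set:
    """Return words that appear in both the resume and the job description."""
    def tokenize(text: str) -> set:
        # keep +/# so terms like 'c++' and 'c#' survive tokenization
        return set(re.findall(r"[a-z+#]+", text.lower())) - STOPWORDS
    return tokenize(resume) & tokenize(job_description)

jd = "Seeking analyst with SQL, Python, and dashboard experience."
resume = "Built dashboards in Python; wrote SQL pipelines."
print(sorted(keyword_overlap(resume, jd)))  # prints ['python', 'sql']
```

Tailoring a resume per posting, using the posting's own terms, is essentially optimizing this overlap.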
Though screening algorithms streamline the hiring process for companies, the use of AI discourages job seekers and risks subjecting them to biased outcomes. Every party has a role to play in fixing the problem. Tech companies that design the algorithms need to make the datasets that inform those decisions publicly available, explain to candidates how they're being assessed, and inform them of how their data will be used. Organizations using these tools need to recognize their limitations and introduce alternative hiring efforts to recruit more women, people of color, and people with disabilities. Applicants can, and should, report companies that use discriminatory hiring practices, and our elected officials must step in to prioritize workers' rights and hold companies accountable. Achieving equitable employment outcomes and driving social and economic mobility starts with deconstructing technological barriers that deny job seekers a fair shot at success.