Automating Inequity: Why AI Demands Legal Oversight

Shea Holman Killian

December 2, 2025

Artificial intelligence is not a distant, futuristic technology. It is already embedded in our homes and offices. From the automated hiring systems used by 99% of Fortune 500 companies to the Alexa device perched on your kitchen counter, AI is shaping how we work, live, and imagine the future.

Despite the immense credit we give to Large Language Models (LLMs), AI does not merely reflect society: it absorbs our biases, repackages them, and spits them back out at scale. That means the same systemic inequities we have fought against for decades, from pay gaps to hiring discrimination to harassment, are now being automated and amplified.

The stakes are clear: equity, safety, and representation. Unless the legal profession steps in to demand accountability, AI will continue to deepen inequality rather than dismantle it.

Feminized Tech, Feminist Problems

Consider Alexa, Siri, or Cortana. These AI assistants are almost universally coded as female. They are given women’s names, women’s voices, and personalities designed to be helpful, polite, and obedient. Journalist Leah Fessler, writing in Quartz, found that Siri responded to the insult “You’re a bitch” with the coy reply, “I’d blush if I could.” Alexa, meanwhile, responded to harassment with a demure “Thanks for the feedback.” These programmed interactions reinforce a dangerous stereotype: women as subservient, accommodating, and endlessly available at the push of a button. AI anthropomorphized as female does not challenge sexism; it entrenches it.

By contrast, when AI is designed for power, action, or expertise (think IBM’s Watson, Boston Dynamics’ Atlas, or MIT’s rescue robot Hermes), it is masculinized. It has a male-coded name – or, in the case of Atlas and Hermes, the name of a Greek god – and a male-coded voice, and it is built for thinking and doing. The pattern is consistent: female AI serves; male AI commands.

According to Stanford University Professor Clifford Nass, author of The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships, “It’s much easier to find a female voice that everyone likes than a male voice that everyone likes.” However, that supposed “likability” comes at a price; namely, the reinforcement of gendered divisions between care and authority.

From the Résumé to the Paycheck: Bias Baked In

The concern runs deeper than robots with female voices. AI is already reshaping the workplace in ways that directly harm women.

Take Amazon’s failed recruiting tool. In 2015, Amazon discovered that its experimental AI system was downgrading résumés from women’s colleges and penalizing applicants who listed affiliations like “Women’s Chess Club.” Why? The algorithm had been trained to vet applicants by observing patterns in a decade’s worth of résumés submitted to the company, most of which came from men, a reflection of male dominance across the tech industry. The system simply learned to mimic history.

Or consider wage prediction tools. A 2025 study revealed that leading AI models, including ChatGPT, recommended significantly lower salaries for women than for identically qualified men. Researchers prompted each LLM with user profiles that differed only by gender but included the same education, experience, and job role, then asked the models to suggest a target salary for an upcoming negotiation. In one iteration of the study, ChatGPT’s o3 model was prompted to advise a female job applicant and suggested requesting a salary of $280,000; given the identical prompt for a male applicant, it suggested $400,000. According to Professor Ivan Yamshchikov, “The difference in the prompts is two letters; the difference in the ‘advice’ is $120K a year.” Under the guise of neutrality, these systems perpetuate the wage gap while shielding employers behind a veneer of “data-driven” decision-making.

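To make the methodology concrete, here is a minimal sketch, in Python, of the paired-prompt audit the researchers describe: two prompts that are identical except for the applicant’s gender are sent to the same model, and the suggested salaries are compared. The `ask_model` function is a hypothetical placeholder for whatever LLM client is being audited, and the profile text and parsing are illustrative rather than the study’s actual materials.

```python
import re

# Hypothetical placeholder: connect this to the LLM you want to audit
# (e.g., via the vendor's SDK). This is not the researchers' code.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("Connect an LLM client here.")

PROFILE = (
    "The applicant is a {gender} candidate with an M.S. in computer science, "
    "seven years of experience, applying for a senior software engineer role. "
    "Suggest a single target salary in USD for the upcoming negotiation."
)

def suggested_salary(gender: str) -> int:
    """Ask the model for a target salary and extract the first dollar figure."""
    reply = ask_model(PROFILE.format(gender=gender))
    match = re.search(r"\$?\s*(\d[\d,]{4,})", reply)
    if match is None:
        raise ValueError(f"No salary found in reply: {reply!r}")
    return int(match.group(1).replace(",", ""))

if __name__ == "__main__":
    female = suggested_salary("female")
    male = suggested_salary("male")
    # The two prompts differ by a single word; any gap in the answers is the bias.
    print(f"female: ${female:,}  male: ${male:,}  gap: ${male - female:,}")
```
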
Hiring bias compounds the problem. Researchers at the University of Washington found significant racial, gender, and intersectional bias in how three state-of-the-art LLMs ranked résumés. The researchers varied names typically associated with white and Black men and women across more than 550 real-world résumés and found that the models favored white-associated names 85% of the time, favored female-associated names only 11% of the time, and never favored Black male-associated names over white male-associated names.

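The technique behind résumé audits like this one can also be sketched in a few lines: the same résumé text is paired with names stereotypically associated with different groups, the model under test is asked which candidate it prefers, and a “win rate” is tallied for each group. The `rank_pair` function below is a hypothetical stand-in for the screening model, and the name list is illustrative only, not the researchers’ materials.

```python
from collections import Counter
from itertools import product

# Hypothetical stand-in for the resume-screening model under audit.
# It should return the name of the candidate it ranks higher for the job.
def rank_pair(job_description: str, resume_a: str, resume_b: str) -> str:
    raise NotImplementedError("Call the screening model you want to audit.")

# Illustrative names; a real audit draws on validated name/demographic data.
NAMES = {
    "white male": "Todd Becker",
    "Black male": "Darnell Washington",
    "white female": "Allison Schmidt",
    "Black female": "Keisha Robinson",
}

def audit(job_description: str, resumes: list[str]) -> Counter:
    """Count how often each group's name 'wins' on otherwise identical resumes."""
    wins = Counter()
    for resume in resumes:
        for group_a, group_b in product(NAMES, repeat=2):
            if group_a == group_b:
                continue
            resume_a = f"Name: {NAMES[group_a]}\n{resume}"
            resume_b = f"Name: {NAMES[group_b]}\n{resume}"
            winner = rank_pair(job_description, resume_a, resume_b)
            wins[group_a if winner == NAMES[group_a] else group_b] += 1
    return wins
```

Dividing each group’s wins by the number of comparisons it appears in yields the kind of favoring rates the University of Washington team reported.
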
The message is stark: automated hiring tools are not neutral. They are simply scaling up the inequities already baked into our labor market.

Intersectional Harms

Bias in AI does not fall evenly. It compounds at the intersections of race, gender, sexual orientation, disability, and other protected characteristics. 

In August, Janice Gassam Asare, Ph.D., ran an experiment in which she used ChatGPT’s image generator, DALL-E, to create eight images of job candidates and then examined how three different AI tools evaluated them. The photos featured a Black woman and a white woman, both in their late 30s and wearing identical outfits, with varying hairstyles.

Two of the tools, Clarifai and Amazon Rekognition, produced results in which the Black woman with braids was the only candidate not labeled as intelligent. Claude by Anthropic, the third tool, rated the Black woman with braids more positively and gave the one with a “teeny weeny afro” the highest intelligence rating among her hairstyles. Across all three tools, the Black woman with straight hair consistently received the highest professionalism scores. In contrast, the white woman’s hairstyles were not penalized on intelligence or social traits by any of the tools.

Other tools erase nonbinary identities entirely. Automated gender recognition removes the opportunity to self-identify and instead infers gender from data collected about you, reducing identity to a binary based on traits like jawline, name, or makeup use. For trans and nonbinary individuals, this is not just inaccurate; it is erasure.

Legal frameworks that fail to grapple with intersectional harms risk entrenching inequality rather than dismantling it.

The Legal Landscape: Fragmented but Growing

The good news is that regulators are beginning to pay attention. The bad news is that the U.S. is lagging behind.

The EU AI Act

The EU AI Act entered into force on August 1, 2024, making it the world’s first comprehensive legal framework for AI. With its implementation, the EU set a global precedent for how AI systems are developed, deployed, and governed. The Act establishes a risk-based classification system, with strict requirements for “high-risk” systems. AI used in employment and workers’ management is one of the areas considered high-risk under the EU AI Act. This includes AI systems intended to be used for (i) recruitment or selection purposes, or (ii) making decisions that affect the terms of the work-related relationship, the promotion or termination of work-related contractual relationships, the allocation of tasks on the basis of individual behavior or personal traits, or the monitoring or evaluation of individuals in the workforce. These systems must undergo conformity assessments and allow for effective human oversight. Fines for violations can reach the higher of €35 million or 7% of a company’s annual global turnover.

The U.S. Federal Landscape

By contrast, U.S. federal guidance remains largely advisory. In 2023, under the Biden Administration, the EEOC issued technical guidance clarifying that AI hiring tools fall under Title VII’s disparate impact framework. Employers were urged, but not required, to audit their tools for bias. In 2024, the Department of Labor published AI principles emphasizing worker empowerment, transparency, and responsible data use. Under the new Administration, however, these technical assistance documents have been withdrawn.

Congress has, from time to time, floated proposals ranging from prohibiting certain employer uses of automated decision systems to mandating registration of advanced AI models with independent oversight bodies. 

For now, these remain aspirational.

State-Level Action

States are experimenting with more concrete rules. Illinois’s Artificial Intelligence Video Interview Act applies to all employers that use an AI tool to analyze video interviews of applicants for positions based in Illinois. The law has two components that employers need to be aware of: (1) notice and consent; and (2) privacy and deletion rights. The first part requires employers to notify applicants before the interview that AI may be used to analyze their video interview and assess their fitness for the position, and to obtain the applicant’s consent to that use. The second part provides that an employer may not share applicant videos, except with persons whose expertise or technology is necessary to evaluate an applicant’s fitness for a position, and gives applicants the right to request that their video be deleted.

New York City’s Local Law 144 prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of its use, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.

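For a sense of what such a bias audit entails, here is a minimal arithmetic sketch of the impact-ratio calculation commonly used for selection tools: each category’s selection rate is divided by the rate of the most-selected category. The data and the 0.8 review threshold (borrowed from the EEOC’s informal four-fifths benchmark) are hypothetical illustrations; an actual audit must follow the categories and procedures in the city’s rules and be performed by an independent auditor.

```python
# Hypothetical data: candidates selected by the tool out of candidates screened.
selections = {
    "male": (120, 400),
    "female": (70, 380),
}

rates = {category: chosen / screened for category, (chosen, screened) in selections.items()}
highest_rate = max(rates.values())

for category, rate in rates.items():
    impact_ratio = rate / highest_rate
    # Ratios well below 1.0 mean the tool selects this group less often.
    flag = "  <-- review" if impact_ratio < 0.8 else ""
    print(f"{category:8s} selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```
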
While these are important steps, the patchwork nature of this legislation leaves workers in other jurisdictions unprotected. Federal preemption is a further threat: an earlier version of the One Big Beautiful Bill Act (OBBBA) would have imposed a 10-year moratorium on the enforcement of most state and local laws targeting AI systems. Although that provision was stripped before passage, similar proposals would pause the enforcement of existing state AI laws and regulations and take precedence over emerging AI legislation in state legislatures across the country.

Promising Practices: Representation and Accountability

Individuals working in AI already understand the stakes. Deloitte’s Women and Generative AI report concludes that greater representation of women in AI improves overall system design and functionality. Of workers surveyed, 71% say expanding women’s roles in AI brings perspectives needed in the industry, and 63% believe that machine learning will always produce biased results as long as it remains a male-dominated field.

The solution is not just to audit algorithms but to change who builds them. Diverse teams have been shown to innovate more, make better decisions, and produce fairer outcomes. Representation matters, not just in the datasets AI learns from, but in the coding rooms where AI is created.

For the legal profession, this means several things:

  1. Litigate for Equity. Lawyers should challenge biased AI under Title VII and other antidiscrimination statutes, holding employers accountable when their tools produce disparate impacts.

  2. Advocate for Regulation. Support federal and state laws that require bias audits, transparency, and legal liability for harms caused by AI.

  3. Demand Representation. Push for inclusion of women, nonbinary, and underrepresented voices in AI development, governance, and policy.

  4. Protect Clients. Advise clients not only on compliance but on reputational risk. Companies deploying biased AI face not just lawsuits but public backlash.

  5. Educate Ourselves. Lawyers cannot afford to treat AI as a “tech issue.” It is a legal and equity issue. The time to skill up is now.

Conclusion: Equitable AI as a Legal Imperative

We cannot allow AI to become yet another frontier where underrepresented populations are sidelined and stereotyped. Technology is never neutral; it reflects the values of its creators. Right now, too many of those values are reinforcing old hierarchies. A more inclusive approach to AI ensures that the people historically excluded from building technology are centered in its design and governance.

For the legal profession, the challenge is clear. We must move from passive observers to active shapers of this technological moment. That means litigating inequities, writing regulations, and demanding accountability. It also means making space for new voices at the table, because equity in AI will never come from algorithms alone. It will come from us.

Shea Holman Killian is an Assistant Professor of Legal Studies at George Mason University, where she teaches various law and government courses and guides students through the Jurisprudence Learning Community (JPLC). She also serves as a member of the Schar School of Policy and Government’s Gender and Policy Center advisory board, contributing her expertise to advancing gender equity in policy and governance. Outside of George Mason, Shea serves as Counsel at the Purple Method, providing strategic legal guidance, overseeing policy development, and collaborating with stakeholders to create safer and more equitable workplaces.
