Imagine you are building a giant, automated robot chef to serve food to a massive city. You want this robot to be fair. But what does "fair" actually mean in this context?
This paper argues that most people trying to make AI fair are looking at the problem through an incomplete lens. They are mostly focused on Distributive Equality (making sure everyone gets an equal slice of the pie), while ignoring Relational Equality (making sure everyone is treated with the same respect and dignity at the table).
Here is a simple breakdown of the paper's argument, using everyday analogies.
1. The Old Way: Just Counting the Slices (Distributive Equality)
For a long time, researchers have tried to fix unfair AI by looking at numbers. They ask: "Does the robot give the same number of job interviews to men and women?" or "Does it reject the same percentage of loan applications from different racial groups?"
This is called Distributive Equality. It's like a teacher handing out cookies. If the teacher gives 50 cookies to the boys and 50 to the girls, the teacher thinks, "I am fair!" (A short code sketch after the list below shows what this counting looks like in practice.)
- The Good: This works well for Allocative Harms. These are situations where the AI steals resources. For example, if an AI hides housing ads from minority groups, it's stealing their "cookies" (opportunities). Fixing the numbers helps here.
- The Bad: This fails when the problem isn't about how many cookies you get, but how you are treated while getting them.
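To make "counting the slices" concrete: the most common distributive check is demographic parity, which simply compares selection rates across groups. Here is a minimal sketch in Python; the data, group names, and `selection_rates` helper are all hypothetical, invented for illustration.

```python
# A minimal sketch of a distributive fairness check: demographic parity.
# All data below is made up for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += int(selected)
    return {group: chosen[group] / totals[group] for group in totals}

# Hypothetical loan decisions: (applicant group, approved?)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

print(selection_rates(decisions))
# group_a ~0.33, group_b ~0.67 -> unequal shares of the "cookies".
# A purely distributive audit stops at this number.
```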
2. The Missing Piece: The Tone of the Voice (Relational Equality)
The paper argues that AI causes a second, deeper kind of harm called Representational Harm. This isn't about losing a job or a loan; it's about being insulted, erased, or mocked by the machine.
Imagine the robot chef doesn't just serve you a smaller portion; it serves you a tiny, burnt cookie while saying, "You people only deserve this because you're lazy," or it refuses to serve you at all because it thinks you don't exist.
This is Relational Equality. It asks: "Does the AI treat me as an equal human being, or does it treat me like a second-class citizen?"
The paper identifies four ways AI messes this up:
- Stereotyping: The AI assumes all nurses are women and all doctors are men. It's not just a math error; it reinforces the idea that women belong in supporting roles rather than positions of authority. (A small sketch after this list shows how this kind of skew can be measured.)
- Demeaning: The AI generates stories where people of a certain race are always the criminals, while others are always the heroes. It's like a bully constantly calling you names.
- Erasing: The AI's face recognition software can't see Black women. It's like the robot chef pretending you aren't even in the room.
- Reifying: The AI forces everyone into rigid boxes (like "Male" or "Female" only) and punishes anyone who doesn't fit, ignoring that human identity is fluid.
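For the stereotyping harm in particular, there is a well-known way to observe it technically rather than anecdotally: measure how strongly occupation words associate with gendered words in a model's embeddings, in the spirit of published embedding-association tests. The sketch below uses made-up 3-dimensional vectors, not a real model's.

```python
# A toy illustration of stereotype skew in word embeddings.
# The vectors are hypothetical 3-d stand-ins, not from a real model.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

emb = {
    "he":     [1.0, 0.1, 0.0],
    "she":    [0.1, 1.0, 0.0],
    "doctor": [0.9, 0.2, 0.3],
    "nurse":  [0.2, 0.9, 0.3],
}

for word in ("doctor", "nurse"):
    print(word,
          "he:",  round(cosine(emb[word], emb["he"]), 2),
          "she:", round(cosine(emb[word], emb["she"]), 2))
# "doctor" sits closer to "he", "nurse" closer to "she": the model has
# absorbed the stereotype even though no single output is "incorrect".
```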
The Core Problem: You can't fix "being insulted" just by giving the victim a bigger cookie. You have to change the attitude of the robot. Distributive equality (counting cookies) cannot explain why it's wrong to be called a "gorilla" by an image tagger.
3. The Solution: A Two-Pronged Approach
The author proposes a Multifaceted Framework. We need both lenses to fix AI.
- Lens A (Distributive): Make sure the AI distributes opportunities fairly (no stealing jobs or loans).
- Lens B (Relational): Make sure the AI treats everyone with equal respect and doesn't reinforce social hierarchies (no insults, no erasing, no stereotypes).
Why not just use Lens B?
Some philosophers say, "If we treat everyone with equal respect, the resources will follow." The author says, "No, that's too simple." If you ignore the resource distribution, you might have a polite robot that still starves the poor. If you ignore the respect, you might have a robot that gives everyone a cookie but calls them "worthless" while doing it. We need both.
4. Why "Technical Fixes" Aren't Enough
The paper critiques current attempts to fix AI. Many developers try to fix "Representational Harm" by simply redistributing the outputs, as if it were a distributive problem.
- Example: If an AI generates too many pictures of white men as doctors, the fix is to force it to generate 50% women and 50% men.
The Flaw: This is like a bully who stops calling you names but still pushes you around. Swapping the numbers doesn't touch the underlying prejudice, as the sketch after this list makes plain.
- If you force an AI to show a Black Pope or an Asian Nazi soldier just to "balance the numbers," you aren't being fair; you're being historically inaccurate and disrespectful.
- True fairness isn't just about the ratio of images; it's about the context and the power dynamics.
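To see why this kind of fix is shallow, here is a minimal sketch of what it amounts to mechanically: resampling outputs until a quota is met. `generate_image` is a hypothetical stand-in for a biased image generator, not any real system's API.

```python
# A sketch of the purely distributive "fix": resample a biased
# generator's outputs until a target demographic ratio is reached.
import random

def generate_image():
    # Hypothetical biased generator: 80% of "doctor" images depict men.
    return "man" if random.random() < 0.8 else "woman"

def rebalanced_batch(n, target_ratio=0.5):
    """Draw images, discarding any that would overshoot a group's quota."""
    quota = {"woman": round(n * target_ratio)}
    quota["man"] = n - quota["woman"]
    batch = []
    while len(batch) < n:
        image = generate_image()
        if quota[image] > 0:
            quota[image] -= 1
            batch.append(image)
    return batch

print(rebalanced_batch(10))  # 50/50 by construction
```

The ratio now comes out "fair" by construction, but nothing about how each group is depicted (roles, context, power dynamics) has changed, which is exactly the paper's objection.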
5. How Do We Actually Fix It? (The Roadmap)
The paper suggests we can't just tweak the code. We have to change the whole process of building AI, involving real people:
- Community Data Collection: Don't just scrape the internet for data. Ask the communities being harmed to help build the data (e.g., letting African communities help train translation tools for their own languages).
- Critical Reflection: Developers and users need to stop and think: "What biases are we bringing in? Who is being hurt?"
- User Agency: Don't hide the AI's mistakes. Show users how the AI works so they can spot the bias. Give them tools to say, "Hey, this story is wrong."
- Iterative Process: Fixing AI isn't a one-time patch. It's a continuous conversation. We need to keep listening to the people being harmed and adjust the system as society changes.
The Big Takeaway
Making AI fair isn't just about balancing the math. It's about human dignity.
If an AI system is "fair" only in the sense that it gives everyone an equal chance to fail, but it does so while insulting their identity, it is not truly fair. We need to build systems that not only distribute resources equally but also treat every human being as an equal partner in society, not as a stereotype or a statistic.