Fairness-in-the-Workflow: How Machine Learning Practitioners at Big Tech Companies Approach Fairness in Recommender Systems

Through interviews with 11 practitioners at large technology companies, this paper maps the workflow of integrating fairness into recommender systems. It identifies key technical and organizational challenges, including defining fairness, balancing stakeholder interests, and facilitating cross-team collaboration, and offers actionable recommendations for the community.

Jing Nathan Yan, Emma Harvey, Junxiong Wang, Jeffrey M. Rzeszotarski, Allison Koenecke

Published 2026-03-02

Imagine you are walking through a massive, bustling digital marketplace. This isn't just any market; it's run by giant, invisible shopkeepers (the Recommender Systems) who decide what you see, what you buy, and what ideas you encounter. They are the reason you see a specific pair of shoes on Amazon or a specific political video on social media.

The problem? These invisible shopkeepers can be biased. They might accidentally favor one group of people over another, or push certain ideas so hard that they distort reality.

This paper is like a behind-the-scenes documentary featuring 11 of the actual engineers and scientists ("the mechanics") who build and maintain these digital marketplaces at giant tech companies. The researchers asked them: "How do you try to make sure your machines are fair, and what makes it so hard?"

Here is the story of their findings, explained simply:

1. The Job: Building a Fair Machine

Think of building a recommender system like cooking a giant, complex stew.

  • The Ingredients (Data): You need data about users and products.
  • The Recipe (The Model): You write code to decide how to mix them.
  • The Taste Test (Fairness): You have to make sure the stew tastes good for everyone, not just the people who usually eat it.

The researchers found that these mechanics have a specific workflow. It starts in a quiet kitchen (Offline Development) where they gather ingredients and write recipes. Then, they serve the stew to the public (Online Development) and watch how people react.

2. The Three Big Hurdles (Technical Challenges)

Hurdle A: Defining "Fair" is Like Defining "Delicious"
In school, you learn that "fair" means everyone gets the same size slice of cake. But in the real world, it's messier.

  • The Problem: If you have a dating app, is it fair to show everyone the same number of profiles? Or is it fair to show people profiles they are actually likely to date?
  • The Metaphor: Imagine you are a chef. One customer says, "I want a spicy dish." Another says, "I want a sweet dish." If you make the dish "fair" by making it mild, you might upset both of them. The mechanics struggle because there is no single rulebook for what "fair" means in every situation. They often have to invent their own rules on the fly.
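The conflict between fairness definitions can be made concrete with a toy sketch. Everything below is illustrative, not from the paper: the users, the items, and the 2:1 "merit" assumption are invented. The point is that the same recommendations can satisfy one plausible fairness definition while violating another.

```python
# Illustrative only: two plausible "fairness" metrics disagree on the
# same recommendations. Users, items, and the 2:1 "merit" ratio are
# made-up assumptions, not figures from the paper.

# Each user sees a ranked list of items from provider group "A" or "B".
recommendations = {
    "user1": ["A", "A", "B"],
    "user2": ["A", "A", "A"],
    "user3": ["A", "B", "B"],
}

def exposure_share(recs, group):
    """Fraction of all recommendation slots given to a provider group."""
    slots = [g for ranked in recs.values() for g in ranked]
    return slots.count(group) / len(slots)

share_a = exposure_share(recommendations, "A")  # 6 of 9 slots
share_b = exposure_share(recommendations, "B")  # 3 of 9 slots

# Definition 1 (equal exposure): each group should get ~50% of slots.
# By this rule, group A is over-served (0.67 > 0.50).
# Definition 2 (merit-proportional): suppose group A earns twice the
# engagement of group B, so a 2:1 split is exactly "fair".
merit_target_a = 2 / 3

print(f"A's exposure: {share_a:.2f}; equal-exposure target: 0.50; "
      f"merit-proportional target: {merit_target_a:.2f}")
```

The same numbers are "unfair" under the first definition and exactly "fair" under the second, which is why practitioners report having to invent rules case by case.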

Hurdle B: The "Too Many Cooks" Problem
Recommender systems serve two main groups: the Users (people scrolling) and the Providers (sellers, creators, advertisers).

  • The Problem: What's fair for the user might be unfair for the creator, and vice versa.
  • The Metaphor: Imagine a traffic light. If you make the light stay green for the pedestrians (users) for a long time, the cars (creators) get stuck and can't deliver their goods. The mechanics have to balance these competing interests constantly. Sometimes, they have to choose who gets the "green light," and that's a really hard decision to make.
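One common way to navigate this trade-off is a tunable weight that blends user relevance with provider exposure. The sketch below is hypothetical: the items, scores, and `fairness_weight` knob are invented for illustration, not taken from any company's system. It shows how turning the knob decides who gets the "green light":

```python
# Hypothetical sketch of balancing user relevance against provider
# exposure with a single trade-off weight. All numbers are made up.

items = [
    # (item, relevance_to_user, provider_is_underexposed)
    ("shoes_big_brand", 0.9, False),
    ("shoes_small_shop", 0.7, True),
    ("hat_small_shop", 0.5, True),
]

def rank(items, fairness_weight):
    """Rank by relevance, with a bonus for underexposed providers."""
    def score(item):
        _, relevance, underexposed = item
        return relevance + (fairness_weight if underexposed else 0.0)
    return [name for name, *_ in sorted(items, key=score, reverse=True)]

print(rank(items, fairness_weight=0.0))  # pure relevance: big brand wins
print(rank(items, fairness_weight=0.3))  # small shop moves to the top
```

There is no "correct" value for the weight: raising it serves providers at the cost of users, and lowering it does the reverse, which is exactly the judgment call practitioners describe.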

Hurdle C: The "Moving Target"
Unlike a static bridge, a recommender system changes based on how people use it.

  • The Problem: If the system shows you a video you like, you click it. The system thinks, "Oh, they love this!" and shows you more of it. Soon, you are in a bubble, only seeing one type of content. This is called a Feedback Loop.
  • The Metaphor: It's like a snowball rolling down a hill. It starts small, but as it rolls, it picks up more snow (data) and gets huge. By the time the mechanics realize the snowball is too big and rolling the wrong way, it's very hard to stop it. They often have to wait until the system is live to see these problems, which is like trying to fix a car while driving it at 100 mph.
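A feedback loop like this can be simulated in a few lines. The numbers and amplification factor below are invented for illustration; the point is only that a tiny initial gap (51% vs. 49%) snowballs when the system keeps reinforcing whatever is already ahead:

```python
# Illustrative feedback-loop simulation: the system over-recommends
# whichever topic is currently leading, and the leader's share of
# attention compounds. All numbers are made up for illustration.

shares = {"topic_a": 0.51, "topic_b": 0.49}  # nearly even start
amplification = 1.1  # the system boosts the current leader by 10%

for step in range(50):
    leader = max(shares, key=shares.get)
    shares[leader] *= amplification
    total = sum(shares.values())
    shares = {topic: s / total for topic, s in shares.items()}

# topic_a's share has snowballed toward 1.0 from a 2-point head start.
print({topic: round(s, 3) for topic, s in shares.items()})
```

Because the distortion only emerges over many rounds of live interaction, it is hard to catch in offline testing, which matches the practitioners' report that they often only see these problems once the system is already running.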

3. The Office Politics (Organizational Challenges)

Challenge A: The "No Time" Crisis
The mechanics are under immense pressure to keep the system running fast and profitable.

  • The Metaphor: Imagine you are a mechanic in a busy garage. The boss says, "Fix the engine so the car goes faster!" You also want to say, "Hey, let's also make sure the brakes are safe for everyone." But the boss says, "We don't have time for safety checks today; we need to get cars out the door."
  • The Reality: The mechanics reported spending less than 10% of their time on fairness. It's not that they don't care; it's that "fairness" isn't seen as an emergency, so it gets pushed to the bottom of the to-do list.

Challenge B: The "Language Barrier"
The mechanics talk to two other teams: Lawyers and Fairness Experts.

  • The Lawyers: They speak a clear language: "You cannot use this data, or you will get sued." The mechanics love this because the rules are black and white.
  • The Fairness Experts: They speak a different language. They talk about "societal impact," "psychological bias," and "ethical nuance."
  • The Metaphor: It's like a Translator trying to explain a poem to a computer. The Fairness Experts say, "This feels unfair." The Mechanics ask, "Okay, but what line of code do I change?" The two teams often struggle to understand each other because they don't share a common dictionary.

4. What Should We Do? (The Recommendations)

The paper suggests a few ways to fix this:

  1. Write It Down (Documentation): Mechanics often forget what they did last year. They need better "recipe books" so they don't have to reinvent the wheel every time they try to fix a bias.
  2. Start Early: Don't wait until the car is built to check the brakes. Bring the Fairness Experts into the kitchen while the recipe is being written, not just when the food is served.
  3. Learn Each Other's Languages: The mechanics and the fairness experts need to learn to speak the same language. Maybe the Fairness Experts need to learn a bit of code, and the mechanics need to learn a bit of ethics.
  4. Give Them Time: Companies need to officially pay mechanics to spend time thinking about fairness, just like they pay them to make the system faster.

The Bottom Line

The people building these systems want to do the right thing. They know their creations shape society. But they are stuck in a difficult spot: they are trying to build a perfectly fair machine in a world that is messy, fast-paced, and full of conflicting rules. They need better tools, more time, and a better way to talk to each other to make the digital world fair for everyone.
