Retrofitters, pragmatists and activists: Public interest litigation for accountable automated decision-making

Drawing on interviews with Australian legal and advocacy experts, this paper analyzes public interest litigation as a pragmatic "retrofitting" strategy to enforce accountability for automated decision-making under existing laws, while identifying the necessary institutional reforms to overcome current legal and systemic limitations.

Henry Fraser, Zahra Stardust

Published 2026-03-16

Imagine you live in a world where invisible, super-fast robots (algorithms) are making life-or-death decisions about your money, your job, your freedom, and your safety. Sometimes, these robots make terrible mistakes, treating people unfairly or stealing their money. This is called Automated Decision-Making (ADM).

In Australia, the government and big companies are building these robots, but the rules for how they should behave are still being written. Because the new laws are stuck in traffic (due to political delays), the people hurt by these robots have to fight back using the old laws we already have.

This paper is a "field guide" for the brave people (lawyers, activists, and scholars) who are trying to fix these broken robots using the legal system. Here is the breakdown of their strategy, explained simply:

1. The Problem: The "Retrofit" Job

The authors call these lawyers "Retrofitters."

Think of the law like an old, sturdy house built 100 years ago. The new technology (AI) is like a giant, modern swimming pool that someone just dropped into the middle of the living room. The old house wasn't built to hold a pool.

  • The Challenge: You can't just tear the house down and build a new one immediately because the government hasn't passed the new building codes yet.
  • The Solution: The "Retrofitters" have to creatively modify the old house to fit the new pool. They take old legal tools (like laws about property theft or privacy) and stretch them to cover new problems (like an AI stealing your welfare money).
  • The Analogy: It's like using a wrench to fix a computer. It's not the perfect tool, but if you know how to twist it just right, it might work.

2. The Strategy: It's Not Just About Winning

The paper argues that these lawyers aren't just trying to win a single case for one person. They are playing a long game called "Strategic Litigation."

  • The "Test Drive" Approach: Sometimes, they take a case to court just to see how the judge reacts. Even if they lose the case, they might learn something important.
    • Example: Imagine a lawyer sues a city council for not stopping harassment at a clinic. They lose the lawsuit. But, the judge says, "The city council isn't responsible; the law needs to change." Suddenly, the government realizes they must pass a new law to fix the gap. Losing the battle helped win the war.
  • The "Bolt-On" Tactic: Lawyers often take a boring, safe legal claim (like "you breached a contract") and attach a bigger, public interest argument to it (like "this algorithm is racist"). They hope that even if the court rejects the big argument, it plants a seed for the future.
  • The "Money" Incentive: To get big law firms and investors to help, the lawyers often have to find cases where the victim can get money damages. If there's no money in it, the "Retrofitters" can't afford to do the heavy lifting.

3. The Obstacles: Why It's So Hard

The paper admits that this is an uphill battle with three massive hurdles:

  • The "Black Box" Mystery: You can't sue a robot if you don't know how it works. The companies keep their code secret (like a secret recipe). Without transparency, lawyers are like detectives trying to solve a murder without ever seeing the crime scene.
  • The "Cost" Trap: Suing the government or big tech companies is incredibly expensive. If you lose, you might have to pay their legal bills too. This scares away poor people who are often the ones hurt the most by these robots.
  • The "Old School" Judges: Some judges are used to old ways of thinking. They might struggle to understand that a computer can be "racist" without a human being racist. They might say, "The computer made a mistake, but it wasn't intentional," and let the company off the hook.

4. The Toolkit: What Needs to Happen Next

To make this "Retrofitting" work, the authors say we need to build a better ecosystem, like a support network for these legal fighters:

  • Flashlights (Transparency): We need laws that force companies to shine a light on their algorithms. We need to see the "recipe" so we know if it's poisoned.
  • The "Complaint Hub" (Aggregation): Right now, if 1,000 people get hurt by a robot, they each complain separately, and no one notices the pattern. We need a central place to collect all these complaints so we can see the "big picture" of the harm.
  • Safety Nets (Funding): We need to stop punishing people for trying to do the right thing. If a poor person sues a giant corporation and loses, they shouldn't be bankrupted. We need special rules to protect them.
  • The "Hackathon" (Community): Lawyers, tech experts, and community groups need to hang out together (like at a "hackathon") to brainstorm new ways to use the law before the problems even happen.

The Bottom Line

This paper is a call to action. It says: "Don't wait for the perfect new laws to arrive."

While we wait for the government to write the new rulebook, we have to be clever, creative, and brave. We need to use the old laws like a Swiss Army knife to cut through the injustice. It's messy, it's hard, and it requires a lot of teamwork, but it's the only way to hold these powerful automated systems accountable right now.

In short: The robots are running wild. The new rules are stuck in traffic. So, the "Retrofitters" are using old tools to build a fence around the robots, one creative lawsuit at a time.
