When AI Fails, What Works? A Data-Driven Taxonomy of Real-World AI Risk Mitigation Strategies

This paper presents an empirically grounded, expanded taxonomy of AI risk mitigation strategies derived from a dataset of nearly 7,000 real-world incident reports, introducing four new categories and significantly broadening the framework's coverage to better address systemic failures in high-stakes AI deployments.

Evgenija Popchanovska, Ana Gjorgjevikj, Maryan Rizinski, Lubomir Chitkushev, Irena Vodenska, Dimitar Trajanov

Published 2026-03-05

Imagine you've built a super-smart robot assistant to help run your business, write your emails, and even diagnose illnesses. At first, it's amazing. But then, things start going wrong. Sometimes the robot lies (hallucinates), sometimes it blurts out something offensive, and sometimes it accidentally breaks a law.

For a long time, experts only looked at the robot's "brain" (the code) to figure out why it failed. They thought, "If we just fix the code, everything will be fine."

This paper says: "That's not enough."

The authors argue that when AI fails, it's rarely just a glitch in the code. It's usually a chain reaction involving the people using it, the rules (or lack of rules) around it, and the environment it's working in. It's like a car crash: blaming the engine isn't enough if the driver was texting, the road was icy, and the brakes weren't inspected.

Here is a simple breakdown of what this paper does, using some everyday analogies:

1. The Problem: The "Whack-a-Mole" Approach

Right now, when an AI messes up, companies are playing Whack-a-Mole.

  • The Mole: An AI makes a mistake.
  • The Hammer: The company panics and tries to fix that one specific mistake.
  • The Result: They fix one hole, but two more pop up elsewhere because they don't have a map of where the holes usually are.

The authors looked at nearly 7,000 real-world incident reports (news stories about AI disasters, from fake legal documents to robots hurting people) to build a Master Map (a Taxonomy) of exactly how and why these failures happen, and, more importantly, how people actually fixed them.

2. The Solution: A New "Fix-It" Menu

The paper updates an old "Fix-It Menu" (the MIT AI Risk Taxonomy) by adding four new categories of solutions that were missing before. Think of these as new tools in a mechanic's toolbox that we didn't know we needed.

🛑 Category 1: The "Panic Button" (Corrective & Restrictive Actions)

Sometimes, you just have to hit the brakes.

  • The Analogy: Imagine a chef puts a poisonous ingredient in a soup. The fix isn't just to taste it again; it's to dump the whole pot and stop serving that dish until it's safe.
  • What it means: Turning off a specific AI feature, shutting down a bot, or banning the AI from certain areas (like a hospital) until it's fixed.

⚖️ Category 2: The "Judge's Gavel" (Legal & Regulatory Actions)

Sometimes, the only thing that stops a bad actor is the law.

  • The Analogy: If a neighbor keeps playing loud music at 3 AM, you don't just ask them nicely to stop; you call the police or get a court order.
  • What it means: Lawsuits, fines, government investigations, and court orders that force companies to behave.

💸 Category 3: The "Wallet Shock" (Financial & Market Controls)

Money talks. If it costs too much to be reckless, people will stop being reckless.

  • The Analogy: If you drive a car without insurance and crash, you don't just get a ticket; you might lose your license or have to pay thousands in damages that ruin your savings.
  • What it means: Fines, compensation for victims, banning a company from selling its product, or making it pay for the cleanup.

🙅 Category 4: The "Ostrich" (Avoidance & Denial)

This is the tricky one. Sometimes, when things go wrong, the company doesn't fix it; they just deny it happened or say, "It's not our fault."

  • The Analogy: You break a vase, and instead of saying "I'm sorry, I'll buy a new one," you say, "The vase was already broken," or "I didn't touch it."
  • What it means: Companies refusing to take responsibility, refusing to remove bad content, or claiming they are following the rules even when they aren't. The paper notes this is a strategy (albeit a bad one) that companies use to limit their liability.

3. The "New" Stuff They Found

The authors also found two other important things that were missing from the old maps:

  • The Detective Work (Incident Investigation): It's not enough to fix the leak; you need to hire a detective to find out why the pipe burst in the first place so it doesn't happen again.
  • The Training Wheels (Training & Support): If the machine is confusing, maybe the people using it need more training, or the victims need counseling. It's about helping humans cope with the robot's mistakes.
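
If you think in code, this expanded "Fix-It Menu" is easiest to picture as a set of labels that can be attached to an incident report. Below is a minimal, hypothetical Python sketch: the enum `MitigationStrategy`, the function `tag_incident`, and the keyword lists are illustrative inventions of this post (based on the category names above), not the paper's actual labels or classification method.

```python
from enum import Enum

class MitigationStrategy(Enum):
    """Hypothetical encoding of the four new categories plus the two
    additions described above (names paraphrased from this post)."""
    CORRECTIVE_RESTRICTIVE = "Corrective & Restrictive Actions"  # the "Panic Button"
    LEGAL_REGULATORY = "Legal & Regulatory Actions"              # the "Judge's Gavel"
    FINANCIAL_MARKET = "Financial & Market Controls"             # the "Wallet Shock"
    AVOIDANCE_DENIAL = "Avoidance & Denial"                      # the "Ostrich"
    INCIDENT_INVESTIGATION = "Incident Investigation"            # the detective work
    TRAINING_SUPPORT = "Training & Support"                      # the training wheels

# Toy keyword heuristics. A real pipeline would classify full incident
# reports with a trained model or human coders, not substring matching.
KEYWORDS = {
    MitigationStrategy.CORRECTIVE_RESTRICTIVE: ["shut down", "suspended the feature", "disabled"],
    MitigationStrategy.LEGAL_REGULATORY: ["lawsuit", "court order", "regulatory investigation"],
    MitigationStrategy.FINANCIAL_MARKET: ["paid a fine", "compensation", "sales ban"],
    MitigationStrategy.AVOIDANCE_DENIAL: ["denied responsibility", "declined to comment"],
    MitigationStrategy.INCIDENT_INVESTIGATION: ["root cause", "post-mortem", "audit"],
    MitigationStrategy.TRAINING_SUPPORT: ["staff retraining", "user training", "counseling"],
}

def tag_incident(report: str) -> list[MitigationStrategy]:
    """Return every mitigation strategy whose keywords appear in the report."""
    text = report.lower()
    return [strategy for strategy, words in KEYWORDS.items()
            if any(w in text for w in words)]

if __name__ == "__main__":
    report = ("After the chatbot produced fake legal citations, the firm "
              "suspended the feature, faced a lawsuit, and paid a fine.")
    print([s.value for s in tag_incident(report)])
    # -> ['Corrective & Restrictive Actions', 'Legal & Regulatory Actions',
    #     'Financial & Market Controls']
```

Note how one incident picks up several labels at once: that multi-label structure is the point of the taxonomy, since real failures are usually fixed by a combination of strategies rather than a single patch.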

4. Why This Matters to You

This paper is like a Safety Manual for the AI Age.

  • For Companies: It stops them from guessing. Instead of "Maybe we should apologize," they can look at the map and say, "Oh, this is a 'Legal' issue, so we need lawyers, not just a tweet."
  • For Governments: It helps them write better laws because they know exactly what kinds of failures are happening in the real world.
  • For You: It means that when AI fails, there is a better system in place to catch the failure, fix the problem, and make sure the company pays up, rather than just hoping the next update fixes it.

In short: The paper says, "AI is powerful, but it's also dangerous. We can't just hope the code gets better. We need a complete system of rules, punishments, and fixes that covers everything from the computer chip to the courtroom."