Care Plan Generation for Underserved Patients Using Multi-Agent Language Models: Applying Nash Game Theory to Optimize Multiple Objectives

This study demonstrates that a Nash bargaining-based multi-agent language model framework significantly improves the safety and efficiency of care plans for underserved Medicaid patients compared to a single-model baseline, while highlighting that equity requires explicit design attention rather than emerging automatically from multi-objective optimization.

Basu, S., Baum, A.

Published 2026-02-25

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are a doctor managing a caseload of hundreds of patients who are struggling with complex health issues. These patients aren't just dealing with diabetes or heart disease; they are also fighting hunger, homelessness, and lack of transportation.

In the real world, there aren't enough doctors to spend hours with every single patient. So, hospitals are trying to use Artificial Intelligence (AI) to help write "Care Plans"—roadmaps that tell patients what to do next.

But here's the problem: AI is like a single person trying to juggle three balls at once, and it usually drops one.

  • If the AI tries to be Safe (covering every possible medical risk), the plan becomes a 50-page novel that no one can read.
  • If the AI tries to be Efficient (short and quick), it might forget to mention that the patient can't afford their medicine.
  • If the AI tries to be Fair (addressing social needs), it might miss a critical medical warning.

This paper asks: Can we build a team of AI "experts" who argue with each other to find the perfect middle ground?

The Solution: A "Boardroom" of AI Agents

Instead of one AI trying to do everything, the researchers built a system with four specialized AI agents working together, using a mathematical concept called Nash Bargaining (think of it as a very fair, logical negotiation).

Here is how the team works, using a Restaurant Kitchen analogy:

  1. The Head Chef (Generator): This AI looks at the patient's file and cooks up a first draft of the care plan.
  2. The Safety Inspector (Safety Agent): This agent tastes the dish and yells, "Wait! You forgot to mention the patient's high blood pressure! That's dangerous!"
  3. The Time Manager (Efficiency Agent): This agent looks at the menu and says, "This recipe is too long. If we give the patient a 20-page plan, they won't read it. Let's cut the fluff."
  4. The Community Liaison (Equity Agent): This agent checks the ingredients and says, "The plan says 'go to the gym,' but this patient has no shoes and lives in a food desert. We need to change the plan to fit their reality."
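
The paper doesn't publish its prompts or agent code, so here is a minimal sketch of how this division of labor could look, assuming each agent is simply a prompted call to a locally hosted model. Every function name and prompt below is a hypothetical illustration, not the authors' implementation:

```python
# Minimal sketch of the generator-plus-critics setup. All names and
# prompts are hypothetical illustrations, not the paper's actual code.

def call_llm(prompt: str) -> str:
    """Placeholder for an on-premises language model call."""
    raise NotImplementedError

def generate_plan(patient_record: str) -> str:
    # The "Head Chef": drafts a first-pass care plan from the chart.
    return call_llm(f"Draft a care plan for this patient:\n{patient_record}")

def critique(plan: str, objective: str) -> tuple[float, str]:
    # A specialist critic (safety, efficiency, or equity) returns a
    # 0-to-1 score for its objective plus one concrete suggested fix.
    reply = call_llm(
        f"Score this care plan from 0 to 1 on {objective}, then suggest "
        f"one concrete fix. First line: the score. Rest: the fix.\n"
        f"Plan:\n{plan}"
    )
    score_line, _, fix = reply.partition("\n")
    return float(score_line), fix.strip()
```

The negotiation step described next is what turns three competing critiques into a single revised plan.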

The "Nash Bargaining" Step:
Usually, the Head Chef would just pick one person's advice. But in this study, the four agents sit down at a table. They don't just vote; they negotiate. They use a special math formula (Nash Bargaining) that forces them to find a solution where no one loses out. It finds the "sweet spot" where the plan is safe enough, short enough, and fair enough all at the same time.
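
Concretely, the Nash bargaining solution picks, from a set of options, the one that maximizes the product of every party's gain over its fallback ("disagreement") payoff. Here is a minimal sketch of selecting among candidate plan revisions this way; the zero fallback point and the score format are illustrative assumptions, not the paper's reported setup:

```python
from math import prod

def nash_select(candidates, scores, disagreement=(0.0, 0.0, 0.0)):
    """Pick the candidate plan maximizing the Nash product of gains.

    candidates:   list of candidate plan texts
    scores:       maps each candidate to its (safety, efficiency,
                  equity) utilities in [0, 1]
    disagreement: each agent's fallback payoff (assumed zero here)
    """
    def nash_product(utilities):
        gains = [u - d for u, d in zip(utilities, disagreement)]
        # Rule out any candidate that leaves an agent below its
        # fallback, rather than allowing a negative factor.
        if any(g < 0 for g in gains):
            return float("-inf")
        return prod(gains)

    return max(candidates, key=lambda c: nash_product(scores[c]))
```

The product is what enforces balance: with a zero fallback, a plan scoring (0.9, 0.9, 0.0) has a Nash product of 0, while one scoring (0.7, 0.7, 0.7) scores 0.343, so the balanced plan wins even though its average is lower.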

The Experiment: The "Taste Test"

The researchers tested this "Team AI" against a "Solo AI" (a single AI critiquing its own work, like a chef tasting their own soup over and over), using records from 200 real Medicaid patients in Virginia and Washington.

The Rules of the Game:

  • Privacy First: All the thinking happened on computers inside the hospital. No patient data ever left the building (a strict rule for medical privacy).
  • Fair Comparison: Both the Team AI and the Solo AI were given the same compute budget, so any improvement came from the negotiation itself, not from extra thinking time.

The Results: Who Won?

The researchers had a "Judge" (another AI) rate the care plans on a scale of 0 to 1 for Safety, Efficiency, and Fairness.
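
The paper's exact rubric isn't reproduced here, but the mechanics of this kind of "LLM-as-judge" evaluation are easy to sketch: ask a separate model for structured scores, then average them per arm. The prompt, the JSON format, and the call_llm placeholder below are assumptions for illustration:

```python
import json
from statistics import mean

def call_llm(prompt: str) -> str:
    """Placeholder for a language model call, as in the earlier sketches."""
    raise NotImplementedError

def judge_plan(plan: str) -> dict:
    # Ask the judge model for machine-readable scores in [0, 1].
    reply = call_llm(
        "Rate this care plan from 0 to 1 on safety, efficiency, and "
        'equity. Reply with JSON only, e.g. {"safety": 0.8, '
        '"efficiency": 0.7, "equity": 0.6}.\nPlan:\n' + plan
    )
    return json.loads(reply)

def compare_arms(team_plans, solo_plans):
    # Average each dimension across all plans in each arm.
    for name, plans in (("Team (Nash)", team_plans), ("Solo", solo_plans)):
        scores = [judge_plan(p) for p in plans]
        for dim in ("safety", "efficiency", "equity"):
            print(name, dim, round(mean(s[dim] for s in scores), 3))
```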

  • Safety & Efficiency: The Team AI (Nash) won clearly. The plans were safer (fewer missed risks) and more efficient (easier to read and act on) than the Solo AI. It's like the Team AI served a meal that was both nutritious and easy to eat.
  • Fairness (Equity): This is the twist. The Team AI did not do better than the Solo AI at addressing social needs. The score was exactly the same.

Why didn't the Team AI pull ahead on Fairness?
The authors realized that simply having a "Fairness Agent" isn't enough if the rules of the game don't force the system to actually solve the problem. It's like having a nutritionist on the team who points out that the patient is hungry while the kitchen has no food to give them. The AI could see the problem, but the system wasn't designed with the levers to fix deep-rooted social issues.

The Big Takeaway

This study shows that a team of AIs can beat a single AI at balancing safety and efficiency. By letting specialized agents "negotiate" using math, we can create care plans that are both practical and safe for patients.

However, the study also warns us: You can't just "add" fairness to an AI and expect it to work. If you want AI to be truly fair to underserved patients, you have to design the system specifically to solve those problems, not just hope the math will fix it automatically.

In short:

  • Old Way: One AI tries to do everything and ends up doing a mediocre job at all three.
  • New Way: A team of specialized AIs negotiates to create a plan that is safe, short, and smart.
  • The Catch: Even a great team needs a better "menu" (system design) to truly solve deep social inequalities.
