Measuring Research Convergence in Interdisciplinary Teams Using Large Language Models and Graph Analytics

This paper introduces an AI-driven, multi-layer framework that integrates large language models and graph analytics to map and evaluate research convergence in interdisciplinary teams by extracting NABC-aligned viewpoints, analyzing their flow and influence over time, and validating findings through human-in-the-loop expert assessment.

Wenwen Li, Yuanyuan Tian, Sizhe Wang, Amber Wutich, Paul Westerhoff, Sarah Porter, Anais Roque, Jobayer Hossain, Patrick Thomson, Rhett Larson, Michael Hanemann

Published 2026-03-24

Imagine a group of experts trying to solve a massive, messy puzzle: how to bring clean water to communities that don't have it.

This team includes engineers, lawyers, social scientists, data analysts, and community organizers. They are all brilliant, but they speak different "languages." An engineer talks about pipes and filters; a lawyer talks about rights and regulations; a sociologist talks about community trust.

The big question is: Are they actually learning from each other and building a single, unified solution, or are they just talking past one another?

This paper introduces a new, high-tech way to answer that question. Instead of waiting years to see if they publish a joint paper (which is like waiting for the puzzle to be finished to see if the pieces fit), the authors used Artificial Intelligence (AI) to watch the team in real-time as they worked.

Here is how they did it, using some fun analogies:

1. The "Common Language" Translator (The NABC Framework)

First, the researchers needed a way to translate the ideas of these different experts into a common language. They used a structure called NABC (Needs, Approaches, Benefits, Competition).

  • Think of it like a universal recipe card. No matter if you are a chef or a farmer, you have to fill out the same four boxes: What do we need? What's our plan? Why is it good? Who else is trying to do this?
  • By forcing everyone to speak in this format, the team's ideas became easier to compare.
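The "universal recipe card" above can be pictured as a simple record with four boxes. This is only an illustrative sketch: the class name, field names, and example values below are invented to mirror the NABC framework as described, not taken from the paper's actual data model.

```python
from dataclasses import dataclass

# Hypothetical "recipe card" for one viewpoint. The four fields follow
# the NABC framework described above; everything else is illustrative.
@dataclass
class NABCViewpoint:
    speaker: str       # who voiced the idea
    need: str          # What do we need?
    approach: str      # What's our plan?
    benefit: str       # Why is it good?
    competition: str   # Who else is trying to do this?

card = NABCViewpoint(
    speaker="Engineer",
    need="Safe drinking water in underserved communities",
    approach="Low-cost membrane filtration at the household level",
    benefit="Removes pathogens without a central treatment plant",
    competition="Bottled-water delivery and municipal pipe extensions",
)
print(card.need)
```

Because every expert fills out the same four boxes, a lawyer's card and an engineer's card can be compared field by field.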

2. The AI "Scribe" and "Detective" (Large Language Models)

The team recorded 11 meetings where they presented their ideas. The researchers fed these recordings into a Large Language Model (LLM)—a super-smart AI.

  • The Scribe: The AI read the transcripts and pulled out the key "viewpoints" (ideas) from each person, organizing them into the NABC boxes.
  • The Detective: The AI then started looking for connections. It asked: "Did the lawyer's idea about water rights inspire the engineer's new filter design?" or "Did the data scientist's map of poor areas change how the social scientist talked to the community?"
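To make the Scribe's sorting step concrete, here is a toy stand-in. The paper uses a large language model for this; the keyword lookup below only illustrates the idea of routing transcript sentences into NABC boxes. The cue words and example sentences are made up for illustration.

```python
# Toy stand-in for the LLM "Scribe": route a transcript sentence into an
# NABC box. A real system would use an LLM; keywords here are illustrative.
NABC_CUES = {
    "need": ["need", "lack", "unsafe", "problem"],
    "approach": ["design", "plan", "method", "filter"],
    "benefit": ["improve", "save", "benefit", "safer"],
    "competition": ["competitor", "alternative", "existing"],
}

def classify(sentence: str) -> str:
    """Assign a sentence to the first NABC box whose cue words it contains."""
    lowered = sentence.lower()
    for box, cues in NABC_CUES.items():
        if any(cue in lowered for cue in cues):
            return box
    return "unclassified"

print(classify("The water here is unsafe for kids."))   # -> need
print(classify("We plan to design a cheaper filter."))  # -> approach
```

The Detective step would then look for links between the boxes of different speakers, which is where the graph analytics in the next section come in.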

3. The "Social Map" (Graph Analytics)

Once the AI collected all the ideas, the researchers turned them into a 3D visual map (a graph).

  • The "Popular" Ideas (The Town Square): Imagine a crowded town square. If many people are talking about the same thing (e.g., "Water is unsafe for kids"), that idea becomes a big, glowing node in the center of the map. This shows convergence—everyone agrees on this core problem.
  • The "Unique" Ideas (The Lighthouse): Some ideas are very specific, like a water engineer talking about a specific chemical process. These are small, isolated islands on the map. They are unique and important, but they haven't been shared with the rest of the team yet.
  • The "Influence" (The Wind): The map also shows arrows. If an arrow points from the Lawyer to the Engineer, it means the Lawyer's idea influenced the Engineer. The researchers used math to see who was the "wind" blowing ideas around the room.
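The town-square and wind metaphors correspond to standard graph measures: ideas with many arrows pointing at them are central, and people sending many arrows out are influential. A minimal sketch with plain dictionaries (the edges below are invented to mirror the lawyer/engineer story, not taken from the paper's data):

```python
from collections import defaultdict

# Each edge (A, B) means "A's viewpoint influenced B's".
# These example edges are illustrative only.
edges = [
    ("Lawyer", "Engineer"),          # water-rights idea shaped the filter design
    ("DataScientist", "Sociologist"),
    ("Lawyer", "Sociologist"),
    ("Sociologist", "Engineer"),
]

in_degree = defaultdict(int)   # arrows pointing AT you: the "town square"
out_degree = defaultdict(int)  # arrows you send OUT: the "wind"
for src, dst in edges:
    out_degree[src] += 1
    in_degree[dst] += 1

most_influential = max(out_degree, key=out_degree.get)
print(most_influential)  # Lawyer
```

Real analyses would typically use richer centrality measures (e.g. PageRank-style scores) on a much larger graph, but the intuition is the same: count and weight the arrows.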

4. The "Time-Lapse Movie" (Temporal Analysis)

Finally, they watched how this map changed over the project's 12 months.

  • Early on: The map looked like a scattered cloud of disconnected dots. Everyone was shouting their own unique ideas.
  • Later on: The dots started connecting. The "Town Square" got bigger, and the "Islands" started building bridges to the mainland.
  • The Result: They measured this by counting the "threads" connecting the dots. As time went on, the threads increased, proving that the team was converging—they were weaving their separate threads into a single, strong rope.
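The thread-counting measurement above can be sketched as a trend check on edge counts per meeting. The per-meeting numbers below are invented to illustrate the rising pattern the article describes, not the paper's actual counts:

```python
# Invented example: number of connecting "threads" (edges) found per meeting.
edges_per_meeting = [1, 1, 2, 3, 3, 5, 6, 8]

def is_converging(counts: list[int]) -> bool:
    """Crude check: the later half of meetings averages more threads
    than the earlier half."""
    mid = len(counts) // 2
    early = sum(counts[:mid]) / mid
    late = sum(counts[mid:]) / (len(counts) - mid)
    return late > early

print(is_converging(edges_per_meeting))  # True
```

A rising edge count over time is the quantitative signal behind the "scattered cloud turning into a connected web" picture.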

Why This Matters

Usually, we only know if a team is working well after the project is done, by looking at their final report. This new method is like having a GPS tracker for a team's brain.

It allows leaders to see:

  • Are we stuck? (If the map stays scattered, maybe we need a new meeting to connect the dots.)
  • Who is the glue? (We found that "Participatory Social Science" was the glue holding the team together, connecting the engineers to the lawyers.)
  • Are we converging? (Yes, the map shows they are moving from "many voices" to "one chorus.")

The "Human-in-the-Loop" Safety Net

The authors were careful not to trust the AI blindly. They knew AI can sometimes "hallucinate" (make things up). So, they used a Human-in-the-Loop approach:

  • After the AI suggested a connection, a human expert checked it.
  • It was like a teacher grading a student's essay. The AI did the heavy lifting, but the human made sure the logic was sound.
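The checking step can be pictured as a simple filter: only AI-suggested connections that a human expert confirms make it into the final map. The suggestions and verdicts below are invented examples, not the paper's data:

```python
# Hypothetical human-in-the-loop filter: keep only expert-confirmed links.
suggested = [
    ("Lawyer", "Engineer"),
    ("Engineer", "Lawyer"),
    ("DataScientist", "Sociologist"),
]
expert_verdict = {
    ("Lawyer", "Engineer"): True,
    ("Engineer", "Lawyer"): False,   # flagged as an AI hallucination
    ("DataScientist", "Sociologist"): True,
}

validated = [edge for edge in suggested if expert_verdict.get(edge, False)]
print(len(validated))  # 2
```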

The Bottom Line

This paper shows that we can use AI to turn the messy, invisible process of teamwork into a clear, visual story. It proves that when diverse experts work together, they don't just add their ideas up; they multiply them, creating a solution that is greater than the sum of its parts. And now, we have a way to measure exactly how that magic happens.