Science-wide mapping and ranking of institutions based on affiliated authors' impact and research integrity proxies

This study generates a comprehensive dataset and proposes a penalized percentile ranking system for nearly 7,000 research institutions, balancing the volume of top-cited authors against proxies for research integrity issues like excessive self-citation, publication in discontinued journals, and retractions to provide a more nuanced assessment of institutional impact.

Ioannidis, J., Baas, J., Boverhof, R., Voyant, C.

Published 2026-04-12

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine the world of scientific research as a massive, bustling city with thousands of different neighborhoods (universities, research institutes, and tech companies). For years, people have tried to rank these neighborhoods based on how "famous" their residents are. Usually, they just count how many people in a neighborhood have won a "Best Citizen" award (highly cited papers).

But this paper argues that this method is flawed. It's like judging a neighborhood only by the number of celebrities living there, while ignoring whether those celebrities are actually good neighbors, or if some of them are just buying awards, shouting their own name to get attention, or living in houses that are falling apart.

Here is a simple breakdown of what the authors did, using some creative analogies:

1. The Problem: Counting Stars vs. Checking the Neighborhood

Most rankings are like a Gold Star Count. They look at a university and say, "Wow, you have 1,000 famous scientists! You are #1!"

  • The Flaw: This ignores the size of the school. A giant university with 50,000 students might have 1,000 stars, but that's only 2% of its population. A tiny, elite research lab with only 20 people might have 10 stars (50% of its population). The giant school looks "bigger," but the tiny lab is actually more "star-studded."
  • The Integrity Issue: Even worse, some of those "stars" might be fake. They might be famous because they only cite their own work (like shouting "I'm great!" over and over), they publish in sketchy magazines that were later banned, or they have papers that were taken back (retracted) because of mistakes or cheating.
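The count-versus-proportion point above can be made concrete with a few lines of Python. The numbers come from the analogy in the bullets, not from the paper's actual data:

```python
# Illustrative only: figures are taken from the analogy above, not the study.

def star_fraction(top_cited_authors: int, total_authors: int) -> float:
    """Fraction of an institution's authors who are top-cited."""
    return top_cited_authors / total_authors

giant_university = star_fraction(1_000, 50_000)  # 1,000 stars out of 50,000
tiny_lab = star_fraction(10, 20)                 # 10 stars out of 20

print(f"Giant university: {giant_university:.0%}")  # Giant university: 2%
print(f"Tiny lab: {tiny_lab:.0%}")                  # Tiny lab: 50%
```

The raw count favors the giant university (1,000 vs. 10), but the proportion tells the opposite story, which is exactly the distortion the authors set out to correct.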

2. The Solution: The "Smart Neighborhood Score"

The authors (led by Dr. John Ioannidis) decided to build a new scoreboard. Instead of just counting stars, they created a "Smart Neighborhood Score" that balances Fame with Good Behavior.

They looked at nearly 7,000 institutions and applied three specific "Integrity Checks":

  • The "Echo Chamber" Check (Self-Citations):
    • Analogy: Imagine a town where the mayor only talks about himself at every town hall meeting.
    • The Rule: If a scientist cites their own work too much (more than 95% of their peers), it's a red flag. It suggests they are trying to game the system to look more popular than they are.
  • The "Banned Magazine" Check (Discontinued Titles):
    • Analogy: Imagine a chef who only cooks in restaurants that have been shut down by health inspectors for serving rotten food.
    • The Rule: If a scientist publishes mostly in journals that Scopus (the giant library database) has kicked out for being low-quality, it's a penalty.
  • The "Recalled Product" Check (Retractions):
    • Analogy: Imagine a car manufacturer that has to recall thousands of cars because the brakes don't work.
    • The Rule: If a scientist has papers that were officially taken back (retracted) because of errors or misconduct (not just a typo by the publisher), the institution gets a heavy penalty.
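The three checks above can be sketched as simple flagging rules. Only the 95th-percentile self-citation rule comes from the summary; the cutoff for "mostly" publishing in discontinued journals is a placeholder of my own, not the paper's actual parameter:

```python
# Sketch of the three integrity checks. The 95th-percentile self-citation rule
# is stated above; the 0.5 cutoff for "mostly" in delisted journals is an
# assumed placeholder, not the paper's value.
from dataclasses import dataclass

@dataclass
class AuthorRecord:
    self_citation_rate: float          # share of citations that are self-citations
    discontinued_journal_share: float  # share of papers in delisted journals
    retractions: int                   # retractions for error/misconduct only

def integrity_flags(a: AuthorRecord, self_cite_p95: float) -> list[str]:
    """Return the list of integrity checks this author trips."""
    flags = []
    if a.self_citation_rate > self_cite_p95:   # above 95% of field peers
        flags.append("excessive self-citation")
    if a.discontinued_journal_share > 0.5:     # assumed cutoff for "mostly"
        flags.append("discontinued journals")
    if a.retractions > 0:
        flags.append("retractions")
    return flags

# An author with a 40% self-citation rate (peers' 95th percentile: 25%),
# 60% of papers in delisted journals, and one misconduct retraction:
print(integrity_flags(AuthorRecord(0.40, 0.6, 1), self_cite_p95=0.25))
```

A clean author record (low self-citation, no delisted venues, no retractions) returns an empty list and contributes no penalty.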

3. The Formula: The "Good Neighbor" Equation

The authors created a math formula that looks like this:

Your Score = (How many famous scientists you have) MINUS (How many bad behaviors they show).

  • The Penalty: If an institution has a scientist who published a retracted paper, it's a huge hit. The authors say that two retracted papers are as bad as losing one famous scientist.
  • The Additional Penalties: If an institution has too many scientists who shout at themselves (self-cite) or publish in banned magazines, its score drops further.

4. The Shocking Results: Who Won and Who Lost?

When they ran the numbers, the rankings changed dramatically.

  • The Old Winners: Huge universities like Harvard and Stanford still did very well, but not always at the very top of the proportion list.
  • The New Stars: Small, elite research institutes and tech companies (like Meta FAIR, Princeton, and Carnegie Mellon) shot to the top. Why? Because a huge chunk of their scientists are actually top-tier, and they have very few "bad apples."
  • The Big Drop: Some countries and institutions that looked great on the old "Gold Star" lists fell hard.
    • Saudi Arabia, China, Malaysia, Iran, India, and Indonesia saw their rankings plummet.
    • Why? While they have many famous scientists, a surprisingly high number of them were flagged for the "bad behaviors" (retractions, self-citing, or publishing in banned journals).
    • The Analogy: It's like a city that claims to have the most "Best Citizen" awards, but when you check the records, you find that half the winners were caught cheating or living in condemned buildings. Once you subtract those, the city isn't looking so great anymore.

5. The Takeaway: Quality Over Quantity

The main message of this paper is: Don't just count the trophies; check if they were earned fairly.

The authors aren't saying these countries or institutions are "bad." They are saying that if you want to know who is truly excellent, you have to look at the whole picture:

  1. How many great scientists do they have?
  2. What percentage of their total staff are great?
  3. Are those great scientists playing by the rules?

They provide a free, public database so that anyone can check these "Smart Neighborhood Scores." It's a tool to help universities, governments, and the public stop being fooled by big numbers and start looking for honest, high-quality science.

In short: It's a reminder that in science, as in life, it's better to be a small group of honest, hard-working people than a huge crowd of people trying to cheat their way to the top.
