Perceptions of Artificial Intelligence in the Editorial and Peer Review Process: A Cross-Sectional Survey of Traditional, Complementary, and Integrative Medicine Journal Editors

A cross-sectional survey of Traditional, Complementary, and Integrative Medicine journal editors reveals that while they recognize the potential of artificial intelligence to support routine editorial tasks, its actual adoption remains limited due to a lack of institutional policies, training, and ethical guidance.

Ng, J. Y., Bhavsar, D., Krishnamurthy, M., Dhanvanthry, N., Fry, D., Kim, J. W., King, A., Lai, J., Makwanda, A., Olugbemiro, P., Patel, J., Virani, I., Ying, E., Yong, K., Zaidi, A., Zouhair, J., Lee, M. S., Lee, Y.-S., Nesari, T. M., Ostermann, T., Witt, C. M., Zhong, L., Cramer, H.

Published 2026-03-04

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine the world of academic publishing as a massive, high-stakes baking competition.

In this competition, scientists (the bakers) submit their recipes (research papers) to a panel of judges (the journal editors). These judges have to taste the dishes, check if the ingredients are fresh, ensure the recipe is original, and decide if it's good enough to be published in the "World's Best Cookbook."

Now, imagine a new, super-smart robot assistant (Artificial Intelligence or AI) has just entered the kitchen. This robot can read recipes instantly, check for spelling mistakes, and even spot if someone copied a recipe from the internet.

This paper is like a survey sent to the head judges of the "Traditional, Complementary, and Integrative Medicine" (TCIM) baking competition. These judges specialize in unique, ancient, and culturally diverse recipes (like herbal remedies, acupuncture, and meditation) that don't always fit the standard "Western" cooking rules.

Here is what the survey found, translated into everyday language:

1. The Judges Know the Robot, But Don't Let It Cook Yet

  • The Situation: About 70% of the judges know who this robot assistant is and have played with it at home. They've used it to write emails or summarize news.
  • The Catch: When it comes to their actual job as judges, over 60% have never let the robot help them.

  • Analogy: It's like a master chef who owns a high-tech food processor but still insists on chopping onions by hand because they aren't sure if the machine will ruin the flavor of their special, ancient spice blend.

2. What the Judges Would Let the Robot Do

The judges are willing to let the robot handle the boring, repetitive chores. They love the idea of the robot:

  • Checking the Grammar: Fixing typos and making sure the recipe is written in clear English (81% support).
  • Checking for Plagiarism: Making sure the recipe wasn't stolen from someone else (67% support).
  • Analogy: They'd happily let the robot wash the dishes and scrub the pots, but they won't let it decide if the soup tastes good.

3. What the Judges Don't Want the Robot to Do

The judges are very skeptical about letting the robot handle the "human" parts of the job. They don't want the robot:

  • Talking to the Bakers: Handling complaints or explaining why a recipe was rejected.
  • Making the Final Call: Deciding if a complex, culturally unique recipe is actually "good" science.
  • Analogy: They fear the robot is like a butler who can't understand that a specific spice is sacred in one culture but offensive in another. The judges worry the robot will miss the "soul" of the recipe.

4. The Big Problem: No Instruction Manual

The biggest issue isn't that the judges hate the robot; it's that nobody taught them how to use it safely.

  • The Stats: About 65% of the judges said their journal or publisher has no rules (policies) about using AI. Even worse, most haven't taken any training courses.
  • Analogy: Imagine giving a judge a brand-new, complex robot butler but no instruction manual. The judges are saying, "We know this thing is powerful, but we don't know if it's safe to let it near our precious ingredients, and we don't know who to blame if it burns the kitchen down."

5. The Fear of "Hallucinations" and Bias

The judges are worried about two main things:

  • The "Lying" Robot: AI sometimes makes things up (called "hallucinations"). If the robot invents a fake ingredient or a fake study, it could ruin the whole cookbook.
  • The Biased Robot: If the robot was trained on data that only likes "Western" cooking, it might unfairly reject the "Eastern" or "Traditional" recipes this journal specializes in.
  • Analogy: It's like worrying that the robot might accidentally add poison to the soup because it read a fake recipe online, or because it thinks "spicy" is bad just because its training data came from a place that doesn't like spice.

6. The Future: "We Need It, But We Need Rules"

Despite these fears, 83% of the judges believe AI will be very important in the future. They see it as a tool that could save them hours of work.

  • The Verdict: They are ready to adopt the robot, but only if someone writes a clear instruction manual (policies) and gives them a cooking class (training) on how to use it without ruining the food.

Summary

The paper concludes that the judges of these unique medical journals are cautiously optimistic. They see the robot as a helpful assistant for washing dishes and checking spelling, but they are terrified of letting it taste the food or make the final decisions.

The takeaway: Before we let AI run the kitchen, we need to teach the judges how to use it, write clear rules about what it can and cannot do, and make sure it doesn't accidentally ruin the special, cultural flavors of the recipes it's reviewing.
