From Carb Counting to Diagnosis: Real World Patient Uses and Attitudes Toward Large Language Models in Diabetes Management

This paper investigates how patients with diabetes currently utilize large language models (LLMs) for diverse self-management tasks, revealing that these tools serve as multifaceted aids for interpretation, decision-making, and emotional support while highlighting the need for safer integration into clinical ecosystems.

Nkweteyim, R. N., Shet, V. G., Iregbu, S., He, L.

Published 2026-03-19

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine that living with diabetes is like piloting a very complex, high-tech spaceship. You have to constantly check your fuel gauges (blood sugar), adjust your engines (insulin), plan your route (diet), and keep the ship from crashing, all while the rules of space travel keep changing. It's exhausting, confusing, and sometimes you feel like you're flying blind.

This paper is a report card on how people are trying to use a new, super-smart co-pilot called Large Language Models (LLMs)—like ChatGPT, Gemini, or Copilot—to help them fly their diabetes spaceship.

Here is the breakdown of what the researchers found, explained simply:

1. The Problem: The Pilot is Overworked

Managing diabetes is a full-time job. Patients have to count carbs, check numbers, take meds, and worry about the future. They often feel overwhelmed and don't always have a doctor available 24/7 to answer questions like, "Can I eat this?" or "Why is my sugar high right now?"

2. The New Tool: The "Super-Intern"

Enter the LLM. Think of an LLM as a super-intelligent, 24/7 intern who has read every medical book, website, and diet plan in existence. It can chat with you, answer questions instantly, and look at your data.

The researchers went into online communities (like Reddit) where diabetes patients hang out and asked: "How are you using this new intern?" They looked at nearly 3,000 posts and comments to see what was actually happening in the real world.
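To make the analysis step a little more concrete, here is a minimal, hypothetical sketch of how posts could be sorted into use-case buckets by keyword. The study's actual categorization was done by researchers reading and coding the posts; the category names, keywords, and function below are invented purely for illustration.

```python
# Hypothetical illustration only: a toy keyword tagger for sorting posts into
# use-case buckets. The paper's categories came from researchers reading the
# posts; these names and keywords are invented for this sketch.

CATEGORIES = {
    "meal_planning": ["carb", "recipe", "meal", "eat"],
    "data_interpretation": ["spike", "graph", "trend", "cgm"],
    "emotional_support": ["burnout", "alone", "stressed", "vent"],
}

def tag_post(text: str) -> list[str]:
    """Return every category whose keywords appear in the post text."""
    lowered = text.lower()
    return [name for name, words in CATEGORIES.items()
            if any(word in lowered for word in words)]

posts = [
    "Asked ChatGPT to count the carbs in a slice of pizza",
    "Why did my CGM graph spike at 3 am?",
]
for post in posts:
    print(tag_post(post))
# ['meal_planning']
# ['data_interpretation']
```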

3. What Patients Are Doing with Their "Intern"

The study found that patients are using these AI tools for nine different jobs, ranging from "light chores" to "heavy lifting." Here are some of the biggest ones:

  • The Meal Planner (Most Popular): Just like asking a friend for dinner ideas, patients ask the AI, "What can I eat that won't spike my sugar?" or "Help me count the carbs in this pizza."
  • The Data Detective: Patients upload charts of their blood sugar levels and ask the AI, "Why did my sugar go up last night?" The AI acts like a detective, looking for patterns in the numbers that a human might miss.
  • The Translator: Medical jargon is like a foreign language. Patients use the AI to translate complex lab results (like "HbA1c") into plain English; a short sketch of that kind of translation follows this list.
  • The Emotional Cheerleader: Sometimes, patients just need to vent. They chat with the AI to feel less alone or to get a joke to lighten the mood.
  • The Troubleshooter: If a medical device (like a continuous glucose monitor) starts beeping annoyingly, patients ask the AI, "Is this broken? How do I fix it?"
  • The Second Opinion: Some patients are even asking the AI, "Based on my symptoms, do I have diabetes?" or "Is my doctor right about my diagnosis?"
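To make the "Translator" job above more concrete, here is the kind of conversion a patient might ask for: turning an HbA1c percentage into an estimated average blood sugar. The formula used is the widely published ADAG conversion (estimated average glucose in mg/dL ≈ 28.7 × HbA1c − 46.7); it appears here only as an illustration and is not taken from the paper.

```python
# Illustrative only: translate an HbA1c lab value into an estimated average
# glucose using the published ADAG formula (eAG mg/dL = 28.7 * HbA1c% - 46.7).
# Not from the paper; always confirm lab results with a clinician.

def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose (mg/dL) for a given HbA1c percentage."""
    return 28.7 * hba1c_percent - 46.7

print(round(estimated_average_glucose(7.0)))  # 154 (mg/dL)
```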

4. The Good, The Bad, and The Dangerous

The researchers found that while the "intern" is helpful, it's not perfect.

  • The Good: It's fast, it's always awake, and it makes patients feel less alone. It helps them understand their disease better and reduces the mental load of managing it.
  • The Bad: The intern sometimes "hallucinates," meaning it confidently makes things up. One user asked for diet advice, and the AI suggested eating six eggs a day, the opposite of what their nutritionist had recommended.
  • The Dangerous: This is the big worry. Some patients are letting the AI make life-or-death decisions. They are asking the AI, "How much insulin should I take right now?" and then actually doing it.
    • The Analogy: Imagine asking your super-smart intern to adjust the nuclear reactor on your spaceship. If the intern gets the math wrong, the ship explodes. The AI is not a doctor; it doesn't have the legal license or the human judgment to make those calls.

5. The Big Lesson: Don't Fire the Pilot, Hire a Better Co-Pilot

The paper concludes that we can't stop patients from using these tools; they are too useful and too accessible. Instead, we need to figure out how to use them safely.

  • For Patients: Treat the AI like a library, not a doctor. Use it to learn, to brainstorm, and to organize your data, but always double-check the big decisions with a real human professional.
  • For Doctors: Don't ban the tool. Instead, ask your patients, "Are you using AI? What is it telling you?" and help them understand the difference between a helpful suggestion and a dangerous order.
  • For Tech Companies: We need to build "guardrails." The AI should be programmed to say, "I am not a doctor. Please check with your healthcare team before changing your insulin." (A small sketch of what one such guardrail could look like follows this list.)
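As a loose illustration of what such a guardrail could look like in code, the sketch below checks a question for dosing-related wording and attaches a safety notice before any model answer is shown. The keyword list, wording, and function names are invented for this example; real deployed systems use far more robust safety layers.

```python
# Hypothetical guardrail sketch: flag insulin-dosing questions and attach a
# safety notice before showing the model's answer. Keywords and wording are
# invented here; production safety layers are much more sophisticated.

DOSING_TERMS = ("how much insulin", "units of insulin", "change my dose",
                "bolus", "basal rate")

SAFETY_NOTICE = ("I am not a doctor. Please check with your healthcare team "
                 "before changing your insulin or other medications.")

def needs_safety_notice(question: str) -> bool:
    """Return True if the question looks like a dosing decision."""
    lowered = question.lower()
    return any(term in lowered for term in DOSING_TERMS)

def answer_with_guardrail(question: str, model_answer: str) -> str:
    """Prepend the safety notice whenever a dosing question is detected."""
    if needs_safety_notice(question):
        return f"{SAFETY_NOTICE}\n\n{model_answer}"
    return model_answer

print(answer_with_guardrail(
    "How much insulin should I take for this pizza?",
    "(model's answer would go here)"))
```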

The Bottom Line

Patients are already using AI to manage their diabetes, and they are doing it in creative, sometimes risky, ways. The goal isn't to stop them, but to make sure the "super-intern" is supervised by a human pilot so that the spaceship stays safe and on course.
