Engineering FAIR Privacy-preserving Applications that Learn Histories of Disease

This paper presents a successful in-browser deployment of a generative transformer model for predicting individual disease morbidity risk. Its secure, client-side architecture serves as a blueprint that adheres to the Reusability principle of the FAIR data standards while eliminating the privacy concerns associated with moving patient data off the user's device.

Ines N. Duarte, Praphulla M. S. Bhawsar, Lee K. Mason, Jeya Balaji Balasubramanian, Daniel E. Russ, Arlindo L. Oliveira, Jonas S. Almeida

Published 2026-03-03

🏥 The Big Problem: The "Glass House" of Medical AI

Imagine you have a super-intelligent doctor (an AI) who can look at your medical history and predict what illnesses you might get in the future. That would be amazing for early prevention!

However, there's a catch. To get this doctor's advice, you usually have to mail your entire medical file to a giant, central hospital server.

  • The Risk: You have to trust that the server won't leak your secrets, get hacked, or sell your data.
  • The Fear: Many people are scared to use these tools because they don't want their private health data leaving their house.

🚀 The Solution: The "Pocket Doctor"

The team in this paper asked a simple question: "What if we didn't have to send the data to the doctor? What if we could bring the doctor to the data?"

They built a web application that runs entirely inside your web browser (like Chrome or Safari).

  • No Uploads: Your medical history never leaves your computer.
  • No Installations: You don't need to download a heavy app; you just visit a website.
  • The Result: The AI does the math right there on your screen, in your "living room," and then disappears. Your data never travels across the internet.

🧱 How Did They Do It? (The Engineering Magic)

To make a giant, complex AI run on a regular computer without a supercomputer, they used three main "tools":

1. The Universal Translator (ONNX)

Think of the original AI model as a high-end French chef trained in a fancy kitchen (using a framework called PyTorch). You can't just ask this chef to cook on a tiny camping stove (a web browser); the equipment doesn't match.

The team used a tool called ONNX (the Open Neural Network Exchange) to translate the chef's recipe into a "Universal Language."

  • The Analogy: It's like taking a complex French recipe and rewriting it so a campfire, a microwave, or a fancy stove can all cook the exact same dish. This allowed the AI to leave its "fancy kitchen" and run on the "camping stove" of your web browser.
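To make the "universal recipe" concrete, here is a hypothetical sketch of what loading an ONNX-exported model in the browser can look like using the onnxruntime-web library. The file name `delphi.onnx` and the tensor names `input_ids` and `logits` are assumptions for illustration, not the paper's actual SDK interface:

```javascript
// Hypothetical sketch: running an ONNX-exported model client-side.
// "delphi.onnx", "input_ids", and "logits" are illustrative names;
// the paper's real SDK may use different ones.
async function runExportedModel(tokenIds) {
  // Dynamically import onnxruntime-web so this file parses anywhere.
  const ort = await import("onnxruntime-web");

  // The same .onnx file the PyTorch "chef" was translated into can be
  // loaded here, with no Python or PyTorch installed.
  const session = await ort.InferenceSession.create("delphi.onnx");

  // Wrap the patient's event tokens in a [1, sequence_length] tensor.
  const input = new ort.Tensor(
    "int64",
    BigInt64Array.from(tokenIds.map(BigInt)),
    [1, tokenIds.length]
  );

  // Inference runs entirely on the client; nothing leaves the machine.
  const output = await session.run({ input_ids: input });
  return output.logits.data;
}
```

The key design point: once the model is an `.onnx` file, any runtime that speaks ONNX can "cook the dish," whether that runtime lives on a server, a phone, or a browser tab.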

2. The Bridge (WebAssembly)

Even with the recipe translated, the JavaScript engine in your browser is usually too slow for the heavy math a transformer needs.

  • The Analogy: Imagine the browser is a bicycle, but the AI needs a race car. WebAssembly is the engine upgrade: it lets the browser run the AI's math at nearly the speed of a native computer program, fast enough to give you predictions in real time.
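"Near-native code inside the browser" is easier to believe when you see it. The sketch below (not from the paper) hand-assembles the smallest possible WebAssembly module, a single `add` function, and runs it through the same `WebAssembly` API that ML runtimes use to ship their compiled math kernels:

```javascript
// A minimal WebAssembly module, written out byte-by-byte, exporting
// one function: add(a, b) -> a + b. Real ML runtimes deliver megabytes
// of compiled math kernels through exactly this same API.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,             // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, // type section: one func type,
  0x7f, 0x01, 0x7f,                   //   (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,             // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, // export section: name "add"
  0x64, 0x00, 0x00,                   //   is function index 0
  0x0a, 0x09, 0x01, 0x07, 0x00,       // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, //   local.get 0, local.get 1, i32.add, end
]);

// Compile and instantiate; this works in browsers and in Node.js.
const wasmModule = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(wasmModule);
const add = instance.exports.add;

console.log(add(2, 3)); // → 5
```

The raw bytes are compiled to machine code before they run, which is why a browser tab can churn through a transformer's matrix multiplications fast enough for interactive predictions.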

3. The Personal Assistant (The SDK)

The AI speaks in a language of numbers and codes that humans can't read.

  • The Analogy: The team built a custom JavaScript SDK (a software toolkit) that acts as a translator.
    • Input: You type in your medical history (e.g., "I had a broken leg at age 10"). The SDK translates this into numbers the AI understands.
    • Processing: The AI runs the math.
    • Output: The SDK takes the AI's raw numbers and turns them back into human words: "Based on this, you have a 15% chance of developing arthritis by age 60."
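The input/processing/output loop above can be sketched in plain JavaScript. Everything here is made up for illustration: the tiny four-word vocabulary and the hard-coded logits are placeholders, not Delphi's real token scheme, but the encode → model → decode shape is the "translator" job the SDK performs:

```javascript
// Hypothetical sketch of an SDK's "translator" role. The vocabulary
// and logits below are invented; the real Delphi SDK has its own scheme.
const VOCAB = { "fracture": 0, "asthma": 1, "arthritis": 2, "healthy": 3 };
const ID_TO_NAME = Object.fromEntries(
  Object.entries(VOCAB).map(([name, id]) => [id, name])
);

// Input step: turn human-readable events into the numbers a model eats.
function encode(events) {
  return events.map((e) => VOCAB[e]);
}

// Output step: turn the model's raw scores (logits) back into
// human-readable probabilities via a softmax.
function decode(logits) {
  const maxLogit = Math.max(...logits);            // for numeric stability
  const exps = logits.map((x) => Math.exp(x - maxLogit));
  const total = exps.reduce((a, b) => a + b, 0);
  return logits.map((_, id) => ({
    event: ID_TO_NAME[id],
    probability: exps[id] / total,
  }));
}

const tokenIds = encode(["fracture", "asthma"]); // → [0, 1]
// Pretend the model returned these logits for the next event:
const predictions = decode([0.1, 0.2, 2.0, 1.0]);
```

In this toy run, "arthritis" gets the largest logit, so after the softmax it comes out with the highest probability, which is exactly the kind of number the SDK then phrases as a human-readable risk statement.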

🎮 What Does It Look Like?

The team created a website (a "Delphi App") that looks like a timeline.

  1. Left Side: You enter your past health events (like a timeline of your life).
  2. Right Side: As you type, the AI instantly draws a line into the future, predicting what might happen next.
  3. The Magic: All of this happens instantly on your screen. If you refresh the page, the data is gone. It never touched a server.

🌟 Why Is This Important? (The "FAIR" Principles)

The paper mentions "FAIR" principles (Findable, Accessible, Interoperable, Reusable). In simple terms:

  • Interoperable: They proved that AI models don't have to be stuck in one specific computer system. They can move anywhere.
  • Reusable: They didn't just build a one-time trick; they published a reusable blueprint for privacy-preserving apps that other developers can follow to build their own secure medical tools.

⚠️ The Catch (Limitations)

The authors are honest about the limitations:

  • The Training Data: The AI they used was trained on synthetic (fake) data, because real patient data is too sensitive to share easily. So while the technology works end-to-end, the predictions aren't as accurate as they would be if the model had learned from millions of real patients.
  • Future Goal: They hope to eventually use real data (with strict privacy rules) to make the predictions truly life-saving.

🏁 The Bottom Line

This paper is a proof-of-concept that says: "We don't have to choose between powerful AI and your privacy."

By moving the "brain" of the AI from a central server to your local browser, they created a secure way to use advanced medical technology. It's like taking a supercomputer out of the cloud and shrinking it down to fit in your pocket, so you can keep your secrets safe while still getting the benefits of the future.
