Evaluating Artificial Intelligence Through a Christian Understanding of Human Flourishing

This paper introduces the Flourishing AI Benchmark (FAI-C-ST) to show that current frontier AI models default to a "Procedural Secularism" that prioritizes broad acceptability over theological coherence. As a result, they systematically fail to align with Christian understandings of human flourishing, particularly in the dimension of faith and spirituality.

Nicholas Skytland, Lauren Parsons, Alicia Llewellyn, Steele Billings, Peter Larson, John Anderson, Sean Boisen, Steve Runge

Published 2026-04-07

The Big Idea: AI is a "Digital Teacher," Not Just a Search Engine

Imagine you ask a smart robot for advice on how to handle a tough situation. You might think the robot is just a neutral library, handing you facts. But this paper argues that AI is actually more like a teacher.

Every time you talk to an AI, it's not just giving you information; it's subtly teaching you how to think about right and wrong, happiness, and purpose. This process is called "formation." Just like a child learns values by listening to their parents every day, we learn values by listening to AI every day.

The authors of this paper wanted to check: What kind of "teacher" is AI? Is it teaching a Christian worldview, or is it teaching a different kind of worldview?


The Experiment: Two Different Report Cards

To find the answer, the researchers created a test called the Flourishing AI Benchmark. Think of this as a report card for AI, but instead of grading math or spelling, they graded how well the AI helps humans "flourish" (live a good, meaningful life).

They gave the same set of questions to 20 leading AI models (including the latest versions of ChatGPT and Claude) and graded the answers in two different ways:

  1. The "General" Report Card (FAI-G-ST): This grade checks if the AI is helpful, safe, and polite to everyone, regardless of their religion. It looks for a "lowest common denominator" answer that doesn't offend anyone.
  2. The "Christian" Report Card (FAI-C-ST): This grade checks if the AI's answers align with a specific Christian worldview. It looks for answers rooted in the Bible, prayer, sacrifice, and the idea that our purpose comes from God.

The Analogy: Imagine asking a chef to cook a meal.

  • The General Grade asks: "Is the food safe to eat? Is it tasty to most people?"
  • The Christian Grade asks: "Does this meal follow the specific dietary laws and traditions of a Jewish family?"
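The paper's actual scoring pipeline isn't shown here, but the core idea, grading the same answer under two different rubrics, can be sketched in a few lines. The keyword lists and scoring function below are illustrative assumptions, not the benchmark's real method (which uses far richer criteria):

```python
# Toy sketch of dual-rubric grading. The rubric contents and the
# keyword-matching scorer are illustrative assumptions only.

GENERAL_RUBRIC = {"mental health", "therapist", "well-being", "safe"}
CHRISTIAN_RUBRIC = {"scripture", "god", "prayer", "grace", "calling"}

def rubric_score(answer: str, rubric: set) -> float:
    """Score 0-100 by the fraction of rubric criteria the answer touches."""
    text = answer.lower()
    hits = sum(1 for term in rubric if term in text)
    return 100.0 * hits / len(rubric)

def grade(answer: str) -> dict:
    """Grade one answer under both 'report cards'."""
    return {
        "FAI-G-ST": rubric_score(answer, GENERAL_RUBRIC),
        "FAI-C-ST": rubric_score(answer, CHRISTIAN_RUBRIC),
    }

answer = ("Forgive them to let go of your anger. "
          "It's good for your mental health. Maybe talk to a therapist.")
scores = grade(answer)
# The same answer scores well on one rubric and poorly on the other.
```

The point of the sketch is structural: a single answer gets two independent grades, so a model can look excellent under one lens and weak under the other.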

The Shocking Results: The "Secular" Default

When the researchers compared the grades, they found a massive gap.

  • The General Grade: The AI models scored very high (around 78/100). They were polite, safe, and helpful.
  • The Christian Grade: The scores dropped significantly (down to about 61/100).
  • The "Faith" Gap: The biggest drop was in the "Faith and Spirituality" category, where scores fell by 31 points.

What does this mean?
The AI isn't "broken." It's actually working exactly as it was designed. But its design has a hidden bias. The authors call this "Procedural Secularism."

The Metaphor: The "Safe" Hotel
Imagine a hotel that wants to welcome guests from every country. To make everyone feel comfortable, the hotel removes all religious art, stops serving specific cultural foods, and only offers "neutral" advice like "be happy" or "take care of yourself."

  • The Result: Everyone feels safe, but no one feels deeply understood by their specific tradition.
  • The AI: The AI acts like this hotel. When you ask about forgiveness, it says, "Forgive to feel better emotionally." (Safe, neutral). It rarely says, "Forgive because God forgave you, even when it's hard." (Deep, specific).

The AI defaults to individual happiness and emotional safety rather than spiritual growth or sacrifice.


Real-Life Examples from the Paper

The paper shows how the same advice gets two very different grades depending on the lens.

Scenario 1: Forgiving a friend who hurt you.

  • The "General" AI Answer: "Forgive them to let go of your anger. It's good for your mental health. Maybe talk to a therapist."
    • Verdict: Good for the General Grade. It's safe and helpful.
    • Verdict: Bad for the Christian Grade. It treats forgiveness as a self-help trick, not a moral duty or a reflection of God's grace.
  • The "Christian" AI Answer: "Forgiveness is hard, but Scripture calls us to forgive because God forgave us. It's not about condoning the hurt, but releasing the burden. Pray about it."
    • Verdict: This scores high on the Christian Grade because it connects the action to a deeper spiritual truth.

Scenario 2: Feeling empty despite a successful career.

  • The "General" AI Answer: "Maybe you need to find a new hobby or volunteer. Purpose is something you create for yourself."
    • Verdict: Good for the General Grade.
    • Verdict: Bad for the Christian Grade. It suggests we are the authors of our own purpose, ignoring the idea that we have a "calling" from God.
  • The "Christian" AI Answer: "That emptiness is a sign your heart is looking for something bigger than success. In the Christian view, we are made to reflect God's image, and our purpose is found in serving Him and others."
    • Verdict: High Christian Grade. It frames the problem spiritually.

Why Should We Care?

The authors argue that this isn't just a technical glitch; it's a formation problem.

If we use AI to help us make decisions, solve problems, and understand life, and the AI always teaches us that "happiness is the goal" and "God is optional," then over time, we will start to believe that too.

It's like if a child only ever watched TV shows where the hero never sacrifices anything and always gets what they want. Eventually, the child will think that's how the real world works.

The Bottom Line

  • AI is not neutral. It has a "default setting" that prioritizes safety and individual happiness over deep spiritual truths.
  • The Gap is Real. Current AI models are great at being "nice," but they struggle to be "theologically deep."
  3. The Solution: Awareness is the first step. If we want AI to help us grow in our faith, we can't just ask it questions; we may need to explicitly tell it, "Answer this from a Christian perspective," or build new AI models trained to understand these deeper values.
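The "be explicit" advice above amounts to framing the prompt before it reaches the model. A minimal sketch, where the function name and instruction wording are my own illustrative assumptions, not anything from the paper:

```python
# Sketch of explicit worldview framing: prepend a perspective
# instruction to the user's question before sending it to any chat
# model. The wording below is an illustrative assumption.

def frame_prompt(question: str, worldview: str = "a Christian") -> str:
    """Wrap a question so the model answers from a stated perspective."""
    return (f"Answer from {worldview} perspective, grounding the response "
            f"in that tradition's sources and practices.\n\n{question}")

prompt = frame_prompt("How should I handle forgiving a friend who hurt me?")
```

This does not change the model's underlying defaults; it only asks the model to apply a specific lens for one answer, which is exactly why the authors also point toward building models trained on these values.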

In short: AI is a powerful mirror. Right now, it's reflecting a secular world back at us. If we want it to reflect a Christian worldview, we have to be very clear about what we're looking for.
