Cultural Perspectives and Expectations for Generative AI: A Global Survey Approach

This paper presents findings from a large-scale global survey exploring diverse cultural perspectives on Generative AI. It distills community-defined understandings of culture into recommendations for more inclusive and culturally sensitive AI development, including participatory approaches and frameworks for navigating cultural boundaries.

Erin van Liemt, Renee Shelby, Andrew Smart, Sinchana Kumbale, Richard Zhang, Neha Dixit, Qazi Mamunur Rashid, Jamila Smith-Loud

Published 2026-03-09

Imagine you are a chef trying to cook a single, giant pot of stew for the entire world. You want everyone to enjoy it, but you realize that what tastes like "comfort food" in one country might be considered offensive or even inedible in another.

This paper is essentially a massive taste-test survey conducted by Google researchers to figure out exactly what ingredients (cultural elements) people want in their AI "stew" and, more importantly, which ingredients they absolutely refuse to eat.

Here is the breakdown of their findings, translated into everyday language:

1. The Problem: The AI Chef is Missing the Recipe

Right now, Generative AI (the tech that writes stories or draws pictures) is like a chef who only learned to cook using recipes from a few specific neighborhoods (mostly Western, English-speaking ones). Because of this, when the AI tries to cook a dish for someone in India, Nigeria, or Brazil, it often gets the spices wrong. It might accidentally serve a "sacred" dish as a "joke" or mix up historical facts, leading to stereotypes or hurt feelings.

The researchers asked: "How do we teach this AI to respect the unique flavors of every culture?"

2. The Survey: Asking 5,600 People What Matters

To answer this, they didn't just guess. They asked 5,629 people from 13 different countries (including the US, Brazil, India, Japan, Nigeria, and more) two main things:

  • "What does 'culture' mean to you?"
  • "What parts of your culture should the AI NEVER touch?"

3. The Big Discoveries (The "Flavor Profiles")

A. What is Culture? (It's not just geography)
People didn't just say, "My culture is my country." They described it as a mix of:

  • The "Ancestral" Flavor: For many in Asia and Africa, culture is about ancestors, traditions, and the "total way of life."
  • The "Artistic" Flavor: In Europe, people often pointed to specific art, music, and famous landmarks (like the Eiffel Tower).
  • The "Religious" Flavor: This was the big winner. Across almost every country, Religion and Tradition were the most important parts of identity.

B. The "Red Lines" (The Forbidden Ingredients)
The researchers asked: "Is there anything you want the AI to never draw or talk about?"

  • The Consensus: Over 20% of people said "Yes," and in some countries, it was over 30%.
  • The Top Forbidden Items:
    • Sacred Things: Prayers, religious rituals, and holy texts. People feel AI lacks the "soul" to handle these respectfully.
    • Trauma: Specific historical tragedies (like slavery in the US or the Holocaust in Germany) that shouldn't be turned into a quick AI image.
    • Fake People: Many people (especially in Germany) said, "Don't make fake pictures of real people at all."
    • Misused Symbols: Using sacred symbols (like a religious statue) just to sell a product or make a meme.

C. The "Surprise" Sensitivities
While religion was the global "Red Line," different countries had their own specific sensitivities:

  • South Korea: People were very sensitive about health status and where you live.
  • India & UAE: The concept of "Caste" (a social hierarchy) was a major sensitive topic, even though it wasn't a top concern in other countries.
  • The US & Germany: Historical trauma was a huge "do not touch" zone.

4. The Recommendations: How to Fix the AI Kitchen

The authors suggest that AI developers stop trying to use a "one-size-fits-all" recipe. Instead, they propose a new approach with four pillars:

  1. Listen First (Awareness): Don't just guess what culture is. Use surveys and interviews to actually ask people what matters to them.
  2. Invite the Locals (Participation): Instead of just having a team of engineers in California decide what's "safe," bring in local religious leaders, historians, and community members to help train and check the AI. Think of it as hiring a local taste-tester for every region.
  3. Customize the Settings (Multi-facetedness): The AI should have "regional settings."
    • Example: If a user in the UAE asks for an image of a religious site, the AI should be extremely strict and accurate. If a user in France asks for a painting of a historical event, the AI might have more creative freedom.
  4. Create "Safety Layers" (Nuance): Not all sensitive topics are the same. The authors suggest a tiered system:
    • Tier 1 (The Hard Stop): Things that should never be generated (e.g., specific prayers, sacred rituals).
    • Tier 2 (The High-Stakes Zone): Things that can be generated, but only if the AI is 100% accurate and doesn't stereotype (e.g., historical figures, religious artifacts).
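
To make the "regional settings" and tiered "safety layers" ideas concrete, here is a minimal sketch of what such a policy lookup could look like in code. This is purely illustrative: the tier names, the `(region, topic)` table, and the example entries are hypothetical stand-ins, not an implementation from the paper; a real system would source its rules from community consultation rather than a hard-coded dictionary.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    """Illustrative tiers mirroring the paper's proposed safety layers."""
    HARD_STOP = 1    # Tier 1: never generate (e.g., sacred prayers, rituals)
    HIGH_STAKES = 2  # Tier 2: generate only with strict accuracy checks
    OPEN = 3         # No specific restriction: ordinary creative freedom

@dataclass
class Policy:
    tier: Tier
    note: str

# Hypothetical regional policy table keyed by (region, topic).
# Entries echo examples from the survey findings above.
POLICIES = {
    ("AE", "religious_site"): Policy(Tier.HIGH_STAKES, "require accuracy review"),
    ("DE", "historical_trauma"): Policy(Tier.HARD_STOP, "do not generate"),
    ("FR", "historical_event"): Policy(Tier.OPEN, "creative freedom allowed"),
}

DEFAULT = Policy(Tier.OPEN, "no region-specific rule")

def lookup(region: str, topic: str) -> Policy:
    """Return the safety policy for a (region, topic) pair."""
    return POLICIES.get((region, topic), DEFAULT)

def is_blocked(region: str, topic: str) -> bool:
    """True if the request falls in the hard-stop tier and must be refused."""
    return lookup(region, topic).tier is Tier.HARD_STOP
```

The key design point the paper argues for is visible even in this toy version: the same topic can land in different tiers depending on region, so the policy must be a two-key lookup, not a single global blocklist.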

The Bottom Line

This paper argues that for AI to truly be "global," it needs to stop acting like a tourist who only knows the surface-level facts and start acting like a respectful guest who understands the deep, sacred, and sometimes painful history of the people it is serving.

In short: If you want to build an AI that the whole world trusts, you have to ask the world what they are comfortable with, respect their "Red Lines," and realize that Religion and Tradition are the most important ingredients in the recipe.