Evaluating AI-Enabled Deception Vulnerability Among Sub-Saharan African Migrants

This study evaluates the vulnerability of Sub-Saharan African migrants to AI-enabled deception, finding that prior exposure to targeting is the strongest predictor of risk, while confidence in identifying AI content and high verification effort serve as significant protective factors.

Deborah Oluwasanya

Published Tue, 10 Ma

Here is an explanation of Dr. Deborah Oluwasanya's paper, translated into simple, everyday language with some creative analogies.

The Big Picture: The "Digital Sandwich" Trap

Imagine you are at a deli. The most effective lie isn't just a pile of falsehoods; it's a sandwich. You have a slice of truth on the bottom, a slice of truth on the top, and right in the middle? A slice of pure poison (the lie).

That is how AI-enabled deception works today. Scammers use Artificial Intelligence to create messages, videos, or voices that look and sound 99% real. They mix just enough truth to make you trust them, hiding the lie in the middle so you don't suspect a thing.

This study asks a scary question: Are Sub-Saharan African (SSA) migrants, who often send money home across borders, the perfect targets for these "poison sandwiches"?

The Study: Who Did They Ask?

The researcher, Dr. Oluwasanya, didn't just guess. She went out and asked 31 professionals from Nigeria (mostly living in the UK and US) to fill out a survey. She wanted to know:

  1. How long have you been abroad?
  2. Do you send money home often?
  3. Have you ever been scammed by AI?
  4. How good are you at spotting a fake AI message?

The Big Surprises (The Results)

The study found some things that might surprise you, and some that make perfect sense.

1. The "Target Practice" Effect (The Bullseye)

The Finding: The biggest sign that you are vulnerable is if you have already been targeted by a scam before.
The Analogy: Think of a scam like a fishing net. If you get caught in the net once, it doesn't mean you are a "bad swimmer." It means the fisherman saw you, marked you as a "good catch," and is now coming back with a bigger net.
What it means: Scams aren't random. If a scammer tries to trick you and you respond (even just to say "no"), they know you are real, you are reachable, and you might be vulnerable. They will try again, and harder.

2. The "Superpower" of Confidence

The Finding: People who felt confident they could tell the difference between a human and an AI were much less likely to get scammed.
The Analogy: Imagine AI deception is a magic trick. If you know how the magician does the trick (the sleight of hand), the magic doesn't work on you.
What it means: If you know what AI looks like, you have a "superpower." You can spot the fake voice or the weirdly perfect photo. This confidence acts like a shield.

3. The "Pause and Check" Habit

The Finding: People who took the time to double-check a suspicious message before acting were safer.
The Analogy: Think of a suspicious message like a strange package left on your porch. A vulnerable person might open it immediately. A safe person puts on gloves, checks the label, and calls the police to verify it.
What it means: The act of verifying (pausing, checking, asking a friend) is the most powerful tool you have. It's a habit you can learn.

4. The Myth of the "Remittance Risk"

The Finding: Surprisingly, sending money home (remittances) or living abroad for a long time did not make people more vulnerable.
The Analogy: Many people thought that because migrants send money often, they are like sitting ducks for scammers. But the study showed that the act of sending money isn't the problem. The problem is the scammer's trick.
What it means: You aren't vulnerable just because you are a migrant. You are vulnerable if you don't check your "digital mail" carefully. The risk isn't in your bank account; it's in your brain's ability to spot a lie.

The "Sandwich" Solution: What Should We Do?

The paper suggests we need to change how we protect ourselves and our communities. Here are the three main strategies, explained simply:

1. The "Labeling" Law (Infrastructure)
Just like food packaging must list ingredients, AI content should be labeled. If a video is made by AI, it should have a stamp saying "Made by Robot." This helps us spot the "poison sandwich" before we take a bite.

2. The "Inoculation" Shot (Behavior)
Think of AI scams like a virus. The best way to fight a virus isn't just to wear a mask; it's to get a vaccine.

  • The Vaccine: "Inoculation training." This means showing people fake AI scams before they happen. Let them practice spotting the fake. When the real scam comes, their brain recognizes the pattern immediately, just like your body fights a virus it's seen before.

3. The "Stop and Verify" Rule (Policy)
Apps and banks should force a "Pause Button." Before you send money or click a link, the system should ask, "Are you sure? Have you checked this?" This small delay gives your brain time to switch from "panic mode" to "thinking mode."

The Bottom Line

Being a migrant or sending money home doesn't make you a target. Being unprepared makes you a target.

The study tells us that if we teach people to spot the fake, trust their gut, and take a second to verify, we can stop the "poison sandwiches" from working. It's not about having better technology; it's about having smarter, more cautious habits.