The Big Question: Who Needs to Know About the AI?
Imagine you are baking a cake. Sometimes you bake it entirely from scratch. Other times, you use a pre-made mix, or maybe you ask a friend to help you frost it.
Now, imagine a new rule is being debated: "Do you have to tell people if you used help (or a machine) to make your cake?"
This paper asks a very specific question: When do the people eating the cake (the readers) and the people baking it (the writers) think it is necessary to disclose that AI was used?
The researchers found that readers and writers often see this situation through very different lenses, and the answer depends heavily on how the AI was used.
The Three Main Ingredients of the Study
To figure this out, the researchers set up a "tasting menu" of 727 different scenarios (called vignettes). They mixed and matched three main ingredients to see how they changed people's minds (a short code sketch after this list shows how this kind of factorial crossing works):
- The Perspective (Who is talking?): Are you the Reader (eating the cake) or the Writer (baking it)?
- The Purpose (What is the cake for?): Is it for a serious job interview (Evaluation), a school lesson (Learning), or just a fun story for a blog (Entertainment)?
- The Procedure (How was the cake made?): This is the most complex part. They looked at four specific ways AI was involved:
- Replaceability: Could the writer have made this cake without the AI? (If the answer is "No," the AI was essential).
- Effort: Did the writer still work hard, or did the AI do almost everything?
- Intentionality: Did the writer give the AI a strict recipe and stay in control, or did they just say "Make me a cake!" and let the AI take the wheel?
- Directness: Did the writer copy the AI's output directly onto the plate, or did they use it just as a reference?
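To make the design concrete, here is a minimal, hypothetical sketch of how a factorial vignette design like this can be enumerated. The factor names follow the paper's description, but the level labels below are invented for illustration; the study's actual levels (and its total of 727 vignettes) come from its own design, not from this toy cross.

```python
# A toy sketch of a factorial vignette design, assuming illustrative
# level labels. The real study's levels and its 727-vignette total
# come from the paper's own design, not this cross.
from itertools import product

factors = {
    "perspective": ["reader", "writer"],
    "purpose": ["evaluation", "learning", "entertainment"],
    "replaceability": ["writer_could_do_alone", "ai_was_essential"],
    "effort": ["high_writer_effort", "low_writer_effort"],
    "intentionality": ["writer_steered", "ai_steered"],
    "directness": ["output_used_directly", "output_used_as_reference"],
}

# Cross every level of every factor to enumerate candidate scenarios.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(vignettes))   # 2*3*2*2*2*2 = 96 combinations in this toy version
print(vignettes[0])     # one concrete scenario description
```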
The Surprising Results
Here is what the study found, translated into plain English:
1. The "Self-Serving" Gap
The Finding: Readers are much more likely to say, "Yes, you must tell us!" than writers are.
The Analogy: Imagine a magician. The audience (readers) wants to know if a trick is real or if it's a machine doing the work. But the magician (writer) might think, "I still performed the show; the machine was just a prop." The study suggests writers often feel less need to disclose because they still feel like the "owner" of the work, even if they used AI.
2. The "Purpose" Myth
The Finding: Surprisingly, it didn't matter much why the text was written.
The Analogy: You might think people care more about honesty when the text is for a serious job application (Evaluation) than for a funny blog post (Entertainment). But the study found that people's need to know about AI usage was roughly the same whether it was for school, work, or fun. The "stakes" didn't change the need for disclosure.
3. The "How" Matters Most (Procedural Factors)
The study found that how the AI was used changed everything. Disclosure was seen as more necessary when:
- The AI was Irreplaceable: If the writer couldn't have produced the text without the AI, people felt it was crucial to admit it.
- The AI Took the Wheel (Low Intentionality): If the writer just typed "Write a story" and let the AI decide the plot, readers felt a strong need to know.
- The AI Output was Used Directly: If the writer just pasted the AI's text with very few changes, disclosure was seen as necessary.
The Twist on Effort:
The researchers expected that if a writer put in very little effort (letting the AI do 90% of the work), people would demand disclosure. That expectation was not borne out: the amount of effort the writer put in didn't significantly change whether people thought disclosure was necessary. It wasn't about how hard the writer worked; it was about how much the AI actually did.
4. The "Steering" Paradox
The Finding: This is the most counterintuitive result.
- Readers think: "If you let the AI drive the car (Low Intentionality), you must tell us!"
- Writers think: "If I let the AI drive the car, I feel less need to tell anyone."
- The Analogy: It's like a driver who says, "I didn't really drive; the GPS drove." The driver feels less responsible, but the passengers (readers) feel like they need to know the car was on autopilot.
What Does This Mean for the Future?
The paper suggests that because writers and readers see things differently, we can't just rely on writers to "do the right thing" on their own.
The Solution: We need better tools.
Instead of just asking writers to "disclose AI use," platforms should help them explain how they used it.
- Don't just say: "I used AI."
- Do say: "I used AI to generate the outline, but I wrote the sentences myself," or "I let the AI write the whole draft, and I just edited it."
The paper suggests designing tools that automatically draft these explanations for writers, making it easier for them to be transparent in a way that matches what readers actually care about (like whether the AI did the heavy lifting or just helped with a few words).
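As a thought experiment, here is a hypothetical sketch of what the core of such a drafting tool might look like: a function (invented for illustration; the paper does not specify an implementation) that turns the procedural factors into a plain-language disclosure statement.

```python
# A hypothetical sketch of a disclosure-drafting helper, assuming the
# three procedural factors the study found mattered most. The function
# name and phrasing are invented for illustration.

def draft_disclosure(ai_essential: bool, writer_steered: bool,
                     used_directly: bool) -> str:
    """Map procedural factors to a plain-language disclosure statement."""
    parts = [
        "the AI was essential; I could not have written this alone"
        if ai_essential else
        "I could have written this without AI, but used it to help",
        "I steered it with detailed instructions"
        if writer_steered else
        "I gave it broad creative control",
        "its output appears here with only light edits"
        if used_directly else
        "I used its output only as a reference",
    ]
    return "AI use: " + "; ".join(parts) + "."

# Example: an AI-dependent, writer-steered, lightly edited text.
print(draft_disclosure(ai_essential=True, writer_steered=True,
                       used_directly=True))
```

The point of a statement like this is that it answers the procedural questions readers actually weigh (was the AI essential, who steered, was output copied), rather than the bare "I used AI."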
Summary
- Readers want to know more about AI use than Writers do.
- Why the text was written (school vs. fun) doesn't change the need for disclosure.
- How the AI was used matters most: If the AI did the heavy lifting, took the lead, or was copied directly, people feel disclosure is necessary.
- Effort matters less than one might expect.
- Writers often feel less need to disclose when they let the AI take the lead, even though Readers feel the opposite.