Vibe Researching as Wolf Coming: Can AI Agents with Skills Replace or Augment Social Scientists?

This paper argues that AI agents equipped with specialized skills can augment, but not fully replace, social scientists: through "vibe researching," they can execute codifiable research tasks autonomously. Human theoretical originality and tacit knowledge remain indispensable, even as the profession faces emerging risks of stratification and pedagogical crisis.

Yongjun Zhang

Published 2026-03-10

Imagine you are a chef trying to create a new, award-winning dish. In the past, you had to chop every vegetable, measure every spice, and stir every pot yourself. If you wanted to try a complex recipe, it took you weeks of hard labor.

Now, imagine you have a super-intelligent kitchen robot (an AI Agent) that can do all the chopping, measuring, stirring, and even plating for you. It can read 10,000 cookbooks in a second, check if the ingredients are fresh, and even write the recipe card for you.

This paper, titled "Vibe Researching as Wolf Coming," asks a scary but exciting question: If the robot does all the cooking, what is left for the human chef to do?

Here is the breakdown of the paper in simple terms, using some creative analogies.

1. The "Wolf" is Already Here

The title references the fable of "The Boy Who Cried Wolf," but with a twist: The wolf is real, and it's at the door.

In the past, computers were just calculators (Wave 1) or data collectors (Wave 2). They could do math or find facts, but they couldn't think.

  • The Old Way: You asked the computer to crunch the numbers, and it gave you a result. You had to figure out what the result meant.
  • The New Way (AI Agents): The computer is now a "research assistant" that can do the whole job. It can find the books, read them, write the essay, check the grammar, and even pretend to be a critic reviewing your work.

The author calls this "Vibe Researching." It's like "vibe coding" (where you tell a computer what you want, and it writes the code without you reading the details). In research, you tell the AI your idea, and it builds the whole study for you.

2. The Tool: "Scholar-Skill"

The author built a specific tool called Scholar-Skill. Think of this as a Swiss Army Knife for researchers that has 26 different blades.

  • It can write the introduction.
  • It can find the data.
  • It can run the complex math.
  • It can format the paper for specific magazines (journals).
  • It can even simulate a panel of critics to tell you how to fix your paper before you submit it.

The author used this tool to write the paper you are reading right now. It did the heavy lifting, but the human (the author) still had to steer the ship.
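
The paper describes Scholar-Skill at a high level rather than line by line, but the general "agent with skills" pattern is easy to picture: a registry of named, single-purpose functions that an agent can dispatch tasks to. Below is a minimal sketch of that pattern in Python; every name in it (the registry, the two example skills) is a hypothetical illustration, not Scholar-Skill's actual code.

```python
# A minimal sketch of the "agent with skills" pattern, NOT Scholar-Skill's
# actual code. All names below are hypothetical illustrations.
from typing import Callable

SKILLS: dict[str, Callable[[str], str]] = {}

def skill(name: str):
    """Register a function as a named skill the agent can dispatch to."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        SKILLS[name] = fn
        return fn
    return register

@skill("draft_introduction")
def draft_introduction(topic: str) -> str:
    # A real agent would call a language model here; we return a stub.
    return f"[Draft introduction on: {topic}]"

@skill("simulate_reviewers")
def simulate_reviewers(manuscript: str) -> str:
    # Stand-in for the "panel of critics" skill described above.
    return f"[Mock reviewer comments on {len(manuscript)} characters]"

def run_skill(name: str, payload: str) -> str:
    """Look up a skill by name and run it on the given input."""
    if name not in SKILLS:
        raise KeyError(f"No skill registered for {name!r}")
    return SKILLS[name](payload)

print(run_skill("draft_introduction", "vibe researching"))
```

The point of the pattern is modularity: each skill is codifiable in isolation, which is exactly why these "Recipe" tasks (see the next section) automate so well.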

3. The Big Problem: The "Cognitive Boundary"

The paper argues that we shouldn't just ask, "Can AI do this task?" Instead, we need to ask, "Does this task require a human soul (or 'tacit knowledge')?"

The author creates a map to separate tasks into two groups:

  • The "Recipe" Tasks (High Automation): These are like following a strict recipe. "Add 2 cups of flour," "Run this specific math formula," "Check for spelling errors."
    • The AI is amazing here. It never gets tired, never makes a math error, and can read every book in the library in seconds.
  • The "Gut Feeling" Tasks (Low Automation): These are like knowing which dish to cook for a specific guest, or knowing that a certain ingredient tastes weird in this specific season. This requires "field knowledge"—the unwritten rules, the politics, the intuition of what matters right now.
    • The AI struggles here. It can mimic the style, but it doesn't truly understand why an idea is brilliant or why a specific theory is dead. It can't feel the "vibe" of the academic community.

The Danger Zone: The paper warns that the boundary isn't between "Step 1" and "Step 2." The boundary cuts through every step. Even when writing a paper (a "Recipe" task), you need "Gut Feeling" to know if the argument makes sense. If you let the AI do the "Gut Feeling" parts, you might publish nonsense that looks perfect on the surface.
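
One way to make the author's "map" concrete is to imagine it as a lookup from tasks to automation potential. The sketch below is an illustrative toy, assuming a simple high/low labeling; the tasks and labels are my examples, not the paper's actual taxonomy.

```python
# An illustrative toy version of the paper's "cognitive boundary" map.
# The tasks and labels are examples, not the paper's actual taxonomy.
TASK_CODIFIABILITY = {
    # "Recipe" tasks: rule-governed, high automation potential.
    "format citations": "high",
    "run a regression": "high",
    "check grammar": "high",
    # "Gut Feeling" tasks: tacit, field-dependent, low automation potential.
    "judge theoretical novelty": "low",
    "sense what the field cares about": "low",
    "pick the research question": "low",
}

def automation_advice(task: str) -> str:
    """Suggest who should own a task under the high/low split."""
    level = TASK_CODIFIABILITY.get(task)
    if level == "high":
        return f"{task}: delegate to the agent, then verify the output."
    if level == "low":
        return f"{task}: keep with the human researcher."
    return f"{task}: unmapped, so default to human judgment."

for t in ("run a regression", "judge theoretical novelty"):
    print(automation_advice(t))
```

Keep the Danger Zone warning in mind, though: because the boundary cuts through every task, a lookup like this is a starting heuristic, not a license to delegate whole steps.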

4. The Three Big Risks

If we let the AI take over too much, three bad things happen:

  1. The "Fragile" Chef: If you let the robot chop all the vegetables, you forget how to hold a knife. If the robot makes a mistake (and it will), you won't have the skills to catch it. You become a "manager" of a process you don't understand.
  2. The Rich Get Richer (Stratification): Only researchers at fancy universities with money can afford the best AI tools. Researchers at smaller schools or in other languages might get left behind. This creates a bigger gap between the "haves" and "have-nots" in science.
  3. The Student Crisis: If we teach students to just "press the button" and let the AI do the work, they won't learn how to think. When they graduate, they might be great at using the tool, but terrible at solving problems when the tool fails.

5. How to Survive the Wolf (The 5 Rules)

The author doesn't say "Ban the AI!" Instead, he says, "Use it wisely." Here are the 5 rules for "Responsible Vibe Researching":

  1. Be Honest (Disclose): Tell people, "I used an AI to help write this." It's not cheating; it's just being transparent.
  2. Double-Check Everything (Verify): Never trust the robot blindly. If the AI writes code, you must run it yourself. If it cites a book, check that the book actually exists (see the small verification sketch after this list).
  3. Keep Your Skills Sharp (Maintain Skills): Don't stop doing the hard work. Sometimes, write a paragraph by hand or run a math problem without the AI. You need to keep your "muscles" strong so you can spot when the AI is wrong.
  4. Keep the Big Ideas Human (Protect Originality): The AI can give you 100 ideas, but you must choose the one that is truly new and important. The "spark" must come from you.
  5. Share the Tools (Design for Access): Make sure these tools are available to everyone, not just the rich and powerful.
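
Rule 2 can even be partly mechanized. As one concrete example, an AI-supplied citation that comes with a DOI can be checked against the public CrossRef REST API, which returns a 404 for DOIs it doesn't know. This is a minimal sketch, not the author's tooling; the second DOI is fabricated for the demo.

```python
# A minimal sketch of rule 2 ("verify"): check that an AI-supplied DOI
# actually exists before trusting the citation. Uses the public CrossRef
# REST API; error handling is deliberately simple.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef knows this DOI (HTTP 200), False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))   # a real, well-known paper
print(doi_exists("10.9999/made.up.2026"))  # a fabricated DOI for the demo
```

Note the limit of the check: a 200 response proves the DOI exists, not that the paper actually supports the claim the AI attached to it. You still have to read the source.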

The Bottom Line

The paper ends with a powerful analogy: Aviation.
Pilots don't just "fly" anymore; they use autopilot. But they still train for years, check the instruments, and know how to take over if the autopilot fails.

AI is the autopilot for social science. It can fly the plane faster and smoother than ever before. But we still need to be the pilots. If we let go of the controls completely, we might crash into a mountain we didn't see coming.

The Wolf is here. The question isn't whether to let it in (it's already inside). The question is: Will you let it fly the plane alone, or will you keep your hands on the controls?