Social Proof is in the Pudding: The (Non)-Impact of Social Proof on Software Downloads

Through two field experiments on GitHub involving the manipulation of repository stars and package download counts, the study finds that social proof metrics have no discernible impact on subsequent software downloads or developer engagement, suggesting that open-source software choices are not easily gamed by inflating these indicators.

Lucas Shen, Gaurav Sood

Published Tue, 10 Ma

Imagine you walk into a massive, chaotic flea market where thousands of vendors are selling tools. You need a specific wrench, but you've never seen any of these tools before. How do you decide which one to buy?

You probably look for social proof. You ask yourself: "Who else is buying this? How many people have given this tool a thumbs-up? Is it a best-seller?"

This is exactly how software developers choose "open-source" code (the digital tools that build our apps, websites, and systems). They can't inspect every single line of code for security flaws because it takes too long. So, they rely on the "crowd's" opinion. On the platform GitHub, this opinion is measured by "Stars" (like "Likes" on Facebook) and "Downloads."
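These signals are public: GitHub's REST API reports a repository's star count directly. A minimal sketch of reading it (the `octocat/Hello-World` demo repository is used here only as a placeholder, and the parsing is split out so it can be checked without a network call):

```python
import json
import urllib.request


def parse_star_count(payload: dict) -> int:
    # GitHub's repository endpoint reports stars as "stargazers_count"
    return payload["stargazers_count"]


def fetch_star_count(owner: str, repo: str) -> int:
    # GET https://api.github.com/repos/{owner}/{repo}
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return parse_star_count(json.load(resp))


if __name__ == "__main__":
    # "octocat/Hello-World" is GitHub's own demo repository
    print(fetch_star_count("octocat", "Hello-World"))
```

The point is how cheap the signal is to read — one unauthenticated HTTP request — which is exactly why it is also cheap to game.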

The big fear is that bad guys (hackers) could fake these numbers. They could buy fake "Stars" or use robots to fake "Downloads" to make a dangerous, virus-ridden tool look popular and trustworthy, tricking developers into using it.

The Big Question: If a hacker fakes the popularity of a software tool, will it actually trick people into downloading it?

The Experiment: "The Fake Popularity Test"

Two researchers decided to find out by running a real-life experiment, like a scientist testing a new fertilizer on a field of crops. They did this in two ways:

1. The "Star" Experiment (The Like Button Test)

  • The Setup: They picked 100 brand-new, unknown software tools on GitHub.
  • The Trick: For some of them, they inflated the star count — buying fake "Stars" from a black-market website for one group, and asking friends to click the "Star" button for another — while leaving the rest at zero stars as a comparison group.
  • The Result: They waited to see if these "popular" tools got more downloads.
  • The Verdict: Nothing happened. The tools with the fake stars didn't get any more downloads than the tools with zero stars. The developers didn't fall for the trick.

2. The "Download" Experiment (The Sales Counter Test)

  • The Setup: They picked thousands of software packages.
  • The Trick: They wrote a computer script to "download" these packages 100 times each, making the official download counter look huge (like a store that suddenly sold 100x more items than usual).
  • The Result: They waited to see if this fake sales spike made real people download the software.
  • The Verdict: Again, nothing happened. The real download numbers didn't budge.
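The "nothing happened" verdict boils down to a simple comparison: average real downloads for the inflated packages versus the untouched ones, checked against what chance alone would produce. A toy sketch of that check on made-up numbers (not the study's data — both groups are deliberately drawn from the same distribution, mimicking a treatment with no real effect):

```python
import random
import statistics

random.seed(0)

# Hypothetical weekly download counts for 1,000 packages per group.
control = [random.randint(0, 20) for _ in range(1000)]
treated = [random.randint(0, 20) for _ in range(1000)]

# Observed treatment effect: difference in mean downloads.
diff = statistics.mean(treated) - statistics.mean(control)

# Permutation test: shuffle the group labels many times and count how
# often a gap at least this large appears by pure chance.
pooled = control + treated
n = len(control)
extreme = 0
trials = 2000
for _ in range(trials):
    random.shuffle(pooled)
    perm_diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if abs(perm_diff) >= abs(diff):
        extreme += 1

p_value = extreme / trials
print(f"mean difference: {diff:.2f}, p-value: {p_value:.2f}")
```

With no real effect, the observed gap sits comfortably inside the range of random shuffles — the statistical shape of "the real download numbers didn't budge."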

Why Didn't the Trick Work?

You might wonder, "But isn't social proof supposed to work?" (Think of how we buy the most popular ice cream flavor). The researchers explain that software developers are different from regular shoppers for a few reasons:

  1. The "Stakes" are Higher: If you buy bad ice cream, you just get a stomach ache. If a developer installs bad code, their entire website could crash, or their company could get hacked. Because the consequences are so severe, developers are much more careful. They don't just look at the "Star" count; they look at the code itself, the documentation, and the history of the project.
  2. They Know the Game: Developers know that "Stars" can be bought cheaply. It's like knowing that a restaurant can pay people to write fake 5-star reviews. Because they know the signal is "noisy" (unreliable), they ignore it.
  3. The "New Kid" Factor: The researchers tested this on new tools, where you'd expect social proof to matter most. Even then, the fake popularity didn't work.

The Takeaway: "The Absence of Evidence is Not Evidence of Absence"

The researchers conclude that faking a little bit of popularity doesn't work to trick developers.

However, they add a very important warning: Just because the trick didn't work in their small experiment doesn't mean it's impossible.

  • Imagine if the bad guys didn't just buy 50 stars, but 50,000 stars.
  • Imagine if they did this not just once, but kept it up for years.
  • Imagine if the "decision maker" wasn't a human developer, but an AI robot that just picks the most popular thing without thinking.

In simple terms:
The study is like testing if a single fake "Sold Out" sign on a store window makes people buy a product. It didn't work. But if the bad guys put up a million fake signs and flooded the street with actors, they might eventually succeed.

The Bottom Line:
For now, developers are smart enough to see through small lies. But as technology changes (like AI making decisions for us), the rules might change, and the "fake popularity" trick could become a much bigger danger. The researchers are saying, "Don't get complacent; keep watching the signs."