An Approach for Safe and Secure Software Protection Supported by Symbolic Execution

This paper presents a novel copy-protection method for industrial control software that binds programs to specific hardware using Physically Unclonable Functions (PUFs) and employs symbolic execution to guarantee safety property preservation and security against reverse engineering.

Daniel Dorfmeister, Flavio Ferrarotti, Bernhard Fischer, Evelyn Haslinger, Rudolf Ramler, Markus Zimmermann

Published Thu, 12 Ma

Imagine you own a very expensive, high-tech coffee machine. You want to sell the software that controls it, but you don't want people to steal that software and run it on a cheap, knock-off machine they built in their garage.

Usually, software is like a recipe: if you copy the recipe, you can bake the cake anywhere. But this paper proposes a new way to protect software so that it only works on the exact machine it was designed for. If you try to run it on a different machine, the software doesn't just crash or stop working; it keeps working, but it starts doing weird, unpredictable things that are still safe, just not useful to the thief.

Here is how they do it, broken down into simple concepts:

1. The "Digital Fingerprint" (The PUF)

Every physical object has tiny, microscopic imperfections. Just as a fingerprint is unique to a person, a specific piece of computer memory (like DRAM) has unique physical quirks caused by its manufacturing process.

The authors use something called a PUF (Physically Unclonable Function). Think of the PUF as a magical lock on the machine.

  • The Challenge: The software asks the machine a question (e.g., "What is the electrical resistance of this specific memory cell?").
  • The Response: Because of the microscopic imperfections, the machine gives a unique answer.
  • The Catch: You cannot clone this machine. Even if a thief builds an exact replica of the hardware, the microscopic imperfections will be slightly different, so the "answer" to the question will be wrong.
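The challenge-response idea above can be sketched in a few lines of Python. This is a toy simulation, not the paper's actual PUF: real PUFs measure analog hardware effects, while here a hash of made-up "device quirks" stands in for the physics, and all names are hypothetical.

```python
import hashlib

def puf_response(device_quirks: bytes, challenge: bytes) -> bytes:
    """Toy stand-in for a hardware PUF: derive a response from the
    device's unique physical quirks plus the challenge. A real PUF
    measures analog effects (e.g. DRAM cell behavior) instead."""
    return hashlib.sha256(device_quirks + challenge).digest()

# Two "devices" whose manufacturing quirks differ by a single bit.
genuine_device = b"\x01\x02\x03\x04"   # the licensed hardware
cloned_device  = b"\x01\x02\x03\x05"   # a near-perfect replica

challenge = b"read-cell-0x7f3a"
expected = puf_response(genuine_device, challenge)  # recorded at enrollment

print(puf_response(genuine_device, challenge) == expected)  # True
print(puf_response(cloned_device, challenge) == expected)   # False
```

Even a one-bit difference in the physical quirks yields a completely different response, which is why an "exact replica" still fails the challenge.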

2. The "Safe-But-Weird" Strategy

In the past, if software detected it was on the wrong machine, it might just shut down or crash. But in industrial settings (like controlling a robot arm or a traffic light), a crash can be dangerous.

This paper introduces a "Safe-by-Design" approach.

  • On the Real Machine: The software asks the PUF, gets the correct answer, and proceeds to the next step perfectly.
  • On a Fake Machine: The software asks the PUF, gets the wrong answer, and realizes, "Oh, I'm not on the right machine." Instead of crashing, it enters a "Safe Mode."
    • Analogy: Imagine a traffic light controller. On the real machine, it cycles the light Green, then Yellow, then Red, on schedule. On a fake machine, the software gets confused and starts turning the lights Green, then Red, then Green again, or maybe it flashes them randomly.
    • The Result: The traffic still flows (safely), but it's chaotic and doesn't follow the intended schedule. The thief can't use the machine for its intended purpose, but no one gets hurt.
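One way to picture the "safe-but-weird" trick: instead of checking the PUF response in an if-statement (which a thief could patch out), the response is woven into the state update itself. The sketch below is a hypothetical illustration, not the paper's implementation; the phase table, offsets, and response values are all made up.

```python
# Phases of a two-light intersection. Every entry is safe by
# construction: the two lights are never green at the same time.
SAFE_PHASES = [("G", "R"), ("Y", "R"), ("R", "G"), ("R", "Y")]

def next_phase(index: int, puf_response: int, enrolled_response: int) -> int:
    # The PUF response feeds into the arithmetic of the state update.
    # On genuine hardware the responses match, the offset is 0, and the
    # normal cycle runs. On a clone the offset is arbitrary, scrambling
    # the schedule -- but the result is always a valid index into
    # SAFE_PHASES, so the behavior stays safe.
    offset = (puf_response - enrolled_response) % len(SAFE_PHASES)
    return (index + 1 + offset) % len(SAFE_PHASES)

enrolled = 0x2A  # response recorded on the licensed machine

i = 0  # genuine hardware: the intended phase cycle
for _ in range(4):
    i = next_phase(i, 0x2A, enrolled)
    print("genuine:", SAFE_PHASES[i])

i = 0  # cloned hardware: a wrong, useless, but still-safe schedule
for _ in range(4):
    i = next_phase(i, 0x17, enrolled)
    print("clone:  ", SAFE_PHASES[i])
```

Because there is no explicit "am I on the right machine?" branch to find and remove, the thief cannot simply patch a comparison; the wrong hardware silently produces the wrong schedule.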

3. The "Crystal Ball" (Symbolic Execution)

How do the authors know that the "Safe Mode" won't accidentally cause an accident (like turning two traffic lights green at the same time)?

They use a technique called Symbolic Execution.

  • The Metaphor: Imagine you are a director rehearsing a play. Instead of using real actors, you use "ghost actors" who represent every possible situation at once. You run the script through every possible combination of events to see if anything bad could happen.
  • The Application: Before the software is even released, the authors run this "ghost rehearsal." They mathematically prove that no matter what wrong answer the fake machine gives, the software will never enter a state that causes a disaster. It guarantees that the "weird behavior" is always safe.
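To make the "ghost rehearsal" concrete: symbolic execution treats the PUF response as an unknown symbol and explores every feasible path at once. For a controller as small as a hypothetical four-phase traffic light, we can emulate that proof with an exhaustive check over every starting state and every response the update can distinguish. This is a stand-in sketch, not the authors' actual verification tooling.

```python
from itertools import product

# Every phase of a hypothetical two-light intersection.
SAFE_PHASES = [("G", "R"), ("Y", "R"), ("R", "G"), ("R", "Y")]

def is_safe(phase) -> bool:
    # The safety property: both lights must never be green at once.
    return not (phase[0] == "G" and phase[1] == "G")

def verified_safe() -> bool:
    # The state update only uses the response modulo 4, so checking all
    # four residues against all four starting states covers every path
    # a symbolic executor would explore for this toy controller.
    for start, offset in product(range(4), range(4)):
        index = (start + 1 + offset) % len(SAFE_PHASES)
        if not is_safe(SAFE_PHASES[index]):
            return False
    return True

print(verified_safe())  # True: no PUF response can force an unsafe phase
```

Real symbolic execution engines do this kind of path exploration on actual program code, without enumerating values one by one, which is what lets the authors make the guarantee before release.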

4. Why Thieves Can't Win

The paper argues that stealing this software is a nightmare for a hacker:

  • Static Analysis (Reading the code): Even if the hacker steals the code and tries to read it, they can't figure out the correct answers to the PUF questions because those answers depend on the physical hardware, which they don't have.
  • Dynamic Analysis (Running the code): If they try to run the code on a fake machine, it just behaves chaotically. To figure out the "right" path, they would have to reverse-engineer the entire logic of the machine from scratch.
  • The Cost: It would take so much time and money to crack the protection that it's cheaper to just build a new machine from scratch.

Summary

This paper presents a security system that binds software to a specific piece of hardware using its unique physical "fingerprint." If the software is stolen and run on the wrong machine, it doesn't break; it just acts strangely but safely. The authors use advanced mathematical "rehearsals" to prove that this strange behavior will never cause an accident, making industrial software theft both difficult and economically unviable.