RISCBench: Benchmarking RISC-V Orchestration Efficiency in FPGA and FPGA-Like Computing Engines

This paper introduces RISCBench, a benchmark suite and methodology that quantifies orchestration efficiency in heterogeneous RISC-V systems using a new Sustained Instantaneous Throughput (SIT) metric to address the limitations of conventional performance indicators in FPGA and accelerator-class platforms.

Dave Ojika, Projjal Gupta, Preethi Budi, Herman Lam, Shreya Mehrotra

Published 2026-03-10

Imagine you are running a massive, high-tech kitchen. You have incredible, super-fast ovens (the accelerators) that can cook thousands of dishes in a second. But, you also have a head chef (the control core) whose job isn't to cook, but to tell the ovens when to start, move ingredients around, and make sure the dishes don't collide.

For years, when people evaluated these kitchens, they only looked at the ovens. They asked, "How many dishes can this oven cook in one second?" They measured peak speed (metrics like FLOPS or TOPS).

The Problem:
The authors of this paper, Dave, Preethi, and their team, realized that measuring the oven's speed is useless if the chef is slow, confused, or running out of ingredients. Even if your oven can cook 1,000 pizzas a minute, if your chef takes 10 minutes to decide where to put the cheese, your actual output is terrible.

In the world of computers (specifically FPGAs and RISC-V chips), the "chef" is the control system. As computers get more complex, the "chef" often becomes the bottleneck, slowing everything down.

The Solution: RISCBench
The team created a new tool called RISCBench. Think of this as a new way to judge a kitchen. Instead of just timing how fast the oven can go, they time how fast the kitchen actually works over a whole dinner service.

They introduced a new metric called SIT (Sustained Instantaneous Throughput). Here is the best way to understand it:

  • The Old Way (Peak Speed): Imagine a race car driver hitting 200 mph for exactly 3 seconds on a straightaway. The old metrics say, "Wow, 200 mph! That's a fast car!"
  • The New Way (SIT): The SIT metric looks at the whole race. It sees that the driver hit 200 mph for 3 seconds, but then spent 2 minutes stuck in traffic, waiting for a pit crew, and getting lost. The SIT score reflects the average speed over the whole race, not just the 3 seconds of glory.
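The contrast between the two ways of measuring can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the paper's actual SIT formula: sample throughput at regular intervals across a whole run, then compare the single best sample (the old way) with the average over every sample, stalls included (the SIT-style view).

```python
def peak_throughput(samples):
    """Old way: report only the single fastest interval."""
    return max(samples)

def sustained_throughput(samples):
    """SIT-style view: average over the entire run, stalls included."""
    return sum(samples) / len(samples)

# Hypothetical per-second speeds for the race analogy: a 3-second
# burst at 200, then two minutes crawling behind slow orchestration.
samples = [200, 200, 200] + [15] * 120

print(peak_throughput(samples))                 # 200 -- looks great
print(round(sustained_throughput(samples), 1))  # 19.5 -- the real story
```

The gap between the two numbers is exactly the story the old metrics hide.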

What They Found:
They tested this on two types of computer setups:

  1. The Soft Core: Like a chef working in a temporary, makeshift kitchen (a standard FPGA).
  2. The Hard Core: Like a chef in a state-of-the-art, permanent kitchen with dedicated staff (an advanced accelerator chip).

They ran a test where the "chef" had to move ingredients (data) around constantly.

  • At the start: The kitchen ran perfectly. The ovens were blazing hot, and the output was huge.
  • A moment later: The "chef" got overwhelmed. The instructions to move ingredients got backed up. The ovens had to wait. The speed dropped.

The old metrics would have ignored this drop because they only looked at the first few seconds. RISCBench and SIT caught it. They showed that as accelerators and their control logic become more tightly integrated, the "chef" (orchestration) becomes the most important part of the system. If the chef isn't efficient, the super-fast ovens are wasted.
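The collapse described above can be modeled with a toy queue. This is my own illustration, not the paper's experiment: an accelerator can finish `capacity` items per tick, but only items the control core has already dispatched. Once the pre-filled queue drains, output is capped by the orchestrator's `dispatch_rate`, no matter how fast the accelerator is.

```python
def run(ticks, capacity=1000, dispatch_rate=50, prefill=3000):
    """Simulate per-tick output of an accelerator fed by a slow orchestrator."""
    queued = prefill              # work dispatched before the run starts
    history = []
    for _ in range(ticks):
        queued += dispatch_rate   # chef hands out new instructions
        done = min(capacity, queued)  # ovens cook only what's available
        queued -= done
        history.append(done)
    return history

out = run(10)
print(out[:5])   # [1000, 1000, 1000, 200, 50] -- fast start, then collapse
print(out[-1])   # 50 -- steady state is the dispatch rate, not the ovens
```

A benchmark that samples only the first three ticks would report 1000 items per tick; a sustained measure reports something close to 50.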

Why It Matters:
This is a big deal for AI and future computing. As we build smarter chips for things like self-driving cars or AI assistants, we can't just make the "engines" faster. We have to make sure the "control system" is just as good at managing the chaos.

The Takeaway:
The authors have made their tool open-source (free for everyone to use). They are inviting other engineers and researchers to stop just measuring the "top speed" of computers and start measuring how well they perform over time. It's a shift from asking "How fast can it go?" to "How fast does it go when things get real?"