Derivatives on Graphs for the Positive Calculus of Relations with Transitive Closure

This paper establishes that the equational theory of the positive calculus of relations with transitive closure (PCoR*) is EXPSPACE-complete by introducing graph derivatives that extend word-based derivatives to construct finite automata on path decompositions, leveraging a linearly bounded pathwidth model property to decide equivalence.

Yoshiki Nakamura

Published 2026-03-12

Imagine you are trying to solve a massive, complex puzzle. The pieces are not just shapes, but entire maps of relationships between things. Some pieces tell you "A is related to B," others say "A is related to B and C," and some say "You can get from A to B by going through any number of intermediate stops."

This is the world of PCoR* (Positive Calculus of Relations with Transitive Closure). It's a mathematical language used by computer scientists to describe how things connect, move, and interact. The big question this paper answers is: "Can we always tell if two different descriptions of these maps actually mean the exact same thing?"

Here is the story of how the author, Yoshiki Nakamura, solved this puzzle.

1. The Problem: The "Map" Mystery

Think of a Relation as a subway map.

  • Identity (1): Staying where you are — every station is connected to itself.
  • Composition (;): Taking the red line, then the blue line.
  • Union (+): You can take the red line OR the blue line.
  • Intersection (∩): You can only go where the red line AND the blue line overlap.
  • Transitive Closure (*): You can take the red line as many times as you want, looping around the city.
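To make these operators concrete, here is a minimal sketch (the representation and names are mine, not the paper's) modeling a relation as a Python set of pairs. Union and intersection are just set operations; note that the Kleene star of the calculus is the *reflexive*-transitive closure, i.e. `transitive_closure` together with the identity.

```python
def identity(universe):
    """The identity relation 1: every element related to itself."""
    return {(x, x) for x in universe}

def compose(r, s):
    """Composition r;s: a step of r followed by a step of s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(r):
    """One-or-more iterations of r: grow r ∪ r;r ∪ r;r;r ... to a fixpoint."""
    result = set(r)
    while True:
        step = result | compose(result, r)
        if step == result:
            return result
        result = step

# Red line: 1 → 2 → 3.  Its transitive closure adds the "express" hop 1 → 3.
red = {(1, 2), (2, 3)}
```

Union (`red | blue`) and intersection (`red & blue`) need no extra code, which mirrors why the "positive" fragment is comparatively tame: every operator is a simple monotone set construction.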

For decades, mathematicians knew that if you added a "Complement" operator (meaning "everything except this route"), the puzzle became impossible to solve (undecidable). But if you stick to just the "Positive" operators (the ones listed above), the question remained: Is there a fast enough way to check if two complex subway maps are identical?

The answer turned out to be Yes, but it's a very hard "Yes." The problem is EXPSPACE-complete: it sits among the hardest problems solvable with an exponential amount of memory.

  • Analogy: Imagine a scratchpad that, in the worst case, doubles in size every time you add one more symbol to the two expressions being compared. The comparison is always solvable in principle, but the working memory an algorithm may need grows exponentially with the size of the input.

2. The Solution: "Derivatives on Graphs"

To solve this, Nakamura invented a new tool called Derivatives on Graphs.

The Old Way (Words):
In the 1960s, mathematicians figured out how to handle simple "words" (like strings of letters: "cat", "dog"). They used a trick called Brzozowski's Derivatives.

  • Analogy: Imagine you have a sentence: "The quick brown fox."
  • If you ask, "What comes after 'The'?" the derivative gives you "quick brown fox."
  • If you ask, "What comes after 'The quick'?" it gives you "brown fox."
  • By peeling off one letter at a time, you can build a machine (an automaton) that checks if a sentence is valid.
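The word-level "peeling" fits in a few lines. Below is a minimal sketch (the tuple encoding and names are mine) of Brzozowski derivatives for regular expressions: `deriv(r, c)` is "what remains of `r` after reading `c`", and matching a word is just one derivative per letter.

```python
EMPTY = ('empty',)  # the empty language
EPS = ('eps',)      # the language containing only the empty word

def nullable(r):
    """Does r accept the empty word?"""
    tag = r[0]
    if tag == 'eps' or tag == 'star':
        return True
    if tag == 'alt':
        return nullable(r[1]) or nullable(r[2])
    if tag == 'cat':
        return nullable(r[1]) and nullable(r[2])
    return False  # 'empty' and 'sym'

def deriv(r, c):
    """The language of words w such that c·w is accepted by r."""
    tag = r[0]
    if tag in ('empty', 'eps'):
        return EMPTY
    if tag == 'sym':
        return EPS if r[1] == c else EMPTY
    if tag == 'alt':
        return ('alt', deriv(r[1], c), deriv(r[2], c))
    if tag == 'cat':
        left = ('cat', deriv(r[1], c), r[2])
        # if the first part can be skipped, c may also start the second part
        return ('alt', left, deriv(r[2], c)) if nullable(r[1]) else left
    if tag == 'star':
        return ('cat', deriv(r[1], c), r)

def matches(r, word):
    """Peel off one letter at a time, then ask: can what's left be empty?"""
    for c in word:
        r = deriv(r, c)
    return nullable(r)
```

The states of Brzozowski's automaton are exactly the derivatives reachable from the starting expression; the paper's contribution is finding the right analogue of `deriv` when the input is a graph rather than a word.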

The New Way (Graphs):
Nakamura realized that relationships aren't just lines of text; they are networks (graphs). You can't just peel off a "letter" from a map; you have to peel off a path or a section of the map.

He extended the "peeling" idea to 2D maps.

  • Analogy: Imagine you have a giant, tangled ball of yarn representing a complex relationship. Instead of trying to look at the whole ball at once, you cut off a small, manageable knot (a "bag" of the map). You ask: "If I remove this knot, what does the rest of the map look like?"
  • He calls this a Derivative. It transforms a complex map into a simpler map, step-by-step.

3. The Magic Trick: "Gluing" and "Un-gluing"

The genius of the paper lies in how he handles the complexity.

The Pathwidth Property:
Nakamura proved that any map generated by these rules has a hidden structure: it can be broken down into a sequence of small, overlapping chunks, like the cars of a train, each coupled to the next. The size of the largest chunk (minus one) is called the Pathwidth, and here it is linearly bounded by the size of the expression.

  • Analogy: Even a huge, messy city can be viewed as a series of neighborhoods. You don't need to see the whole city to understand how to get from point A to point B; you just need to know the layout of the specific neighborhoods you pass through.
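The "train of cars" picture has a precise definition. Here is a small sketch (mine, not the paper's formalization) that checks the three defining conditions of a path decomposition of a graph and computes its width:

```python
def is_path_decomposition(edges, bags):
    """Check that a sequence of bags (vertex sets) is a path decomposition:
    1. every vertex appears in some bag,
    2. every edge has both endpoints together in some bag,
    3. the bags containing any given vertex are consecutive."""
    vertices = {v for e in edges for v in e}
    if not all(any(v in bag for bag in bags) for v in vertices):
        return False
    if not all(any(a in bag and b in bag for bag in bags) for (a, b) in edges):
        return False
    for v in vertices:
        idx = [i for i, bag in enumerate(bags) if v in bag]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

def width(bags):
    """Pathwidth of this decomposition: largest bag size minus one."""
    return max(len(bag) for bag in bags) - 1
```

A path 1–2–3–4 decomposes into the bags {1,2}, {2,3}, {3,4}, giving width 1; condition 3 is what forces the "neighborhoods" to line up in a single left-to-right order, which is exactly what a finite automaton can scan.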

The Decomposition Theorem:
This is the core of the paper. Nakamura showed that you don't need to analyze the whole giant map at once.

  1. Break it down: Split the giant map into a sequence of small, overlapping neighborhoods.
  2. Analyze locally: Use his "Derivative" tool to check the rules in each small neighborhood.
  3. Glue it back together: Because the neighborhoods overlap, you can mathematically "glue" the results together to see if the whole map works.
  • Analogy: Imagine checking if a long chain of paperclips is unbroken. Instead of pulling the whole chain, you check the connection between clip 1 and 2, then 2 and 3, then 3 and 4. If every local connection is solid, the whole chain is solid.

4. The Result: A Finite Machine

By using these derivatives and the "gluing" trick, Nakamura built a Finite Automaton (a simple state machine) that can read these complex maps.

  • Because the maps have a limited "width" (pathwidth), the machine doesn't need infinite memory. It just needs enough memory to hold the current "neighborhood" it's looking at.
  • This proves that we can always decide if two maps are equal, but the amount of memory needed grows very fast (exponentially) as the maps get more complex.
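The bounded-memory idea can be illustrated on a much simpler problem than the paper's. The sketch below is my own illustration, not Nakamura's construction: it decides plain reachability over a graph presented as a sequence of bags, and its state is only a reachability relation over the current bag (plus the two endpoints), never the whole graph — which is why the memory needed depends on the width, not the total size.

```python
def reachable_via_bags(bags, source, target):
    """bags: list of (vertex_set, edge_set) scanned left to right.
    State: a relation 'u reaches v' over the current bag's vertices,
    with source and target kept live throughout for simplicity."""
    rel = set()
    for i, (verts, edges) in enumerate(bags):
        verts = set(verts) | {source, target}
        rel |= {(v, v) for v in verts}
        rel |= set(edges)
        # transitively close within the current bag (Floyd-Warshall style)
        for k in verts:
            for a in verts:
                for b in verts:
                    if (a, k) in rel and (k, b) in rel:
                        rel.add((a, b))
        # "glue": project the relation onto the overlap with the next bag
        nxt = (set(bags[i + 1][0]) if i + 1 < len(bags) else set())
        nxt |= {source, target}
        rel = {(a, b) for (a, b) in rel if a in nxt and b in nxt}
    return (source, target) in rel
```

Keeping a full *relation* on the overlap, rather than just a set of reached vertices, is what makes the left-to-right gluing sound: a path may leave a neighborhood and come back, and the relation remembers every way of crossing it.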

5. Why This Matters

This isn't just about abstract math. This logic is the backbone of:

  • Database Queries: Checking if two ways of asking a database for data will return the same results.
  • Program Verification: Proving that a piece of software behaves exactly as intended, even with loops and complex conditions.
  • Hybrid Logic: An extension of modal logic with names for individual states (nominals), used to reason about systems that evolve over time (like traffic control or robot navigation).

Nakamura also showed that even if you add "Tests" (like "Is the light red?") or "Nominals" (like "Go to the specific station named 'Central'"), the problem remains solvable, though still very memory-intensive.

Summary

Yoshiki Nakamura took a problem that looked like trying to untangle a giant, infinite knot of string. He realized the knot had a hidden, linear structure. He invented a way to "cut" the knot into small, manageable pieces, analyze each piece, and then mathematically prove that if the pieces fit, the whole knot is solved.

He proved that while the puzzle is incredibly difficult (requiring a supercomputer's memory), it is solvable. We have a method to check if two complex relationship maps are truly the same.