Positionality in Σ⁰₂ and a completeness result

This paper establishes that prefix-independent positional objectives in Σ⁰₂ with a neutral letter are exactly those recognized by history-deterministic monotone co-Büchi automata over countable ordinals. This characterization generalizes prior criteria, provides a new proof of closure under union, establishes the positionality of (strict) mean-payoff games over arbitrary graphs, and yields a completeness property for objectives that are positional over finite graphs.

Pierre Ohlmann, Michał Skrzypczak

Published 2026-03-13

Imagine you are playing a very long, perhaps even infinite, board game against an opponent. The game takes place on a map (which could be a small town or a sprawling, infinite universe). Every time you move a piece, you pick up a "token" with a number or a letter on it. The goal is to create a specific pattern with the sequence of tokens you collect over time.

In this paper, the authors are studying a special kind of strategy called a "positional strategy."

The Core Concept: The "Amnesiac" Player

Usually, to win a complex game, you need a memory. You might think, "I went left twice and then right once, so now I should go up." This is a strategy based on history.

A positional strategy is like playing with amnesia. You only look at where you are right now. You don't care how you got there.

  • The Question: For which types of games can you win without remembering the past? If you have a winning strategy, does a "forgetful" (positional) strategy also exist?
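To make the amnesia idea concrete, here is a minimal Python sketch (all position and move names are made up for illustration, not taken from the paper): a positional strategy is nothing more than a lookup table from positions to moves, while a history-based strategy consults the past.

```python
# A positional strategy is just a table from positions to moves.
positional = {
    "town_square": "go_north",
    "bridge": "go_east",
    "harbor": "go_north",
}

def play_positional(position, history):
    """Choose a move using only the current position; the history
    argument is available but deliberately ignored."""
    return positional[position]

def play_with_memory(position, history):
    """A history-dependent strategy, for contrast: it changes its
    mind based on what happened earlier in the play."""
    if history.count("go_east") >= 2:
        return "go_south"
    return "go_north"

# Same position, different histories: the positional player
# always answers the same way.
print(play_positional("bridge", []))                      # go_east
print(play_positional("bridge", ["go_east", "go_east"]))  # go_east
```

The paper's question is exactly when the second kind of strategy can always be replaced by the first without losing the game.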

The authors focus on a specific class of games called Σ⁰₂ objectives. In plain English, these are games whose winning condition, while complex, can be phrased as: "You win if, after some finite point, something bad stops happening forever."

The Big Discovery: The "Magic Filter"

The authors found a way to describe exactly which of these "forgetful-friendly" games exist. They proved that a game allows for a positional strategy if and only if it can be recognized by a specific type of machine called a "History-Deterministic Monotone Co-Büchi Automaton."

Let's break down that scary name with an analogy:

  1. The Machine (Automaton): Imagine a robot that watches your game. It has a set of rooms (states) it can be in.
  2. Monotone: The robot's rooms are arranged on a staircase. You can only move down the stairs or stay on the same step. You can never go back up. This means the robot is "simplifying" the game as it goes along.
  3. History-Deterministic: This is the magic trick. Usually, a robot might need to guess which path to take based on the past. But this specific robot is smart enough to make the right choice just by looking at the current step and the current move, without needing to guess. It's like a GPS that never gets lost, even if you take a wrong turn, because it instantly recalculates the best path forward.
  4. Co-Büchi: This is the rule for winning. The robot wins if it stops visiting a specific "bad" room after a while.

The Takeaway: If your game's rules can be translated into this "staircase robot" that never needs to guess, then you can win the game by just looking at your current location. You don't need a memory.
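As a rough illustration of the staircase idea, here is a toy (fully deterministic) automaton in Python for the condition "eventually you read only the letter a". The paper's automata are history-deterministic and their states range over countable ordinals; this sketch only shows the two ingredients named above: monotonicity (the state can move down the staircase but never back up) and the co-Büchi rule (the transitions marked "bad" must eventually stop firing).

```python
# delta[state][letter] = (next_state, is_bad_transition)
# States 2 > 1 > 0 are steps on a staircase; reading 'b' is "bad".
delta = {
    2: {"a": (2, False), "b": (1, True)},
    1: {"a": (1, False), "b": (0, True)},
    0: {"a": (0, False), "b": (0, True)},
}

def run(word, start=2):
    """Run the automaton on a finite prefix; return the final state
    and the positions where a 'bad' transition fired."""
    state, bad_positions = start, []
    for i, letter in enumerate(word):
        nxt, bad = delta[state][letter]
        assert nxt <= state  # monotone: we never climb back up
        if bad:
            bad_positions.append(i)
        state = nxt
    return state, bad_positions

print(run("abba" + "a" * 6))  # (0, [1, 2])
```

On an infinite word, the co-Büchi condition says the list of bad positions stays finite, i.e. after some point the run only takes "good" transitions.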

The "Neutral Letter" Secret Sauce

The paper mentions a "neutral letter." Think of this as a "Do Nothing" button.

  • In a game of chess, a neutral move might be passing your turn (if allowed).
  • In a game of collecting numbers, it might be adding a zero.

The authors found that if your game has this "Do Nothing" button that doesn't change the outcome, their "staircase robot" rule works perfectly. It's a safety net that makes the math work out.
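A hedged sketch of why a "Do Nothing" button is harmless in the average-cost setting: inserting zero-cost steps can change the numeric average, but not whether it sits strictly below zero, which is all the winning condition cares about. (The finite-prefix average below is a stand-in for the true limit over an infinite play.)

```python
def long_run_average(weights):
    """Finite-prefix stand-in for the limit of running averages."""
    return sum(weights) / len(weights)

base = [4, -6] * 100  # average -1.0: a winning play
padded = []
for w in base:
    padded += [w, 0]  # insert a "do nothing" (cost 0) step after each move

print(long_run_average(base))    # -1.0
print(long_run_average(padded))  # -0.5: the value shifts...
print(long_run_average(base) < 0, long_run_average(padded) < 0)
# ...but the outcome (strictly below zero) is the same: True True
```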

Real-World Applications: The Mean-Payoff Game

One of the most famous games in this field is the Mean-Payoff Game. Imagine you are a delivery driver. Every road you take has a cost (or a reward).

  • Goal: You want your average cost per mile to be negative (meaning you make a profit on average).
  • The Problem: It was known that on finite maps (small towns), you can win with a simple, forgetful strategy. But on infinite maps (the whole world), people thought you needed a complex memory to win.

The Paper's Breakthrough:
The authors proved that even on infinite maps, if you tweak the rules slightly (specifically, if you want your average to be strictly less than zero, not just less than or equal to), you can win with a simple, forgetful strategy! They showed that this complex game is actually just a fancy version of a "bounded" game, which we already know is simple to play.
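A minimal sketch of the quantity at stake (illustrative only; a real play is infinite, so we inspect a long finite prefix): the mean-payoff value is the limit of the running averages of the edge costs, and the positionality result here concerns the strict condition "this limit is below zero".

```python
def running_averages(weights):
    """Yield the average cost per step after each move of the play."""
    total = 0.0
    for i, w in enumerate(weights, start=1):
        total += w
        yield total / i

# A driver alternating a +5 road and a -7 road: the averages settle at -1.
play = [5, -7] * 500
averages = list(running_averages(play))
print(averages[-1])      # -1.0
print(averages[-1] < 0)  # True: wins the strict objective
```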

The "Completeness" Result: The Universal Translator

Finally, the paper offers a "Completeness Result." This is like a universal translator for game rules.

Imagine you have a game rule that is easy to win on small maps (finite arenas) but impossible to win on big maps without a memory. The authors say:

"Don't worry! We can translate your rule into a new rule that is equivalent on small maps, but is simple enough to be won with a forgetful strategy even on infinite maps."

It's like taking a complicated recipe that only works in a small kitchen and rewriting it so it works in a giant industrial kitchen, without changing the taste of the dish for the small kitchen.

Summary

  1. The Problem: When can you win an infinite game without remembering the past?
  2. The Answer: When the game's rules can be modeled by a "staircase robot" that simplifies the game as it goes and never needs to guess.
  3. The Proof: They proved this for a wide class of games (Σ⁰₂) and showed that if a game has a "neutral" move, this rule holds.
  4. The Result: They solved a long-standing mystery about "Mean-Payoff" games, proving they are simpler than we thought, and provided a method to turn "hard" finite-game rules into "easy" infinite-game rules.

In short: If your game has a certain mathematical structure (like a staircase), you don't need a memory to win. And if it doesn't, we can often rewrite the rules so that you do.