Tight Convergence Rates for Online Distributed Linear Estimation with Adversarial Measurements

This paper establishes tight non-asymptotic convergence rates for a two-timescale ℓ₁-minimization algorithm in asynchronous distributed linear estimation, providing a unified finite-time characterization of robustness against adversarial measurements and identifying conditions for exact or projected recovery of the mean vector.

Nibedita Roy, Vishal Halder, Gugan Thoppe, Alexandre Reiffers-Masson, Mihir Dhanakshirur, Naman, Alexandre Azor

Published 2026-04-09

Imagine you are the captain of a ship trying to navigate through a foggy ocean. You can't see the stars directly, so you rely on a team of 100 lookouts (the "workers") scattered around the deck. Each lookout has a telescope pointed at a specific angle, and they shout out a single number to you, the captain (the "server"), representing what they see.

Your goal is to figure out the ship's true position (the "mean" or average location) based on these shouted numbers.

The Problem: Liars and Chaos

In a perfect world, every lookout is honest, and they all shout their numbers at the exact same time. But in the real world, two things go wrong:

  1. The Liars (Adversaries): A few lookouts are actually saboteurs. They might be paid by the enemy to shout completely random, crazy numbers to confuse you.
  2. The Chaos (Asynchrony): The lookouts don't shout in unison. Sometimes one shouts, then you wait, then another shouts. You never get a full report from everyone at once; you get a trickle of information over time.

Most existing navigation systems (algorithms) fail here. If you try to average the numbers, the liars drag your estimate off course. If you try to wait for everyone to shout before making a move, the ship drifts too far while you wait.
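A tiny numeric sketch makes the "liars drag your estimate" point concrete. The numbers here are made up for illustration: 99 honest reports near the true value and one adversarial outlier. A plain average is wrecked; an outlier-resistant estimate like the median is not.

```python
# Illustrative only: why plain averaging fails with even one adversary.
# 99 honest lookouts report values near the true position (10.0);
# one saboteur shouts an absurd number.
honest = [10.0] * 99
reports = honest + [100000.0]  # one adversarial report

mean_est = sum(reports) / len(reports)           # dragged far off course
median_est = sorted(reports)[len(reports) // 2]  # ignores the outlier

print(mean_est)    # 1009.9 -- useless
print(median_est)  # 10.0 -- still correct
```

One liar out of a hundred moved the average by two orders of magnitude, which is exactly why robust estimators are needed before averaging anything.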

The Solution: A Two-Step Dance

The authors of this paper propose a clever new way to navigate, which they call a "Two-Timescale Algorithm." Think of it as a dance with two different speeds:

Step 1: The Fast Learner (The "Memory" Update)
Every time a lookout shouts a number, you immediately update your internal "memory" of what that specific lookout usually says.

  • If Lookout #5 usually shouts "10" but today shouts "1000," your memory slowly adjusts to realize, "Okay, maybe today was a fluke, or maybe Lookout #5 is lying."
  • This happens fast and continuously. You are constantly refining your understanding of the noise.

Step 2: The Slow Navigator (The "Position" Update)
Once you have a slightly better idea of what the noise looks like, you take a slow, careful step to adjust your ship's position.

  • You don't just trust the latest shout. You look at the difference between what you expected to hear (based on your memory) and what you actually heard.
  • If the difference is huge, you know something is wrong. You use a special mathematical trick (called ℓ₁-minimization, which is like a "lie detector" for data) to ignore the outliers and focus on the consistent pattern.
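The two-step dance can be sketched in a few lines of code. To be clear, this is a hedged toy model, not the paper's actual algorithm: the step sizes, the adversary model, and the use of a median as a stand-in for the ℓ₁-minimization step are all simplifying assumptions made for illustration. What it does preserve is the two-timescale structure: a fast per-worker "memory" update on every shout, and a slow robust update of the position estimate.

```python
import random

# Hedged sketch (NOT the paper's exact algorithm): a two-timescale loop.
# Fast timescale: each worker i keeps a running "memory" m[i] of its reports.
# Slow timescale: the server nudges its position estimate x toward an
# l1-style robust target, approximated here by the median of the memories.

random.seed(0)
n_workers = 100
true_position = 10.0
adversaries = set(range(5))          # workers 0-4 are saboteurs

m = [0.0] * n_workers                # per-worker memories
x = 0.0                              # server's position estimate

beta, alpha = 0.5, 0.05              # fast vs slow step sizes (beta >> alpha)

for t in range(5000):
    i = random.randrange(n_workers)  # asynchrony: one worker shouts at a time
    report = 1e6 if i in adversaries else true_position + random.gauss(0, 0.1)
    m[i] += beta * (report - m[i])   # fast: refine memory of worker i

    robust_target = sorted(m)[n_workers // 2]  # l1-flavored (median) step
    x += alpha * (robust_target - x)           # slow: adjust the position

print(round(x, 2))  # lands close to 10.0 despite the liars
```

Even though five workers shout 1,000,000 every time, their corrupted memories get sorted to the top and never touch the median, so the slow estimate drifts to the honest consensus.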

Why This is a Big Deal

The paper proves two amazing things about this dance:

  1. It's Fast (Convergence Rates): They didn't just say "it works eventually." They calculated exactly how fast it works. They proved that even with liars and chaos, your ship will reach the correct destination at the fastest possible speed allowed by math. It's like saying, "Not only will you find the island, but you'll get there in exactly 3 days and 4 hours, no matter how bad the weather is."
  2. It Works in the Chaos (Asynchrony): Most other methods require everyone to shout at the same time (synchronous). This method works perfectly even if the lookouts are shouting one by one, randomly, and at different speeds.

The "Magic" Condition (The Net)

There is one catch. For this to work, the way the lookouts are positioned (the "sensing matrix") must be good.

  • The Analogy: Imagine the liars are trying to pull the ship in a specific direction. For you to resist them, the honest lookouts must be positioned in such a way that their combined "pull" is stronger than the liars' pull.
  • The paper proves that as long as the honest lookouts form a "stronger net" than the liars can tear, you will find the truth.

What If the Net is Weak? (Partial Recovery)

What if the liars are so strong that you can't find the exact position? The paper has a backup plan.

  • The Metaphor: Imagine you can't find the ship's exact GPS coordinates, but you can perfectly determine its latitude (North/South), even if the longitude (East/West) is still foggy.
  • The authors show that even if you can't recover the whole mean vector, you can still exactly recover its projection onto the directions the honest measurements actually pin down. This is huge for real-world applications like network monitoring, where knowing some things perfectly is better than knowing everything vaguely.
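Here is a minimal sketch of that latitude/longitude idea, again under made-up assumptions rather than the paper's setting: every telescope measures only the latitude coordinate, so longitude is invisible to the whole fleet. A least-squares fit still recovers the sensed component exactly, which is the spirit of projected recovery.

```python
import random

# Hedged sketch: "projected recovery" when the full position is unidentifiable.
# Every telescope's gain applies only to latitude (first coordinate);
# longitude never enters any measurement, so no method can recover it.

random.seed(0)
true_lat, true_lon = 3.0, 7.0

a = [random.gauss(0, 1) for _ in range(200)]               # sensing gains
y = [aj * true_lat + random.gauss(0, 0.01) for aj in a]    # longitude absent

# Least-squares estimate of the sensed component only:
lat_hat = sum(aj * yj for aj, yj in zip(a, y)) / sum(aj * aj for aj in a)
lon_hat = 0.0  # no telescope sees longitude; it stays unknown

print(round(lat_hat, 2))  # close to the true latitude, 3.0
```

The estimate of the sensed coordinate is essentially exact, while the unsensed one is simply not in the data; the paper's contribution is characterizing when, under adversaries, this kind of partial recovery is still guaranteed.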

Real-World Impact

This isn't just about ships. This applies to:

  • Smartphone Networks: Figuring out where a virus is spreading even if some phones are sending fake data.
  • Power Grids: Detecting a cyberattack on the grid even if some sensors are hacked.
  • Federated Learning: Training AI on millions of phones without letting a few bad phones ruin the model.

In short: The authors built a navigation system that is mathematically proven to be the fastest and most robust way to find the truth in a world full of liars and confusion. They didn't just say "it works"; they gave you the exact speedometer and a backup plan for when the storm gets too wild.
