Repurposing Backdoors for Good: Ephemeral Intrinsic Proofs for Verifiable Aggregation in Cross-silo Federated Learning
This paper proposes a lightweight framework of backdoor-based "intrinsic proofs" for verifiable aggregation in cross-silo federated learning. Each client embeds an ephemeral verification signal directly into its model parameters, allowing it to check that a potentially malicious server aggregated its update honestly. The scheme achieves high detection rates against misbehaving servers, runs over 1000x faster than traditional cryptographic verification methods, and preserves both client anonymity and final model utility.
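To make the core idea concrete, below is a minimal, hypothetical sketch of an intrinsic-proof check (not the paper's actual construction): each client perturbs its update with a secret pseudorandom signature, and after FedAvg it correlates the returned aggregate with that signature. If the server honestly averaged all `n` updates, the signature survives at roughly `1/n` strength; if the server silently dropped the client, the correlation collapses toward zero. The dimension, amplitude, and threshold values here are illustrative assumptions.

```python
import random

DIM = 1000      # toy model dimension (assumed for illustration)
SCALE = 1e-3    # signature amplitude, small enough to preserve utility

def make_signature(seed, dim=DIM):
    """Client-specific pseudorandom signal acting as an ephemeral 'backdoor'."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def sign_update(update, sig, scale=SCALE):
    """Embed the ephemeral signature into the client's model update."""
    return [w + scale * s for w, s in zip(update, sig)]

def fedavg(updates):
    """Server-side FedAvg: coordinate-wise mean of client updates."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

def verify_inclusion(aggregate, sig, n_clients, scale=SCALE):
    """Check that this client's signature survived aggregation at ~1/n strength."""
    corr = sum(a * s for a, s in zip(aggregate, sig))
    norm = sum(s * s for s in sig)
    score = corr / (scale * norm)   # ~1/n if included, ~0 if dropped
    return score > 1.0 / (2 * n_clients)

# Toy run: three clients with zero base updates so the statistics are clean.
n = 3
sigs = [make_signature(seed) for seed in (1, 2, 3)]
updates = [sign_update([0.0] * DIM, s) for s in sigs]

honest = fedavg(updates)          # server aggregates all three clients
cheating = fedavg(updates[1:])    # server silently drops client 0

print(verify_inclusion(honest, sigs[0], n))    # True: client 0 was included
print(verify_inclusion(cheating, sigs[0], n))  # False: exclusion detected
```

The check is purely statistical and needs no cryptography, which is where the claimed speedup over cryptographic verification would come from; a real scheme would also have to make the signatures removable (ephemeral) so they do not degrade the final model.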