Trust via Reputation of Conviction
This paper proposes a mathematical framework for trust grounded in "conviction": the likelihood that a source's stance will be vindicated by independent consensus. It argues that this regime-independent metric, rather than correctness or faithfulness, provides a robust foundation for evaluating sources, particularly AI agents, through continuous verification and accrued reputation.
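The paper's formalism is not reproduced here; as a minimal illustrative sketch (an assumption, not the paper's actual model), one could track a source's conviction-based reputation as a Beta posterior over its probability of being vindicated by independent consensus, updated after each verified outcome:

```python
from dataclasses import dataclass


@dataclass
class ConvictionReputation:
    """Toy model (hypothetical, not the paper's framework): a source's
    reputation is a Beta posterior over its vindication probability."""
    vindicated: float = 1.0  # Beta prior pseudo-count (alpha)
    refuted: float = 1.0     # Beta prior pseudo-count (beta)

    def record(self, was_vindicated: bool) -> None:
        # Continuous verification: each consensus outcome updates the posterior.
        if was_vindicated:
            self.vindicated += 1.0
        else:
            self.refuted += 1.0

    @property
    def conviction(self) -> float:
        # Posterior mean: expected probability the source's next stance
        # is vindicated by independent consensus.
        return self.vindicated / (self.vindicated + self.refuted)


rep = ConvictionReputation()
for outcome in [True, True, False, True]:
    rep.record(outcome)
print(round(rep.conviction, 2))  # → 0.67
```

Under this sketch, reputation accrues gradually and no single correct or incorrect claim dominates; a uniform prior reflects initial uncertainty about an unevaluated source.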