Risk-Adjusted Harm Scoring for Automated Red Teaming for LLMs in Financial Services

This paper introduces a risk-aware evaluation framework for Large Language Models in financial services. The framework combines a domain-specific harm taxonomy, an automated multi-round red-teaming pipeline, and a Risk-Adjusted Harm Score (RAHS) metric designed to capture and quantify the severe, operationally actionable security failures that traditional domain-agnostic benchmarks miss.

Fabrizio Dimino, Bhaskarjit Sarmah, Stefano Pasquali
Thu, 12 Ma… (q-fin)