To Predict or Not to Predict? Towards reliable uncertainty estimation in the presence of noise
This study evaluates uncertainty estimation methods for multilingual text classification under noisy conditions, finding that while softmax-based approaches struggle in low-resource and domain-shift scenarios, Monte Carlo dropout remains well calibrated and significantly improves performance by enabling abstention on uncertain predictions.
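The abstention mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): dropout is kept active at inference, the stochastic forward pass is repeated several times, class probabilities are averaged, and the prediction is withheld when the predictive entropy exceeds a threshold. The `toy_forward` model, the weights, and the threshold value are all assumptions for demonstration.

```python
import math
import random

def mc_dropout_predict(forward, x, n_samples=50, p_drop=0.1):
    """Average class probabilities over n_samples stochastic forward
    passes (Monte Carlo dropout: dropout stays active at test time)."""
    samples = [forward(x, p_drop) for _ in range(n_samples)]
    n_classes = len(samples[0])
    return [sum(s[c] for s in samples) / n_samples for c in range(n_classes)]

def predictive_entropy(probs):
    """Entropy of the averaged predictive distribution (higher = more uncertain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def classify_or_abstain(forward, x, threshold=0.5, **kw):
    """Return the argmax class, or None (abstain) if entropy exceeds threshold."""
    probs = mc_dropout_predict(forward, x, **kw)
    if predictive_entropy(probs) > threshold:
        return None  # abstain: prediction too uncertain
    return max(range(len(probs)), key=probs.__getitem__)

# Toy stochastic "model": dropout randomly zeroes each logit, a hypothetical
# stand-in for a real classifier with dropout layers left active at test time.
def toy_forward(x, p_drop):
    logits = [x * w * (0.0 if random.random() < p_drop else 1.0 / (1 - p_drop))
              for w in (2.0, -1.0, 0.5)]
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [v / s for v in z]
```

In a real system the entropy threshold would be tuned on held-out data to trade coverage against accuracy on the retained predictions.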