Many AI Analysts, One Dataset: Navigating the Agentic Data Science Multiverse

This paper demonstrates that fully autonomous AI analysts can cheaply replicate the analytic diversity and conflicting conclusions observed in human many-analyst studies, revealing that empirical results are highly sensitive to analytic choices and prompting a new transparency norm requiring multiverse-style reporting and full prompt disclosure for AI-generated science.

Martin Bertran, Riccardo Fogliato, Zhiwei Steven Wu2026-03-12🤖 cs.AI

PatchDenoiser: Parameter-efficient multi-scale patch learning and fusion denoiser for Low-dose CT imaging

PatchDenoiser is a lightweight, parameter-efficient multi-scale patch-based framework that effectively denoises low-dose CT images by balancing noise suppression with anatomical detail preservation, outperforming state-of-the-art CNN and GAN methods while significantly reducing computational costs and energy consumption.

Jitindra Fartiyal, Pedro Freire, Sergei K. Turitsyn, Sergei G. Solovski2026-03-12🤖 cs.AI

Adversarial Hubness Detector: Detecting Hubness Poisoning in Retrieval-Augmented Generation Systems

This paper introduces Hubscan, an open-source security scanner that utilizes a multi-detector architecture to identify and mitigate hubness poisoning attacks in Retrieval-Augmented Generation (RAG) systems, achieving high recall rates in detecting adversarial hubs across various vector databases and real-world benchmarks.

Idan Habler, Vineeth Sai Narajala, Stav Koren, Amy Chang, Tiffany Saade2026-03-12🤖 cs.AI

AMLRIS: Alignment-aware Masked Learning for Referring Image Segmentation

This paper proposes Alignment-Aware Masked Learning (AML), a training strategy that improves Referring Image Segmentation by quantifying pixel-level vision-language alignment to mask unreliable regions during optimization, thereby achieving state-of-the-art performance without architectural changes or inference overhead.

Tongfei Chen, Shuo Yang, Yuguang Yang, Linlin Yang, Runtang Guo, Changbai Li, He Long, Chunyu Xie, Dawei Leng, Baochang Zhang2026-03-12🤖 cs.AI

Defensive Refusal Bias: How Safety Alignment Fails Cyber Defenders

This paper identifies and quantifies "Defensive Refusal Bias," a safety alignment failure in large language models where legitimate cybersecurity defenders are disproportionately denied assistance for critical tasks due to the presence of security-sensitive keywords, a problem exacerbated by explicit authorization attempts and current reliance on semantic similarity rather than intent reasoning.

David Campbell, Neil Kale, Udari Madhushani Sehwag, Bert Herring, Nick Price, Dan Borges, Alex Levinson, Christina Q Knight2026-03-12🤖 cs.AI

CARE: Towards Clinical Accountability in Multi-Modal Medical Reasoning with an Evidence-Grounded Agentic Framework

The paper introduces CARE, an evidence-grounded agentic framework that enhances clinical accountability and reasoning accuracy in multi-modal medical AI by decomposing tasks into specialized modules for entity proposal, pixel-level localization, and evidence-based reasoning, thereby outperforming state-of-the-art models on medical VQA benchmarks.

Yuexi Du, Jinglu Wang, Shujie Liu, Nicha C. Dvornek, Yan Lu2026-03-12🤖 cs.AI

One Model, Many Skills: Parameter-Efficient Fine-Tuning for Multitask Code Analysis

This paper presents the first comprehensive evaluation of parameter-efficient fine-tuning (PEFT) for multitask code analysis, demonstrating that a single shared PEFT module can match or surpass full fine-tuning performance while significantly reducing computational and storage costs, provided that tasks are strategically grouped based on factors like complementarity and stability.

Amal Akli, Maxime Cordy, Mike Papadakis, Yves Le Traon2026-03-12💻 cs

Causally Grounded Mechanistic Interpretability for LLMs with Faithful Natural-Language Explanations

This paper presents a pipeline that bridges mechanistic interpretability and natural language explanations by identifying causally important attention heads in GPT-2 Small, generating high-quality explanations via LLMs, and evaluating their faithfulness to reveal that while explanations can be sufficient, they often lack comprehensiveness due to distributed backup mechanisms.

Ajay Pravin Mahale2026-03-12💬 cs.CL