SecureRAG-RTL: A Retrieval-Augmented, Multi-Agent, Zero-Shot LLM-Driven Framework for Hardware Vulnerability Detection

This paper introduces SecureRAG-RTL, a novel Retrieval-Augmented Generation framework that significantly improves hardware vulnerability detection in HDL designs by integrating domain-specific knowledge retrieval with large language models, achieving an average 30% increase in accuracy; the authors also release a new benchmark dataset of 14 annotated designs.

Touseef Hasan, Blessing Airehenbuwa, Nitin Pundir, Souvika Sarkar, Ujjwal Guin · Mon, 09 Ma · cs.AI
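SecureRAG-RTL's pipeline is not reproduced here; as a generic illustration of retrieval-augmented prompting, the sketch below retrieves the knowledge snippets most similar to an RTL query (bag-of-words cosine similarity) and prepends them to an LLM prompt. All corpus entries, function names, and the prompt template are illustrative assumptions, not the paper's implementation.

```python
# Generic RAG prompting sketch (NOT SecureRAG-RTL's implementation):
# score knowledge snippets against a query with bag-of-words cosine
# similarity, keep the top-k, and build an augmented prompt.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = Counter(query.lower().split())
    return sorted(corpus,
                  key=lambda d: cosine(q, Counter(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Task: analyze the RTL for vulnerabilities.\nQuery: {query}")

# Toy knowledge base of hardware-weakness notes (illustrative only).
corpus = [
    "CWE-1234: debug mode bypasses lock-bit protection in registers",
    "CWE-1271: uninitialized reset value for security-critical state",
    "Unrelated note about synthesis timing constraints",
]
prompt = build_prompt("reset value of lock register is uninitialized", corpus)
```

The point of the retrieval step is simply that the most relevant weakness description reaches the model's context before the query does.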

Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models

This paper proposes the Disentangled Safety Hypothesis (DSH), which reveals that large language models separate safety "recognition" and "refusal execution" into distinct geometric subspaces, enabling the development of the Refusal Erasure Attack (REA) to bypass safety mechanisms by surgically disabling the refusal axis while preserving harmful content generation.

Jinman Wu, Yi Xie, Shen Lin, Shiqian Zhao, Xiaofeng Chen · Mon, 09 Ma · cs.AI
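The paper's REA pipeline is not shown here; the core linear-algebra operation behind "surgically disabling" a direction, though, is just projecting it out of a hidden state. The sketch below is a generic directional-ablation example under the assumption that a refusal axis has already been identified; the vectors are made up.

```python
# Generic directional ablation (NOT the paper's REA pipeline): remove
# the component of a hidden state h along a hypothesized "refusal
# direction" r, leaving all orthogonal components (e.g. the model's
# safety "recognition" features) untouched.
import numpy as np

def ablate_direction(h: np.ndarray, r: np.ndarray) -> np.ndarray:
    r_hat = r / np.linalg.norm(r)      # unit vector along the axis
    return h - (h @ r_hat) * r_hat     # project out that component

h = np.array([3.0, 4.0])               # toy hidden state
r = np.array([1.0, 0.0])               # toy "refusal" axis
h_prime = ablate_direction(h, r)       # component along r removed
```

After ablation, `h_prime` has zero projection onto `r` but is unchanged in every orthogonal direction, which is exactly the disentanglement the DSH relies on.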

When Specifications Meet Reality: Uncovering API Inconsistencies in Ethereum Infrastructure

This paper introduces APIDiffer, a specification-guided differential testing framework that automatically detects API inconsistencies across Ethereum clients by generating real-world test cases and using large language models to filter false positives, successfully uncovering 72 confirmed bugs and significantly outperforming existing tools in coverage and accuracy.

Jie Ma, Ningyu He, Jinwen Xi, Mingzhe Xing, Liangxin Liu, Jiushenzi Luo, Xiaopeng Fu, Chiachih Wu, Haoyu Wang, Ying Gao, Yinliang Yue · Mon, 09 Ma · cs
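APIDiffer's actual harness is not reproduced here; the differential-testing core it builds on can be sketched generically: feed identical inputs to two implementations of the same API and flag any divergence. The "clients" below are toy stand-ins, not real Ethereum client endpoints.

```python
# Generic differential-testing loop (NOT APIDiffer itself): the same
# request goes to two implementations; any mismatch is recorded as a
# candidate inconsistency for later triage.
def differential_test(cases, client_a, client_b):
    inconsistencies = []
    for case in cases:
        a, b = client_a(case), client_b(case)
        if a != b:
            inconsistencies.append({"input": case, "a": a, "b": b})
    return inconsistencies

# Toy stand-ins that disagree on negative inputs (illustrative only).
impl_a = lambda n: max(n, 0)
impl_b = lambda n: abs(n)
diffs = differential_test([-2, 0, 3], impl_a, impl_b)
```

In APIDiffer, this raw divergence list is then filtered by an LLM to separate real spec violations from benign differences.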

A LINDDUN-based Privacy Threat Modeling Framework for GenAI

This paper introduces a novel, LINDDUN-based privacy threat modeling framework specifically designed for Generative AI systems, which expands the existing threat taxonomy with new categories and examples derived from a systematic literature review and validated through a case study on an AI Agent system.

Qianying Liao, Jonah Bellemans, Laurens Sion, Xue Jiang, Dmitrii Usynin, Xuebing Zhou, Dimitri Van Landuyt, Lieven Desmet, Wouter Joosen · Mon, 09 Ma · cs

Alkaid: Resilience to Edit Errors in Provably Secure Steganography via Distance-Constrained Encoding

The paper proposes Alkaid, a provably secure steganographic scheme that achieves deterministic robustness against edit errors by integrating minimum distance decoding into the encoding process, thereby significantly outperforming state-of-the-art methods in decoding success rates, payload capacity, and encoding speed.

Zhihan Cao, Gaolei Li, Jun Wu, Jianhua Li, Hang Zhang, Mingzhe Chen · Mon, 09 Ma · math
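Alkaid's construction is not reproduced here, but the decoding primitive it builds on is classical: map a possibly edit-corrupted received word to the nearest codeword. The sketch below shows minimum-distance decoding under edit (Levenshtein) distance with a made-up codebook; Alkaid's contribution is constraining codeword distances at encoding time so this step succeeds deterministically under bounded edits.

```python
# Minimum-distance decoding in the abstract (NOT Alkaid's scheme):
# decode a received word to the message whose codeword is nearest in
# edit distance. Codebook and words are illustrative.
def edit_distance(a: str, b: str) -> int:
    # One-row Levenshtein DP over insertions, deletions, substitutions.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[len(b)]

def decode(received: str, codebook: dict[str, int]) -> int:
    # Nearest-codeword rule: smallest edit distance wins.
    best = min(codebook, key=lambda cw: edit_distance(received, cw))
    return codebook[best]

codebook = {"aaaaaa": 0, "bbbbbb": 1, "cccccc": 2}  # pairwise distance >= 5
msg = decode("aabaaa", codebook)  # one substitution still decodes to 0
```

If every pair of codewords is at edit distance at least 2t + 1, any corruption of up to t edits decodes correctly, which is the deterministic-robustness guarantee the summary refers to.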

An Integrated Failure and Threat Mode and Effect Analysis (FTMEA) Framework with Quantified Cross-Domain Correlation Factors for Automotive Semiconductors

This paper proposes an Integrated Failure and Threat Mode and Effect Analysis (FTMEA) framework for automotive semiconductors that unifies functional safety and cybersecurity assessments by introducing quantified Cross-Domain Correlation Factors to accurately identify and prioritize synergistic risks that traditional methods often overlook.

Antonino Armato, Marzana Khatun, Sebastian Fischer · Mon, 09 Ma · cs

Designing Trustworthy Layered Attestations

This paper proposes a framework for designing trustworthy layered attestations that overcome the limitations of shallow verification by structuring systems to isolate critical components and layer evidence across them, achieving reliable security against strong adversaries with negligible performance overhead using widely available hardware and software.

Will Thomas, Logan Schmalz, Adam Petz, Perry Alexander, Joshua D. Guttman, Paul D. Rowe, James Carter · Mon, 09 Ma · cs

ESAA-Security: An Event-Sourced, Verifiable Architecture for Agent-Assisted Security Audits of AI-Generated Code

This paper introduces ESAA-Security, an event-sourced architecture that transforms AI-assisted security auditing from unreliable free-form LLM conversations into a traceable, reproducible, and verifiable governance pipeline by separating agent cognition from deterministic state mutations to ensure immutable audit trails for AI-generated code.

Elzo Brito dos Santos Filho · Mon, 09 Ma · cs.AI
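ESAA-Security's architecture is not reproduced here; the event-sourcing pattern it applies can be sketched generically: agent actions are recorded as immutable, hash-chained events, and audit state exists only as a deterministic replay of that log. Event types, payload fields, and class names below are illustrative assumptions.

```python
# Generic event-sourcing sketch (NOT ESAA-Security itself): an
# append-only, hash-chained log separates what happened (events) from
# derived state (replay), so mutations are traceable and tampering
# with any past event breaks the chain.
import hashlib, json

class AuditLog:
    def __init__(self):
        self.events = []  # append-only; never edited in place

    def append(self, event_type: str, payload: dict) -> str:
        prev = self.events[-1]["hash"] if self.events else "0" * 64
        body = json.dumps({"type": event_type, "payload": payload,
                           "prev": prev}, sort_keys=True)
        h = hashlib.sha256(body.encode()).hexdigest()
        self.events.append({"type": event_type, "payload": payload,
                            "prev": prev, "hash": h})
        return h

    def verify(self) -> bool:
        # Recompute every hash; any edited event invalidates the chain.
        prev = "0" * 64
        for e in self.events:
            body = json.dumps({"type": e["type"], "payload": e["payload"],
                               "prev": prev}, sort_keys=True)
            if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

    def replay(self) -> dict:
        # Audit state is derived purely by replaying events in order.
        state = {}
        for e in self.events:
            if e["type"] == "finding_reported":
                state[e["payload"]["id"]] = "open"
            elif e["type"] == "finding_resolved":
                state[e["payload"]["id"]] = "resolved"
        return state

log = AuditLog()
log.append("finding_reported", {"id": "F1", "cwe": "CWE-79"})
log.append("finding_resolved", {"id": "F1"})
```

Keeping the LLM agent's reasoning outside this log, and admitting only typed events into it, is what makes the resulting audit trail reproducible and verifiable.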