Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic

This paper proposes a novel framework that distills the geometric semantics of Signal Temporal Logic into a Transformer encoder via kernel alignment. The resulting neural representations are efficient, invertible, and semantically faithful, overcoming the computational limitations of symbolic kernels and the structural deficiencies of syntax-based embeddings.

Sara Candussio, Gabriele Sarti, Gaia Saveri + 1 more · 2026-03-06 · cs.CL
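The kernel-alignment objective described in the abstract can be illustrated with a minimal sketch. The idea, under the stated assumptions, is to align the Gram matrix induced by neural embeddings with a precomputed symbolic STL kernel; a standard way to measure this is centered kernel alignment (CKA). The function names and the stand-in STL kernel below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def center(K):
    # Double-center a Gram matrix: H @ K @ H with H = I - (1/n) * ones
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernel_alignment(K, L):
    # Centered kernel alignment (CKA) between two Gram matrices.
    # Returns a value in [0, 1]; 1 means the kernels induce the
    # same pairwise geometry on the batch.
    Kc, Lc = center(K), center(L)
    return float(np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc)))

# Toy usage: compare an embedding Gram matrix to a (hypothetical)
# symbolic STL kernel over the same batch of trajectories.
rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 4))   # neural embeddings of 8 trajectories
K_stl = Z @ Z.T               # stand-in for the symbolic STL kernel
score = kernel_alignment(K_stl, Z @ Z.T)
```

In a distillation setting, one would maximize this alignment (or minimize `1 - score`) as a training loss so that the encoder's embedding space reproduces the geometry of the symbolic kernel.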

Med-V1: Small Language Models for Zero-shot and Scalable Biomedical Evidence Attribution

The paper introduces Med-V1, a family of efficient 3-billion-parameter small language models trained on synthetic data that achieve performance comparable to frontier models like GPT-5 for biomedical evidence attribution and hallucination detection, while enabling scalable applications such as analyzing citation validity and identifying evidence misattributions in clinical guidelines.

Qiao Jin, Yin Fang, Lauren He + 12 more · 2026-03-06 · cs.AI