ContextBench: Modifying Contexts for Targeted Latent Activation

This paper introduces ContextBench, a benchmark for evaluating methods that generate fluent inputs to trigger specific latent features in language models, and demonstrates that enhanced Evolutionary Prompt Optimization variants achieve state-of-the-art performance in balancing elicitation strength with linguistic fluency.

Robert Graham, Edward Stevinson, Leo Richter, Alexander Chia, Joseph Miller, Joseph Isaac Bloom · Mon, 09 Ma · 🤖 cs.AI
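The summary above describes evolutionary search over prompts that trades off elicitation strength against fluency. A minimal sketch of that kind of loop, with toy stand-in scorers — the `activation` and `fluency` functions below are invented placeholders, not ContextBench's actual objectives, which score a real language model's latent features:

```python
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "mat", "quantum", "river", "blue"]

def activation(prompt):
    # Stand-in for a target latent feature's activation strength:
    # here, simply the count of a trigger token.
    return prompt.count("quantum")

def fluency(prompt):
    # Stand-in fluency score: penalize immediate word repetitions.
    return -sum(a == b for a, b in zip(prompt, prompt[1:]))

def fitness(prompt, lam=0.5):
    # The trade-off the benchmark measures: elicitation vs. fluency.
    return activation(prompt) + lam * fluency(prompt)

def mutate(prompt):
    child = list(prompt)
    child[random.randrange(len(child))] = random.choice(VOCAB)
    return child

def evolve(pop_size=20, length=6, generations=50):
    pop = [[random.choice(VOCAB) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitist selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(" ".join(best), fitness(best))
```

With a real model, `activation` would read out the targeted latent and `fluency` would come from a language-model perplexity; the loop structure stays the same.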

Iterative Quantum Feature Maps

The paper proposes Iterative Quantum Feature Maps (IQFMs), a hybrid quantum-classical framework that constructs deep architectures by iteratively connecting shallow, noise-resilient quantum feature maps with classically computed weights to mitigate hardware limitations and achieve performance comparable to classical neural networks without optimizing variational quantum parameters.

Nasa Matsumoto, Quoc Hoan Tran, Koki Chinzei, Yasuhiro Endo, Hirotaka Oshima · Mon, 09 Ma · ⚛️ quant-ph

Learning the action for long-time-step simulations of molecular dynamics

This paper proposes a machine learning approach that learns data-driven, structure-preserving (symplectic and time-reversible) maps equivalent to the mechanical action of a system, enabling accurate long-time-step molecular dynamics simulations that eliminate the energy conservation and equipartition artifacts typical of non-structure-preserving ML predictors.

Filippo Bigi, Johannes Spies, Michele Ceriotti · Mon, 09 Ma · 🔬 cond-mat.mtrl-sci
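Structure-preserving maps of the kind the summary describes can be built by composing "kick" and "drift" shears, which are symplectic and time-reversible by construction regardless of what function supplies the force. A minimal sketch with a hand-written harmonic force standing in for a learned model (an assumption for illustration; the paper's learned action is far more general):

```python
def force(q):
    return -q  # harmonic oscillator; a learned model could go here

def verlet_step(q, p, h):
    # Velocity Verlet: symplectic and time-reversible by construction,
    # because it is a composition of shear maps (each has Jacobian
    # determinant 1).
    p_half = p + 0.5 * h * force(q)
    q_new = q + h * p_half
    p_new = p_half + 0.5 * h * force(q_new)
    return q_new, p_new

def energy(q, p):
    return 0.5 * p * p + 0.5 * q * q

q, p = 1.0, 0.0
h = 0.5  # deliberately large time step
e0 = energy(q, p)
for _ in range(10_000):
    q, p = verlet_step(q, p, h)
print(energy(q, p))
```

Because each half-step is a shear, phase-space volume is preserved exactly; the energy therefore oscillates in a bounded band instead of drifting, even at this large time step, and running a step with `-h` exactly undoes it.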

Escaping Model Collapse via Synthetic Data Verification: Near-term Improvements and Long-term Convergence

This paper demonstrates that injecting external verification into synthetic data retraining can prevent model collapse and yield near-term improvements, though theoretical analysis and experiments across linear regression, VAEs, and LLMs show that long-term performance ultimately converges to the verifier's knowledge center and may plateau or decline if the verifier is imperfect.

Bingji Yi, Qiyuan Liu, Yuwei Cheng, Haifeng Xu · Mon, 09 Ma · 🤖 cs.LG
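The convergence-to-the-verifier effect described above can be seen in a one-dimensional toy: a model that refits to its own samples drifts, while filtering those samples through an imperfect verifier anchors the estimate near the verifier's center rather than the truth. Everything below — the Gaussian model, the hard acceptance window, the specific centers — is an illustrative assumption, not the paper's setup:

```python
import random

random.seed(0)

TRUE_MEAN = 0.0
VERIFIER_CENTER = 0.2  # imperfect verifier, anchored slightly off the truth

def retrain(mean, verify, n=200, noise=1.0):
    # "Generate" synthetic data from the current model, optionally filter
    # it with the verifier, then refit (here: take the sample mean).
    samples = [random.gauss(mean, noise) for _ in range(n)]
    if verify:
        samples = [x for x in samples if abs(x - VERIFIER_CENTER) < 1.0]
    return sum(samples) / len(samples) if samples else mean

unverified, verified = 0.0, 0.0
for _ in range(50):
    unverified = retrain(unverified, verify=False)  # random-walk drift
    verified = retrain(verified, verify=True)       # pulled toward verifier

print(unverified, verified)
```

The unfiltered chain accumulates sampling error generation after generation, while the filtered chain is mean-reverting toward `VERIFIER_CENTER` — near-term stabilization, long-term convergence to the verifier's knowledge center, matching the qualitative claim above.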

Data-Driven Global Sensitivity Analysis for Engineering Design Based on Individual Conditional Expectations

This paper proposes a novel global sensitivity analysis method based on Individual Conditional Expectation (ICE) curves that overcomes the limitations of traditional Partial Dependence Plots (PDPs) in capturing input interactions, offering a mathematically proven, more informative metric for explainable machine learning in engineering design.

Pramudita Satria Palar, Paul Saves, Rommel G. Regis, Koji Shimoyama, Shigeru Obayashi, Nicolas Verstaevel, Joseph Morlier · Mon, 09 Ma · 🤖 cs.AI
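The PDP-vs-ICE distinction is easy to see on a toy model with a pure interaction term: the PDP for x0 (the average of the ICE curves) is nearly flat, while the spread across ICE curves exposes the interaction. A sketch, with the variance-style `spread` metric below an invented stand-in for the paper's proposed ICE-based measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model with a pure x0*x1 interaction: averaging over the data (the
# PDP) cancels it, but individual conditional expectations do not.
def model(X):
    return X[:, 0] * X[:, 1]

X = rng.uniform(-1, 1, size=(200, 2))
grid = np.linspace(-1, 1, 21)

# ICE: one curve per observation, varying x0 over the grid while holding
# the other features at their observed values.
ice = np.empty((len(X), len(grid)))
for j, g in enumerate(grid):
    Xg = X.copy()
    Xg[:, 0] = g
    ice[:, j] = model(Xg)

pdp = ice.mean(axis=0)           # PDP = pointwise average of ICE curves
spread = ice.std(axis=0).mean()  # dispersion across ICE curves

print(np.abs(pdp).max(), spread)
```

The PDP stays near zero everywhere, wrongly suggesting x0 is unimportant, while the ICE dispersion grows with |x0| — exactly the interaction information a PDP-based sensitivity index would miss.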

Learning Optimal Distributionally Robust Individualized Treatment Rules Integrating Multi-Source Data

This paper proposes a prior information-based distributionally robust individualized treatment rule (PDRO-ITR) that integrates multi-source data to address posterior shift by maximizing the worst-case policy value over a covariate-dependent uncertainty set, thereby ensuring robust performance and achieving superior results in simulations and real-world applications.

Wenhai Cui, Wen Su, Xingqiu Zhao · Mon, 09 Ma · 🤖 cs.LG

Behavior-dLDS: A decomposed linear dynamical systems model for neural activity partially constrained by behavior

This paper introduces behavior-decomposed linear dynamical systems (b-dLDS), a novel modeling approach that disentangles behavior-related neural dynamics from internal computations in large-scale brain recordings, demonstrating superior performance over existing supervised models and successfully scaling to tens of thousands of neurons in zebrafish hindbrain data.

Eva Yezerets, En Yang, Misha B. Ahrens, Adam S. Charles · Mon, 09 Ma · 🤖 cs.LG
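A decomposed linear dynamical system expresses the dynamics at each time step as a coefficient-weighted mix of a few fixed linear operators; in a b-dLDS-style split, some coefficients are tied to an observed behavior signal while the rest remain free to capture internal computation. A heavily simplified sketch — the 0/1 behavior signal, the rotation operators, and the fixed internal coefficient are all illustrative assumptions, not the paper's model:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

A_behavior = rotation(0.3)   # operator whose coefficient tracks behavior
A_internal = rotation(-0.1)  # operator left free for internal dynamics

T = 100
# Binary behavior signal (e.g. moving vs. not moving), observed.
behavior = (np.sin(np.linspace(0, 4 * np.pi, T)) > 0).astype(float)
c_internal = 1.0 - behavior  # free coefficients, fixed here for simplicity

# Simulate x[t+1] = (sum_i c_i(t) * A_i) @ x[t].
x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(T - 1):
    A_t = behavior[t] * A_behavior + c_internal[t] * A_internal
    x[t + 1] = A_t @ x[t]

print(np.linalg.norm(x[-1]))
```

In the real model the coefficients and operators are learned from neural data, with only a subset constrained by behavior; the point of the sketch is the shared structure, a time-varying mixture of a small dictionary of linear dynamics.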

Agnostic learning in (almost) optimal time via Gaussian surface area

This paper improves the known bounds for agnostic learning of concept classes with Gaussian surface area at most Γ by demonstrating that polynomial degree Õ(Γ²/ε²) suffices for ε-approximation, thereby yielding near-optimal complexity for learning polynomial threshold functions in the statistical query model.

Lucas Pesenti, Lucas Slot, Manuel Wiedmer · Mon, 09 Ma · 🤖 cs.LG
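The degree bound can be restated as follows; the approximation notion shown (expected absolute error under the Gaussian measure) is the one standard in agnostic learning via polynomial regression, and is an assumption about this paper's exact statement:

```latex
% Sketch of the statement summarized above, not of the proof.
% C: a class of concepts f : R^n -> {-1, 1} whose Gaussian surface
% area is at most Gamma.
\[
  d \;=\; \tilde{O}\!\left(\frac{\Gamma^{2}}{\varepsilon^{2}}\right)
  \quad\Longrightarrow\quad
  \forall f \in \mathcal{C}\;\; \exists\, p,\ \deg(p) \le d :\;
  \mathop{\mathbb{E}}_{x \sim \mathcal{N}(0, I_{n})}
  \bigl[\, \lvert p(x) - f(x) \rvert \,\bigr] \;\le\; \varepsilon .
\]
```

Such an approximator, combined with L1 polynomial regression, yields the agnostic learner; the degree d directly controls the sample and query complexity, which is why improving it to Õ(Γ²/ε²) gives the near-optimal running time in the title.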