Co-Diffusion: An Affinity-Aware Two-Stage Latent Diffusion Framework for Generalizable Drug-Target Affinity Prediction

Co-Diffusion is a novel two-stage latent diffusion framework that enhances generalizable drug-target affinity prediction by aligning embeddings in an affinity-steered manifold and employing modality-specific diffusion as a regularizer, thereby achieving superior zero-shot performance on unseen molecular scaffolds and protein families compared to state-of-the-art methods.

Yining Qian, Pengjie Wang, Yixiao Li, An-Yang Lu, Cheng Tan, Shuang Li, Lijun Liu · Fri, 13 Ma [stat]

Efficient Approximation to Analytic and L^p Functions by Height-Augmented ReLU Networks

This paper introduces a three-dimensional height-augmented ReLU network architecture that achieves significantly more efficient exponential approximation rates for analytic functions and provides the first quantitative, non-asymptotic high-order approximation for general L^p functions, thereby advancing the theoretical foundation for designing parameter-efficient neural networks.

ZeYu Li, FengLei Fan, TieYong Zeng · Fri, 13 Ma [stat]

RIE-Greedy: Regularization-Induced Exploration for Contextual Bandits

This paper proposes RIE-Greedy, a contextual bandit algorithm that leverages the inherent stochasticity of cross-validation-based regularization in black-box models to induce effective exploration, theoretically matching Thompson Sampling in simple cases and empirically outperforming existing methods in large-scale applications.
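To make the core mechanism concrete, here is a minimal toy sketch of the general idea (not the paper's algorithm or code): a purely greedy contextual bandit whose only source of exploration is the randomness of the fold split in cross-validation-based regularization. All names, the penalty grid, and the simulation setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam):
    """Closed-form ridge regression (lam > 0 keeps the system invertible)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_select_lambda(X, y, lams, k=3):
    """Pick a penalty by k-fold CV with a *random* fold split.
    The random split makes the fitted model stochastic, which is the
    exploration mechanism this sketch illustrates."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    best, best_err = lams[0], np.inf
    for lam in lams:
        err = 0.0
        for f in folds:
            tr = np.setdiff1d(idx, f)
            w = fit_ridge(X[tr], y[tr], lam)
            err += np.sum((X[f] @ w - y[f]) ** 2)
        if err < best_err:
            best, best_err = lam, err
    return best

d, n_arms, T = 3, 2, 200
theta = rng.normal(size=(n_arms, d))        # unknown true arm parameters
data = [([], []) for _ in range(n_arms)]    # per-arm (contexts, rewards)
lams = [0.1, 1.0, 10.0]

def pull(a, x):
    r = float(x @ theta[a] + 0.1 * rng.normal())
    data[a][0].append(x)
    data[a][1].append(r)

# Warm start: a few pulls per arm so every model can be fit.
for a in range(n_arms):
    for _ in range(5):
        pull(a, rng.normal(size=d))

for t in range(T):
    x = rng.normal(size=d)
    preds = []
    for a in range(n_arms):
        X, y = np.array(data[a][0]), np.array(data[a][1])
        lam = cv_select_lambda(X, y, lams)  # stochastic via random folds
        preds.append(x @ fit_ridge(X, y, lam))
    pull(int(np.argmax(preds)), x)          # purely greedy choice
```

The greedy step never adds explicit noise; runs differ only because the CV fold assignment perturbs which penalty, and hence which parameter estimate, each arm uses.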

Tong Li, Thiago de Queiroz Casanova, Eric M. Schwartz, Victor Kostyuk, Dehan Kong, Joseph J. Williams · Fri, 13 Ma [stat]

Exploiting Expertise of Non-Expert and Diverse Agents in Social Bandit Learning: A Free Energy Approach

This paper proposes and validates a free energy-based social bandit learning algorithm that enables agents to effectively identify and leverage the expertise of both expert and non-expert peers without reward information, thereby achieving optimal policy convergence and logarithmic regret while significantly outperforming existing methods in diverse social learning scenarios.

Erfan Mirzaei, Seyed Pooya Shariatpanahi, Alireza Tavakoli, Reshad Hosseini, Majid Nili Ahmadabadi · Fri, 13 Ma [stat]

A Further Efficient Algorithm with Best-of-Both-Worlds Guarantees for m-Set Semi-Bandit Problem

This paper establishes that the Follow-the-Perturbed-Leader (FTPL) algorithm, when combined with Fréchet or Pareto distributions and an improved conditional geometric resampling technique, achieves optimal best-of-both-worlds regret guarantees for m-set semi-bandit problems while reducing computational complexity from O(d^2) to O(md(log(d/m)+1)).
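For readers unfamiliar with the setup, the following is a toy single-run sketch of generic FTPL for m-set semi-bandits with Fréchet perturbations and *plain* geometric resampling, not the paper's improved conditional variant or its guarantees. The loss values, learning rate, and resampling cap are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

d, m, T = 6, 2, 500
eta = 0.1
losses_true = np.array([0.1, 0.2, 0.6, 0.7, 0.8, 0.9])  # Bernoulli loss means
L_hat = np.zeros(d)          # cumulative loss estimates
counts = np.zeros(d, int)    # how often each arm was played

def frechet(size, alpha=2.0):
    """Fréchet(alpha) samples via inverse CDF."""
    u = rng.uniform(size=size)
    return (-np.log(u)) ** (-1.0 / alpha)

def play():
    """FTPL: pick the m arms with the smallest perturbed cumulative loss."""
    z = frechet(d)
    return np.argsort(L_hat - z / eta)[:m]

for t in range(T):
    A = play()
    counts[A] += 1
    obs = (rng.uniform(size=m) < losses_true[A]).astype(float)
    # Geometric resampling: estimate 1/P(i in A) by counting how many
    # fresh FTPL draws it takes until arm i is selected again.
    for i, li in zip(A, obs):
        K = 1
        while i not in play() and K < 100:  # capped for this sketch
            K += 1
        L_hat[i] += K * li                  # importance-weighted loss estimate
```

Each resampling loop here redraws the full m-set from scratch; the paper's conditional geometric resampling is precisely a refinement of this step that brings the cost down to O(md(log(d/m)+1)).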

Botao Chen, Jongyeong Lee, Chansoo Kim, Junya Honda · Fri, 13 Ma [stat]

Language Generation with Replay: A Learning-Theoretic View of Model Collapse

This paper provides a learning-theoretic analysis of model collapse by introducing a replay adversary framework, demonstrating that while replay does not hinder uniform generation, it fundamentally limits non-uniform and limit-based generation, thereby offering theoretical justification for practical mitigation strategies like data cleaning and watermarking while also revealing their potential failure points.

Giorgio Racca, Michal Valko, Amartya Sanyal · Fri, 13 Ma [stat]

Uncovering Locally Low-dimensional Structure in Networks by Locally Optimal Spectral Embedding

This paper introduces Local Adjacency Spectral Embedding (LASE), a novel method that overcomes the limitations of global spectral embedding by uncovering locally low-dimensional network structures through weighted spectral decomposition, thereby improving local reconstruction, visualization, and theoretical guarantees via finite-sample bounds and spectral gap analysis.
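As a rough illustration of the contrast between global and locality-weighted spectral embedding (a generic sketch, not the paper's LASE definition), the snippet below embeds a two-block stochastic block model globally, then re-embeds after down-weighting nodes far from a seed node. The weighting scheme and all parameters are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-community stochastic block model
n, p_in, p_out = 60, 0.5, 0.05
z = np.repeat([0, 1], n // 2)
P = np.where(z[:, None] == z[None, :], p_in, p_out)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops

def spectral_embed(M, dim):
    """Embed via the top-|eigenvalue| eigenpairs, scaled by sqrt|lambda|
    (standard adjacency spectral embedding)."""
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(vals))[:dim]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

X_global = spectral_embed(A, 2)

# Locality-weighted variant: down-weight nodes outside the seed's
# 2-hop neighbourhood before decomposing (an illustrative choice).
seed = 0
hops = A[seed] + (A @ A[seed] > 0)
w = np.where(hops > 0, 1.0, 0.1)
W = np.sqrt(np.outer(w, w))                  # symmetric weight matrix
X_local = spectral_embed(A * W, 2)
```

The weighted decomposition concentrates spectral energy on the seed's neighbourhood, so the leading eigenvectors describe local rather than global structure, which is the intuition the paper formalizes with finite-sample bounds.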

Hannah Sansford, Nick Whiteley, Patrick Rubin-Delanchy · Fri, 13 Ma [stat]