Trade-offs between structural richness and communication efficiency in music network representations

This study demonstrates that the choice of musical feature encoding fundamentally shapes network representations of music, revealing a critical trade-off: compressed single-feature models carry high uncertainty but low learning error, while richer multi-feature models preserve fine distinctions at the cost of a larger state space and higher model error, thereby determining how plausibly these networks reflect realistic listener expectations.

Lluc Bono Rosselló, Robert Jankowski, Hugues Bersini, Marián Boguñá, M. Ángeles Serrano · Thu, 12 Ma · 🧬 q-bio

Technological folie à deux: Feedback Loops Between AI Chatbots and Mental Illness

This paper argues that the interaction between vulnerable individuals with mental health conditions and AI chatbots creates dangerous feedback loops—driven by human cognitive biases and AI sycophancy—that can destabilize beliefs and foster dependence, necessitating urgent coordinated action across clinical, developmental, and regulatory domains to mitigate emerging public health risks.

Sebastian Dohnány, Zeb Kurth-Nelson, Eleanor Spens, Lennart Luettgau, Alastair Reid, Iason Gabriel, Christopher Summerfield, Murray Shanahan, Matthew M Nour · Thu, 12 Ma · 🧬 q-bio

Uncovering Semantic Selectivity of Latent Groups in Higher Visual Cortex with Mutual Information-Guided Diffusion

This paper introduces MIG-Vis, a method combining variational autoencoders with mutual information-guided diffusion models to directly visualize and validate that neural populations in the macaque inferior temporal cortex are organized into structured, semantically meaningful latent groups encoding specific visual features like object pose and category transformations.

Yule Wang, Joseph Yu, Chengrui Li, Weihan Li, Anqi Wu · Thu, 12 Ma · 🧬 q-bio

Uncovering statistical structure in large-scale neural activity with Restricted Boltzmann Machines

This paper demonstrates that Restricted Boltzmann Machines can effectively model large-scale neural activity from approximately 1,500 to 2,000 simultaneously recorded neurons, capturing complex higher-order statistics and revealing anatomically structured interaction networks that align with visual behavior and global dynamics.
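For readers unfamiliar with the model class, an RBM is a two-layer energy-based model whose visible units (here, neurons) and hidden units are sampled alternately by block Gibbs updates. The sketch below is a minimal illustration with toy sizes and random weights, not the paper's fitted model or its ~1,500-2,000 recorded neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: n_v binary "neurons", n_h hidden units (illustrative sizes).
n_v, n_h = 20, 5
W = rng.normal(0, 0.1, (n_v, n_h))  # visible-hidden couplings
b, c = np.zeros(n_v), np.zeros(n_h)  # visible and hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep: sample hidden given visible, then visible given hidden."""
    h = (rng.random(n_h) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(n_v) < sigmoid(W @ h + b)).astype(float)
    return v, h

# Run the chain from a random binary state.
v = (rng.random(n_v) < 0.5).astype(float)
for _ in range(100):
    v, h = gibbs_step(v)
```

Because the hidden units couple all visible units at once, the marginal over the visible layer captures higher-order statistics beyond pairwise correlations, which is what makes RBMs suited to the population-level structure described above.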

Nicolas Béreux, Giovanni Catania, Aurélien Decelle, Francesca Mignacco, Alfonso de Jesús Navas Gómez, Beatriz Seoane · Thu, 12 Ma · 🧬 q-bio

Behavior-dLDS: A decomposed linear dynamical systems model for neural activity partially constrained by behavior

This paper introduces behavior-decomposed linear dynamical systems (b-dLDS), a novel modeling approach that disentangles behavior-related neural dynamics from internal computations in large-scale brain recordings, demonstrating superior performance over existing supervised models and successfully scaling to tens of thousands of neurons in zebrafish hindbrain data.
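The "decomposed" part of a dLDS means the dynamics at each time step are a weighted combination of a small set of basis matrices, x_{t+1} = (Σ_k c_k[t] A_k) x_t, with sparse time-varying coefficients. A minimal simulation of that structure, with illustrative names and sizes rather than the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decomposed LDS: d latent dims, K dynamics bases, T time steps.
d, K, T = 4, 2, 50
A = rng.normal(0, 0.3, (K, d, d))        # basis dynamics matrices A_k
C = np.abs(rng.normal(0, 0.5, (T, K)))   # time-varying coefficients c_k[t]
C[rng.random((T, K)) > 0.5] = 0.0        # zero out entries to mimic sparsity

# Roll the latent state forward: x[t+1] = (sum_k C[t, k] * A[k]) @ x[t]
x = np.zeros((T + 1, d))
x[0] = rng.normal(size=d)
for t in range(T):
    x[t + 1] = np.einsum("k,kij,j->i", C[t], A, x[t])
```

In the b-dLDS setting, some coefficients would additionally be constrained by measured behavior, letting the remaining bases absorb internally generated dynamics.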

Eva Yezerets, En Yang, Misha B. Ahrens, Adam S. Charles · Mon, 09 Ma · 🤖 cs.LG

Causal Interpretation of Neural Network Computations with Contribution Decomposition

This paper introduces CODEC, a method that utilizes sparse autoencoders to decompose neural network computations into sparse, causal motifs of hidden-neuron contributions, thereby revealing how nonlinear processes evolve across layers and enabling greater interpretability and control of both artificial and biological neural systems.

Joshua Brendan Melander, Zaki Alaoui, Shenghua Liu, Surya Ganguli, Stephen A. Baccus · Mon, 09 Ma · 🤖 cs.LG

Convex Efficient Coding

This paper introduces a tractable and flexible normative framework for neural coding by optimizing representational similarity rather than direct neural activity, demonstrating that a broad class of such problems is convex and using this property to derive new results on model identifiability, the uniqueness of neural tunings, and the optimal structure of ON/OFF channels in retinal versus cortical codes.

William Dorrell, Peter E. Latham, James Whittington · 2026-03-06 · 🧬 q-bio

Why the Brain Consolidates: Predictive Forgetting for Optimal Generalisation

This paper proposes that memory consolidation serves a computational role beyond mere stabilization, utilizing "predictive forgetting" to compress stored representations into a form that optimizes generalization by selectively retaining information that predicts future outcomes, a process necessitated by high-capacity encoding constraints and validated through simulations across diverse neural and transformer models.

Zafeirios Fountas, Adnan Oomerjee, Haitham Bou-Ammar + 2 more · 2026-03-06 · 💻 cs