Bridging Computational Social Science and Deep Learning: Cultural Dissemination-Inspired Graph Neural Networks

This paper introduces AxelGNN, a novel Graph Neural Network architecture inspired by Axelrod's cultural dissemination model that utilizes similarity-gated interactions, segment-wise feature copying, and global polarization to effectively address oversmoothing and heterophily challenges while achieving competitive performance across diverse graph types.

Asela Hevapathige · 2026-03-05 · cs.AI
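The summary's "similarity-gated interactions" and "segment-wise feature copying" mirror the dynamics of Axelrod's classical cultural dissemination model, which the paper takes as inspiration. A minimal sketch of that classical model (not the AxelGNN architecture itself) on a ring of agents, where an agent copies one differing feature from a neighbor with probability equal to their cultural similarity:

```python
import numpy as np

def axelrod_step(culture, rng):
    """One interaction of Axelrod's cultural dissemination model.

    culture: (N, F) int array; N agents, each with F nominal features.
    Agents sit on a ring, so agent i's neighbors are i-1 and i+1.
    """
    n, _ = culture.shape
    i = int(rng.integers(n))
    j = (i + rng.choice([-1, 1])) % n           # random ring neighbor
    shared = culture[i] == culture[j]
    similarity = shared.mean()                   # fraction of matching features
    # Similarity-gated interaction: interact with probability = similarity,
    # but only while at least one feature still differs.
    if 0 < similarity < 1 and rng.random() < similarity:
        k = rng.choice(np.flatnonzero(~shared))  # pick one differing feature
        culture[i, k] = culture[j, k]            # copy it from the neighbor
    return culture

rng = np.random.default_rng(0)
q, n, f = 3, 20, 5                               # q traits per feature
culture = rng.integers(q, size=(n, f))
for _ in range(20000):
    axelrod_step(culture, rng)
```

Run long enough, the dynamics settle into homogeneous or mutually incompatible ("polarized") cultural regions, which is the global-polarization behavior the GNN reuses to counteract oversmoothing.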

Learning in an Echo Chamber: Online Learning with Replay Adversary

This paper introduces Online Learning in the Replay Setting, a framework modeling systems that train on their own self-annotated data, establishing the Extended Threshold dimension as the exact measure of learnability and proving that while proper learners may fail catastrophically, specific improper algorithms can achieve optimal mistake bounds against replay adversaries.

Daniil Dmitriev, Harald Eskelund Franck, Carolin Heinzler + 1 more · 2026-03-05 · cs.LG

FLOWR.root: A flow matching based foundation model for joint multi-purpose structure-aware 3D ligand generation and affinity prediction

FLOWR.root is an SE(3)-equivariant flow-matching foundation model that unifies structure-aware 3D ligand generation with multi-purpose affinity prediction and confidence estimation, achieving state-of-the-art performance through mixed-fidelity training and parameter-efficient finetuning for efficient, high-quality drug design.

Julian Cremer, Tuan Le, Mohammad M. Ghahremanpour + 3 more · 2026-03-05 · cs.LG
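FLOWR.root's SE(3)-equivariant architecture is far more involved, but the flow-matching objective it builds on is generic and easy to illustrate. A toy 1-D sketch, assuming the standard linear (rectified-flow style) probability path: regress a velocity model onto the conditional target `x1 - x0`, then generate by Euler-integrating the learned field from noise (a least-squares linear model stands in for the network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "ligand" data: a two-point distribution at +/-2; prior is Gaussian.
x1 = rng.choice([-2.0, 2.0], size=4096)     # data samples
x0 = rng.standard_normal(4096)               # prior (noise) samples
t = rng.random(4096)                         # random interpolation times

# Linear probability path x_t and its conditional target velocity.
xt = (1.0 - t) * x0 + t * x1                 # interpolant
u = x1 - x0                                  # target velocity for flow matching

# Fit v(x, t) = w0 + w1*x + w2*t by least squares (network stand-in).
A = np.stack([np.ones_like(xt), xt, t], axis=1)
w, *_ = np.linalg.lstsq(A, u, rcond=None)

# Generation: integrate dx/dt = v(x, t) from t=0 to t=1 with Euler steps.
x = rng.standard_normal(2000)
for step in range(100):
    tt = np.full_like(x, step / 100)
    x = x + 0.01 * (w[0] + w[1] * x + w[2] * tt)
```

With this crude linear velocity field the samples spread outward toward the data modes; the paper's contribution lies in the equivariant parameterization, joint affinity heads, and mixed-fidelity training, none of which this sketch attempts.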

Learning Explicit Single-Cell Dynamics Using ODE Representations

The paper proposes Cell-Mechanistic Neural Networks (Cell-MNN), an end-to-end encoder-decoder architecture that utilizes locally linearized ODEs to efficiently model single-cell differentiation dynamics and explicitly learn interpretable, biologically consistent gene interactions, outperforming current state-of-the-art methods in scalability and interpretability.

Jan-Philipp von Bassewitz, Adeel Pervez, Marco Fumero + 3 more · 2026-03-05 · cs.LG
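The interpretability claim rests on the locally linearized ODE form: near a given cell state the dynamics look like dx/dt = A x, so the entries of A read directly as signed gene-gene influences. A minimal sketch with a hypothetical 3-gene interaction matrix (the gene roles and values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 3-gene linear dynamics dx/dt = A @ x.
# A[i, j] is the signed influence of gene j on gene i, readable by eye.
A = np.array([
    [-0.5,  0.0,  0.0],   # gene 0: self-decay only
    [ 0.8, -0.5,  0.0],   # gene 1: activated by gene 0, self-decay
    [ 0.0, -0.6, -0.2],   # gene 2: repressed by gene 1, slow self-decay
])

def simulate(x0, A, dt=0.01, steps=500):
    """Forward-Euler rollout of the locally linear ODE dx/dt = A @ x."""
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * (A @ x)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate(np.array([1.0, 0.0, 1.0]), A)
```

Gene 1 transiently rises (driven by gene 0) and then decays as its activator vanishes, exactly the kind of qualitative interaction an inspector can recover from A; Cell-MNN learns such matrices as local linearizations along differentiation trajectories.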

The Geometry of Reasoning: Flowing Logics in Representation Space

This paper proposes a novel geometric framework that models LLM reasoning as smooth flows in representation space, demonstrating through empirical experiments that next-token prediction enables models to internalize logical invariants as higher-order geometry, thereby challenging the "stochastic parrot" hypothesis and suggesting a universal representational law underlying machine understanding.

Yufa Zhou, Yixiao Wang, Xunjian Yin + 2 more · 2026-03-05 · cs.AI

Circuit Insights: Towards Interpretability Beyond Activations

This paper introduces WeightLens and CircuitLens, two complementary methods that advance mechanistic interpretability by analyzing feature weights and component interactions directly, thereby overcoming the limitations of activation-based approaches in scalability, robustness, and the ability to capture circuit-level dynamics without relying on external explainer models or datasets.

Elena Golimblevskaia, Aakriti Jain, Bruno Puri + 3 more · 2026-03-05 · cs.AI