Why the Brain Consolidates: Predictive Forgetting for Optimal Generalisation

This paper proposes that memory consolidation serves a computational role beyond mere stabilization: "predictive forgetting" compresses stored representations into a form that optimizes generalization by selectively retaining information that predicts future outcomes. The authors argue this process is necessitated by the constraints of high-capacity encoding and validate it through simulations across diverse neural and transformer models.

Zafeirios Fountas, Adnan Oomerjee, Haitham Bou-Ammar + 2 more · 2026-03-06 · cs

A Benchmark Study of Neural Network Compression Methods for Hyperspectral Image Classification

This paper presents a systematic benchmark study evaluating the effectiveness of pruning, quantization, and knowledge distillation in compressing neural networks for hyperspectral image classification, demonstrating that these methods can significantly reduce model size and computational costs while maintaining competitive accuracy for resource-constrained remote sensing applications.
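As a minimal illustration of one of the benchmarked compression methods (a generic sketch, not the paper's own code or models), magnitude-based pruning zeroes out the smallest-magnitude fraction of a weight matrix:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W_pruned = magnitude_prune(W, sparsity=0.9)
print(np.mean(W_pruned == 0))  # ≈ 0.9 of the entries are now zero
```

In practice, pruned networks are usually fine-tuned afterwards to recover accuracy; the benchmark study compares such pipelines against quantization and knowledge distillation.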

Sai Shi · 2026-03-06 · cs

Evaluating GPT-5 as a Multimodal Clinical Reasoner: A Landscape Commentary

This landscape commentary evaluates the GPT-5 family against GPT-4o, revealing substantial improvements in expert-level textual reasoning and multimodal synthesis that approach state-of-the-art performance in tasks like mammography, while highlighting that generalist models still lag behind specialized systems in perception-critical domains such as neuroradiology.

Alexandru Florea, Shansong Wang, Mingzhe Hu + 5 more · 2026-03-06 · cs

Distributional Reinforcement Learning with Information Bottleneck for Uncertainty-Aware DRAM Equalization

This paper proposes a distributional risk-sensitive reinforcement learning framework that integrates Information Bottleneck representations and Conditional Value-at-Risk optimization to achieve certified worst-case DRAM equalizer performance with significant speedups and uncertainty quantification, outperforming existing methods by up to 89.1% on real-world memory data.
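As background on the risk objective named above (a generic sketch under standard definitions, not the authors' implementation), the Conditional Value-at-Risk at level α is the expected value of the worst α-fraction of outcomes, which a distributional RL agent can estimate directly from return samples:

```python
import numpy as np

def cvar(samples: np.ndarray, alpha: float) -> float:
    """CVaR_alpha: mean of the worst (lowest) alpha-fraction of return samples."""
    sorted_returns = np.sort(samples)
    k = max(1, int(np.ceil(alpha * samples.size)))
    return float(sorted_returns[:k].mean())

# Toy return distribution: CVaR_0.2 averages the two worst of ten samples.
returns = np.array([-3.0, -1.0, 0.0, 1.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0])
print(cvar(returns, alpha=0.2))  # -2.0
```

Optimizing CVaR rather than the mean return is what yields the worst-case (tail-risk) guarantees the summary refers to; the paper combines this with Information Bottleneck representations.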

Muhammad Usama, Dong Eui Chang · 2026-03-06 · cs

Distributional Equivalence in Linear Non-Gaussian Latent-Variable Cyclic Causal Models: Characterization and Learning

This paper presents the first structural-assumption-free causal discovery method for linear non-Gaussian latent-variable cyclic models by establishing a graphical criterion for distributional equivalence, introducing edge rank constraints, and providing an algorithm to recover models up to this equivalence class.

Haoyue Dai, Immanuel Albrecht, Peter Spirtes + 1 more · 2026-03-06 · cs