Norm-Hierarchy Transitions in Representation Learning: When and Why Neural Networks Abandon Shortcuts

This paper introduces the Norm-Hierarchy Transition (NHT) framework, which explains why neural networks initially favor spurious shortcuts over structured representations: weight decay only slowly drives the model from high-norm shortcut solutions to lower-norm structured ones, and the transition delay scales logarithmically with the ratio between these norms.

Truong Xuan Khanh, Truong Quynh Hoa · 2026-03-10 · cs.LG
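The logarithmic delay has a simple back-of-the-envelope form under pure weight decay (an illustrative sketch only, not the paper's dynamics): decoupled weight decay alone shrinks a weight norm exponentially, so the time to cross from a high-norm to a low-norm solution grows with the log of their ratio. The function name below is a hypothetical helper, not from the paper.

```python
import math

def transition_delay(norm_high, norm_low, weight_decay):
    """Time for pure weight decay to shrink a weight norm from
    norm_high to norm_low: ||w(t)|| = norm_high * exp(-wd * t),
    so t = log(norm_high / norm_low) / wd."""
    return math.log(norm_high / norm_low) / weight_decay

# Squaring the norm ratio only doubles the delay (logarithmic scaling).
d1 = transition_delay(10.0, 1.0, 1e-2)   # ratio 10
d2 = transition_delay(100.0, 1.0, 1e-2)  # ratio 100
print(d2 / d1)  # → 2.0, since log(100)/log(10) = 2
```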

Explainable and Hardware-Efficient Jamming Detection for 5G Networks Using the Convolutional Tsetlin Machine

This paper proposes and validates a hardware-efficient, explainable Convolutional Tsetlin Machine (CTM) for real-time 5G jamming detection that achieves accuracy comparable to convolutional neural networks while significantly reducing training time and memory usage and enabling deterministic FPGA deployment.

Vojtech Halenka, Mohammadreza Amini, Per-Arne Andersen, Ole-Christoffer Granmo, Burak Kantarci · 2026-03-10 · cs.LG

AgrI Challenge: A Data-Centric AI Competition for Cross-Team Validation in Agricultural Vision

The AgrI Challenge introduces a data-centric competition framework featuring Cross-Team Validation to demonstrate that while single-source training suffers from significant generalization gaps in agricultural vision, collaborative multi-source training on independently collected, heterogeneous datasets dramatically improves model robustness and real-world performance.

Mohammed Brahimi, Karim Laabassi, Mohamed Seghir Hadj Ameur, Aicha Boutorh, Badia Siab-Farsi, Amin Khouani, Omar Farouk Zouak, Seif Eddine Bouziane, Kheira Lakhdari, Abdelkader Nabil Benghanem · 2026-03-10 · cs.LG

Latent Generative Models with Tunable Complexity for Compressed Sensing and other Inverse Problems

This paper introduces tunable-complexity priors for generative models like diffusion models, normalizing flows, and VAEs by leveraging nested dropout, demonstrating that adaptively adjusting model dimensionality significantly improves reconstruction performance across various inverse problems compared to fixed-complexity baselines.

Sean Gunn, Jorio Cocola, Oliver De Candido, Vaggos Chatziafratis, Paul Hand · 2026-03-10 · cs.LG
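Nested dropout (Rippel et al., 2014), which this paper builds on, orders latent dimensions by importance: a random cutoff index is sampled during training and every dimension after it is zeroed, so a trained model can later be truncated to any prefix length. A minimal NumPy sketch of the masking step; the geometric sampling distribution here is an illustrative assumption, not the paper's choice:

```python
import numpy as np

def nested_dropout(z, rng, p=0.1):
    """Nested dropout mask: sample a cutoff index k and zero out all
    latent dimensions after k, so earlier dimensions are forced to
    carry the coarser, more important part of the code."""
    d = z.shape[-1]
    k = min(int(rng.geometric(p)), d)  # keep dims 0..k-1
    mask = np.zeros(d)
    mask[:k] = 1.0
    return z * mask

rng = np.random.default_rng(0)
print(nested_dropout(np.ones(8), rng))  # a prefix of ones, then zeros
```

At inference, reconstruction complexity can then be tuned by simply choosing how many leading latent dimensions to keep.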

Scaling Laws in the Tiny Regime: How Small Models Change Their Mistakes

This paper reveals that in the sub-20M parameter "tiny" regime, models follow steeper but non-uniform scaling laws where increasing size not only reduces overall error but fundamentally alters the structure of mistakes, shifts capacity from easy to hard classes, and paradoxically degrades calibration, necessitating validation at the specific target model size for edge AI deployment.

Mohammed Alnemari, Rizwan Qureshi, Nader Begrazadah · 2026-03-10 · cs.LG

Learning to Reflect: Hierarchical Multi-Agent Reinforcement Learning for CSI-Free mmWave Beam-Focusing

This paper proposes a CSI-free Hierarchical Multi-Agent Reinforcement Learning framework that leverages user localization data and a two-level control architecture to efficiently optimize mechanically reconfigurable mmWave beam-focusing, achieving significant RSSI improvements and robust scalability without the overhead of channel state information estimation.

Hieu Le, Oguz Bedir, Mostafa Ibrahim, Jian Tao, Sabit Ekin · 2026-03-10 · cs.LG

Domain-Specific Quality Estimation for Machine Translation in Low-Resource Scenarios

This paper addresses the challenge of domain-specific machine translation quality estimation in low-resource scenarios by demonstrating that while prompt-only methods are fragile for open-weight models, adapting intermediate Transformer layers via Low-Rank Adaptation (ALOPE) and Low-Rank Multiplicative Adaptation (LoRMA) significantly improves robustness and performance across English-to-Indic language pairs.

Namrata Patil Gurav, Akashdeep Ranu, Archchana Sindhujan, Diptesh Kanojia · 2026-03-10 · cs.LG
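For reference, the standard additive LoRA update underlying layer adaptation looks as follows (a generic sketch of LoRA itself, not the paper's ALOPE or LoRMA code; by its name, LoRMA replaces this addition with a learned multiplicative factor). All variable names are illustrative:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, rank=8):
    """Additive LoRA (Hu et al., 2021): a frozen weight W is augmented
    with a trainable low-rank product B @ A, scaled by alpha/rank."""
    return x @ (W + (alpha / rank) * (B @ A)).T

d_in, d_out, r = 64, 32, 8
W = np.random.randn(d_out, d_in) * 0.02  # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, init zero
x = np.random.randn(1, d_in)
# With B initialized to zero, the adapted layer matches the frozen one,
# so fine-tuning starts exactly at the pretrained behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```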

Feed m Birds with One Scone: Accelerating Multi-task Gradient Balancing via Bi-level Optimization

This paper introduces MARIGOLD, a unified bi-level optimization framework that leverages zeroth-order methods to efficiently solve multi-task learning problems by dynamically balancing task gradients without requiring access to all task gradients, thereby overcoming the computational inefficiency of existing MGDA-type approaches.

Xuxing Chen, Yun He, Jiayi Xu, Minhui Huang, Xiaoyi Liu, Boyang Liu, Fei Tian, Xiaohan Wei, Rong Jin, Sem Park, Bo Long, Xue Feng · 2026-03-10 · cs.LG
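The zeroth-order idea can be illustrated with the classic two-point gradient estimator, which trades exact gradients for a pair of function evaluations along a random direction (a generic sketch; MARIGOLD's actual estimator and bi-level structure are more involved):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order estimator: probe f along a random Gaussian
    direction u and return (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
    whose expectation over u recovers grad f(x) up to O(mu^2) bias."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

f = lambda x: 0.5 * np.sum(x ** 2)   # true gradient is x itself
x = np.array([1.0, -2.0, 3.0])
est = np.mean([zo_gradient(f, x, rng=np.random.default_rng(s))
               for s in range(2000)], axis=0)
print(est)  # ≈ [1.0, -2.0, 3.0] after averaging many probes
```

Each estimate costs only two evaluations of the task objective, which is what makes such schemes attractive when computing every task's gradient is too expensive.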

Context Channel Capacity: An Information-Theoretic Framework for Understanding Catastrophic Forgetting

This paper introduces the information-theoretic concept of Context Channel Capacity ($C_\mathrm{ctx}$) to explain catastrophic forgetting in continual learning, proving that zero forgetting requires $C_\mathrm{ctx} \geq H(T)$ and demonstrating that architectures with structural context pathways (like HyperNetworks) bypass the Impossibility Triangle to achieve near-perfect retention, whereas methods lacking such capacity inevitably suffer significant forgetting.

Ran Cheng · 2026-03-10 · cs.LG
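The right-hand side of the bound is just the Shannon entropy of the task distribution, which makes the capacity requirement easy to quantify (a small illustration of the bound's arithmetic, not the paper's code; the function name is hypothetical):

```python
import math

def task_entropy_bits(task_probs):
    """Shannon entropy H(T) of the task distribution, in bits. By the
    bound C_ctx >= H(T), this is the minimum context capacity any
    zero-forgetting learner must route to its parameters."""
    return -sum(p * math.log2(p) for p in task_probs if p > 0)

# Ten equally likely tasks: zero forgetting needs at least
# log2(10) ≈ 3.32 bits of task identity through the context channel.
print(task_entropy_bits([0.1] * 10))

# A single task carries no identity information, so no capacity is needed.
print(task_entropy_bits([1.0]))  # → 0.0
```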