Why Channel-Centric Models are not Enough to Predict End-to-End Performance in Private 5G: A Measurement Campaign and Case Study

This paper demonstrates that channel-centric models, including ray-tracing simulators, fail to accurately predict end-to-end throughput in private 5G networks due to systematic over-estimation of MIMO spatial layers, whereas data-driven Gaussian process models trained on direct measurements provide significantly more reliable predictions for communication-aware robot planning.

Nils Jörgensen · 2026-03-11 · cs.LG

A New Modeling to Feature Selection Based on the Fuzzy Rough Set Theory in Normal and Optimistic States on Hybrid Information Systems

This paper introduces FSbuHD, a novel feature selection model for hybrid information systems that addresses the computational and noise limitations of traditional fuzzy rough set theory by reformulating the problem as an optimization task based on combined object distances, demonstrating superior efficiency and effectiveness in both normal and optimistic states across UCI datasets.

Mohammad Hossein Safarpour, Seyed Mohammad Alavi, Mohammad Izadikhah, Hossein Dibachi · 2026-03-11 · cs.AI

Cross-Domain Uncertainty Quantification for Selective Prediction: A Comprehensive Bound Ablation with Transfer-Informed Betting

This paper introduces Transfer-Informed Betting (TIB), a novel method that combines betting-based confidence sequences with cross-domain transfer learning to achieve tighter, data-efficient risk guarantees for selective prediction, demonstrating significant coverage improvements over existing bounds across multiple benchmarks and applications.

Abhinaba Basu · 2026-03-11 · cs.AI
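The betting-based confidence sequences TIB builds on can be sketched in a few lines. This is a toy two-sided construction for a [0, 1]-bounded mean, kept in log space to avoid overflow; the fixed bet fraction `lam` and the candidate grid are simplifying assumptions (practical methods adapt the bet per step, and TIB additionally incorporates cross-domain transfer, which is not shown here).

```python
# Sketch: anytime-valid confidence sequence via a betting capital process.
# A candidate mean m is rejected once the averaged two-sided capital
# 0.5 * (prod(1 + lam*(x - m)) + prod(1 - lam*(x - m))) exceeds 1/alpha.
import numpy as np

def betting_cs(xs, alpha=0.05, lam=0.5, grid=None):
    """Return the grid of candidate means not yet rejected after seeing xs."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 201)
    log_up = np.zeros_like(grid)   # bets that the mean exceeds m
    log_dn = np.zeros_like(grid)   # bets that the mean is below m
    for x in xs:
        log_up += np.log1p(lam * (x - grid))
        log_dn += np.log1p(-lam * (x - grid))
    # Averaging the two nonnegative martingales keeps Ville's inequality valid.
    log_cap = np.logaddexp(log_up, log_dn) - np.log(2.0)
    return grid[log_cap < np.log(1.0 / alpha)]

rng = np.random.default_rng(2)
xs = rng.beta(6, 3, size=500)      # true mean = 2/3
cs = betting_cs(xs)
print(cs.min(), cs.max())          # surviving interval around the mean
```

The capital for a wrong candidate grows exponentially, so the surviving set shrinks with data while remaining valid at every stopping time; transfer-informed variants tighten it further by warm-starting the bets.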

FedLECC: Cluster- and Loss-Guided Client Selection for Federated Learning under Non-IID Data

FedLECC is a lightweight client selection strategy for federated learning under non-IID data that groups clients by label-distribution similarity and prioritizes those with higher local loss, thereby significantly improving test accuracy while reducing communication rounds and overhead.

Daniel M. Jimenez-Gutierrez, Giovanni Giunta, Mehrdad Hassanzadeh, Aris Anagnostopoulos, Ioannis Chatzigiannakis, Andrea Vitaletti · 2026-03-11 · cs.AI
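The selection rule the summary describes can be sketched as a two-stage filter: cluster clients by label-distribution similarity, then within each cluster prefer those with higher local loss. The cluster count, per-round budget, and use of k-means are illustrative assumptions, not FedLECC's exact procedure.

```python
# Sketch: cluster- and loss-guided client selection under non-IID data.
import numpy as np
from sklearn.cluster import KMeans

def select_clients(label_dists, local_losses, n_clusters=3, per_cluster=2):
    """label_dists: (n_clients, n_classes) normalized label histograms.
    local_losses: (n_clients,) most recent local training losses."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(label_dists)
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(clusters == c)
        # Within each cluster, prioritize the clients whose local loss is
        # highest, i.e. whose data the global model fits worst.
        ranked = members[np.argsort(local_losses[members])[::-1]]
        selected.extend(ranked[:per_cluster].tolist())
    return selected

rng = np.random.default_rng(1)
dists = rng.dirichlet(np.ones(10), size=12)    # 12 clients, 10 classes
losses = rng.uniform(0.5, 3.0, size=12)
print(select_clients(dists, losses))
```

Sampling across clusters keeps the per-round cohort representative of the global label distribution, while the loss ranking focuses communication on the clients that move the model most.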

Quantifying Memorization and Privacy Risks in Genomic Language Models

This paper introduces a comprehensive multi-vector privacy evaluation framework that quantifies memorization risks in Genomic Language Models by integrating perplexity-based detection, canary sequence extraction, and membership inference, revealing that these models exhibit measurable data leakage dependent on architecture and training dynamics.

Alexander Nemecek, Wenbiao Li, Xiaoqian Jiang, Jaideep Vaidya, Erman Ayday · 2026-03-11 · cs.LG
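The perplexity-based detection vector mentioned in the summary rests on a simple idea: a model assigns unusually low perplexity to sequences it memorized. This toy sketch uses a smoothed trigram model over DNA strings as a stand-in for a genomic language model; the model, data, and threshold-free comparison are all illustrative assumptions.

```python
# Sketch: flag likely training members by comparing model perplexity on
# seen vs. unseen sequences (memorized sequences score lower).
import math
from collections import Counter

def train_trigram(seqs):
    tri, bi = Counter(), Counter()
    for s in seqs:
        for i in range(len(s) - 2):
            tri[s[i:i + 3]] += 1
            bi[s[i:i + 2]] += 1
    return tri, bi

def perplexity(s, tri, bi, vocab=4, alpha=1.0):
    """Add-alpha smoothed trigram perplexity over a DNA alphabet."""
    logp, n = 0.0, 0
    for i in range(len(s) - 2):
        p = (tri[s[i:i + 3]] + alpha) / (bi[s[i:i + 2]] + alpha * vocab)
        logp += math.log(p)
        n += 1
    return math.exp(-logp / max(n, 1))

train = ["ACGTACGTACGT" * 4, "AACCGGTT" * 6]
tri, bi = train_trigram(train)
member_ppl = perplexity(train[0], tri, bi)
nonmember_ppl = perplexity("TTGACTGACCAT" * 4, tri, bi)
print(member_ppl, nonmember_ppl)   # member sequence scores lower perplexity
```

Membership inference attacks operationalize exactly this gap: thresholding the perplexity (or a calibrated ratio against a reference model) separates training members from non-members.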

Vision-Language Models Encode Clinical Guidelines for Concept-Based Medical Reasoning

The paper introduces MedCBR, a novel framework that integrates clinical guidelines with vision-language models to enhance the interpretability and accuracy of medical image diagnosis by transforming visual features into guideline-conformant concepts and structured clinical narratives.

Mohamed Harmanani, Bining Long, Zhuoxin Guo, Paul F. R. Wilson, Amirhossein Sabour, Minh Nguyen Nhat To, Gabor Fichtinger, Purang Abolmaesumi, Parvin Mousavi · 2026-03-11 · cs.LG

Kernel Debiased Plug-in Estimation based on the Universal Least Favorable Submodel

This paper introduces ULFS-KDPE, a novel kernel-based estimator that achieves semiparametric efficiency for pathwise differentiable parameters in nonparametric models by constructing a data-adaptive debiasing flow via a universal least favorable submodel, thereby eliminating the need for explicit efficient influence function derivation while ensuring rigorous theoretical guarantees and computational tractability.

Haiyi Chen, Yang Liu, Ivana Malenica · 2026-03-11 · cs.LG

The qs Inequality: Quantifying the Double Penalty of Mixture-of-Experts at Inference

This paper introduces the qs inequality to demonstrate that Mixture-of-Experts (MoE) models suffer from a structural "double penalty" of routing fragmentation and memory constraints during inference, often rendering them significantly less efficient than quality-matched dense models for long-context serving despite their training-time FLOP advantages.

Vignesh Adhinarayanan, Nuwan Jayasena · 2026-03-11 · cs.LG

Semantic Level of Detail: Multi-Scale Knowledge Representation via Heat Kernel Diffusion on Hyperbolic Manifolds

This paper introduces Semantic Level of Detail (SLoD), a framework that utilizes heat kernel diffusion on hyperbolic manifolds to enable continuous, principled control over knowledge abstraction levels in AI memory systems, automatically detecting emergent semantic boundaries in both synthetic and real-world knowledge graphs without manual supervision.

Edward Izgorodin · 2026-03-11 · cs.AI