A White-Box SVM Framework and its Swarm-Based Optimization for Supervision of Toothed Milling Cutter through Characterization of Spindle Vibrations

This paper presents a white-box support vector machine framework optimized by five meta-heuristic swarm algorithms to monitor the health of toothed milling cutters in real time by characterizing spindle vibrations and selecting relevant statistical features through Recursive Feature Elimination with Cross-Validation.

Tejas Y. Deo, B. B. Deshmukh, Keshav H. Jatakar, Kamlesh M. Chhajed, S. S. Pardeshi, R. Jegadeeshwaran, Apoorva N. Khairnar, Hrushikesh S. Khade, A. D. Patange · 2026-03-10 · cs.LG
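The feature-selection step named in the summary, Recursive Feature Elimination with Cross-Validation wrapped around an SVM, can be sketched with scikit-learn. The data here is a synthetic stand-in, not the paper's vibration dataset, and the specific hyperparameters are illustrative assumptions.

```python
# Sketch of RFECV around a linear SVM, as the summary describes.
# Synthetic features stand in for statistical features of spindle
# vibration signals; all parameter values are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC

# Stand-in for statistical features extracted from vibration signals.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)

# RFECV requires a model exposing coef_ or feature_importances_,
# hence a linear-kernel SVM here.
selector = RFECV(SVC(kernel="linear"), step=1, cv=5)
selector.fit(X, y)

print("selected features:", selector.n_features_)
print("support mask:", selector.support_)
```

RFECV repeatedly drops the least-weighted feature and keeps the subset with the best cross-validated score, which is what makes the resulting white-box SVM depend only on relevant statistics.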

Automated Reinforcement Learning: An Overview

This paper provides a comprehensive overview of Automated Reinforcement Learning (AutoRL), surveying existing literature including recent LLM-based techniques, discussing promising non-tailored methods for future integration, and outlining current challenges and research directions in automating MDP modeling, algorithm selection, and hyper-parameter optimization.

Reza Refaei Afshar, Joaquin Vanschoren, Uzay Kaymak, Rui Zhang, Yaoxin Wu, Wen Song, Yingqian Zhang · 2026-03-10 · cs.LG
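The simplest AutoRL component the overview covers, hyper-parameter optimization, can be illustrated with random search over a learning rate for a toy bandit agent. The environment, search space, and agent below are made-up stand-ins, not anything from the survey.

```python
# Hedged sketch of AutoRL's hyper-parameter optimization loop:
# random search over the step size of epsilon-greedy value estimation
# on a toy 2-armed bandit. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def run_bandit(lr, steps=200):
    """Average reward of epsilon-greedy estimation with step size lr."""
    q = np.zeros(2)
    means = np.array([0.2, 0.8])     # true arm means
    total = 0.0
    for _ in range(steps):
        arm = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q))
        r = means[arm] + 0.1 * rng.standard_normal()
        q[arm] += lr * (r - q[arm])  # incremental value update
        total += r
    return total / steps

# The AutoRL "outer loop": sample configurations, evaluate, keep the best.
candidates = 10.0 ** rng.uniform(-3, 0, size=8)
scores = np.array([run_bandit(lr) for lr in candidates])
best_lr = candidates[np.argmax(scores)]
```

The survey's point is that this outer loop (and richer versions of it, including LLM-driven ones) can also cover algorithm selection and MDP modeling, not just scalar hyper-parameters.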

Explainable classification of astronomical uncertain time series

This paper proposes an uncertainty-aware, explainable-by-design subsequence-based model that achieves state-of-the-art classification performance for astronomical uncertain time series by incorporating data uncertainty as an input, thereby enabling domain experts to inspect predictions and potentially inspire new theoretical astrophysics developments.

Michael Franklin Mbouopda (LIMOS, UCA), Emille E. O. Ishida (LIMOS, UCA), Engelbert Mephu Nguifo (LIMOS, UCA), Emmanuel Gangler (LPC, UCA) · 2026-03-10 · astro-ph
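The core idea of taking data uncertainty as an input can be sketched as a dissimilarity between a subsequence and a window that propagates per-point error bars. The propagation rule below is a generic first-order one chosen for illustration; the paper's exact formulation may differ.

```python
# Hedged sketch: squared Euclidean distance between two uncertain
# subsequences, with first-order error propagation for (a - b)^2
# assuming independent per-point errors da, db. Illustrative only.
import numpy as np

def uncertain_sq_dist(a, da, b, db):
    """Return (value, uncertainty) of sum_i (a_i - b_i)^2."""
    diff = a - b
    value = np.sum(diff ** 2)
    # d[(a-b)^2] ~ 2|a-b| * (da + db) per point, summed over points.
    uncertainty = np.sum(2.0 * np.abs(diff) * (da + db))
    return value, uncertainty

a = np.array([1.0, 2.0, 3.0]); da = np.array([0.1, 0.1, 0.1])
b = np.array([1.5, 2.0, 2.0]); db = np.array([0.2, 0.0, 0.3])
v, dv = uncertain_sq_dist(a, da, b, db)
```

A subsequence-based classifier built on such a measure stays explainable by design: each prediction can be traced to the matched subsequences and their uncertainty budgets.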

Survey of Computerized Adaptive Testing: A Machine Learning Perspective

This paper presents a machine learning-focused survey of Computerized Adaptive Testing (CAT), exploring how ML techniques can optimize measurement models, question selection, bank construction, and test control to create more robust, fair, and efficient adaptive assessment systems across various domains.

Yan Zhuang, Qi Liu, Haoyang Bi, Zhenya Huang, Weizhe Huang, Jiatong Li, Junhao Yu, Zirui Liu, Zirui Hu, Yuting Hong, Zachary A. Pardos, Haiping Ma, Mengxiao Zhu, Shijin Wang, Enhong Chen · 2026-03-10 · cs.LG
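The question-selection component that CAT surveys typically build on is the classical maximum-Fisher-information rule under a 2PL IRT model: pick the unasked item with the largest information at the current ability estimate. The item parameters below are made up for illustration.

```python
# Sketch of classical CAT question selection under a 2PL IRT model:
# I_j(theta) = a_j^2 * P_j(theta) * (1 - P_j(theta)), where P_j is the
# probability of a correct response. Item parameters are illustrative.
import numpy as np

def item_information(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # correct-response prob
    return a**2 * p * (1.0 - p)

a = np.array([1.0, 2.0, 1.5])     # discrimination parameters
b = np.array([0.0, 0.5, -1.0])    # difficulty parameters
theta = 0.5                       # current ability estimate

info = item_information(theta, a, b)
next_item = int(np.argmax(info))  # most informative item at theta
```

The ML perspective in the survey is about replacing or augmenting pieces of this loop (the measurement model, the selection rule, bank construction, test control) with learned components.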

LoRA-Ensemble: Efficient Uncertainty Modelling for Self-Attention Networks

The paper introduces LoRA-Ensemble, a parameter-efficient method that leverages Low-Rank Adaptation to create an implicit ensemble for self-attention networks, achieving superior calibration and accuracy comparable to explicit ensembles while significantly reducing computational and memory costs.

Dominik J. Mühlematter, Michelle Halbheer, Alexander Becker, Dominik Narnhofer, Helge Aasen, Konrad Schindler, Mehmet Ozgur Turkoglu · 2026-03-10 · cs.LG
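The parameter-efficiency argument can be sketched in a few lines: all ensemble members share one frozen weight matrix and differ only in small low-rank adapters, so the per-member cost is O(r(d_in + d_out)) instead of O(d_in · d_out). The shapes and toy forward pass below are illustrative, not the paper's attention architecture.

```python
# Minimal numpy sketch of the LoRA-Ensemble idea: members share a
# frozen weight W and differ only in low-rank adapters B_m @ A_m.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, members = 8, 4, 2, 5

W = rng.normal(size=(d_out, d_in))            # shared, frozen backbone weight
adapters = [(rng.normal(size=(d_out, rank)),  # B_m
             rng.normal(size=(rank, d_in)))   # A_m
            for _ in range(members)]

def member_forward(x, m):
    B, A = adapters[m]
    return (W + B @ A) @ x                    # effective weight W + B_m A_m

x = rng.normal(size=d_in)
preds = np.stack([member_forward(x, m) for m in range(members)])
mean, var = preds.mean(axis=0), preds.var(axis=0)  # ensemble mean + spread
```

The disagreement across members (`var`) is what supplies the calibrated uncertainty, at a fraction of the memory of an explicit ensemble of full networks.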

Fast Explanations via Policy Gradient-Optimized Explainer

This paper introduces Fast Explanation (FEX), a novel framework that utilizes policy gradient optimization to represent attribution-based explanations as probability distributions, achieving over 97% reduction in inference time and 70% less memory usage compared to traditional model-agnostic methods while maintaining high-quality, scalable explanations for image and text classification tasks.

Deng Pan, Nuno Moniz, Nitesh Chawla · 2026-03-10 · cs.LG
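The abstract's mechanism, treating an attribution map as a probability distribution updated by policy gradient, can be sketched with a REINFORCE-style loop: sample a feature to ablate, reward it by how much the model output changes. The toy linear "model" and reward below are stand-ins for illustration, not the paper's architecture.

```python
# Hedged sketch of the FEX idea: an attribution distribution over
# features, trained with a REINFORCE-style policy gradient where the
# reward is the output change caused by ablating the sampled feature.
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
true_weights = np.array([0.0, 0.0, 5.0, 0.0])   # only feature 2 matters
model = lambda x: true_weights @ x               # toy stand-in model

x = np.ones(n_features)
logits = np.zeros(n_features)                    # explainer parameters

for _ in range(500):
    p = np.exp(logits) / np.exp(logits).sum()    # attribution distribution
    i = rng.choice(n_features, p=p)              # sample a feature to ablate
    x_masked = x.copy(); x_masked[i] = 0.0
    reward = abs(model(x) - model(x_masked))     # importance of feature i
    grad = -p; grad[i] += 1.0                    # grad of log p_i (softmax)
    logits += 0.1 * reward * grad                # REINFORCE ascent step
```

Once trained, producing an explanation is a single forward pass through the explainer (here, a softmax), which is the source of the speed and memory savings the abstract reports over perturbation-based methods.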

Exploring Diffusion Models' Corruption Stage in Few-Shot Fine-tuning and Mitigating with Bayesian Neural Networks

This paper identifies a "corruption stage" in few-shot fine-tuned diffusion models caused by a narrowed learning distribution and proposes a Bayesian Neural Network approach with variational inference to broaden this distribution, thereby mitigating corruption and improving image fidelity, quality, and diversity without additional inference costs.

Xiaoyu Wu, Jiaru Zhang, Yang Hua, Bohan Lyu, Hao Wang, Tao Song, Haibing Guan · 2026-03-10 · cs.LG
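The Bayesian treatment of weights via variational inference can be sketched at its smallest scale: each weight gets a Gaussian posterior sampled with the reparameterization trick, regularized by a closed-form KL to a standard-normal prior. The values are toy; the paper applies this to fine-tuned diffusion weights.

```python
# Minimal sketch of a variational Gaussian weight posterior:
# q(w) = N(mu, sigma^2) sampled via the reparameterization trick,
# with closed-form KL to a N(0, 1) prior. Toy values throughout.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])        # variational means
log_sigma = np.array([-1.0, -2.0])
sigma = np.exp(log_sigma)

# Reparameterization: w = mu + sigma * eps keeps the sample
# differentiable with respect to (mu, sigma).
eps = rng.standard_normal(2)
w = mu + sigma * eps

# KL( N(mu, sigma^2) || N(0, 1) ) per weight, in closed form.
kl = 0.5 * (sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
```

Keeping sigma away from zero is exactly what "broadens the learned distribution": the model fits a neighborhood of weight configurations rather than a single narrow point, which is the proposed antidote to the corruption stage.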

Mini-batch Estimation for Deep Cox Models: Statistical Foundations and Practical Guidance

This paper establishes the statistical foundations of the mini-batch maximum partial-likelihood estimator (mb-MPLE) for deep Cox models optimized via stochastic gradient descent, proving its consistency and optimal convergence rates while providing practical guidance on hyperparameter tuning and demonstrating its effectiveness in large-scale applications where standard estimation is intractable.

Lang Zeng, Weijing Tang, Zhao Ren, Ying Ding · 2026-03-10 · cs.LG
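The estimator being analyzed can be sketched directly: within a mini-batch, each observed event contributes its score minus a log-sum-exp over the batch-local risk set (subjects still at risk at that event time). A linear predictor stands in for the deep network here, and the toy batch is illustrative.

```python
# Sketch of the mini-batch negative log partial likelihood for a Cox
# model: risk sets are formed within the batch rather than over the
# full cohort, which is what makes SGD training tractable at scale.
import numpy as np

def mb_cox_neg_log_partial_likelihood(eta, times, events):
    """eta: predictors for the batch; events: 1 if the event was observed."""
    loss = 0.0
    for i in range(len(eta)):
        if events[i] == 1:
            at_risk = times >= times[i]          # batch-local risk set
            loss -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return loss

eta = np.array([0.2, -0.5, 1.0])     # toy linear predictors
times = np.array([2.0, 5.0, 3.0])    # follow-up times
events = np.array([1, 0, 1])         # event indicators (0 = censored)
loss = mb_cox_neg_log_partial_likelihood(eta, times, events)
```

The paper's contribution is showing that this batch-local approximation still yields a consistent estimator with optimal convergence rates, plus guidance on batch size and other hyperparameters.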

Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling

This paper proposes a novel Variational Learning framework for Gaussian Process Latent Variable Models that utilizes Stochastic Gradient Annealed Importance Sampling to overcome proposal distribution challenges in high-dimensional spaces, achieving tighter variational bounds and superior performance compared to state-of-the-art methods.

Jian Xu, Shian Du, Junmei Yang, Qianli Ma, Delu Zeng, John Paisley · 2026-03-10 · cs.LG
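The estimator at the heart of the method, annealed importance sampling, can be shown on a one-dimensional example where the answer is known: anneal from a N(0, 1) proposal to an unnormalized wider Gaussian target, whose true log normalizer ratio is log 2. The schedule, chain count, and Metropolis step are illustrative choices, not the paper's.

```python
# Hedged sketch of Annealed Importance Sampling (AIS): geometric
# bridging from f0 = N(0,1) (up to a constant) to the unnormalized
# target f1(x) = exp(-x^2 / (2 * 2^2)). Both share the sqrt(2*pi)
# factor, so the exact log normalizer ratio is log(2).
import numpy as np

rng = np.random.default_rng(0)

log_f0 = lambda x: -0.5 * x**2           # log N(0,1), up to a constant
log_f1 = lambda x: -0.5 * x**2 / 4.0     # log target, sigma = 2

betas = np.linspace(0.0, 1.0, 50)        # annealing schedule
n_chains = 2000

x = rng.standard_normal(n_chains)        # exact samples from f0
log_w = np.zeros(n_chains)               # log importance weights

for b0, b1 in zip(betas[:-1], betas[1:]):
    log_pb = lambda z, b: (1 - b) * log_f0(z) + b * log_f1(z)
    log_w += log_pb(x, b1) - log_pb(x, b0)   # weight increment
    # One Metropolis step targeting the intermediate distribution.
    prop = x + rng.standard_normal(n_chains)
    accept = np.log(rng.random(n_chains)) < log_pb(prop, b1) - log_pb(x, b1)
    x = np.where(accept, prop, x)

log_ratio = np.log(np.mean(np.exp(log_w)))   # estimate of log(Z1/Z0)
```

Running the transitions with stochastic gradients, as the paper does, is what lets the same construction scale to the high-dimensional latent posteriors of a GPLVM, where a single fixed proposal would be far too crude.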

From Model Explanation to Data Misinterpretation: A Cautionary Analysis of Post Hoc Explainers in Business Research

This paper cautions against treating post hoc explainers such as SHAP and LIME as definitive evidence of underlying data relationships in business research. Through a systematic review and simulation, it demonstrates that their explanations often misalign with the true data-generating process due to feature correlation and the Rashomon effect, and it concludes that such explainers should serve as exploratory tools rather than as a basis for hypothesis validation.

Tong Wang, Ronilo Ragodos, Lu Feng, Yu (Jeffrey) Hu · 2026-03-10 · cs.LG
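The feature-correlation failure mode the paper warns about can be reproduced in a few lines: the outcome depends only on one feature, a second feature is a near-copy of it, and a regularized model splits credit between the two. A coefficient-based "explanation" of that model would then report an effect of a feature absent from the data-generating process. The simulation below is a generic illustration, not the paper's study design.

```python
# Hedged illustration: a faithful explanation of the *model* need not
# reflect the *data-generating process* when features are correlated.
# y depends only on x1; x2 is a near-copy; ridge splits the credit.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 5000
x1 = rng.standard_normal(n)
x2 = x1 + 0.01 * rng.standard_normal(n)        # nearly collinear proxy
y = 2.0 * x1 + 0.1 * rng.standard_normal(n)    # DGP uses x1 only

model = Ridge(alpha=1.0).fit(np.column_stack([x1, x2]), y)
coef_x1, coef_x2 = model.coef_
# The model predicts well, yet assigns x2 a substantial coefficient
# even though x2 plays no role in the data-generating process.
```

The Rashomon effect compounds this: many differently weighted models fit the data equally well, so the one an explainer happens to describe is not privileged evidence about the data.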