StructSAM: Structure- and Spectrum-Preserving Token Merging for Segment Anything Models

This paper introduces StructSAM, a novel token merging framework that preserves structural boundaries and spectral properties in Segment Anything Models (SAM) by using gradient-based energy scores and grid-based screening to achieve significant computational savings with minimal accuracy loss across natural and medical imaging benchmarks.

Duy M. H. Nguyen, Tuan A. Tran, Duong Nguyen, Siwei Xie, Trung Q. Nguyen, Mai T. N. Truong, Daniel Palenicek, An T. Le, Michael Barz, TrungTin Nguyen, Tuan Dam, Ngan Le, Minh Vu, Khoa Doan, Vien Ngo, Pengtao Xie, James Zou, Daniel Sonntag, Jan Peters, Mathias Niepert · 2026-03-10 · cs.LG
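The StructSAM summary above mentions gradient-based energy scores and grid-based screening for token merging. The abstract does not give the actual algorithm, so the following is only a hypothetical sketch of the general idea: score each token by gradient magnitude, keep the highest-energy tokens within each grid cell (boundary-like regions), and merge the rest into a cell mean. All names and parameters here (`energy_screen_merge`, `grid`, `keep_ratio`) are illustrative, not from the paper.

```python
import numpy as np

def energy_screen_merge(tokens, grads, grid=4, keep_ratio=0.5):
    """Hypothetical token screening: rank tokens by a gradient-based
    energy score and, within each grid cell, merge the lowest-energy
    tokens into their cell mean. Not StructSAM's actual method."""
    n, d = tokens.shape
    side = int(np.sqrt(n))                   # assume a square token map
    energy = np.linalg.norm(grads, axis=1)   # per-token gradient magnitude

    kept, merged = [], []
    cell = side // grid
    for gy in range(grid):
        for gx in range(grid):
            # indices of tokens falling in this grid cell
            idx = np.array([(gy * cell + y) * side + (gx * cell + x)
                            for y in range(cell) for x in range(cell)])
            order = np.argsort(-energy[idx])          # high energy first
            k = max(1, int(len(idx) * keep_ratio))
            kept.extend(idx[order[:k]])               # keep boundary-like tokens
            low = idx[order[k:]]
            if len(low):
                merged.append(tokens[low].mean(axis=0))  # merge the rest
    return np.vstack([tokens[kept]] + ([np.vstack(merged)] if merged else []))
```

With a 16×16 token map, `grid=4`, and `keep_ratio=0.5`, each of the 16 cells keeps 8 tokens and emits 1 merged token, shrinking 256 tokens to 144.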

Faster-HEAL: An Efficient and Privacy-Preserving Collaborative Perception Framework for Heterogeneous Autonomous Vehicles

Faster-HEAL is a lightweight, privacy-preserving collaborative perception framework that addresses the challenges of heterogeneous autonomous vehicles by using low-rank visual prompt fine-tuning and pyramid fusion to align diverse features into a unified space, achieving superior detection performance with significantly reduced computational overhead compared to state-of-the-art methods.

Armin Maleki, Hayder Radha · 2026-03-10 · cs
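The Faster-HEAL summary above highlights low-rank visual prompt fine-tuning as the source of its reduced overhead. The paper's specific design is not given here, so below is a generic LoRA-style sketch of the underlying idea: freeze the backbone weight and train only a low-rank update, `y = x @ (W + A @ B)`. The class name and initialization scheme are illustrative assumptions.

```python
import numpy as np

class LowRankAdapter:
    """Generic LoRA-style adapter around a frozen linear layer:
    y = x @ (W + A @ B), with only A and B trainable.
    Faster-HEAL's actual adapter design may differ."""
    def __init__(self, w_frozen, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        d_in, d_out = w_frozen.shape
        self.w = w_frozen                            # frozen backbone weight
        self.a = rng.normal(0, 0.02, (d_in, rank))   # trainable down-projection
        self.b = np.zeros((rank, d_out))             # trainable up-projection, zero init

    def forward(self, x):
        # Low-rank update adds rank*(d_in + d_out) trainable parameters
        # instead of d_in*d_out for full fine-tuning.
        return x @ self.w + x @ self.a @ self.b
```

Zero-initializing `B` makes the adapter an exact no-op at the start of fine-tuning, so training begins from the frozen model's behavior.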

AgrI Challenge: A Data-Centric AI Competition for Cross-Team Validation in Agricultural Vision

The AgrI Challenge introduces a data-centric competition framework featuring Cross-Team Validation to demonstrate that while single-source training suffers from significant generalization gaps in agricultural vision, collaborative multi-source training on independently collected, heterogeneous datasets dramatically improves model robustness and real-world performance.

Mohammed Brahimi, Karim Laabassi, Mohamed Seghir Hadj Ameur, Aicha Boutorh, Badia Siab-Farsi, Amin Khouani, Omar Farouk Zouak, Seif Eddine Bouziane, Kheira Lakhdari, Abdelkader Nabil Benghanem · 2026-03-10 · cs.LG

AQuA: Toward Strategic Response Generation for Ambiguous Visual Questions

This paper introduces AQuA, a fine-grained dataset that categorizes ambiguous visual questions into four levels with corresponding optimal response strategies, demonstrating that fine-tuning Vision-Language Models on this dataset enables them to effectively recognize ambiguity and adaptively generate context-appropriate responses such as seeking clarification or listing alternatives, thereby outperforming existing baselines.

Jihyoung Jang, Hyounghun Kim · 2026-03-10 · cs.CL

Prompt-Based Caption Generation for Single-Tooth Dental Images Using Vision-Language Models

This paper addresses the lack of specialized dental datasets by proposing a framework that uses Vision-Language Models with guided prompts to generate high-quality, holistic captions for single-tooth RGB images, thereby enabling more comprehensive dental image analysis.

Anastasiia Sukhanova, Aiden Taylor, Julian Myers, Zichun Wang, Kartha Veerya Jammuladinne, Satya Sri Rajiteswari Nimmagadda, Aniruddha Maiti, Ananya Jana · 2026-03-10 · cs

UnSCAR: Universal, Scalable, Controllable, and Adaptable Image Restoration

The paper introduces UnSCAR, a scalable and controllable universal image restoration framework that utilizes a multi-branch mixture-of-experts architecture to overcome the limitations of catastrophic forgetting and performance degradation in existing all-in-one models when handling multiple real-world degradations.

Debabrata Mandal, Soumitri Chattopadhyay, Yujie Wang, Marc Niethammer, Praneeth Chakravarthula · 2026-03-10 · cs

Generalization in Online Reinforcement Learning for Mobile Agents

This paper addresses the underexplored challenge of generalization in online reinforcement learning for mobile GUI agents by introducing the AndroidWorld-Generalization benchmark and a scalable GRPO-based training system, demonstrating that while RL significantly improves zero-shot performance on unseen task instances, generalization to new templates and applications remains difficult and benefits from test-time few-shot adaptation.

Li Gu, Zihuan Jiang, Zhixiang Chi, Huan Liu, Ziqiang Wang, Yuanhao Yu, Glen Berseth, Yang Wang · 2026-03-10 · cs.LG

DogWeave: High-Fidelity 3D Canine Reconstruction from a Single Image via Normal Fusion and Conditional Inpainting

DogWeave is a novel framework that reconstructs high-fidelity 3D canine models from a single RGB image by refining parametric meshes into detailed SDF representations via diffusion-enhanced normal optimization and generating view-consistent textures through conditional inpainting, overcoming challenges such as self-occlusion and fine fur detail to outperform existing state-of-the-art methods.

Shufan Sun, Chenchen Wang, Zongfu Yu · 2026-03-10 · cs

Med-Evo: Test-time Self-evolution for Medical Multimodal Large Language Models

Med-Evo is a novel self-evolution framework for medical multimodal large language models that leverages label-free reinforcement learning, featuring Feature-driven Pseudo Labeling and Hard-Soft Reward mechanisms, to significantly enhance model performance on unlabeled test data without requiring additional annotated medical datasets.

Dunyuan Xu, Xikai Yang, Juzheng Miao, Yaoqian Li, Jinpeng Li, Pheng-Ann Heng · 2026-03-10 · cs

SLNet: A Super-Lightweight Geometry-Adaptive Network for 3D Point Cloud Recognition

The paper introduces SLNet, a super-lightweight 3D point cloud recognition network utilizing Nonparametric Adaptive Point Embedding (NAPE) and Geometric Modulation Units (GMU) to achieve state-of-the-art accuracy on benchmarks like ModelNet40 and ScanObjectNN with significantly fewer parameters and computational costs compared to existing models.

Mohammad Saeid, Amir Salarpour, Pedram MohajerAnsari, Mert D. Pesé · 2026-03-10 · cs.LG
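The SLNet summary above names a Nonparametric Adaptive Point Embedding (NAPE) as one source of its small parameter count. The abstract gives no construction details, so the following is only a speculative sketch of what a nonparametric point embedding can look like in general: mapping raw xyz coordinates through fixed sin/cos bases with geometrically spaced frequencies, so the embedding has no learned weights at all. The function name and hyperparameters are assumptions, not from the paper.

```python
import numpy as np

def nonparametric_point_embedding(points, dim=32, alpha=100.0, beta=1000.0):
    """Speculative nonparametric embedding: lift (n, 3) xyz coordinates
    to a higher-dimensional feature with fixed trigonometric bases and
    no trainable parameters. SLNet's actual NAPE may differ."""
    n, _ = points.shape
    n_freq = dim // 6
    # Geometrically spaced frequencies, one band per output channel group.
    freq = alpha ** (np.arange(n_freq) / n_freq)
    # Phase grid of shape (n, 3, n_freq): each coordinate times each frequency.
    phase = beta * points[:, :, None] * freq[None, None, :]
    # Concatenate sin and cos responses, then flatten per point.
    emb = np.concatenate([np.sin(phase), np.cos(phase)], axis=-1)
    return emb.reshape(n, -1)   # shape (n, 3 * 2 * n_freq)
```

Because the bases are fixed, the embedding contributes zero parameters to the model, which is consistent with the lightweight design the summary describes.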