SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting

The paper proposes SR3R, a feed-forward framework that reformulates 3D super-resolution as a direct mapping from sparse low-resolution views to a high-resolution 3D Gaussian Splatting representation. By learning 3D-specific high-frequency details from large-scale multi-scene data, it achieves robust zero-shot generalization and superior reconstruction fidelity.

Xiang Feng, Xiangbo Wang, Tieshi Zhong + 7 more · 2026-03-02 · 💻 cs
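The summary above is from the paper; as a rough illustration of what a feed-forward Gaussian Splatting head typically predicts, the sketch below decodes per-pixel features into 3D Gaussian parameters (mean, scale, rotation, opacity, color). The head layout, dimensions, and activations are assumptions for illustration, not SR3R's actual architecture.

```python
import numpy as np

def decode_gaussians(features, W_head):
    """Hypothetical per-pixel Gaussian head: each feature vector is decoded
    into one 3D Gaussian (mean 3 + log-scale 3 + quaternion 4 + opacity 1 + RGB 3 = 14)."""
    params = features @ W_head                 # (N, 14) raw parameters
    means = params[:, 0:3]
    scales = np.exp(params[:, 3:6])            # exp keeps scales positive
    quats = params[:, 6:10]
    quats = quats / np.linalg.norm(quats, axis=1, keepdims=True)  # unit rotations
    opacity = 1.0 / (1.0 + np.exp(-params[:, 10]))                # sigmoid to (0, 1)
    rgb = 1.0 / (1.0 + np.exp(-params[:, 11:14]))
    return means, scales, quats, opacity, rgb

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 8)).astype(np.float32)   # 32 pixels, 8-dim features
W = rng.normal(size=(8, 14)).astype(np.float32) * 0.1
m, s, q, o, c = decode_gaussians(feats, W)
print(m.shape, o.shape)  # (32, 3) (32,)
```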

Steering and Rectifying Latent Representation Manifolds in Frozen Multi-modal LLMs for Video Anomaly Detection

This paper proposes SteerVAD, a novel tuning-free framework that enhances video anomaly detection in frozen multi-modal LLMs. It identifies latent anomaly experts and employs a hierarchical meta-controller to dynamically steer and rectify their internal representations, achieving state-of-the-art performance with minimal training data.

Zhaolin Cai, Fan Li, Huiyu Duan + 2 more · 2026-03-02 · 💻 cs
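The "steer and rectify" idea can be sketched with generic activation steering: remove each hidden state's component along a direction, then re-inject that direction at a controller-chosen strength. This is a minimal sketch of activation steering in general; the direction, gate, and function names are assumptions, not SteerVAD's actual mechanism.

```python
import numpy as np

def steer_and_rectify(hidden, direction, gate):
    """Generic activation-steering sketch: project the anomaly direction out of
    each token's hidden state (rectify), then add it back at strength `gate` (steer)."""
    d = direction / np.linalg.norm(direction)   # unit steering direction
    proj = hidden @ d                           # per-token component along d
    rectified = hidden - proj[:, None] * d      # remove the old component
    return rectified + gate * d                 # re-inject at controlled strength

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 16))   # 5 token states from a frozen layer (hypothetical)
v = rng.normal(size=16)        # hypothetical "anomaly expert" direction
h_steered = steer_and_rectify(h, v, gate=2.0)
print(h_steered.shape)  # (5, 16)
```

After the call, every token's component along the direction equals the gate value, which is what makes the intervention controllable.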

Spatio-Temporal Garment Reconstruction Using Diffusion Mapping via Pattern Coordinates

This paper presents a unified framework for high-fidelity 3D garment reconstruction from monocular images and videos. It combines Implicit Sewing Patterns with a generative diffusion model in UV space to learn expressive shape priors and enforce spatio-temporal consistency, enabling accurate recovery of both tight- and loose-fitting clothing with fine geometric details.

Yingxuan You, Ren Li, Corentin Dumery + 3 more · 2026-03-02 · 💻 cs
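A diffusion model "in UV space" means the denoising process runs over 2D pattern-coordinate maps rather than 3D geometry. Below is a textbook DDPM reverse-sampling loop over a toy UV-space map, with a placeholder denoiser; the schedule, map size, and network are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)   # standard linear noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)            # cumulative signal retention

def denoiser(x, t):
    # Placeholder for a trained UV-space denoising network predicting the noise.
    return np.zeros_like(x)

# Reverse diffusion over a UV-space garment map (H, W, C): here a toy 16x16x3 grid.
x = rng.normal(size=(16, 16, 3))     # start from pure noise
for t in reversed(range(T)):
    eps = denoiser(x, t)
    # Standard DDPM posterior mean step, plus fresh noise except at t = 0.
    x = (x - betas[t] / np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal(size=x.shape)
print(x.shape)  # (16, 16, 3)
```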

Quant Experts: Token-aware Adaptive Error Reconstruction with Mixture of Experts for Large Vision-Language Models Quantization

This paper proposes Quant Experts (QE), a token-aware adaptive error reconstruction method that uses a mixture of shared and routed low-rank adapters to dynamically compensate for quantization errors in Large Vision-Language Models. It achieves near full-precision performance across model scales without retraining.

Chenwei Jia, Baoting Li, Xuchong Zhang + 3 more · 2026-03-02 · 🤖 cs.AI
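The mechanism described, a quantized matmul corrected by one shared low-rank adapter plus a per-token routed expert adapter, can be sketched as below. The quantizer, rank, expert count, and top-1 router are all illustrative assumptions, not QE's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens, rank, n_experts = 64, 8, 4, 3

# Full-precision weight and a crude symmetric int4-style quantization (illustrative).
W = rng.normal(size=(d, d)).astype(np.float32)
scale = np.abs(W).max() / 7.0
W_q = np.round(W / scale).clip(-8, 7) * scale        # dequantized weight with error

# Hypothetical adapters: one shared low-rank pair plus routed expert pairs.
A_shared = rng.normal(size=(d, rank)) * 0.01
B_shared = rng.normal(size=(rank, d)) * 0.01
A_exp = rng.normal(size=(n_experts, d, rank)) * 0.01
B_exp = rng.normal(size=(n_experts, rank, d)) * 0.01
router = rng.normal(size=(d, n_experts))             # token-aware routing weights

def qe_forward(x):
    """Quantized matmul plus shared and token-routed low-rank error corrections."""
    base = x @ W_q
    correction = x @ A_shared @ B_shared             # shared adapter for all tokens
    gates = np.argmax(x @ router, axis=-1)           # top-1 expert per token
    for t, e in enumerate(gates):
        correction[t] += x[t] @ A_exp[e] @ B_exp[e]  # token-specific expert adapter
    return base + correction

x = rng.normal(size=(n_tokens, d)).astype(np.float32)
y = qe_forward(x)
print(y.shape)  # (8, 64)
```

In a real system the adapters would be calibrated so the corrections approximate `x @ (W - W_q)`, pulling the quantized layer back toward full precision.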

Uncertainty Quantification for Multimodal Large Language Models with Incoherence-adjusted Semantic Volume

This paper introduces UMPIRE, a training-free, efficient uncertainty quantification framework for Multimodal Large Language Models that leverages internal modality features to compute incoherence-adjusted semantic volumes. It demonstrates superior error detection and calibration across diverse modalities and challenging settings without relying on external tools.

Gregory Kang Ruey Lau, Hieu Dao, Nicole Kan Hui Lin + 1 more · 2026-03-02 · 💬 cs.CL
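The core quantity, a semantic volume, can be sketched as the log-volume spanned by embeddings of sampled responses via the Gram determinant: consistent answers span a small volume (low uncertainty), diverse answers a large one. This is a minimal sketch of the general semantic-volume idea; it omits UMPIRE's incoherence adjustment and internal-modality features.

```python
import numpy as np

def log_semantic_volume(embeddings, eps=1e-6):
    """Log-volume spanned by sampled response embeddings, via the Gram
    determinant of the mean-centered embeddings (eps regularizes rank deficiency)."""
    E = embeddings - embeddings.mean(axis=0, keepdims=True)
    G = E @ E.T
    sign, logdet = np.linalg.slogdet(G + eps * np.eye(len(E)))
    return 0.5 * logdet

rng = np.random.default_rng(0)
base = rng.normal(size=16)
consistent = base + 0.01 * rng.normal(size=(6, 16))  # near-identical responses
scattered = rng.normal(size=(6, 16))                 # semantically diverse responses
print(log_semantic_volume(consistent) < log_semantic_volume(scattered))  # True
```

A higher volume flags an input where the model's sampled answers disagree, which is the signal used for error detection and calibration.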