Thinking with Images as Continuous Actions: Numerical Visual Chain-of-Thought

This paper proposes Numerical Visual Chain-of-Thought (NV-CoT), a framework that enables multimodal large language models to perform precise region-grounded reasoning by generating continuous numerical coordinates as actions. By moving beyond discrete text-based or fixed-patch approaches, NV-CoT improves both localization accuracy and training efficiency.

Kesen Zhao, Beier Zhu, Junbao Zhou + 3 more · 2026-03-02 · cs

SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting

The paper proposes SR3R, a feed-forward framework that reformulates 3D super-resolution as a direct mapping from sparse low-resolution views to high-resolution 3D Gaussian Splatting representations, enabling robust zero-shot generalization and superior reconstruction fidelity by autonomously learning 3D-specific high-frequency details from large-scale multi-scene data.

Xiang Feng, Xiangbo Wang, Tieshi Zhong + 7 more · 2026-03-02 · cs

Steering and Rectifying Latent Representation Manifolds in Frozen Multi-modal LLMs for Video Anomaly Detection

This paper proposes SteerVAD, a novel tuning-free framework that enhances video anomaly detection in frozen multi-modal LLMs by identifying latent anomaly experts and employing a hierarchical meta-controller to dynamically steer and rectify their internal representations, thereby achieving state-of-the-art performance with minimal training data.

Zhaolin Cai, Fan Li, Huiyu Duan + 2 more · 2026-03-02 · cs

Spatio-Temporal Garment Reconstruction Using Diffusion Mapping via Pattern Coordinates

This paper presents a unified framework for high-fidelity 3D garment reconstruction from monocular images and videos by combining Implicit Sewing Patterns with a generative diffusion model in UV space to learn expressive shape priors and enforce spatio-temporal consistency, enabling accurate recovery of both tight- and loose-fitting clothing with fine geometric details.

Yingxuan You, Ren Li, Corentin Dumery + 3 more · 2026-03-02 · cs

Quant Experts: Token-aware Adaptive Error Reconstruction with Mixture of Experts for Large Vision-Language Models Quantization

This paper proposes Quant Experts (QE), a token-aware adaptive error reconstruction method that uses a mixture of shared and routed low-rank adapters to dynamically compensate for quantization errors in large vision-language models, achieving near full-precision performance across various model scales without retraining.

Chenwei Jia, Baoting Li, Xuchong Zhang + 3 more · 2026-03-02 · cs.AI