UnfoldLDM: Deep Unfolding-based Blind Image Restoration with Latent Diffusion Priors

The paper proposes UnfoldLDM, a deep unfolding framework for blind image restoration that pairs a multi-granularity degradation-aware module for robust degradation estimation with a degradation-resistant latent diffusion model and an over-smoothing correction transformer, overcoming degradation-specific dependencies and suppressing over-smoothing bias.

Chunming He, Rihan Zhang, Zheng Chen, Bowen Yang, Chengyu Fang, Yunlong Lin, Yulun Zhang, Fengyang Xiao, Sina Farsiu · 2026-03-10 · cs
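As a minimal sketch of the generic deep-unfolding idea the title refers to (hypothetical and simplified, not the UnfoldLDM method itself): an iterative least-squares restoration update is unrolled into a fixed number of stages, each with its own step size; in a real unfolding network the per-stage step sizes and modules would be learned.

```python
import numpy as np

# Hypothetical sketch of deep unfolding: the iterative update
#   x <- x - eta_k * A^T (A x - y)
# is unrolled into K stages, one loop pass per "network layer".
def unfolded_restore(A, y, step_sizes):
    x = np.zeros(A.shape[1])
    for eta in step_sizes:              # one iteration = one unrolled stage
        x = x - eta * A.T @ (A @ x - y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))         # toy degradation operator
y = A @ rng.standard_normal(4)          # toy degraded observation
eta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative, provably stable step
x_hat = unfolded_restore(A, y, [eta] * 200)
print(np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))  # relative residual
```

Learned step sizes (and learned proximal modules between stages) are what distinguish an unfolding network from plain iterative optimization.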

Yo'City: Personalized and Boundless 3D Realistic City Scene Generation via Self-Critic Expansion

This paper introduces Yo'City, an agentic framework that leverages large models for hierarchical planning and a self-critic expansion loop to generate personalized, boundless, and spatially coherent 3D realistic city scenes, outperforming existing state-of-the-art methods across multiple evaluation metrics.

Keyang Lu, Sifan Zhou, Hongbin Xu, Gang Xu, Zhifei Yang, Yikai Wang, Zhen Xiao, Jieyi Long, Ming Li · 2026-03-10 · cs

Integrating a Causal Foundation Model into a Prescriptive Maintenance Framework for Optimising Production-Line OEE

This paper proposes a prescriptive maintenance framework that integrates a pre-trained causal foundation model as a "what-if" simulator to identify root causes and recommend optimal interventions, thereby overcoming the limitations of purely predictive models to enhance production-line Overall Equipment Effectiveness (OEE).

Felix Saretzky, Lucas Andersen, Thomas Engel, Fazel Ansari · 2026-03-10 · cs

MAViD: A Multimodal Framework for Audio-Visual Dialogue Understanding and Generation

MAViD is a novel multimodal framework that employs a Conductor-Creator architecture, combining autoregressive audio and diffusion-based video generation with a specialized fusion module, to overcome existing limitations and achieve seamless, long-duration, and contextually coherent audio-visual dialogue understanding and generation.

Youxin Pang, Jiajun Liu, Lingfeng Tan, Yong Zhang, Feng Gao, Xiang Deng, Zhuoliang Kang, Xiaoming Wei, Yebin Liu · 2026-03-10 · cs

When Token Pruning is Worse than Random: Understanding Visual Token Information in VLLMs

This paper reveals that visual token information in Vision Large Language Models progressively vanishes at a depth-dependent "information horizon," beyond which existing pruning methods underperform random selection, leading to a novel strategy that integrates random pruning to achieve state-of-the-art efficiency without sacrificing accuracy.

Yahong Wang, Juncheng Wu, Zhangkai Ni, Longzhen Yang, Yihang Liu, Chengmei Yang, Ying Wen, Lianghua He, Xianfeng Tang, Hui Liu, Yuyin Zhou · 2026-03-10 · cs
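The contrast the summary draws can be sketched in toy form (hypothetical illustration, not the paper's implementation): once importance scores computed past the "information horizon" carry no signal, score-ranked pruning degenerates, while uniform random pruning still keeps an unbiased token subset.

```python
import random

# Keep the `keep` tokens with the highest scores; with degenerate
# (e.g. all-equal) scores this just keeps the first tokens.
def prune_by_score(tokens, scores, keep):
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    return [tokens[i] for i in sorted(order[:keep])]

# Keep a uniform random subset, preserving original token order.
def prune_randomly(tokens, keep, seed=0):
    idx = sorted(random.Random(seed).sample(range(len(tokens)), keep))
    return [tokens[i] for i in idx]

tokens = list(range(16))     # stand-in for 16 visual tokens
scores = [0.0] * 16          # degenerate scores past the "horizon"
print(prune_by_score(tokens, scores, keep=4))
print(prune_randomly(tokens, keep=4))
```

With meaningless scores, the ranked variant collapses to a fixed prefix of tokens, whereas random selection spreads coverage over the whole sequence.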

ReMeDI: Refined Memory for Disambiguation of Identities with SAM3 in Surgical Segmentation

The paper introduces ReMeDI-SAM3, a training-free extension of SAM3 that enhances surgical instrument segmentation in endoscopy by implementing relevance-aware memory filtering, piecewise interpolation, and feature-based re-identification to overcome challenges like occlusions and rapid motion, achieving significant zero-shot performance improvements over existing methods.

Valay Bundele, Mehran Hosseinzadeh, Hendrik P. A. Lensch · 2026-03-10 · cs

It is not always greener on the other side: Greenery perception across demographics and personalities in multiple cities

This study analyzes the discrepancies between objective and subjective urban greenery perceptions across five countries using street view imagery and a survey of 1,000 participants, revealing that while demographics and personality have little influence, an individual's geographic location is a primary factor shaping how they perceive green spaces.

Matias Quintana, Fangqi Liu, Jussi Torkko, Youlong Gu, Xiucheng Liang, Yujun Hou, Koichi Ito, Yihan Zhu, Mahmoud Abdelrahman, Tuuli Toivonen, Yi Lu, Filip Biljecki · 2026-03-10 · cs

Cost Trade-offs of Reasoning and Non-Reasoning Large Language Models in Text-to-SQL

This paper demonstrates that reasoning Large Language Models substantially reduce cloud query execution costs and data consumption compared to non-reasoning models in Text-to-SQL tasks, showing that execution time is a poor proxy for cost efficiency and that non-reasoning models' tendency to generate inefficient queries poses substantial financial risk.

Saurabh Deochake, Debajyoti Mukhopadhyay · 2026-03-10 · cs