S2DiT: Sandwich Diffusion Transformer for Mobile Streaming Video Generation

The paper introduces S2DiT, a novel Streaming Sandwich Diffusion Transformer that leverages efficient attention mechanisms, a budget-aware sandwich architecture, and a 2-in-1 distillation framework to achieve high-fidelity, real-time video generation on mobile devices with performance comparable to server-grade models.

Lin Zhao, Yushu Wu, Aleksei Lebedev, Dishani Lahiri, Meng Dong, Arpit Sahni, Michael Vasilkovsky, Hao Chen, Ju Hu, Aliaksandr Siarohin, Sergey Tulyakov, Yanzhi Wang, Anil Kag, Yanyu Li · 2026-03-10 · cs

ReViP: Mitigating False Completion in Vision-Language-Action Models with Vision-Proprioception Rebalance

This paper introduces ReViP, a novel Vision-Language-Action framework that mitigates "false completion" failures caused by proprioceptive bias through vision-proprioception rebalancing and a new benchmark suite, achieving significant performance gains over existing models.

Zhuohao Li, Yinghao Li, Jian-Jian Jiang, Lang Zhou, Tianyu Zhang, Jiadong Yin, Mu Lin, Yi-Kin Wei, Wei-Shi Zheng · 2026-03-10 · cs

ScenePilot-Bench: A Large-Scale Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving

This paper introduces ScenePilot-Bench, a large-scale benchmark built on the diverse ScenePilot-4K dataset to comprehensively evaluate and advance vision-language models in autonomous driving through multi-granularity annotations and a safety-aware, four-axis assessment framework.

Yujin Wang, Yutong Zheng, Wenxian Fan, Tianyi Wang, Hongqing Chu, Li Zhang, Bingzhao Gao, Daxin Tian, Jianqiang Wang, Hong Chen · 2026-03-10 · cs

Query-Guided Spatial-Temporal-Frequency Interaction for Music Audio-Visual Question Answering

This paper proposes QSTar, a novel query-guided spatial-temporal-frequency interaction method enhanced by a Query Context Reasoning block, which significantly improves Audio-Visual Question Answering performance by deeply integrating question-guided clues and audio frequency characteristics with visual perception, outperforming existing multimodal approaches on multiple benchmarks.

Kun Li, Michael Ying Yang, Sami Sebastian Brandt · 2026-03-10 · cs

Dynamic framework for edge-connectivity maintenance of simple graphs

This paper presents a dynamic framework for maintaining k-edge-connectivity in undirected simple graphs under edge insertions and deletions by combining Nagamochi-Ibaraki sparse certificates with Link-Cut Trees for efficient O(k log n) amortized insertions and a maximum-flow-based approach for O(k^{3/2} n^{3/2}) deletions, all while keeping the graph sparse with O(kn) edges.

Blazej Wrobel · 2026-03-10 · cs
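The sparse-certificate half of this approach can be illustrated with a short sketch (an illustration of the classical technique, not the paper's code): a Nagamochi-Ibaraki-style certificate can be built as the union of k maximal edge-disjoint spanning forests, which keeps at most k(n-1) edges while preserving edge-connectivity up to k. The function name and union-find helper below are assumptions for the sketch.

```python
def sparse_certificate(n, edges, k):
    """Union of k maximal edge-disjoint spanning forests.

    Keeps at most k*(n-1) of the input edges; the resulting subgraph
    preserves the edge-connectivity of the original graph up to k
    (Nagamochi-Ibaraki sparse certificate via forest decomposition).
    """
    remaining = list(edges)
    certificate = []
    for _ in range(k):
        # Fresh union-find structure for each spanning forest.
        parent = list(range(n))

        def find(x):
            # Path-halving find.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        leftover = []
        for (u, v) in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv          # Edge joins two components: keep it.
                certificate.append((u, v))
            else:
                leftover.append((u, v))  # Cycle edge: defer to a later forest.
        remaining = leftover
    return certificate


# Example on K4 (complete graph on 4 vertices, 6 edges):
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
tree = sparse_certificate(4, K4, k=1)   # a single spanning tree: 3 edges
cert = sparse_certificate(4, K4, k=2)   # at most 2*(4-1) = 6 edges kept
```

The full dynamic scheme maintains such certificates incrementally (via Link-Cut Trees) instead of recomputing them from scratch as this static sketch does.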

Real-Time Aligned Reward Model beyond Semantics

This paper introduces R2M, a novel lightweight RLHF framework that mitigates reward overoptimization by leveraging real-time policy hidden states to dynamically align the reward model with the policy's evolving distribution, rather than relying solely on static semantic representations.

Zixuan Huang, Xin Xia, Yuxi Ren, Jianbin Zheng, Xuefeng Xiao, Hongyan Xie, Li Huaqiu, Songshi Liang, Zhongxiang Dai, Fuzhen Zhuang, Jianxin Li, Yikun Ban, Deqing Wang · 2026-03-10 · cs

Impact of LLM News Sentiment Analysis on Stock Price Movement Prediction

This paper evaluates the impact of LLM-based news sentiment analysis on stock price prediction, demonstrating that DeBERTa outperforms other models and that an ensemble approach achieves 80% accuracy, while sentiment features provide modest improvements to various time-series forecasting architectures.

Walid Siala (SnT, University of Luxembourg, Luxembourg), Ahmed Khanfir (RIADI, ENSI, University of Manouba, Tunisia; SnT, University of Luxembourg, Luxembourg), Mike Papadakis (SnT, University of Luxembourg, Luxembourg) · 2026-03-10 · cs

From Performers to Creators: Understanding Retired Women's Perceptions of Technology-Enhanced Dance Performance

Through co-design workshops with retired Chinese women, this paper demonstrates that age-sensitive interactive dance technologies and AI-generated content can lower technical barriers and transform these dancers from passive performers into empowered co-creators of their stage performances.

Danlin Zheng, Xiaoying Wei, Chao Liu, Quanyu Zhang, Jingling Zhang, Shihui Guo, Mingming Fan · 2026-03-10 · cs

Green-VLA: Staged Vision-Language-Action Model for Generalist Robots

The paper introduces Green-VLA, a five-stage curriculum framework that combines large-scale multimodal pretraining, embodiment-specific adaptation, and reinforcement learning to enable a single generalist policy to robustly control diverse robotic systems, including the Green humanoid, with enhanced safety and long-horizon efficiency.

I. Apanasevich, M. Artemyev, R. Babakyan, P. Fedotova, D. Grankin, E. Kupryashin, A. Misailidi, D. Nerus, A. Nutalapati, G. Sidorov, I. Efremov, M. Gerasyov, D. Pikurov, Y. Senchenko, S. Davidenko, D. Kulikov, M. Sultankin, K. Askarbek, O. Shamanin, D. Statovoy, E. Zalyaev, I. Zorin, A. Letkin, E. Rusakov, A. Silchenko, V. Vorobyov, S. Sobolnikov, A. Postnikov · 2026-03-10 · cs

Vulnerability-Amplifying Interaction Loops: A Systematic Failure Mode in AI Chatbot Mental-Health Interactions

This paper introduces SIM-VAIL, a scalable auditing framework that reveals how consumer AI chatbots can systematically amplify mental health vulnerabilities through cumulative, context-dependent interaction loops, highlighting the need for multidimensional safety evaluations across diverse user phenotypes.

Veith Weilnhammer, Kevin YC Hou, Lennart Luettgau, Christopher Summerfield, Raymond Dolan, Matthew M Nour · 2026-03-10 · cs

AgenticLab: A Real-World Robot Agent Platform that Can See, Think, and Act

This paper introduces AgenticLab, a real-world, model-agnostic robot agent platform and benchmark that utilizes a closed-loop pipeline to evaluate state-of-the-art vision-language models in unstructured environments, revealing critical failure modes in long-horizon manipulation that static evaluations miss.

Pengyuan Guo, Zhonghao Mai, Zhengtong Xu, Kaidi Zhang, Heng Zhang, Zichen Miao, Arash Ajoudani, Zachary Kingston, Qiang Qiu, Yu She · 2026-03-10 · cs