FALCON: Future-Aware Learning with Contextual Object-Centric Pretraining for UAV Action Recognition

FALCON is a unified self-supervised pretraining framework for UAV action recognition that overcomes spatial imbalance in aerial footage by combining object-aware masked autoencoding with object-centric dual-horizon future reconstruction, achieving superior accuracy and faster inference without requiring additional preprocessing at test time.

Ruiqi Xian, Xiyang Wu, Tianrui Guan, Xijun Wang, Boqing Gong, Dinesh Manocha · 2026-03-09 · cs.AI
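FALCON's masking is object-aware, biasing which patches are hidden toward object regions; that variant is not reproduced here. As background, a minimal sketch of the generic masked-autoencoding patch split it builds on, with hypothetical function and argument names:

```python
import random

def mask_patches(num_patches, mask_ratio=0.75, seed=0):
    """Generic masked-autoencoding split (not FALCON's object-aware variant):
    randomly hide a fixed fraction of patches. The encoder sees only the
    visible patches; the decoder reconstructs the masked ones."""
    rng = random.Random(seed)
    idx = list(range(num_patches))
    rng.shuffle(idx)
    n_masked = int(num_patches * mask_ratio)
    masked, visible = idx[:n_masked], idx[n_masked:]
    return sorted(visible), sorted(masked)
```

With a 14x14 patch grid (196 patches) and the common 75% ratio, the encoder processes only 49 patches, which is where most of the pretraining speedup comes from.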

Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation

This survey provides a comprehensive overview of the emerging ecosystem of large language models and tools that support researchers across the scientific lifecycle, covering key tasks from literature search and idea generation to content creation, experimentation, and evaluation, while addressing associated datasets, methods, limitations, and ethical concerns.

Steffen Eger, Yong Cao, Jennifer D'Souza, Andreas Geiger, Christian Greisinger, Stephanie Gross, Yufang Hou, Brigitte Krenn, Anne Lauscher, Yizhi Li, Chenghua Lin, Nafise Sadat Moosavi, Wei Zhao, Tristan Miller · 2026-03-09 · cs.AI

Conditioning LLMs to Generate Code-Switched Text

This paper proposes a methodology to fine-tune Large Language Models for generating fluent English-Spanish code-switched (CS) text by leveraging back-translated parallel corpora, demonstrating that while traditional metrics fail to correlate with human preferences, LLM-based evaluation aligns well with human judgment and the approach significantly advances CS text generation capabilities.

Maite Heredia, Gorka Labaka, Jeremy Barnes, Aitor Soroa · 2026-03-09 · cs.AI

FragFM: Hierarchical Framework for Efficient Molecule Generation via Fragment-Level Discrete Flow Matching

The paper introduces FragFM, a hierarchical framework utilizing fragment-level discrete flow matching and a stochastic fragment bag strategy to achieve efficient, scalable, and property-controllable molecular generation, validated through a new Natural Product Generation (NPGen) benchmark where it outperforms existing atom-based methods.

Joongwon Lee, Seonghwan Kim, Seokhyun Moon, Hyunwoo Kim, Woo Youn Kim · 2026-03-09 · cs.AI

Aligning Compound AI Systems via System-level DPO

This paper introduces SysDPO, a framework that aligns complex, multi-component Compound AI Systems with human preferences by modeling them as Directed Acyclic Graphs and extending Direct Preference Optimization to overcome the challenges of non-differentiable interactions and the difficulty of translating system-level preferences to component levels.

Xiangwen Wang, Yibo Jacky Zhang, Zhoujie Ding, Katherine Tsai, Haolun Wu, Sanmi Koyejo · 2026-03-09 · cs.AI
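SysDPO's system-level extension is not shown here, but the standard Direct Preference Optimization objective it builds on scores a preferred/rejected response pair by the policy's log-probability margin relative to a frozen reference model. A minimal single-pair sketch, with hypothetical argument names:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) response pair.

    logp_w / logp_l         : policy log-probs of chosen / rejected response
    ref_logp_w / ref_logp_l : frozen reference-model log-probs
    beta                    : strength of the implicit KL constraint
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): loss shrinks as the policy prefers the chosen
    # response more strongly than the reference model does
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The challenge the paper addresses is that in a compound system this margin is not produced by a single differentiable model, so the preference signal must be propagated through a DAG of components.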

FindAnything: Open-Vocabulary and Object-Centric Mapping for Robot Exploration in Any Environment

FindAnything is an efficient, open-world mapping framework that integrates vision-language features into object-centric volumetric submaps to enable real-time, open-vocabulary semantic understanding of large-scale environments on resource-constrained robots.

Sebastián Barbas Laina, Simon Boche, Sotiris Papatheodorou, Simon Schaefer, Jaehyung Jung, Helen Oleynikova, Stefan Leutenegger · 2026-03-09 · cs.AI

From Tokenizer Bias to Backbone Capability: A Controlled Study of LLMs for Time Series Forecasting

This paper investigates the inherent forecasting capabilities of large language models (LLMs) by controlling for tokenizer bias through large-scale pre-training, revealing that while LLM backbones show some promise, they still struggle to consistently outperform models specifically trained on large-scale time series data.

Xinyu Zhang, Shanshan Feng, Xutao Li, Kenghong Lin, Fan Li, Pengfei Jia · 2026-03-09 · cs.AI

Position: Stop Anthropomorphizing Intermediate Tokens as Reasoning/Thinking Traces!

This position paper argues that anthropomorphizing intermediate token generation as "reasoning traces" or "thoughts" is a dangerous misconception that obscures the true nature of language models, hinders their effective use, and leads to flawed research, urging the community to abandon such metaphors.

Subbarao Kambhampati, Karthik Valmeekam, Siddhant Bhambri, Vardhan Palod, Lucas Saldyt, Kaya Stechly, Soumya Rani Samineni, Durgesh Kalwar, Upasana Biswas · 2026-03-09 · cs.AI

The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults

This paper adopts a survivor-centered approach to expose how a "malicious technical ecosystem" of accessible tools enables the creation of AI-generated non-consensual intimate images, while demonstrating that current governance frameworks, such as the NIST AI 100-4 report, fail to effectively regulate this landscape due to flawed underlying assumptions.

Michelle L. Ding, Harini Suresh · 2026-03-09 · cs.AI

HCT-QA: A Benchmark for Question Answering on Human-Centric Tables

This paper introduces HCT-QA, a comprehensive benchmark comprising thousands of real-world and synthetic human-centric tables with natural language question-answer pairs, designed to evaluate and improve the performance of Large Language Models and Vision Language Models in querying complex tabular data.

Mohammad S. Ahmad, Zan A. Naeem, Michaël Aupetit, Ahmed Elmagarmid, Mohamed Eltabakh, Xiaosong Ma, Mourad Ouzzani, Chaoyi Ruan, Hani Al-Sayeh · 2026-03-09 · cs.AI

FourierSpecNet: Neural Collision Operator Approximation Inspired by the Fourier Spectral Method for Solving the Boltzmann Equation

This paper introduces FourierSpecNet, a hybrid deep learning framework that integrates the Fourier spectral method to efficiently approximate the Boltzmann collision operator, achieving resolution-invariant learning, zero-shot super-resolution, and significant computational savings while maintaining accuracy across elastic and inelastic collision regimes.

Jae Yong Lee, Gwang Jae Jung, Byung Chan Lim, Hyung Ju Hwang · 2026-03-09 · cs.AI
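Fourier spectral methods of the kind FourierSpecNet integrates rest on the convolution theorem: convolution-type operators (of which the Boltzmann collision operator is a weighted generalization) become pointwise products in frequency space, reducing cost from quadratic to near-linear via the FFT. A toy pure-Python illustration of that identity, not of the collision operator itself:

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(n^2); real codes use the FFT)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def circular_convolution(f, g):
    """Convolution theorem: conv(f, g) = IDFT(DFT(f) * DFT(g))."""
    F, G = dft(f), dft(g)
    prod = [a * b for a, b in zip(F, G)]
    return [v.real for v in dft(prod, inverse=True)]
```

Convolving with a discrete delta recovers the input unchanged, a quick sanity check on the identity: `circular_convolution([1, 2, 3, 4], [1, 0, 0, 0])` returns `[1, 2, 3, 4]` up to floating-point error.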