Efficient Construction of Implicit Surface Models From a Single Image for Motion Generation

This paper introduces Fast Image-to-Neural Surface (FINS), a lightweight framework that efficiently reconstructs high-fidelity implicit surfaces and SDF fields from a single image within seconds by leveraging multi-resolution hash grids and pre-trained foundation models, outperforming existing methods in speed and accuracy for robotics applications.

Wei-Teng Chu, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi · 2026-03-10 · 💻 cs
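The multi-resolution hash grids that FINS builds on (popularized by Instant-NGP-style encodings) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the level resolutions, hash primes, table size, and feature dimension are all assumptions chosen for clarity.

```python
import numpy as np

# Illustrative multi-resolution hash encoding (Instant-NGP style).
# Resolutions, table size, and feature dim are assumptions, not FINS's config.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Spatial hash of integer 3D grid coordinates into [0, table_size)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return (h % np.uint64(table_size)).astype(np.int64)

def encode(x, tables, resolutions):
    """Encode points x in [0,1]^3 by trilinearly interpolating hashed features."""
    feats = []
    for table, res in zip(tables, resolutions):
        xs = x * res
        lo = np.floor(xs).astype(np.int64)
        frac = xs - lo
        acc = np.zeros((x.shape[0], table.shape[1]))
        for corner in range(8):  # 8 corners of the surrounding voxel
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offset, frac, 1.0 - frac), axis=-1)
            idx = hash_coords(lo + offset, table.shape[0])
            acc += w[:, None] * table[idx]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)  # one feature vector per point

rng = np.random.default_rng(0)
resolutions = [16, 32, 64, 128]  # coarse-to-fine grid levels
tables = [rng.normal(size=(2**14, 2)) for _ in resolutions]  # 2 features per level
z = encode(rng.random((5, 3)), tables, resolutions)
print(z.shape)  # (5, 8): 4 levels x 2 features each
```

The encoded features would then feed a small MLP that regresses the signed distance; the speed of the overall pipeline comes from the lookup tables carrying most of the representational load.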

Generative Evolutionary Meta-Solver (GEMS): Scalable Surrogate-Free Multi-Agent Reinforcement Learning

The paper introduces Generative Evolutionary Meta-Solver (GEMS), a scalable, surrogate-free multi-agent reinforcement learning framework that replaces explicit policy populations with a compact generator and latent anchors to achieve significantly faster training, lower memory usage, and higher rewards than traditional methods like PSRO while maintaining game-theoretic guarantees.

Alakh Sharma, Gaurish Trivedi, Kartikey Singh Bhandari, Yash Sinha, Dhruv Kumar, Pratik Narang, Jagat Sesh Challa · 2026-03-10 · 🤖 cs.LG

Mapping Overlaps in Benchmarks through Perplexity in the Wild

This paper introduces "benchmark signatures": sets of salient tokens from in-the-wild corpora whose perplexity predicts model performance. Applied across 89 LLM benchmarks, the signatures reveal nuanced overlaps and distinct capacities, offering a more robust alternative to raw performance correlations for understanding the landscape of LLM abilities and the divergence between machine and human semantic organization.

Siyang Wu, Honglin Bao, Sida Li, Ari Holtzman, James A. Evans · 2026-03-10 · 💬 cs.CL
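The core quantity behind a benchmark signature is perplexity restricted to a chosen token set. The toy sketch below uses synthetic per-token log-probabilities and an invented salient-token set; the paper's actual token-selection procedure is not reproduced here.

```python
import math

# Toy sketch: perplexity of a full snippet vs. perplexity restricted to a
# "signature" of salient tokens. Tokens and log-probs are synthetic assumptions.
def perplexity(logprobs):
    """Perplexity = exp of the average negative log-likelihood."""
    return math.exp(-sum(logprobs) / len(logprobs))

# (token, log-prob) pairs a model assigned to an in-the-wild snippet.
scored = [("the", -1.2), ("integral", -5.1), ("of", -0.9),
          ("x", -2.3), ("converges", -6.0), ("slowly", -4.4)]

# Hypothetical salient tokens for a math-reasoning benchmark.
signature_tokens = {"integral", "converges"}

full_ppl = perplexity([lp for _, lp in scored])
sig_ppl = perplexity([lp for t, lp in scored if t in signature_tokens])
print(f"full-corpus ppl: {full_ppl:.1f}, signature ppl: {sig_ppl:.1f}")
```

The idea is that signature perplexities, computed per model over many snippets, give a feature vector whose correlation structure across benchmarks is more informative than raw score correlations.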

Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents

This paper introduces and empirically validates the concept of "misevolution," demonstrating that self-evolving LLM agents face widespread, emergent risks across model, memory, tool, and workflow pathways that can lead to safety degradation and unintended vulnerabilities, thereby highlighting an urgent need for new safety paradigms.

Shuai Shao, Qihan Ren, Chen Qian, Boyi Wei, Dadi Guo, Jingyi Yang, Xinhao Song, Linfeng Zhang, Weinan Zhang, Dongrui Liu, Jing Shao · 2026-03-10 · 🤖 cs.LG

FOR-Prompting: From Objection to Revision via an Asymmetric Prompting Protocol

The paper introduces FOR-Prompting, a model-agnostic, asymmetric prompting protocol that enhances reasoning and iterative refinement across diverse tasks by structuring interactions between a Defender, a Questioner, and an optional Host. The protocol requires no training or access to model internals, and enables even small models to match or exceed standard baselines.

He Zhang, Anzhou Zhang, Jian Dai · 2026-03-10 · 💬 cs.CL
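An objection-driven revision loop in the spirit of the Defender/Questioner/Host roles can be sketched as below. The role prompts, stopping rule, and `call_llm` stub are illustrative assumptions, not the paper's exact protocol; in practice `call_llm` would wrap a chat-completion API call with a role-specific system prompt.

```python
# Hedged sketch of an asymmetric objection-to-revision loop.
def call_llm(role, prompt):
    """Stub standing in for a real chat-completion call; returns canned text."""
    canned = {
        "defender": "Revised answer addressing the objection.",
        "questioner": "OBJECTION: the claim lacks a supporting premise.",
        "host": "ACCEPT",
    }
    return canned[role]

def for_prompting_loop(task, max_rounds=3):
    """Defender answers; Questioner objects; optional Host decides when to stop."""
    answer = call_llm("defender", f"Answer the task: {task}")
    for _ in range(max_rounds):
        objection = call_llm("questioner", f"Raise one objection to: {answer}")
        verdict = call_llm(
            "host", f"Task: {task}\nAnswer: {answer}\nObjection: {objection}")
        if verdict.strip() == "ACCEPT":  # Host is satisfied; end the loop
            break
        answer = call_llm(
            "defender", f"Revise your answer to '{task}' given: {objection}")
    return answer

print(for_prompting_loop("Why does the sky appear blue?"))
```

The asymmetry is in the roles: the Questioner only raises objections and never proposes answers, which keeps the refinement pressure on the Defender.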

Tiny but Mighty: A Software-Hardware Co-Design Approach for Efficient Multimodal Inference on Battery-Powered Small Devices

The paper presents NANOMIND, a hardware-software co-design framework that decomposes Large Multimodal Models into modular components and dynamically schedules them across heterogeneous accelerators on unified-memory SoCs, enabling a battery-powered device to run LMMs entirely on-device with significantly improved energy efficiency and throughput.

Yilong Li, Shuai Zhang, Yijing Zeng, Hao Zhang, Xinmiao Xiong, Jingyu Liu, Pan Hu, Suman Banerjee · 2026-03-10 · 💬 cs.CL

Deliberative Dynamics and Value Alignment in LLM Debates

This paper investigates how different deliberation protocols (synchronous vs. round-robin) and model architectures influence value alignment and verdict revision in multi-turn LLM debates. The results reveal significant behavioral disparities: GPT-4.1 exhibits strong inertia and autonomy-focused reasoning, while Claude 3.7 Sonnet and Gemini 2.0 Flash demonstrate greater flexibility, empathy, and susceptibility to order effects.

Pratik S. Sachdeva, Tom van Nuenen · 2026-03-10 · 💻 cs

Reallocating Attention Across Layers to Reduce Multimodal Hallucination

This paper proposes a lightweight, training-free plugin called Functional Head Identification and Class-Conditioned Rescaling that mitigates multimodal hallucinations in large reasoning models by adaptively rebalancing perception and reasoning contributions across layers, achieving significant performance gains with minimal computational overhead.

Haolang Lu, Bolun Chu, WeiYe Fu, Guoshun Nan, Junning Liu, Minghui Pan, Qiankun Li, Yi Yu, Hua Wang, Kun Wang · 2026-03-10 · 💻 cs
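The class-conditioned rescaling idea can be illustrated with a toy example: attention heads are first assigned a functional class, then each head's output is scaled by a class-dependent gain before the per-layer sum. The head classes, gain values, and summation layout below are invented for illustration; the paper's identification procedure and rescaling rule are not reproduced here.

```python
import numpy as np

# Toy illustration of class-conditioned rescaling of per-head outputs.
# Classes and gains are hypothetical, not the paper's learned values.
rng = np.random.default_rng(1)
n_heads, d = 4, 8
head_outputs = rng.normal(size=(n_heads, d))  # one output vector per head
head_class = ["perception", "reasoning", "perception", "reasoning"]

gains = {"perception": 1.2, "reasoning": 0.8}  # hypothetical rebalancing
scale = np.array([gains[c] for c in head_class])
rescaled = head_outputs * scale[:, None]       # per-head, class-conditioned scaling
layer_out = rescaled.sum(axis=0)               # heads combined into the layer output
print(layer_out.shape)  # (8,)
```

Because the rescaling is a per-head multiplication applied at inference time, it adds negligible compute and requires no training, matching the training-free, low-overhead claim in the summary.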