Cultural Perspectives and Expectations for Generative AI: A Global Survey Approach

This paper presents findings from a large-scale global survey exploring diverse cultural perspectives on generative AI. It distills community-defined understandings of culture into recommendations for more inclusive and culturally sensitive AI development, including participatory approaches and frameworks for navigating cultural boundaries.

Erin van Liemt, Renee Shelby, Andrew Smart, Sinchana Kumbale, Richard Zhang, Neha Dixit, Qazi Mamunur Rashid, Jamila Smith-Loud · 2026-03-09 · cs.AI

LTLGuard: Formalizing LTL Specifications with Compact Language Models and Lightweight Symbolic Reasoning

LTLGuard is a modular framework that enables resource-efficient open-weight language models (4B–14B parameters) to generate correct and conflict-free Linear Temporal Logic (LTL) specifications from informal requirements by combining constrained generation with lightweight symbolic reasoning for iterative consistency checking and refinement.

Medina Andresel, Cristinel Mateis, Dejan Nickovic, Spyridon Kounoupidis, Panagiotis Katsaros, Stavros Tripakis · 2026-03-09 · cs.AI
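The digest does not spell out how LTLGuard's consistency checking works; as a purely illustrative sketch, a lightweight symbolic conflict check over a toy fragment of LTL (only `G` and `F` applied to propositional literals, both functions hypothetical) might look like:

```python
def find_conflicts(specs):
    """Detect pairwise unsatisfiable combinations in a toy LTL fragment.

    specs: list of (operator, proposition, value) triples, where
    operator is 'G' (always) or 'F' (eventually) and value is the
    required truth value of the proposition.
    """
    conflicts = []
    for i in range(len(specs)):
        for j in range(i + 1, len(specs)):
            (op1, p1, v1), (op2, p2, v2) = specs[i], specs[j]
            if p1 != p2 or v1 == v2:
                continue  # different propositions or agreeing values
            # G p together with G !p, or G p together with F !p, is
            # unsatisfiable; F p with F !p is fine (p may vary over time).
            if 'G' in (op1, op2):
                conflicts.append((specs[i], specs[j]))
    return conflicts

# G p conflicts with F !p; F p and F !p can coexist
assert find_conflicts([('G', 'p', True), ('F', 'p', False)])
assert not find_conflicts([('F', 'p', True), ('F', 'p', False)])
```

A real system would run a proper LTL satisfiability or model-checking procedure; this fragment only shows the iterate-and-flag-conflicts loop the summary alludes to.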

Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models

This paper proposes the Disentangled Safety Hypothesis (DSH), which reveals that large language models separate safety "recognition" and "refusal execution" into distinct geometric subspaces, enabling the development of the Refusal Erasure Attack (REA) to bypass safety mechanisms by surgically disabling the refusal axis while preserving harmful content generation.

Jinman Wu, Yi Xie, Shen Lin, Shiqian Zhao, Xiaofeng Chen · 2026-03-09 · cs.AI

PVminerLLM: Structured Extraction of Patient Voice from Patient-Generated Text using Large Language Models

This paper introduces PVminer, a benchmark for structured extraction of patient voice from patient-generated text, and presents PVminerLLM, a large language model adapted via supervised fine-tuning that significantly outperforms prompt-based baselines in extracting codes, sub-codes, and evidence spans, enabling scalable analysis of non-clinical health drivers.

Samah Fodeh, Linhai Ma, Ganesh Puthiaraju, Srivani Talakokkul, Afshan Khan, Ashley Hagaman, Sarah Lowe, Aimee Roundtree · 2026-03-09 · cs.AI

Balancing Domestic and Global Perspectives: Evaluating Dual-Calibration and LLM-Generated Nudges for Diverse News Recommendation

This study evaluates two nudges within a personalized diversity framework: a dual-calibration algorithmic nudge and an LLM-based presentation nudge. Algorithmic nudges effectively increase news-consumption diversity and shift long-term reading habits toward balanced domestic and global coverage, whereas LLM-based presentation nudges yield variable results; user-specific topic interest remains the strongest predictor of engagement.

Ruixuan Sun, Matthew Zent, Minzhu Zhao, Thanmayee Boyapati, Xinyi Li, Joseph A. Konstan · 2026-03-09 · cs.AI

StreamWise: Serving Multi-Modal Generation in Real-Time at Scale

StreamWise is an adaptive, modular serving system that leverages heterogeneous hardware and dynamic resource management to enable cost-effective, high-quality real-time multi-modal generation (such as podcast videos) with sub-second startup delays, overcoming the latency and complexity challenges of coordinating diverse models at scale.

Haoran Qiu, Gohar Irfan Chaudhry, Chaojie Zhang, Íñigo Goiri, Esha Choukse, Rodrigo Fonseca, Ricardo Bianchini · 2026-03-09 · cs.AI

Lexara: A User-Centered Toolkit for Evaluating Large Language Models for Conversational Visual Analytics

This paper presents Lexara, a user-centered toolkit designed to address the challenges of evaluating Large Language Models for Conversational Visual Analytics by providing real-world test cases, interpretable multi-format metrics, and an interactive interface that enables developers and end-users to assess model performance without programming expertise.

Srishti Palani, Vidya Setlur · 2026-03-09 · cs.AI

Evaluating LLM Alignment With Human Trust Models

This paper presents a white-box analysis of the EleutherAI/gpt-j-6B model, demonstrating through contrastive prompting that its internal representation of trust aligns most closely with the Castelfranchi socio-cognitive model, thereby validating the feasibility of using LLM activation spaces to analyze socio-cognitive constructs and inform human-AI collaboration.

Anushka Debnath, Stephen Cranefield, Bastin Tony Roy Savarimuthu, Emiliano Lorini · 2026-03-09 · cs.AI
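The summary above mentions probing a model's activation space with contrastive prompting, but gives no details. A generic sketch of that style of analysis (all names and the candidate-axis setup are illustrative assumptions, not the paper's method) could look like:

```python
import numpy as np

def contrast_direction(acts_pos, acts_neg):
    """Unit direction in activation space separating two contrastive
    prompt sets (e.g., trusting vs. distrusting phrasings).
    acts_pos, acts_neg: arrays of shape (n_prompts, hidden_dim)."""
    d = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)
    return d / np.linalg.norm(d)

def best_matching_axis(direction, candidate_axes):
    """Pick the candidate concept axis with the highest absolute
    cosine similarity to the probed direction."""
    sims = {name: float(np.dot(direction, v / np.linalg.norm(v)))
            for name, v in candidate_axes.items()}
    return max(sims, key=lambda k: abs(sims[k])), sims

# toy example: activations planted along a known 'trust' axis
rng = np.random.default_rng(0)
axis_a = np.array([1.0, 0.0, 0.0])
axis_b = np.array([0.0, 1.0, 0.0])
acts_pos = rng.normal(size=(32, 3)) + 3 * axis_a
acts_neg = rng.normal(size=(32, 3)) - 3 * axis_a
direction = contrast_direction(acts_pos, acts_neg)
best, sims = best_matching_axis(direction, {'model_a': axis_a,
                                            'model_b': axis_b})
```

Real activations would come from hidden states of gpt-j-6B; the toy data here only demonstrates the geometry of the comparison.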

Remote Sensing Image Classification Using Deep Ensemble Learning

This paper proposes a deep ensemble learning framework that fuses four independent CNN-ViT hybrid models to overcome the performance bottlenecks of redundant feature representations, achieving state-of-the-art accuracy on remote sensing image classification datasets while maintaining computational efficiency.

Niful Islam, Md. Rayhan Ahmed, Nur Mohammad Fahad, Salekul Islam, A. K. M. Muzahidul Islam, Saddam Mukta, Swakkhar Shatabda · 2026-03-09 · cs.AI
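The paper's fusion scheme is not described in this digest; the simplest common baseline for fusing independent models, averaging their softmax probabilities (a hypothetical stand-in, not necessarily what the authors do), can be sketched as:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(per_model_logits):
    """Late fusion: average the softmax probabilities of several
    independent classifiers, then take the argmax class.
    per_model_logits: list of (n_samples, n_classes) arrays."""
    probs = np.stack([softmax(l) for l in per_model_logits])
    return probs.mean(axis=0).argmax(axis=-1)

# toy example: 4 models, 3 samples, 5 classes
rng = np.random.default_rng(0)
logits = [rng.normal(size=(3, 5)) for _ in range(4)]
preds = ensemble_predict(logits)
```

Probability averaging (rather than logit averaging or majority voting) is a standard choice because it weights each model by its confidence while remaining calibration-agnostic.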

Computational Pathology in the Era of Emerging Foundation and Agentic AI -- International Expert Perspectives on Clinical Integration and Translational Readiness

This paper presents an international expert review on the clinical integration and translational readiness of emerging foundation and agentic AI in computational pathology, moving beyond technical performance to address the economic, regulatory, and operational barriers hindering real-world adoption in patient care.

Qian Da, Yijiang Chen, Min Ju, Zheyi Ji, Albert Zhou, Wenwen Wang, Matthew A Abikenari, Philip Chikontwe, Guillaume Larghero, Bowen Chen, Peter Neiglinger, Dingrong Zhong, Shuhao Wang, Wei Xu, Drew Williamson, German Corredor, Sen Yang, Le Lu, Xiao Han, Kun-Hsing Yu, Jun-zhou Huang, Laura Barisoni, Geert Litjens, Anant Madabhushi, Lifeng Zhu, Chaofu Wang, Junhan Zhao, Weiguo Hu · 2026-03-09 · cs.AI

Reconstruct! Don't Encode: Self-Supervised Representation Reconstruction Loss for High-Intelligibility and Low-Latency Streaming Neural Audio Codec

This paper introduces JHCodec, a low-latency streaming neural audio codec that utilizes a self-supervised representation reconstruction (SSRR) loss to achieve state-of-the-art intelligibility and convergence speed without requiring additional lookahead or semantic encoder distillation.

Junhyeok Lee, Xiluo He, Jihwan Lee, Helin Wang, Shrikanth Narayanan, Thomas Thebaud, Laureano Moro-Velazquez, Jesús Villalba, Najim Dehak · 2026-03-09 · cs.AI
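The exact form of the SSRR loss is not given in this digest; the general idea of a representation-reconstruction loss, with `toy_ssl_encode` standing in for a frozen pretrained SSL feature extractor, might be sketched as:

```python
import numpy as np

def ssrr_loss(ssl_encode, ref_wav, rec_wav):
    """Representation-reconstruction-style loss: distance between
    frozen self-supervised features of the reference audio and the
    codec-reconstructed audio. `ssl_encode` is assumed to be a
    pretrained, frozen feature extractor."""
    ref_feat = ssl_encode(ref_wav)
    rec_feat = ssl_encode(rec_wav)
    return float(np.mean((ref_feat - rec_feat) ** 2))

def toy_ssl_encode(wav, frame=160):
    """Stand-in 'SSL' encoder: frame-wise mean features (hypothetical,
    only so the sketch is self-contained)."""
    wav = np.asarray(wav, dtype=float)
    n = (len(wav) // frame) * frame
    return wav[:n].reshape(-1, frame).mean(axis=1)

# toy usage: a reconstruction with small additive noise
rng = np.random.default_rng(0)
ref = rng.normal(size=1600)
rec = ref + 0.1 * rng.normal(size=1600)
loss = ssrr_loss(toy_ssl_encode, ref, rec)
```

The attraction of such a loss, as the summary suggests, is that it supplies semantic supervision directly from an SSL feature space without distilling a separate semantic encoder into the codec.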