SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models

The paper introduces SPEED, an efficient concept erasure framework for diffusion models that directly edits parameters within a null space—enhanced by influence-based filtering, directed prior augmentation, and invariant equality constraints—to achieve scalable, precise removal of multiple concepts in seconds while preserving the quality of non-target generations.

Ouxiang Li, Yuan Wang, Xinting Hu + 3 more · 2026-03-03 · cs
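The core mechanism, editing weights only within a null space so that preserved concepts are unaffected, can be illustrated with a minimal NumPy sketch. This is not SPEED's actual procedure (which adds influence-based filtering and other constraints); it only shows the generic null-space projection idea: any weight update projected onto the null space of the preserved features leaves their outputs unchanged.

```python
import numpy as np

def null_space_projector(X_p, tol=1e-8):
    """Projector onto the null space of the preserved features.

    X_p: (d, n) matrix whose columns are feature vectors whose
    layer outputs must not change after editing.
    """
    U, S, _ = np.linalg.svd(X_p, full_matrices=False)
    r = int((S > tol * S.max()).sum())      # numerical rank
    U_r = U[:, :r]                          # basis of the column space
    return np.eye(X_p.shape[0]) - U_r @ U_r.T

rng = np.random.default_rng(0)
d, n = 8, 3
X_p = rng.standard_normal((d, n))           # features to preserve
P = null_space_projector(X_p)

dW_raw = rng.standard_normal((4, d))        # unconstrained edit of a (4, d) layer
dW = dW_raw @ P                             # constrained edit: dW @ X_p == 0
```

Because `P @ X_p` vanishes, applying `W + dW` to any preserved feature produces exactly the original output, while `dW` can still move the layer's behavior on erased concepts.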

A Multi-Objective Evaluation Framework for Analyzing Utility-Fairness Trade-Offs in Machine Learning Systems

This paper introduces a model-agnostic, multi-objective evaluation framework that utilizes radar charts and comprehensive metrics to systematically analyze and visualize utility-fairness trade-offs in machine learning systems, with a specific focus on mitigating disparities in medical imaging applications.

Gökhan Özbulak, Oscar Jimenez-del-Toro, Maíra Fatoretto + 2 more · 2026-03-03 · cs.LG

Towards Application-Specific Evaluation of Vision Models: Case Studies in Ecology and Biology

This paper argues that computer vision models for ecology and biology should be evaluated using application-specific metrics rather than standard machine learning benchmarks, demonstrating through case studies on chimpanzee abundance and pigeon gaze estimation that high technical performance does not guarantee accuracy in downstream scientific analyses.

Alex Hoi Hang Chan, Otto Brookes, Urs Waldmann + 11 more · 2026-03-03 · cs

Dynamic Uncertainty Learning with Noisy Correspondence for Text-Based Person Search

To address the performance degradation caused by noisy correspondence in large-scale text-image datasets, this paper proposes the Dynamic Uncertainty and Relational Alignment (DURA) framework, which utilizes a Key Feature Selector and a Dynamic Softmax Hinge Loss to model noise uncertainty and adaptively handle negative samples, thereby significantly improving retrieval robustness across varying noise levels.

Zequn Xie, Haoming Ji, Chengxuan Li + 1 more · 2026-03-03 · cs
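The paper's exact loss is not reproduced here; as a rough illustration of a softmax-weighted hinge over negatives (the general family the Dynamic Softmax Hinge Loss belongs to), the sketch below weights hard negatives by a temperature-controlled softmax instead of taking only the single hardest one. The margin, temperature, and weighting scheme are assumptions for illustration.

```python
import numpy as np

def softmax_hinge_loss(sim, margin=0.2, tau=0.1):
    """sim: (b, b) image-text similarity matrix; diagonal = positive pairs."""
    pos = np.diag(sim)
    neg = sim.copy()
    np.fill_diagonal(neg, -np.inf)              # exclude positives
    w = np.exp(neg / tau)
    w /= w.sum(axis=1, keepdims=True)           # softmax over negatives
    neg_vals = np.where(np.isfinite(neg), neg, 0.0)
    hard_neg = (neg_vals * w).sum(axis=1)       # soft "hardest negative" score
    return np.maximum(0.0, margin - pos + hard_neg).mean()
```

A small temperature `tau` concentrates the weights on the hardest negatives (approaching a max-based hinge); a large one averages over all negatives, which is more forgiving when some "negatives" are actually mislabeled matches.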

Flexible-weighted Chamfer Distance: Enhanced Objective Function for Point Cloud Completion

This paper introduces the Flexible-weighted Chamfer Distance (FCD), a plug-and-play objective function that decouples local precision and global completeness with an asymmetric weighting strategy to mitigate point aggregation and structural defects, thereby significantly enhancing the quality and uniformity of generated point clouds across various completion and upsampling tasks.

Jie Li, Shengwei Tian, Long Yu + 1 more · 2026-03-03 · cs
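The decoupling idea is easy to see in code. Standard Chamfer Distance sums two directional terms: prediction-to-ground-truth (local precision/accuracy) and ground-truth-to-prediction (global completeness). The sketch below weights the two directions independently; the weight names `w_acc` and `w_comp` are illustrative, and the paper's specific weighting strategy is not reproduced here.

```python
import numpy as np

def chamfer_terms(pred, gt):
    """Directional Chamfer terms for point sets pred (n, 3) and gt (m, 3)."""
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)  # (n, m) sq. dists
    acc = d2.min(axis=1).mean()    # pred -> gt: penalizes stray points
    comp = d2.min(axis=0).mean()   # gt -> pred: penalizes missing regions
    return acc, comp

def flexible_weighted_cd(pred, gt, w_acc=1.0, w_comp=1.0):
    """Asymmetric weighting decouples local precision from completeness."""
    acc, comp = chamfer_terms(pred, gt)
    return w_acc * acc + w_comp * comp
```

Raising `w_comp` relative to `w_acc`, for example, pushes the network to cover the whole target surface rather than aggregating points in easy, dense regions.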

Point-MoE: Large-Scale Multi-Dataset Training with Mixture-of-Experts for 3D Semantic Segmentation

The paper introduces Point-MoE, a Mixture-of-Experts framework that enables effective large-scale joint training of 3D semantic segmentation models across diverse datasets by using a lightweight router to dynamically assign tokens to specialized experts, thereby overcoming the performance degradation caused by naive data mixing and achieving state-of-the-art results without requiring dataset-specific domain labels.

Xuweiyi Chen, Wentao Zhou, Aruni RoyChowdhury + 1 more · 2026-03-03 · cs
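The lightweight routing step is the standard top-k MoE mechanism; a minimal sketch (not Point-MoE's actual architecture, and with illustrative shapes) shows how a learned router scores each token, selects k experts, and mixes their outputs with renormalized gate weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, W_router, experts, k=2):
    """tokens: (t, d); W_router: (d, E); experts: list of E callables (d,)->(d,)."""
    probs = softmax(tokens @ W_router)              # (t, E) routing scores
    topk = np.argsort(-probs, axis=1)[:, :k]        # k experts per token
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        idx = topk[i]
        gates = probs[i, idx] / probs[i, idx].sum() # renormalize selected gates
        for e, g in zip(idx, gates):
            out[i] += g * experts[e](tok)
    return out
```

Because each token activates only k of E experts, capacity scales with the number of experts while per-token compute stays roughly constant, and the router can learn to specialize experts to different data distributions without being told which dataset a point cloud came from.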

SenseFlow: Scaling Distribution Matching for Flow-based Text-to-Image Distillation

This paper introduces SenseFlow, a novel distillation framework that overcomes the convergence challenges of vanilla Distribution Matching Distillation on large-scale flow-based text-to-image models like SD 3.5 and FLUX.1 by employing implicit distribution alignment and intra-segment guidance to achieve superior performance across both diffusion and flow-matching architectures.

Xingtong Ge, Xin Zhang, Tongda Xu + 4 more · 2026-03-03 · cs