Welcome to Gist.Science

Research papers,
explained for humans.

We read the latest papers from arXiv, bioRxiv, and medRxiv and produce easy-to-understand explanations, key takeaways, and technical summaries — in ten languages.

93,985 papers explained across 10 languages

📄 We read the full paper: not just the abstract, every word.
🧠 We simplify it: analogies, metaphors, plain language.
🌎 In 10 languages: natively generated, not machine translated.

A Metamorphic Testing Perspective on Knowledge Distillation for Language Models of Code: Does the Student Deeply Mimic the Teacher?

This paper introduces MetaCompress, a metamorphic testing framework for evaluating the behavioral fidelity of code language models compressed via knowledge distillation. Under adversarial conditions, MetaCompress reveals significant behavioral discrepancies between teacher and student models that traditional accuracy metrics miss.

Md. Abdul Awal, Mrigank Rochan, Chanchal K. Roy · 2026-04-14 · cs.LG

SVD-Prune: Training-Free Token Pruning For Efficient Vision-Language Models

SVD-Prune is a training-free, plug-and-play token pruning method that uses Singular Value Decomposition and statistical leverage scores to select the most informative vision tokens. By overcoming the limitations of existing heuristic-based approaches, it maintains high performance in Vision-Language Models even under extreme token budget constraints.

Yvon Apedo, Martyna Poreba, Michal Szczepanski, Samia Bouchafa · 2026-04-14 · cs.AI

RL makes MLLMs see better than SFT

This paper demonstrates that Reinforcement Learning (RL) significantly outperforms Supervised Fine-Tuning (SFT) in enhancing Multimodal Large Language Models: RL fundamentally reshapes their vision encoders to produce stronger, more localized visual representations. Building on this finding, the authors propose a computationally efficient training framework called Preference-Instructed Vision OpTimization (PIVOT).

Junha Song, Sangdoo Yun, Dongyoon Han, Jaegul Choo, Byeongho Heo · 2026-04-14 · cs.LG

Browse by category