Scaling Agentic Capabilities, Not Context: Efficient Reinforcement Finetuning for Large Toolspaces
The paper introduces ATLAS, a reinforcement finetuning framework that enables small language models to navigate large toolspaces effectively by learning adaptive context-acquisition and execution strategies. As a result, these models achieve frontier-level performance with substantially smaller parameter and context budgets.