Why Do Neural Networks Forget? A Study of Collapse in Continual Learning
This study investigates the correlation between catastrophic forgetting and structural collapse in continual learning by measuring the effective rank of weights and activations across a range of architectures and training strategies. The results show that forgetting is strongly linked to a loss of model plasticity, and that different continual-learning methods preserve capacity and performance with varying efficiency.
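To make the central measurement concrete, the following is a minimal sketch of one standard effective-rank estimator, the entropy-based definition of Roy and Vetterli (2007): the exponential of the Shannon entropy of the normalized singular values. The function name and the use of this particular estimator are illustrative assumptions; the study's exact metric may differ.

```python
import numpy as np

def effective_rank(W: np.ndarray, eps: float = 1e-12) -> float:
    """Entropy-based effective rank (Roy & Vetterli, 2007).

    Illustrative sketch; the paper's exact estimator may differ.
    Computes exp(H(p)) where p is the singular-value distribution.
    """
    s = np.linalg.svd(W, compute_uv=False)
    p = s / (s.sum() + eps)          # normalize singular values to a distribution
    p = p[p > eps]                   # drop numerically zero mass
    return float(np.exp(-(p * np.log(p)).sum()))

# A random full-rank matrix has a high effective rank, while a
# rank-1 matrix has an effective rank near 1 -- the kind of gap a
# collapse metric is meant to detect.
rng = np.random.default_rng(0)
full = rng.standard_normal((64, 64))
low = np.outer(rng.standard_normal(64), rng.standard_normal(64))
print(effective_rank(full), effective_rank(low))
```

Applied to a network's weight matrices (or batches of activations) over the course of sequential training, a declining effective rank signals the kind of structural collapse the abstract links to forgetting.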