Abstract: Although many continual learning approaches claim to be state-of-the-art, they often do so by defining their own specific setting and evaluation. In this work, we tackle class incremental learning (Class-IL), the most difficult and general continual learning setting. We start from Dark Experience Replay (DER), a simple and strong baseline, and extend it with self-supervised learning to mitigate a common problem in Class-IL known as Prior Information Loss (PIL). We plan to submit our approach to a competition instantiated in Sequoia, a framework that organizes continual learning research problems so that methods can be compared with each other more easily.
DER-SSL will be submitted to the supervised learning track of the CVPR21 Continual Learning Challenge.
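The combined objective can be sketched as follows. This is a minimal, illustrative sketch rather than the repository's actual code: the function names, the `alpha`/`beta` weights, and the choice of a rotation-style auxiliary prediction task as the SSL term are all assumptions for illustration.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the correct labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def der_ssl_loss(logits, labels, buf_logits, buf_stored,
                 ssl_logits, ssl_labels, alpha=0.5, beta=0.1):
    """Sketch of a DER + SSL objective (names and weights are assumptions):
    supervised CE on the current batch, plus DER's replay term (MSE between
    current and stored logits on buffered examples), plus a self-supervised
    auxiliary CE (e.g. predicting which rotation was applied to an input)."""
    ce = cross_entropy(logits, labels)             # supervised term
    der = np.mean((buf_logits - buf_stored) ** 2)  # DER: match stored logits
    ssl = cross_entropy(ssl_logits, ssl_labels)    # SSL auxiliary term
    return ce + alpha * der + beta * ssl
```

The DER term distills the model's own past responses from the replay buffer, while the SSL term gives the backbone a label-free training signal intended to preserve generic features that plain Class-IL training would discard.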
Ensure you have conda installed, then run:

```shell
./install.sh
```

To run training:

```shell
make sl
```
- Dark Experience for General Continual Learning: a Strong, Simple Baseline [paper][code]
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [paper]
- Continuous Learning of Context-dependent Processing in Neural Networks (OWM) [paper][code]
- Synbols: Probing learning algorithms with synthetic datasets [paper]