Meta-learning for Continual Learning
Continual learning (CL) refers to the ability to keep learning over time by accommodating new knowledge while retaining previously learned experiences. While this ability is inherent in human learning, current machine learning methods struggle with it: they are highly prone to forgetting past experiences and are usually not trained to accelerate future learning. Meta-learning, or learning how to learn, has shown promise in addressing these challenges. In this project, we propose to apply meta-learning to CL problems by meta-learning both the weights and the hyperparameters of a neural network. The aim is to achieve better backward and forward transfer, enhancing the generalization capabilities of deep learning models as new tasks arrive. We will evaluate our approach on benchmark datasets such as SplitMNIST and SplitCIFAR10. We expect the results to show that fine-tuning the model to the current task, before making the final prediction, benefits the learning process and allows the model to outperform state-of-the-art methods with few examples.
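To illustrate the core idea of fine-tuning to the current task before predicting, here is a minimal sketch (not the project's actual code) of inner-loop adaptation in the style of optimization-based meta-learning: a shared meta-learned initialization is adapted with a few gradient steps on the new task's support examples. The toy linear-regression setup, names, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def adapt(w, X, y, lr=0.1, steps=5):
    """Inner-loop fine-tuning: a few gradient steps on the task's
    few-shot support set, starting from meta-learned weights w."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_meta = np.zeros(2)              # stand-in for a meta-learned initialization
X = rng.normal(size=(10, 2))      # few-shot support set of a new task
w_true = np.array([1.5, -0.5])    # hypothetical ground-truth task weights
y = X @ w_true

w_task = adapt(w_meta, X, y)      # fine-tune before predicting on this task
loss_before = np.mean((X @ w_meta - y) ** 2)
loss_after = np.mean((X @ w_task - y) ** 2)
print(loss_after < loss_before)   # adaptation reduces loss on the new task
```

In the full approach, the outer meta-learning loop would also update the initialization (and hyperparameters such as the inner learning rate) so that this adaptation step works well across a stream of tasks.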

This work is part of a research visit to the Automated Machine Learning group at Eindhoven University of Technology (TU/e), Eindhoven, The Netherlands, under the supervision of Joaquin Vanschoren, Associate Professor, Department of Mathematics and Computer Science.

Acknowledgments. This work was supported by the EU's Horizon Europe research and innovation program under grant agreement No. 952215 (TAILOR).

Link to the project: TAILOR Project