Adapting to previously unseen tasks is a long-standing problem in machine learning. Ideally, we want to do this fast and with as little data as possible.
Consider the following example: You want to train an object classifier which can detect whether an image contains a meerkat or a cigar. However, you only have four training images per class (this is also called 2-way 4-shot classification):
Training a (deep) neural network from scratch on this dataset would not work at all: the model would overfit to the training data, and would not be able to generalise to an unseen image like the one on the right.
However, we might have access to a large collection of labelled images of different object categories:
We can use these to build 2-way 4-shot mini-datasets like the meerkat-cigar one, and learn how to learn quickly on such datasets.
One particular approach to these types of problems is meta-learning. For a fantastic overview of meta-learning settings and different approaches we recommend this blog post by Lilian Weng. In our work, we build on a method which solves this problem by learning a network initialisation as follows.
Model-Agnostic Meta-Learning (MAML) is a powerful gradient-based approach to the problem of fast adaptation. MAML learns a parameter initialisation such that adapting to new tasks requires only a few gradient updates. The approach is model- and task-agnostic: it can be used with any model trained by gradient descent, and can be applied to regression, classification, and reinforcement learning tasks. After meta-training, the model is evaluated on a new task: given a small set of labelled data points (in supervised learning) or trajectories (in reinforcement learning), the learned initial parameters are adapted using just a few gradient steps.
As such, MAML adapts the entire model when learning a new task. However, this is (a) often not necessary since many tasks and existing benchmarks do not require generalisation beyond task identification, and (b) can in fact be detrimental to performance, since it can lead to overfitting.
We propose an extension to MAML which addresses these points, and has the additional benefit of being interpretable and easier to parallelise. We call our algorithm Fast Context Adaptation via Meta-Learning (CAVIA), and show empirically that this results in equal or better performance compared to MAML on a range of tasks.
So, how does CAVIA work? Let’s formalise the problem setting first. We describe the supervised learning setting here; it transfers readily to the reinforcement learning setting (see our paper for details).
We are given a distribution $p(\mathcal{T})$ over training tasks and test tasks. The goal of the supervised learning algorithm is to learn a model $f$ mapping input features $x$ to a label $y$.
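To make this concrete, a single task can be represented as a small labelled set used for adaptation and a held-out set used to evaluate the adapted model. Below is a minimal sketch in PyTorch; the structure and names are ours, not a fixed API from the paper:

```python
from dataclasses import dataclass
import torch

@dataclass
class Task:
    """One N-way K-shot task, e.g. the 2-way 4-shot meerkat-vs-cigar example above."""
    train_x: torch.Tensor  # [N * K, ...]  the few labelled examples we adapt on
    train_y: torch.Tensor  # [N * K]       their labels
    test_x: torch.Tensor   # [M, ...]      held-out inputs from the same task
    test_y: torch.Tensor   # [M]           used to compute the meta-objective
```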
To understand CAVIA, it is easier to start with MAML.
In its inner loop, MAML adapts all network parameters $\theta$ to a task $\mathcal{T}_i$ with a gradient step on that task's training data:

$$\theta'_i = \theta - \alpha \nabla_\theta \frac{1}{M^{\text{train}}_{\mathcal{T}_i}} \sum_{(x, y) \in \mathcal{D}^{\text{train}}_{\mathcal{T}_i}} \mathcal{L}_{\mathcal{T}_i}\big(f_\theta(x), y\big),$$

where $M^{\text{train}}_{\mathcal{T}_i}$ is the dataset size and $\alpha$ is the inner loop learning rate.
In its outer loop, MAML updates the initialisation $\theta$ using the post-adaptation loss on held-out data from a batch of $N$ tasks:

$$\theta \leftarrow \theta - \beta \nabla_\theta \frac{1}{N} \sum_{\mathcal{T}_i} \frac{1}{M^{\text{test}}_{\mathcal{T}_i}} \sum_{(x, y) \in \mathcal{D}^{\text{test}}_{\mathcal{T}_i}} \mathcal{L}_{\mathcal{T}_i}\big(f_{\theta'_i}(x), y\big),$$

where $\beta$ is the outer loop learning rate. As we can see, in both cases we update $\theta$, i.e., all the parameters of the network.
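To make the two loops concrete, here is a minimal second-order MAML sketch in PyTorch for a small fully-connected network; `forward`, `init_theta`, and the task tuple format are our own simplifications, not the paper's reference implementation:

```python
import torch

def forward(theta, x):
    """A tiny two-layer MLP evaluated from an explicit list of parameters."""
    w1, b1, w2, b2 = theta
    h = torch.relu(x @ w1 + b1)
    return h @ w2 + b2

def init_theta(in_dim=1, hidden=40, out_dim=1):
    return [(0.1 * torch.randn(in_dim, hidden)).requires_grad_(),
            torch.zeros(hidden, requires_grad=True),
            (0.1 * torch.randn(hidden, out_dim)).requires_grad_(),
            torch.zeros(out_dim, requires_grad=True)]

def maml_step(theta, tasks, loss_fn, alpha=0.01, beta=0.001):
    """One meta-training step of (second-order) MAML.
    `tasks` is a list of (x_train, y_train, x_test, y_test) tuples, one per task."""
    meta_loss = 0.0
    for x_tr, y_tr, x_te, y_te in tasks:
        # Inner loop: adapt ALL parameters theta to this task's training data.
        inner_loss = loss_fn(forward(theta, x_tr), y_tr)
        grads = torch.autograd.grad(inner_loss, theta, create_graph=True)
        theta_i = [p - alpha * g for p, g in zip(theta, grads)]
        # Outer objective: loss of the adapted model on held-out data.
        meta_loss = meta_loss + loss_fn(forward(theta_i, x_te), y_te)
    # Outer loop: update theta (all network parameters) with the meta-gradient.
    meta_grads = torch.autograd.grad(meta_loss / len(tasks), theta)
    with torch.no_grad():
        for p, g in zip(theta, meta_grads):
            p -= beta * g
    return (meta_loss / len(tasks)).item()
```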
CAVIA does a similar update. However, we split all the network parameters into two disjoint subsets: global parameters $\theta$ and context parameters $\phi$.
Like MAML, CAVIA consists of an inner and an outer loop update, with the difference that we update only the context parameters $\phi$ in the inner loop, and only the shared network parameters $\theta$ in the outer loop.
In the inner loop, we update only the context parameters $\phi$, starting from a fixed initialisation $\phi_0$ (e.g., a vector of zeros):

$$\phi_i = \phi_0 - \alpha \nabla_\phi \frac{1}{M^{\text{train}}_{\mathcal{T}_i}} \sum_{(x, y) \in \mathcal{D}^{\text{train}}_{\mathcal{T}_i}} \mathcal{L}_{\mathcal{T}_i}\big(f_{\phi_0, \theta}(x), y\big).$$
In the outer loop, we update only the global parameters $\theta$, using the loss on held-out data after adaptation:

$$\theta \leftarrow \theta - \beta \nabla_\theta \frac{1}{N} \sum_{\mathcal{T}_i} \frac{1}{M^{\text{test}}_{\mathcal{T}_i}} \sum_{(x, y) \in \mathcal{D}^{\text{test}}_{\mathcal{T}_i}} \mathcal{L}_{\mathcal{T}_i}\big(f_{\phi_i, \theta}(x), y\big).$$
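Below is the MAML sketch from above adapted to CAVIA; it is again a simplification under our own assumptions rather than the reference implementation. Here the context parameters $\phi$ are fed into the network by concatenating them to the input, one simple choice (used for the regression tasks); $\theta$ must then be initialised to accept `in_dim + num_context` input features.

```python
import torch

def forward_cavia(theta, phi, x):
    """The same tiny MLP, but conditioned on the task's context parameters phi,
    which are concatenated to the input of the first layer."""
    w1, b1, w2, b2 = theta
    phi_rep = phi.expand(x.shape[0], -1)               # give every data point the same phi
    h = torch.relu(torch.cat([x, phi_rep], dim=1) @ w1 + b1)
    return h @ w2 + b2

def cavia_step(theta, tasks, loss_fn, num_context=2, alpha=1.0, beta=0.001):
    """One meta-training step of CAVIA: the inner loop touches ONLY phi
    (re-initialised to zero for every task), the outer loop touches ONLY theta."""
    meta_loss = 0.0
    for x_tr, y_tr, x_te, y_te in tasks:
        phi0 = torch.zeros(1, num_context, requires_grad=True)   # fresh context per task
        inner_loss = loss_fn(forward_cavia(theta, phi0, x_tr), y_tr)
        # Inner loop: adapt only the context parameters.
        (g_phi,) = torch.autograd.grad(inner_loss, phi0, create_graph=True)
        phi_i = phi0 - alpha * g_phi
        # Outer objective: held-out loss with the adapted phi_i and the shared theta.
        meta_loss = meta_loss + loss_fn(forward_cavia(theta, phi_i, x_te), y_te)
    # Outer loop: update only the global parameters theta.
    meta_grads = torch.autograd.grad(meta_loss / len(tasks), theta)
    with torch.no_grad():
        for p, g in zip(theta, meta_grads):
            p -= beta * g
    return (meta_loss / len(tasks)).item()
```

At test time, adapting to a new task touches only $\phi$ (for the sine experiment below, just two numbers), while $\theta$ stays fixed; and since each task keeps its own $\phi_i$, the inner loops of different tasks are independent of one another.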
Keeping a separate set of context parameters has two advantages. First, we can vary its size based on the task at hand, incorporating prior knowledge about the task into the network structure. Second, CAVIA is much easier to parallelise than MAML.
We evaluated CAVIA on a range of popular meta-learning benchmarks for regression, classification and reinforcement learning tasks. One of the motivations of CAVIA is that many tasks do not require generalisation beyond task identification – and this is also true for many current benchmarks.
To illustrate this, the figure below shows the number of parameters we update on the benchmarks we tested, for MAML versus CAVIA (note the log-scale on the y-axis):
This figure shows that the amount of adaptation on these benchmarks is relatively small. In the following, we look at those benchmarks in more detail.
Fitting sine curves
Let us start with a regression task in which we want to fit sine curves, as done in the Model-Agnostic Meta-Learning (MAML) paper. Amplitude and phase fully specify a task. We sample the amplitude from the range $[0.1, 5.0]$ and the phase from $[0, \pi]$.
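As a quick illustration of how such tasks could be generated (a sketch only: the sign convention for the phase is our assumption, while the amplitude, phase, and input ranges follow the MAML paper):

```python
import math
import torch

def sample_sine_task(k_shot=10, amp_range=(0.1, 5.0), phase_range=(0.0, math.pi)):
    """Sample one sine-curve regression task plus k labelled points for adaptation."""
    amplitude = torch.empty(1).uniform_(*amp_range)
    phase = torch.empty(1).uniform_(*phase_range)
    x = torch.empty(k_shot, 1).uniform_(-5.0, 5.0)   # inputs drawn uniformly from [-5, 5]
    y = amplitude * torch.sin(x + phase)
    return x, y, amplitude.item(), phase.item()
```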
The figures below show the curve fitting before and after the gradient update. While MAML and CAVIA both succeed at the task, note that CAVIA adjusts just two context parameters, whereas MAML adjusts approximately 1,500 weights, which makes MAML more prone to overfitting.
For this example, we can easily visualise what the context parameters learn. Below is a visualisation for a model with only two context parameters:
The x-axis shows the value of context parameter 1 after the update, and the y-axis shows the value of context parameter 2 after the update (each dot is a single task, and its position reflects the values of its context parameters). The colour shows the true task variable (amplitude on the left, phase on the right). As we can see, CAVIA learns an embedding that can be smoothly interpolated. The circular shape is probably due to the phase being periodic.
Next, we tested CAVIA on a more challenging task: CelebA image completion, as suggested by Garnelo et al. (2018). The table below shows CAVIA's superiority in terms of pixel-wise MSE.
| | Random Pixels | Ordered Pixels |
|---|---|---|
As the next figure shows, CAVIA is able to restore a picture of a face from only ten observed pixels. In this experiment, we used 128 context parameters and five gradient steps for adaptation.
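Concretely, image completion is cast as a regression problem in the spirit of Garnelo et al. (2018): the input is a 2D pixel coordinate, the output is that pixel's RGB value, a task is one face image, and the observed pixels form the adaptation set. A rough sketch of how such a task could be assembled (the helper name and normalisation details are our own):

```python
import torch

def image_to_completion_task(image, n_observed=10):
    """Turn one image (tensor [3, H, W], values in [0, 1]) into a regression task:
    inputs are normalised (row, col) pixel coordinates, targets are RGB values."""
    _, H, W = image.shape
    rows, cols = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    coords = torch.stack([rows.flatten() / (H - 1), cols.flatten() / (W - 1)], dim=1)
    rgb = image.reshape(3, -1).t()                    # [H * W, 3]
    idx = torch.randperm(H * W)[:n_observed]          # e.g. ten random observed pixels
    return coords[idx], rgb[idx], coords, rgb         # adaptation set, full query set
```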
We also tested CAVIA on few-shot classification with the challenging Mini-Imagenet benchmark. This task requires large convolutional networks, which risk overfitting when updated on only a small number of data points. The question for us was whether CAVIA can scale to large networks without overfitting. In our experiments, we used 100 context parameters for CAVIA, and increased the size of $\theta$ by increasing the number of filters (the numbers in brackets in the table). The table below shows that as the network size increases, the performance of MAML goes down, whereas the performance of CAVIA increases.
| | 1-shot, % | 5-shot, % |
|---|---|---|
| CAVIA (512, first order) | 49.92±0.68 | 63.59±0.57 |