TACO: Learning Task Decomposition via Temporal Alignment for Control

Humans can do magnificently complex things. However, they often can’t formalise, or even explain, how they do them. Learning from Demonstration (LfD) aims to mimic human behaviour given demonstrations of that behaviour, e.g. as state-action pairs.

Composite and complex tasks.

It is obvious from our everyday experience that longer, more complex tasks, such as learning to play the Moonlight Sonata on the piano, are achieved by composing together a series of simpler skills, such as playing a single note. Attempting to learn a complex piano piece without any prior experience is far more likely to end in failure and frustration than learning the fundamental skills first.

Now picture the above piano example in an LfD setting. If the demonstrations included the motor actions for the entire Moonlight Sonata, it is very unlikely that learning a single flat policy via LfD would be successful. A more sensible strategy would be to break down the demonstrations into basic, reusable and easy-to-learn sub-tasks, and compose them in order to play the full piece.

Modular LfD.

This leaves us with the question: how do we break up a demonstration into these handy sub-tasks? The answer lies in the field of modular LfD. In an ideal world, every time-step in the demonstration would be labelled as belonging to a certain sub-task. Given this annotation, we could simply cut up the demonstration into separate datasets and use our favourite LfD method, e.g., behavioural cloning (BC), to learn the individual policies, along with a high-level controller that switches from one policy to the other.

However, hand labelling a demonstration of potentially thousands of time-steps is incredibly tedious. Another option is to employ unsupervised learning and model sub-policies as latent variables that can be used to either split the data or condition the learned policy. Unsupervised methods require no additional data, though they are not guaranteed to find meaningful latent variables and can result in switching policies that are unreliable.1

TACO.

In our paper, we consider a setting that lies between fully supervised and unsupervised learning. We assume that, along with a demonstration, we are provided with a task sketch that describes the demonstration at a high level. For example, if the whole task is a piano piece, then the task sketch contains the sequence of notes played during the piece. More concretely, each demonstration \rho=((s_1,a_1),(s_2,a_2),\ldots,(s_{T},a_{T})) of length T is a sequence of states s and actions a. It is accompanied by a sketch \tau = (b_1,b_2,\ldots, b_{L}) of much shorter length L, consisting of sub-task symbols b that simply tell us which sub-policies are active in the demonstration and the order in which they occur.

This setting leaves us with a potentially big problem, however. Since the sketch is of much shorter length, we are lacking information about the alignment of \rho and \tau, i.e., we are not told how long each element in \tau is active within \rho.
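To make the setting concrete, here is a toy illustration in Python (all values are made up and purely illustrative):

```python
# A toy demonstration rho of T = 6 time-steps, each a (state, action) pair,
# together with a sketch tau of L = 2 sub-task symbols.
rho = [([0.1, 0.3], 0), ([0.2, 0.3], 1), ([0.4, 0.2], 1),
       ([0.5, 0.1], 0), ([0.7, 0.1], 2), ([0.9, 0.0], 2)]
tau = ["C#", "E"]  # e.g. two notes of a piece
# The alignment is unknown: we are not told at which time-step the first
# sub-task ends and the second begins.
```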

Luckily, this is a well-known problem in the field of speech recognition, where an utterance can span several time-steps in an audio sample, but the label to be recognised (the actual words uttered) is much shorter. A state-of-the-art method to address this problem is Connectionist Temporal Classification (CTC).2  Applied to our setting, CTC would minimise the negative log-likelihood of the sketch given the observed demonstration:

(1)   \begin{equation*} \mathcal{L}_{CTC} = -\mathbb{E}_{(\rho,\tau)}[\log(p(\tau|\rho))] \end{equation*}

It seems like all our problems have been solved! Since we have a method to perform the alignment between \tau and \rho, we can use it to label each part of the demonstration with its respective sub-task. We can then train one policy \pi_{\theta_b} per sub-task using BC, i.e., by minimising:

(2)   \begin{equation*} \mathcal{L}_{BC}= -\mathbb{E}_{\rho}\left[\sum_{t=1}^{T}\log \pi_\theta(a_t|s_t)\right] \end{equation*}

i.e., the negative log-likelihood of the actions given the states in the segmented demonstration. From now on, we refer to this approach as CTC-BC (i.e., CTC followed by BC).
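As a rough sketch of CTC-BC (not the exact pipeline from the paper), suppose a CTC model has already produced one sub-task label per time-step, and `make_policy` is a hypothetical constructor returning a policy object with a `fit` method that maximises the log-likelihood of its (state, action) pairs:

```python
from collections import defaultdict

def ctc_bc(demo, alignment, make_policy):
    """demo: list of (state, action) pairs; alignment: one sub-task symbol per
    time-step (e.g. decoded by CTC); make_policy: hypothetical policy factory."""
    # cut the demonstration into one dataset per sub-task
    per_subtask = defaultdict(list)
    for (state, action), b in zip(demo, alignment):
        per_subtask[b].append((state, action))
    # behavioural cloning: fit each sub-policy on its own segment only
    policies = {}
    for b, pairs in per_subtask.items():
        pi = make_policy()
        pi.fit(pairs)  # maximise sum_t log pi(a_t | s_t) over this segment
        policies[b] = pi
    return policies
```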

CTC, however, is a method for recognition. This has two important implications that render CTC-BC inappropriate for this application:

  1. CTC can result in highly inaccurate alignments since these are only a byproduct of the recognition process.
  2. The two procedures optimise equations (1) and (2) independently, i.e., CTC does not know that we will use the resulting alignment for another optimisation procedure. If the alignment is even slightly wrong, BC ends up optimising each sub-policy with the wrong data!

It is clear, then, that instead of optimising equations (1) and (2) separately, we should optimise them jointly. This is exactly what TACO3 does:

(3)   \begin{equation*} \mathcal{L}_{TACO} = - \log(p(\tau, \mathbf{a}_{\rho} | \mathbf{s}_{\rho})) \end{equation*}

At the heart of TACO is a dynamic programming procedure that combines ideas from CTC and policy sketches.4 TACO aligns the two sequences and learns one policy per sub-task in the demonstration, as well as a high-level controller that switches between policies. For more details on how exactly this is done, see the paper or the TensorFlow implementation (PyTorch coming soon!).
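To give a flavour of the dynamic programming involved, below is a simplified, log-space forward recursion over monotonic alignments of the sketch to the demonstration. This is only a sketch of the idea: it assumes we are given, for every time-step t and sketch position l, the log-likelihood of the demonstrated action under sub-policy b_l and the log-probability that this sub-policy chooses to stop, and it glosses over the exact stop-action parameterisation and batching used in the paper.

```python
import numpy as np

def taco_forward_loglik(action_logp, stop_logp):
    """Simplified forward pass over alignments of a sketch of length L to a
    demonstration of length T.

    action_logp[t, l] : log pi_{b_l}(a_t | s_t), the log-likelihood of the
                        demonstrated action under sub-policy b_l
    stop_logp[t, l]   : log-probability that sub-policy b_l stops at s_t
    Returns an approximation of log p(tau, a_rho | s_rho), marginalised over
    all monotonic alignments.
    """
    T, L = action_logp.shape
    cont_logp = np.log1p(-np.exp(stop_logp))  # log(1 - stop probability)
    alpha = np.full((T, L), -np.inf)
    alpha[0, 0] = action_logp[0, 0]           # the first sub-task starts at t = 0
    for t in range(1, T):
        for l in range(L):
            stay = alpha[t - 1, l] + cont_logp[t - 1, l]
            move = alpha[t - 1, l - 1] + stop_logp[t - 1, l - 1] if l > 0 else -np.inf
            alpha[t, l] = np.logaddexp(stay, move) + action_logp[t, l]
    # a valid alignment must finish (and stop) in the last sub-task
    return alpha[T - 1, L - 1] + stop_logp[T - 1, L - 1]
```

Minimising the negative of this quantity with respect to the sub-policy parameters trains the sub-policies and the switching behaviour jointly.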

Results.

To investigate TACO’s properties and performance we turn to the Dial domain, which is simpler than, but similar to, the piano example used at the beginning of the post.

The Dial domain.

We consider a JACO arm situated next to a dial pad. The states in the demonstrations describe the robot’s position with respect to the different numbers, and the actions are the torques applied to each of the joints. Each demonstration includes the motor actions required to press down a certain combination of keys (42, 1492, <your credit card pin>) and can be as long as 400 time-steps. The combination of keys pressed in each demonstration is (you guessed it) the task sketch. The aim is then to learn one policy per possible key press, along with a high-level controller that knows when each key press is finished and moves on to the next one.

At test time, we provide our policies with a new sketch of an unseen sequence of numbers. The task is considered successful if all the keys in the sketch are pressed in the right sequence. Again this is done by composing simple policies, one for each key.

So how does TACO do? Let’s take a look at some results:

On the y-axis we have the percentage of tasks completed for each method at test time. On the x-axis is the number of demonstrations required to achieve that performance. GT-BC in the graph stands for ‘ground-truth Behavioural Cloning’, i.e., the performance we would get by manually aligning the demonstration and sketch sequences. The three messages to take away from this plot are:

  1. TACO reaches the performance of GT-BC as more demonstrations are provided, without the need for tedious manual labelling.
  2. CTC-BC completely fails to complete any tasks even when provided with many demonstrations. This is because mistakes in the alignment found by CTC result in wrong policies for each key.
  3. The test tasks themselves have not been seen at training time. This means that TACO is capable of performing 0-shot imitation.

Here are some videos of the learned policies during this experiment:

Another useful property of policies trained using TACO is that we can use them to execute much longer and more complex tasks, in this case a longer number sequence. To see how far we can push this idea, we sampled test tasks of length 3 to 20 and measured how often we were able to execute them fully.

Accuracy against test task length for TACO and GT-BC.

 

Again on the y-axis here is the task accuracy, while on the x-axis is the test task length. Clearly the chances of success decrease with increasing task length; however, surprisingly, the performance of TACO-trained policies degrades more slowly than those trained using GT-BC! This suggests that the TACO training procedure results in less overfitted policies than those trained with GT-BC.

Conclusion.

TACO is a general, domain-agnostic and reliable method that allows learning from demonstration to be scaled to longer and more complex tasks. It also exhibits several interesting properties, such as the ability to perform 0-shot imitation of longer tasks than the ones it was trained on. For more maths, experiments and results, have a look at the paper or play with the implementation!

Blog post: Kyriacos Shiarlis, Vitaly Kurin, Shimon Whiteson.

We would like to thank Markus Wulfmeier for his comments on this post.

References

DiCE: The Infinitely Differentiable Monte Carlo Estimator

If you’ve stumbled upon this blog post, you’ve probably used policy gradient methods in Reinforcement Learning (RL). Or you might have maximised the likelihood in probabilistic models. In both cases, we need to estimate the gradient of the loss, which is an expectation over random variables.

The problem is that you cannot just differentiate the objective directly. Usually, you will apply the score function trick (a.k.a. the log-likelihood trick) here. We can view this trick as providing a differentiable function whose gradient is an estimate of the gradient of the original objective. We can then let any deep learning toolbox do the automatic differentiation. However, sometimes we need higher-order gradients, e.g., in meta-learning or in multi-agent RL when we need to differentiate through other agents’ learning steps. This makes life much harder.

The Infinitely Differentiable Monte Carlo Estimator (DiCE) [1] to the rescue! You can differentiate the magic \magic objective repeatedly, as many times as you like, and get correct higher-order gradients under the Stochastic Computation Graph (SCG) formalism [2]. This lets automatic differentiation software do the job instead of us manipulating the graph manually. We illustrate the benefits of our approach by applying “Learning with Opponent-Learning Awareness” (LOLA) [3] to the iterated prisoner’s dilemma.

DiCE

As we mentioned above, in the surrogate loss (SL) approach we construct an objective whose gradient equals the true gradient of the original objective, and use this function for optimisation.

Sadly, constructing a surrogate loss from the first-order gradient and treating it as an objective leads to wrong second-order gradient estimates. Simply put, applying the SL approach twice and estimating the gradient is not the same as taking the second-order gradient of the true objective.

The estimate is wrong because, in the SL approach, we treat part of the objective as a sampled cost, so the corresponding terms lose their functional dependency on the sampling distribution.
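As a concrete illustration (ours, not from the paper), consider a single Bernoulli variable x with parameter p = \sigma(\theta) and cost f(x) = x, so the true objective is \mathbb{E}[f] = p. The snippet below compares Monte Carlo estimates of the second derivative from the surrogate-loss approach and from a DiCE-style estimator against the analytic value:

```python
import numpy as np

theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))            # p = sigmoid(theta), so E[f] = p
rng = np.random.default_rng(0)
x = rng.binomial(1, p, size=1_000_000).astype(float)

dlogp = x - p            # d/dtheta log p(x) for this parameterisation
d2logp = -p * (1.0 - p)  # d2/dtheta2 log p(x), independent of x here

print("analytic d2/dtheta2 E[f]:", p * (1 - p) * (1 - 2 * p))
print("surrogate-loss estimate :", np.mean(x * d2logp))               # biased
print("DiCE-style estimate     :", np.mean(x * (dlogp**2 + d2logp)))  # unbiased
```

Because the surrogate loss treats the sampled cost as a constant, its second derivative misses the (\nabla_\theta \log p)^2 term and converges to the wrong value.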

We illustrate our reasoning graphically in the figure below, using the SCG formalism of Schulman et al. [2].

Stochastic nodes are in orange, costs in grey, surrogate losses in blue, DiCE in purple, and gradient estimators in red.

We introduce the magic operator \magic, which allows us to compute gradients of any order we like: \mathbb{E}[\nabla_{\theta}^n\mathcal{L}_{\magic}] \rightarrowtail \nabla_{\theta}^{n}\mathcal{L}, \; \forall n \in \{0, 1, 2, \ldots\}.

DiCE is easy to implement:

(1)   \begin{equation*} \magic(\mathcal{W}) = \exp\big(\tau - \perp(\tau)\big), \qquad \tau=\sum_{w \in \mathcal{W}}\log p(w;\theta), \end{equation*}

where \perp is an operator that sets the gradient of its operand to zero (detach() in PyTorch and tf.stop_gradient() in TensorFlow).
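For instance, a minimal TensorFlow sketch of the operator (assuming TF 1.x-style graph mode, with `logps` collecting the log-probabilities of the stochastic nodes in \mathcal{W}) could look like this:

```python
import tensorflow as tf

def magic_box(logps):
    """The magic operator applied to a set of stochastic nodes, given their
    log-probabilities. It evaluates to 1 in the forward pass, but its gradient
    carries the gradients of the log-probabilities, as in equation (1)."""
    tau = tf.reduce_sum(logps)
    return tf.exp(tau - tf.stop_gradient(tau))
```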

Alternatively, we can rewrite DiCE in the following way:

(2)   \begin{equation*} \magic(\mathcal{W}) = \frac{\prod_{w \in \mathcal{W}}p(w;\theta)}{\prod_{w \in \mathcal{W}} \perp\big(p(w;\theta)\big)}. \end{equation*}

The figure below shows an example of DiCE applied to an RL problem:

DiCE applied to a reinforcement learning problem. A stochastic policy conditioned on s_t and \theta produces actions a_t, which lead to rewards r_t and next states s_{t+1}. Associated with each reward is a DiCE objective that takes as input the set of all causal dependencies that are functions of \theta, i.e., the actions. Arrows from \theta, a_i and r_i to the gradient estimators are omitted for clarity.

Variance Reduction

Variance reduction is an integral part of Monte Carlo estimation.
Though DiCE is not limited to the RL case, we are most interested in policy gradients that use the score function trick.

DiCE inherently reduces variance by taking causality into account: each cost node c is multiplied by \magic of only those stochastic nodes that influence c, so only the gradients of their log-probabilities contribute to its estimator.

Now we propose another variance reduction mechanism by adding the following term to the DiCE objective:

(3)   \begin{align*} \mathcal{B}_{\magic}^{(1)} &= \sum_{w \in \mathcal{S}}\big(1-\magic(\{w\})\big)\,b_w,\nonumber \end{align*}

where b_w is any function of nodes not influenced by w. The baseline keeps the gradient estimation unbiased and does not influence the evaluation of the original objective \mathcal{L}_{\magic}.
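In code, a sketch of this first-order baseline (continuing the hypothetical TensorFlow snippets above, with `logps` holding the per-node log-probabilities and `baselines` the corresponding b_w values, which must not depend on the nodes they correct for) might be:

```python
def first_order_baseline(logps, baselines):
    """B^(1): sum over stochastic nodes w of (1 - magic_box({w})) * b_w.
    `logps` and `baselines` are 1-D tensors of the same length."""
    magic_each = tf.exp(logps - tf.stop_gradient(logps))  # magic box of each single node
    return tf.reduce_sum((1.0 - magic_each) * baselines)
```

Each per-node magic box evaluates to 1, so adding this term leaves the value of the objective unchanged, while its gradient contributes the familiar -b_w \nabla_\theta \log p(w;\theta) control variates.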

The flaw of \mathcal{B}_{\magic}^{(1)} becomes apparent when we calculate second-order gradients: in short, some of the terms have no control variates, which keeps the variance high.

To fix the problem, we can subtract the following term from the objective to reduce the second-order gradient variance:

(5)   \begin{align*} \mathcal{B}_{\magic}^{(2)} &= \sum_{w \in \mathcal{S}'}\big(1-\magic(\{w\})\big)\big(1-\magic(\mathcal{S}_w)\big)\,b_w, \nonumber \end{align*}

where \mathcal{S}' is the set of stochastic nodes that depend on \theta and at least one other stochastic node.

Code example

To show DiCE in action, we apply it to the iterated prisoner’s dilemma (IPD). In IPD, two agents iteratively play matrix games where they can either (C)ooperate or (D)efect. The first agent’s payoffs are the following: -2 (DD), 0 (DC), -3 (CD), -1 (CC).

Let’s build policies for both agents first:
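For example, a minimal tabular parameterisation (a sketch, not the exact code from our implementation) keeps one cooperation logit per state for each agent, where a state is either the initial state or the previous joint action:

```python
import tensorflow as tf

n_states = 5  # the initial state plus the four previous joint actions (CC, CD, DC, DD)

# one cooperation logit per state for each of the two agents
theta = [tf.Variable(tf.zeros([n_states]), name="theta_%d" % i) for i in range(2)]

def coop_probs(theta_i):
    """Probability of (C)ooperating in each of the five states."""
    return tf.sigmoid(theta_i)
```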

Now, let’s build the DiCE objective:
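A sketch of the objective for one agent, assuming an episode has already been rolled out and we have collected, per time-step, the joint log-probability of both agents’ actions (`step_logps`, shape [T]) and that agent’s rewards (`step_rewards`, shape [T]):

```python
def dice_objective(step_logps, step_rewards):
    """Each reward r_t is weighted by the magic box of all actions that causally
    precede it, i.e. the actions taken at steps 1, ..., t."""
    cum_logps = tf.cumsum(step_logps)  # sum of the action log-probabilities up to step t
    magic = tf.exp(cum_logps - tf.stop_gradient(cum_logps))
    return tf.reduce_sum(magic * step_rewards)
```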

Computing the gradient or Hessian with respect to the parameters is just a matter of calling tf.gradients() or tf.hessians():
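Continuing the sketch above, with `objective` being the scalar DiCE objective built for agent 1:

```python
# `objective` is the scalar DiCE objective built above for agent 1
grad_1 = tf.gradients(objective, theta[0])[0]  # first-order gradient
hess_1 = tf.hessians(objective, theta[0])[0]   # matrix of second-order derivatives
```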

You can find the complete working example here.

Empirical Results

Let’s now look at an empirical verification of DiCE. The figure below shows that the second-order baseline \mathcal{L}_{\magic}^{b_2} lets us match the analytically derived Hessian, whereas the first-order one fails to do so.



The following figure shows that, although the quality of the gradient estimate increases with the sample size, \mathcal{L}_{\magic}^{b_1} never reaches the performance of \mathcal{L}_{\magic}^{b_2}. The results including the second-order baseline are in orange, those with the first-order baseline only are in blue.

Finally, we show how DiCE helps us achieve better performance on the IPD using LOLA [3]. Compared to the original formulation, LOLA-DiCE agents discover strategies of high social welfare, replicating the results of the original LOLA paper in a way that is both more direct and more efficient.

Joint average per step returns for different training methods. Shaded areas represent the 95% confidence intervals based on five runs. All agents used batches of size 64, which is more than 60 times smaller than the size required in the original LOLA paper.

As we can see in the figure below, the second-order baseline dramatically improves LOLA performance on the IPD problem:

LOLA performance with \mathcal{L}_{\magic}^{b_1} (red) and \mathcal{L}_{\magic}^{b_2} (blue).

Conclusion

In this post, we have described DiCE, a general method for computing gradient estimators of any order for stochastic computation graphs. DiCE is easy to implement, yet it lets us use the full power of auto-differentiation software without manually constructing a graph for each order of the gradient. We believe DiCE will be a stepping stone for further exploration of higher-order learning methods in meta-learning, reinforcement learning, and other applications of stochastic computation graphs.

Whether you want to build upon DiCE or are just interested in finding out more, you can find our implementation here. For PyTorch lovers, there is also an implementation by Alexis David Jacq.

References

Blog post: Vitaly Kurin, Jakob Foerster, Shimon Whiteson.