The Value-Improvement Path: Towards Better Representations for Reinforcement Learning

Made for a reading group at the Center for Safe AGI.

The Value-Improvement Path: Towards Better Representations for Reinforcement Learning

Will Dabney, Andre Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, David Silver

Abstract

In value-based reinforcement learning (RL), unlike in supervised learning, the agent faces not a single, stationary approximation problem, but a sequence of value prediction problems.

Each time the policy improves, the nature of the problem changes, shifting both the distribution of states and their values.

In this paper we take a novel perspective, arguing that the value prediction problems faced by an RL agent should not be addressed in isolation, but rather as a single, holistic prediction problem.

An RL algorithm generates a sequence of policies that, at least approximately, improve towards the optimal policy.

We explicitly characterize the associated sequence of value functions and call it the value-improvement path.
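
A minimal way to write this down (standard RL notation, assumed here rather than quoted from the paper): approximate policy improvement produces a sequence of policies, and the value-improvement path is the corresponding sequence of value functions, terminating at the optimal value function.

```latex
% Hedged sketch in standard notation: \pi_k are the successive (approximately)
% improving policies and Q^{\pi_k} their action-value functions. The
% value-improvement path is the associated sequence of value functions,
% ending at the optimal value function Q^*.
\[
  \pi_0 \to \pi_1 \to \pi_2 \to \cdots \to \pi^*,
  \qquad
  \bigl( Q^{\pi_0},\ Q^{\pi_1},\ Q^{\pi_2},\ \ldots,\ Q^{*} \bigr).
\]
```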

Our main idea is to approximate the value-improvement path holistically, rather than to solely track the value function of the current policy.

Specifically, we discuss the impact that this holistic view of RL has on representation learning.

We demonstrate that a representation that spans the past value-improvement path will also provide an accurate value approximation for future policy improvements.
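
One way to read this claim more concretely (a hedged formalization, not a statement taken from the paper): if the features linearly span the value functions visited so far, and successive value functions along the path change gradually, then the next value function already lies close to that span.

```latex
% Hedged formalization: \Phi denotes features whose span contains the past
% values Q^{\pi_0}, \ldots, Q^{\pi_k}. Choosing w with \Phi w = Q^{\pi_k}
% bounds the approximation error for the next value function by the size of
% the next improvement step:
\[
  \{ Q^{\pi_0}, \ldots, Q^{\pi_k} \} \subseteq \operatorname{span}(\Phi)
  \quad\Longrightarrow\quad
  \inf_{w} \bigl\| Q^{\pi_{k+1}} - \Phi w \bigr\|
  \;\le\; \bigl\| Q^{\pi_{k+1}} - Q^{\pi_k} \bigr\|.
\]
```

So a small improvement step implies a small projection error under the existing representation, which is the sense in which spanning the past path helps with future policies.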

We use this insight to better understand existing approaches to auxiliary tasks and to propose new ones.

To test our hypothesis empirically, we augmented a standard deep RL agent with an auxiliary task of learning the value-improvement path.

In a study of Atari 2600 games, the augmented agent achieved approximately double the mean and median performance of the baseline agent.
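
As a rough illustration of what such an auxiliary task could look like (a minimal PyTorch sketch under my own assumptions, not the paper's exact agent or architecture): a shared torso provides the representation, one head predicts the current policy's Q-values, and auxiliary heads regress toward the value estimates of frozen earlier copies of the network, i.e. toward points on the past value-improvement path.

```python
# Hedged sketch: shared representation phi(s) with a head for the current
# Q-values and K auxiliary heads trained to match value estimates of earlier
# (frozen) snapshots of the agent. Layer sizes, K, and the snapshot scheme
# are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PathQNetwork(nn.Module):
    def __init__(self, obs_dim, n_actions, n_aux_heads=4, hidden=256):
        super().__init__()
        # Shared torso: the representation the auxiliary task is meant to shape.
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.q_head = nn.Linear(hidden, n_actions)             # current policy's Q
        self.aux_heads = nn.ModuleList(                        # past-policy values
            [nn.Linear(hidden, n_actions) for _ in range(n_aux_heads)]
        )

    def forward(self, obs):
        phi = self.torso(obs)
        return self.q_head(phi), [head(phi) for head in self.aux_heads]


def path_loss(net, snapshots, obs, actions, td_target):
    """TD loss for the current head plus regression of each auxiliary head
    onto the Q-values of a frozen past snapshot (a point on the path).
    `snapshots` is a list of earlier copies of `net` (e.g. periodic deep
    copies), one per auxiliary head, held fixed during this update."""
    q, aux = net(obs)
    q_sa = q.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = F.smooth_l1_loss(q_sa, td_target)
    for head_out, snap in zip(aux, snapshots):
        with torch.no_grad():
            past_q, _ = snap(obs)                              # frozen past values
        loss = loss + F.mse_loss(head_out, past_q)
    return loss
```

In this reading, the shared torso has to represent the whole collection of past value functions rather than only the current one, which is the sense in which the auxiliary task encourages features that span the value-improvement path.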