Made for a reading group at the Center for Safe AGI.
Rishabh Agarwal, Dale Schuurmans, Mohammad Norouzi
Off-policy reinforcement learning (RL) using a fixed offline dataset of logged interactions is an important consideration in real-world applications.
This paper studies offline RL using the DQN replay dataset comprising the entire replay experience of a DQN agent on 60 Atari 2600 games.
We demonstrate that recent off-policy deep RL algorithms, even when trained solely on this replay dataset, outperform the fully trained DQN agent.
To enhance generalization in the offline setting, we present Random Ensemble Mixture (REM), a robust Q-learning algorithm that enforces optimal Bellman consistency on random convex combinations of multiple Q-value estimates.
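To make the objective concrete, below is a minimal NumPy sketch of this kind of loss: convex weights are sampled over multiple Q-heads, and a Huber TD error is computed between the mixed Q-estimate and the mixed Bellman target. It assumes a multi-head Q-network producing per-head action values of shape [batch, heads, actions]; the function and variable names are illustrative, not taken from the paper's released code.

```python
import numpy as np

def random_convex_weights(num_heads, rng):
    """Sample a random convex combination over heads by drawing
    uniform values and normalizing them to sum to one."""
    w = rng.uniform(size=num_heads)
    return w / w.sum()

def rem_bellman_target(rewards, discounts, target_q_next, weights):
    """Bellman target using the weight-mixed target-network heads.

    target_q_next: array [batch, heads, actions] from the target network.
    """
    mixed_q_next = np.einsum('k,bka->ba', weights, target_q_next)
    return rewards + discounts * mixed_q_next.max(axis=-1)

def rem_loss(online_q, target_q_next, actions, rewards, discounts, rng):
    """Huber loss between the mixed online Q-estimate for the taken
    actions and the mixed Bellman target (illustrative sketch)."""
    num_heads = online_q.shape[1]
    weights = random_convex_weights(num_heads, rng)
    mixed_q = np.einsum('k,bka->ba', weights, online_q)
    chosen_q = mixed_q[np.arange(len(actions)), actions]
    td_error = chosen_q - rem_bellman_target(
        rewards, discounts, target_q_next, weights)
    huber = np.where(np.abs(td_error) <= 1.0,
                     0.5 * td_error ** 2,
                     np.abs(td_error) - 0.5)
    return huber.mean()
```

In this sketch the same sampled weights are applied to both the online and target heads for a given batch, so the ensemble acts as a single randomly mixed Q-function per update.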
Offline REM trained on the DQN replay dataset surpasses strong RL baselines.
The results here present an optimistic view that robust RL algorithms trained on sufficiently large and diverse offline datasets can lead to high-quality policies.
To enable offline optimization of RL algorithms on common ground and to reproduce our results, the DQN replay dataset is released at offline-rl.github.io.