Lecture 6: DQN

Video

   

Description

Deep Q-Networks (DQN) refers to the method proposed by DeepMind in 2013 to learn to play Atari 2600 games from raw pixel observations. This hugely influential method kick-started the resurgence of interest in Deep Reinforcement Learning; however, its core contributions deal simply with the stabilization of the NQL algorithm. In this session these key innovations (Experience Replay, Target Networks, and the Huber Loss) are stepped through, taking participants from the relatively unstable NQL algorithm to a fully-implemented DQN.
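As a preview of the session, here is a minimal, framework-free sketch of the three stabilization tricks named above. The class and function names (`ReplayBuffer`, `huber_loss`, `sync_target`) are illustrative placeholders, not the names used in the lecture exercises, and the network parameters are represented as a plain dict rather than real neural-network weights:

```python
import random
from collections import deque

class ReplayBuffer:
    """Experience replay: store transitions and sample decorrelated
    minibatches uniformly at random, instead of learning from
    consecutive (highly correlated) frames."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # old transitions fall off the back

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)


def huber_loss(td_error, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so a single
    large TD error cannot produce an arbitrarily large gradient."""
    if abs(td_error) <= delta:
        return 0.5 * td_error ** 2
    return delta * (abs(td_error) - 0.5 * delta)


def sync_target(online_params, target_params):
    """Target network: periodically copy the online network's weights into
    a frozen copy used to compute bootstrap targets, so the regression
    target does not shift on every gradient step."""
    target_params.clear()
    target_params.update(online_params)
```

In a full DQN training loop these pieces fit together as: act in the environment, `push` each transition into the buffer, `sample` a minibatch, compute TD errors against the *target* network, apply the Huber loss, and call `sync_target` every few thousand steps.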

   

Lecture Slides

StarAi Lecture 6-DQN slides

   

Exercise

Follow the link below to access the exercises for lecture 6:

Lecture 6 Exercise: DQN Homework Exercise

   

Exercise Solutions

Follow the link below to access the exercise solutions for lecture 6:

Lecture 6 Exercise: DQN Homework Exercise Solutions

   

Additional Learning Material

  1. DeepRL Bootcamp lecture from Vlad Mnih, one of the original authors of DQN