# Fundamentals of Reinforcement Learning

This is an optional second semester course at the IASD master of Université PSL taught with Stéphane Airiau (LAMSADE, Univ. Paris Dauphine-PSL).

## Documents for the 2021-2022 Edition

- Exam
- Analysis of UCB and variants; Linear bandits (slides)
- Analysis of explore-then-commit; the Lai and Robbins lower bound (slides)
- Bayesian algorithms (slides)
- Policy gradients; the multi-armed bandit model (slides)
- Temporal difference learning and stochastic approximation (slides)
- Monte Carlo Methods and TD methods (slides), Homework - Temporal difference methods (python notebook)
- Dynamic programming and Monte Carlo methods (slides), Homework - Monte Carlo methods (python notebook)
- Introduction to reinforcement learning (slides), Homework - Value and policy iteration (python notebook)

### From Previous Year

- Final exam, Mar 2021

## Textbooks

- *Reinforcement Learning: An Introduction*, Richard S. Sutton & Andrew G. Barto, Second Edition, MIT Press, 2018
- *Bandit Algorithms*, Tor Lattimore & Csaba Szepesvári, Cambridge University Press, 2020

## Syllabus

*Reinforcement Learning (RL) refers to scenarios where the learning algorithm operates in closed loop, simultaneously using past data to adjust its decisions and taking actions that will influence future observations. Algorithms based on RL concepts are now commonly used in programmatic marketing on the web, robotics, and computer game playing. All RL models share a common concern: to attain long-term optimality goals, one must strike a proper balance between exploration (discovering yet-uncertain behaviors) and exploitation (focusing on the actions that have produced the most relevant results so far).*

*The methods used in RL draw ideas from control, statistics, and machine learning. This introductory course presents the main methodological building blocks of RL, focusing on probabilistic methods in the case where both the set of possible actions and the state space of the system are finite. Some basic notions of probability theory are required to follow the course.*

- *Models: Markov decision processes (MDP), multi-armed bandits and other models*
- *Planning: finite and infinite horizon problems, the value function, Bellman equations, dynamic programming, value and policy iteration*
- *Basic learning tools: Monte Carlo methods, temporal-difference learning, policy gradient*
- *Probabilistic and statistical tools for RL: Bayesian approach, relative entropy and hypothesis testing, concentration inequalities*
- *Optimal exploration in multi-armed bandits: the explore vs. exploit tradeoff, lower bounds, the UCB algorithm, Thompson sampling*
- *Extensions: contextual bandits, optimal exploration for MDPs*
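To give a flavor of the planning topics above, here is a minimal value-iteration sketch on a hypothetical 2-state, 2-action MDP (the transition and reward numbers are made up for illustration, not taken from the course material). It repeatedly applies the Bellman optimality operator V(s) ← max_a [R(s, a) + γ Σ_s' P(s, a, s') V(s')] until convergence:

```python
import numpy as np

# Hypothetical toy MDP: P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.5, 0.5], [0.0, 1.0]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9  # discount factor

# Value iteration: apply the Bellman optimality operator until the values stop moving.
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)          # greedy backup over actions
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy w.r.t. the converged Q-values
```

At the fixed point, V satisfies the Bellman optimality equation, and the greedy policy with respect to it is optimal for this toy MDP.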
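Similarly, for the bandit part, a compact sketch of the classic UCB1 index policy on Bernoulli arms (using the standard sqrt(2 log t / n) exploration bonus from Auer et al.; the exact constants analyzed in the course slides may differ):

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms with the given means; return pull counts and total reward."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k     # number of pulls per arm
    sums = [0.0] * k     # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: play each arm once
        else:
            # Pick the arm maximizing empirical mean + exploration bonus.
            arm = max(range(k),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total += r
    return counts, total
```

On a two-armed instance with a large gap, the suboptimal arm is pulled only O(log T) times, which is the behavior the regret analysis in the course quantifies.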

*This is largely a blackboard course with a final written exam.*