Introduction to reinforcement learning and control theory#
This page contains material and information related to the spring 2025 version of the course Introduction to reinforcement learning and control, offered at DTU.
If you are thinking about taking the course, you can read more about it here or look at the Pre-requisites. If you are enrolled and just starting out, you should begin with the Installation. You can find the exercises and project descriptions in the menu to the left.
Practicalities#
Note
This page is continuously updated with fixes for typos and other adjustments. I therefore recommend bookmarking it and always working from the newest version of the exercises.
- Time and place:
Building B341, auditorium 21, 08:00–12:00
- DTU Learn:
- Exercise code:
- Course descriptions:
- Lecture recordings:
- Discord:
- ChatTutor AI help:
- DTU python support:
- Contact:
Tue Herlau, tuhe@dtu.dk.
Course schedule#
The schedule and reading can be found below. Click on the titles to read the exercise and project descriptions.
| # | Date | Title | Reading | Homework | Exercise | Slides |
|---|---|---|---|---|---|---|
|  | Jan 31st, 2025 |  | Chapter 1-3, [Her25] |  |  |  |
| 1 | Feb 7th, 2025 |  | Chapter 4, [Her25] | 1, 2 |  |  |
| 2 | Feb 14th, 2025 |  | Chapter 5-6.2, [Her25] | 1 |  |  |
| 3 | Feb 21st, 2025 |  | Section 6.3; Chapter 10-11, [Her25] | 1, 2 |  |  |
| 4 | Feb 28th, 2025 |  | Chapter 12-14, [Her25] | 1, 2 |  |  |
|  | Mar 6th, 2025 |  |  |  |  |  |
| 5 | Mar 7th, 2025 |  | Chapter 15, [Her25] | 1 |  |  |
| 6 | Mar 14th, 2025 |  | Chapter 16, [Her25] | 1 |  |  |
| 7 | Mar 21st, 2025 |  | Chapter 17, [Her25] | 1 |  |  |
| 8 | Mar 28th, 2025 |  | Chapter 1; Chapter 2-2.7; 2.9-2.10, [SB18] | 1 |  |  |
|  | Apr 3rd, 2025 |  |  |  |  |  |
| 9 | Apr 4th, 2025 |  | Chapter 3; 4, [SB18] | 1, 2 |  |  |
| 10 | Apr 11th, 2025 |  | Chapter 5-5.4+5.10; 6-6.3, [SB18] | 1 |  |  |
|  | 🥚 🐤 Easter Holiday 🐤 🥚 | 🎮 | 🏖️ | 🍹 |  |  |
| 11 | Apr 25th, 2025 |  | Chapter 6.4-6.5; 7-7.2; 9-9.3; 10.1, [SB18] | 1 |  |  |
| 12 | May 2nd, 2025 |  | Chapter 10.2; 12-12.7, [SB18] | 1 |  |  |
|  | May 8th, 2025 |  |  |  |  |  |
| 13 | May 9th, 2025 |  | Chapter 6.7-6.9; 8-8.4; 16-16.2; 16.5; 16.6, [SB18] | 1 |  |  |
You can find the course reading material further down on this page.
Note
Chapters 1–3 contain background information about Python and are therefore not part of the main course content (pensum). Knowledge of Python is, however, required for the exams.
The Homework column lists those problems from the exercise PDF sheets (see the table above) that will be discussed during class. They are also marked in the margin of the exercises. I encourage you to prepare them at home and present your solution during the exercise session.
Exercise sessions#
Hint
I will upload solutions to the programming problems on GitLab.
The teaching assistants will be available Fridays 10:00–12:00 after the lecture.
| Location | Instructor |
|---|---|
| Building B341, auditorium 21 | Tue Herlau |
| Building B341, IT-015 | Adam Bøttcher Haupt-Hansen |
|  | Marius Emil Thornit |
| Building B341, IT-019 | Nikolaj Severin Stæhr Hertz |
For the exercises, you are encouraged to prepare the homework problems at home (see the syllabus above) and present your solutions during the exercise session.
Reading material#
The two books referenced in the course syllabus are available here:
- [Her25]: Sequential Decision Making
- [SB18]: Reinforcement Learning: An Introduction (2020) (authors' homepage)
Additional reading material#
The following references are mentioned in the course as background information but are not part of the course syllabus.
Bibliography#
Tue Herlau. Sequential decision making. (Freely available online), 2025. URL: https://www2.compute.dtu.dk/courses/02465/#reading-material.
Tue Herlau, Morten Mørup, and Mikkel N. Schmidt. Introduction to Machine Learning and Data Mining. 02450 Lecture notes, 2024. (Freely available online). URL: https://www2.compute.dtu.dk/courses/02465/#reading-material.
Matthew Kelly. An introduction to trajectory optimization: how to do your own direct collocation. SIAM Review, 59(4):849–904, 2017. (See kelly2017.pdf). URL: https://epubs.siam.org/doi/pdf/10.1137/16M1062569.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. (Freely available online). URL: https://www2.compute.dtu.dk/courses/02465/#reading-material.
Yuval Tassa, Tom Erez, and Emanuel Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 4906–4913. IEEE, 2012. (See tassa2012.pdf). URL: https://ieeexplore.ieee.org/abstract/document/6386025.
>>> from datetime import datetime
>>> print("This page was last updated at:", datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
This page was last updated at: 27/02/2025 22:01:55