Tentative Schedule

The session will cover a tutorial, invited talks, contributed talks, and posters. The tentative schedule, in Central European Summer Time (CEST, GMT+2), is shown below.


Time Type Title & Speakers
9:00 a.m. Opening Remarks
9:05 a.m. Tutorial Niao He (ETH Zurich)
10:00 a.m. Invited Talk Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning
Dylan Foster (Microsoft Research)

Imitation learning (IL) aims to mimic the behavior of an expert in a sequential decision-making task by learning from demonstrations, and has been widely applied to robotics, autonomous driving, and autoregressive language generation. The simplest approach to IL, behavior cloning (BC), is thought to incur sample complexity with an unfavorable quadratic dependence on the problem horizon, motivating a variety of online algorithms that attain improved linear horizon dependence under stronger assumptions on the data and on the learner’s access to the expert.
In this talk, we revisit the apparent gap between offline and online IL from a learning-theoretic perspective, with a focus on general policy classes up to and including deep neural networks. Through a new analysis of behavior cloning with the logarithmic loss, we will show that it is possible to achieve horizon-independent sample complexity in offline IL whenever (i) the range of the cumulative payoffs is controlled, and (ii) an appropriate notion of supervised learning complexity for the policy class is controlled. When specialized to stationary policies, this implies that the gap between offline and online IL is not fundamental. We will then discuss implications of this result and investigate the extent to which it bears out empirically.
10:45 a.m. Break
11:00 a.m. Invited Talk Reinforcement Learning at the Hyperscale
Jakob Foerster (University of Oxford)

Deep reinforcement learning is currently undergoing a revolution of scale, fuelled by jointly running the environment, data collection, and training loop on the GPU, which has resulted in orders of magnitude of speed-up for many tasks.
In this talk, I will start by presenting examples of our recent work that have been enabled by this revolution, spanning multi-agent RL, meta-learning, and environment discovery. I will end the talk by outlining failure modes of relying on GPU-accelerated environments and possible paradigms for the community to collectively address them, ranging from promising research directions to novel evaluation protocols.
11:45 a.m. Contributed Talks TBD (4 talks)
12:25 p.m. Lunch Break
1:25 p.m. Poster Session 1
2:25 p.m. Panel Discussion Moderator: Csaba Szepesvari (University of Alberta)
Marcello Restelli (Politecnico di Milano), Sergey Levine (UC Berkeley), Mengdi Wang (Princeton University), Martha White (University of Alberta)
3:25 p.m. Coffee Break
3:40 p.m. Contributed Talks TBD (2 talks)
4:00 p.m. Poster Session 2