Schedule

The session will cover invited talks, contributed talks and posters. The tentative schedule in Pacific Daylight Time (GMT−7) can be found below.


Time Type Title & Speakers
8:25 a.m. Opening Remarks
8:30 a.m. Invited Talk What Does RL Theory Have to Do with Robotics?
Andrew Wagenmaker (UC Berkeley)

While the theory of reinforcement learning has advanced to a fairly mature place, it is often not apparent how this theory can impact practice. This is especially true in domains such as robotics, where the challenges faced by practitioners typically feel far removed from the settings and algorithms considered by theorists. In this talk, I will discuss how RL theory can impact practice in robotics despite this apparent gap, and how theorists might approach their work to further this impact. I will focus in particular on two case studies centered around the question of pretraining for online adaptation. In the first case, I will explore the question of sim-to-real transfer for robotics, and how we should pretrain with RL in a simulator to enable effective transfer to the real world. In the second case, I will discuss how we can pretrain a policy from human demonstration data to ensure it is a good initialization for further RL finetuning. In both cases, I will show how theory provides the key algorithmic insights that lead to the practical approaches we ultimately propose.
9:15 a.m. Invited Talk Reality Check: Being an RL Theorist in 2025
Nan Jiang (Illinois)

RL is currently enjoying unprecedented visibility and impact, driven largely by its success in LLMs. Yet it is also a confusing time to be an RL theorist: while the field is seeing rapid empirical progress, the broader community's view of RL theory remains anchored in earlier eras, unaware of more recent developments. That said, I believe RL theory has valuable insights to offer, not only for the immediate improvement of current systems, but also for enabling future applications and addressing deep intellectual questions of shared interest to the broader community. Unfortunately, many of these subtle insights are either taken for granted within our community and invisible to outsiders, or buried between the lines of technical papers. Worse still, this tacit knowledge is often not transmitted effectively even within our own community, leaving junior researchers without a clear understanding of the field's conceptual foundations.

This talk will be a reflection on some of these ideas based on my own journey in RL theory. I will revisit some foundational concepts and research methodology, and discuss what is worth sharing with the broader community, which misconceptions we need to correct, and how a reconciliation with practice reveals new opportunities for theory research.
10:00 a.m. Invited Talk TBA
Mengdi Wang (Princeton)
10:45 a.m. Contributed Talks Horizon Reduction Makes Offline RL Scalable
Seohong Park (UC Berkeley)

Improved Regret Bounds for Linear Bandits with Heavy-Tailed Rewards
Artin Tajdini (University of Washington)

11:15 a.m. Poster Session 1
12:30 p.m. Lunch Break
1:30 p.m. Panel Discussion Moderator: Csaba Szepesvari (University of Alberta)
Panelists: Amy Zhang (UT Austin), Eugene Vinitsky (NYU), Kevin Jamieson (University of Washington), Wen Sun (Cornell)
2:45 p.m. Contributed Talks Unifying Agent Interaction and World Information for Multi-agent Coordination
Dongsu Lee (UT Austin)

What Makes a Reward Model a Good Teacher? An Optimization Perspective
Noam Razin (Princeton University)

3:15 p.m. Coffee Break
3:30 p.m. Poster Session 2