RL4AA'25 workshop group photo

RL4AA'25 at DESY in Hamburg was a blast! Thank you everyone!

Wow! The RL4AA'25 workshop at DESY in Hamburg was a blast, and we couldn’t be prouder of the community we have brought together! 😊 It’s hard to believe that we started just over 2 years ago with around 30 participants; this year we hosted our third workshop with almost 80 attendees from all over the world, two brilliant keynotes by Jan Peters and Alessandro Pau, and a very well-received hands-on RL challenge in which the competing teams far exceeded our expectations. The programme also featured a variety of interesting talks, intriguing posters, lab and city tours, and social events. ...

April 7, 2025 · 239 words · RL4AA Collaboration
Simplified 3D illustration of the considered section of the ARES particle accelerator.

Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning

J. Kaiser¹, C. Xu², A. Eichler¹, A. Santamaria Garcia², O. Stein¹, E. Bründermann², W. Kuropka¹, H. Dinter¹, F. Mayet¹, T. Vinatier¹, F. Burkart¹, H. Schlarb¹
¹Deutsches Elektronen-Synchrotron DESY, ²Karlsruhe Institute of Technology KIT
arXiv

Abstract

Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods, such as Reinforcement Learning-trained Optimisation (RLO) and Bayesian optimisation (BO), hold great promise for achieving outstanding plant performance and reducing tuning times. Which algorithm to choose in different scenarios, however, remains an open question. Here we present a comparative study using a routine task in a real particle accelerator as an example, showing that RLO generally outperforms BO, but is not always the best choice. Based on the study’s results, we provide a clear set of criteria to guide the choice of algorithm for a given tuning task. These can ease the adoption of learning-based autonomous tuning solutions in the operation of complex real-world plants, ultimately improving the availability and pushing the limits of operability of these facilities, thereby enabling scientific and engineering advancements. ...
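To make the distinction in the title concrete: BO is "learning while doing" (it builds a surrogate model of the objective during the tuning run itself), while RLO is "learning to do" (the policy is trained beforehand, e.g. in simulation, and is only applied during the run). Below is a minimal, illustrative Python sketch of the two loops against a toy objective; the objective, the UCB acquisition, and the policy stub are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only -- toy objective and hand-rolled GP, not the
# paper's code. Both "tuners" drive the same scalar actuator u in [-1, 1].
import numpy as np

rng = np.random.default_rng(0)

def beam_objective(u):
    """Toy stand-in for a measured tuning objective (e.g. negative beam size)."""
    return -(u - 0.3) ** 2 + 0.01 * rng.normal()

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel for the GP surrogate."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def bo_tune(n_steps=20):
    """Learning WHILE doing: fit a surrogate during the run, sample via UCB."""
    X = [float(rng.uniform(-1, 1))]
    y = [beam_objective(X[0])]
    grid = np.linspace(-1, 1, 201)
    for _ in range(n_steps - 1):
        Xa, ya = np.array(X), np.array(y)
        K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))      # jitter for stability
        Ks = rbf(grid, Xa)
        mu = Ks @ np.linalg.solve(K, ya)               # posterior mean
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
        u = float(grid[np.argmax(ucb)])                # next setting to try
        X.append(u)
        y.append(beam_objective(u))
    return X[int(np.argmax(y))]

def rlo_tune(policy, u0=0.0, n_steps=20):
    """Learning TO do: a pre-trained policy maps observations to actions."""
    u = u0
    for _ in range(n_steps):
        obs = beam_objective(u)                        # noisy, partial feedback
        u = float(np.clip(u + policy(obs, u), -1, 1))  # policy proposes a step
    return u
```

In practice `policy` would be a neural network trained in simulation; the key operational difference is that `bo_tune` pays its exploration cost on the machine, while `rlo_tune` paid it during training.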

June 6, 2023 · 201 words · RL4AA Collaboration
Reinforcement learning loop for the ARES experimental area.

Learning-based Optimisation of Particle Accelerators Under Partial Observability Without Real-World Training

J. Kaiser, O. Stein, A. Eichler
Deutsches Elektronen-Synchrotron DESY
39th International Conference on Machine Learning

Abstract

In recent work, it has been shown that reinforcement learning (RL) is capable of solving a variety of problems, at times at super-human performance levels. But despite continued advances in the field, applying RL to complex real-world control and optimisation problems has proven difficult. In this contribution, we demonstrate how to successfully apply RL to the optimisation of a highly complex real-world machine – specifically a linear particle accelerator – in an only partially observable setting and without requiring training on the real machine. Our method outperforms conventional optimisation algorithms in both the achieved result and the time taken, while already achieving close to human-level performance. We expect that such automation of machine optimisation will push the limits of operability, increase machine availability, and lead to a paradigm shift in how such machines are operated, ultimately facilitating advances in a variety of fields, such as science and medicine, among many others. ...
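The two key ingredients named in the abstract, partial observability and training without the real machine, are commonly handled with domain randomisation: hidden parameters of the simulation are resampled every episode, so the trained policy never relies on state it cannot observe. A minimal sketch under that assumption (gymnasium-style reset/step API, toy dynamics; not the paper's code):

```python
# Illustrative sketch: train in simulation with domain randomisation, then
# deploy zero-shot. Env internals and randomised parameters are assumptions.
import numpy as np

class SimTuningEnv:
    """Toy simulated accelerator section with hidden (unobserved) state."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Domain randomisation: a hidden calibration offset differs per
        # episode, so the policy must cope with what it cannot observe.
        self.offset = self.rng.uniform(-0.5, 0.5)  # never shown to the agent
        self.u = self.rng.uniform(-1.0, 1.0)       # actuator setting
        return self._obs()

    def _obs(self):
        # Partial observation: only a noisy beam reading, not the offset.
        return np.array([-(self.u - self.offset) ** 2 + 0.01 * self.rng.normal()])

    def step(self, action):
        self.u = float(np.clip(self.u + float(action), -1.0, 1.0))
        obs = self._obs()
        reward = float(obs[0])  # maximise the reading
        return obs, reward, False, {}

# Train any RL algorithm on SimTuningEnv(...), then run the frozen policy on
# the real machine through the same reset/step interface -- no real-world
# training steps required.
```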

July 22, 2022 · 174 words · RL4AA Collaboration
RL environment for beam optimisation in the ARES EA.

First Steps Toward an Autonomous Accelerator, A Common Project Between DESY and KIT

A. Eichler¹, F. Burkart¹, J. Kaiser¹, W. Kuropka¹, O. Stein¹, E. Bründermann², A. Santamaria Garcia², C. Xu²
¹Deutsches Elektronen-Synchrotron DESY, ²Karlsruhe Institute of Technology KIT
12th International Particle Accelerator Conference

Abstract

Reinforcement learning algorithms have risen in popularity in the accelerator physics community in recent years, showing potential in beam control and in the optimization and automation of tasks in accelerator operation. The Helmholtz AI project “Machine Learning Toward Autonomous Accelerators” is a collaboration between DESY and KIT that works on investigating and developing reinforcement learning applications for the automatic start-up of electron linear accelerators. The work is carried out in parallel at two similar research accelerators: ARES at DESY and FLUTE at KIT, giving the unique opportunity of transfer learning between facilities. One of the first steps of this project is the establishment of a common interface between the simulations and the machine, in order to test and apply various optimization approaches interchangeably between the two accelerators. In this paper we present first results on the common interface and its application to beam focusing in ARES, as well as the idea of laser shaping with spatial light modulators at FLUTE. ...
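The common interface described in the abstract can be pictured as a thin abstraction layer: optimisers are written once against a single API, and the chosen backend decides whether calls go to a simulation or to the machine's control system. A hypothetical sketch (class and method names are illustrative assumptions, not the project's actual API):

```python
# Sketch of a backend-agnostic tuning interface; all names are hypothetical.
from abc import ABC, abstractmethod

import numpy as np

class AcceleratorBackend(ABC):
    """Common interface shared by simulation and control-system backends."""

    @abstractmethod
    def set_magnets(self, settings: np.ndarray) -> None:
        """Apply actuator settings (e.g. quadrupole strengths)."""

    @abstractmethod
    def read_beam(self) -> np.ndarray:
        """Return the current beam observation (e.g. screen-image moments)."""

class SimulationBackend(AcceleratorBackend):
    def set_magnets(self, settings):
        self._settings = np.asarray(settings, dtype=float)

    def read_beam(self):
        # Toy linear response standing in for a particle-tracking simulation.
        return -np.abs(self._settings - 0.1)

class MachineBackend(AcceleratorBackend):
    def set_magnets(self, settings):
        raise NotImplementedError("would write to the control system (e.g. DOOCS/EPICS)")

    def read_beam(self):
        raise NotImplementedError("would read back machine diagnostics")

def tune(backend: AcceleratorBackend, candidates):
    """Any optimiser written against the interface runs on both backends."""
    scores = []
    for s in candidates:
        backend.set_magnets(s)
        scores.append(float(backend.read_beam().sum()))
    return candidates[int(np.argmax(scores))]

# The same call works against either backend:
best = tune(SimulationBackend(), [np.array([s]) for s in (-0.2, 0.0, 0.1, 0.3)])
```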

May 24, 2021 · 185 words · RL4AA Collaboration
Reinforcement learning agent combined with the physics-based polynomial neural network.

Physics-Enhanced Reinforcement Learning for Optimal Control

A. Ivanov, I. Agapov, A. Eichler, S. Tomin
Deutsches Elektronen-Synchrotron DESY
12th International Particle Accelerator Conference

Abstract

We propose an approach for incorporating accelerator physics models into reinforcement learning agents. The proposed approach is based on the Taylor mapping technique for the simulation of particle dynamics. The resulting computational graph is represented as a polynomial neural network and embedded into traditional reinforcement learning agents. The application of the model is demonstrated on a nonlinear simulation model of beam transmission. A comparison of the approach with traditional numerical optimization as well as with neural network-based agents demonstrates the better convergence of the proposed technique. ...
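The core idea, a Taylor map represented as a polynomial neural network, can be sketched in a few lines: phase-space coordinates are propagated as x' = W0 + W1·x + W2·vec(x⊗x), where the weight matrices can be initialised from known beam optics and then refined by training. The sketch below is illustrative, not the authors' implementation:

```python
# Sketch of a second-order Taylor map as a polynomial layer; assumptions:
# NumPy only, weights trainable by any external optimiser.
import numpy as np

class TaylorMapLayer:
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W0 = np.zeros(dim)                              # zeroth order: offsets/kicks
        self.W1 = np.eye(dim)                                # first order: linear optics
        self.W2 = 0.01 * rng.normal(size=(dim, dim * dim))   # second order: nonlinearity

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        quad = np.kron(x, x)  # vec(x ⊗ x): all pairwise products x_i * x_j
        return self.W0 + self.W1 @ x + self.W2 @ quad

# Example: initialise W1 from a known transfer matrix (here a drift of
# length L in a 2-D (x, x') phase space), then let training refine W2.
L = 1.5
drift = TaylorMapLayer(dim=2)
drift.W1 = np.array([[1.0, L],
                     [0.0, 1.0]])
print(drift([1e-3, 2e-4]))  # propagated phase-space coordinates
```

Because each layer is an explicit polynomial, physics knowledge enters through the initial weights while the training signal only has to learn the residual, which is one way to read the convergence advantage reported in the abstract.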

May 21, 2021 · 110 words · RL4AA Collaboration