RL4AA Collaboration

Collaboration on Reinforcement Learning for Autonomous Accelerators

 


RL4AA'25 workshop group photo

RL4AA'25 at DESY in Hamburg was a blast! Thank you everyone!

Wow! The RL4AA'25 workshop at DESY in Hamburg was a blast, and we couldn’t be prouder of the community we have brought together! 😊 It’s unbelievable that we started just over 2 years ago with 30-ish participants, and this year we hosted our third workshop with almost 80 attendees from all over the world, two brilliant keynotes by Jan Peters and Alessandro Pau, and a very well-received hands-on RL challenge, where the competing teams far exceeded our expectations. Various interesting talks, intriguing posters, lab and city tours, as well as social events were also on our schedule. ...

April 7, 2025 · 239 words · RL4AA Collaboration
Hamburg Speicherstadt.

Announcing RL4AA'25 taking place 2 - 4 April 2025 in Hamburg, Germany

We are delighted to announce the “3rd International Workshop on Reinforcement Learning for Autonomous Accelerators”, RL4AA'25, to be held 2 - 4 April 2025 at DESY in Hamburg, Germany. Following the very successful RL4AA'23 and RL4AA'24 workshops, the goal of this workshop is to exchange experiences and ideas about RL in the context of particle accelerators among both experts and beginners. We have an exciting workshop program lined up:

Two keynotes by RL experts: Jan Peters (University of Darmstadt); the second keynote speaker will be announced soon
Hands-on RL challenge
Contributed talks
Posters
Introduction to RL

More details:
Workshop website: https://rl4aa.github.io/RL4AA25/
Indico link: https://indico.scc.kit.edu/event/4216/
Call for abstracts: open until 24 January 2025
Registration: open right now! Registration deadline: 7 March 2025
Thanks to generous sponsorships, there are no registration fees!
Workshop: 2 - 4 April 2025, Hamburg, Germany

Please join us if you have worked in reinforcement learning or are simply interested and would like to start! This workshop is intended to foster discussions and to start interesting projects together. ...

October 17, 2024 · 298 words · RL4AA Collaboration
RL4AA'24 workshop group photo

Successful RL4AA'24 workshop in Salzburg: Thanks everyone for joining!

From 5 to 7 February 2024, IDA Lab at the Paris Lodron University of Salzburg kindly hosted the RL4AA community for the 2nd workshop on Reinforcement Learning for Autonomous Accelerators (RL4AA'24). With over 50 participants from more than 10 different countries, we are excited to see that our community is growing and that the interest in reinforcement learning (for particle accelerators) is increasing. In a total of 19 talks, we got to hear about the latest developments and impressive results in the field. ...

February 16, 2024 · 339 words · RL4AA Collaboration
RL4AA'24 flyer.

RL4AA'24 call for abstracts extended to 5 January. Exciting keynote speakers announced. Register now!

The call for abstracts for the RL4AA'24 workshop, taking place 05 - 07 February 2024 in Salzburg, Austria, has been extended. Register for the workshop and submit your abstract by 5 January 2024! We are also excited to announce our keynote speakers: Antonin Raffin (German Aerospace Center) and Felix Berkenkamp (Bosch Center for AI). For more information on the workshop, please see the official workshop website: https://rl4aa.github.io/RL4AA24/ To go directly to registration, please visit: https://indico.scc.kit.edu/event/3746/ ...

December 22, 2023 · 95 words · RL4AA Collaboration
Salzburg.

Registration is now open for RL4AA'24 taking place 05 - 07 February 2024 in Salzburg, Austria

Announcing RL4AA'24 - Registration is open! Following up on the very successful RL4AA'23 workshop in Karlsruhe earlier this year, we are excited to announce the 2nd RL4AA workshop, RL4AA'24, which will be held in Salzburg, Austria, from 05 to 07 February 2024. The workshop will be hosted at the Paris Lodron University of Salzburg. We are looking forward to an exciting workshop with many interesting talks and discussions on reinforcement learning for autonomous particle accelerators and hope to see you all in Salzburg! ...

August 18, 2023 · 96 words · RL4AA Collaboration
Simplified 3D illustration of the considered section of the ARES particle accelerator.

Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning

J. Kaiser¹, C. Xu², A. Eichler¹, A. Santamaria Garcia², O. Stein¹, E. Bründermann², W. Kuropka¹, H. Dinter¹, F. Mayet¹, T. Vinatier¹, F. Burkart¹, H. Schlarb¹
¹ Deutsches Elektronen-Synchrotron DESY, ² Karlsruhe Institute of Technology KIT
arXiv

Abstract: Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods, such as Reinforcement Learning-trained Optimisation (RLO) and Bayesian optimisation (BO), hold great promise for achieving outstanding plant performance and reducing tuning times. Which algorithm to choose in different scenarios, however, remains an open question. Here we present a comparative study using a routine task in a real particle accelerator as an example, showing that RLO generally outperforms BO, but is not always the best choice. Based on the study’s results, we provide a clear set of criteria to guide the choice of algorithm for a given tuning task. These can ease the adoption of learning-based autonomous tuning solutions to the operation of complex real-world plants, ultimately improving the availability and pushing the limits of operability of these facilities, thereby enabling scientific and engineering advancements. ...
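
For readers new to the two method families compared above, here is a minimal, self-contained sketch of how their tuning loops differ: BO must explore the parameter space anew on every run, whereas RLO amortises exploration into training, so that at tuning time a pre-trained policy simply acts. The toy objective, the dummy_policy, and all parameter choices are illustrative assumptions; this is not the ARES setup or the paper's code.

```python
# Illustrative sketch only (not the paper's code): contrasts the loop structure of
# Bayesian optimisation (BO) with applying a pre-trained RL policy (RLO) online.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def beam_objective(x):
    """Toy stand-in for a measured tuning objective (higher is better)."""
    return -float(np.sum((x - 0.3) ** 2)) + 0.01 * rng.normal()

def tune_with_bo(n_steps=30, dim=4):
    """BO explores the parameter space from scratch on every tuning run."""
    X = list(rng.uniform(-1, 1, size=(3, dim)))      # a few initial samples
    y = [beam_objective(x) for x in X]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_steps):
        gp.fit(np.array(X), np.array(y))
        # Upper-confidence-bound acquisition over random candidate settings.
        candidates = rng.uniform(-1, 1, size=(256, dim))
        mu, sigma = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(mu + 2.0 * sigma)]
        X.append(x_next)
        y.append(beam_objective(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]

def tune_with_rlo(policy, n_steps=10, dim=4):
    """RLO amortises exploration into training; at tuning time the policy just acts."""
    x = rng.uniform(-1, 1, dim)
    for _ in range(n_steps):
        x = x + policy(x, beam_objective(x))         # policy proposes a correction
    return x, beam_objective(x)

def dummy_policy(x, reading):
    """Hypothetical 'trained' policy; in practice this comes from RL training in simulation."""
    return 0.1 * (0.3 - x)

print("BO best objective:  ", tune_with_bo()[1])
print("RLO final objective:", tune_with_rlo(dummy_policy)[1])
```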

June 6, 2023 · 201 words · RL4AA Collaboration
Discord logo.

The RL4AA Discord server is up!

Good News! The RL4AA community is happy to announce its Discord server! If you are interested in discussing reinforcement learning applied to accelerators, please join for announcements (e.g. new publications), forum discussions, an open chat, and meeting rooms. Hope to see you there! https://discord.gg/QtBMqsjWH2

June 2, 2023 · 44 words · RL4AA Collaboration
Overview of the training loop and the structure of the simulated environment

Trend-Based SAC Beam Control Method with Zero-Shot in Superconducting Linear Accelerator

X. Chen, X. Qi, C. Su, Y. He, Z. Wang, K. Sun, C. Jin, W. Chen, S. Liu, X. Zhao, D. Jia, M. Yi
Chinese Academy of Sciences
arXiv

Abstract: The superconducting linear accelerator is a highly flexible facility for modern scientific discoveries, necessitating weekly reconfiguration and tuning. Accordingly, minimizing setup time proves essential in affording users ample experimental time. We propose a trend-based soft actor-critic (TBSAC) beam control method with strong robustness, allowing agents to be trained in a simulated environment and applied to the real accelerator directly, zero-shot. To validate the effectiveness of our method, two typical beam control tasks were performed on the China Accelerator Facility for Superheavy Elements (CAFe II) and a light particle injector (LPI), respectively. Orbit correction tasks were performed separately in three cryomodules of CAFe II: the time required for tuning was reduced to one-tenth of that needed by human experts, and the RMS values of the corrected orbits were all less than 1 mm. The transmission efficiency optimization task was conducted on the LPI, where our agent optimized the transmission efficiency of the radio-frequency quadrupole (RFQ) to over 85% within 2 minutes. The outcomes of these two experiments substantiate that our proposed TBSAC approach can efficiently and effectively accomplish beam commissioning tasks while upholding the same standard as skilled human experts. As such, our method exhibits potential for future applications in other accelerator commissioning fields. ...
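
As a rough illustration of the train-in-simulation, deploy-zero-shot workflow described in the abstract, the sketch below uses Soft Actor-Critic from Stable-Baselines3 (assumed version 2.x with Gymnasium). Gymnasium's Pendulum-v1 merely stands in for a simulated beamline; the paper's trend-based observation design and the CAFe II / LPI environments are not reproduced here.

```python
# Minimal sketch of the "train in simulation, deploy zero-shot" pattern.
# Pendulum-v1 is a placeholder for a simulated accelerator environment.
import gymnasium as gym
from stable_baselines3 import SAC

# 1) Train the SAC agent entirely in the simulated environment.
sim_env = gym.make("Pendulum-v1")
model = SAC("MlpPolicy", sim_env, verbose=0)
model.learn(total_timesteps=20_000)

# 2) Deploy the frozen policy on the "real" system without further training
#    (here the same env again; in practice this would be the machine interface).
real_env = gym.make("Pendulum-v1")
obs, _ = real_env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)   # zero-shot: no updates
    obs, reward, terminated, truncated, _ = real_env.step(action)
    if terminated or truncated:
        obs, _ = real_env.reset()
```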

May 23, 2023 · 244 words · RL4AA Collaboration
Overview of the steering method.

Ultra-fast reinforcement learning demonstrated at CERN AWAKE

Simon Hirlaender, Lukas Lamminger, Giovanni Zevi Della Porta, Verena Kain

Abstract: Reinforcement learning (RL) is a promising direction in machine learning for the control and optimisation of particle accelerators, since it learns directly from experience without needing a model a priori. However, RL generally suffers from low sample efficiency, and thus training from scratch on the machine is often not an option. RL agents are usually trained or pre-tuned on simulators and then transferred to the real environment. In this work we propose a model-based RL approach based on Gaussian processes (GPs) to overcome the sample efficiency limitation. Our RL agent was able to learn to control the trajectory at the CERN AWAKE (Advanced Wakefield Experiment) facility, a problem with 10 degrees of freedom, within only a few interactions. To date, numerical optimisers are used to restore or increase and stabilise the performance of accelerators. A major drawback is that they must explore the optimisation space each time they are applied. Our RL approach learns as quickly as numerical optimisers for one optimisation run, but can be used afterwards as a single-shot or few-shot controller. Furthermore, it can also handle safety and time-varying systems and can be used for the online stabilisation of accelerator operation. This approach opens a new avenue for the application of RL in accelerator control and brings it into the realm of everyday applications. ...
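
The sketch below illustrates the general idea behind such sample-efficient, GP-based model learning for trajectory steering: probe the machine a handful of times, fit a Gaussian-process surrogate of the corrector-to-reading response, and then plan corrections on the surrogate instead of on the machine. The linear toy "machine", the kernel choice, and the dimensions are hypothetical stand-ins, not the authors' AWAKE implementation.

```python
# Illustrative sketch of GP-surrogate-based trajectory correction (toy example).
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
n_correctors = n_bpms = 3                       # low-dimensional toy (the AWAKE task has 10 DoF)
R = rng.normal(size=(n_bpms, n_correctors))     # unknown corrector-to-BPM response (toy)
distortion = 0.2 * rng.normal(size=n_bpms)      # initial trajectory distortion

def read_bpms(u):
    """Toy machine: BPM readings for corrector settings u, with measurement noise."""
    return distortion + R @ u + 0.01 * rng.normal(size=n_bpms)

# A handful of probing interactions with the "machine".
U = rng.uniform(-0.5, 0.5, size=(15, n_correctors))
Y = np.array([read_bpms(u) for u in U])

# Fit one GP per BPM on the probe data: this is the learned surrogate model.
gps = []
for i in range(n_bpms):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(U, Y[:, i])
    gps.append(gp)

def predicted_rms(u):
    """RMS trajectory offset predicted by the GP surrogate."""
    preds = np.array([gp.predict(np.atleast_2d(u))[0] for gp in gps])
    return float(np.sqrt(np.mean(preds ** 2)))

# Plan on the model instead of the machine: minimise the predicted RMS offset.
u_best = minimize(predicted_rms, x0=np.zeros(n_correctors), method="Nelder-Mead").x

print("RMS before correction:", np.sqrt(np.mean(read_bpms(np.zeros(n_correctors)) ** 2)))
print("RMS after correction: ", np.sqrt(np.mean(read_bpms(u_best) ** 2)))
```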

May 1, 2023 · 233 words · RL4AA Collaboration
Salzburg.

The 2nd RL4AA workshop 2024 will be held 05 - 07 February 2024 in Salzburg

Good News - save the date! Our first workshop, RL4AA'23, was very successful. Because of this, we are hoping to hold the 2nd RL4AA workshop in Salzburg, Austria, on 05 - 07 February 2024. Further details will follow soon.

April 27, 2023 · 43 words · RL4AA Collaboration