RL4AA'24 workshop group photo

Successful RL4AA'24 workshop in Salzburg: Thanks to everyone for joining!

From 5 to 7 February 2024, the IDA Lab at the Paris Lodron University of Salzburg kindly hosted the RL4AA community for the 2nd workshop on Reinforcement Learning for Autonomous Accelerators (RL4AA'24). With over 50 participants from more than 10 different countries, we are excited to see that our community is growing and that interest in reinforcement learning (for particle accelerators) is increasing. Across a total of 19 talks, we heard about the latest developments and impressive results in the field....

February 16, 2024 · 339 words · RL4AA Collaboration
AWAKE beamline showing the locations of the matching devices (actions) and the observation BTV.

Towards automatic setup of 18 MeV electron beamline using machine learning

F. M. Velotti¹, B. Goddard¹, V. Kain¹, R. Ramjiawan¹, G. Z. Della Porta¹ and S. Hirlaender²
¹CERN, ²University of Salzburg
Machine Learning: Science and Technology

Abstract: To improve the performance-critical stability and brightness of the electron bunch at injection into the proton-driven plasma wakefield at the AWAKE CERN experiment, automation approaches based on unsupervised machine learning (ML) were developed and deployed. Numerical optimisers were tested together with different model-free reinforcement learning (RL) agents....
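The beamline settings act as the agent's actions and the BTV reading provides the observation. As a rough illustration of how such a tuning task can be framed for numerical optimisers or model-free RL agents, here is a minimal, hypothetical gym-style environment; the knob count, the toy beam-size response, and all names are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): a gym-style wrapper for a beamline
# matching task where actions adjust matching-device strengths and the goal
# is to minimise the spot size seen on a screen. The "beamline response" is a toy.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ToyBeamlineEnv(gym.Env):
    """Toy stand-in for a beamline matching task."""

    def __init__(self, n_knobs=4):
        self.n_knobs = n_knobs
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_knobs,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_knobs,), dtype=np.float32)
        self._target = np.zeros(n_knobs, dtype=np.float32)  # hypothetical matched settings

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self._settings = self.np_random.uniform(-1, 1, self.n_knobs).astype(np.float32)
        return self._settings.copy(), {}

    def step(self, action):
        # Apply a small trim of the matching-device settings.
        self._settings = np.clip(self._settings + 0.1 * np.asarray(action, dtype=np.float32), -1, 1)
        # Toy "spot size": distance of the current settings from the matched point.
        spot_size = float(np.linalg.norm(self._settings - self._target))
        reward = -spot_size
        terminated = spot_size < 0.05
        return self._settings.copy(), reward, terminated, False, {}
```

A derivative-free numerical optimiser or a model-free RL agent (e.g. from a library such as Stable-Baselines3) could then be run against this interface, mirroring the comparison the paper carries out on the real machine.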

April 27, 2023 · 189 words · RL4AA Collaboration
Episodes from the best NAF2 agent and the PI controller, starting from the same initial states, with varying additive Gaussian action noise (zero mean, standard deviation given as a percentage of the half action space [0, 1]): (A) 0%, (B) 10%, (C) 25%, and (D) 50% Gaussian action noise.

Application of reinforcement learning in the LHC tune feedback

L. Grech¹, G. Valentino¹, D. Alves² and S. Hirlaender³
¹University of Malta, ²CERN, ³University of Salzburg
Frontiers in Physics

Abstract: The Beam-Based Feedback System (BBFS) was primarily responsible for correcting the beam energy, orbit and tune in the CERN Large Hadron Collider (LHC). A major code renovation of the BBFS was planned and carried out during the LHC Long Shutdown 2 (LS2). This work consists of an explorative study to solve a beam-based control problem, the tune feedback (QFB), utilising state-of-the-art Reinforcement Learning (RL)....
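The figure above probes robustness by perturbing the agent's actions with zero-mean Gaussian noise whose standard deviation is a given percentage of the half action space. Below is a minimal sketch of that kind of perturbation, assuming a normalised action range; the bounds and the helper name are illustrative, not taken from the paper.

```python
# Minimal sketch (assumption, not the paper's code): inject zero-mean Gaussian
# action noise whose standard deviation is a fraction of the half action space.
import numpy as np


def noisy_action(action, noise_frac, action_low=-1.0, action_high=1.0, rng=None):
    """Perturb an action with N(0, (noise_frac * half_range)^2) noise and clip to bounds."""
    rng = np.random.default_rng() if rng is None else rng
    half_range = 0.5 * (action_high - action_low)
    noise = rng.normal(0.0, noise_frac * half_range, size=np.shape(action))
    return np.clip(action + noise, action_low, action_high)
```

Sweeping noise_frac over 0, 0.1, 0.25 and 0.5 corresponds to the noise levels of panels (A) to (D) in the figure.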

September 7, 2022 · 168 words · RL4AA Collaboration
A schematic overview of the AE-DYNA approach used in this paper.

Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL

S. Hirlaender¹, N. Bruchon²
¹University of Salzburg, ²University of Trieste
arXiv

Abstract: Reinforcement learning holds tremendous promise in accelerator controls. The primary goal of this paper is to show how this approach can be utilised on an operational level on accelerator physics problems. Despite the success of model-free reinforcement learning in several domains, sample efficiency is still a bottleneck, which might be addressed by model-based methods. We compare well-suited, purely model-based to model-free reinforcement learning applied to the intensity optimisation on the FERMI FEL system....
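The model-based side of the comparison rests on learning a dynamics model from few interactions and quantifying its uncertainty through an ensemble. The sketch below is a rough, hypothetical illustration of that ensembling idea, not the AE-DYNA implementation; network sizes, class names and the omitted training loop are placeholders.

```python
# Minimal sketch (assumption): an ensemble of small dynamics models whose mean
# prediction serves as the surrogate dynamics and whose spread across members
# acts as an uncertainty estimate.
import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    """One ensemble member: predicts the next state from (state, action)."""

    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


class Ensemble:
    """Mean prediction for planning; std across members as model uncertainty."""

    def __init__(self, n_models, state_dim, action_dim):
        self.models = [DynamicsModel(state_dim, action_dim) for _ in range(n_models)]

    def predict(self, state, action):
        with torch.no_grad():
            preds = torch.stack([m(state, action) for m in self.models])
        return preds.mean(dim=0), preds.std(dim=0)
```

Planning or policy training would then roll out on the ensemble mean, while large disagreement between members can flag regions of state-action space where the learned model should not be trusted.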

December 17, 2020 · 158 words · RL4AA Collaboration