RL environment for beam optimisation in the ARES EA.

First Steps Toward an Autonomous Accelerator, A Common Project Between DESY and KIT

A. Eichler1, F. Burkart1, J. Kaiser1, W. Kuropka1, O. Stein1, E. Bründermann2, A. Santamaria Garcia2, C. Xu2 1Deutsches Elektronen-Synchrotron DESY, 2Karlsruhe Institute of Technology KIT 12th International Particle Accelerator Conference Abstract Reinforcement learning algorithms have risen in popularity in the accelerator physics community in recent years, showing potential in beam control and in the optimization and automation of tasks in accelerator operation. The Helmholtz AI project “Machine Learning Toward Autonomous Accelerators” is a collaboration between DESY and KIT that works on investigating and developing reinforcement learning applications for the automatic start-up of electron linear accelerators....
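The caption above refers to an RL environment for beam optimisation in the ARES EA. As a purely illustrative sketch (the magnet count, observation layout, and reward below are assumptions, not the project's actual environment), a Gym-style interface for such a beam-tuning task might look like this:

```python
# Hypothetical sketch of a Gym-style beam-optimisation environment, only to
# illustrate the interface such an RL environment typically exposes.  The
# magnet settings, observation layout, and reward are assumptions, not the
# actual ARES EA implementation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class BeamOptimisationEnv(gym.Env):
    """Actions set magnet strengths; the reward penalises the distance of the
    observed beam parameters from a target on a diagnostic screen."""

    def __init__(self):
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(5,))              # 5 magnet settings
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(4,))   # mu_x, sigma_x, mu_y, sigma_y
        self.target = np.zeros(4, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._settings = np.zeros(5, dtype=np.float32)
        return self._observe(), {}

    def step(self, action):
        self._settings = np.clip(action, -1.0, 1.0)
        obs = self._observe()
        reward = -float(np.linalg.norm(obs - self.target))   # closer to the target beam = higher reward
        return obs, reward, False, False, {}

    def _observe(self):
        # Placeholder "beam response": in practice this would come from a
        # particle-tracking simulation or from the live diagnostics.
        return (0.1 * self._settings[:4] + 0.01 * self.np_random.normal(size=4)).astype(np.float32)
```

An off-the-shelf agent could then be trained against this interface, with the placeholder `_observe` replaced by a tracking simulation or the real machine.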

May 24, 2021 · 185 words · RL4AA Collaboration
Reinforcement learning agent combined with the physics-based polynomial neural network.

Physics-Enhanced Reinforcement Learning for Optimal Control

A. Ivanov, I. Agapov, A. Eichler, S. Tomin Deutsches Elektronen-Synchrotron DESY 12th International Particle Accelerator Conference Abstract We propose an approach for incorporating accelerator physics models into reinforcement learning agents. The proposed approach is based on the Taylor mapping technique for the simulation of particle dynamics. The resulting computational graph is represented as a polynomial neural network and embedded into the traditional reinforcement learning agents. The application of the model is demonstrated in a nonlinear simulation model of beam transmission....
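The abstract describes representing the Taylor-map simulation of particle dynamics as a polynomial neural network that is then embedded into an RL agent. A minimal sketch of what such a layer could look like, assuming a second-order map and PyTorch (an illustration of the general idea, not the authors' implementation):

```python
# Minimal sketch (not the paper's code): a second-order Taylor-map layer.
# The map x -> w0 + W1 x + W2 (x ⊗ x) is the polynomial building block that
# Taylor-map tracking produces; stacking such layers gives a "polynomial
# neural network" whose weights can be initialised from lattice physics.
import torch
import torch.nn as nn

class TaylorMapLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w0 = nn.Parameter(torch.zeros(dim))             # constant term
        self.w1 = nn.Parameter(torch.eye(dim))                # linear (transfer-matrix) term
        self.w2 = nn.Parameter(torch.zeros(dim, dim * dim))   # second-order term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) phase-space coordinates
        quad = torch.einsum("bi,bj->bij", x, x).reshape(x.shape[0], -1)
        return self.w0 + x @ self.w1.T + quad @ self.w2.T

# One layer per lattice element; their composition is a differentiable
# beamline model that an RL agent can use as an embedded environment.
beamline = nn.Sequential(TaylorMapLayer(6), TaylorMapLayer(6))
x0 = torch.randn(16, 6)    # a batch of particle coordinates
x1 = beamline(x0)          # tracked coordinates
```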

May 21, 2021 · 110 words · RL4AA Collaboration
Simple scheme of the FERMI FEL seed laser alignment setup.

Feasibility Investigation on Several Reinforcement Learning Techniques to Improve the Performance of the FERMI Free-Electron Laser

N. Bruchon University of Trieste PhD thesis Abstract The research carried out in particle accelerator facilities does not concern only particle and condensed matter physics, although these are the main topics covered in the field. Indeed, since a particle accelerator is composed of many different sub-systems, its proper functioning depends both on each of these parts and their interconnection. It follows that the study, implementation, and improvement of the various sub-systems are fundamental points of investigation too....

March 18, 2021 · 322 words · RL4AA Collaboration
Plot of the reward received by the agent versus step number.

Policy gradient methods for free-electron laser and terahertz source optimization and stabilization at the FERMI free-electron laser at Elettra

F. H. O’Shea1, N. Bruchon2, G. Gaio1 1Elettra Sincrotrone Trieste, 2University of Trieste Physical Review Accelerators and Beams Abstract In this article we report on the application of a model-free reinforcement learning method to the optimization of accelerator systems. We simplify a policy gradient algorithm to accelerator control from sophisticated algorithms that have recently been demonstrated to solve complex dynamic problems. After outlining a theoretical basis for the functioning of the algorithm, we explore the small hyperparameter space to develop intuition about said parameters using a simple number-guess environment....
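The abstract mentions building intuition for the hyperparameters on a simple number-guess environment. Below is a minimal sketch of a REINFORCE-style policy gradient on such a toy task; the environment and update details are assumptions for illustration, not the paper's exact algorithm.

```python
# Minimal sketch (assumptions, not the paper's code): REINFORCE with a
# Gaussian policy on a toy "guess the number" environment.  The agent's
# only parameter is the mean of its guess; reward grows as the guess
# lands closer to the hidden target.
import numpy as np

rng = np.random.default_rng(0)
target = 0.7           # hidden number the agent must learn to guess
mu, sigma = 0.0, 0.2   # policy mean (learned) and fixed exploration width
baseline, lr = 0.0, 0.05

for step in range(2000):
    guess = rng.normal(mu, sigma)         # sample an action from the Gaussian policy
    reward = -abs(guess - target)         # dense reward: negative distance to the target
    # REINFORCE: gradient of log pi(a | mu) w.r.t. mu is (a - mu) / sigma^2
    grad_log_pi = (guess - mu) / sigma**2
    mu += lr * (reward - baseline) * grad_log_pi   # ascend the policy gradient
    baseline += 0.05 * (reward - baseline)         # running-average baseline (variance reduction)

print(f"learned mean {mu:.3f} (hidden target {target})")
```

After training, the policy mean settles near the hidden target, and the learning rate, exploration width, and baseline step are exactly the kind of hyperparameters such a toy task helps to build intuition for.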

December 21, 2020 · 160 words · RL4AA Collaboration
A schematic overview of the AE-DYNA approach used in this paper.

Model-free and Bayesian Ensembling Model-based Deep Reinforcement Learning for Particle Accelerator Control Demonstrated on the FERMI FEL

S. Hirlaender1, N. Bruchon2 1University of Salzburg, 2University of Trieste arXiv Abstract Reinforcement learning holds tremendous promise in accelerator controls. The primary goal of this paper is to show how this approach can be utilised on an operational level on accelerator physics problems. Despite the success of model-free reinforcement learning in several domains, sample-efficiency still is a bottle-neck, which might be encompassed by model-based methods. We compare well-suited purely model-based to model-free reinforcement learning applied to the intensity optimisation on the FERMI FEL system....
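The abstract contrasts model-free RL with model-based RL, where an ensemble of learned dynamics models provides sample efficiency and uncertainty estimates. A minimal sketch of the ensembling idea, assuming small PyTorch dynamics models (a generic illustration, not the AE-DYNA implementation):

```python
# Minimal sketch (an assumption about the general idea, not the paper's code):
# an ensemble of small dynamics models.  Each member is trained on the same
# transitions; the spread of their predictions gives an epistemic-uncertainty
# estimate that a model-based agent can use to stay where the model is trusted.
import torch
import torch.nn as nn

def make_model(obs_dim: int, act_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(),
                         nn.Linear(64, obs_dim))

obs_dim, act_dim, n_members = 4, 2, 5
ensemble = [make_model(obs_dim, act_dim) for _ in range(n_members)]
optims = [torch.optim.Adam(m.parameters(), lr=1e-3) for m in ensemble]

# Toy replay buffer of (state, action, next_state) transitions.
s = torch.randn(256, obs_dim)
a = torch.randn(256, act_dim)
s_next = s + 0.1 * a.sum(dim=1, keepdim=True)   # placeholder dynamics

for model, opt in zip(ensemble, optims):
    for _ in range(200):                        # a few gradient steps per member
        pred = model(torch.cat([s, a], dim=1))
        loss = ((pred - s_next) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Prediction: the ensemble mean drives model rollouts, the std flags uncertainty.
with torch.no_grad():
    preds = torch.stack([m(torch.cat([s[:1], a[:1]], dim=1)) for m in ensemble])
print("mean:", preds.mean(0), "epistemic std:", preds.std(0))
```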

December 17, 2020 · 158 words · RL4AA Collaboration
Policy network maps states to actions.

Autonomous Control of a Particle Accelerator using Deep Reinforcement Learning

X. Pang1, S. Thulasidasan2, L. Rybarcyk2 1Apple, 2Los Alamos National Laboratory Machine Learning for Engineering Modeling, Simulation, and Design Workshop at Neural Information Processing Systems 2020 Abstract We describe an approach to learning optimal control policies for a large, linear particle accelerator using deep reinforcement learning coupled with a high-fidelity physics engine. The framework consists of an AI controller that uses deep neural networks for state and action-space representation and learns optimal policies using reward signals that are provided by the physics simulator....
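The abstract describes a controller whose policies are learned from reward signals supplied by a high-fidelity physics simulator. The sketch below illustrates only that interaction loop, with a toy quadratic stand-in for the simulator and simple hill climbing in place of the paper's deep RL agent; all names and quantities here are placeholders.

```python
# Minimal sketch of the agent/simulator loop: the policy proposes accelerator
# settings, the physics simulator evaluates them and returns the reward.
# Everything here is a placeholder, not the paper's framework or simulator.
import numpy as np

def physics_simulator(settings: np.ndarray) -> float:
    """Stand-in for a high-fidelity simulation: reward peaks when the
    settings match a hidden optimum."""
    optimum = np.array([0.3, -0.2, 0.5])
    return -float(np.sum((settings - optimum) ** 2))

class LinearPolicy:
    """Tiny stand-in for the deep policy network described in the abstract."""
    def __init__(self, dim: int, rng):
        self.w = np.zeros(dim)
        self.rng = rng

    def act(self, noise: float = 0.1) -> np.ndarray:
        return self.w + noise * self.rng.normal(size=self.w.shape)

rng = np.random.default_rng(0)
policy = LinearPolicy(3, rng)

# Hill-climbing update driven purely by the simulator's reward signal.
best_reward = physics_simulator(policy.w)
for episode in range(500):
    candidate = policy.act()
    reward = physics_simulator(candidate)   # reward comes from the simulator
    if reward > best_reward:                # keep improvements
        policy.w, best_reward = candidate, reward

print("learned settings:", policy.w, "reward:", best_reward)
```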

December 12, 2020 · 150 words · RL4AA Collaboration
The RL paradigm as applied to particle accelerator control, showing the example of trajectory correction.

Sample-efficient reinforcement learning for CERN accelerator control

V. Kain1, S. Hirlander1, B. Goddard1, F. M. Velotti1, G. Z. Della Porta1, N. Bruchon2, G. Valentino3 1CERN, 2University of Trieste, 3University of Malta Physical Review Accelerators and Beams Abstract Numerical optimization algorithms are already established tools to increase and stabilize the performance of particle accelerators. These algorithms have many advantages, are available out of the box, and can be adapted to a wide range of optimization problems in accelerator operation....

December 1, 2020 · 185 words · RL4AA Collaboration
Schematic for neural network control policy updates.

Neural Networks for Modeling and Control of Particle Accelerators

A. L. Edelen Colorado State University PhD thesis Abstract Charged particle accelerators support a wide variety of scientific, industrial, and medical applications. They range in scale and complexity from systems with just a few components for beam acceleration and manipulation, to large scientific user facilities that span many kilometers and have hundreds-to-thousands of individually-controllable components. Specific operational requirements must be met by adjusting the many controllable variables of the accelerator. Meeting these requirements can be challenging, both in terms of the ability to achieve specific beam quality metrics in a reliable fashion and in terms of the time needed to set up and maintain the optimal operating conditions....

July 1, 2020 · 322 words · RL4AA Collaboration
Simple scheme of the FERMI FEL seed laser alignment setup.

Basic Reinforcement Learning Techniques to Control the Intensity of a Seeded Free-Electron Laser

N. Bruchon1, G. Fenu1, G. Gaio2, M. Lonza2, F. H. O’Shea2, F. A. Pellegrino1, E. Salvato1 1University of Trieste, 2Elettra Sincrotrone Trieste Electronics Abstract Optimal tuning of particle accelerators is a challenging task. Many different approaches have been proposed in the past to solve two main problems—attainment of an optimal working point and performance recovery after machine drifts. The most classical model-free techniques (e.g., Gradient Ascent or Extremum Seeking algorithms) have some intrinsic limitations....

May 9, 2020 · 206 words · RL4AA Collaboration
Simple scheme of the EOS laser alignment setup.

Toward the Application of Reinforcement Learning to the Intensity Control of a Seeded Free-Electron Laser

N. Bruchon, G. Fenu, G. Gaio, M. Lonza, F. A. Pellegrino, E. Salvato University of Trieste 23rd International Conference on Mechatronics Technology Abstract The optimization of particle accelerators is a challenging task, and many different approaches have been proposed over the years to obtain an optimal tuning of the plant and to keep it optimally tuned despite drifts or disturbances. Indeed, the classical model-free approaches (such as Gradient Ascent or Extremum Seeking algorithms) have intrinsic limitations....

October 23, 2019 · 222 words · RL4AA Collaboration