Simple scheme of the FERMI FEL seed laser alignment setup.

Basic Reinforcement Learning Techniques to Control the Intensity of a Seeded Free-Electron Laser

N. Bruchon1, G. Fenu1, G. Gaio2, M. Lonza2, F. H. O'Shea2, F. A. Pellegrino1, E. Salvato1
1University of Trieste, 2Elettra Sincrotrone Trieste
Electronics

Abstract: Optimal tuning of particle accelerators is a challenging task. Many different approaches have been proposed in the past to solve two main problems: attainment of an optimal working point and performance recovery after machine drifts. The most classical model-free techniques (e.g., Gradient Ascent or Extremum Seeking algorithms) have some intrinsic limitations. To overcome those limitations, Machine Learning tools, in particular Reinforcement Learning (RL), are attracting more and more attention in the particle accelerator community. We investigate the feasibility of RL model-free approaches to align the seed laser, as well as other service lasers, at FERMI, the free-electron laser facility at Elettra Sincrotrone Trieste. We apply two different techniques: the first, based on episodic Q-learning with linear function approximation, for performance optimization; the second, based on the continuous Natural Policy Gradient REINFORCE algorithm, for performance recovery. Despite the simplicity of these approaches, we report satisfactory preliminary results that represent the first step toward a new fully automatic procedure for the alignment of the seed laser to the electron beam. Such an alignment is, at present, performed manually. ...

May 9, 2020 · 206 words · RL4AA Collaboration
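
The abstract names the continuous Natural Policy Gradient REINFORCE algorithm for performance recovery. A minimal sketch of that idea, under stated assumptions: a one-dimensional toy "drift recovery" task (state is a misalignment, the action nudges it, reward is the negative squared miss, standing in for an intensity loss) and a scalar linear-Gaussian policy. This illustrates the algorithm class, not the paper's actual controller.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1    # fixed exploration noise of the Gaussian policy (assumption)
GAMMA = 0.99   # discount factor (assumption)

def rollout(theta, horizon=20):
    """Run one episode of the toy drift task; return (state, action, reward) tuples."""
    s = rng.normal(0.0, 1.0)                 # random initial misalignment (drift)
    traj = []
    for _ in range(horizon):
        a = rng.normal(theta * s, SIGMA)     # linear-Gaussian policy
        r = -(s + a) ** 2                    # reward: stay aligned at zero
        traj.append((s, a, r))
        s = s + a
    return traj

def npg_update(theta, batch=16, lr=0.05):
    """One Natural Policy Gradient REINFORCE step: F^-1 times the vanilla gradient."""
    grad, fisher = 0.0, 0.0
    for _ in range(batch):
        traj = rollout(theta)
        # discounted return-to-go G_t for every step
        G, returns = 0.0, []
        for (_, _, r) in reversed(traj):
            G = r + GAMMA * G
            returns.append(G)
        returns.reverse()
        for (s, a, _), G_t in zip(traj, returns):
            score = (a - theta * s) * s / SIGMA**2   # d/dtheta log pi(a|s)
            grad += score * G_t                      # REINFORCE gradient term
            fisher += score**2                       # scalar Fisher information estimate
    grad /= batch
    fisher /= batch
    return theta + lr * grad / (fisher + 1e-8)       # natural gradient step

theta = 0.0
for _ in range(200):
    theta = npg_update(theta)
print("learned gain:", theta)   # expected to approach -1 (cancel the misalignment)
```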
Simple scheme of the EOS laser alignment setup.

Toward the Application of Reinforcement Learning to the Intensity Control of a Seeded Free-Electron Laser

N. Bruchon, G. Fenu, G. Gaio, M. Lonza, F. A. Pellegrino, E. Salvato
University of Trieste
23rd International Conference on Mechatronics Technology

Abstract: The optimization of particle accelerators is a challenging task, and many different approaches have been proposed over the years to obtain an optimal tuning of the plant and to keep it optimally tuned despite drifts or disturbances. Indeed, the classical model-free approaches (such as Gradient Ascent or Extremum Seeking algorithms) have intrinsic limitations. To overcome those limitations, Machine Learning techniques, in particular Reinforcement Learning, are attracting more and more attention in the particle accelerator community. The purpose of this paper is to apply a Reinforcement Learning model-free approach to the alignment of a seed laser, based on a rather general target function depending on the laser trajectory. The study focuses on the alignment of the lasers at FERMI, the free-electron laser facility at Elettra Sincrotrone Trieste. In particular, we employ Q-learning with linear function approximation and report experimental results obtained in two setups, which are the actual setups where the final application has to be deployed. Despite the simplicity of the approach, we report satisfactory preliminary results that represent the first step toward a fully automatic procedure for the superimposition of the seed laser to the electron beam. Such a superimposition is, at present, performed manually. ...

October 23, 2019 · 222 words · RL4AA Collaboration
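
This entry names Q-learning with linear function approximation for laser alignment. A hedged sketch of that technique on a toy stand-in environment: the state is a 2-D laser spot offset, the reward is a Gaussian "intensity" peaked at the aligned position, and four discrete actions step one axis by a fixed amount. The environment, step sizes, and quadratic features are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
STEP, ALPHA, GAMMA, EPS = 0.1, 0.05, 0.95, 0.1     # assumed hyperparameters
ACTIONS = np.array([[STEP, 0.0], [-STEP, 0.0], [0.0, STEP], [0.0, -STEP]])

def features(s):
    """Quadratic feature vector of the 2-D offset (an assumption)."""
    x, y = s
    return np.array([1.0, x, y, x * x, y * y, x * y])

def intensity(s):
    """Toy target function: Gaussian intensity, maximal at perfect alignment."""
    return np.exp(-np.sum(s ** 2))

W = np.zeros((len(ACTIONS), 6))   # one linear weight vector per action

def q_values(s):
    return W @ features(s)        # Q(s, a) = w_a . phi(s)

for episode in range(500):
    s = rng.uniform(-1, 1, size=2)                 # random initial misalignment
    for t in range(50):
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < EPS else int(np.argmax(q_values(s)))
        s_next = s + ACTIONS[a]
        r = intensity(s_next)
        # semi-gradient Q-learning (TD) update of the linear weights
        td_target = r + GAMMA * np.max(q_values(s_next))
        td_error = td_target - q_values(s)[a]
        W[a] += ALPHA * td_error * features(s)
        s = s_next
        if r > 0.99:                               # close enough to aligned
            break
```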
Scheme of the FERMI FEL seed laser alignment setup.

Free-electron Laser Optimization with Reinforcement Learning

N. Bruchon1, G. Gaio2, G. Fenu1, M. Lonza2, F. A. Pellegrino1, E. Salvato1
1University of Trieste, 2Elettra Sincrotrone Trieste
17th International Conference on Accelerator and Large Experimental Physics Control Systems

Abstract: Reinforcement Learning (RL) is one of the most promising techniques in Machine Learning because of its modest computational requirements with respect to other algorithms. RL uses an agent that takes actions within its environment to maximize a reward related to the goal it is designed to achieve. We have recently used RL as a model-free approach to improve the performance of the FERMI Free Electron Laser. A number of machine parameters are adjusted to find the optimum FEL output in terms of intensity and spectral quality. In particular, we focus on the problem of the alignment of the seed laser with the electron beam, initially using a simplified model and then applying the developed algorithm on the real machine. This paper reports the results obtained and discusses pros and cons of this approach with plans for future applications. ...

October 5, 2019 · 164 words · RL4AA Collaboration
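
The abstract frames FEL tuning as an agent adjusting machine parameters to maximize a reward built from the FEL output. A minimal Gym-style sketch of that interaction loop; the FELTuningEnv class, the parameter count, the toy Gaussian intensity model, and the random-perturbation agent are all illustrative assumptions standing in for the real control system.

```python
import numpy as np

class FELTuningEnv:
    """Toy stand-in: n machine parameters, reward = simulated FEL intensity."""

    def __init__(self, n_params=4):
        self.optimum = np.zeros(n_params)   # optimal working point, unknown to the agent
        self.params = None

    def reset(self):
        self.params = np.random.uniform(-1, 1, self.optimum.shape)
        return self.params.copy()

    def step(self, action):
        self.params = np.clip(self.params + action, -1, 1)
        # Gaussian intensity peaked at the optimum (assumption)
        reward = np.exp(-np.sum((self.params - self.optimum) ** 2))
        done = reward > 0.99
        return self.params.copy(), reward, done

env = FELTuningEnv()
state = env.reset()
for _ in range(100):                        # random agent as a placeholder policy
    action = np.random.uniform(-0.05, 0.05, state.shape)
    state, reward, done = env.step(action)
    if done:
        break
```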
General feedback scheme using the CSR power signal to construct both the state and the reward signal of the Markov decision process (MDP).

Feedback Design for Control of the Micro-Bunching Instability Based on Reinforcement Learning

T. Boltz, M. Brosi, E. Bründermann, B. Haerer, P. Kaiser, C. Pohl, P. Schreiber, M. Yan, T. Asfour, A.-S. Müller
Karlsruhe Institute of Technology (KIT)
10th International Particle Accelerator Conference

Abstract: The operation of ring-based synchrotron light sources with short electron bunches increases the emission of coherent synchrotron radiation (CSR) in the THz frequency range. However, the micro-bunching instability resulting from self-interaction of the bunch with its own radiation field limits stable operation with constant intensity of CSR emission to a particular threshold current. Above this threshold, the longitudinal charge distribution and thus the emitted radiation vary rapidly and continuously. Therefore, a fast and adaptive feedback system is the appropriate approach to stabilize the dynamics and to overcome the limitations given by the instability. In this contribution, we discuss first efforts towards a longitudinal feedback design that acts on the RF system of the KIT storage ring KARA (Karlsruhe Research Accelerator) and aims for stabilization of the emitted THz radiation. Our approach is based on methods of adaptive control that were developed in the field of reinforcement learning and have seen great success in other fields of research over the past decade. We motivate this particular approach and comment on different aspects of its implementation. ...

May 19, 2019 · 195 words · RL4AA Collaboration
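
The figure caption notes that both the MDP state and the reward are constructed from the CSR power signal. A hedged sketch of one plausible construction: the state compresses a recent window of power samples into a small feature vector, and the reward favors constant emission by penalizing fluctuation. The window length and the specific features are assumptions, not the paper's design.

```python
import numpy as np

def csr_state(power_window):
    """Compress a window of CSR power samples into a small MDP state vector."""
    p = np.asarray(power_window, dtype=float)
    spectrum = np.abs(np.fft.rfft(p - p.mean()))       # fluctuation spectrum
    dominant = spectrum[1:].max() if spectrum.size > 1 else 0.0
    return np.array([p.mean(), p.std(), dominant])     # level, spread, main mode

def csr_reward(power_window):
    """Stable CSR emission: penalize fluctuation around the mean power."""
    p = np.asarray(power_window, dtype=float)
    return -p.std()

# fake bursting-like trace as a usage example
window = np.sin(np.linspace(0, 20, 128)) + 1.5
print(csr_state(window), csr_reward(window))
```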
Layout of the accelerator.

Using a neural network control policy for rapid switching between beam parameters in an FEL

A. L. Edelen1, S. G. Biedron2, J. P. Edelen3, S. V. Milton4, P. J. M. van der Slot5
1Colorado State University, 2Element Aero, 3Fermi National Accelerator Laboratory, 4Los Alamos National Laboratory, 5University of Twente
38th International Free Electron Laser Conference

Abstract: FEL user facilities often must accommodate requests for a variety of beam parameters. This usually requires skilled operators to tune the machine, reducing the amount of available time for users. In principle, a neural network control policy that is trained on a broad range of operating states could be used to quickly switch between these requests without substantial need for human intervention. We present preliminary results from an ongoing study in which a neural network control policy is investigated for rapid switching between beam parameters in a compact THz FEL. ...

August 25, 2017 · 138 words · RL4AA Collaboration
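
The kind of control policy the abstract describes maps a requested operating point directly to machine settings, so switching beam parameters becomes a single forward pass instead of a manual tuning session. A minimal sketch of such a policy network; the layer sizes and the 3-parameter input / 5-setting output dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

# feed-forward policy: requested beam parameters -> machine settings
policy = nn.Sequential(
    nn.Linear(3, 64),    # input: requested beam parameters (assumed dimension)
    nn.Tanh(),
    nn.Linear(64, 64),
    nn.Tanh(),
    nn.Linear(64, 5),    # output: machine settings (assumed dimension)
)

request = torch.tensor([[0.5, -0.2, 0.1]])   # one requested operating point
settings = policy(request)                    # settings to apply to the machine
```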
General setup for the neural network model.

Using Neural Network Control Policies For Rapid Switching Between Beam Parameters in a Free Electron Laser

A. L. Edelen1, S. G. Biedron2, J. P. Edelen3, S. V. Milton4, P. J. M. van der Slot5
1Colorado State University, 2Element Aero, 3Fermi National Accelerator Laboratory, 4Los Alamos National Laboratory, 5University of Twente
Workshop on Deep Learning for Physical Sciences at the Conference on Neural Information Processing Systems 2017

Abstract: Free Electron Laser (FEL) facilities often must accommodate requests for a variety of electron beam parameters in order to supply scientific users with appropriate photon beam characteristics. This usually requires skilled human operators to tune the machine. In principle, a neural network control policy that is trained on a broad range of machine operating states could be used to quickly switch between these requests without substantial need for human intervention. We present preliminary results from an ongoing simulation study in which a neural network control policy is investigated for rapid switching between beam parameters in a compact THz FEL that exhibits nonlinear electron beam dynamics. To accomplish this, we first train a feed-forward neural network to mimic a physics-based simulation of the FEL. We then train a neural network control policy by first pre-training it as an inverse model (using supervised learning with a subset of the simulation data) and then training it more extensively with reinforcement learning. In this case, the reinforcement learning component consists of letting the policy network interact with the learned system model and backpropagating the cost through the model network to the controller network. ...

August 25, 2017 · 232 words · RL4AA Collaboration
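
The abstract spells out the training recipe: fit a forward neural-network model of the FEL, pre-train the policy as an inverse model, then refine the policy by pushing its output through the frozen learned model and backpropagating the cost into the policy weights. A minimal sketch of that last step; the network dimensions, the quadratic cost, and the random stand-in data are assumptions for illustration.

```python
import torch
import torch.nn as nn

# learned forward model: machine settings -> predicted beam parameters
model = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 3))
# control policy: requested beam parameters -> machine settings
policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 5))

for p in model.parameters():          # the learned FEL model stays frozen
    p.requires_grad_(False)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
targets = torch.randn(256, 3)         # requested beam parameters (stand-in data)

for epoch in range(200):
    settings = policy(targets)        # policy proposes machine settings
    predicted = model(settings)       # frozen model predicts the resulting beam
    cost = ((predicted - targets) ** 2).mean()   # miss relative to the request
    opt.zero_grad()
    cost.backward()                   # gradient flows through the model into the policy
    opt.step()
```

Freezing the model's parameters does not block the gradient from flowing through it, which is exactly what lets the cost train the controller network.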
Example of a simulation run.

Orbit Correction Studies Using Neural Networks

E. Meier, Y.-R. E. Tan, G. S. LeBlanc
Australian Synchrotron
3rd International Particle Accelerator Conference

Abstract: This paper reports the use of neural networks for orbit correction at the Australian Synchrotron Storage Ring. The proposed system uses two neural networks in an actor-critic scheme to model a long-term cost function and compute appropriate corrections. The system is entirely based on the history of the beam position and the actuators, i.e., the corrector magnets, in the storage ring. This makes the system auto-tuneable, which has the advantage of avoiding the measurement of a response matrix. The controller will automatically maintain an updated BPM corrector response matrix. In future, if coupled with some form of orbit response analysis, the system will have the potential to track drifts or changes to the lattice functions in "real time". As a generic and robust orbit correction program, it can be used during commissioning and in slow orbit feedback. In this study, we present positive initial results of the simulations of the storage ring in Matlab. ...

May 20, 2012 · 165 words · RL4AA Collaboration
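
The two-network scheme the abstract describes can be sketched as a pair of interfaces: a critic mapping a history of BPM readings and corrector settings to a long-term cost estimate, and an actor mapping the same history to the next corrector increments. The dimensions below are assumed, and the training procedure (TD learning for the critic, policy updates for the actor) is omitted; this only fixes the data flow.

```python
import torch
import torch.nn as nn

N_BPM, N_CORR, HIST = 8, 4, 3            # assumed ring dimensions and history depth
state_dim = HIST * (N_BPM + N_CORR)      # stacked history of positions and correctors

# critic: measurement history -> estimated long-term orbit cost
critic = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))
# actor: measurement history -> next corrector-magnet increments
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, N_CORR))

history = torch.randn(1, state_dim)      # placeholder measurement history
correction = actor(history)              # corrections to apply
long_term_cost = critic(history)         # predicted future orbit error
```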