[Figure: Schematic view of the GMPS control environment.]

Real-time artificial intelligence for accelerator control: A study at the Fermilab Booster

J. St. John1, C. Herwig1, D. Kafkes1, J. Mitrevski1, W. A. Pellico1, G. N. Perdue1, A. Quintero-Parra1, B. A. Schupbach1, K. Seiya1, N. Tran1, M. Schram2, J. M. Duarte3, Y. Huang4, R. Keller5
1Fermi National Accelerator Laboratory, 2Thomas Jefferson National Accelerator Laboratory, 3University of California San Diego, 4Pacific Northwest National Laboratory, 5Columbia University
Physical Review Accelerators and Beams

Abstract: We describe a method for precisely regulating the gradient magnet power supply (GMPS) at the Fermilab Booster accelerator complex using a neural network trained via reinforcement learning. We demonstrate preliminary results by training a surrogate machine-learning model on real accelerator data to emulate the GMPS, and using this surrogate model in turn to train the neural network for its regulation task. We additionally show how the neural networks to be deployed for control purposes may be compiled to execute on field-programmable gate arrays (FPGAs), and demonstrate the first machine-learning-based control algorithm implemented on an FPGA for controls at the Fermilab accelerator complex. As there are no surprise latencies on an FPGA, this capability is important for operational stability in complicated environments such as an accelerator facility. ...
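The pipeline this abstract describes (fit a surrogate of the GMPS response from logged data, train a policy against that surrogate, then compile the policy for an FPGA) can be illustrated roughly as follows. This is a minimal sketch, not the paper's code: the dataset file, array names, network sizes, and training loop are placeholder assumptions, and for brevity the policy is refined by gradient descent through the differentiable surrogate rather than by the reinforcement-learning algorithm the paper uses. Only the final hls4ml calls reflect the real conversion API.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# --- 1. Surrogate model: emulate the GMPS response from logged accelerator data.
# Placeholder dataset: `obs` are regulator readings, `act` the applied settings,
# `next_obs` the readings one cycle later.
data = np.load("gmps_log.npz")
obs, act, next_obs = data["obs"], data["act"], data["next_obs"]

surrogate = keras.Sequential([
    keras.Input(shape=(obs.shape[1] + act.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(next_obs.shape[1]),
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(np.hstack([obs, act]), next_obs, epochs=20, batch_size=256)

# --- 2. Small policy network, trained entirely against the frozen surrogate
# instead of the live machine. Here the regulation objective (drive the
# predicted error signal to zero) is minimized by backpropagating through
# the surrogate; the paper itself trains the policy with reinforcement learning.
policy = keras.Sequential([
    keras.Input(shape=(obs.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(act.shape[1], activation="tanh"),
])
surrogate.trainable = False
opt = keras.optimizers.Adam(1e-3)
for step in range(1000):
    batch = obs[np.random.randint(len(obs), size=256)].astype("float32")
    with tf.GradientTape() as tape:
        action = policy(batch)
        predicted = surrogate(tf.concat([batch, action], axis=1))
        loss = tf.reduce_mean(tf.square(predicted))  # regulate toward zero error
    grads = tape.gradient(loss, policy.trainable_variables)
    opt.apply_gradients(zip(grads, policy.trainable_variables))

# --- 3. Convert the trained policy to FPGA firmware with hls4ml, which gives
# the fixed, deterministic latency the abstract emphasizes.
import hls4ml
config = hls4ml.utils.config_from_keras_model(policy, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    policy, hls_config=config, output_dir="gmps_policy_hls")
hls_model.compile()  # C simulation; hls_model.build() would run full synthesis
```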

October 18, 2021 · 194 words · RL4AA Collaboration
[Figure: Schematic for neural network control policy updates.]

Neural Networks for Modeling and Control of Particle Accelerators

A. L. Edelen
Colorado State University
PhD thesis

Abstract: Charged particle accelerators support a wide variety of scientific, industrial, and medical applications. They range in scale and complexity from systems with just a few components for beam acceleration and manipulation, to large scientific user facilities that span many kilometers and have hundreds to thousands of individually controllable components. Specific operational requirements must be met by adjusting the many controllable variables of the accelerator. Meeting these requirements can be challenging, both in terms of the ability to achieve specific beam quality metrics in a reliable fashion and in terms of the time needed to set up and maintain the optimal operating conditions. One avenue toward addressing this challenge is to incorporate techniques from the fields of machine learning (ML) and artificial intelligence (AI) into the way particle accelerators are modeled and controlled. While many promising approaches within AI/ML could be used for particle accelerators, this dissertation focuses on approaches based on neural networks. Neural networks are particularly well-suited to modeling, control, and diagnostic analysis of nonlinear systems, as well as systems with large parameter spaces. They are also very appealing for their ability to process high-dimensional data types, such as images and time series (both of which are ubiquitous in particle accelerators). In this work, key studies that demonstrated the potential utility of modern neural network-based approaches to modeling and control of particle accelerators are presented. The context for this work is important: at the start of this work in 2012, there was little interest in AI/ML in the particle accelerator community, and many of the advances in neural networks and deep learning that enabled its present success had not yet been made. As such, this work was both an exploration of possible application areas and a generator of initial demonstrations in these areas, including some of the first applications of modern deep neural networks in particle accelerators. ...

July 1, 2020 · 322 words · RL4AA Collaboration
[Figure: Layout of the accelerator.]

Using a neural network control policy for rapid switching between beam parameters in an FEL

A. L. Edelen1, S. G. Biedron2, J. P. Edelen3, S. V. Milton4, P. J. M. van der Slot5
1Colorado State University, 2Element Aero, 3Fermi National Accelerator Laboratory, 4Los Alamos National Laboratory, 5University of Twente
38th International Free Electron Laser Conference

Abstract: FEL user facilities often must accommodate requests for a variety of beam parameters. This usually requires skilled operators to tune the machine, reducing the amount of available time for users. In principle, a neural network control policy that is trained on a broad range of operating states could be used to quickly switch between these requests without substantial need for human intervention. We present preliminary results from an ongoing study in which a neural network control policy is investigated for rapid switching between beam parameters in a compact THz FEL. ...

August 25, 2017 · 138 words · RL4AA Collaboration
[Figure: General setup for the neural network model.]

Using Neural Network Control Policies For Rapid Switching Between Beam Parameters in a Free Electron Laser

A. L. Edelen1, S. G. Biedron2, J. P. Edelen3, S. V. Milton4, P. J. M. van der Slot5
1Colorado State University, 2Element Aero, 3Fermi National Accelerator Laboratory, 4Los Alamos National Laboratory, 5University of Twente
Workshop on Deep Learning for Physical Sciences at the Conference on Neural Information Processing Systems 2017

Abstract: Free Electron Laser (FEL) facilities often must accommodate requests for a variety of electron beam parameters in order to supply scientific users with appropriate photon beam characteristics. This usually requires skilled human operators to tune the machine. In principle, a neural network control policy that is trained on a broad range of machine operating states could be used to quickly switch between these requests without substantial need for human intervention. We present preliminary results from an ongoing simulation study in which a neural network control policy is investigated for rapid switching between beam parameters in a compact THz FEL that exhibits nonlinear electron beam dynamics. To accomplish this, we first train a feed-forward neural network to mimic a physics-based simulation of the FEL. We then train a neural network control policy by first pre-training it as an inverse model (using supervised learning with a subset of the simulation data) and then training it more extensively with reinforcement learning. In this case, the reinforcement learning component consists of letting the policy network interact with the learned system model and backpropagating the cost through the model network to the controller network. ...
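A minimal sketch of the two-stage recipe described above (supervised pre-training of the policy as an inverse model, then refinement by backpropagating the cost through the frozen model network into the controller network) might look like this. The file names, dimensions, and training settings are placeholder assumptions; `fel_model` stands in for the feed-forward network already trained to mimic the FEL simulation.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Placeholder dataset: `states` are simulated beam parameters, `settings`
# the machine settings that produced them.
sim = np.load("fel_sim_data.npz")
settings, states = sim["settings"], sim["states"]

# `fel_model` stands in for the feed-forward network already trained to
# mimic the physics-based FEL simulation (settings -> beam parameters).
fel_model = keras.models.load_model("fel_surrogate.keras")
fel_model.trainable = False

# Controller network: maps a requested beam state to proposed settings.
policy = keras.Sequential([
    keras.Input(shape=(states.shape[1],)),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(settings.shape[1]),
])

# Stage 1: pre-train the policy as an inverse model by supervised learning,
# i.e. learn the reverse mapping (beam state -> settings) from simulation data.
policy.compile(optimizer="adam", loss="mse")
policy.fit(states, settings, epochs=30, batch_size=128)

# Stage 2: let the policy interact with the learned system model and
# backpropagate the cost through the model network to the controller network.
opt = keras.optimizers.Adam(1e-4)
for step in range(2000):
    idx = np.random.randint(len(states), size=128)
    requested = states[idx].astype("float32")      # target beam parameters
    with tf.GradientTape() as tape:
        proposed = policy(requested)               # candidate machine settings
        achieved = fel_model(proposed)             # model's predicted beam state
        cost = tf.reduce_mean(tf.square(achieved - requested))
    grads = tape.gradient(cost, policy.trainable_variables)
    opt.apply_gradients(zip(grads, policy.trainable_variables))
```

Because the learned model is differentiable, the cost gradient reaches the controller directly; this backprop-through-model step is what the abstract refers to as the reinforcement-learning component.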

August 25, 2017 · 232 words · RL4AA Collaboration