
AI Copilot for Particle Accelerators: How Berkeley’s ALS Keeps Operations Safe and Fast
Executive summary: What the copilot does—and why it matters
Lawrence Berkeley National Laboratory’s Advanced Light Source (ALS), which supports roughly 1,700 experiments each year, is deploying a safety‑aware AI copilot in its control room. Called Accelerator Assistant, the system translates natural‑language intent into structured plans and executable code, helping operators diagnose issues, tune settings, and respond to anomalies faster—while preserving strict safety controls and auditability [1][2]. This AI copilot for particle accelerators is designed to increase uptime, throughput, and usability across a complex environment that mixes legacy and modern systems [1][2].
Technical overview: Architecture of Accelerator Assistant
Built on the Osprey agent framework, Accelerator Assistant uses a hybrid model strategy: a local large language model (LLM) running on NVIDIA H100 GPUs for low‑latency, sensitive tasks, and selective calls to external models for specific capabilities. The system ingests institutional documentation, expert playbooks, and accelerator databases, then produces explicit, multi‑step plans before any action. It can call tools and generate code as part of an orchestrated workflow—codifying the tacit knowledge that previously lived primarily with veteran operators [1][2].
- Osprey’s plan‑first approach ensures the agent proposes a transparent sequence of steps prior to execution [1][2].
- The hybrid deployment balances performance, privacy, and flexibility, leveraging on‑prem resources for control‑room safety while tapping external models judiciously [1].
- Code generation and data analysis run in familiar environments (e.g., Python/Jupyter) to streamline diagnostics and tuning [1][2].
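To make the plan‑first idea concrete, here is a minimal sketch of how an agent might assemble and render a multi‑step plan for operator review before anything executes. The class names, tool labels, and PV‑related step descriptions are illustrative assumptions, not Osprey's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    description: str   # human-readable intent, shown to the operator
    tool: str          # tool the agent would invoke (illustrative names)

@dataclass
class Plan:
    goal: str
    steps: list = field(default_factory=list)

    def render(self) -> str:
        """Render the full step sequence for review before anything runs."""
        lines = [f"Goal: {self.goal}"]
        lines += [f"  {i + 1}. [{s.tool}] {s.description}"
                  for i, s in enumerate(self.steps)]
        return "\n".join(lines)

# Example: a hypothetical orbit-drift investigation plan
plan = Plan(
    goal="Investigate orbit drift in sector 4",
    steps=[
        PlanStep("Read BPM history for sector 4", "epics_read"),
        PlanStep("Correlate with corrector magnet settings", "analysis"),
        PlanStep("Propose corrector adjustment", "epics_write"),
    ],
)
preview = plan.render()
```

The key property is that `render()` produces the complete intended sequence as plain text before any tool call is made, which is what gives operators the transparency the plan‑first approach promises.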
For readers seeking background on control‑system standards, see the EPICS documentation.
Integration with EPICS and control systems
ALS uses EPICS, a distributed control system that exposes over 230,000 process variables. Through EPICS integration, the copilot can read machine states and propose adjustments under the same access controls and safety constraints that govern human operators. This design maintains parity with existing permissions and enables full auditability of every change, whether initiated by a person or the AI [1][2].
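The parity‑of‑permissions idea can be sketched as a thin gateway that reads process variables and refuses writes the current access rules would deny a human. A real deployment would talk to live PVs (e.g., via pyepics‑style channel‑access calls); the in‑memory store and PV names below are illustrative stand‑ins so the example is self‑contained:

```python
class PVGateway:
    """In-memory stand-in for an EPICS channel-access client.

    A production system would query live PVs; the names, values, and
    permission model here are assumptions for illustration only.
    """

    def __init__(self, pvs: dict, writable: set):
        self._pvs = dict(pvs)
        self._writable = set(writable)

    def read(self, name: str):
        """Read a PV value (reads are allowed for all listed PVs)."""
        return self._pvs[name]

    def propose_write(self, name: str, value):
        """Propose a change; enforce the same write permissions an
        operator would face, and return an auditable before/after record."""
        if name not in self._writable:
            raise PermissionError(f"{name} is read-only under current access rules")
        return {"pv": name, "old": self._pvs[name], "new": value}

# Hypothetical PVs: a stored-beam current monitor and one corrector kick
gw = PVGateway(
    pvs={"SR:DCCT:CUR": 499.8, "SR04:COR:HKICK": 0.12},
    writable={"SR04:COR:HKICK"},
)
change = gw.propose_write("SR04:COR:HKICK", 0.15)
```

Returning a structured before/after record from every proposed write is one simple way to get the per‑change auditability the article describes.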
Safety, governance, and human approval gates
Safety is enforced through graded autonomy. Accelerator Assistant can run fully automated sequences or present plans and code for human review at designated approval gates. This modular gating gives operators control over high‑risk steps, with logging and observability designed into the workflow. The plan‑first architecture improves transparency and accountability—operators see what the agent intends to do before it does it [1][2].
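A graded‑autonomy gate can be sketched as an executor that runs low‑risk steps automatically, pauses high‑risk steps for an explicit approval decision, and records every decision in an audit trail. The step schema and function names are assumptions, not the production workflow:

```python
import time

def gated_execute(steps, approve, audit=None):
    """Run steps in order; steps flagged high-risk require explicit approval.

    `approve` is a callback (human-in-the-loop in practice); every decision,
    approved or not, is appended to the audit trail. Illustrative sketch.
    """
    if audit is None:
        audit = []
    for step in steps:
        approved = (not step["high_risk"]) or approve(step)
        entry = {
            "step": step["name"],
            "high_risk": step["high_risk"],
            "approved": approved,
            "ts": time.time(),
        }
        if approved:
            entry["result"] = step["run"]()  # only approved steps execute
        audit.append(entry)
    return audit

# Example: a safe read runs automatically; a write is gated and denied
steps = [
    {"name": "read_orbit", "high_risk": False, "run": lambda: "orbit data"},
    {"name": "apply_correction", "high_risk": True, "run": lambda: "applied"},
]
trail = gated_execute(steps, approve=lambda step: False)
```

Because denied steps still produce audit entries, the log shows not only what the agent did but also what it proposed and was refused.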
Operational capabilities: diagnostics, tuning, and code execution
By converting natural‑language requests into structured tasks, the copilot assists with beam diagnostics, configuration changes, and rapid response to anomalies. It can generate and execute Python to analyze signals, correlate heterogeneous data streams, and tune parameters across subsystems—all while respecting EPICS permissions. The result is faster interpretation of telemetry from legacy and modern subsystems alike, and reduced manual toil for operators under time pressure [1][2].
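As a flavor of the kind of short diagnostic script the copilot might generate in a Jupyter session, here is a simple z‑score outlier flag over a sampled signal. This is a generic illustration of signal screening, not a method the ALS system is documented to use:

```python
import statistics

def flag_anomalies(samples, z_thresh=3.0):
    """Return indices of samples whose z-score exceeds the threshold.

    A quick first-pass screen for spikes in a monitored signal;
    the threshold and method are illustrative defaults.
    """
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    if sigma == 0:
        return []  # constant signal: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_thresh]

# Example: a flat signal with a single large spike at index 10
signal = [0.0] * 10 + [50.0] + [0.0] * 9
spikes = flag_anomalies(signal)
```

In practice a generated script like this would read its samples from live or archived process variables rather than a literal list.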
Why an AI copilot for particle accelerators matters now
ALS is a large scientific user facility with many simultaneous beamlines and tight schedules. Encoding institutional knowledge into an agentic system helps newer operators ramp quickly and ensures continuity as staff rotate. By reducing tuning time and improving responsiveness, the copilot aims to maximize uptime and experiment throughput—key performance drivers for facilities serving broad research communities [1][2].
How this compares with other AI approaches
The ALS deployment complements work at other laboratories. At SLAC, physics‑informed machine learning has been used to optimize beam settings faster and with less experimental data, improving diagnostic quality and revealing non‑obvious operating points. Together, these efforts illustrate how agentic copilots and physics‑informed models can work in tandem: plan‑first agents to orchestrate safe, auditable operations; physics‑aware models to accelerate optimization and uncover better regimes [3].
Deployment considerations and risks
- Hardware and placement: on‑prem LLM inference (e.g., H100‑class GPUs) for safety‑critical responsiveness [1].
- Data governance: curated documentation, procedures, and control‑system schemas become model inputs—manage provenance and updates [1][2].
- Approval gates: define human‑in‑the‑loop checkpoints for high‑risk actions and ensure end‑to‑end logging [1][2].
- External model calls: restrict and monitor when the agent can use third‑party models [1].
- Operator training and validation: align workflows with existing runbooks and stress‑test failure modes [1][2].
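The external‑model restriction above can be expressed as a small routing policy: default to the on‑prem model, and permit external calls only for an allowlisted task class that carries no machine data. The task names and the policy itself are hypothetical, sketched only to show the shape of such a control:

```python
# Tasks permitted to leave the facility (hypothetical allowlist)
ALLOWED_EXTERNAL = {"summarize-docs"}

def route_model(task: str, contains_machine_data: bool) -> str:
    """Route a request to the on-prem model by default.

    External calls are allowed only for allowlisted task types,
    and never for requests that include machine data. Illustrative policy.
    """
    if contains_machine_data or task not in ALLOWED_EXTERNAL:
        return "local-llm"
    return "external-api"
```

A router like this is also a natural choke point for the monitoring the bullet above calls for, since every external call passes through one function.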
Conclusion and further reading
Agentic AI is moving from demos to daily operations in science facilities. By combining a plan‑first agent architecture with deep EPICS integration and on‑prem LLMs, Accelerator Assistant shows how to enhance reliability and safety without sacrificing speed. Results from SLAC’s physics‑informed ML point to a broader landscape where AI reduces tuning time, improves diagnostics, and boosts throughput across user facilities [1][2][3].
Sources
[1] AI Copilot Keeps Berkeley’s X-Ray Particle Accelerator on Track
https://blogs.nvidia.com/blog/ai-copilot-berkeley-x-ray-particle-accelerator/
[2] [PDF] Agentic AI at the Advanced Light Source
https://ml4physicalsciences.github.io/2025/files/NeurIPS_ML4PS_2025_93.pdf
[3] AI learns physics to optimize particle accelerator performance
https://www6.slac.stanford.edu/news/2021-07-29-ai-learns-physics-optimize-particle-accelerator-performance