Notes on… Deployment and AI Safety

I've worked most of my career in security and safety-critical domains: infrastructure, cyber, operational technology, national security. One thing those worlds eventually taught us is that safety and security don't live in the asset alone; they live in the system: the operator, the environment, the incentives, and the mission.

I've been learning more about AI safety and governance lately, and I'm struck by how model-centric it still is. Evaluations, alignment, capability thresholds…all critical, but framed as if the "manufacturer" controls deployment.

In the worlds I live in, that assumption broke years ago. IIoT, connected medical devices, and semi-autonomous equipment forced standards to evolve because systems became heterogeneous, multi-party, and high-consequence. IEC 62443 formalizes system-level risk because the same device has different risk profiles depending on deployment context: who integrates it, who configures it, who operates it, and what controls exist around it. FDA's premarket cybersecurity guidance followed similar logic: its expectations for lifecycle planning and postmarket monitoring recognize that safety and security don't end at design, and for darn sure depend on deployment.

From what I see, AI safety frameworks focus on model capability and threat category (CBRN, cyber ops, autonomous capabilities). What I'm not seeing consistently is the deployment ecosystem: the integrator who configures it, the institution that deploys it, the operators who use it, the governance that surrounds it, and the mission incentives that drive it. In industrial safety, we learned that risk gets distributed across that chain and you need defined responsibility at each layer. Same device, different deployment context, different controls, fundamentally different risk.

AI safety frameworks don't appear to account for that dimension yet, at least not as a formal part of risk assessment. We're still largely in a manufacturer-centric paradigm even as models move into complex, multi-party environments.

The analogy isn't perfect. AI models have properties physical devices don't. But the pattern is familiar to me. As AI gets embedded in high-stakes systems, treating deployment ecosystems as someone else's problem seems like the kind of gap that becomes obvious in hindsight.
