I'm a researcher focused on technical AI governance, with a particular interest in applying systems-theoretic safety methods to the challenge of deploying frontier AI systems responsibly.

My current work centers on STAMP/STPA (Systems-Theoretic Accident Model and Processes / System-Theoretic Process Analysis) and how these frameworks can surface categories of risk that conventional failure-based analysis tends to miss. Most AI safety analysis focuses on what happens when a model produces an error or acts contrary to its intended objectives. That work is essential, but it's incomplete. Unsafe behavior can emerge from the interactions between components in a deployment system even when no individual component has failed. Understanding and governing those systemic risks is the core of my current work.

I am starting to write regularly about systems thinking, hazard analysis, and risk assessment as they apply to frontier AI, particularly agentic systems. You can find my writing on this site and on my Substack.

Background

Before moving into AI governance, I was an astrophysicist studying galaxy evolution. I earned my PhD at UC Irvine, where I worked with Michael Cooper and James Bullock on environmental quenching mechanisms in dwarf satellite galaxies. I then did postdoctoral research at the University of Washington with Sarah Tuttle.

That background gave me deep experience in quantitative modeling, in working with complex interacting systems, and in constructing explanations for emergent behavior. These skills translate directly to the systems safety problems I work on now.

Get in Touch

spfillingham [at] gmail [dot] com

LinkedIn