Why Failure-Based Thinking Is Not Enough for AI Safety

Most AI safety analysis asks "what happens when the model fails?" But some of the most important risks emerge when no component has failed at all. Systems-theoretic approaches offer a more complete lens.
