Loss of Control: "Normal Accidents" and AI Systems


A thread in recent work on the social impacts of AI systems asks whether certain properties of a domain should preclude the application of such systems in the first place. Drawing on sociological work on accidents, I analyze two such properties: complexity and tight coupling, analogous respectively to uninterpretability and to a lack of slack in a system. My analysis suggests that fundamental open challenges in AI research either create or aggravate these properties. If this analysis holds, the burden of proof shifts further onto those calling for the deployment of AI systems to show that such systems do not cause harm, or that any harm is negligible. Such a burden of proof could be incorporated into regulatory or legal standards, and is desirable given the common power imbalance between those who implement AI systems and those who bear their effects.

In ICLR 2021 RAI Workshop
Alan Chan
PhD Student

I’m a PhD student at Mila, where I think about the development of AI from the perspectives of RL, fundamental science, fairness, and interpretability. Please feel free to reach out!