Communications of the ACM, February 2020, Vol. 63 No. 2, Pages 25-28
Inside Risks: “Are You Sure Your Software Will Not Kill Anyone?”
By Nancy Leveson
“System and software requirements development are necessarily a system engineering problem, not a software engineering problem.”
From what I have seen, heard, and read, confusion and misinformation abound about software and safety. I have worked in this area for nearly 40 years, starting around the time when computers were beginning to be introduced into the control of safety-critical systems, and I want to share what I have learned. Too many incorrect beliefs are being promoted, inhibiting progress and, in some cases, unnecessarily costing lives. This column aims to clear up that confusion so that the solutions we propose are more likely to have a significant impact on safety.
With only a few exceptions, software was not used to directly control safety-critical systems until approximately 1980, although it was used to provide computational power for complex systems, such as spacecraft. At first, direct control was very limited, but that hesitation has now almost completely disappeared, and software is used to control most systems, including physical systems whose failure could lead to large and even catastrophic losses.
Some of the most common misconceptions about software and safety are:
- Misconception 1: Software Itself Can Be Unsafe.
- Misconception 2: Reliable Systems Are Safe; That Is, Reliability and Safety Are Essentially the Same Thing. Reliability Assessment Can Therefore Act as a Proxy for Safety.
- Misconception 3: The Safety of Components in a Complex System Is a Useful Concept; That Is, We Can Model or Analyze the Safety of Software in Isolation from the Entire System Design.
- Misconception 4: Software Can Be Shown to Be Safe by Testing, Simulation, or Standard Formal Verification.
About the Author:
Nancy Leveson is a professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.