“The Big Picture”
Communications of the ACM, November 2018, Vol. 61 No. 11, Pages 24-26
By Steven M. Bellovin, Peter G. Neumann
Previous Communications Inside Risks columns have discussed specific types of risks (to safety, security, reliability, and so on) and specific application areas (for example, critical national infrastructures, election systems, autonomous systems, the Internet of Things, artificial intelligence, machine learning, cryptocurrencies and blockchains—all of which are riddled with security problems). We have also considered the risks of deleterious misuses of social media, malware, malicious drones, risks to privacy, fake news, and the meaning of “truth.” All of these and many more issues must be considered proactively as part of the development and operation of systems with requirements for trustworthiness.
We consider here certain overarching and underlying concepts that must be better understood and more systematically confronted, sooner rather than later. Some are more or less self-evident, some may be debatable, and others may be highly controversial.
- A preponderance of flawed hardware-software systems, which limits the development of trustworthy applications and impedes accountability and rapid, forensics-worthy identification of culprits and causes of failures.
- Lack of understanding of the properties of composed systems. Components that seem secure locally, when combined, may yield insecure systems.
- A lack of discipline in, and constructive use of, computer science, physical science, technology, and engineering, which hinders progress in trustworthiness, even as new applications, widgets, and snake-oil-like hype continue apace without much concern for sound usability.
- A lack of appreciation for the wisdom that can be gained from science, engineering, and scientific methods, which impedes progress, especially where that wisdom is clearly relevant.
- A lack of understanding of short-term and long-term risks among leaders in government and business, which is becoming critical, as is their willingness to believe that today’s sloppy systems are good enough for critical uses.
- A widespread failure to understand these risks, which is ominous: history suggests they will continue to recur pervasively.
- A general lack of awareness and education relating to all of these issues, which requires considerable rethinking.
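The composition problem noted above can be made concrete with a minimal, hypothetical sketch (not from the column itself): two components that are each harmless in isolation, but whose composition, in the wrong order, admits a classic path-traversal attack. The component names and the encoded input are illustrative assumptions.

```python
from urllib.parse import unquote

# Component A: rejects path traversal in its input (safe in isolation).
def check_path(path: str) -> str:
    if ".." in path:
        raise ValueError("traversal rejected")
    return path

# Component B: decodes percent-encoding (also harmless in isolation).
def decode_path(path: str) -> str:
    return unquote(path)

# Composed in the wrong order, the check runs before decoding,
# so an encoded "../" slips through both components unnoticed.
encoded = "%2e%2e/etc/passwd"
checked = check_path(encoded)   # passes: no literal ".." present yet
final = decode_path(checked)    # now "../etc/passwd" — a traversal
```

Each component satisfies its local specification; the insecurity emerges only from the order of composition, which is precisely why locally verified components do not automatically yield a trustworthy whole.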
About the Authors:
Steven M. Bellovin is a professor of Computer Science at Columbia University, and affiliate faculty at its law school.
Peter G. Neumann is Chief Scientist of the SRI International Computer Science Lab, and moderator of the ACM Risks Forum.
Both Peter and Steven have co-authored several of the cited NRC study reports, as well as Keys Under Doormats.