Toward Verified Artificial Intelligence
Verified AI is the goal of achieving strong, ideally provable, assurances of the correctness and trustworthiness of AI systems with respect to mathematically specified requirements. Five challenge areas for verified AI are identified: environment modeling, formal specification, modeling learning systems, scalable formal engines, and correct-by-construction design.
Trust, Regulation, and Human-in-the-Loop AI: within the European region
The E.U. is an early mover in the race to regulate AI, and with the draft E.U. AI Act it has adopted an assurance-based regulatory environment built on yet-to-be-defined AI assurance standards.
A Holistic View of Future Risks
Human civilization does not tend to agree on issues such as fairness, equality, safety, security, privacy, and self-determination. With COVID-19, economic well-being, health care, climate change, and other issues (some of which are considered here), if we cannot agree on the basic goals, we will never reach them, especially if the goals appear to compete with one another.
Engineering Trustworthy Systems: A Principled Approach to Cybersecurity
Students of cybersecurity must be students of cyberattacks and adversarial behavior.
The Big Picture
Communications of the ACM, November 2018
By Steven M. Bellovin, Peter G. Neumann
“Cryptography is an enormously useful concept for achieving trustworthy systems and networks; unfortunately, its effectiveness can be severely limited if it is not implemented in systems with sufficient trustworthiness. It is time to get serious about the dearth of trustworthy systems and the lack of deeper understanding of the risks that result from continuing on a business-as-usual course.”