Lamboozling Attackers: A New Generation of Deception

Communications of the ACM, June 2022, Vol. 65 No. 6, Pages 44-53
Practice
By Kelly Shortridge, Ryan Petrich

“Conventional deception approaches are unconvincing to attackers with a modicum of experience.”

 

Deception is a powerful resilience tactic that provides observability into attack operations, deflects impact from production systems, and advises resilient system design. A lucid understanding of the goals, constraints, and design trade-offs of deception systems could give leaders and engineers in software development, architecture, and operations a new tactic for building more resilient systems—and for bamboozling attackers.

 

Unfortunately, innovation in deception has languished for nearly a decade because of its exclusive ownership by information security specialists. Mimicry of individual system components remains the status-quo deception mechanism despite growing stale and unconvincing to attackers, who thrive on the interconnections between components and expect to encounter whole systems rather than isolated parts. Consequently, attackers remain unchallenged and undeterred.

 

This wasted potential motivated our design of a new generation of deception systems, called deception environments. These are isolated replica environments containing complete, active systems that exist to attract, mislead, and observe attackers. By harnessing modern infrastructure and systems design expertise, software engineering teams can use deception tactics that are largely inaccessible to security specialists. To help software engineers and architects evaluate deception systems through the lens of systems design, we developed a set of design principles summarized as a pragmatic framework. This framework, called the FIC trilemma, captures the most important dimensions of designing deception systems: fidelity, isolation, and cost.

 

The goal of this article is to educate software leaders, engineers, and architects on the potential of deception for systems resilience and the practical considerations for building deception environments. By examining the inadequacy and stagnancy of historical deception efforts by the information security community, the article also demonstrates why engineering teams are now poised—with support from advancements in computing—to become significantly more successful owners of deception systems.

Deception: Exploiting Attacker Brains

In the presence of humans (attackers) whose objectives are met by accessing, destabilizing, stealing, or otherwise leveraging other humans’ computers without consent, software engineers must understand and anticipate this type of negative shock to the systems they develop and operate. Doing so involves building the capability to collect relevant information about attackers and to implement anticipatory mechanisms that impede the success of their operations. Deception offers software engineering teams a strategic path to achieve both outcomes on a sustained basis.

 

Sustaining resilience in any complex system requires the capacity to implement feedback loops and continually learn from them. Deception can support this capacity for continual learning. The value of collecting data about the interaction between attackers and systems, which we refer to as attack observability, is generally presumed to be the concern of information security specialists alone. This is a mistake. Attacker effectiveness and systems resilience are antithetical; one inherently erodes the other. Understanding how attackers make decisions allows software engineers to exploit the attackers’ brains for improved resilience.

 

Attack observability. The importance of collecting information on how attackers make decisions in real operations is conceptually similar to the importance of observability and tracing in understanding how a system or application actually behaves rather than how it is believed to behave. Software engineers can attempt to predict how a system will behave in production, but its actual behavior is quite likely to deviate from expectations. Similarly, software engineers may have beliefs about attacker behavior, but observing and tracing actual attacker behavior will generate the insight necessary to improve system design against unwanted activity.
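
As one way to picture attack observability in practice, consider a minimal sketch of a decoy HTTP endpoint that emits a structured event for every interaction, so attacker behavior can be traced rather than assumed. The port, field names, and stdout sink here are illustrative assumptions, not a prescription from the article.

```python
# Illustrative sketch: a decoy HTTP endpoint that records structured
# attack-observability events. The port, field names, and stdout sink
# are assumptions for this example, not the article's implementation.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer


class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record what the attacker actually did, not what we assume they do.
        event = {
            "timestamp": time.time(),
            "source_ip": self.client_address[0],
            "method": self.command,
            "path": self.path,
            "user_agent": self.headers.get("User-Agent", ""),
        }
        # A real deployment would ship this to a tracing or analytics
        # pipeline; printing JSON keeps the sketch self-contained.
        print(json.dumps(event), flush=True)

        # Respond plausibly so the interaction continues and yields more signal.
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Suppress the default stderr access log; structured events replace it.
        return


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```

Because the endpoint is a decoy, every event it emits is attacker-generated by definition, which makes the resulting trace data far less noisy than ordinary production telemetry.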

 

Understanding attacker behavior starts with understanding how humans generally learn and make decisions. Humans learn from both immediate and repeated interactions with their reality (that is, experiences). When making decisions, humans supplement preexisting knowledge and beliefs with relevant experience accumulated from prior decisions and their consequences. Taken together, human learning and decision-making are tightly coupled systems. Given that attackers are human beings—and even automated attack programs and platforms are designed by humans—this tight coupling can be leveraged to destabilize attacker cognition.

 

In any interaction rife with conflict, such as attackers versus system operators, information asymmetry creates advantages that can tip success toward one side. Imperfect information means players may not observe or know all moves made during the game. Incomplete information means players may be unaware of their opponents’ characteristics, such as priorities, goals, risk tolerance, and resource constraints. If one player has more or better information about the game than their opponent, that player holds an information asymmetry.

 

Attackers choose an attack plan based on preexisting beliefs and on knowledge learned through experience about operators’ current infrastructure and how it is protected. Operators choose a defense plan based on preexisting and learned knowledge about attackers’ beliefs and methods.

 

This dynamic presents an opportunity for software engineers to use deception to amplify information asymmetries in their favor. When operators manipulate the experiences attackers receive, any knowledge attackers gain from those experiences becomes unreliable, poisoning their learning process and disrupting their decision-making.
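
As a rough illustration of how manipulated experiences can poison attacker learning, consider a toy model in which an attacker applies Bayes' rule to decide whether a discovered host is a real production system, based on whether it emits production-like signals (realistic traffic, plausible data, active services). The probabilities below are arbitrary illustrative values, not measurements; the point is only that when operators control the fidelity of those signals, the attacker's belief converges toward the wrong conclusion.

```python
# Toy model: an attacker uses Bayes' rule to decide whether a discovered
# host is a real production system, based on whether it emits
# production-like signals. All probabilities are illustrative.
import random


def updated_belief(prior, observed_signal, p_signal_if_real=0.9, p_signal_if_decoy=0.3):
    """The attacker's posterior that the host is real after one observation.
    The attacker assumes decoys rarely look production-like (0.3)."""
    if observed_signal:
        real, decoy = p_signal_if_real, p_signal_if_decoy
    else:
        real, decoy = 1 - p_signal_if_real, 1 - p_signal_if_decoy
    return prior * real / (prior * real + (1 - prior) * decoy)


def attacker_learning(fidelity, interactions=10, seed=7):
    """fidelity = probability the decoy actually emits production-like signals,
    which the operator controls. The attacker starts undecided (0.5)."""
    rng = random.Random(seed)
    belief = 0.5
    for _ in range(interactions):
        belief = updated_belief(belief, observed_signal=rng.random() < fidelity)
    return belief


print(f"low-fidelity decoy : belief host is real = {attacker_learning(0.3):.2f}")
print(f"high-fidelity decoy: belief host is real = {attacker_learning(0.9):.2f}")
```

With a low-fidelity decoy, repeated interactions quickly expose the deception; with a high-fidelity one, the same learning process steers the attacker toward treating the decoy as real.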

 

 

Sidebar: The FIC Trilemma: Sweet Spots for Deception Environments

FIC—The most important dimensions of designing deception systems: Fidelity, Isolation, and Cost.

Replicomb—A full replica of a production host with imitated load and a purposefully vulnerable component.

Honeyhive—A full, scaled-down replica of an entire production environment with activity flowing through a web of replicomb hosts.
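
As a purely hypothetical sketch of how these ideas might be expressed in code, the following models a honeyhive as a declarative collection of replicomb specs, each carrying the knobs that drive the FIC trade-offs: what is replicated and how alive it looks (fidelity), where it runs (isolation), and what it costs. All names, fields, and values are invented for illustration.

```python
# Hypothetical sketch: a honeyhive expressed as a declarative collection of
# replicomb specs. All names, fields, and values are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Replicomb:
    """A full replica of one production host, with imitated load and a
    deliberately vulnerable component to attract attackers."""
    name: str
    image: str                # same build artifact as the production host (fidelity)
    synthetic_load_rps: int   # imitated traffic keeps the replica looking alive
    planted_weakness: str     # e.g., an outdated dependency left in on purpose
    monthly_cost_usd: float   # the cost dimension of the trilemma


@dataclass
class Honeyhive:
    """A scaled-down replica of an entire production environment, built from
    replicomb hosts and kept isolated from real systems and data."""
    network: str              # dedicated network segment (isolation)
    hosts: list[Replicomb] = field(default_factory=list)

    def monthly_cost(self) -> float:
        return sum(h.monthly_cost_usd for h in self.hosts)


hive = Honeyhive(
    network="deception-vpc",
    hosts=[
        Replicomb("api-gateway", "registry.example/api:prod", 40, "verbose error pages", 35.0),
        Replicomb("payments", "registry.example/payments:prod", 15, "outdated TLS library", 55.0),
        Replicomb("reports-db", "registry.example/reports-db:prod", 5, "weak service account", 80.0),
    ],
)
print(f"honeyhive of {len(hive.hosts)} replicombs, roughly ${hive.monthly_cost():.0f}/month")
```

Raising fidelity (more hosts, more synthetic load) or tightening isolation generally raises cost, which is the trade-off the trilemma captures.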


About the Authors:

Kelly Shortridge is a senior principal in product technology at Fastly, co-author (with Aaron Rinehart) of Security Chaos Engineering (O’Reilly Media), and an expert in resilience-based strategies for systems defense. Their research on applying behavioral economics and DevOps principles to information security has been featured in top industry publications and is used to guide modernization of information security strategy globally.

Ryan Petrich is an SVP at a financial services company and was previously chief technology officer at Capsule8. Their current research focuses on using systems in unexpected ways for optimum performance and subterfuge. Their work spans designing developer tooling, developing foundational jailbreak tweaks, architecting resilient distributed systems, and experimenting with compilers, state replication, and instruction sets.
