Glossary of Terms (Tag Terms)

Your archway to computing technology and cybersecurity terms, A to Z, used as tags on this website.

This glossary contains a selection of the computing technology and cybersecurity terms used as tags on this website.

1-10 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

1-10

5G – 5th Generation Wireless:

5G is 5th-generation wireless cellular network technology offering greater bandwidth and correspondingly higher download speeds, with the potential for up to 10 Gbit/s. The new technology requires new networks, since prior systems cannot be retrofitted. 5G can support up to 1 million devices per square kilometer, roughly ten times the capacity of 4G.
—Wikipedia, “5G”

A

Active Measures:

Active Measures is an increasingly digital intelligence playbook [with] a wide-ranging set of techniques and strategies that Russian military and intelligence services deploy to influence the affairs of nations across the globe.
—WIRED, “A Guide to Russia’s High Tech Tool Box for Subverting US Democracy”

Active Measures is political warfare conducted by the Soviet or Russian government since the 1920s. It includes offensive programs such as disinformation, propaganda, deception, sabotage, destabilization, subversion, and espionage. The programs were based on foreign policy priorities of the Soviet Union. Active measures have continued in the post-Soviet era in Russia.
—Wikipedia, “Active measures”

Active Measures:

  • Covert political operations, ranging from disinformation campaigns to staged insurrections, have a long and inglorious tradition in Russia and reflect a permanent wartime mentality dating back to the Soviet era and even tsarist Russia.
  • A strategic culture whose participants see the world full of secret threats and an operational culture whose adherents regard the best defense as offense have ensured that both have become central aspects of modern Russia’s geopolitical struggle with the West.
  • Active measures are not solely the preserve of the intelligence services, but of other actors as well, and these actors are expected to generate their own initiatives aimed at furthering the Kremlin’s disruptive agenda.

Aktivnye meropriyatiya, “active measures,” was a term used by the Soviet Union (USSR) from the 1950s onward to describe a gamut of covert and deniable political influence and subversion operations, including (but not limited to) the establishment of front organizations, the backing of friendly political movements, the orchestration of domestic unrest and the spread of disinformation.
—Marshall Center, “Active Measures: Russia’s Covert Geopolitical Operations”

Artificial Intelligence:

Artificial Intelligence (AI) is a broad term with no single authoritative definition — and frequently, people mean different things when they use it.

However, AI commonly means:

  1. The capability of a non-human system to perform functions typically thought of as requiring human intelligence.
  2. A field of study dedicated to developing these systems.

The term AI is often imprecise. AI is sometimes used interchangeably with machine learning, but the two terms are not identical. Machine learning is one promising set of techniques used to develop AI, but others exist. We disambiguate machine learning and AI more fully [see the link below].

AI is also a moving target. The “AI effect” is a paradox in which problems thought to require AI, once largely solved, are no longer seen as requiring “intelligence.” This dynamic further contributes to ambiguity around the definition of AI.
—CSET Glossary, “Artificial Intelligence”

Autism at Work:

The Autism at Work summits are a set of annual conferences held around the globe to celebrate and encourage a neurodiverse workforce. Neurodiversity is the collective term for persons who have neurologically divergent development and/or a mental health disorder. According to the National Symposium on Neurodiversity (2011) held at Syracuse University,

“Neurodiversity is a concept where neurological differences are to be recognized and respected as any other human variation. These differences can include those labeled with Dyspraxia, Dyslexia, Attention Deficit Hyperactivity Disorder, Dyscalculia, Autistic Spectrum Disorder, Tourette’s Syndrome, and others.”

The purpose of recognizing neurodiversity is to note that while each of these disorders holds challenges for the affected population, they often also bring unusual abilities. For example, autistic individuals tend to have an innate ability to focus on details and see patterns; those with ADHD have a powerful ability to hone their attention; dyslexics have exemplary spatial ability and strong creative minds; depressives are also artistic and often deep thinkers—even those with mania or schizophrenia, disorders that seem severe or dysfunctional, can offer immense insight and novel ideas where the typical brain misses out.

Hiring neurodiverse individuals is a positive choice for social welfare and a highly valuable resource for organizations. The Autism at Work summits aim to engage a greater audience to see the benefits of better engaging the skills and interests of all employees, not just those on the autism spectrum.
—Autism at Work, “About”

See also Neurodiversity Programs.

B

Backdoor:

“In the world of cybersecurity, a Backdoor refers to any method by which authorized and unauthorized users are able to get around normal security measures and gain high level user access (aka root access) on a computer system, network, or software application. Once they’re in, cybercriminals can use a backdoor to steal personal and financial data, install additional malware, and hijack devices.”

“But backdoors aren’t just for bad guys. Backdoors can also be installed by software or hardware makers as a deliberate means of gaining access to their technology after the fact. Backdoors of the non-criminal variety are useful for helping customers who are hopelessly locked out of their devices or for troubleshooting and resolving software issues.”
—Malwarebytes, “Backdoor computing attacks”

Behavioral Futures Markets:

Behavioral Futures Markets is defined by Shoshana Zuboff as “private human experience as a free raw material” translated into behavioral data that is “then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later. It was Google that first learned how to capture surplus behavioral data, more than what they needed for services, and used it to compute prediction products that they could sell to their business customers, in this case advertisers.”
—The Harvard Gazette, “High tech is watching you”

Behavioral Modification:

Behavioral Modification, in the sense used on this website, is the manipulation of behavior to “tune and herd populations toward guaranteed commercial outcomes” in the context of so-called smart cities and other scenarios where computational systems modulate behavior. The phrase is used in this sense by Shoshana Zuboff in her book The Age of Surveillance Capitalism.
—New York Magazine, “Shoshana Zuboff on Surveillance Capitalism’s Threat to Democracy”

Going beyond a definition and highlighting her concern: “Surveillance capitalism’s “means of behavioral modification” at scale erodes democracy from within because, without autonomy in action and in thought, we have little capacity for the moral judgment and critical thinking necessary for a democratic society. Democracy is also eroded from without, as surveillance capitalism represents an unprecedented concentration of knowledge and the power that accrues to such knowledge. They know everything about us, but we know little about them. They predict our futures, but for the sake of others’ gain. Their knowledge extends far beyond the compilation of the information we gave them. It’s the knowledge that they have produced from that information that constitutes their competitive advantage, and they will never give that up. These knowledge asymmetries introduce wholly new axes of social inequality and injustice.”
—Harvard Gazette, “High tech is watching you”

Behavioral Surplus:

Behavioral Surplus is a “meta level of” shared data, such as on Facebook or Google, and “these data have tremendous predictive value.” “So the idea here is that there is behavioral data that companies are collecting about us, that are being used to improve what they give us, but there is much more [surplus] behavioral information that we are communicating that we” are not aware of. “And this is the surplus that they then take for its predictive value, stream it through their production processes to create these prediction products” that are then used in behavioral futures markets.
—New York Magazine, “Shoshana Zuboff on Surveillance Capitalism’s Threat to Democracy”

Bellingcat:

Bellingcat (stylised as bell¿ngcat) is a Netherlands-based investigative journalism website that specialises in fact-checking and open-source intelligence (OSINT). It was founded by British journalist and former blogger Eliot Higgins in July 2014. Bellingcat publishes the findings of both professional and citizen journalist investigations into war zones, human rights abuses, and the criminal underworld. The site’s contributors also publish guides to their techniques, as well as case studies.
—Wikipedia, “Bellingcat”

Bellingcat is an independent international collective of researchers, investigators and citizen journalists using open source and social media investigation to probe a variety of subjects – from Mexican drug lords and crimes against humanity, to tracking the use of chemical weapons and conflicts worldwide. With staff and contributors in more than 20 countries around the world, we operate in a unique field where advanced technology, forensic research, journalism, investigations, transparency and accountability come together.

Bellingcat’s innovative approaches to using publicly available data and citizen journalist analysis have been particularly significant for advancing narratives of conflict, crime, and human rights abuses. Bellingcat is part of the Global Investigative Journalism Network.
—Bellingcat, “About”

Bias:

Bias is “a tendency to believe that some people, ideas, etc., are better than others that usually results in treating some people unfairly. A personal and sometimes unreasoned judgment. Prejudice.”
—Merriam-Webster, “Bias”

“Our inherent human tendency of favoring one thing or opinion over another is reflected in every aspect of our lives, creating both latent and overt biases toward everything we see, hear, and do.”
—CACM, “Bias on the Web”

Bitcoin and The Bitcoin Network:

The Bitcoin Network is a global, decentralized consensus network that operates on a cryptographic peer-to-peer protocol on top of the Internet. It is established by individuals around the world whose computers (nodes) run the Bitcoin Core software, a free, open-source program that enforces consensus rules through a process called Bitcoin mining, which relays and validates UTXO transactions and records state to an immutable, append-only distributed ledger: the Bitcoin blockchain.
—BitcoinNetwork

Bitcoin is the most widely used cryptocurrency to date, with over 42 million users. However, network-level adversaries can launch routing attacks to partition the Bitcoin network, effectively preventing the system from reaching consensus. Beyond Bitcoin, this attack applies to many peer-to-peer networks and is particularly dangerous against blockchain systems.

Bitcoin is a peer-to-peer network in which nodes use consensus mechanisms to jointly agree on a (distributed) log of all the transactions that ever happened. This log is called the blockchain because it is composed of an ordered list (chain) of grouped transactions (blocks).

Special nodes, known as wallets, are responsible for originating transactions and propagating them in the network using a gossip protocol. A different set of nodes, known as miners, are responsible for verifying the most recent transactions, grouping them in a block, and appending this block to the blockchain. To do so, the miners need to solve a periodic puzzle whose complexity is automatically adapted to the computational power of the miners in the network.

Every time a miner creates a block, it broadcasts it to all the nodes in the network and receives freshly mined bitcoins. Besides the most recent transactions, the block contains a proof-of-work (a solution to the puzzle) that each node can independently verify before propagating the block further.

As miners work concurrently, several of them may find a block at nearly the same time. These blocks effectively create “forks” in the blockchain, that is, different versions of the blockchain. The conflicts are eventually resolved as subsequent blocks are appended to each chain and one of them becomes longer. In this case, the network automatically discards the shorter chains, effectively discarding the corresponding blocks together with the miner’s revenues.

Routing attacks on consensus. Network-level adversaries can perform routing attacks on Bitcoin to partition the set of nodes into two (or more) disjoint components. Consequently, the attacks disrupt the ability of the entire network to reach consensus.
—CACM, “Securing Internet Applications from Routing Attacks”
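The mining-and-verification loop described above can be sketched as a toy proof-of-work. This is an illustration only: real Bitcoin hashes an 80-byte block header with double SHA-256 against a far harder target, and the block data below is invented.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Any node can check a proof-of-work with a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Finding the nonce is expensive; verifying it is cheap. That asymmetry
# lets every node independently check a broadcast block.
nonce = mine("block: alice->bob 1 BTC", 4)
print(verify("block: alice->bob 1 BTC", nonce, 4))  # True
```

Raising `difficulty` by one hex digit multiplies the expected search time by 16, mirroring how Bitcoin retunes its puzzle to the miners' aggregate computational power.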

Border Gateway Protocol (BGP):

The Border Gateway Protocol (BGP) is the glue that holds the Internet together by propagating information about how to reach destinations in remote networks. However, BGP is notoriously vulnerable to misconfiguration and attack. Routing among the Autonomous Systems of the Internet is governed by the Border Gateway Protocol (BGP), which computes paths to destination prefixes within the 67,000 Autonomous Systems that comprise the Internet.
—CACM, “Securing Internet Applications from Routing Attacks”

Breach Risk:

Breach Risk is the risk of a breach of security in terms of Risk being equal to the Likelihood multiplied by the Impact of such a breach at each point of the enterprise attack surface. Factors to be considered include: vulnerabilities, exposure, threats, mitigating controls and business criticality.
—CACM, “Why Is Cybersecurity Not a Human-Scale Problem Anymore?”
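The Risk = Likelihood × Impact framing can be sketched numerically. The asset list and figures below are invented for illustration; they are not from the cited article.

```python
def breach_risk(attack_surface):
    """Sum likelihood * impact over points on the attack surface.

    attack_surface: list of (likelihood in [0, 1], impact in dollars).
    """
    return sum(likelihood * impact for likelihood, impact in attack_surface)

# Hypothetical enterprise attack surface
surface = [
    (0.25,   100_000),    # unpatched public web server
    (0.0625, 2_000_000),  # database holding customer records
    (0.125,  50_000),     # employee laptop
]
print(breach_risk(surface))  # 156250.0
```

In practice each likelihood would itself be derived from vulnerabilities, exposure, threats, and mitigating controls, and each impact from business criticality, per the factors listed above.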

C

Certificate Authorities:

Certificate Authorities issue the digital certificates that enable secure access to Web services.

The Public Key Infrastructure is the foundation for securing online communications. Digital certificates are issued by trusted Certificate Authorities (CAs) to domain owners, verifying the ownership of a domain. Internet users trust a domain with encrypted communications, such as bank websites, only if a valid certificate signed by a CA is presented. This mechanism effectively prevents Man-In-The-Middle (MITM) attacks that can have disastrous consequences, such as stealing users’ financial information.

However, the certificate issuance process is itself vulnerable to routing attacks, allowing network-level adversaries to obtain trusted digital certificates for any victim domain. These attacks have significant consequences for the integrity and privacy of online communications, as adversaries can use fraudulently obtained digital certificates to bypass the protection offered by encryption and launch man-in-the-middle attacks against critical communications.

How certificate authorities work. Domain control verification is a crucial process for domain owners to obtain digital certificates from CAs. Domain owners approach a CA to request a digital certificate, and the CA responds with a challenge that requires the owners to demonstrate control of an important network resource (for example, a website or email address) associated with the domain. Upon completion of the challenge, the CA issues the digital certificate to the domain owner.
—CACM, “Securing Internet Applications from Routing Attacks”
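The challenge-response flow above can be sketched as follows. This sketch is loosely modeled on the ACME HTTP-01 challenge; the path and function names are illustrative, and a plain dict stands in for the domain's web server.

```python
import secrets

def issue_challenge() -> str:
    """CA side: generate a random token the requester must publish."""
    return secrets.token_urlsafe(32)

def publish_token(webroot: dict, token: str) -> None:
    """Domain-owner side: serve the token at a well-known path on the domain."""
    webroot[f"/.well-known/ca-challenge/{token}"] = token

def validate(webroot: dict, token: str) -> bool:
    """CA side: fetch the well-known path; on a match, issue the certificate."""
    return webroot.get(f"/.well-known/ca-challenge/{token}") == token

site = {}                     # stands in for the web server at the domain
token = issue_challenge()
publish_token(site, token)
print(validate(site, token))  # True: only the domain's controller could publish it
```

The routing attacks described above work by hijacking the CA's fetch of that well-known path, so the adversary, not the true owner, appears to control the domain.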

Channel Capacity (Shannon Limit):

Given a channel with particular bandwidth and noise characteristics, Claude Shannon SM ’37, PhD ’40 showed how to calculate the maximum rate at which data can be sent over it with zero error. He called that rate the Channel Capacity, but today, it’s just as often called the Shannon Limit.

Shannon, who taught at MIT from 1956 until his retirement in 1978, showed that any communications channel — a telephone line, a radio band, a fiber-optic cable — could be characterized by two factors: bandwidth and noise. Bandwidth is the range of electronic, optical or electromagnetic frequencies that can be used to transmit a signal; noise is anything that can disturb that signal.
—MIT News, “Explained: The Shannon limit”

The Shannon Limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of information theory.
—Wikipedia, “Noisy-channel coding theorem”
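The limit itself has a closed form, C = B · log2(1 + S/N). A quick sketch; the telephone-line figures are a standard textbook example, not taken from the articles above.

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Channel capacity C = B * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3 kHz telephone line at 30 dB SNR caps out near 30 kbit/s,
# no matter how clever the coding scheme.
print(round(shannon_capacity(3000, 30)))  # 29902
```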

Channel Polarization (Polar Codes):

Arıkan’s goal was to transmit messages accurately over a noisy channel at the fastest possible speed. … Arıkan’s new solution was to create near-perfect channels from ordinary channels by a process he called “Channel Polarization.” Noise would be transferred from one channel to a copy of the same channel to create a cleaner copy and a dirtier one. After a recursive series of such steps, two sets of channels emerge, one set being extremely noisy, the other being almost noise-free. The channels that are scrubbed of noise, in theory, can attain the Shannon limit. He dubbed his solution polar codes. It’s as if the noise was banished to the North Pole, allowing for pristine communications at the South Pole.
—WIRED, “Huawei, 5G, and the Man Who Conquered Noise”

In information theory, a Polar Code is a linear block error-correcting code. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε polylog n) factor for any ε > 0.
—Wikipedia, “Polar code (coding theory)”
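For a binary erasure channel (BEC), one polarization step has a simple closed form that makes the effect easy to see numerically: combining two copies of a channel with erasure probability e yields a degraded channel with erasure probability 2e - e² and an upgraded one with e². The recursion below is the standard textbook illustration under that BEC assumption, not a production polar-code construction.

```python
def polarize(erasures):
    """One polarization step over a binary erasure channel (BEC):
    each channel with erasure probability e splits into a worse
    channel (2e - e**2) and a better one (e**2)."""
    out = []
    for e in erasures:
        out.append(2 * e - e * e)  # degraded virtual channel
        out.append(e * e)          # upgraded virtual channel
    return out

channels = [0.5]          # start from a BEC that erases half the bits
for _ in range(4):        # 4 recursions -> 2**4 = 16 virtual channels
    channels = polarize(channels)

# The channels drift toward erasure probability 0 (nearly perfect)
# or 1 (nearly useless); data bits go on the near-perfect ones.
print(min(channels), max(channels))
```

The average erasure probability stays at 0.5, so total capacity is conserved; polarization only redistributes it between the "North Pole" and "South Pole" channels of the metaphor above.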

Cloud Computing:

Cloud Computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each location being a data center. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, typically using a “pay-as-you-go” model which can help in reducing capital expenses but may also lead to unexpected operating expenses for unaware users.
—Wikipedia, “Cloud computing”

Cloud Computing is also a horse (but, of course) that won the 2017 Preakness Stakes.

Collusion Networks:

Collusion Networks are part of a “large scale reputation manipulation ecosystem,” engaged in “large-scale online social networking abuse.” “The manipulation services are called Collusion Networks since the users who knowingly participate collude with each other to generate fake actions.”
—CACM, “Technical Perspective: Fake ‘Likes’ and Targeting Collusion Networks”

Computer Security:

Computer Security, cybersecurity, or information technology security (IT security) is the protection of computer systems and networks from information disclosure, theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide.

The field is becoming increasingly significant due to the continuously expanding reliance on computer systems, the Internet and wireless network standards such as Bluetooth and Wi-Fi, and due to the growth of “smart” devices, including smartphones, televisions, and the various devices that constitute the “Internet of things”. Cybersecurity is also one of the significant challenges in the contemporary world, due to its complexity, both in terms of political usage and technology.
—Wikipedia, “Computer security”

Computer Viruses:

Computer Viruses are malicious programs that attack computer systems.

A virus is generally not regarded as a living organism, but sometimes described as (similar to) software. When the first self-replicating computer programs made the rounds, they were experiments or pranks; for most, the point was solely reproduction. An early computer worm was beneficent, but escaped control.

We distinguish Computer Viruses from computer worms by the profligate scale of replication, viruses generating a broadcast of copies rather than a chain of copies. The obvious points of analogy across both types of virus include that viruses are tiny, invading a host much greater in size and complexity, without an overt signal, and that viruses disrupt some process in the host. Neither computer nor biological virus necessarily does damage. In biology, self-replication is an end, not a means, making the damage a side-effect. In the modern computer virus, the end is likely to be the action of a payload of malicious code. Now the term “virus,” in both environments, connotes an intrusive and damaging force carrying dangerous baggage.
—CACM, “Protecting Computers and People From Viruses”

Content Moderation:

Debate continues “that social media companies regulate speech in a politically biased way. Such charges come from both sides of the aisle, but frequently are missing key facts about the rules and processes behind keeping up or taking down content and accounts and grossly misunderstand the technical workings at play in large-scale commercial Content Moderation.”
—CACM, “Content Moderation Modulation”

On Internet websites that invite users to post comments, a Moderation System is the method the webmaster chooses to sort contributions that are irrelevant, obscene, illegal, harmful, or insulting with regards to useful or informative contributions.

Social media sites may also employ Content Moderators to manually inspect or remove content flagged for hate speech or other objectionable content.
—Wikipedia, “Moderation system”

Overall, the term Content Moderation points to an ongoing debate about what content is appropriate and whose job it is to police it. Moderation becomes especially problematic at scale, as in the challenge of moderating mis- and dis-information, hate speech, and other content deemed inappropriate. The challenges are both political and technical.

Continuous Delivery:

Continuous Delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and, when releasing the software, without doing so manually. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.

CD contrasts with continuous deployment, a similar approach in which software is also produced in short cycles but through automated deployments rather than manual ones.
—Wikipedia, “Continuous delivery”

Continuous Delivery is a set of principles, patterns, and practices designed to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or a mobile app—predictable, routine affairs that can be performed on demand at any time.

The object of Continuous Delivery is to be able to get changes of all types—including new features, configuration changes, bug fixes, and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.
—ACMQueue, “Continuous Delivery Sounds Great, but Will It Work Here?”

Coronavirus-Covid19:

As used here, Coronavirus-Covid19 refers to technological aspects of the disease, such as technologies used to counteract it.

Credential Stealing:

Credential Stealing is stealing and using a valid login credential. It is now the most common attack vector. Basically, the thieves steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.
—Schneier on Security, “Credential Stealing as an Attack Vector”

Credential Access consists of techniques for stealing credentials like account names and passwords. Techniques used to get credentials include keylogging or credential dumping. Using legitimate credentials can give adversaries access to systems, make them harder to detect, and provide the opportunity to create more accounts to help achieve their goals.
—MITRE ATT&CK, “Credential Access”
(This article includes a long list of techniques.)

Credential Theft, the first stage of a credential-based attack, is the process of stealing credentials. Attackers commonly use phishing for credential theft, as it is a fairly cheap and extremely efficient tactic. The effectiveness of credential phishing relies on human interaction in an attempt to deceive employees, unlike malware and exploits, which rely on weaknesses in security defenses.
—Palo Alto Networks, Cyberpedia, “What is a Credential-Based Attack?”

Cryptography:

Cryptography is the technique of enciphering and deciphering messages to maintain the privacy of computer data.
—Schneier on Security, “Applied Cryptography: Protocols, Algorithms, and Source Code in C. A book by Bruce Schneier”

The art and science of keeping messages secure is Cryptography and it is practiced by cryptographers.
—Applied Cryptography: Protocols, Algorithms, and Source Code in C, by Bruce Schneier (1996), page 1.
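A minimal worked example of enciphering and deciphering, in the spirit of the definitions above, is the one-time pad: XOR the message with a random key of the same length. Used exactly once, it is provably secure; its key-distribution burden is why practical systems use ciphers such as AES instead. A sketch:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte.
    XOR is its own inverse, so the same function enciphers and deciphers."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # random key, used exactly once

ciphertext = xor_bytes(plaintext, key)
recovered = xor_bytes(ciphertext, key)
print(recovered == plaintext)  # True
```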

Cyber Attribution:

Cyber Attribution – A fundamental concept in cybersecurity and digital forensics is the fact that it is sometimes extremely difficult after a cyberattack to definitively name a perpetrator. Hackers have a lot of technical tools at their disposal to cover their tracks. And even when analysts figure out which computer a hacker used, going from there to who used it is very difficult. This is known as the attribution problem.

The [Cyber] Attribution problem is the idea that identifying the source of a cyber attack or cyber crime is often complicated and difficult because there is no physical act to observe and attackers can use digital tools to extensively cover their tracks.
—WIRED, “Hacker Lexicon: What Is the Attribution Problem?”

There is no universally agreed upon definition of the term Attribution in the field of information assurance (IA).

This paper defines “[Cyber] Attribution” as “determining the identity or location of an attacker or an attacker’s intermediary.” A resulting identity may be a person’s name, an account, an alias, or similar information associated with a person. A location may include physical (geographic) location, or a virtual location such as an IP address or Ethernet address.

This definition includes intermediaries, and not just the attacker. An ideal attribution process would always identify the original attacker’s identity and location. Unfortunately, clever attackers can often make themselves difficult to directly attribute (and/or provide misleading information to hide the true attacker). However, even if only an intermediary is identified, that information can still be useful. For example, blocking an attack may be more effective if an intermediary is known.

An attribution process may also provide additional information, such as the path used to perform the attack and the timing of the attack, but these cannot always be determined. In particular, it is worth noting that it can be difficult to determine by technical means the motivation for an attack.

A related term is traceback, which will be defined in this paper as “any attribution technique that begins with the defending computer and recursively steps backwards in the attack path toward the attacker.” Thus, traceback techniques are a subset of attribution techniques. The term “traceback” is common in the public literature on this topic.
—Institute for Defense Analyses, “Techniques for Cyber Attack Attribution”

CyberInsecurity:

CyberInsecurity is the lack of a secure cyber environment, one involving computers and networks. While vague, the term speaks to an environment that is systemically insecure, with no quick or simple remedy available. The challenges in addressing an insecure cyber world are many and significant. The term was coined by Glenn S. Gerstell, general counsel to the National Security Agency, although he did not provide a bespoke definition for it.

Cyber-Resilience:

Cyber Resilience is the ability of an enterprise to limit the impact of cyber-attacks.
—CACM, “Why Is Cybersecurity Not a Human-Scale Problem Anymore?”

Cyber Resiliency is the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources. Cyber resiliency is intended to enable mission or business objectives that depend on cyber resources to be achieved in a contested cyber environment.
—NIST, Computer Security Resource Center Glossary, “cyber resiliency”

As defined in plain English, Cyber-Resilience is the ability of an enterprise to limit the impact of security incidents. Cyber-resilience is not an opaque score that is derived from some simple scoring of N properties of a network. It is also not a number that you arrive at by answering a set of questions about your network. It is quite a bit more complicated than that, but can be calculated in a reasonable manner from observations of the state of your enterprise and a series of probabilistic mathematical calculations.
—Balbix, “What is cyber resilience?”

CyberSecurity:

CyberSecurity or Cyber Security is the art of protecting networks, devices, and data from unauthorized access or criminal use and the practice of ensuring confidentiality, integrity, and availability of information.
—U.S. Dept. of Homeland Security – Cybersecurity and Infrastructure Security Agency (CISA), “Security Tip (ST04-001) What is Cybersecurity?”

Computer Security, CyberSecurity or Information Technology Security (IT Security) is the protection of computer systems from theft or damage to their hardware, software or electronic data, as well as from disruption or misdirection of the services they provide.
—Wikipedia, “Computer security”

CyberSecurity is a subset of Information Security. Information Security, sometimes shortened to InfoSec, is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information.
—Wikipedia, “Information security”

CyberSecurity refers to a set of techniques used to protect the integrity of networks, programs and data from attack, damage or unauthorized access.
—PaloAlto Networks, “What is Cyber Security?

CyberSecurity is the practice of protecting systems, networks, and programs from digital attacks. These cyberattacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes. Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are becoming more innovative.
—Cisco, “What is Cybersecurity?

CyberSecurity is strengthening the security and resilience of cyberspace:

  • CyberSecurity at the federal level includes: Combating Cyber Crime, Securing Federal Networks, Protecting Critical Infrastructure, Cyber Incident Response, Cyber Safety, Cybersecurity Governance, Cybersecurity Insurance, Cybersecurity Jobs, Cybersecurity Training & Exercises, Information Sharing, Stakeholder Engagement and Cyber Infrastructure Resilience.
    —U.S. Dept. of Homeland Security – CyberSecurity overview, “Cybersecurity
  • Cyberspace and its underlying infrastructure are vulnerable to a wide range of risks stemming from both physical and cyber threats and hazards. Sophisticated cyber actors and nation-states exploit vulnerabilities to steal information and money and are developing capabilities to disrupt, destroy, or threaten the delivery of essential services. A range of traditional crimes are now being perpetrated through cyberspace. This includes the production and distribution of child pornography and child exploitation conspiracies, banking and financial fraud, intellectual property violations, and other crimes, all of which have substantial human and economic consequences.
  • Cyberspace is particularly difficult to secure due to a number of factors: the ability of malicious actors to operate from anywhere in the world, the linkages between cyberspace and physical systems, and the difficulty of reducing vulnerabilities and consequences in complex cyber networks. Of growing concern is the cyber threat to critical infrastructure, which is increasingly subject to sophisticated cyber intrusions that pose new risks. As information technology becomes increasingly integrated with physical infrastructure operations, there is increased risk for wide scale or high-consequence events that could cause harm or disrupt services upon which our economy and the daily lives of millions of Americans depend. In light of the risk and potential consequences of cyber events, strengthening the security and resilience of cyberspace has become an important homeland security mission.
    —U.S. Dept. of Homeland Security – Cybersecurity and Infrastructure Security Agency (CISA), “Cybersecurity, Overview

Internet Security deals specifically with networks and so is a subset of Cybersecurity.

Three elements are key to any definition of security: 1) Confidentiality, 2) Integrity, and 3) Availability.

CyberSecurity Engineering:

CyberSecurity Engineering is a set of principles encompassing the knowledge of how to design cybersecurity attacks and defenses. The principles are laid out in a book by O. Sami Saydjari titled “Engineering Trustworthy Systems.” Ten of the most fundamental principles are addressed in the article “Engineering Trustworthy Systems: A Principled Approach to Cybersecurity” published in CACM June 2019.

The ten most fundamental principles are:

  1. Cybersecurity’s goal is to optimize mission effectiveness.
  2. Cybersecurity is about understanding and mitigating risk.
  3. Theories of security come from theories of insecurity.
  4. Espionage, sabotage, and influence are goals underlying cyberattack.
  5. Assume your adversary knows your system well and is inside it.
  6. Without integrity, no other cybersecurity properties matter.
  7. An attacker’s priority target is the cybersecurity system.
  8. Depth without breadth is useless; breadth without depth, weak.
  9. Failing to plan for failure guarantees catastrophic failure.
  10. Strategy and tactics knowledge comes from attack encounters.

—CACM, “Engineering Trustworthy Systems: A Principled Approach to Cybersecurity

Cybersecurity Knowledge:

Cybersecurity Knowledge is facts, information and skills acquired by a person through experience or education resulting in theoretical or practical understanding of cybersecurity.

The Cybersecurity Knowledge tag identifies content that is useful and/or helpful in acquiring it. It’s a bit nebulous to be sure, and it’s also a bit of a stand-in when no other tags seem to fit. However, I’m using it with respect to reports and other collections of data, facts & information that may be helpful in this regard. It’s also used to identify efforts to increase Cybersecurity Knowledge.

D

Data Integrity:

Data Integrity is the maintenance of the accuracy and consistency of stored information. Accuracy means the data is stored as the set of values that were intended. Consistency means these stored values remain the same over time; they do not unintentionally waver or morph as time passes.
—CACM, “Achieving Digital Permanence
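The consistency property above can be checked in practice with a checksum: record a digest when the data is stored, then re-hash later and compare. A minimal sketch using Python's standard `hashlib`; the data and variable names are illustrative only:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 digest that fingerprints the stored bytes."""
    return hashlib.sha256(data).hexdigest()

# Record a checksum when the data is first stored.
original = b"account_balance=1000"
stored_digest = checksum(original)

# Later, re-hash the stored bytes and compare: a match indicates the
# values have not wavered or morphed; a mismatch signals corruption.
assert checksum(b"account_balance=1000") == stored_digest
assert checksum(b"account_balance=9000") != stored_digest
```

Real storage systems apply the same idea at scale (e.g., per-block checksums), but the compare-the-digest principle is the same.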

Data Mining:

Data Mining is a process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information (with intelligent methods) from a data set and transform the information into a comprehensible structure for further use.

The term “data mining” is a misnomer, because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction (mining) of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence.
—Wikipedia, “Data mining

Data Networks:

A Data Network is a system designed to facilitate the transfer of data between individuals as well as organisations. It enables such modern marvels as the Internet and telecommunication. The points between which data is exchanged are called network access points or nodes.
—Sterlite Technologies limited, “Data Networks: The Unsung Hero of Organisational Success

Data Network – The primary purpose of data transmission and networking is to facilitate communication and sharing of information between individuals and organizations. The two predominant types of data networks are broadcast networks, in which one node transmits information to several nodes simultaneously, and point-to-point networks, in which each sender communicates with one receiver.

Signals are typically transmitted via three main methodologies:

  1. Circuit Switching: Before two nodes communicate, they establish a dedicated communications channel through the network.
  2. Message Switching: Each message is routed in its entirety from switch to switch; at each switch, the message is stored and the information is read before being transmitted to the next switch.
  3. Packet Switching: Messages are broken down and information is grouped into packets; each packet is transmitted over a digital network via the optimal route to minimize lag in data network speed, then the message is reassembled at the destination.

In order to establish communication across machines, datacenter networks depend on Transmission Control Protocol (TCP) and the Internet Protocol (IP), the Internet Protocol Suite that dictates exactly how data should be packetized, addressed, transmitted, routed, and received.
—OMNI Sci, “Data Network
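The packet-switching method described above can be sketched in a few lines: a message is split into numbered packets, delivery order is scrambled to simulate packets taking different routes, and the receiver reassembles them by sequence number. A toy illustration, not a real network stack:

```python
import random

def packetize(message: bytes, size: int):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message by sorting on sequence number."""
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"hello, data network", size=4)
random.shuffle(packets)  # packets arrive out of order via different routes
assert reassemble(packets) == b"hello, data network"
```

Real protocols such as TCP add acknowledgements, retransmission, and flow control on top of this basic number-and-reassemble scheme.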

Deepfakes / Digital Fakes:

Deepfakes (a portmanteau of “deep learning” and “fake”) are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive. The main machine learning methods used to create deepfakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs).

Deepfakes have garnered widespread attention for their uses in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, and financial fraud. This has elicited responses from both industry and government to detect and limit their use.
—Wikipedia, “Deepfake

Deep Learning (DL):

Deep Learning (DL) is a general category that includes specific fields such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks and convolutional neural networks. [Links to Wikipedia.]

Deep Learning is a subfield of machine learning that has proven particularly promising over the last decade or so and is responsible for many of the well-known developments in artificial intelligence such as in computer vision and autonomous vehicles.

Deep learning is a statistical technique for fitting the parameters of deep neural networks. This process is often referred to as “training” the deep neural net. While neural networks can comprise any number of layers, deep learning uses neural networks with multiple hidden layers to process data, allowing it to recognize more complex patterns. More colloquially, any application using this approach is referred to as “deep learning.”

Deep learning has shown promise in a wide range of areas including image recognition, natural language processing, photo generation, game play, robotics, self-driving cars, drug discovery and music generation. For example, in image recognition, earlier layers (those toward the beginning of the process) may identify shapes, edges and other abstract features. Later layers may identify how those shapes come together to form ears, noses and other facial features.
—CSET Glossary, “Deep learning

Deep Learning Neural Network (DNN):

Deep Learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.

Deep-learning architectures such as Deep Neural Networks, deep belief networks, deep reinforcement learning, recurrent neural networks and convolutional neural networks have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
—Wikipedia, “Deep Learning

A Deep Neural Network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components function similarly to those of the human brain and can be trained like any other ML algorithm.
—Wikipedia, “Deep Learning: Deep Neural Networks

Deep Learning Neural Network (DNN) is a form of artificial intelligence and is a subset of Deep Learning.

Device-Independent Quantum Key Distribution:

Device-Independent Quantum Key Distribution (DIQKD) is the art of using untrusted devices to distribute secret keys in an insecure network. It thus represents the ultimate form of cryptography, offering not only information-theoretic security against channel attacks, but also against attacks exploiting implementation loopholes.
—Nature Communications, “Device-independent quantum key distribution with random key basis

Quantum Key Distribution (QKD) is a secure communication method which implements a cryptographic protocol involving components of quantum mechanics. It enables two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages. It is often incorrectly called quantum cryptography, as it is the best-known example of a quantum cryptographic task.

An important and unique property of quantum key distribution is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a communication system can be implemented that detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e., the eavesdropper has no information about it), otherwise no secure key is possible and communication is aborted.
—Wikipedia, “Quantum key distribution

Digital Millennium Copyright Act (DMCA):

The Digital Millennium Copyright Act (DMCA) is a 1998 United States copyright law that implements two 1996 treaties of the World Intellectual Property Organization (WIPO). It criminalizes production and dissemination of technology, devices, or services intended to circumvent measures that control access to copyrighted works (commonly known as digital rights management or DRM). It also criminalizes the act of circumventing an access control, whether or not there is actual infringement of copyright itself. In addition, the DMCA heightens the penalties for copyright infringement on the Internet. Passed on October 12, 1998, by a unanimous vote in the United States Senate and signed into law by President Bill Clinton on October 28, 1998, the DMCA amended Title 17 of the United States Code to extend the reach of copyright, while limiting the liability of the providers of online services for copyright infringement by their users.

The DMCA’s principal innovation in the field of copyright is the exemption from direct and indirect liability of Internet service providers and other intermediaries. This exemption was adopted by the European Union in the Electronic Commerce Directive 2000. The Information Society Directive 2001 implemented the 1996 WIPO Copyright Treaty in the EU.
—Wikipedia, “Digital Millennium Copyright Act

Digital Permanence:

Digital Permanence refers to the techniques used to anticipate and then meet the expected lifetime of data stored in digital media. Digital permanence not only considers data integrity, but also targets guarantees of relevance and accessibility: the ability to recall stored data and to recall it with predicted latency and at a rate acceptable to the applications that require that information.
—CACM, “Achieving Digital Permanence

Digital Spam:

Digital Spam encompasses all forms of junk content regardless of digital delivery mechanism, email or otherwise.

“While the most widely recognized form of spam is email spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet news-group spam, Web search engine spam, spam in blogs, wiki spam, online classified ads spam, mobile phone messaging spam, Internet forum spam, junk fax transmissions, social spam, spam mobile apps, television advertising and file sharing spam.”
—Wikipedia, “Spamming

The first reported case of digital spam occurred in 1978 and was attributed to Digital Equipment Corporation, who announced their new computer system to over 400 subscribers of ARPANET, the precursor network of modern Internet. The first mass email campaign occurred in 1994, known as the USENET green card lottery spam: the law firm of Canter & Siegel advertised their immigration-related legal services simultaneously to over 6,000 USENET newsgroups. This event contributed to popularizing the term spam.

Email spam has mainly two purposes: advertising (for example, promoting products, services, or content) and fraud (for example, attempting to perpetrate scams, or phishing). Neither idea was particularly new or unique to the digital realm: advertisement based on unsolicited content delivered by traditional post mail (and, later, phone calls, including more recently the so-called “robo-calls”) has been around for nearly a century. As for scams, the first reports of the popular advance-fee scam (in modern days known as the 419 scam, a.k.a. the Nigerian Prince scam), then called the Spanish Prisoner scam, were circulating in the late 1800s.
—CACM, “The History of Digital Spam

Disinformation (Coordinated Inauthentic Behavior):

Disinformation is a subset of propaganda and is false information that is spread deliberately to deceive.
—Wikipedia, “Disinformation

Disinformation is a subset of misinformation that is deliberately deceptive.
—Wikipedia, “Misinformation

Coordinated Inauthentic Behavior is a term used by Facebook. Facebook views “CIB as coordinated efforts to manipulate public debate for a strategic goal where fake accounts are central to the operation. There are two tiers of these activities that we work to stop: 1) coordinated inauthentic behavior in the context of domestic, non-government campaigns and 2) coordinated inauthentic behavior on behalf of a foreign or government actor.”
—Facebook, “June 2021 Coordinated Inauthentic Behavior Report

Distributed Computing:

Distributed Computing considers the scenario where a number of distinct, yet connected, computing devices (or parties) wish to carry out a joint computation of some function. Distributed Computing often deals with questions of computing under the threat of machine crashes and other inadvertent faults.
—CACM, “Secure Multiparty Computation

A Distributed Computer system consists of multiple software components that are on multiple computers, but run as a single system. The computers that are in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. A distributed system can consist of any number of possible configurations, such as mainframes, personal computers, workstations, minicomputers, and so on. The goal of distributed computing is to make such a network work as a single computer.

Distributed systems offer many benefits over centralized systems, including the following:

  • Scalability: The system can easily be expanded by adding more machines as needed.
  • Redundancy: Several machines can provide the same services, so if one is unavailable, work does not stop. Additionally, because many smaller machines can be used, this redundancy does not need to be prohibitively expensive.

Distributed computing systems can run on hardware that is provided by many vendors, and can use a variety of standards-based software components. Such systems are independent of the underlying software. They can run on various operating systems, and can use various communications protocols. Some hardware might use UNIX or Linux as the operating system, while other hardware might use Windows operating systems. For intermachine communications, this hardware can use SNA or TCP/IP on Ethernet or Token Ring.

You can organize software to run on distributed systems by separating functions into two parts: clients and servers. This is described in The client/server model. A common design of client/server systems uses three tiers, as described in Three-tiered client/server architecture.
—IBM, TXSeries for Multiplatforms documentation, “What is distributed computing

A Distributed System is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. A central challenge is ensuring that when one component of the system fails, the entire system does not fail. Examples of distributed systems vary from service-oriented architecture (SOA)-based systems to massively multiplayer online games to peer-to-peer applications.
—Wikipedia, “Distributed computing
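The message-passing coordination described above can be sketched with two threads standing in for networked nodes: they share no state and communicate only through queues, each acting on the messages it receives. A toy illustration (the node roles and values are invented for the example), not a real distributed system:

```python
import threading
import queue

# Each "node" gets its own inbox; nodes interact only by passing messages.
inbox_a, inbox_b = queue.Queue(), queue.Queue()
results = []

def node_b():
    # Node B performs its share of the joint computation (doubling a
    # number) and replies to node A.
    n = inbox_b.get()
    inbox_a.put(n * 2)

def node_a():
    inbox_b.put(21)                # send work to node B
    results.append(inbox_a.get())  # block until the reply arrives

tb = threading.Thread(target=node_b)
ta = threading.Thread(target=node_a)
tb.start(); ta.start()
ta.join(); tb.join()
assert results == [42]
```

In a real system the queues would be network sockets and the nodes separate machines, which is what introduces the crash-failure and ordering problems the definitions above describe.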

Domestic Intelligence:

Domestic Intelligence is information gathered that involves threats to our nation from within our nation. These are threats to the nation’s people, property, or interests. Intelligence can provide insights not available elsewhere that warn of potential threats and opportunities, and assess probable outcomes of proposed policy options.

This definition is adapted from the U.S. Office of the Director of National Intelligence, “What is Intelligence?

There are ostensibly four agencies that conduct domestic intelligence in the United States:

1 – FBI Intelligence Branch – The Intelligence Branch is the strategic leader of the FBI’s Intelligence Program.

2 – Homeland Security Office of Intelligence – HSI’s Office of Intelligence develops intelligence on illegal trade, travel and financial activity.

3 – Drug Enforcement Administration: Intelligence Program – The DEA, in coordination with other federal, state, local, and foreign law enforcement organizations has been responsible for the collection, analysis, and dissemination of drug-related intelligence.

4 – Department of Justice: Office of Intelligence – The Office of Intelligence works to ensure that Intelligence Community agencies have the legal authorities necessary to conduct intelligence operations.

—U.S. Naval War College, “Intelligence Studies: Foreign & Domestic Intelligence

E

Electronic and Information Warfare:

Electronic and Information Warfare as used here is an amalgamation of traditional electronic warfare and information warfare, fusing aspects of both. Anderson speaks of “the new ‘information warfare’ world of fake news, troll farms and postmodern propaganda.” In a sense these forms of propaganda (manipulated information) are carried over an electronic medium, combining information and electronic. It’s “propaganda and other psychological operations in information warfare.” Perhaps the best way to say this is “There are some interesting similarities between the decoys, jamming and other techniques used to manipulate enemy radar, and the techniques used to manipulate public opinion.” The synthesis of the two makes for very powerful tools. It’s not really quite that simple; reading Anderson’s Chapter 23 is insightful.
—Based on Security Engineering, 3rd edition, by Ross Anderson, Chapter 23 “Electronic and Information Warfare”

Information Warfare (IW) (as different from cyber warfare that attacks computers, software, and command control systems) manipulates information trusted by targets without their awareness, so that the targets will make decisions against their interest but in the interest of the one conducting information warfare. It is a concept involving the battlespace use and management of information and communication technology (ICT) in pursuit of a competitive advantage over an opponent.

The United States military focus tends to favor technology and hence tends to extend into the realms of electronic warfare, cyberwarfare, information assurance and computer network operations, attack, and defense.

Most of the rest of the world use the much broader term of “Information Operations” which, although making use of technology, focuses on the more human-related aspects of information use, including (amongst many others) social network analysis, decision analysis, and the human aspects of command and control.
—Wikipedia, “Information warfare

Electronic Warfare (EW) is any action involving the use of the electromagnetic spectrum (EM spectrum) or directed energy to control the spectrum, attack an enemy, or impede enemy assaults. The purpose of electronic warfare is to deny the opponent the advantage of—and ensure friendly unimpeded access to—the EM spectrum. EW can be applied from air, sea, land, and/or space by manned and unmanned systems, and can target communication, radar, or other military and civilian assets.
—Wikipedia, “Electronic warfare

Electronic Intelligence:

Electronic Intelligence is information derived primarily from electronic signals that do not contain speech or text.

Electronic intelligence (ELINT) is divided into major branches. One branch is Technical ELINT (TechELINT), which describes the signal structure, emission characteristics, modes of operation, emitter functions, and weapons systems associations of such emitters as radars, beacons, jammers, and navigational signals.

A main purpose of TechELINT is to obtain signal parameters which can define the capabilities and the role that the emitter plays in the larger system, such as a ground radar locating aircraft, and thus lead to the design of radar detection, countermeasure, or counterweapons equipment. The overall process, including operation of the countermeasures, is part of electronic warfare.

Another major branch is Operational ELINT (OpELINT), which concentrates on locating specific ELINT targets and determining the operational patterns of the systems. These results are commonly called Electronic Order of Battle (EOB). OpELINT also provides threat assessments, often referred to as “tactical ELINT.” OpELINT intelligence products support military operational planners and tactical military commanders on the battlefield.
—NSA, Center for Cryptologic History, “Electronic Intelligence (ELINT) at NSA” (PDF)

Signals Intelligence (SIGINT) is the term used now. SIGINT involves collecting foreign intelligence from communications and information systems and providing it to customers across the U.S. government. NSA collects SIGINT from various sources, including foreign communications, radar and other electronic systems. This information is frequently in foreign languages and dialects, is protected by codes and other security measures, and involves complex technical characteristics.
—NSA, “Signals Intelligence Overview

Electronic Signals Intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as “Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources.”
—Wikipedia, “Electronic signals intelligence

Note that the definitions above disagree on whether speech and text are subjects of electronic intelligence.

Encryption:

In cryptography, Encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.

Encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).

In symmetric-key encryption schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric key each day for encoding and decoding messages. In public-key or asymmetric-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read.
—Wikipedia, “Encryption
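The symmetric-key idea above (the same key both encrypts and decrypts) can be illustrated with a toy one-time-pad-style XOR cipher. This is a sketch for illustration only, not a vetted cipher such as AES, and the plaintext is invented for the example:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key byte; applying the same key twice
    # restores the plaintext, so one key serves both directions.
    return bytes(b ^ k for b, k in zip(data, key))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # the shared secret key

ciphertext = xor_cipher(plaintext, key)          # encrypt
assert xor_cipher(ciphertext, key) == plaintext  # same key decrypts
```

An interceptor who sees only `ciphertext` learns nothing intelligible without `key`, which is the confidentiality property the definition describes; public-key schemes differ in that the encryption key can be published while the decryption key stays private.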

End-to-End Encryption:

End-to-End Encryption (E2EE) is a type of messaging that keeps messages private from everyone, including the messaging service. When E2EE is used, a message only appears in decrypted form for the person sending the message and the person receiving the message. The sender is one “end” of the conversation and the recipient is the other “end”; hence the name “end-to-end.”

Many messaging services offer encrypted communications without true end-to-end encryption. A message is encrypted as it travels from the sender to the service’s server, and from the server to the recipient, but when it reaches the server, it is decrypted briefly before being re-encrypted. (This is the case with the common encryption protocol TLS.)

E2EE is “end-to-end” because it is impossible for anyone in the middle to decrypt the message. Users do not have to trust that the service they are using will not read their messages: it is not possible for the service to do so. Imagine if, instead of sending a letter in an envelope, someone sent it in a locked box to which only they had the key. Now it would be physically impossible for anyone to read the letter aside from its intended recipient. This is how E2EE works.
—CloudFlare, “What is end-to-end encryption?

Extreme Automation:

Extreme Automation describes an increasing reliance on robotics and Artificial Intelligence (AI) in all aspects of our lives. It includes disruptive technologies like three-dimensional (3-D) printing, the Internet of Things (IoT), machine-to-machine communications (like sensors), and cognitive systems.

Extreme Connectivity happens when all of these systems interact and communicate with each other and people in real time. It is 4 billion users connecting with 1 trillion devices across fifth generation (5G) wireless networks.

When Extreme Automation is combined with Extreme Connectivity, the power of our computing systems increases exponentially. The global Internet is being fueled by advances in connectivity and capacity. These advances aren’t happening in baby steps; they are 1,000-fold gains in capacity, connections for trillions of devices, and from a user perspective, incredibly low latency and rapid response rates. As more people connect with more machines, we are moving closer to zero-distance connectivity with technology.
—OpenText Blogs, “The Future of Information: Extreme Automation and Extreme Connectivity

Extreme Automation—the confluence of cloud applications, blockchain, cognitive automation, natural language processing, and more.
—KPMG, “Future of finance: Extreme automation

Extreme Ultraviolet Lithography:

Extreme Ultraviolet Lithography (also known as EUV or EUVL) is an optical lithography technology using a range of extreme ultraviolet (EUV) wavelengths, roughly spanning a 2% FWHM bandwidth about 13.5 nm, to produce a pattern by exposing a reflective photomask to EUV light, which is reflected onto a substrate covered by photoresist. It is widely applied in the semiconductor device fabrication process.

As of 2020, Samsung and Taiwan Semiconductor Manufacturing Company (TSMC) are the only companies who have used EUV systems in production, mainly targeting 5 nm. At the 2019 International Electron Devices Meeting (IEDM), TSMC reported use of EUV for 5 nm in contact, via, metal line, and cut layers, where the cuts can be applied to fins, gates or metal lines. At IEDM 2020, TSMC reported their 5 nm minimum metal pitch to be reduced 30% from that of 7 nm, which was 40 nm. Samsung’s 5 nm is lithographically the same design rule as 7 nm, with a minimum metal pitch of 36 nm.
—Wikipedia, “Extreme ultraviolet lithography

F

Filter Bubbles:

A Filter Bubble, social media bubble or ideological frame is a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent. Prime examples include Google Personalized Search results and Facebook’s personalized news-stream.

The term filter bubble was coined by internet activist Eli Pariser circa 2010 and discussed in his 2011 book of the same name. The bubble effect may have negative implications for civic discourse, according to Pariser, but contrasting views regard the effect as minimal and addressable. The results of the U.S. presidential election in 2016 have been associated with the influence of social media platforms such as Twitter and Facebook, and as a result have called into question the effects of the “filter bubble” phenomenon on user exposure to fake news and echo chambers, spurring new interest in the term, with many concerned that the phenomenon may harm democracy and well-being by making the effects of misinformation worse.
—Wikipedia, “Filter bubble

Beware online “filter bubbles”
A TED Talk by Eli Pariser during TED2011

Formal Methods:

In computer science, specifically software engineering and hardware engineering, Formal Methods are a particular kind of mathematically rigorous techniques for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.

Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, discrete event dynamic system and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
—Wikipedia, “Formal methods
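As a small illustration of the verification side of formal methods (not drawn from the quoted sources), the sketch below exhaustively explores every reachable state of a toy two-process mutual-exclusion protocol and checks a safety property, in the spirit of explicit-state model checking; the protocol, state encoding, and function names are invented for this example.

```python
from collections import deque

# Toy protocol: two processes share a "turn" variable; a process may
# enter its critical section only when it holds the turn, and passes
# the turn to the other process when it leaves.
def step(state):
    pc0, pc1, turn = state
    succs = []
    if pc0 == "idle" and turn == 0:
        succs.append(("crit", pc1, turn))   # process 0 enters
    if pc0 == "crit":
        succs.append(("idle", pc1, 1))      # process 0 leaves, passes turn
    if pc1 == "idle" and turn == 1:
        succs.append((pc0, "crit", turn))   # process 1 enters
    if pc1 == "crit":
        succs.append((pc0, "idle", 0))      # process 1 leaves, passes turn
    return succs

def check(initial, safe):
    """Breadth-first exploration of all reachable states; returns a
    counterexample state if the safety property is ever violated."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not safe(s):
            return False, s
        for t in step(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

def mutual_exclusion(s):
    return not (s[0] == "crit" and s[1] == "crit")

ok, counterexample = check(("idle", "idle", 0), mutual_exclusion)
```

Because the state space here is tiny, the search terminates quickly and proves the property holds for every reachable state, which is exactly the kind of guarantee testing alone cannot give.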

Fourth Industrial Revolution:

The Fourth Industrial Revolution is also known as the digital revolution.
—Glenn S. Gerstell, general counsel of the National Security Agency, “I Work for N.S.A. We Cannot Afford to Lose the Digital Revolution.

The Fourth Industrial Revolution, 4IR, or Industry 4.0, conceptualizes rapid change to technology, industries, and societal patterns and processes in the 21st century due to increasing interconnectivity and smart automation. Coined popularly by the World Economic Forum Founder and Executive Chairman, Klaus Schwab, it asserts that the changes seen are more than just improvements to efficiency, but express a significant shift in industrial capitalism.

A part of this phase of industrial change is the joining of technologies like artificial intelligence and gene editing with advanced robotics that blur the lines between the physical, digital, and biological worlds.

Throughout this, fundamental shifts are taking place in how the global production and supply network operates through ongoing automation of traditional manufacturing and industrial practices, using modern smart technology, large-scale machine-to-machine communication (M2M), and the internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyze and diagnose issues without the need for human intervention.

It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the omni-use and commonness of technological use throughout society (e.g. a metaverse) that changes the ways we experience and know the world around us. It posits that we have created and are entering an augmented social reality compared to just the natural senses and industrial ability of humans alone.
—Wikipedia, “Fourth Industrial Revolution

The Fourth Industrial Revolution represents a fundamental change in the way we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technology advances commensurate with those of the first, second and third industrial revolutions. These advances are merging the physical, digital and biological worlds in ways that create both huge promise and potential peril.
—World Economic Forum, “Fourth Industrial Revolution

FPGA (Field-Programmable Gate Array):

An FPGA (Field-Programmable Gate Array) is a chip that is programmed with a circuit. It is said to “emulate” that circuit. This emulation runs slower than the actual circuit would run if it were implemented in an ASIC—it has a slower clock frequency and uses more power, but it can be reprogrammed every few hundred milliseconds.
—CACM, “The History, Status, and Future of FPGAs

A Field-Programmable Gate Array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term field-programmable. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.

FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects allowing blocks to be wired together. Logic blocks can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.

FPGAs have a remarkable role in embedded system development due to their capability to start system software (SW) development simultaneously with hardware (HW), enable system performance simulations at a very early phase of the development, and allow various system partitioning (SW and HW) trials and iterations before final freezing of the system architecture.
—Wikipedia, “Field-programmable gate array
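A minimal sketch (not from the quoted sources) of the idea behind a configurable logic block: a lookup table whose loaded truth table is its configuration, so reloading the table reprograms the logic. The class name and table encoding are invented for illustration.

```python
class LUT2:
    """2-input lookup table: the loaded truth table *is* the configuration."""

    def __init__(self, truth_table):
        assert len(truth_table) == 4, "2 inputs -> 4 truth-table entries"
        self.table = list(truth_table)

    def __call__(self, a, b):
        return self.table[(a << 1) | b]   # the inputs index into the table

xor_gate = LUT2([0, 1, 1, 0])   # configured as XOR
and_gate = LUT2([0, 0, 0, 1])   # same structure, reprogrammed as AND
```

Real FPGA logic blocks add flip-flops, carry chains, and routing, but the reprogram-by-reloading-a-table principle is the same.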

G

Game Theory:

In the context used in this website, within the topic of complex systems, Game Theory is the mathematical study of optimizing agents, not the mathematical study of sequential games.

Game Theory is the study of mathematical models of strategic interactions among rational agents. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed two-person zero-sum games, in which each participant’s gains or losses are exactly balanced by those of other participants. In the 21st century, game theory applies to a wide range of behavioral relations; it is now an umbrella term for the science of logical decision making in humans, animals, as well as computers.
—Wikipedia, “Game theory
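To make the zero-sum idea concrete, here is a small sketch (the payoff matrix is invented for illustration) computing the security levels of both players in a two-person zero-sum game; when the row player's maximin equals the column player's minimax, that entry is a saddle point, i.e., a pure-strategy equilibrium.

```python
# Payoffs to the row player; the column player receives the negation (zero-sum).
A = [[4, 2, 3],
     [1, 0, 2]]

# Maximin: the row player picks the row whose worst-case payoff is largest.
row_security = max(min(row) for row in A)

# Minimax: the column player picks the column whose largest loss is smallest.
col_security = min(max(A[i][j] for i in range(len(A)))
                   for j in range(len(A[0])))

# Here both equal 2, so A[0][1] is a saddle point: neither player can
# improve by unilaterally switching strategies.
```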

Going Dark:

Going Dark as used here refers to encryption preventing law enforcement from accessing data. Various law enforcement officials have on occasion indicated that if encryption remains strong for all, then it will enable the so-called “bad guys” to hide data through encryption. The problem with this is that in “encryption systems” the “distinction between military and consumer products largely doesn’t exist.” So, while it is understandable that law enforcement is challenged by strong encryption, the fact remains that to weaken encryption for one class (“bad guys”) weakens it for all.
—See: Schneier on Security, “The Myth of Consumer-Grade Security

Going Dark is also used to mean essentially going silent. There are several nuanced variations in the contexts of social media, interpersonal relationships, intelligence agencies, and a generic sense.
—Urban Dictionary, “Going Dark

Going Dark, or Go Dark, also refers to reactions to U.S. legislation proposed in 2011 to combat illegal content or activities; the legislation would have had significant unintended and/or unanticipated adverse consequences, and going dark is what organizations claimed they would have had to do in order to comply with it.
—Wikipedia, “Protests against SOPA and PIPA

H

Hacker Culture:

Forty years ago, the word “hacker” was little known. Its march from obscurity to newspaper headlines owes a great deal to tech journalist Steven Levy, who in 1984 defied the advice of his publisher to call his first book Hackers: Heroes of the Computer Revolution. Hackers were a subculture of computer enthusiasts for whom programming was a vocation and playing around with computers constituted a lifestyle. Levy locates the origins of Hacker Culture among MIT undergraduates of the late-1950s and 1960s, before tracing its development through the Californian personal computer movement of the 1970s and the home videogame industry of the early 1980s.

The original hackers were neither destructive nor dedicated to the pilfering of proprietary data, unlike the online vandals and criminals who later appropriated the word, but they were quite literally antisocial. Levy describes their lack of respect for any rules or conventions that might limit their access to technology or prevent them from reconfiguring systems. They are seen by-passing locked doors, reprogramming elevators, and appropriating tools.

Not all observers of hacker culture were so accepting. Levy rejected MIT professor Joseph Weizenbaum’s portrayal of the institute’s “computer bums” (a term borrowed from [Stuart] Brand), which recalled the sordid opium dens found in Victorian novels: “bright, young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers,…

MIT professor Sherry Turkle presented an equally biting picture of MIT’s hacker culture in her ethnographic study The Second Self: Computers and the Human Spirit, another classic study of early computer use. As a humanist joining MIT’s faculty, she had “immersed [herself] in a world that was altogether strange to [her].” Turkle spent most of the book exploring the cognitive possibilities computing opened for education and personal development. Yet she used the hackers primarily as a cautionary illustration of what happens when human development goes wrong.
—CACM, “When Hackers Were Heroes

Hacker Ethic:

The Hacker Ethic as defined by Steven Levy is:

  • Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!
  • All information should be free.
  • Mistrust authority—promote decentralization.
  • Hackers should be judged by their hacking, not criteria such as degrees, age, race, sex, or position.
  • You can create art and beauty on a computer.
  • Computers can change your life for the better.

—”Hackers: Heroes of the Computer Revolution” by Steven Levy, pages 27-34.

For a good discussion of the subject see the article “When Hackers Were Heroes.”

Hackers & Hacking:

Hackers & Hacking is a very broad tag group that includes various shades of hackers (white, grey, and black hat) and their exploits. Traditionally, hacking was what someone did to figure out how something works and how to make it better, and it is still that for many. However, modern usage typically carries a negative connotation.

Usage on this website is in the broad sense, covering both traditional and modern usages. It’s a catch-all; posted content is more closely identified with other associated tags.

Hacking for Hire:

Hacking for Hire includes hacking for other people or organizations for a fee and can be either so-called “white-hat” (“good guys”), “black-hat” (“bad guys”), or the nebulous middle ground, “grey-hat.” (Good v. bad can also be in the eye of the beholder.)

It is related to but not the same as Surveillance for Hire.

Hacktivism:

In Internet activism, Hacktivism, or hactivism (a portmanteau of hack and activism), is the use of computer-based techniques such as hacking as a form of civil disobedience to promote a political agenda or social change. With roots in hacker culture and hacker ethics, its ends are often related to free speech, human rights, or freedom of information movements.

“Hacktivism” is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking. But just as hack can sometimes mean cyber crime, hacktivism can be used to mean activism that is malicious, destructive, and undermining the security of the Internet as a technical, economic, and political platform.
—Wikipedia, “Hacktivism

Hardware Trojans:

Hardware Trojans are one of several classes of “invasive hardware attacks” that consist of “changing the physical layout of a single Integrated Circuit or assembly of ICs.” Hardware Trojans modify the layout of a legitimate IC during design and fabrication. The other two classes are counterfeit attacks that substitute an illegitimate chip for a legitimate one, and assembly attacks that include incorporating additional ICs in the end-user device. …Insertion of malicious Hardware Trojans can occur at any stage during IC manufacturing. …Many variants of Hardware Trojans can be implemented to achieve a range of attacks: from the addition of extra transistors creating new logic to the modification of the wire width of the clock distribution network, introducing clock skew.
—CACM, “The Die is Cast

Heuristic Evaluation:

Heuristic Evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the “heuristics”).
—Jakob Nielsen, “How to Conduct a Heuristic Evaluation

A Heuristic Evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface (UI) design. It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the “heuristics”). These evaluation methods are now widely taught and practiced in the new media sector, where UIs are often designed in a short space of time on a budget that may restrict the amount of money available to provide for other types of interface testing.
—Wikipedia, “Heuristic evaluation

Honey Pot / Watering Hole:

Honey Pot, Watering Hole, Decoy Websites, or Decoy Networks are all forms of security deception software: means of tricking malicious actors into visiting sites they have nefarious designs on but that are actually set up as ploys. These can also be set up by malicious actors to trap innocent or unsuspecting visitors in order to commit theft or for other malicious ends.
—General Terms

Approaches to the security deception method vary, but the principle behind them remains the same: enable a hacker to penetrate your network, then trick him or her into thinking they are working with your actual network or data when in fact they are really working with a dummy network or dummy data. Often, security deception software creates emulations of the inner workings of entire networks or Web sites in an attempt to fool hackers.
—CACM, “Spoofing the Spoofers

In computer terminology, a HoneyPot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of data (for example, in a network site) that appears to be a legitimate part of the site and contains information or resources of value to attackers. It is actually isolated, monitored, and capable of blocking or analyzing the attackers. This is similar to police sting operations, colloquially known as “baiting” a suspect.
—Wikipedia, “Honeypot (computing)

Watering Hole is a computer attack strategy in which an attacker guesses or observes which websites an organization often uses and infects one or more of them with malware. Eventually, some member of the targeted group will become infected.
—Wikipedia, “Watering hole attack
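As a toy illustration of the decoy principle (not drawn from the quoted sources), the sketch below opens a listening socket that presents a fake service banner and records every connection attempt; real deception platforms emulate far richer services, and the banner text and helper names here are invented.

```python
import socket
import threading
from datetime import datetime, timezone

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy port, greet connections with a fake banner,
    and log the time and source address of each attempt."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen()

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append((datetime.now(timezone.utc), addr))
            conn.sendall(b"220 FTP server ready\r\n")  # looks like a real service
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return srv.getsockname()[1], log, t
```

Anyone connecting to the returned port receives the banner and, unknowingly, leaves an entry in `log` for the defender to analyze.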

HTTP Standard:

The HTTP Standard, the language of web servers, was born humbly in 1990 as the hypertext transfer protocol. HTTP was basically just a few verbs—simple commands—that a browser said to a web server. The most essential of these were GET, which asks a server for information, and POST, which sends info back.
—WIRED, “Meet the Web’s Operating System: HTTP

The Hypertext Transfer Protocol (HTTP) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP protocol version that was named 0.9.

HTTP/1 was finalized and fully documented (as version 1.0) in 1996. It evolved (as version 1.1) in 1997 and then its specifications were updated in 1999 and in 2014.

Its secure variant named HTTPS is used by more than 76% of websites.

HTTP/2 is a more efficient expression of HTTP’s semantics “on the wire”, and was published in 2015; it is used by more than 45% of websites; it is now supported by almost all web browsers (96% of users) and major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension where TLS 1.2 or newer is required.

HTTP/3 is the proposed successor to HTTP/2; it is used by more than 20% of websites; it is now supported by many web browsers (73% of users). HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. Support for HTTP/3 was added to Cloudflare and Google Chrome first, and is also enabled in Firefox.
—Wikipedia, “Hypertext Transfer Protocol
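A minimal, self-contained sketch of the GET verb described above, using Python's standard library to stand up a throwaway local server and issue a request against it; the handler name, path, and response body are invented for illustration.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):                       # invoked for the GET verb
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):           # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")                    # GET asks the server for a resource
response = conn.getresponse()
status, body = response.status, response.read()
server.shutdown()
```

Under the hood this exchanges exactly the kind of plain-text request and response the protocol was born with in 1990; POST works the same way, with the client sending a body along with the request.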

I

Indistinguishability Obfuscation:

In cryptography, Indistinguishability Obfuscation (abbreviated IO or iO) is a type of software obfuscation with the defining property that obfuscating any two programs that compute the same mathematical function results in programs that cannot be distinguished from each other. Informally, such obfuscation hides the implementation of a program while still allowing users to run it. Formally, IO satisfies the property that obfuscations of two circuits of the same size which implement the same function are computationally indistinguishable.
—Wikipedia, “Indistinguishability obfuscation

…The quest now is to find a way to make Indistinguishability Obfuscation (iO) efficient enough to become a practical reality.

When it was first proposed, the value of iO was uncertain. Mathematicians had originally tried to find a way to implement a more intuitive form of obfuscation intended to prevent reverse engineering. If achievable, virtual black box (VBB) obfuscation would prevent a program from leaking any information other than the data it delivers from its outputs. Unfortunately, a seminal paper published in 2001 showed that it is impossible to guarantee VBB obfuscation for every possible type of program.

In the same paper, though, the authors showed that a weaker form they called iO was feasible. While iO does not promise to hide all the details of a logic circuit, as long as they are scrambled using iO, different circuits that perform the same function will leak the same information as each other; an attacker would not be able to tell which implementation is being used to provide the results they obtain.
—CACM, “Better Security Through Obfuscation

Industrial Food:

Industrial Food is a reference to foods grown and produced that rely on an industrialized system of production.

The modern food system is a product of the forces inherent in free-market capitalism. Decisions on where to invest in technological research and where to apply its fruits have been guided by the drive for ever greater efficiency, productivity, and profit.

The result has been a long, steady trend toward greater abundance. …The steady march of higher yields was achieved by using large quantities of fertilizers and pesticides, as well as by discarding local crop varieties that were deemed unfavorable. Farmland became concentrated in the hands of a few large players. …In the same period, the proportion of the US workforce employed in agriculture shrank from slightly over 40% to around 2%. Supply chains have continued to be optimized for speed, reduced costs, and increased returns on investment. …Consumers have been mostly happy to enjoy the increases in convenience that have come with these trends, but there has also been a backlash. Products that are distributed globally can come across as soulless, removed from local culinary tradition and cultural contexts. …As a reaction, more affluent eaters now look for “authenticity” and turn to food as an arena in which to declare their identity. …Such attitudes fail to acknowledge the obvious: the availability, accessibility, and affordability of Industrial Food has been a major force in reducing food insecurity around the world.
—MIT Technology Review, “Technology can help us feed the world, if we look beyond profit: Powerful technologies like genetic modification aren’t the enemy of a healthy, sustainable food system.

Informatics:

Informatics is the study of computational systems, especially those for data storage and retrieval. According to ACM Europe and Informatics Europe, informatics is synonymous with computer science and computing as a profession, in which the central notion is transformation of information. In other countries, the term “informatics” is used with a different meaning in the context of library science.

In some countries, depending on local interpretations, the term “informatics” is used synonymously to mean information systems, information science, information theory, information engineering, information technology, information processing, or other theoretical or practical fields.

In the United States, however, the term informatics is mostly used in context of data science, library science or its applications in healthcare (biomedical informatics), where it first appeared in the US.
—Wikipedia, “Informatics

See the CACM article “Informatics as a Fundamental Discipline for the 21st Century” for an in-depth discussion of the subject with a focus on Informatics for All.

Information Integration:

Information Integration is representing data and models as a “system of systems” where all knowledge is interconnected. It is one of the five key research challenges for creating knowledge-rich intelligent systems.

Data, models, information, and knowledge are scattered across different communities and disciplines, causing great limitations to current geosciences research. [Integration of all of these elements] presents major research challenges that will require the use of scientific knowledge for Information Integration.
—CACM, “Intelligent Systems for Geosciences: An Essential Research Agenda

Information Integrity:

Information Integrity is the trustworthiness and dependability of information. Information integrity relies on the accuracy, consistency, and reliability of the information content, processes and systems to maintain a healthy information ecosystem.

Disinformation and information warfare pose an immense threat to information integrity.
—Yonder, “What is information integrity?

Integrity is the quality or state of being complete or undivided.
—Merriam-Webster, “integrity

Information Security:

Information Security, sometimes shortened to InfoSec, is the practice of protecting information by mitigating information risks. It is part of information risk management.

Information Security, typically involves preventing or reducing the probability of unauthorized/inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. Information Security‘s primary focus is the balanced protection of the confidentiality, integrity, and availability of data (also known as the CIA triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a structured risk management process.
—Wikipedia, “Information security

A critical aspect of security in this context is that of confidentiality, integrity, and availability of data. All three elements must be maintained for information to be “secure.”

Information Theory:

Information Theory is the scientific study of the quantification, storage, and communication of digital information. The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s. The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering.
—Wikipedia, “Information Theory

In fact, by the early 1980s, the answers to the first two questions—Are there codes that can drive the data rate even higher? If so, how much higher? And what are those codes?—were more than 30 years old. They’d been supplied in 1948 by Claude Shannon SM ’37, PhD ’40 in a groundbreaking paper that essentially created the discipline of Information Theory. “People who know Shannon’s work throughout science think it’s just one of the most brilliant things they’ve ever seen,” says David Forney, an adjunct professor in MIT’s Laboratory for Information and Decision Systems.
—MIT News, “Explained: The Shannon limit

The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded the modern discipline of Information Theory.
—Wikipedia, “Noisy-channel coding theorem
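The Shannon limit just described has a closed form, C = B·log₂(1 + S/N) bits per second, where B is the bandwidth and S/N is the linear signal-to-noise ratio. A quick sketch follows; the 3 kHz / 30 dB numbers are chosen only as a familiar voice-channel illustration.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Maximum error-free data rate (bits/s) of a noisy channel:
    C = B * log2(1 + S/N), with S/N converted from decibels."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel at 30 dB SNR tops out just under 30 kbit/s.
capacity = shannon_capacity(3000, 30)
```

Shannon's theorem guarantees that codes exist to approach this rate arbitrarily closely with vanishing error probability, but says nothing above it: no code can beat the limit.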

Information Warfare:

Information Warfare (IW) (as different from cyber warfare that attacks computers, software, and command control systems) manipulates information trusted by targets without their awareness, so that the targets will make decisions against their interest but in the interest of the one conducting information warfare. It is a concept involving the battlespace use and management of information and communication technology (ICT) in pursuit of a competitive advantage over an opponent.

The United States military focus tends to favor technology and hence tends to extend into the realms of electronic warfare, cyberwarfare, information assurance and computer network operations, attack, and defense.

Most of the rest of the world use the much broader term of “Information Operations” which, although making use of technology, focuses on the more human-related aspects of information use, including (amongst many others) social network analysis, decision analysis, and the human aspects of command and control.
—Wikipedia, “Information warfare

Intelligent Systems:

Intelligent Systems refers to different software tools that enable decision makers to draw on the knowledge and decision processes of experts in making decisions.
—ScienceDirect, “Intelligent Systems

Intelligent Systems incorporate artificial intelligence to some degree. [Indiana University, Dept. of Intelligent Systems Engineering] Also applied artificial intelligence. [Univ. of Pittsburgh, Intelligent Systems Program]

Intelligent Machines is incorporated into this category since, especially in this sense, a machine is at least a system if not a system of systems.

Interactive Analytics:

Interactive Analytics is an extension of real-time analytics that has the capacity to crunch through huge volumes of unstructured data at scale and at speed. It gives users the ability to run complex queries across complex data landscapes in real time.

In addition to handling a high volume of data fast, interactive analytics can query stored data sets in an ad hoc fashion, whatever their complexity. Typically, queries of this kind can take time, but interactive analytics accelerates the process by going beyond standard tables to return results faster.
—Sisense, “What is interactive analytics?

Internet Industry:

The Internet Industry consists of companies that provide a wide variety of products and services primarily online through their Web sites. Operations include, but are not limited to, search engines, retailers, travel services, as well as dial-up and broadband access services. As product offerings can vary widely within the industry, participants don’t all compete with each other.
—Value Line, “Industry Overview: Internet
Archived version: “Industry Overview: Internet

Internet Security:

Internet Security deals specifically with networks and so is a subset of Cybersecurity.

CyberSecurity or Cyber Security is the art of protecting networks, devices, and data from unauthorized access or criminal use and the practice of ensuring confidentiality, integrity, and availability of information.
—CISA, “Security Tip (ST04-001), What is Cybersecurity?

J

K

Key Escrow:

Key Escrow (also known as a “fair” cryptosystem) is an arrangement in which the keys needed to decrypt encrypted data are held in escrow so that, under certain circumstances, an authorized third party may gain access to those keys. These third parties may include businesses, who may want access to employees’ secure business-related communications, or governments, who may wish to be able to view the contents of encrypted communications (also known as exceptional access).

The technical problem is a largely structural one. Access to protected information must be provided only to the intended recipient and at least one third party. The third party should be permitted access only under carefully controlled conditions, as for instance, a court order. Thus far, no system design has been shown to meet this requirement fully on a technical basis alone.
—Wikipedia, “Key escrow
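One structural idea in escrow discussions is to split the decryption key so that no single party, including the escrow agent, can act alone. The sketch below (a simplified illustration, not a fielded escrow design; the function names are invented) uses two XOR shares, both of which are required to reconstruct the key.

```python
import secrets

def split_key(key: bytes):
    """Split a key into two XOR shares. Either share alone reveals
    nothing about the key; XORing both recovers it exactly."""
    escrow_share = secrets.token_bytes(len(key))
    user_share = bytes(k ^ e for k, e in zip(key, escrow_share))
    return user_share, escrow_share

def recover_key(user_share: bytes, escrow_share: bytes) -> bytes:
    return bytes(u ^ e for u, e in zip(user_share, escrow_share))

key = secrets.token_bytes(16)
user_share, escrow_share = split_key(key)
```

Note that the hard part identified above is not the splitting, which is easy, but building the surrounding controls that ensure the escrowed share is released only under legitimate conditions.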

For a series of articles on Key Escrow see: Schneier on Security, “Entries Tagged “key escrow”

In short, the essence of the issue is that governmental claims of the need for key escrow are far outweighed by the need for strong encryption. Any effort to weaken encryption to make it easier for governments to access the encrypted data of “bad guys” equally weakens encryption for the rest of the world, with the net result of creating more problems than it solves.

Knowledge Representation:

Knowledge Representation and reasoning (KRR, KR&R, KR²) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
—Wikipedia, “Knowledge representation and reasoning

Knowledge Representation is where acquired knowledge is stored and organized in a form that the computer understands.
—ScienceDirect, “Knowledge Representation

What is a Knowledge Representation? We argue that the notion can best be understood in terms of five distinct roles it plays, each crucial to the task at hand:

  1. A Knowledge Representation (KR) is most fundamentally a surrogate, a substitute for the thing itself, used to enable an entity to determine consequences by thinking rather than acting, i.e., by reasoning about the world rather than taking action in it.
  2. It is a set of ontological commitments, i.e., an answer to the question: In what terms should I think about the world?
  3. It is a fragmentary theory of intelligent reasoning, expressed in terms of three components: (i) the representation’s fundamental conception of intelligent reasoning; (ii) the set of inferences the representation sanctions; and (iii) the set of inferences it recommends.
  4. It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished. One contribution to this pragmatic efficiency is supplied by the guidance a representation provides for organizing information so as to facilitate making the recommended inferences.
  5. It is a medium of human expression, i.e., a language in which we say things about the world.

—MIT Computer Science and Artificial Intelligence Laboratory, “What is a Knowledge Representation?

L

Law of Armed Conflict:

International humanitarian law (IHL), also referred to as the Laws of Armed Conflict, is the law that regulates the conduct of war (jus in bello). It is a branch of international law which seeks to limit the effects of armed conflict by protecting persons who are not participating in hostilities, and by restricting and regulating the means and methods of warfare available to combatants.

International humanitarian law is inspired by considerations of humanity and the mitigation of human suffering. It comprises a set of rules, established by treaty or custom, that seeks to protect persons and property/objects that are, or may be, affected by armed conflict and limits the rights of parties to a conflict to use methods and means of warfare of their choice.
—Wikipedia, “International humanitarian law

M

Machine Learning:

Machine Learning is a set of techniques by which a computer system learns how to perform a task through recognizing patterns in data and inferring decision rules, rather than through explicit instructions. Machine learning also refers to the subfield of computer science and statistics studying how to advance those techniques.

Machine learning has led to most of the recent advances in artificial intelligence. These advances have been incorporated into systems used by millions of people every day, such as Google’s search and translate tools, Amazon’s digital assistant Alexa and Netflix’s movie recommendation algorithm. It also includes specialized systems such as AlphaGo, text generators like BERT and GPT-2 and game-playing systems like OpenAI Five, AlphaStar and poker-playing Pluribus.

The essence of machine learning is a system recognizing patterns of relationships between inputs and outputs. For example, the U.S. Postal Service used machine learning to train a system to recognize handwritten zip codes on mail. The system was fed images of handwritten examples paired with the corresponding numbers as typed by humans, so it learned to identify what features were common in handwritten digits and how they varied. Once trained, the system could correctly identify previously unseen examples of handwritten digits.

Not all computing is machine learning and not all artificial intelligence systems that use computing use machine learning. Many computer programs, including most commonly used software, use rule-based systems where programmers set the actions the system should take. However, machine learning is useful for applications where it is difficult for human designers to specify the correct actions to take. For example, IBM’s Deep Blue used a rule-based, exhaustive-search approach to beat the world chess champion. Deep Blue is therefore an example of an AI system not based on machine learning. On the other hand, DeepMind used machine learning to create AlphaGo, an AI system capable of out-performing humans in Go. While it is theoretically possible to solve Go with rule-based algorithms, the search space is so large that winning against a human would have been impossible. Machine learning allowed AlphaGo to infer strategies not yet discovered by humans, leading it to beat the world champion.
—CSET Glossary, “Machine Learning
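The zip-code example above can be miniaturized. The perceptron below is not how the Postal Service system worked, but it shows the same principle: inferring a decision rule (here, logical AND) from labeled examples instead of writing one by hand:

```python
# Toy perceptron: learns a decision rule from (input, label) examples
# rather than from explicit if/else instructions.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1    # weights, bias, learning rate

for _ in range(20):                 # training epochs
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred          # weights only move when the prediction is wrong
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples])
# → [0, 0, 0, 1]
```

After training, the learned weights reproduce the labels on all four examples; nothing in the code spells out the AND rule itself.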

Malware:

Malware (a portmanteau for malicious software) is any software intentionally designed to cause disruption to a computer, server, client, or computer network, leak private information, gain unauthorized access to information or systems, deprive users access to information or which unknowingly interferes with the user’s computer security and privacy. By contrast, software that causes harm due to some deficiency is typically described as a software bug.

Many types of Malware exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wiper, and scareware. The defense strategies against malware differ according to the type of malware, but most can be thwarted by installing antivirus software, firewalls, applying regular patches to reduce zero-day attacks, securing networks from intrusion, having regular backups, and isolating infected systems. Malware is now being designed to evade antivirus software detection algorithms.
—Wikipedia, “Malware

Meme Culture:

A Meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme. A meme acts as a unit for carrying cultural ideas, symbols, or practices, that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme’s reproductive success.
—Wikipedia, “Meme” (This is an extensive article on the subject.)

Memes, it turns out, far from being trivial or unimportant, afford us an entry point into thinking about communities, solidarity, performance, practice, and meaning. They can teach us about the ways that norms are created and sustained and how they function to organize communal behavior.
—Oxford University Press Blog, “What can we learn from meme culture?

Meme Culture is the culture that has developed around the use of memes. It is best understood by first looking at what memes are and then at how they are used; that examination leads to an understanding of Meme Culture.

Meltdown Vulnerability:

Meltdown is a novel attack that exploits a vulnerability in the way the processor enforces memory isolation.
—CACM, “Meltdown: Reading Kernel Memory from User Space

Meltdown is a hardware vulnerability affecting Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors. It allows a rogue process to read all memory, even when it is not authorized to do so.

Meltdown affects a wide range of systems. At the time of disclosure (2018), this included all devices running any but the most recent and patched versions of iOS, Linux, macOS, or Windows. Accordingly, many servers and cloud services were impacted, as well as a potential majority of smart devices and embedded devices using ARM-based processors (mobile devices, smart TVs, printers and others), including a wide range of networking equipment. A purely software workaround to Meltdown has been assessed as slowing computers between 5 and 30 percent in certain specialized workloads, although companies responsible for software correction of the exploit are reporting minimal impact from general benchmark testing.

Meltdown was issued a Common Vulnerabilities and Exposures ID of CVE-2017-5754, also known as Rogue Data Cache Load (RDCL), in January 2018. It was disclosed in conjunction with another exploit, Spectre, with which it shares some characteristics. The Meltdown and Spectre vulnerabilities are considered “catastrophic” by security analysts. The vulnerabilities are so severe that security researchers initially believed the reports to be false.
—Wikipedia, “Meltdown (security vulnerability)

Misinformation:

Misinformation is false, inaccurate, or misleading information that is communicated regardless of an intention to deceive.
—Wikipedia, “Misinformation

Misinformation contrasts with Disinformation, which is deliberately false, inaccurate, or misleading.

Moore’s Law:

Moore’s Law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years. Moore’s law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter), who in 1965 posited a doubling every year in the number of components per integrated circuit, and projected this rate of growth would continue for at least another decade. While Moore did not use empirical evidence in forecasting that the historical trend would continue, his prediction held since 1975 and has since become known as a “law”.
—Wikipedia, “Moore’s law
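As back-of-the-envelope arithmetic, doubling every two years means multiplying by 2^((year − base year)/2). Taking the Intel 4004's roughly 2,300 transistors (1971) as an illustrative baseline:

```python
# Projected transistor count under Moore's law: doubling every two years.
base_year, base_count = 1971, 2_300        # Intel 4004, used as an illustrative baseline

def projected(year):
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1981, 1991, 2001):
    print(year, f"{projected(year):,.0f}")
# 1981 → 73,600;  1991 → 2,355,200;  2001 → 75,366,400
```

These are idealized projections of the trend, not actual chip counts; real products over- and under-shoot the curve, which is exactly why Moore's law is an empirical observation rather than a law of physics.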

Multi-factor Authentication:

Two-factor authentication (2FA) is a subset of Multi-factor Authentication, an electronic authentication method that requires a user to prove their identity in multiple ways before they are allowed access to an account. Two-factor authentication is so named because it requires a combination of two factors, whereas multi-factor authentication can require more.
—Norton, “What is two-factor authentication (2FA) and how does it work?

Use Multi-factor Authentication instead of Two-factor Authentication (2FA).
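A common second factor is a time-based one-time password (TOTP), the rotating six-digit code shown by authenticator apps. The sketch below implements RFC 6238 with Python's standard library; the final line checks it against the RFC's published SHA-1 test vector (secret "12345678901234567890" at time 59 seconds):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (a 'something you have' factor)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))   # → 94287082 (RFC 6238 Appendix B test vector)
```

Because the code is derived from a shared secret plus the current 30-second window, a stolen password alone is not enough to log in.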

Multilevel Security:

Multilevel Security is the concept of processing information with different classifications and categories that simultaneously permits access by users with different security clearances and denies access to users who lack authorization.
—National Institute of Standards and Technology, Computer Security Resource Center, “multi-level security (MLS)

Multilevel Security is a security policy that allows the classification of data and users based on a system of hierarchical security levels combined with a system of non-hierarchical security categories. A multilevel-secure security policy has two primary goals. First, the controls must prevent unauthorized individuals from accessing information at a higher classification than their authorization. Second, the controls must prevent individuals from declassifying information.
—IBM, “What is multilevel security?
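The two goals in the IBM definition map onto the classic Bell-LaPadula properties: "no read up" and "no write down." A minimal sketch with hypothetical levels and categories (invented for illustration):

```python
# Bell-LaPadula style checks over hierarchical levels plus non-hierarchical categories.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level, subject_cats=frozenset(), object_cats=frozenset()):
    """Simple security property: the subject must dominate the object (no read up)."""
    return LEVELS[subject_level] >= LEVELS[object_level] and object_cats <= subject_cats

def can_write(subject_level, object_level):
    """*-property: writing below one's level would effectively declassify information."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))    # True: reading down is allowed
print(can_read("confidential", "secret"))    # False: no read up
print(can_write("secret", "confidential"))   # False: no write down
```

The category check captures the non-hierarchical part of the IBM definition: even a top-secret-cleared subject cannot read an object tagged with a category it does not hold.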

Multiparty Computation:

Secure Multi-party Computation (also known as secure computation, multi-party computation (MPC) or privacy-preserving computation) is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their inputs while keeping those inputs private. Unlike traditional cryptographic tasks, where cryptography assures security and integrity of communication or storage and the adversary is outside the system of participants (an eavesdropper on the sender and receiver), the cryptography in this model protects participants’ privacy from each other.

The foundation for secure multi-party computation started in the late 1970s with the work on mental poker, cryptographic work that simulates game playing/computational tasks over distances without requiring a trusted third party. Note that traditionally, cryptography was about concealing content, while this new type of computation and protocol is about concealing partial information about data while computing with the data from many sources, and correctly producing outputs.
—Wikipedia, “Secure multi-party computation
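The core trick can be shown with additive secret sharing: each party splits its private input into random shares that sum to the input, so the parties can jointly compute a total while no one ever sees anyone else's value. A toy three-party sketch (the salaries are invented):

```python
import secrets

P = 2 ** 61 - 1   # a public prime modulus; all arithmetic is done mod P

def share(value, n=3):
    """Split a private input into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties each secret-share a private salary...
salaries = [70_000, 85_000, 60_000]
all_shares = [share(s) for s in salaries]

# ...then each party locally sums the shares it received (one column each).
partial_sums = [sum(col) % P for col in zip(*all_shares)]

# Combining only the partial sums reveals the total, never the individual inputs.
print(sum(partial_sums) % P)   # → 215000
```

Each individual share is a uniformly random number, so a party learns nothing about the others' salaries from the shares it holds; only the final combination reveals the (intended) output.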

N

Network Science:

Network Science is an academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes (or vertices) and the connections between the elements or actors as links (or edges). The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology.

The United States National Research Council defines Network Science as “the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena.”
—Wikipedia, “Network science

Today [Network Science] informs all sorts of pursuits, from generating algorithmic recommendations on Facebook to mapping terrorist networks to, yes, forecasting the spread of lethal diseases. But when Rényi got started, he wanted the answer to a simple question: What would a network organized completely at random look like? How would it behave?
—WIRED, “The Vulnerable Can Wait. Vaccinate the Super-Spreaders First: Who gets priority when Covid-19 shots are in short supply? Network theorists have a counterintuitive answer: Start with the social butterflies.
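In graph terms, nodes with many links have high degree, and the "super-spreaders" in the WIRED piece are simply the highest-degree nodes. A minimal sketch on an invented four-person contact network:

```python
# A tiny network as nodes (vertices) and links (edges); degree = number of links.
edges = [("alice", "bob"), ("bob", "carol"), ("carol", "alice"), ("carol", "dave")]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# High-degree nodes are the "social butterflies" an epidemic model might target first.
print(max(degree, key=degree.get))   # → carol
```

Real network-science work layers much more on top (clustering, centrality measures, community detection), but degree counting is where most of it starts.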

Neural Networks:

Neural Networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.
—IBM, “Neural Networks

Neural Networks are computing systems with interconnected nodes that work much like neurons in the human brain. Using algorithms, they can recognize hidden patterns and correlations in raw data, cluster and classify it, and – over time – continuously learn and improve.
—SAS, “Neural Networks: What they are & why they matter

Deep learning is a statistical technique that uses Neural Networks composed of multiple hidden layers of nodes and typically trained on large amounts of data to capture patterns and relationships in data.

At its simplest, a Neural Network can be made up of just three layers: an input layer where data is observed, a hidden layer where data is processed and the output layer where a conclusion is communicated. When a neural network has multiple hidden layers it is called a “deep neural network.” While shallow neural networks have uses, most of the recent advances in AI have come from deep neural networks. Because of this, in contemporary usage the term neural network usually refers to a deep neural network. The term neural network is also sometimes used interchangeably with deep learning, which technically refers to the process of training the deep neural network.
—CSET Glossary, “Neural Networks

Neural Networks (also known as artificial neural networks or neural nets) are one common type of machine learning algorithm. They were loosely inspired by aspects of biological brains. In the brain, signals cascade between neurons; similarly, a neural net is organized as layers of nodes that can send signals of varying strengths based on the inputs they receive. Analogous to human learning, training a neural network involves adjusting how and when the nodes in different layers activate.
—CSET Glossary, “Deep Learning
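The three-layer description above (input, hidden, output) can be sketched as a forward pass in a few lines; the weights here are arbitrary numbers chosen for illustration rather than trained values:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                       # input layer: observed data
h = layer(x, [[1.0, -2.0], [0.5, 0.5]], [0.0, 0.1])   # hidden layer: processing
y = layer(h, [[1.5, -1.0]], [-0.2])                   # output layer: conclusion
print(round(y[0], 3))
```

Training (which this sketch omits) consists of nudging those weights and biases so the output layer's conclusions match labeled examples; stacking more hidden layers turns this into a deep neural network.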

Neurodiversity Programs:

Neurodiversity Programs seek to uncover the strengths of neurodiverse individuals and utilize their talents to increase the innovation and productivity of society as a whole. Neurodiversity is a concept that regards individuals with differences in brain function and behavioral traits as part of normal variation in the human population.
—Stanford Medicine, “Stanford Neurodiversity Project

…Neurodiversity Program that recognizes and values neurological differences, agrees that people on the autism spectrum “are particularly well-suited to the IT industry.” Many tend to pay attention to detailed requirements and precision in their work, he says, and they also have great visual acuity, and like to perform repetitive tasks.

“It’s more than just hiring and placing people,” Kearon says. The big need here is to support people and help them develop careers over time. “It’s not just the initial placement; we need to change corporate cultures and make sure they’re supportive of people who think differently.”
—CACM, “Hiring from the Autism Spectrum

See also Autism at Work.

Non-Traditional Data Sources:

Non-Traditional Data Sources such as satellite imagery and CDRs [call detail records] have been used to map poverty at scale. We improve the state of the art by combining publicly accessible, anonymous advertising data with satellite imagery.

[Non-Traditional Data Sources include things like] call detail records (CDRs), Facebook advertising data, the Global Database of Events Language and Tone (the GDELT project), and Twitter data [among others].
—CACM, “Non-Traditional Data Sources: Providing Insights into Sustainable Development

Traditional [media] sources are sometimes referred to as ‘old’ or ‘mainstream’ media. Traditional sources are more authoritative as they come from professionals in the form of newspapers, radio or television.

Non-Traditional [Media] Sources, also referred to as sources from ‘citizen journalism’, or ‘new’, or ‘electronic’ media, usually come from social media, unpublished websites and blogs. They do not have to be approved and therefore can be created by anyone.
—Medium @ Catherine Alfille, “Traditional vs Non-Traditional Sources

[Non-Traditional Data is] data routinely collected by companies—including transportation providers, mobile network operators, social media networks and others. … [It also includes routinely collected market research], [and] data collected from mobile devices, internet platforms and satellite images.
—Forbes, “Leveraging Non-Traditional Data For The Covid-19 Socioeconomic Recovery Strategy

Note that “Non-Traditional Data Sources” is not the same as “Unstructured Data.”

O

OAuth Access Tokens:

OAuth (Open Authorization) is an open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites but without giving them the passwords. Generally, OAuth provides clients a “secure delegated access” to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without providing credentials.
—Wikipedia, “OAuth

An Access Token is an object encapsulating the security identity of a process or thread. A token is used to make security decisions and to store tamper-proof information about some system entity.
—Wikipedia, “Access token

Facebook implements OAuth 2.0 authorization framework which allows third-party applications to gain restricted access to users’ accounts without sharing authentication credentials (i.e., username and password). When a user authenticates an application using OAuth 2.0, an Access Token is generated. This access token is an opaque string that uniquely identifies a user and represents a specific permission scope granted to the application to perform read/write actions on behalf of the user. A permission scope is a set of permissions requested by the application to perform actions on behalf of the user.
—CACM, “Measuring and Mitigating OAuth Access Token Abuse by Collusion Networks
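A toy model of the token-and-scope idea follows. All token strings and scope names here are hypothetical; real access tokens are opaque values validated by the provider, not inspected by the client:

```python
# Toy model of OAuth access tokens carrying a permission scope.
# The provider's token store maps each opaque token to an identity and granted scope.
token_store = {
    "EAAB...opaque": {"user_id": "1001", "scope": {"read_posts", "read_profile"}},
}

def authorize(token, required_permission):
    """Grant the request only if the token exists and its scope covers the action."""
    grant = token_store.get(token)
    return grant is not None and required_permission in grant["scope"]

print(authorize("EAAB...opaque", "read_posts"))    # True: within the granted scope
print(authorize("EAAB...opaque", "publish_post"))  # False: never granted to this app
```

The key property is that the user's password never reaches the third-party application; the application holds only a token whose reach is limited to the permission scope the user approved.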

OAuth Secure Delegated Access:

Generally, OAuth provides clients a “Secure Delegated Access” to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without providing credentials.
—Wikipedia, “OAuth

OAuth Secure Delegated Access is the process by which access is granted using OAuth Access Tokens.

Open Architecture:

Open Architecture is a type of computer architecture or software architecture intended to make adding, upgrading, and swapping components with other computers easy. … Open Architecture allows potential users to see inside all or parts of the architecture without any proprietary constraints.
—Wikipedia, “Open architecture

Open Compute Project:

The Open Compute Project (OCP) is an organization that shares designs of data center products and best practices among companies, including ASUS, ARM, Facebook, IBM, Intel, Nokia, Google, Microsoft, Seagate Technology, Dell, Rackspace, Hewlett Packard Enterprise, NVIDIA, Cisco, Goldman Sachs, Fidelity, Lenovo and Alibaba Group. The Open Compute Project began in Facebook as an internal project in 2009 called “Project Freedom”.

The Open Compute Project Foundation maintains a number of OCP projects, such as: server designs, data storage, rack designs, energy-efficient data centers, and open networking switches.
—Wikipedia, “Open Compute Project

In 2011, the Open Compute Project started out of a basement lab in Facebook’s Palo Alto headquarters. Its mission was to design from a clean slate the most efficient and economical way to run compute at scale.
—CACM, “Power to the People
(This article provides great detail about the OCP.)

Open Web Index:

The Open Web Index is a European-based approach for a competitive and data-protecting digital infrastructure. It aims to build a basis for genuine competition in the digital platform business. The main idea of the Open Web Index is to set up a publicly funded, global, searchable index of the Web that is open to competing companies, institutions and civil society actors.
Open Web Index

While there seems to be a multitude of search engines on the market, there are only a few relevant search engines in terms of them having their own index (the database of Web pages underlying a search engine). Other search engines pull results from one of these search engines (for example, Yahoo pulls results from Bing), and should therefore not be considered search engines in the true sense of the word. Globally, the major search engines with their own indexes are Google, Bing, Yandex, and Baidu. Other independent search engines may have their own indexes, but not to the extent that their size makes them competitive in the global search engine market.

I am proposing an idea for a missing part of the Web’s infrastructure, namely a searchable index [, the Open Web Index]. The idea is to separate the infrastructure part of the search engine (the index) from the services part, thereby allowing for a multitude of services, whether existing as search engines or otherwise, to be run on a shared infrastructure.
—CACM, “The Web Is Missing an Essential Part of Infrastructure: An Open Web Index
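The "index" being discussed is essentially a giant inverted index: a map from each term to the documents containing it, which any number of competing services could then query. A toy version with invented documents:

```python
# A searchable index decoupled from any one search service: multiple
# "services" can run queries against the same shared inverted index.
docs = {
    "d1": "open web index shared infrastructure",
    "d2": "search engine market and the open web",
    "d3": "building an index for the web",
}

index = {}   # term -> set of documents containing it
for doc_id, text in docs.items():
    for term in text.split():
        index.setdefault(term, set()).add(doc_id)

def search(query):
    """AND-query: return the documents containing every query term."""
    results = [index.get(t, set()) for t in query.split()]
    return sorted(set.intersection(*results)) if results else []

print(search("open web"))   # → ['d1', 'd2']
```

The CACM proposal is, in effect, to run something like the `index` structure as shared public infrastructure while leaving ranking and presentation (the `search` side) to competing services.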

Open Source Intelligence (OSINT):

OSINT stands for Open Source Intelligence, which refers to any information that can legally be gathered from free, public sources about an individual or organization. In practice, that tends to mean information found on the internet, but technically any public information falls into the category of OSINT whether it’s books or reports in a public library, articles in a newspaper or statements in a press release.

OSINT also includes information that can be found in different types of media, too. Though we typically think of it as being text-based, information in images, videos, webinars, public speeches and conferences all fall under the term.
—SentinelOne, “What is Open Source Intelligence (OSINT)?

OSINT Framework is focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources.
OSINT Framework

The Certified in Open Source Intelligence (C|OSINT) program is the first and only globally recognized and accredited board certification on open source intelligence.
—Listed in DHS, CISA, NICCS Education and Training Catalog.

P

Persistent Engagement:

Persistent Engagement, a.k.a. “defending forward” and “hunting forward,” is an offensive cyber doctrine advanced by Gen. Paul Nakasone, Commander, U.S. Cyber Command and Director, National Security Agency/Chief, Central Security Service.

Paul Nakasone became one of the nation’s founding cyberwarriors—an elite group that basically invented the doctrine that would guide how the US fights in a virtual world.

Nakasone quickly embraced his new authority [in 2018] under a philosophy he has dubbed “Persistent Engagement.”

The idea [of Persistent Engagement], in part, is simply to bog adversaries down. “Some of the things we see today might just be screwing with your enemy enough that they’re spending as much time trying to figure out what vulnerabilities they have, who screwed up, what’s really going on,” one official explains. “It takes the time and attention and the resources of your enemy.”

The “Persistent Engagement” approach is, in many ways, an attempt to reconcile the lessons of the mission that Nakasone led against ISIS in 2016 with the old NSA philosophy of strategic patience. Online attacks can’t be ordered up like a Tomahawk missile, deploying in hours to any place on the planet. “For cyber operations, you can’t just ask the military, ‘OK, we’re ready for you now,’” says Buckner, who retired last year after heading cyber policy for the Army. “Those accesses and understanding of how an adversary works in cyberspace is built up over years, and if you want it years from now, you need to start now.”
—WIRED, “The Man Who Speaks Softly—and Commands a Big Cyber Army

Phishing:

Phishing is:

  • Deceptive computer-based means to trick individuals into disclosing sensitive personal information.
  • A technique for attempting to acquire sensitive data, such as bank account numbers, through a fraudulent solicitation in email or on a web site, in which the perpetrator masquerades as a legitimate business or reputable person.
  • A digital form of social engineering that uses authentic-looking—but bogus—e-mails to request information from users or direct them to a fake Web site that requests information.
  • An attack in which the Subscriber is lured (usually through an email) to interact with a counterfeit Verifier/Reputable Person and tricked into revealing information that can be used to masquerade as that Subscriber to the real Verifier/RP.

—National Institute of Standards and Technology, Computer Security Resource Center, “phishing

Post-Quantum Cryptographic Algorithms:

Post-Quantum Cryptographic Algorithms are digital signature, public-key encryption, and key-establishment algorithms that are capable of protecting sensitive government information well into the foreseeable future, including after the advent of quantum computers.
—National Institute of Standards and Technology, Computer Security Resource Center, “Announcing Request for Nominations for Public-Key Post-Quantum Cryptographic Algorithms

Note that post-quantum cryptography is also called quantum-resistant cryptography.
—NIST, CSRC, “Post-Quantum Cryptography

While the NIST definition above targets “sensitive government information,” it also applies to sensitive private-sector information.

Prioritizing Cybersecurity:

Prioritizing Cybersecurity – Given our analysis, we believe there is a harsh reality lurking beneath the surface within many organizations. While they may be saying the right things in public to satisfy investors, underwriters, and customers, there is an apparent lack of urgency in promoting a truly resilient and secure organization.
—CACM, “Cybersecurity: Is It Worse than We Think?

The point is that, from a security professional’s perspective, cybersecurity is not given enough priority in light of the breadth and depth of the security threats that exist. It’s a matter of risk management: prioritization depends on perspective and risk tolerance. C-suite executives and shareholders prize return on investment (ROI) and see the expense of cybersecurity as offering questionable return, while security practitioners struggle to communicate the risks in ROI terms. For the same reasons it’s hard to prove a negative, it’s hard to prove the value of prioritizing cybersecurity until a compromising event occurs, but by then it’s too late and the damage is done…

Propaganda:

Propaganda is communication that is primarily used to influence an audience and further an agenda, which may not be objective and may be selectively presenting facts to encourage a particular synthesis or perception, or using loaded language to produce an emotional rather than a rational response to the information that is being presented. Propaganda can be found in news and journalism, government, advertising, entertainment, education, and activism and is often associated with material which is prepared by governments as part of war efforts, political campaigns, revolutionaries, big businesses, ultra-religious organizations, the media, and certain individuals such as soapboxers.

In the 20th century, the term propaganda was often associated with a manipulative approach, but historically, propaganda has been a neutral descriptive term.
—Wikipedia, “Propaganda

Propaganda is a broad term; Disinformation is a narrower subset of it. Disinformation is the term used more in the modern context, whereas propaganda tends toward historical usage. Neither usage nor meaning is mutually exclusive.

Public-Interest Technologists:

Public-Interest Technologists are people who understand the technology—especially computer, information, and Internet technology—that permeates all aspects of our society. People who understand that technology need to be part of public-policy discussions. We need technologists who work in the public interest. We need Public-Interest Technologists.

I think of Public-Interest Technologists as people who combine their technological expertise with a public-interest focus, either by working on tech policy, working on a tech project with a public benefit, or working as a more traditional technologist for an organization with a public-interest focus. Public-interest technology isn’t one thing; it’s many things. And not everyone likes the term. Maybe it’s not the most accurate term for what different people do, but it’s the best umbrella term that covers everyone.
—Bruce Schneier, “Public-Interest Technology Resources

Q

Quantum Computing:

Quantum Computing is a subfield of quantum information science—including quantum networking, quantum sensing, and quantum simulation—which harnesses the ability to generate and use quantum bits, or qubits.

Quantum computers have the potential to solve certain problems much more quickly than conventional, or other classical, computers. They leverage the principles of quantum mechanics to perform multiple operations simultaneously in a way that is fundamentally different from classical computers. While quantum computers are not likely to replace classical computers, there are two key properties of qubits that fundamentally change the way quantum computers store and manipulate data compared to classical computers:

  1. Superposition: the ability of a particle to be in several different states at the same time.
  2. Entanglement: the ability of two particles to share information even at a distance.

To conceptualize these properties, envision a coin that has two states—heads or tails. That coin represents traditional bits. If you spun the coin, it would be both heads and tails at the same time (superposition). If you spun a pair of two entangled coins, the state of one would instantly change the state of the other (entanglement). Superposition and entanglement enable a connected group of qubits to have significantly more processing power than the same number of binary bits.

However, qubits are also subject to decoherence, a process in which the interaction between qubits and their environment changes the state of the quantum computer, causing information from the system to leak out or be lost. You can imagine the table under the spinning coin shaking, and the coin being knocked over. In order for a quantum computer to actually perform computations, it requires coherence to be preserved. Noise in the system, caused by vibration, changes in temperature, and even cosmic rays, leads to errors in a quantum computer’s calculations. It is possible to address this by running a quantum error correction (QEC) algorithm on a quantum computer to create redundancy, but the process is very resource intensive. The interplay between error correction and decoherence is the strongest determining factor as to when a large scale, cyber-relevant quantum computer will be built.

The aim of quantum computing research today is to build a stable and coherent quantum computer system while developing new applications for these devices. While quantum computers are unlikely to be useful as a direct replacement for classical computers, they will be able to solve certain problems that are practically impossible for today’s classical computers. In a similar manner to how graphics processing units (GPUs) accelerate specific tasks for today’s computers, quantum processing units (QPUs) will do the same. Already the quantum computing community has identified a range of problems across material science, biophysics and chemistry, machine learning and artificial intelligence that will have transformative solutions driven by quantum computers.
—Belfer Center for Science and International Affairs, “Quantum Computing and Cybersecurity
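The coin analogy above can be made concrete with a tiny state-vector simulation. This is only an illustrative sketch (plain Python, no quantum hardware or library involved): it prepares a Bell state, the standard example of two entangled qubits.

```python
import math

# 2-qubit state as 4 amplitudes, basis order |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

# Hadamard on the first qubit: puts it into an equal superposition
# of 0 and 1 -- the "spinning coin".
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]),
         h * (state[1] + state[3]),
         h * (state[0] - state[2]),
         h * (state[1] - state[3])]

# CNOT (first qubit controls the second): entangles the pair,
# swapping the |10> and |11> amplitudes.
state[2], state[3] = state[3], state[2]

# Resulting Bell state (|00> + |11>) / sqrt(2): measuring one qubit
# instantly determines the other -- the "entangled coins".
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5] -- only 00 or 11 ever observed
```

Note that classically simulating n qubits takes 2ⁿ amplitudes, which is exactly why large quantum computers cannot be emulated efficiently by classical machines.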

Quantum Cryptography:

Quantum Cryptography leverages the properties of quantum mechanics, as opposed to mathematics, to carry out cryptographic objectives such as key distribution. Quantum key distribution (QKD) is a method of transmitting a secret key over distance. It allows two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages and cannot be intercepted without the parties noticing, nor can it be reproduced by a third party. It is an information-theoretically secure solution to the key-management problem, which means its security is not based on any computational hardness assumption. It is the use of a quantum technology, and the need for physical infrastructure capable of transmitting quantum states, that gives rise to the label of quantum cryptography.
—CACM, “The Complex Path to Quantum Resistance”
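The best-known QKD scheme is BB84, whose key-sifting step can be illustrated with a classical toy simulation. This is a sketch of the bookkeeping only (random bases, discard mismatches); the actual security guarantee comes from measuring real quantum states, which classical code cannot reproduce, and the parameters here are arbitrary.

```python
import random

random.seed(7)  # deterministic for the example

n = 64
# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
# Bob measures each photon in his own randomly chosen basis.
bob_bases   = [random.randint(0, 1) for _ in range(n)]

# With no eavesdropper, Bob's result matches Alice's bit whenever the
# bases agree; when they differ, his result is random and gets discarded.
bob_results = [b if ab == bb else random.randint(0, 1)
               for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both publicly announce bases (not bits) and keep only the
# positions where the bases matched -- about half of them.
key_alice = [b for b, ab, bb in zip(alice_bits,  alice_bases, bob_bases) if ab == bb]
key_bob   = [b for b, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]

assert key_alice == key_bob  # shared secret key, ~n/2 bits on average
print(len(key_alice), "sifted key bits agreed")
```

An eavesdropper measuring in a wrong basis would disturb the photons and show up as mismatches when Alice and Bob compare a sample of the sifted key, which is what makes interception detectable.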

Quantum CyberSecurity:

Quantum CyberSecurity is the field that studies all aspects affecting the security and privacy of communications and computations caused by the development of quantum technologies.
—CACM “Cyber Security in the Quantum Era

Quantum Resistance:

Post-quantum cryptography, also known as quantum-resistant or quantum-safe cryptography, is a subset of classical cryptography which can be deployed on existing classical devices and is currently believed to be safe against the threat of a scalable quantum computer.
—CACM, “The Complex Path to Quantum Resistance”

Quantum Technologies:

Quantum Technologies are those that employ the properties of quantum mechanics, such as quantum entanglement, quantum superposition and quantum tunneling.

Quantum Vulnerability:

Quantum Vulnerability expresses a vulnerability brought on by potential advances in the ability to break cryptographic security using quantum computing.

Queueing Theory:

Queueing Theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

Queueing Theory has its origins in research by Agner Krarup Erlang when he created models to describe the system of Copenhagen Telephone Exchange company, a Danish company. The ideas have since seen applications including telecommunication, traffic engineering, computing and, particularly in industrial engineering, in the design of factories, shops, offices and hospitals, as well as in project management.
—Wikipedia, “Queueing theory

Queueing Theory had been invented by the telephone engineers (starting with A.K. Erlang) in the early 1900s, then taken up by the mathematicians, but after [World War II] the Operations Research folks began to apply it to industrial problems (for example, Jackson applied it to job-shop scheduling); but it never was a mainstream tool. Yet queueing systems models had all the ingredients of a mathematical approach to performance evaluation of networks that I needed since it dealt with ways of analyzing throughput, response time, buffer size, efficiency, and so forth. Further, and importantly, it was a perfect mechanism for implementing dynamic resource sharing.
—CACM, “An Interview with Leonard Kleinrock (developer of the mathematical theory behind packet switching)”
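The throughput and response-time analysis Kleinrock describes rests on closed-form results for simple queues. A minimal sketch of the classic M/M/1 model (Poisson arrivals at rate λ, exponential service at rate μ; the rates below are made-up example numbers):

```python
# Classic M/M/1 queue metrics: arrivals at rate lam, service at rate mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                # server utilization
    L = rho / (1 - rho)          # mean number in system (Little's law: L = lam * W)
    W = 1 / (mu - lam)           # mean time in system
    Wq = rho / (mu - lam)        # mean time waiting in queue
    return {"utilization": rho, "avg_in_system": L,
            "avg_time_in_system": W, "avg_wait": Wq}

# e.g. 8 requests/sec arriving at a server that handles 10/sec:
print(mm1_metrics(8.0, 10.0))
# utilization 0.8, ~4 in system, 0.5 s in system, 0.4 s of that waiting
```

Note how nonlinear the waiting time is in utilization: the same server at 9.5 requests/sec (95% busy) would make W = 2 seconds, a fourfold jump, which is why capacity planners rarely target utilizations near 1.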

R

Ransomware:

Ransomware, which encrypts data so cybercriminals can extract a payment for its safe return, has become increasingly common—and costly. The origins of modern ransomware can be traced to September 2013. Then, a fairly rudimentary form of malware, CryptoLocker, introduced a new and disturbing threat: when a person clicked a malicious email link or opened an infected file, a Trojan Horse began encrypting all the files on a computer. Once the process was complete, crooks demanded a cryptocurrency payment, usually a few hundred dollars, to unlock the data. If the person didn’t pay in cybercurrency, the perpetrator deleted the private key needed to decrypt the data and it was lost permanently.
—CACM, “The Worsening State of Ransomware

Ransomware is a type of malware threat actors use to infect computers and encrypt computer files until a ransom is paid. After the initial infection, ransomware will attempt to spread to connected systems, including shared storage drives and other accessible computers.

If the threat actor’s ransom demands are not met (i.e., if the victim does not pay the ransom), the files or encrypted data will usually remain encrypted and unavailable to the victim. Even after a ransom has been paid to unlock encrypted files, threat actors will sometimes demand additional payments, delete a victim’s data, refuse to decrypt the data, or decline to provide a working decryption key to restore the victim’s access.
—CISA, “Security Tip (ST19-001) Protecting Against Ransomware

Ransomware Decryptors:

Ransomware Decryptors are decryption tools (programs or code files) that decrypt files that have been encrypted by ransomware. There are now many vendors providing these tools.

The “No More Ransom” project (an initiative by the National High Tech Crime Unit of the Netherlands’ police, Europol’s European Cybercrime Centre, Kaspersky and McAfee) is one such organization providing decryption tools. There are many others. Be careful…

Ransomware Q&A:

This is a reference to the No More Ransom project’s section on Ransomware Q&A. In it you will find information on: the history of ransomware, types of ransomware, whether you should pay the ransom if attacked, how a ransomware attack works, and why it is so hard to find a single solution against ransomware, along with answers to several other questions. It’s a good resource.

Real-Time Communication (RTC):

Real-Time Communication is a category of software protocols and communication hardware media which supports real-time computing.
—Wikipedia, “Real-time communication

In this time of pandemic, the world has turned to Internet-based, Real-Time Communication (RTC) as never before. The number of RTC products has, over the past decade, exploded in large part because of cheaper high-speed network access and more powerful devices, but also because of an open, royalty-free platform called WebRTC.
—CACM, “WebRTC: Real-Time Communication for the Open Web Platform

With WebRTC, you can add Real-Time Communication capabilities to your application that works on top of an open standard. It supports video, voice, and generic data to be sent between peers, allowing developers to build powerful voice- and video-communication solutions. The technology is available on all modern browsers as well as on native clients for all major platforms. The technologies behind WebRTC are implemented as an open web standard and available as regular JavaScript APIs in all major browsers. For native clients, like Android and iOS applications, a library is available that provides the same functionality. The WebRTC project is open-source and supported by Apple, Google, Microsoft and Mozilla, amongst others. This page is maintained by the Google WebRTC team.
—Google Developers, “Real-time communication for the web

Reputation Manipulation Services:

A number of “black-hat” Reputation Manipulation Services target popular online social networks. To conduct reputation manipulation, fraudsters purchase fake accounts in bulk from underground marketplaces, use infected accounts compromised by malware, or recruit users to join collusion networks, which collect OAuth access tokens from colluding members and abuse them to provide fake likes or comments to their members.

In these collusion networks, members like other members’ posts and in return receive likes on their own posts. Such collusion networks of significant size enable members to receive a large number of likes from other members, making them appear much more popular than they actually are. As expected, colluding accounts are hard to detect because they mix real and fake activity.
—CACM, “Measuring and Mitigating OAuth Access Token Abuse by Collusion Networks

Resource Public Key Infrastructure (RPKI):

Resource Public Key Infrastructure (RPKI), also known as Resource Certification, is a specialized public key infrastructure (PKI) framework to support improved security for the Internet’s BGP routing infrastructure.

RPKI provides a way to connect Internet number resource information (such as Autonomous System numbers and IP addresses) to a trust anchor. The certificate structure mirrors the way in which Internet number resources are distributed. That is, resources are initially distributed by the IANA to the regional Internet registries (RIRs), who in turn distribute them to local Internet registries (LIRs), who then distribute the resources to their customers. RPKI can be used by the legitimate holders of the resources to control the operation of Internet routing protocols to prevent route hijacking and other attacks. In particular, RPKI is used to secure the Border Gateway Protocol (BGP) through BGP Route Origin Validation (ROV), as well as Neighbor Discovery Protocol (ND) for IPv6 through the Secure Neighbor Discovery protocol (SEND).
—Wikipedia, “Resource Public Key Infrastructure

Resource Public Key Infrastructure (RPKI) is a distributed public database of cryptographically signed records containing routing information supplied by autonomous systems or networks, and is considered to be the ultimate “truth” for network information, according to Siddiqui. RPKI is carried out by a process known as route origin validation (ROV), which uses route origin authorizations (ROAs)—digitally signed objects that fix an IP address to a specific network or autonomous system—to establish the list of prefixes a network is authorized to announce.
—CACM, “Fixing the Internet

Routing Attacks:

Routing Attacks happen when an Autonomous System (AS) announces an incorrect path to a prefix, causing packets to traverse through and/or arrive at the attacker AS. We discuss the goals of the attacker from two perspectives: whom to affect and what to achieve. Routing Attacks occur in the wild and are getting increasingly prevalent and more sophisticated. The ability to divert targeted traffic via routing attacks is an emerging threat to Internet applications.
—CACM, “Securing Internet Applications from Routing Attacks

Route Origin Validation (ROV):

Route Origin Validation (ROV) is a process that uses route origin authorizations (ROAs)—digitally signed objects that fix an IP address to a specific network or autonomous system—to establish the list of prefixes a network is authorized to announce. It is an application of Resource Public Key Infrastructure (RPKI).
—CACM, “Fixing the Internet

[Route] Origin Validation is a mechanism by which route advertisements can be authenticated as originating from an expected autonomous system (AS). Origin validation uses one or more resource public key infrastructure (RPKI) cache servers to perform authentication for specified BGP prefixes.
—Juniper Networks, “BGP Origin Validation
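The ROV decision procedure described above is simple enough to sketch in a few lines. This toy validator follows the three RFC 6811 validation states (valid / invalid / not-found); the ROA and the announcements below are made-up documentation examples, not real registry data.

```python
import ipaddress

# A Route Origin Authorization binds a prefix (up to a max length)
# to the AS authorized to originate it. Example data only.
roas = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "max_len": 24, "asn": 64500},
]

def rov(prefix: str, origin_asn: int) -> str:
    """Return the validation state for a BGP announcement."""
    net = ipaddress.ip_network(prefix)
    covering = [r for r in roas
                if r["prefix"].version == net.version and net.subnet_of(r["prefix"])]
    if not covering:
        return "not-found"   # no ROA covers this prefix at all
    for r in covering:
        if r["asn"] == origin_asn and net.prefixlen <= r["max_len"]:
            return "valid"
    return "invalid"         # covered, but wrong origin AS or too-specific prefix

print(rov("192.0.2.0/24", 64500))    # valid
print(rov("192.0.2.0/25", 64500))    # invalid (more specific than max length)
print(rov("192.0.2.0/24", 64501))    # invalid (origin mismatch -- a hijack)
print(rov("198.51.100.0/24", 64500)) # not-found
```

Routers applying ROV typically drop “invalid” announcements and accept the rest; “not-found” is still accepted because most of the Internet’s prefixes are not yet covered by ROAs.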

S

Safety Critical Systems:

A Safety-Critical System (SCS) or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:

  • death or serious injury to people
  • loss or severe damage to equipment/property
  • environmental harm

Risks of this sort are usually managed with the methods and tools of safety engineering.
—Wikipedia, “Safety-critical system

Sandworm:

Sandworm, also known as Unit 74455, is allegedly a Russian cybermilitary unit of the GRU, the organization in charge of Russian military intelligence. Other names, given by cybersecurity researchers, include Telebots, Voodoo Bear, and Iron Viking.

The team is believed to be behind the December 2015 Ukraine power grid cyberattack, the 2017 cyberattacks on Ukraine using the Petya malware, various interference efforts in the 2017 French presidential election, and the cyberattack on the 2018 Winter Olympics opening ceremony. Then-United States Attorney for the Western District of Pennsylvania Scott Brady described the group’s cyber campaign as “representing the most destructive and costly cyber-attacks in history.”
—Wikipedia, “Sandworm (hacker group)

Security Deception Software:

CyberSecurity Deception Software tricks hackers into revealing the tactics they use to penetrate and control computer systems. Instead of blocking hackers, the software ingeniously invites hackers in, routes them to a decoy Web site or network, and then studies their behavior as they reveal their nefarious methods.

Approaches to the security deception method vary, but the principle behind them remains the same: enable a hacker to penetrate your network, then trick him or her into thinking they are working with your actual network or data when in fact they are really working with a dummy network or dummy data.

Often, security deception software creates emulations of the inner workings of entire networks or Web sites in an attempt to fool hackers.
—CACM, “Spoofing the Spoofers

Security Economics:

Security Economics or the Economics of Information Security addresses the economic aspects of privacy and computer security. Economics of information security includes models of the strictly rational “homo economicus” as well as behavioral economics. Economics of security addresses individual and organizational decisions and behaviors with respect to security and privacy as market decisions.

Economics of security addresses a core question: why do agents choose technical risks when there exist technical solutions to mitigate security and privacy risks? Economics not only addresses this question, but also informs design decisions in security engineering.
—Wikipedia, “Economics of security

Ross Anderson’s book Security Engineering: A Guide to Building Dependable Distributed Systems, 3rd Ed. has a whole chapter devoted to “Economics,” (as in Security Economics) with a subsection on “The economics of security and dependability.” It makes for worthwhile reading.

Security of Hardware Optimizations:

Meltdown fundamentally changes our perspective on the Security of Hardware Optimizations that change the state of microarchitectural elements. Meltdown and Spectre teach us that functional correctness is insufficient for security analysis and the micro-architecture cannot be ignored. They further open a new field of research to investigate the extent to which performance optimizations change the microarchitectural state, how this state can be lifted into an architectural state, and how such attacks can be prevented. Without requiring any software vulnerability and independent of the operating system, Meltdown enables an adversary to read sensitive data of other processes, containers, virtual machines, or the kernel.
—CACM, “Meltdown: Reading Kernel Memory from User Space

Security of Hardware Optimizations is simply the mindful consideration of the security implications of hardware optimizations. Hardware optimizations provide significant benefits, but those that alter microarchitectural state can have adverse side effects, such as the data leakage exploited by Meltdown-style attacks.

Site Reliability Engineering:

Site Reliability Engineering, or SRE, is a software-engineering specialization that focuses on the reliability and maintainability of large systems.
—CACM, “Metrics That Matter

Site Reliability Engineering (SRE) is a set of principles and practices that incorporates aspects of software engineering and applies them to infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems. Site reliability engineering is closely related to DevOps, a set of practices that combine software development and IT operations, and SRE has also been described as a specific implementation of DevOps. The field of site reliability engineering originated at Google with Ben Treynor Sloss, who founded a site reliability team after joining the company in 2003.
—Wikipedia, “Site reliability engineering

Slacktivism:

Slacktivism is “a term coined during the rise of the internet for the practice of publicly supporting a cause in ways that take little effort, often to make yourself look good. ‘That can diminish or even demean the seriousness of political discourse in a way that can kind of hinder our ability to solve big problems,’ says Carr.”

“Participating in online movements may not translate into offline engagement—some experts warn it could have the opposite effect. ‘On social media, you can get a burst of interest, sometimes a burst of activity, because it’s so easy to feel like you’ve participated just by clicking a link or retweeting something or using a hashtag,’ says Nicholas Carr, a sociology professor at Williams College. ‘What’s unclear is whether social media will help or hurt the ability of activists to sustain interests in a long-term campaign of change.'”

“People who engage in this performative activism are still spreading political messages, though, says William Golub, a junior at Stanford University who volunteered with the texting team on Joe Biden’s presidential campaign last year. ‘I think that there certainly are people who will just post about something on social media and that’s the end of the chain, but lots of those people are people who wouldn’t have done anything at all’ [if it weren’t for that engagement], he says.”
—MIT Technology Review, “How the next generation is reshaping political discourse

SMS (Short Message Service):

SMS (Short Message Service) is a text messaging service component of most telephone, Internet, and mobile device systems. It uses standardized communication protocols that let mobile devices exchange short text messages. An intermediary service can facilitate a text-to-voice conversion to be sent to landlines.

SMS technology originated from radio telegraphy in radio memo pagers that used standardized phone protocols. These were defined in 1986 as part of the Global System for Mobile Communications (GSM) series of standards. The first test SMS message was sent on December 3, 1992, when Neil Papworth, a test engineer for Sema Group, used a personal computer to send “Merry Christmas” to the phone of colleague Richard Jarvis. SMS rolled out commercially on many cellular networks that decade and became hugely popular worldwide as a method of text communication.
—Wikipedia, “SMS

Social Bots:

Social Bots are automated social media accounts mimicking humans. Social bots are actively used for both beneficial and nefarious purposes.

Social Bots have coexisted with humans since the early days of online social networks. Yet, we still lack a precise and well-agreed definition of what a social bot is. This is partly due to the multiple communities studying them and to the multifaceted and dynamic behavior of these entities, resulting in diverse definitions each focusing on different characteristics. Computer scientists and engineers tend to define bots from a technical perspective, focusing on features such as activity levels, complete or partial automation, use of algorithms and AI. The existence of accounts that are simultaneously driven by algorithms and by human intervention led to even more fine-grained definitions and cyborgs were introduced as either bot-assisted humans or human-assisted bots. Instead, social scientists are typically more interested in the social or political implications of the use of bots and define them accordingly.
—CACM, “A Decade of Social Bot Detection

A Social Bot (also: socialbot or socbot) or troll bot is an agent that communicates more or less autonomously on social media, often with the task of influencing the course of discussion and/or the opinions of its readers. It is related to chatbots but mostly only uses rather simple interactions or no reactivity at all. The messages (e.g. tweets) it distributes are mostly either very simple, or prefabricated (by humans), and it often operates in groups and various configurations of partial human control (hybrid). It usually targets advocating certain ideas, supporting campaigns, or aggregating other sources either by acting as a “follower” and/or gathering followers itself. In this very limited respect, social bots can be said to have passed the Turing test. If social media profiles are expected to be human, then social bots represent fake accounts. The automated creation and deployment of many social bots against a distributed system or community is one form of Sybil attack.
—Wikipedia, “Social bot

Social Credit:

The Social Credit system in China is a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.” The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”

The government hasn’t announced [as of Oct. 2016] exactly how the plan will work — for example, how scores will be compiled and different qualities weighted against one another. But the idea is that good behavior will be rewarded and bad behavior punished, with the Communist Party acting as the ultimate judge. This is what China calls “Internet Plus,” … Harnessing the power of big data and the ubiquity of smartphones, e-commerce and social media in a society where 700 million people live large parts of their lives online.
—Washington Post, “China’s plan to organize its society relies on ‘big data’ to rate everyone

The Social Credit System is a national credit rating and blacklist being developed by the government of the People’s Republic of China. The program initiated regional trials in 2009, before launching a national pilot with eight credit scoring firms in 2014. It was first introduced formally by then Chinese Premier, Wen Jiabao, on October 20, 2011. Managed by the National Development and Reform Commission (NDRC), the People’s Bank of China (PBOC), and Supreme People’s Court (SPC), the system was intended to standardize the credit rating function and perform financial and social assessment for businesses, government institutions, individuals, and non-government organizations.

The social credit initiative calls for the establishment of a unified record system so that businesses, individuals and government institutions can be tracked and evaluated for trustworthiness. There are multiple, different forms of the social credit system being experimented with, while the national regulatory method is based on blacklisting and whitelisting. The credit system is closely related to China’s mass surveillance systems such as Skynet, which incorporates facial recognition, big data analysis, and artificial intelligence.
—Wikipedia, “Social Credit System

Spectre Vulnerability:

Spectre is a class of security vulnerabilities that affects modern microprocessors that perform branch prediction and other forms of speculation. On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers. For example, if the pattern of memory accesses performed by such speculative execution depends on private data, the resulting state of the data cache constitutes a side channel through which an attacker may be able to extract information about the private data using a timing attack.
—Wikipedia, “Spectre (security vulnerability)

Meltdown was published simultaneously with the Spectre Attack, which exploits a different CPU performance feature, called speculative execution, to leak confidential information. Meltdown is distinct from Spectre in several ways, notably that Spectre requires tailoring to the victim process’s software environment but applies more broadly to CPUs and is not mitigated by KAISER. Since the publication of Meltdown and Spectre, several prominent follow-up works exploited out of order and speculative execution mechanisms to leak information across other security domains.
—CACM, “Meltdown: Reading Kernel Memory from User Space

Spyware:

Spyware (a portmanteau for spying software) is software with malicious behavior that aims to gather information about a person or organization and send it to another entity in a way that harms the user. For example, by violating their privacy or endangering their device’s security. This behavior may be present in malware as well as in legitimate software. Websites may engage in spyware behaviors like web tracking. Hardware devices may also be affected. Spyware is frequently associated with advertising and involves many of the same issues. Because these behaviors are so common, and can have non-harmful uses, providing a precise definition of spyware is a difficult task.
—Wikipedia, “Spyware

STEM & STEAM Education:

Science, Technology, Engineering, and Mathematics (STEM) is a broad term used to group together these academic disciplines. This term is typically used to address an education policy or curriculum choices in schools. It has implications for workforce development, national security concerns (as a shortage of STEM-educated citizens can reduce effectiveness in this area) and immigration policy. There is no universal agreement on which disciplines are included in STEM; in particular whether or not the science in STEM includes social sciences, such as psychology, sociology, economics, and political science.
—Wikipedia, “Science, technology, engineering, and mathematics

STEAM fields are the areas of science, technology, engineering, the arts, and mathematics. STEAM is designed to integrate STEM subjects with arts subjects into various relevant education disciplines. These programs aim to teach students innovation, to think critically and use engineering or technology in imaginative designs or creative approaches to real-world problems while building on students’ mathematics and science base. STEAM programs add art to STEM curriculum by drawing on reasoning and design principles, and encouraging creative solutions.
—Wikipedia, “STEAM fields

Stockpiling of Sensitive Data:

Stockpiling of Sensitive Data refers to adversaries and nation states pursuing a “harvest now, decrypt later” strategy: saving the encrypted data they gather in the present so that it can be decrypted once quantum computing based technologies become available.

Stockpiling of Vulnerabilities:

Stockpiling of Vulnerabilities refers to adversaries and nation states hoarding undisclosed vulnerabilities to use to their advantage. The problem is that those vulnerabilities remain beyond the knowledge of those working to make the Internet safer, thereby jeopardizing at-large security. The flip-side is that national security entities use such vulnerabilities in furtherance of their objectives; they cannot make use of such vulnerabilities once they become public and remediated. It’s a trade-off between national security objectives and the at-large security of the Internet.

Surveillance:

Surveillance is the monitoring of behavior, activities, or information for the purpose of information gathering, influencing, managing or directing. This can include observation from a distance by means of electronic equipment, such as closed-circuit television (CCTV), or interception of electronically transmitted information, such as Internet traffic. It can also include simple technical methods, such as human intelligence gathering and postal interception.

Surveillance is used by governments for intelligence gathering, prevention of crime, the protection of a process, person, group or object, or the investigation of crime. It is also used by criminal organizations to plan and commit crimes, and by businesses to gather intelligence on their competitors, suppliers or customers. Religious organizations charged with detecting heresy and heterodoxy may also carry out surveillance. Auditors carry out a form of surveillance.

Surveillance can be used by governments to unjustifiably violate people’s privacy and is often criticized by civil liberties activists. Liberal democracies may have laws that seek to restrict governmental and private use of surveillance, whereas authoritarian governments seldom have any domestic restrictions.
—Wikipedia, “Surveillance

Surveillance – Electronic:

Electronic Surveillance is defined in federal law as the non-consensual acquisition by an electronic, mechanical, or other surveillance device of the contents of any wire or electronic communication, under circumstances in which a party to the communication has a reasonable expectation of privacy. The “contents” of a communication consists of any information concerning the identity of the parties, or the existence, substance, purport, or meaning of the communication.

Examples of electronic surveillance include: wiretapping, bugging, videotaping; geolocation tracking such as via RFID, GPS, or cell-site data; data mining, social media mapping, and the monitoring of data and traffic on the Internet. Such surveillance tracks communications that fall into two general categories: wire and electronic communications. “Wire communications” involve the transfer of the contents from one point to another via a wire, cable, or similar device. “Electronic communications” refer to the transfer of information, data, sounds, or other contents via electronic means, such as email, VoIP, or uploading to the cloud.
—Cornell Law School, Legal Information Institute, “Electronic Surveillance

Surveillance – Mass:

Mass Surveillance is the intricate surveillance of an entire or a substantial fraction of a population in order to monitor that group of citizens. The surveillance is often carried out by local and federal governments or governmental organizations, such as the NSA and the FBI, but it may also be carried out by corporations (either on behalf of governments or at their own initiative). Depending on each nation’s laws and judicial systems, the legality of and the permission required to engage in mass surveillance varies. It is the single most indicative distinguishing trait of totalitarian regimes. It is also often distinguished from targeted surveillance.
—Wikipedia, “Mass Surveillance

Surveillance Capitalism:

Surveillance Capitalism is an economic system centered around the commodification of personal data with the core purpose of profit-making. Since personal data can be commodified it has become one of the most valuable resources on earth. The concept of surveillance capitalism, as described by Shoshana Zuboff, arose as advertising companies, led by Google’s AdWords, saw the possibilities of using personal data to target consumers more precisely.

Increased data collection may have various advantages for individuals and society such as self-optimization (Quantified Self), societal optimizations (such as by smart cities) and optimized services (including various web applications). However, collecting and processing data in the context of capitalism’s core profit-making motive might present a danger to human liberty, autonomy and wellbeing. Capitalism has become focused on expanding the proportion of social life that is open to data collection and data processing. This may come with significant implications for vulnerability and control of society as well as for privacy.
—Wikipedia, “Surveillance capitalism

The term as used in Internet Salmagundi is as coined by and described by Shoshana Zuboff in her book Surveillance Capitalism. (The Wikipedia definition is a handy encapsulation of Zuboff’s book for my purposes here.)

Zuboff defines “surveillance capitalism as the unilateral claiming of private human experience as free raw material for translation into behavioral data. These data are then computed and packaged as prediction products and sold into behavioral futures markets — business customers with a commercial interest in knowing what we will do now, soon, and later. It was Google that first learned how to capture surplus behavioral data, more than what they needed for services, and used it to compute prediction products that they could sell to their business customers, in this case advertisers. But I argue that surveillance capitalism is no more restricted to that initial context than, for example, mass production was restricted to the fabrication of Model T’s.”
—Harvard Gazette, “High tech is watching you

Surveillance for Hire:

Surveillance for Hire involves contracting surveillance services out to third parties unrelated to the organization doing the surveillance.

According to Facebook (Meta): “The global surveillance-for-hire industry targets people across the internet to collect intelligence, manipulate them into revealing information and compromise their devices and accounts. These companies are part of a sprawling industry that provides intrusive software tools and surveillance services indiscriminately to any customer — regardless of who they target or the human rights abuses they might enable. This industry “democratizes” these threats, making them available to government and non-government groups that otherwise wouldn’t have these capabilities.”

It is somewhat related to but not the same as Hacking for Hire.

System Safety Engineering:

Safety Engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.
—Wikipedia, “Safety engineering

One of the benefits of using a System Safety Engineering process is simply that someone becomes responsible for ensuring that particular hazardous behaviors are eliminated if possible or their likelihood reduced and their effects mitigated in the design. Almost all attention during development is normally focused on what the system and software are supposed to do. System safety and software system safety engineers are responsible for ensuring that adequate attention is also paid to what the system and software are not supposed to do as well as verifying that hazardous behavior will not occur. It is this unique focus that has made the difference in systems where safety engineering successfully identified problems not found by the other engineering processes.
—ScienceDirect, “System Safety Engineering

T

Targeted Surveillance Industry:

Targeted Surveillance (or targeted interception) is a form of surveillance, such as wiretapping, that is directed towards specific persons of interest, and is distinguishable from mass surveillance (or bulk interception). Both untargeted and targeted surveillance are routinely accused of treating innocent people as suspects in ways that are unfair, of violating human rights, international treaties and conventions as well as national laws, and of failing to pursue security effectively.
—Wikipedia, “Targeted surveillance

“Targeted Surveillance Industry” refers to an industry formed by “surveillance-for-hire companies [who have] quietly used Facebook, Instagram, and WhatsApp as springboards to target people in more than 100 countries.” “The findings [of the Meta report] underscore the breadth of the targeted surveillance industry and the massive scope of targeting it enables worldwide.”
—WIRED, “Meta Removes 7 Surveillance-for-Hire Operations From Its Platforms

See also Surveillance for Hire.

Technology Addiction / Excessive Use:

Technology Addiction is hotly debated and is sometimes referred to as Excessive Use or Internet Addiction. All get at similar concepts.

Technology Addiction refers to the uncontrollable urge or impulse to continue using technology to the point that it starts to interfere with the individual’s mental, physical, and social life. This can be in forms of social media, internet surfing, video games, online gambling, and other related acts. It is also called internet addiction, internet use disorder (IUD), and internet addiction disorder (IAD).
—Addiction Resource, “Technology Addiction: Signs, Risk Groups, And Treatment Options

There is considerable controversy with respect to so-called Internet Addiction and whether it ought to be reified as a diagnosis in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. The relationship between “addiction” and various compulsive or impulsive behaviors is also a source of confusion. Some psychiatrists have argued that internet addiction shows the features of excessive use, withdrawal phenomena, tolerance, and negative repercussions that characterize many substance use disorders; however, there are few physiological data bearing on these claims. It is not clear whether internet addiction usually represents a manifestation of an underlying disorder, or is truly a discrete disease entity. The frequent appearance of internet addiction in the context of numerous comorbid conditions raises complex questions of causality.
—NIH, PubMed Central, Psychiatry, “Should DSM-V Designate “Internet Addiction” a Mental Disorder?

Telegram:

Telegram is a freeware, cross-platform, cloud-based instant messaging (IM) service. The service also provides end-to-end encrypted video calling, VoIP, file sharing and several other features. It was launched for iOS on 14 August 2013 and Android in October 2013. The servers of Telegram are distributed worldwide to decrease frequent data load with five data centers in different regions, while the operational center is based in Dubai in the United Arab Emirates. Various client apps are available for desktop and mobile platforms including official apps for Android, iOS, Windows, macOS and Linux (although registration requires an iOS or Android device and a working phone number). There are also two official Telegram web twin apps, WebK and WebZ, and numerous unofficial clients that make use of Telegram’s protocol. All of Telegram’s official components are open source, with the exception of the server which is closed-sourced and proprietary.

Telegram provides end-to-end encrypted voice and video calls and optional end-to-end encrypted “secret” chats. Cloud chats and groups are encrypted between the app and the server, so that ISPs and other third-parties on the network can’t access data, but the Telegram server can. Users can send text and voice messages, make voice and video calls, and share an unlimited number of images, documents (2 GB per file), user locations, animated stickers, contacts, and audio files. In January 2021, Telegram surpassed 500 million monthly active users. It was the most downloaded app worldwide in January 2021 with 1 billion downloads globally as of late August 2021.
—Wikipedia, “Telegram (software)

TikTok

TikTok, known in China as Douyin, is a video-focused social networking service owned by Chinese company ByteDance Ltd. It hosts a variety of short-form user videos, from genres like pranks, stunts, tricks, jokes, dance, and entertainment with durations from 15 seconds to three minutes. TikTok is an international version of Douyin, which was originally released in the Chinese market in September 2016.
—Wikipedia, “TikTok

The TikTok videos from around Kursk—all of which have had their location verified by the CIR—provide a snapshot of how powerful open source intelligence, also known as OSINT, has become. The videos contribute to media reports and policy discussions. They can be low quality and poorly framed, but they show exactly what is happening at a specific moment in time.

However, there are risks. Those sharing footage from Russia and Ukraine—including open source investigators, journalists, and people on social media—could wind up amplifying incorrect information if it has not first been verified. “We will have to be careful consumers of information—suspicious to the possibility of active measures designed to fool us,” Sandra Joyce, an executive vice president and head of global intelligence at security firm Mandiant, wrote in a blog post.
—WIRED, “If Russia Invades Ukraine, TikTok Will See It Up Close
[This is an example of a novel and beneficial use of social media.]

Time Series Forecasting:

Time Series Forecasting is the process of analyzing time series data using statistics and modeling to make predictions and inform strategic decision-making. It’s not always an exact prediction, and likelihood of forecasts can vary wildly—especially when dealing with the commonly fluctuating variables in time series data as well as factors outside our control. However, forecasting provides insight about which outcomes are more likely—or less likely—to occur than other potential outcomes. Often, the more comprehensive the data we have, the more accurate the forecasts can be. While forecasting and “prediction” generally mean the same thing, there is a notable distinction. In some industries, forecasting might refer to data at a specific future point in time, while prediction refers to future data in general.
—Tableau, “Time Series Forecasting: Definition, Applications, and Examples

When we associate a temporal or time component to [a] forecast, it becomes Time Series Forecasting and the data is called Time Series Data. In statistical terms, time series forecasting is the process of analyzing the time series data using statistics and modeling to make predictions and informed strategic decisions. It falls under Quantitative Forecasting.

Examples of Time Series Forecasting include the weather forecast for the next week, forecasting the closing price of a stock each day, etc.

To make close-to-accurate forecasts, we need to collect the time series data over a period, analyse the data and then build a model which will help us make the forecast. But for this process there are certain rules to be followed which help us achieve close-to-accurate results.
—Analytics Vidhya, “Time Series Forecasting — A Complete Guide
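To make the idea concrete, here is a minimal sketch of quantitative forecasting: a naive moving-average model that predicts the next point as the mean of the last few observations. The price data and window size are hypothetical; real forecasting would typically use models such as ARIMA or exponential smoothing.

```python
# Naive moving-average forecast: predict the next value in a time
# series as the mean of the last `window` observations. This is the
# simplest quantitative forecasting model, shown for illustration only.

def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    recent = series[-window:]
    return sum(recent) / window

# Hypothetical daily closing prices of a stock.
prices = [101.0, 102.5, 101.8, 103.2, 104.0]
print(moving_average_forecast(prices))  # mean of the last 3 points: 103.0
```

Note how the forecast responds only to recent history; choosing the window size is one of the "rules" a practitioner must tune against the data.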

Tor Network (The Onion Router):

Tor Network (The Onion Router) allows users to browse anonymously.

Tor is the most widely used anonymity system. It carries terabytes of traffic every day and serves millions of users. However, network-level adversaries can deanonymize Tor users by launching routing attacks to observe user traffic and subsequently performing correlation analysis. Furthermore, the attacks have broad applicability to low-latency anonymous communication systems beyond Tor (for example, I2P anonymous network or even VPNs).

How Tor works. To prevent an adversary from associating a client with a destination server, Tor encrypts the network traffic and sends it through a sequence of relays (proxies) before going to the destination. The client selects three relays (entry, middle, exit), and constructs a circuit through them with layered encryption by repeatedly encrypting the next hop with the keys of the current hops. Each relay only learns the previous and next hops, and no relay or local network observer can identify both the source and destination.

However, Tor is known to be vulnerable to network-level adversaries who can observe traffic at both ends of the communication, that is, between client and entry, and between exit and server. By default, Tor does not obfuscate packet timings, so the traffic entering and leaving Tor are highly correlated. An adversary on the path at both ends can then perform traffic correlation analysis on the packet traces to deanonymize the clients.

Routing attacks on anonymity systems. Traditional attacks from network-level adversaries focus on passive adversaries who are already on the paths to observe Tor traffic. However, adversaries can exploit active routing attacks to strategically intercept Tor traffic, enabling on-demand and targeted attacks.
—CACM, “Securing Internet Applications from Routing Attacks
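The layered ("onion") encryption described above can be sketched in miniature. This toy model uses an XOR keystream in place of Tor's real per-circuit AES-CTR cryptography, and the relay names are hypothetical; it illustrates only the layering, in which the client wraps the payload once per relay and each relay peels exactly one layer.

```python
# Toy model of Tor's layered encryption. The client wraps the payload
# in one encryption layer per relay (exit innermost, entry outermost);
# each relay removes one layer and learns only the next hop. Real Tor
# uses AES-CTR with keys negotiated per circuit; the XOR keystream
# here is purely didactic.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from a key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Apply (or remove) one encryption layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

relay_keys = [b"entry", b"middle", b"exit"]  # hypothetical circuit keys
payload = b"GET / HTTP/1.1"

# Client: wrap the exit layer first so the entry layer is outermost.
cell = payload
for key in reversed(relay_keys):
    cell = xor_layer(key, cell)

# Relays: each peels one layer in path order; only the exit sees plaintext.
for key in relay_keys:
    cell = xor_layer(key, cell)

assert cell == payload  # all three layers removed
```

Note also what the sketch does not hide: packet timing. As the excerpt explains, an adversary watching both the entry and exit links can correlate traffic even though every layer is encrypted.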

Tracking Cookies:

Tracking Cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals’ browsing histories.

HTTP Cookies (also called web cookies, Internet cookies, browser cookies, or simply cookies) are small blocks of data created by a web server while a user is browsing a website and placed on the user’s computer or other device by the user’s web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user’s device during a session.

Cookies serve useful and sometimes essential functions on the web. They enable web servers to store stateful information (such as items added in the shopping cart in an online store) on the user’s device or to track the user’s browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to save for subsequent use information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.

Authentication cookies are commonly used by web servers to authenticate that a user is logged in, and with which account they are logged in. Without the cookie, users would need to authenticate themselves by logging in on each page containing sensitive information that they wish to access. The security of an authentication cookie generally depends on the security of the issuing website and the user’s web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie’s data to be read by an attacker, used to gain access to user data, or used to gain access (with the user’s credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).
—Wikipedia, “HTTP cookie

First-Party Cookies are placed [on your computer] by the website you are visiting and are generally useful – they remember if you’re logged in or not, for example.

Third-Party Cookies are added to your device by other parties the website you’re visiting has made agreements with. Third-party cookies, which can be placed in ads, can track you as you move around the web. They build a profile of you by gathering data on your browsing history and linking it to an identifier that’s attached to your name.

This highly-personalised, intrusive approach is then used to show you targeted adverts – that’s why that pair of jeans you looked at last week are now stalking you around the web, or why a shop can know you’re pregnant before you’ve told your family. But tracking people using third-party cookies has been going out of fashion for years: both Firefox and Safari have introduced blockers to actively stop them from working.
—WIRED, “Google’s cookie ban and FLoC, explained
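As a rough illustration of the mechanics, Python's standard-library http.cookies module can model both sides of the exchange. The session_id cookie name and its values are hypothetical.

```python
# Sketch of the cookie round trip: a server emits a Set-Cookie header,
# the browser stores the cookie and sends it back on later requests.
from http.cookies import SimpleCookie

# Server side: create a first-party session cookie with common flags.
server = SimpleCookie()
server["session_id"] = "abc123"
server["session_id"]["httponly"] = True   # not readable by page scripts
server["session_id"]["secure"] = True     # only sent over HTTPS
print(server.output())  # e.g. Set-Cookie: session_id=abc123; HttpOnly; Secure

# Browser side: parse the Cookie header sent back on the next request.
browser = SimpleCookie("session_id=abc123; theme=dark")
assert browser["session_id"].value == "abc123"
```

A third-party tracking cookie works by the same mechanism; the difference is simply that the Set-Cookie header comes from a domain other than the site you are visiting, typically via an embedded ad or script.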

Trusted Computing Base:

The Trusted Computing Base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the security policy.

The careful design and implementation of a system’s trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.
—Wikipedia, “Trusted computing base

Trusted Computing Base (TCB) is the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination responsible for enforcing a security policy.
—National Institute of Standards and Technology, Computer Security Resource Center, “trusted computing base (TCB)

Trusted Internet Connections:

Trusted Internet Connections – The purpose of the TIC initiative is to enhance network security across the U.S. federal government. This objective was initially realized by consolidating external connections and routing all network traffic through approved devices at TIC access points. In the intervening years, cloud computing became well established, paving the way for modern security architectures and a shift away from the primary focus on perimeter security. Accordingly, the TIC initiative evolved to provide federal agencies with increased flexibility to use modern security capabilities.
—Microsoft, Azure Government Compliance, “Trusted Internet Connections guidance

Trusted Internet Connections (TIC) is a federal cybersecurity initiative intended to enhance network and data security across the Federal Government. The Office of Management and Budget (OMB), the Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA), and the General Services Administration (GSA) oversee the TIC initiative through a robust program that sets guidance and an execution framework for agencies to implement a baseline boundary security standard. Originally established in 2008, the initial versions of the TIC Initiative sought to consolidate federal networks and standardize perimeter security for the federal enterprise.

On September 12, 2019, OMB released M-19-26 with the goal of “Updating the Trusted Internet Connections (TIC Initiative)”. M-19-26 provided an enhanced approach for implementing the TIC initiative that provided agencies with increased flexibility to leverage modern security capabilities, while also establishing a process for ensuring the TIC Initiative is agile and responsive to advancements in technology and rapidly evolving threats. CISA branded this evolution of the program as TIC 3.0 and has since developed Core Program and Use Case Guidance to support and navigate the program’s paradigm shift. The TIC 3.0 program updates have modernized and expanded the original version of the initiative to drive security capabilities to better leverage advances in technology as agencies decentralize their network perimeters or system boundaries to better support the remote workforce and the continued adoption of cloud service provider environments.
—U.S. General Services Administration, “Trusted Internet Connections (TIC)

Trusted Platform Module:

Trusted Platform Module (TPM, also known as ISO/IEC 11889) is an international standard for a secure cryptoprocessor, a dedicated microcontroller designed to secure hardware through integrated cryptographic keys. The term can also refer to a chip conforming to the standard.

TPM is used for digital rights management (DRM), Windows Defender, Windows Domain logon, protection and enforcement of software licenses, and prevention of cheating in online games.

One of Windows 11’s system requirements is TPM 2.0. Microsoft has stated that this is to help increase security against firmware and ransomware attacks.
—Wikipedia, “Trusted Platform Module

Trusted Platform Module (TPM) technology is designed to provide hardware-based, security-related functions. A TPM chip is a secure crypto-processor that is designed to carry out cryptographic operations. The chip includes multiple physical security mechanisms to make it tamper-resistant, and malicious software is unable to tamper with the security functions of the TPM. Some of the key advantages of using TPM technology are that you can:

  • Generate, store, and limit the use of cryptographic keys.
  • Use TPM technology for platform device authentication by using the TPM’s unique RSA key, which is burned into it.
  • Help ensure platform integrity by taking and storing security measurements.

The most common TPM functions are used for system integrity measurements and for key creation and use. During the boot process of a system, the boot code that is loaded (including firmware and the operating system components) can be measured and recorded in the TPM. The integrity measurements can be used as evidence for how a system started and to make sure that a TPM-based key was used only when the correct software was used to boot the system.

Different versions of the TPM are defined in specifications by the Trusted Computing Group (TCG). For more information, consult the TCG Web site.
—Microsoft, Windows, Security, “Trusted Platform Module Technology Overview
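The system integrity measurements described above rely on the TPM's "extend" operation on its Platform Configuration Registers (PCRs). The operation itself can be sketched as follows; the boot-stage names are hypothetical, and a real TPM performs this in tamper-resistant hardware rather than software.

```python
# Sketch of the TPM PCR "extend" operation behind measured boot. A PCR
# can only be extended, never overwritten:
#     PCR_new = H(PCR_old || H(measured_component))
# Replaying identical measurements in identical order always yields the
# same final value, so any changed boot component changes the PCR.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Extend a PCR with the SHA-256 measurement of a component."""
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

# Hypothetical boot chain: each stage measures the next before running it.
pcr = bytes(32)  # PCRs start zeroed at platform reset
for stage in [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]:
    pcr = extend(pcr, stage)
good_value = pcr

# A tampered bootloader produces a different final PCR value, so a
# TPM-sealed key bound to good_value would not be released.
pcr = bytes(32)
for stage in [b"firmware-v1", b"bootloader-evil", b"kernel-v1"]:
    pcr = extend(pcr, stage)
assert pcr != good_value
```

This is why, as the excerpt notes, a TPM-based key can be made usable "only when the correct software was used to boot the system": the key is sealed to an expected PCR value.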

Trustworthy Behavior:

Trustworthy Behavior – In this context it refers to efforts to promote and/or incentivize trustworthy behavior.

To trust an individual is to assert you believe you can predict how that individual will behave in various contexts. Of course, people surprise even themselves when left alone and confronted with extraordinary circumstances. Mature security organizations recognize this fact and know that collaborative efforts with shared goals are likely to produce better results than imposing controls on creative individuals (who might simply be motivated to show how those controls can be defeated).

Staff inevitably must respond to competing demands—productivity and efficiency on the one hand versus diligence, care, and security on the other. When checks or safeguards are put in place, especially those seen as an impediment to efficiency or productivity, leadership should expect creative and amusing workarounds. One example is the use of a single password for all accounts, which is easy to generate but increases the damage from succumbing to a phishing attack.
—CACM, “Implementing Insider Defenses

Trustworthy Systems:

Trustworthiness: The degree to which an information system (including the information technology components that are used to build the system) can be expected to preserve the confidentiality, integrity, and availability of the information being processed, stored, or transmitted by the system across the full range of threats. A Trustworthy Information System is a system that is believed to be capable of operating within defined levels of risk despite the environmental disruptions, human errors, structural failures, and purposeful attacks that are expected to occur in its environment of operation.
—National Institute of Standards and Technology, Computer Security Resource Center, “trustworthiness

To increase users’ trust in the systems they use, there is a need to develop Trustworthy Systems. These systems must meet the needs of the system’s stakeholders with respect to security, privacy, reliability, and business integrity. The first major step in achieving trustworthiness is to properly and faithfully capture the stakeholders’ requirements. A requirement is something that the system must satisfy or a quality that the system must possess. A requirement is normally elicited from the system stakeholders, including its users, developers, and owners. Requirements should be specified before attempting to construct the system. If the correct requirements are not captured properly and faithfully, the correct system cannot be built.
—Encyclopedia of Information Science and Technology, “Modeling Security Requirements for Trustworthy Systems

Engineering Trustworthy Systems contains 223 principles organized into 25 chapters. This article will address 10 of the most fundamental principles that span several important categories and will offer rationale and some guidance on application of those principles to design.
—CACM, “Engineering Trustworthy Systems: A Principled Approach to Cybersecurity

Twitter:

Twitter is an American microblogging and social networking service on which users post and interact with messages known as “tweets”. Registered users can post, like, and retweet tweets, but unregistered users can only read those that are publicly available. Users interact with Twitter through browser or mobile frontend software, or programmatically via its APIs. Prior to April 2020, services were accessible via SMS. The service is provided by Twitter, Inc., a corporation based in San Francisco, California, and has more than 25 offices around the world. Tweets were originally restricted to 140 characters, but the limit was doubled to 280 for non-CJK languages in November 2017. Audio and video tweets remain limited to 140 seconds for most accounts.

Twitter was created by Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams in March 2006 and launched in July of that year. By 2012, more than 100 million users posted 340 million tweets a day, and the service handled an average of 1.6 billion search queries per day. In 2013, it was one of the ten most-visited websites and has been described as “the SMS of the Internet”. As of Q1 2019, Twitter had more than 330 million monthly active users. In practice, the vast majority of tweets are written by a minority of users.
—Wikipedia, “Twitter

U

Ukraine

Ukraine is a country in Eastern Europe. It is the second-largest country by area in Europe after Russia, which it borders to the east and north-east. Ukraine also shares borders with Belarus to the north; Poland, Slovakia, and Hungary to the west; Romania and Moldova to the south; and has a coastline along the Sea of Azov and the Black Sea. It spans an area of 603,628 km2 (233,062 sq mi), with a population of 43.6 million, and is the eighth-most populous country in Europe. The nation’s capital and largest city is Kyiv.

The territory of modern Ukraine has been inhabited since 32,000 BC. Following its independence, Ukraine declared itself a neutral state; it formed a limited military partnership with Russia and other CIS countries while also establishing a partnership with NATO in 1994. In 2013, after the government of President Viktor Yanukovych had decided to suspend the Ukraine–European Union Association Agreement and seek closer economic ties with Russia, a several-months-long wave of demonstrations and protests known as the Euromaidan began, which later escalated into the Revolution of Dignity that led to the overthrow of Yanukovych and the establishment of a new government. These events formed the background for the annexation of Crimea by Russia in March 2014 and the War in Donbas, a protracted conflict with Russian-backed separatists, from April 2014 until the Russian invasion in February 2022.

Ukraine is a developing country ranking 74th in the Human Development Index. It suffers from a high poverty rate as well as severe corruption. However, because of its extensive fertile farmlands, Ukraine is one of the largest grain exporters in the world. Ukraine is a unitary republic under a semi-presidential system with separation of powers into legislative, executive, and judicial branches. The country is a member of the United Nations, the Council of Europe, the OSCE, the GUAM organization, the Association Trio, and the Lublin Triangle.
—Wikipedia, “Ukraine

Urban Technology:

While the term Urban Technology seems fairly simple—use and implementation of technology in urban environments—it touches on deeper issues as highlighted in the following excerpt:

Urban Technology projects have long sought to manage the city—to organize its ambiguities, mitigate its uncertainties, and predict or direct its growth and decline.

The hype is based partly on a belief that technology will deliver unprecedented value to urban areas. The opportunity seems so vast that at times our ability to measure, assess, and make decisions about it almost feels inadequate. The message to cities is: You don’t know what you’re dealing with, but you don’t want to get left behind.

After a decade of pilot projects and flashy demonstrations, though, it’s still not clear whether smart city technologies can actually solve or even mitigate the challenges cities face. A lot of progress on our most pressing urban issues—such as broadband access, affordable housing, or public transport—could come from better policies and more funding. These problems don’t necessarily require new technology.

What is clear is that technology companies are increasingly taking on administrative and infrastructure responsibilities that governments have long fulfilled. If smart cities are to avoid exacerbating urban inequalities, we must understand where these projects will create new opportunities and problems, and who may lose out as a result. And that starts by taking a hard look at how cities have fared so far.
—MIT Technology Review, “What cities need now

V

Virtual Black Box (VBB) Obfuscation:

If achievable, Virtual Black Box (VBB) Obfuscation would prevent a program from leaking any information other than the data it delivers from its outputs. Unfortunately, a seminal paper published in 2001 showed that it is impossible to guarantee VBB obfuscation for every possible type of program.

In the same paper, though, the authors showed that a weaker form they called iO [indistinguishability obfuscation] was feasible. While iO does not promise to hide all the details of a logic circuit, as long as they are scrambled using iO, different circuits that perform the same function will leak the same information as each other; an attacker would not be able to tell which implementation is being used to provide the results they obtain.
—CACM, “Better Security Through Obfuscation

In cryptography, Black-Box Obfuscation was a proposed cryptographic primitive which would allow a computer program to be obfuscated in a way such that it was impossible to determine anything about it except its input and output behavior. Black-box obfuscation has been proven to be impossible, even in principle.

Weaker variants: In their original paper exploring Black-Box Obfuscation, Barak et al. defined two weaker notions of cryptographic obfuscation which they did not rule out: indistinguishability obfuscation and extractability obfuscation (which they called “differing-inputs obfuscation”.)
—Wikipedia, “Black-box obfuscation

In other words, while Virtual Black Box (VBB) Obfuscation is the objective, it is unattainable and Indistinguishability Obfuscation (iO) is the best alternative at this point in time.
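The distinction iO relies on (different circuits, same function) can be shown in miniature. The two parity implementations below are hypothetical stand-ins for "circuits", and no actual obfuscation is performed; the point is only that functionally equivalent programs can differ in source, which is exactly the difference iO promises an attacker cannot detect.

```python
# Two syntactically different implementations of the same function.
# VBB obfuscation would demand each leak nothing beyond input/output
# behavior (proven impossible in general); iO promises only that the
# obfuscations of two equivalent circuits like these are mutually
# indistinguishable.

def parity_a(n: int) -> int:
    """Parity via arithmetic remainder."""
    return n % 2

def parity_b(n: int) -> int:
    """Parity via the low bit: a different 'circuit', same function."""
    return n & 1

# Functionally equivalent on every input checked...
assert all(parity_a(n) == parity_b(n) for n in range(1000))
# ...yet the source differs. Under iO, an adversary holding the
# obfuscated program could not tell which implementation was used.
```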

VKontakte:

VK (short for its original name VKontakte; Russian: ВКонтакте, meaning InContact) is a Russian online social media and social networking service based in Saint Petersburg. VK is available in multiple languages but it is predominantly used by Russian speakers. VK allows users to message each other publicly or privately; create groups, public pages, and events; share and tag images, audio, and video; and play browser-based games.

As of August 2018, VK had at least 500 million accounts. It is the most popular website in Russia. The network was also popular in Ukraine until it was banned by the Ukrainian government in 2017.

According to SimilarWeb, VK is the 16th most visited website in the world.
—Wikipedia, “VK (service)

VKontakte—Russian for “in touch.”
—WIRED, “How Telegram Became the Anti-Facebook” (See Section III of that article for a history of VKontakte.)

Vulnerabilities:

Vulnerability: A weakness in the computational logic (e.g., code) found in software and hardware components that, when exploited, results in a negative impact to confidentiality, integrity, or availability. Mitigation of the vulnerabilities in this context typically involves coding changes, but could also include specification changes or even specification deprecations (e.g., removal of affected protocols or functionality in their entirety).
—National Institute of Standards and Technology, National Vulnerability Database, “Vulnerabilities

In computer security, a Vulnerability is a weakness which can be exploited by a threat actor, such as an attacker, to cross privilege boundaries (i.e. perform unauthorized actions) within a computer system. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerabilities are also known as the attack surface.

A security risk is often incorrectly classified as a vulnerability. The use of vulnerability with the same meaning of risk can lead to confusion. The risk is the potential of a significant impact resulting from the exploit of a vulnerability. Then there are vulnerabilities without risk: for example when the affected asset has no value. A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability—a vulnerability for which an exploit exists. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled—see zero-day attack.
—Wikipedia, “Vulnerability (computing)
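The distinction between a weakness and an exploit can be made concrete with a toy example. The sketch below (the base directory, function names, and file paths are all hypothetical) shows a classic path-traversal weakness and the kind of coding change NIST describes as a typical mitigation:

```python
import posixpath  # POSIX-style paths for a deterministic illustration

BASE_DIR = "/srv/files"  # hypothetical document root

def resolve_vulnerable(user_path: str) -> str:
    # The weakness: user input is joined to the base directory with no
    # validation, so "../../etc/passwd" escapes the document root.
    return posixpath.normpath(posixpath.join(BASE_DIR, user_path))

def resolve_fixed(user_path: str) -> str:
    # Mitigation by a coding change: normalize first, then verify the
    # result still lies inside the document root.
    candidate = posixpath.normpath(posixpath.join(BASE_DIR, user_path))
    if candidate != BASE_DIR and not candidate.startswith(BASE_DIR + "/"):
        raise ValueError("path escapes document root")
    return candidate

# The exploit: one applicable technique that connects to the weakness.
assert resolve_vulnerable("../../etc/passwd") == "/etc/passwd"
# Legitimate requests still work against the fixed version.
assert resolve_fixed("reports/q3.txt") == "/srv/files/reports/q3.txt"
```

Until the fixed version is deployed, the window of vulnerability remains open; once it is, the same input raises `ValueError` instead of leaking a system file.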

Vulnerabilities Equities Process:

When state-backed hackers in Western nations find cybersecurity flaws, there are established methods for working out the potential costs and benefits of disclosing the security gap to the company that is affected. In the United States it’s called the “Vulnerabilities Equities Process.”
—MIT Technology Review, “Google’s top security teams unilaterally shut down a counterterrorism operation

The Vulnerabilities Equities Process (VEP) is a process used by the U.S. federal government to determine on a case-by-case basis how it should treat zero-day computer security vulnerabilities; whether to disclose them to the public to help improve general computer security, or to keep them secret for offensive use against the government’s adversaries.

The VEP was first developed during the period 2008–2009, but only became public in 2016, when the government released a redacted version of the VEP in response to a FOIA request by the Electronic Frontier Foundation. Following public pressure for greater transparency in the wake of the Shadow Brokers affair, the U.S. government made a more public disclosure of the VEP process in November 2017.
—Wikipedia, “Vulnerabilities Equities Process

W

Web Design:

Web Design encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; user interface design (UI design); authoring, including standardised code and proprietary software; user experience design (UX design); and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all. The term “web design” is normally used to describe the design process relating to the front-end (client side) design of a website including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and if their role involves creating markup then they are also expected to be up to date with web accessibility guidelines.
—Wikipedia, “Web design

In the context of Internet Salmagundi, I’m using Web Design in relation to specific elements of design such as graphics, usability, SEO and the design process, among other aspects, as opposed to Web Development, which is much broader in scope and includes the more technical elements.

Web Development:

Web Development is the work involved in developing a Web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web applications, electronic businesses, and social network services. A more comprehensive list of tasks to which Web development commonly refers, may include Web engineering, Web design, Web content development, client liaison, client-side/server-side scripting, Web server and network security configuration, and e-commerce development.
—Wikipedia, “Web development

In the context of Internet Salmagundi, I’m using Web Development in a global sense, as opposed to Web Design, which is specific to elements of design such as graphics, usability, SEO and the design process, among other aspects.

Web Science:

Web Science seeks to investigate, analyze, and intervene in the Web from a sociotechnical perspective, integrating our understanding of the mathematical properties, engineering principles, and the social processes that shape its past, present, and future. Over the past 10 years, Web Science has made remarkable progress, providing the building blocks to face the challenges described here. And yet there is more to do.

Web Science in Europe begins from the premise that the Web is both technical and social. From this perspective, it is so difficult to disentangle the social from the technical that we describe the Web as ‘sociotechnical.’ The Web has been built on layers of communication at different levels of abstraction, from physical link layers (such as Ethernet) over Internet and transport layers (such as TCP/IP). It started as a Web of Documents (HTML), which served as the nucleus that other Webs would piggyback on: a Web of Data (RDF, SPARQL), a Web of Services (REST, JSON), a Web of Things.

All these layers are defined by underlying technical standards and are the result of sophisticated engineering. And they are also deeply social, in two key ways. First, they have been developed in particular social contexts, with social goals in mind. … Second, the Web merely offered a set of opportunities for humans to develop and populate information constructs and link with each other.
—CACM, “Web Science in Europe: Beyond Boundaries

Wiper:

In computer security, a Wiper is a class of malware intended to erase (wipe) the hard drive of the computer it infects, maliciously deleting data and programs.

A wiping component was used as part of the malware employed by the Lazarus Group (a cybercrime group with alleged ties to North Korea) during the 2013 South Korea cyberattack and the 2014 Sony Pictures hack. The Sony hack also utilized RawDisk.

In 2017, computers in several countries—most prominently Ukraine, were infected by a variant of the Petya ransomware, which had been modified to effectively act as a wiper. The malware infects the master boot record with a payload that encrypts the internal file table of the NTFS file system. Although it still demanded a ransom, it was found that the code had been significantly modified so that the payload could not actually revert its changes, even if the ransom were successfully paid.

Several variants of wiper malware were discovered during the Russian invasion of Ukraine in early 2022 on computer systems associated with Ukraine. Named CaddyWiper, HermeticWiper and IsaacWiper by researchers, the programs showed little relation to each other, prompting speculation that they were created by different state-sponsored actors in Russia especially for this occasion.
—Wikipedia, “Wiper (malware)

WPA2 Enterprise:

WPA2 Enterprise is the suite of protocols for secure communication in enterprise wireless networks. … Whenever people connect to enterprise wireless networks [using WPA2 Enterprise] with devices that are not configured correctly [they run significant risks]. [These risks apply to both WPA2 and WPA2 Enterprise.]
—CACM, “Enterprise Wi-Fi: We Need Devices That Are Secure by Default

IEEE 802.11i-2004, or 802.11i for short, is an amendment to the original IEEE 802.11, implemented as Wi-Fi Protected Access II (WPA2). The draft standard was ratified on 24 June 2004. This standard specifies security mechanisms for wireless networks, replacing the short Authentication and privacy clause of the original standard with a detailed Security clause. In the process, the amendment deprecated broken Wired Equivalent Privacy (WEP), while it was later incorporated into the published IEEE 802.11-2007 standard.

802.11i supersedes the previous security specification, Wired Equivalent Privacy (WEP), which was shown to have security vulnerabilities. Wi-Fi Protected Access (WPA) had previously been introduced by the Wi-Fi Alliance as an intermediate solution to WEP insecurities. WPA implemented a subset of a draft of 802.11i. The Wi-Fi Alliance refers to their approved, interoperable implementation of the full 802.11i as WPA2, also called RSN (Robust Security Network). 802.11i makes use of the Advanced Encryption Standard (AES) block cipher, whereas WEP and WPA use the RC4 stream cipher.
—Wikipedia, “IEEE 802.11i-2004
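The RC4 stream cipher that WEP and WPA relied on is simple enough to sketch in a few lines of Python, which helps explain both its early popularity and why 802.11i moved to the AES block cipher once RC4-based designs were broken. A minimal, illustration-only implementation (never use RC4 in new designs):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Minimal RC4 for illustration only; 802.11i/WPA2 replaced it with AES."""
    # Key-scheduling algorithm (KSA): permute the state S under the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with keystream.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# As a stream cipher, encryption and decryption are the same XOR operation.
ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"
```

The fixed keystream-XOR structure is part of what made WEP fragile: keystream reuse and weak key scheduling gave attackers a foothold that no amount of patching around RC4 could close.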

X

Y

Z

Zero-Click / Interactionless Attacks:

In the Zero-Click scenario no user interaction is required. Meaning, the attacker doesn’t need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it’s a weapon against which there is no defense.
—Google Project Zero, “A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution

A Zero-Click attack is an exploit that requires no user interaction to operate – that is to say, no key-presses or mouse clicks. FORCEDENTRY, discovered in 2021, is an example of a zero-click attack.
—Wikipedia, “Exploit (computer security) – Zero-click

A Zero-Click / Interactionless Attack is one in which victims don’t need to click a link or grant a permission for the hack to move forward.

Zero-Day Exploits:

A Zero Day is both a previously undetected hole in security software and the code attackers use to take advantage of said hole.

Zero day actually refers to two things—a Zero-Day Vulnerability or a Zero-Day Exploit.

Zero-Day Vulnerability refers to a security hole in software—such as browser software or operating system software—that is yet unknown to the software maker or to antivirus vendors. This means the vulnerability is also not yet publicly known, though it may already be known by attackers who are quietly exploiting it. Because zero day vulnerabilities are unknown to software vendors and to antivirus firms, there is no patch available yet to fix the hole and generally no antivirus signatures to detect the exploit, though sometimes antivirus scanners can still detect a zero day using heuristics (behavior-tracking algorithms that spot suspicious or malicious behavior).

Zero-Day Exploit refers to code that attackers use to take advantage of a zero-day vulnerability. They use the exploit code to slip through the hole in the software and plant a virus, Trojan horse or other malware onto a computer or device. It’s similar to a thief slipping through a broken or unlocked window to get into a house.

Zero day vulnerabilities and exploit codes are extremely valuable and are used not only by criminal hackers but also by nation-state spies and cyber warriors, like those working for the NSA and the U.S. Cyber Command.
—WIRED, “Hacker Lexicon: What Is a Zero Day?