Schneier on Security, February 15, 1999
By Bruce Schneier
The problem with bad security is that it looks just like good security. You can’t tell the difference by looking at the finished product. Both make the same security claims; both have the same functionality. Both might even use the same algorithms: triple-DES, 1024-bit RSA, etc. Both might use the same protocols, implement the same standards, and have been endorsed by the same industry groups. Yet one is secure and the other is insecure.
The term we use for bad cryptography products is “snake oil,” which was the turn-of-the-century American term for quack medicine. It brings to mind traveling medicine shows, and hawkers selling their special magic elixir that would cure any ailment you could imagine.
Here I want to talk about some of the common snake-oil warning signs, and how you can pre-judge products from their advertising claims. These warning signs are not foolproof, but they’re pretty good.
[Some of these apply in general, not just to cryptography/security.]
Warning Sign #1: Pseudo-mathematical gobbledygook.
- Long noun chains don’t automatically imply security.
- The point here is that, like medicine, cryptography is a science. It has a body of knowledge, and researchers are constantly improving that body of knowledge: designing new security methods, breaking existing security methods, building theoretical foundations, etc. Someone who obviously does not speak the language of cryptography is not conversant with the literature, and is much less likely to have invented something good. It’s as if your doctor started talking about “energy waves and healing vibrations.” You’d worry.
Warning Sign #2: New mathematics.
- Beware cryptography based on new paradigms or new areas of mathematics: chaos theory, neural networks, coding theory, zeta functions. Cryptography is hard; the odds that someone without any experience in the field can revolutionize it are small.
Warning Sign #3: Proprietary cryptography.
- I promise not to start another tirade about the problems of proprietary cryptography. I just include it here as a warning sign. … Any company that won’t discuss its algorithms or protocols has something to hide. There’s no other possible reason. (And don’t let them tell you that it is patent-pending; as soon as they file the patent, they can discuss the technology. If they’re still working on the patent, tell them to come back after they can make their technology public.)
Warning Sign #4: Extreme cluelessness.
- Some companies make such weird claims that it’s obvious that they don’t understand the field.
Warning Sign #5: Ridiculous key lengths.
- Longer key lengths are better, but only up to a point. AES will have 128-bit, 192-bit, and 256-bit key lengths. This is far longer than needed for the foreseeable future. In fact, we cannot even imagine a world where 256-bit brute-force searches are possible. It would require some fundamental breakthroughs in physics and our understanding of the universe. For public-key cryptography, 2048-bit keys have the same sort of property; longer is meaningless. Think of this as a sub-example of Warning Sign #4: if the company doesn’t understand keys, do you really want them to design your security product?
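To see why a 256-bit brute-force search is out of reach, a back-of-the-envelope calculation helps. The guess rate and machine count below are deliberately generous round numbers I am assuming for the sketch, not figures from the essay:

```python
# Rough arithmetic: even with absurdly optimistic hardware, exhausting a
# 256-bit keyspace takes vastly longer than the age of the universe.

keyspace = 2 ** 256                  # number of possible 256-bit keys (~1.2e77)

guesses_per_second = 10 ** 18        # assumed: a quintillion guesses/sec per machine
machines = 10 ** 9                   # assumed: a billion such machines in parallel
seconds_per_year = 60 * 60 * 24 * 365

years_needed = keyspace // (guesses_per_second * machines * seconds_per_year)
print(f"{years_needed:.2e} years")   # on the order of 1e42 years
# The universe is roughly 1.4e10 years old, so this is ~1e32 universe-lifetimes.
```

Halving the time by doubling the hardware changes nothing meaningful here; the exponent is the whole story, which is why key lengths beyond a certain point buy no real security.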
Warning Sign #6: One-time pads.
- One-time pads don’t make sense for mass-market encryption products. They may work in pencil-and-paper spy scenarios, they may work on the U.S.-Russia teletype hotline, but they don’t work for you. Most companies that claim they have a one-time pad actually do not. They have something they think is a one-time pad. A true one-time pad is provably secure (against certain attacks), but is also unusable.
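A genuine one-time pad is trivial to implement; the code was never the hard part. A minimal sketch (mine, not from the essay) makes the impracticality obvious:

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR data with a pad of equal length. XOR is its own inverse,
    so the same function both encrypts and decrypts."""
    assert len(pad) == len(data), "the pad must be exactly as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
# The pad must be truly random, as long as the message, and used exactly once.
pad = secrets.token_bytes(len(message))

ciphertext = otp(message, pad)
recovered = otp(ciphertext, pad)
assert recovered == message
```

The catch is the pad itself: you must generate as much truly random key material as you have traffic, deliver it to the recipient securely, and never reuse a byte of it. If you could do that, you could have delivered the message the same way. Products that generate the "pad" with an ordinary pseudorandom generator, or reuse it, are stream ciphers at best, not one-time pads, and lose the provable security entirely.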
Warning Sign #7: Unsubstantiated claims.
- Jaws Technologies says this about its new encryption technology: “This scientifically acclaimed encryption product is the world’s strongest commercially available software of its kind.” Acclaimed by whom? The Web site doesn’t say. World’s strongest by what comparison? Nothing.
- Some companies claim “military-grade” security. This is a meaningless term. There’s no such standard. And at least in the U.S., military cryptography is not available for non-government purposes (although government contractors can get it for classified contracts).
- Other companies make claims about other algorithms that are “broken,” without giving details. Or that public-key cryptography is useless. Don’t believe any of this stuff. If the claim seems far-fetched, it probably is.
Warning Sign #8: Security proofs.
- There are two kinds of snake-oil proofs. The first are real mathematical proofs that don’t say anything about real security. The second are fake proofs.
- More subtle are actual provably secure systems. They do exist. … but mathematical proofs have little to do with actual product security.
Warning Sign #9: Cracking contests.
- Suffice it to say that cracking contests are no guarantee of security, and often mean that the designers don’t understand what it means to show that a product is secure.
Conclusion: Separating the Good from the Bad.
- These snake-oil warning signs are neither necessary nor sufficient criteria for separating the good cryptography from the snake oil. Just as there could be insecure products that don’t trigger any of these nine warning signs, there could be secure products that look very much like snake oil. But most people don’t have the time, patience, or expertise to perform the kind of analysis necessary to make an educated determination. In the absence of a Food-and-Drug-Administration-like body to regulate cryptography, the only thing a reasonable person can do is to use warning signs like these as guides.
- Further reading: The “Snake Oil” FAQ is an excellent source of information on questionable cryptographic products, and a good way to increase the sensitivity of your bullshit detector. Get your copy at: <http://www.interhack.net/people/cmcurtin/snake-oil-faq.html>.
Here is Matt Curtin’s list of warning signs (see his article for full descriptions):
- Trust Us, We Know What We’re Doing
- Secret Algorithms
- Revolutionary Breakthroughs
- Experienced Security Experts, Rave Reviews, and Other Useless Certificates
- Algorithm or product X is insecure
- Recoverable Keys
- Exportable from the USA
- Military Grade [or Bank Grade]
About the Author:
Bruce Schneier is a public-interest technologist, working at the intersection of security, technology, and people. He has been writing about security issues on his blog since 2004, and in his monthly newsletter since 1998. He is a fellow and lecturer at Harvard’s Kennedy School and a board member of EFF. This personal website expresses the opinions of neither of those organizations.