[Contents]

Copyright © 2014 jsd

Trust, Authorization, Badging, and/or Identification
John Denker

    

Ford:  “I know who you are.”
Spike:  “Yeah, I know who I am too, so what?”
     
       — Buffy the Vampire Slayer
            episode “Lie to Me”


1  Background

By way of background, here is a memo from the keen-grasp-of-the-obvious department:

1.
Trust is not necessarily transitive or transferable. If Alice trusts Bob, and Bob trusts Carol, it does not mean that Alice trusts – or should trust – Carol.

Some people say trust is «never» transitive or transferable, but I consider that an overstatement. Sometimes you might want to use one trust relationship to jump-start another ... and sometimes not. We can agree that this should not happen automatically or in every case.

2.
Trust is not even symmetric. If I trust you, that does not necessarily mean that you trust me.

3.
Trust is not one-dimensional. I might trust you with a certain set of things but not another.

This by itself suffices to guarantee that trust cannot be well-ordered (item 4), in accordance with the flower-pressing theorem (Brouwer, 1911).

4.
Trust is not well-ordered. In general, there cannot be any well-defined notion of “more trust” as opposed to “less trust” for reasons mentioned in item 3.

5.
Identification does not imply trust. There are some people I can identify exceedingly well, but do not trust at all.

6.
Conversely, trust does not imply identification. There are people I could not pick out of a lineup, and indeed have never met, but nevertheless trust with certain things.

7.
Identification does not imply authorization. Just because you know some identifying information about me does not prove that I authorized the transfer of $10,000 from my account at the credit union to your account in the Cayman Islands.
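The properties listed above can be made concrete with a tiny sketch. Here trust is modeled as a relation over (truster, trustee, scope) triples; all names and scopes are invented for illustration. Nothing in the model forces the relation to be transitive, symmetric, or totally ordered, which is exactly the point:

```python
# Trust as a bare relation: a set of (truster, trustee, scope) triples.
# The names and scopes below are illustrative, not from any real system.

trust = {
    ("Alice", "Bob",   "fix my car"),
    ("Bob",   "Carol", "fix my car"),
    ("Alice", "Bob",   "watch my kids"),
}

def trusts(a, b, scope):
    """a trusts b for a given scope only if that exact triple is present."""
    return (a, b, scope) in trust

# Not transitive: Alice->Bob and Bob->Carol do not imply Alice->Carol.
assert trusts("Alice", "Bob", "fix my car")
assert trusts("Bob", "Carol", "fix my car")
assert not trusts("Alice", "Carol", "fix my car")

# Not symmetric: Bob does not automatically trust Alice back.
assert not trusts("Bob", "Alice", "fix my car")

# Not one-dimensional: trust in one scope says nothing about another.
assert not trusts("Bob", "Carol", "watch my kids")
```

Because the scopes are incomparable, there is also no meaningful way to sort the triples into "more trust" versus "less trust".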

This leaves us with some major problems with PGP in general and the PGP Web-of-Trust in particular.

  1. The PGP WoT tries to quantify the “amount of trust” on a one-dimensional scale, which is impossible in principle. See item 3 and item 4 above.
  2. The usual instructions for signing somebody’s key thoroughly confuse the notions of identification and trust. See item 5 and item 6 above.
  3. PGP has mechanisms to provide a limited amount of transitivity, which is what “Web” means in “Web of Trust”. To the extent this happens automatically it is not trustworthy; see item 1. At the opposite extreme, the user-interface for dealing with this non-automatically is not very convenient.

In short: Trust is a complicated thing, and PGP as originally conceived does not have enough expressive power to say what needs to be said.

2  Signatures: Real-World Use Cases

In the real world, when I sign my name to a document, the body of the document spells out what my signature means in that context. In another context, my signature might mean something else entirely.

As a specific example, suppose I have a collection of PGP public keys that have been used by people who participate in the Cryptography mailing list. I have never met most of these participants in person. In fact the participant could be a sock puppet controlled by some unknown person. Indeed, for all I know, the participant could be the proverbial dog typing at the keyboard.

On the other hand, I do know something about these keys: I know they belong to established participants. I would like PGP to keep track of this fact. When I see the key again, I would like to know I am hearing from the same person (or dog) as last time.
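The bookkeeping I want here is sometimes called key continuity. A minimal sketch, assuming nothing beyond the Python standard library (the store and function names are made up for illustration; a real tool would persist the store on disk and use genuine PGP fingerprints rather than a SHA-256 of the key bytes):

```python
import hashlib

clpa_store = {}  # participant address -> key fingerprint seen previously

def fingerprint(pubkey_bytes):
    # Stand-in for a real PGP key fingerprint.
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_continuity(address, pubkey_bytes):
    """First sighting: record the key. Later sightings: report whether it matches.

    Note what this does NOT claim: nothing about the participant's legal
    identity -- only that this is (or is not) the same key as last time.
    """
    fp = fingerprint(pubkey_bytes)
    if address not in clpa_store:
        clpa_store[address] = fp
        return "new"
    return "same" if clpa_store[address] == fp else "CHANGED"
```

The interesting output is "CHANGED": the proverbial dog at the keyboard is fine, but a different dog claiming to be the same dog is worth noticing.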

Bad idea #1: I could sign all those keys using my personal key. However, according to the instructions that apply to key-signing parties, that would tend to imply that I met each such person and checked their driver’s license to make sure they were who they claimed to be. This is not at all the meaning I wish to convey.

Bad idea #2: Suppose I create an ad-hoc key just for this purpose. Call it the Crypto List Participation Award (CLPA) key. In the comments section of the CLPA key I put a URL linking to some document – perhaps this document – explaining exactly what a signature from this key means.

This idea has about ten problems. Some of the problems are fixable, but it’s not worth bothering, because of the remaining unfixable problems.

As an example of a minor problem, I might well want to distribute the CLPA document privately, rather than putting it up on the public keyservers. This can be mostly fixed by using a special-purpose private keyserver, although this is somewhat inconvenient.

Here’s a different issue: You can use PGP to sign documents, which is fine. The body of the document spells out what the signature means. In contrast, the whole idea of “keys signing keys” is ridiculous: there is no way to assign it a reasonable semantics.

Another basic problem is that using the PGP keyservers to store and communicate things like the CLPA key is not scalable. In general, the keyserver is a roach motel: Keys check in but they never check out. The collection of Cryptography List participants is basically a document, and the idea that every time I sign a document I put it on the keyserver is pretty much ridiculous.

What’s missing here is that we would like the applications that use PGP keys to know about the signed CLPA list. The goal is that when an application sees one of the keys on the list, it should somehow indicate that we know something interesting about that key.

3  Badges aka Credentials

The notion of badges may be of use here. The Crypto List Participation Award can be considered a type of badge. It implies a certain subset of the idea of identification, namely “same person as last time”. It implies a certain subset of the notion of trust, namely that the person is trusted to contribute to the list.

On social media sites, “badge” is the trendy term. The more formal term is “credential”. I will use the words “badge” and “credential” pretty much interchangeably. Another word with a similar meaning is “certificate” ... provided we use the broad, vernacular meaning of the word. In this context, it is emphatically not restricted to x509 certificates.

In the real world, there are many kinds of credentials: The schoolteacher has a teaching certificate; the aviator has a pilot certificate; the weapons inspector has a NATO security clearance; et cetera. It’s all very multidimensional, in contrast to the one-dimensional notion deplored in item 3 above. A given individual might have none of the just-mentioned credentials, or all of them, or any combination.
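A multidimensional credential check is easy to sketch. In this illustration (all badge names and holders are invented), authorization asks whether the holder has the required set of badges; no scalar “trust level” appears anywhere:

```python
# Credentials as a set-valued attribute, not a scalar "amount of trust".
badges = {
    "pat":   {"teaching-certificate", "pilot-certificate"},
    "lee":   {"nato-clearance"},
    "robin": set(),
}

def may(holder, required):
    """Authorize an action iff the holder has every required badge."""
    return required <= badges.get(holder, set())

# Multidimensional: there is no ordering that says pat is "more trusted"
# than lee -- each has a badge the other lacks.
assert may("pat", {"pilot-certificate"})
assert not may("pat", {"nato-clearance"})
assert not may("lee", {"pilot-certificate"})
```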

In many cases, badges should not be exhibited in public. They should be shown on a need-to-know basis, subject to the badge-holder’s approval. For example, if somebody works at a nuclear weapons lab, he needs some credentials to get in the door, but the fewer people who know about that, the better.

In general, people rely far too much on credentials. A credential is just a symbol. Never confuse a symbol with the thing symbolized. Certainly I don’t automatically believe everything said by some highly-credentialed “authority” figure. See reference 1. Someone with a high-school teaching certificate or flight-instructor certificate is not automatically a good instructor. I’ve seen plenty of counterexamples.

As limited as the notion of credentials might be, it’s still a huge advance over the one-dimensional notion of “trust” that’s built into PGP. For every problem that credentials have, the PGP “trust” model has the same problem, only orders of magnitude worse.

4  Identification versus Trust versus Authorization

For thousands of years, so long as almost nobody traveled outside their home village, identification implied a certain amount of trust, because if somebody reneged on his word, you could send enforcers to his house.

It seems people have thoughtlessly extended this pre-modern notion to electronic commerce. If they thought about it for a femtosecond, they would realize how ridiculous it is.

In the cryptography and security business, the term “identification” has a somewhat arcane technical meaning. I never like to argue about terminology, so I will avoid the problem by talking about something else and giving it a clear name: identifying information.

The idea that an e-commerce site would use identifying information about me as evidence that I have authorized a particular transaction is completely ridiculous. For one thing, it is spectacularly open to MITM attacks. Every time I give my identifying information to another party, it empowers that party to impersonate me. Using a social-security number or other “personal” information as if it were a secret password is security malpractice of the highest order.

Nowadays lots of people say that “identity theft” is a big problem. I say that’s the wrong term for it, and the wrong way to think about it. The correct term is “authorization protocol failure”, inflicted by bone-headed e-commerce protocols.

Let’s be clear: It is easy to obtain identifying information about me; it always was, and always will be. It is simply not possible for me to prevent this. What is possible is for banks, merchants, and others to stop pretending that anyone who knows things about me must be me.
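To illustrate the distinction, here is a sketch of what genuine authorization looks like, with HMAC-SHA256 standing in for whatever real protocol a bank might use (all names here are illustrative). The credit union and I share a secret established out of band; a transaction is accepted only if it carries a tag computed with that secret. Knowing my SSN, address, and mother’s maiden name does not help the attacker at all:

```python
import hashlib
import hmac

# Shared secret, established out of band and never transmitted.
shared_secret = b"agreed in person at the credit union"

def authorize(transaction: bytes) -> bytes:
    """The account holder tags a transaction using the shared secret."""
    return hmac.new(shared_secret, transaction, hashlib.sha256).digest()

def bank_accepts(transaction: bytes, tag: bytes) -> bool:
    """The bank accepts only if the tag matches; identifying info is irrelevant."""
    expected = hmac.new(shared_secret, transaction, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

txn = b"transfer $10,000 to account 12345"
assert bank_accepts(txn, authorize(txn))

# An attacker armed with my SSN and other identifying info cannot forge the tag:
forged = hmac.new(b"SSN 078-05-1120", txn, hashlib.sha256).digest()
assert not bank_accepts(txn, forged)
```

The point is not this particular construction; the point is that authorization rests on something only I can do, not on facts about me that anybody can look up.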

5  Root Authority and/or Pinning

There was a time when networks had no security at all, not even against passive snooping. You could log in using telnet, sending your password in the clear. That would be considered gross malpractice today; SSH has been around for 20 years.

Until recently, in some circles, worrying about active MITM attacks was considered tin-foil-hat paranoia. Not anymore. Nowadays we need strong protection against such attacks ... much stronger than what is presently deployed. IMHO, continuing with things as they are would constitute gross malpractice. Reference 2 analyzed 3,447,719 real-world SSL connections and discovered that at least 0.2% of them were forged. There is an ongoing high-level threat, as discussed in reference 3.

SSL relies on x509 certificates with (until recently) no pinning. In contrast, SSH relies on pinning with no certification.

An x509 certificate is trusted if it is signed by some higher authority. The chain goes back to some trusted “root CA” i.e. certificate authority. The problem is, various providers including Mozilla and Ubuntu recognize well over 100 root CAs. The chance that all of these root CAs can resist penetration from the NSA, from the Third Directorate, etc. is zero.

On the other side of the same coin, SSH is vulnerable, because if somebody mounts a successful MITM attack against your first connection to a given host, the attack can be continued indefinitely.

The combination of certification plus pinning should make MITM attacks considerably harder to carry out. For some all-too-rare good news on this front, see reference 4.
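The combination can be sketched in a few lines. In this illustration (all names are invented, and real validation involves full x509 chain checking rather than a boolean flag), a server key is accepted only if some CA vouches for it AND it matches the pin recorded on first contact:

```python
import hashlib

pins = {}  # hostname -> hash of the public key seen on first contact

def accept(host, pubkey_bytes, ca_says_valid):
    """Require both checks: CA certification AND SSH-style pinning."""
    fp = hashlib.sha256(pubkey_bytes).hexdigest()
    if not ca_says_valid:       # SSL-style check: some root CA signed it
        return False
    if host not in pins:        # SSH-style: pin on first use
        pins[host] = fp
        return True
    return pins[host] == fp     # thereafter, both checks must pass
```

With this policy, a MITM needs more than a certificate from one of the hundred-plus compromisable root CAs: he also needs to have been in the middle on first contact, and to stay there forever.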

A perfect solution is probably not possible in principle, because if somebody with sufficient resources wants to create an artificial Truman Show / Matrix world for you to live in, they can probably do it. However, we should not let the perfect be the enemy of the good.

6  Traffic Analysis

I use SSL (https) on my personal web site. In theory, the goal is that visitors should be allowed to read stuff without the whole world knowing exactly what they are reading.

However, I have not the slightest expectation that SSL achieves this goal. That’s because of traffic analysis. Most of my web pages include .png images. By observing the number of files and their sizes, any halfway-determined adversary could figure out what documents are being read.

Similarly, even if I encrypt the body of every email message I send, third parties can still figure out whom I’m corresponding with. Again, this is because of traffic analysis.

There exist methods for combating traffic analysis. This starts with sending lots of cover traffic. It also requires removing identifying marks (such as exact byte counts) from documents. Alas, these methods are not particularly easy to carry out, and standard tools provide very little in the way of support.
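As one small example of the byte-count problem, here is a sketch of padding every message up to the next power-of-two bucket, so that the exact size no longer identifies the document. This is illustrative only; cover traffic and timing are separate problems, and a real system needs an agreed framing format:

```python
import struct

def pad_to_bucket(msg: bytes, minimum: int = 1024) -> bytes:
    """Prefix the true length, then zero-pad to the next power-of-two bucket."""
    framed = struct.pack(">I", len(msg)) + msg  # 4-byte length prefix
    bucket = minimum
    while bucket < len(framed):
        bucket *= 2
    return framed + b"\x00" * (bucket - len(framed))

def unpad(padded: bytes) -> bytes:
    """Recover the original message from its padded form."""
    (n,) = struct.unpack(">I", padded[:4])
    return padded[4:4 + n]

# A 5-byte note and a 500-byte note now look identical on the wire:
assert len(pad_to_bucket(b"hello")) == len(pad_to_bucket(b"x" * 500)) == 1024
```

The eavesdropper still sees that *something* of bucket size 1024 went by, which is why padding must be combined with cover traffic to be worth much.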

7  References

1.
John Denker,
“Soft versus Hard Evidence; Appeal to Authority etc.”
www.av8n.com/physics/authority.htm

2.
Lin-Shung Huang, Alex Rice, Erling Ellingsen, and Collin Jackson,
“Analyzing Forged SSL Certificates in the Wild”
https://www.linshunghuang.com/papers/mitm.pdf

3.
Ryan Gallagher,
“New Snowden Documents Show NSA Deemed Google Networks a ‘Target’”
http://www.slate.com/blogs/future_tense/2013/09/09/shifting_shadow_stormbrew_flying_pig_new_snowden_documents_show_nsa_deemed.html

4.
Richard Chirgwin,
“Firefox 32 moves to kill MITM attacks”
http://www.theregister.co.uk/2014/09/03/firefox_32_moves_to_kill_mitm_attacks/