
Old public keys should be stored in versioned metadata, not roles #835

Closed
ecordell opened this issue Jul 14, 2016 · 17 comments

@ecordell
Contributor

Currently, rotating root keys versions the role and keeps the old keys around in the metadata indefinitely. Here's a quick example after adding two root keys:

root.json

{
    "signed": {
        "_type": "Root",
        "consistent_snapshot": false,
        "expires": "2026-07-10T15:54:46.480499158-04:00",
        "keys": {
            "072c38acffb2309e45bb942cfbd44d15a644b303713dda5dd0f2af25c71a6a7a": {
                "keytype": "ed25519",
                "keyval": {
                    "private": null,
                    "public": "uuyVXZHUGX3Sau6Yk3rC6mK2fq2fXODgKwXj8jkOo6I="
                }
            },
            "117e3e9ccce8a85a236c2156b1729e57ebd427b544a65c9dfa2d2145c64534e9": {
                "keytype": "ecdsa",
                "keyval": {
                    "private": null,
                    "public": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEcl5LAZXA5VLrKq5t+EzcWyskVOyTIANtqh4r0znfE+Qv867Lbx29vlU6A1BsoWtQGORIotzJ78aSbu5DWPCVgQ=="
                }
            },
            "5ccfcc693384568483aa960453b9e13fbb5d10d5251f1ab5ed5fc76c58d7372e": {
                "keytype": "ecdsa-x509",
                "keyval": {
                    "private": null,
                    "public": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJaekNDQVF5Z0F3SUJBZ0lSQUttbTF5R0VJZlIxa3ZZcldjWmJNYXd3Q2dZSUtvWkl6ajBFQXdJd0dURVgKTUJVR0ExVUVBeE1PY1hWaGVTOTBaWE4wTVRVNU16a3dIaGNOTVRZd056RXlNVGsxTkRRMFdoY05Nall3TnpFdwpNVGsxTkRRMFdqQVpNUmN3RlFZRFZRUURFdzV4ZFdGNUwzUmxjM1F4TlRrek9UQlpNQk1HQnlxR1NNNDlBZ0VHCkNDcUdTTTQ5QXdFSEEwSUFCSjRodWx6VmxETnZSS1ZYN1RZVytCeUNNdEFWNnlDZkI0Ymp1MXVZWlkwcUhRcU4KYkxORmRhblRLN2VlMmx2bVowalRuS3ZoMGVTZE5reFh2VThjbWhLak5UQXpNQTRHQTFVZER3RUIvd1FFQXdJRgpvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQXpBTUJnTlZIUk1CQWY4RUFqQUFNQW9HQ0NxR1NNNDlCQU1DCkEwa0FNRVlDSVFEbzlnTkhkSU5JZzFhN1BMcGQ1ZE1pcU1WMlRubzNVcEg1NTdWaVVvYTF5Z0loQU1JemcrM1YKcG5HcGRYTURFWkx5cnhGWHYzOURaUVdsTGlZbjd4K0x2d2FtCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
                }
            },
            "86958cab1fb3cbaeae393fe7b7af3109adf627effc8e0ba39e23b185d266c61e": {
                "keytype": "ecdsa",
                "keyval": {
                    "private": null,
                    "public": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIowYCNsTPsmTkA4WzO5VdwOtVs3AZJnkW2pdPIKa30uB/oURJGx6gMGfuv3yZEu87hlVUOXT2DpIIRJIcyru5A=="
                }
            },
            "997c16ec6386ae57914a69983c712a05fa6382e4826251e9283c2cda21186045": {
                "keytype": "ecdsa",
                "keyval": {
                    "private": null,
                    "public": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEp5vDm97fT4AnrqS7qmcTrE7Mu4aeFIVAGXU9k3kigWALgRQ9c+g+FLeX4hxd3vJwOd7JjaSsalAyZC/wDduDGw=="
                }
            },
            "ca35263785be4f26f01d216c6f6238810c9ea997f35dcd2bbfc812a94e492278": {
                "keytype": "ecdsa",
                "keyval": {
                    "private": null,
                    "public": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEZMTJCaBeOdWis/ZtnX30W+TTuYBRz7dGIhTH5zGAlVcK1gY6qOKDJWtI6Nvuo73xsvKIpPXP2CkxhNCq16L66Q=="
                }
            }
        },
        "roles": {
            "root": {
                "keyids": [
                    "5ccfcc693384568483aa960453b9e13fbb5d10d5251f1ab5ed5fc76c58d7372e",
                    "117e3e9ccce8a85a236c2156b1729e57ebd427b544a65c9dfa2d2145c64534e9",
                    "997c16ec6386ae57914a69983c712a05fa6382e4826251e9283c2cda21186045"
                ],
                "threshold": 1
            },
            "root.1": {
                "keyids": [
                    "5ccfcc693384568483aa960453b9e13fbb5d10d5251f1ab5ed5fc76c58d7372e"
                ],
                "threshold": 1
            },
            "root.2": {
                "keyids": [
                    "5ccfcc693384568483aa960453b9e13fbb5d10d5251f1ab5ed5fc76c58d7372e",
                    "117e3e9ccce8a85a236c2156b1729e57ebd427b544a65c9dfa2d2145c64534e9"
                ],
                "threshold": 1
            },
            "snapshot": {
                "keyids": [
                    "ca35263785be4f26f01d216c6f6238810c9ea997f35dcd2bbfc812a94e492278"
                ],
                "threshold": 1
            },
            "targets": {
                "keyids": [
                    "86958cab1fb3cbaeae393fe7b7af3109adf627effc8e0ba39e23b185d266c61e"
                ],
                "threshold": 1
            },
            "timestamp": {
                "keyids": [
                    "072c38acffb2309e45bb942cfbd44d15a644b303713dda5dd0f2af25c71a6a7a"
                ],
                "threshold": 1
            }
        },
        "version": 3
    },
    "signatures": [{
        "keyid": "5ccfcc693384568483aa960453b9e13fbb5d10d5251f1ab5ed5fc76c58d7372e",
        "method": "ecdsa",
        "sig": "GEe8atrsRO5fy5dhcYQlsV0K2sr+Uwum1utBBViLLqTEdsYXGLPN6dWgpMQU8y6U4J/WvqV43xnMRLxaBmXwvw=="
    }]
}

I think the older keys should be available in root.2.json and root.1.json, not as additional roles in a single root.json file. This way, clients that have older metadata can request intermediate metadata to become current.
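To make the idea concrete, here is a minimal sketch of how a client would enumerate the intermediate metadata it needs; the `N.root.json` filename scheme is illustrative, not an agreed-upon convention:

```python
# Sketch only: the "N.root.json" filename scheme is illustrative, not an
# agreed-upon notary/TUF convention.
def intermediate_root_paths(have_version, latest_version):
    """List the versioned root files a client at `have_version` would
    request in order to step forward to `latest_version`."""
    return ["%d.root.json" % n
            for n in range(have_version + 1, latest_version + 1)]
```

For example, a client at version 1 catching up to version 3 would request `2.root.json` and `3.root.json`.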

I saw the discussion on #267 and #648, but I don't think this implementation follows TUF. I wanted to bring this up for discussion before working on it, since it would be a breaking change.

@cyli
Contributor

cyli commented Jul 14, 2016

Heya @ecordell! Just wanted to summarize for those who may not have read through those two PRs, since they're really long:

You can actually just remove the data entirely without breaking any of the TUF functionality. None of the TUF validation depends on the old keys being present in the list of keys, nor on the old root roles. Clients that are several versions behind do not actually check any of these roles, since anyone holding a current root key could arbitrarily replace that history anyway.

The old root roles were added mainly as a usability feature for clients who need to sign a root after the initial root rotation, so that they know which keys to sign with, since according to the TUF spec you have to keep signing with old keys basically forever.

And we would like some way of allowing a client to explicitly say "I don't have these keys anymore; sign anyway without these very old keys."

You are right that the size of root.json will keep growing after every root rotation, which is not ideal. We thought that was a better tradeoff than having such a client download all historic roots every time they want to update the root file, or forcing them to maintain their own list of public keys to sign with in perpetuity, but we would be open to re-evaluating this.

Incidentally, we've also been discussing how to have a (hopefully) cryptographically verifiable history so that clients who are behind can walk forward in time to a new root. @mtrmac made the suggestion of putting the old root checksum in the new root file (so we can download previous roots by checksum). If we did want any client (either a root signer or just one catching up to the latest version of the repo) to be able to walk a chain of old roots, perhaps that might suit better than numbering the root.jsons (since version numbers can skip)?
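A minimal sketch of that checksum-chaining idea, assuming a hypothetical `previous_root_sha256` field (the field name and serialization are illustrative, not an agreed format):

```python
import hashlib
import json

# Sketch of @mtrmac's suggestion: each new root embeds the checksum of the
# previous root so a client can walk the history by hash. The field name
# "previous_root_sha256" is illustrative, not an agreed format.

def root_digest(root_bytes):
    return hashlib.sha256(root_bytes).hexdigest()

def make_root(version, key_ids, prev_bytes=None):
    doc = {"version": version, "keys": key_ids}
    if prev_bytes is not None:
        doc["previous_root_sha256"] = root_digest(prev_bytes)
    return json.dumps(doc, sort_keys=True).encode()

def verify_link(newer_bytes, older_bytes):
    """Check that `newer_bytes` really claims `older_bytes` as its parent."""
    newer = json.loads(newer_bytes)
    return newer.get("previous_root_sha256") == root_digest(older_bytes)
```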

@ecordell
Contributor Author

ecordell commented Jul 15, 2016

Thanks @cyli for the overview! It's really encouraging, because I think we're converging on some very useful additions to the TUF spec.

I'd like to link to some thoughts we've been ruminating on, a proposal for a public log ("cryptographically verifiable history") for TUF. I didn't see the issue on notary that you linked before, but it looks like we've separately arrived at the same conclusion, so I think this is a very good candidate for upstreaming into TUF. (As you can see from that document we're thinking about pinning as well; I think we have different requirements from DCT in that regard.)

But there may still be a case for storing previous roots by number: if a client is very out of date (say they have version 1 and current is version 100), using a hash chain for verification would not be very performant - the client would have to download each file one at a time, verify it, and then request the next by hash. If the previous roots are addressable by number, they can be requested in parallel and the hash chain used to verify once they're downloaded locally.

Some combination of the two could be useful; perhaps the parent metadata is requested by hash and any other previous versions requested by number? Perhaps multiple "keyframe" hashes are stored, with version numbers filling in the gaps.
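That parallel-download-then-verify flow might look like this; `fetch` and `verify_link` are hypothetical placeholders for the transport layer and the parent-checksum check:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_and_verify_chain(fetch, have_version, latest_version, verify_link):
    """Fetch intermediate roots in parallel by version number, then walk the
    hash chain sequentially once everything is local.

    `fetch(n)` and `verify_link(newer, older)` are hypothetical placeholders,
    not notary APIs; this only illustrates the parallelism argument.
    """
    versions = list(range(have_version, latest_version + 1))
    with ThreadPoolExecutor() as pool:
        # Requests by version number are independent, so they can overlap.
        blobs = dict(zip(versions, pool.map(fetch, versions)))
    # The hash chain is then checked locally, oldest to newest.
    for n in versions[1:]:
        if not verify_link(blobs[n], blobs[n - 1]):
            raise ValueError("hash chain broken between v%d and v%d" % (n - 1, n))
    return blobs
```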

The old root roles were added mainly as a usability feature for clients who need to sign a root after the initial root rotation, so that they know which keys to sign with, since according to the TUF spec you have to keep signing with old keys basically forever.

This wouldn't be true if there were a history of root.json files - you would simply download back until you see a key you've trusted. (This property holds whether you use a hash chain or not.)

You are right that the size of root.json will keep growing after every root rotation, which is not ideal. We thought that was a better tradeoff than having such a client download all historic roots every time they want to update the root file, or forcing them to maintain their own list of public keys to sign with in perpetuity, but we would be open to re-evaluating this.

When downloading a history, you only need to go as far back as keys that you trust - once a client has updated to the keys in version 5, they never need to go back and pull version 4. Analogously for someone signing root files, they don't need to keep older keys once they've published the new version rotating them away. Am I missing some aspect that would make that not be the case?

Anyway I like where this is going, and I think we're on the same page. I'm a little confused about some of the claims for keeping old keys around, since as far as I can tell, having a history (hash chained or not) removes that requirement.

Opened up an issue for discussion on TUF: theupdateframework/python-tuf#340 . I'm hoping that between us and the TUF people we can come up with a generally useful solution.

@cyli
Contributor

cyli commented Jul 15, 2016

This wouldn't be true if there were a history of root.json files - you would simply download back until you see a key you've trusted.

Yes, we agree, and that was something we had previously discussed - we had just de-prioritized the history functionality. Signing forever is simply what the spec recommends and is easier to do, and the older root roles were a stopgap measure to make that more usable until we were sure the history mechanism we chose provided all of TUF's guarantees, since it will be used for the most critical part of the update path.

@cyli
Contributor

cyli commented Jul 15, 2016

@ecordell Just to be clear, are you talking about not having to sign with all the old keys because a consumer client would have a way to walk backwards in history to a key that they trust? Hence publishers would have an easy time, only having to sign with two sets of keys on the root rotation itself, but making it more expensive for consumer clients to update (O(n) downloads of root)?

Or are you talking about a publisher client walking backwards in history to get all the keys to sign with? This would make it more expensive for publisher clients to publish in exchange for consumer clients to be able to update cheaply (O(1) downloads of root)?

@ecordell
Contributor Author

Just to be clear, are you talking about not having to sign with all the old keys because a consumer client would have a way to walk backwards in history to a key that they trust? Hence publishers would have an easy time, only having to sign with two sets of keys on the root rotation itself, but making it more expensive for consumer clients to update (O(n) downloads of root)?

Yes exactly, but the downloads only have to occur up to the last seen root. Similar to a fast-forward in a git repo.

@endophage
Contributor

@ecordell so I don't think this would provide the same security guarantees. Including the hash of the previous root gives us a way to move backwards in time in a trustworthy way. I don't think it would allow a client to move forward in time in a similarly trustworthy way.

If I have root v1, and I need to get to root v5 (the "current" one), the spec requires me to use data I already know, namely the root keys in root v1, to verify the new data I'm receiving, the root.json v5.

In the same situation with hashes, I can determine that root v5 chains back to root v1, but without signatures from the root v1 keys, how do I determine that the root v5 data is valid? A malicious person could generate a root v5 and chain it back to a valid root v4; how do I detect that their v5 is in fact invalid?

@ecordell
Contributor Author

In the same situation with hashes, I can determine that root v5 chains back to root v1, but without signatures from the root v1 keys, how do I determine that the root v5 data is valid? A malicious person could generate a root v5 and chain it back to a valid root v4; how do I detect that their v5 is in fact invalid?

This was the idea behind having current metadata signed with the previous threshold/keys as described in theupdateframework/python-tuf#340. You trust v5 because it's chained to v4 and v4's keys have signed v5, etc back to v1.

If a threshold of keys has at some point been compromised, a client can be convinced to trust a fork, but this is not especially different from a threshold of keys being compromised in the non-chained case. A nice property of the chaining case is that if a client were to become compromised, the server would know immediately if the client ever established a connection again (because it would be requesting nonexistent hashes).

@endophage
Contributor

endophage commented Jul 15, 2016

@ecordell ok, @cyli's last comment was trying to establish if you were suggesting we could do away with previous root keys being required to sign. It seems like that's not what you're suggesting.

To be 100% clear and make sure we're all understanding. You would not expect a consumer client to use the parent hashes in root files in any way to walk forward. Is that accurate?

OK, it seems like I was misunderstanding. @cyli has explained it to me a different way. Going to go away and think now. :-)

@ecordell
Contributor Author

ecordell commented Jul 15, 2016

last comment was trying to establish if you were suggesting we could do away with previous root keys being required to sign.

That is what I was suggesting.

You would not expect a consumer client to use the parent hashes in root files in any way to walk forward. Is that accurate?

No, I would expect clients to use the parent hashes to walk forward.

Let me write out an example and hopefully you can tell me where we're misunderstanding each other:

  1. root v1 - keys: A, B, C - threshold: 2 - signatures: A, B
  2. Company policy indicates the keys should be rotated, so A, B and C are swapped for D, E, and F
  3. root v2 - keys: D, E, F - threshold: 2 - parent: hash(v1) - signatures: A, C, D, F
  4. v2 signatures satisfy both key/threshold of v2 and key/threshold of v1
  5. root v1 must be accessible on the server
  6. no one needs to keep A, B, or C (private keys) around at all.

Suppose a client starts out by trusting A, B, and C. When they request updates for root.json, they get v2, which has a different set of trusted keys and a parent hash. They request the parent metadata by hash, and its keys are already trusted by the client (through your favorite bootstrapping process). The client then verifies v1 signatures, verifies v2 with the keys listed in v1, and then verifies v2 with the keys listed in v2. Once that's done the client can forget A, B, and C, and update their most recent trusted root keys to D, E, and F.

Does that clarify anything? Am I misunderstanding your concerns?
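The dual-threshold rule in the example could be sketched as follows; `rotation_valid` is a hypothetical helper, and actual signature verification is elided down to the set of key IDs that produced valid signatures:

```python
def rotation_valid(prev_role, new_role, signed_by):
    """Check the rotation rule from the example: the new root's signatures
    must satisfy the key/threshold of BOTH the previous root and the new root.

    Roles are plain {"keyids": [...], "threshold": n} dicts and `signed_by`
    is the collection of key IDs that produced valid signatures; the
    cryptographic verification itself is elided. Hypothetical helper, not a
    notary API.
    """
    def meets(role):
        return len(set(signed_by) & set(role["keyids"])) >= role["threshold"]
    return meets(prev_role) and meets(new_role)
```

With the example above (v1: keys A, B, C, threshold 2; v2: keys D, E, F, threshold 2), signatures from A, C, D, F satisfy both roles, while signatures from only D and E would satisfy v2 but not v1.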

@endophage
Contributor

@cyli got it clear for me but this helps too. We need to be more clear about what we mean when we say "old/previous root keys". Your proposal still requires the root keys of the immediately previous root version to sign, but removes the need for all old keys to sign a new root. So if there is a root change that doesn't involve a root rotation, only the current keys will have appeared to sign.

@ecordell
Contributor Author

ecordell commented Jul 16, 2016

Your proposal still requires the root keys of the immediately previous root version to sign, but removes the need for all old keys to sign a new root.

Yes - this brings the signing requirements for rotation to be the same as for a non-rotation change. If it's worth making the distinction, I might argue that "requires the root keys of the immediately previous root" is the same as "don't need to store any previous root keys." Key rotation uses the current keys to sign for new ones (after which the previous "current" ones can immediately be forgotten).

So if there is a root change that doesn't involve a root rotation, only the current keys will have appeared to sign.

This is fine though: you can increment the version again, and through the history only the "current keys" will be trusted anyway. (i.e. you don't need "current" + "previous" keys to sign in the non-rotation case)

I think we're on the same page now!

@cyli
Contributor

cyli commented Jul 22, 2016

Heya @ecordell, apologies for not responding sooner. So to be clear, the logic for clients when updating will now be:

  1. Follow the TUF download chain until you get to the root. If the root is not signed by the right keys, then:
  2. Request from the server all root versions between your old root version and the new root version.
  3. Just download whatever the server gives you - you get a pile of roots, which you sort in ascending root version order.
  4. In a loop, check whether the next higher version root validates against your current good root. If it's good, promote that root to the current good root.
  5. If the final root validates, then continue the TUF download chain?
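That update loop could be sketched like this, with `validates` standing in for full TUF signature verification against the trusted root's keys and threshold:

```python
def catch_up(current_root, downloaded_roots, validates):
    """Sort the pile of downloaded roots by version and promote each one
    that validates against the current good root.

    `validates(candidate, trusted)` is a hypothetical placeholder for full
    TUF signature verification; roots here are plain dicts with a "version"
    key. Sketch only, not notary's implementation.
    """
    for candidate in sorted(downloaded_roots, key=lambda r: r["version"]):
        if candidate["version"] <= current_root["version"]:
            continue  # skip stale or duplicate versions the server sent
        if not validates(candidate, current_root):
            raise ValueError("root v%d does not validate against v%d"
                             % (candidate["version"], current_root["version"]))
        current_root = candidate  # promote to current good root
    return current_root
```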

I'm not seeing any security issues with this, other than the fact that the server can serve you huge amounts of data, but I might be missing something. cc @diogomonica @NathanMcCauley ?

Root versions might increment because some other key got rotated, though. Are you anticipating that we'll store extra data regarding whether a root version involved a root change on the server, so we don't have to send the client every root version? Or should the client just take the performance hit by requesting every version in parallel so it doesn't have to depend on the server to filter out roots?

@ecordell
Contributor Author

Client logic looks sound to me!

Just download whatever the server gives you

Based on the current version number the client has and the latest version on the server, the client should have expectations about the number of files it will be downloading. In general I don't think you can make claims about the size of metadata files, though; it may make sense to have a client setting for maximum metadata size - there's probably some reasonable cap per notary deployment. (Aside: it's interesting that TUF specifies file sizes for targets but not for metadata; I don't see why an endless data attack couldn't be performed against the metadata as well.)
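A simple way to enforce such a cap, sketched with a hypothetical `read_capped` helper (the per-deployment limit and the helper itself are assumptions, not notary APIs):

```python
def read_capped(stream, max_bytes):
    """Read at most `max_bytes` from a response body, failing loudly if the
    server tries to send more - a guard against endless-data attacks on
    metadata. `max_bytes` would be a per-deployment setting, as suggested
    above; this helper is hypothetical, not part of notary.
    """
    # Read one extra byte so we can tell "exactly at the cap" from "over it".
    data = stream.read(max_bytes + 1)
    if len(data) > max_bytes:
        raise ValueError("metadata exceeded cap of %d bytes" % max_bytes)
    return data
```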

other than the fact that the server can serve you huge amounts of data

This is a good point and I think something to bring up with the TUF people. What stops me from sending an endless stream of data when you request any metadata file? (i.e. this isn't an issue with this proposal but TUF in general)

Or should the client just take the performance hit by requesting every version in parallel so it doesn't have to depend on the server to filter out roots?

This is what I was thinking, with the following reasoning:

  • Clients that are updating frequently will rarely have to request more than one root file
  • Clients that aren't updating frequently may have to request more files, but will only have to do so occasionally.

(Basically an amortization argument)

There's a lot of room for optimization: perhaps the client sends a header (ETag would be a good candidate) specifying the current version of the root that the client has. Then the server could send a compressed archive of all intermediate versions in one shot. And as you pointed out, the minimal set of root files that need to be downloaded are only those "keyframes" that contain root key rotations.

@cyli
Contributor

cyli commented Jul 22, 2016

What stops me from sending an endless stream of data when you request any metadata file? (i.e. this isn't an issue with this proposal but TUF in general)

The spec includes a length alongside each metadata checksum, though not for the timestamp, but that should just be a very small file. Our download logic currently specifies a constant max size for the timestamp, and then downloads the other metadata one file at a time, cutting off the response body after the specified file size.

Although when we download a root for the first time, or a fresh root (since we need that to verify the timestamp), we just go with the max size. We can probably do the same when downloading historical roots.

But a simple "download every version between mine and the latest" with limited parallelism seems like it may be a good first draft implementation.

We can also do something where, if there are only 3 versions between the current version and the latest, we just download all of them, but if there are 100 we download every 10th version, so we can narrow down where we may need to download extra roots, kind of like a more elaborate git bisect.

@cyli
Contributor

cyli commented Aug 5, 2016

@ecordell Apologies for not following up on this sooner! After discussing as a team, this sounds like a good plan, if you would like to work on it (or have already started). :)

@ecordell
Contributor Author

@cyli No worries! My delay has been making sure we work with TUF to clarify this issue in the spec and update it in the reference implementation.

After appropriate review here (theupdateframework/python-tuf#341) I think it makes sense to start working on this. Happy to take that on if it hasn't already been started.

@cyli
Contributor

cyli commented Aug 17, 2016

Oops, sorry, I thought I had hit "comment" - no one is working on it yet. :)
