Old public keys should be stored in versioned metadata, not roles #835
Heya @ecordell! Just wanted to summarize for those who may not have read through those two PRs, since they're really long: you can actually just remove the data entirely without breaking any of the TUF functionality. None of the TUF validation depends on the old keys being present in the list of keys, nor on the old root roles. Clients that are several versions behind do not actually check any of these roles, since anyone with a root key can then arbitrarily replace history.

The old root roles were added mainly as a usability feature for clients who need to sign a root after the initial root rotation, so they know which keys to sign with, because according to the TUF spec you have to sign with the old keys basically forever. And we would like some way of allowing a client to explicitly say "I don't have these keys anymore, sign anyway without these very old keys."

You are right that the size of root.json will keep growing after every root rotation, which is not ideal. We thought that was a better tradeoff than having such a client download all historic roots every time they want to update the root file, or forcing them to maintain their own list of public keys they'd have to sign with in perpetuity, but we'd be open to re-evaluating this.

Incidentally, we've also been discussing how to have a (hopefully) cryptographically verifiable history so that clients who are behind can walk forward in time to a new root. @mtrmac made the suggestion of putting the old root checksum in the new root file (so we can download previous roots by checksum) - if we did want any client (either a root signer or just one catching up to the latest version of the repo) to be able to walk a chain of old roots, perhaps that might suit instead of numbering the roots.
Thanks @cyli for the overview! It's really encouraging, because I think we're converging on some very useful additions to the TUF spec. I'd like to link to some thoughts we've been ruminating on: a proposal for a public log ("cryptographically verifiable history") for TUF. I didn't see the issue on notary that you linked before, but it looks like we've separately arrived at the same conclusion, so I think this is a very good candidate for upstreaming into TUF. (As you can see from that document, we're thinking about pinning as well; I think we have different requirements from DCT in that regard.)

But there may still be a case for storing previous roots by number: if a client is very out of date (say they have version 1 and current is version 100), using a hash chain for verification would not be very performant - a client would have to download each file one at a time, verify it, and then request the next by hash. If the previous roots are addressable by number, they can be requested in parallel and the hash chain used to verify once they're downloaded locally.

Some combination of the two could be useful; perhaps the parent metadata is requested by hash and any other previous versions requested by number? Perhaps multiple "keyframe" hashes are stored, with version numbers filling in the gaps.
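As an illustration of that fetch-by-number-then-verify-by-hash idea, here is a minimal Go sketch. Everything in it is hypothetical: `fetchRoot` stands in for an HTTP GET of a versioned root file, and `PreviousRootSHA256` is the proposed parent checksum field, which does not exist in notary's metadata today.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// Root is a simplified stand-in for parsed root metadata. The
// PreviousRootSHA256 field is the hypothetical parent checksum.
type Root struct {
	Version            int
	Raw                []byte            // canonical signed bytes
	PreviousRootSHA256 [32]byte          // hash of the parent version's Raw
	Keys               map[string][]byte // key ID -> public key
	Threshold          int
	Signatures         []Signature
}

type Signature struct {
	KeyID string
	Sig   []byte
}

// fetchRoot is a placeholder for a request like GET /root.<version>.json.
func fetchRoot(version int) (*Root, error) { /* … */ return nil, nil }

// catchUp downloads versions (trusted.Version, latest] in parallel by
// version number, then verifies the hash chain locally in one pass.
func catchUp(trusted *Root, latest int) ([]*Root, error) {
	roots := make([]*Root, latest-trusted.Version)
	var wg sync.WaitGroup
	for v := trusted.Version + 1; v <= latest; v++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			r, _ := fetchRoot(v) // error handling elided for brevity
			roots[v-trusted.Version-1] = r
		}(v)
	}
	wg.Wait()

	// Each downloaded root must reference the hash of its parent.
	prev := trusted
	for _, r := range roots {
		if r == nil || r.PreviousRootSHA256 != sha256.Sum256(prev.Raw) {
			return nil, fmt.Errorf("hash chain broken after version %d", prev.Version)
		}
		prev = r
	}
	return roots, nil
}
```

The hash chain only establishes linkage, not authorization; signature checks against each parent's keys (as discussed below) would still be needed.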
This wouldn't be true if there were a history of root.json files - you would simply download back until you see a key you've trusted. (This property holds whether you use a hash chain or not.)
When downloading a history, you only need to go as far back as keys that you trust - once a client has updated to the keys in version 5, they never need to go back and pull version 4. Analogously for someone signing root files: they don't need to keep older keys once they've published the new version rotating them away. Am I missing some aspect that would make that not be the case?

Anyway, I like where this is going, and I think we're on the same page. I'm a little confused about some of the claims for keeping old keys around, since as far as I can tell, having a history (hash chained or not) removes that requirement. Opened up an issue for discussion on TUF: theupdateframework/python-tuf#340. I'm hoping that between us and the TUF people we can come up with a generally useful solution.
Yes, we agree, and that was something we had previously discussed - we had just de-prioritized the history functionality. Signing forever is just what the spec recommends and is easier to do, and the older root roles were a stopgap measure to make this more usable until we were sure we provided all the TUF guarantees with whichever history mechanism we chose, since it will be used for the most critical part of the update path.
@ecordell Just to be clear, are you talking about not having to sign with all the old keys because a consumer client would have a way to walk backwards in history to a key that they trust? Hence publishers would have an easy time, only having to sign with two sets of keys on the root rotation itself, but it would be more expensive for consumer clients to update (O(n) downloads of root)? Or are you talking about a publisher client walking backwards in history to get all the keys to sign with? This would make it more expensive for publisher clients to publish, in exchange for consumer clients being able to update cheaply (O(1) downloads of root)?
Yes exactly, but the downloads only have to occur up to the last seen root. Similar to a fast-forward in a git repo.
@ecordell So I don't think this would provide the same security guarantees. Including the hash of the previous root gives us a way to move backwards in time in a trustworthy way; I don't think it would allow a client to move forward in time in a similarly trustworthy way. If I have root v1 and I need to get to root v5 (the "current" one), the spec requires me to use data I already know (namely the root keys in root v1) to verify the new data I'm receiving, the root.json v5. In the same situation with hashes, I can determine that root v5 chains back to root v1, but without signatures from the root v1 keys, how do I determine that the root v5 data is valid? A malicious person could generate a root v5 and chain it back to a valid root v4; how do I detect that their v5 is in fact invalid?
This was the idea behind having current metadata signed with the previous threshold/keys, as described in theupdateframework/python-tuf#340. You trust v5 because it's chained to v4 and v4's keys have signed v5, and so on back to v1. If a threshold of keys is compromised at some point, a client can be convinced to trust a fork, but this is not especially different from a threshold of keys being compromised in the non-chain case. A nice property of the chaining case is that if a client were compromised in this way, the server would know immediately if the client ever established a connection again (because it would be requesting nonexistent hashes).
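Continuing the hypothetical Go sketch above, one step of that walk-forward check might look like this; `verifySignature` is a stand-in for real cryptographic verification:

```go
// verifySignature is a placeholder for checking sig over msg with key.
func verifySignature(key, msg, sig []byte) bool { /* … */ return false }

// verifyNext checks a candidate root against the keys and threshold
// established by the version the client already trusts, per the scheme
// discussed in theupdateframework/python-tuf#340.
func verifyNext(trusted, candidate *Root) error {
	if candidate.Version != trusted.Version+1 {
		return fmt.Errorf("expected version %d, got %d",
			trusted.Version+1, candidate.Version)
	}
	valid := 0
	seen := make(map[string]bool)
	for _, s := range candidate.Signatures {
		key, ok := trusted.Keys[s.KeyID]
		if !ok || seen[s.KeyID] {
			continue // unknown key, or this key was already counted
		}
		if verifySignature(key, candidate.Raw, s.Sig) {
			seen[s.KeyID] = true
			valid++
		}
	}
	if valid < trusted.Threshold {
		return fmt.Errorf("got %d of %d required signatures from version %d's keys",
			valid, trusted.Threshold, trusted.Version)
	}
	return nil // candidate now becomes the trusted root
}
```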
@ecordell
OK, it seems like I was misunderstanding. @cyli has explained it to me a different way. Going to go away and think now. :-)
That is what I was suggesting
No, I would expect clients to use the parent hashes to walk forward. Let me write out an example and hopefully you can tell me where we're misunderstanding each other:
Suppose a client starts out by trusting A, B, and C. When they request updates for root.json, they walk forward one version at a time: each new version must be signed by a threshold of the keys listed in the version they already trust, and a rotation teaches them the next set of keys to check against. Does that clarify anything? Am I misunderstanding your concerns?
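To make the walk-forward concrete, one possible sequence of the kind being described here (the keys, threshold, and versions are illustrative, not from the original comment):

```
v1: root keys {A, B, C}, threshold 2   - the client's starting trust
v2: rotates to {D, E, F}               - signed by A, B, C (and D, E, F)
v3: no rotation, keys {D, E, F}        - signed by D, E, F only
```

A client holding v1 verifies v2 with A/B/C and learns D/E/F, then verifies v3 with D/E/F; nobody ever needs keys older than the immediately previous version.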
@cyli cleared it up for me, but this helps too. We need to be clearer about what we mean when we say "old/previous root keys". Your proposal still requires the root keys of the immediately previous root version to sign, but removes the need for all old keys to sign a new root. So if there is a root change that doesn't involve a root rotation, only the current keys will appear in the signatures.
Yes - this makes the signing requirements for a rotation the same as for a non-rotation change. If it's worth making the distinction, I might argue that "requires the root keys of the immediately previous root" is the same as "doesn't need to store any previous root keys": key rotation uses the current keys to sign for the new ones, after which the previous "current" keys can immediately be forgotten.
This is fine though: you can increment the version again, and through the history only the "current keys" will be trusted anyway (i.e. you don't need "current" + "previous" keys to sign in the non-rotation case). I think we're on the same page now!
Heya @ecordell, apologies for not responding sooner. So to be clear, the logic for clients when updating will now be:

1. Download the root versions between the client's current trusted version and the latest.
2. Verify that each version is signed by a threshold of the keys listed in the immediately previous version.
3. Trust each verified version in turn, until the latest root is reached.
I'm not seeing any security issues with this, other than the fact that the server can serve you huge amounts of data, but I might be missing something. cc @diogomonica @NathanMcCauley?

Root versions might increment because some other key got rotated, though. Are you anticipating that we'll store extra data on the server regarding whether a root version involved a root key change, so we don't have to send the client every root version? Or should the client just take the performance hit by requesting every version in parallel, so it doesn't have to depend on the server to filter out roots?
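Put together with the earlier sketches, a sequential version of that client logic could look like the following (`fetchRoot` and `verifyNext` are the hypothetical helpers from above; a client could equally fetch every version in parallel first and verify afterwards):

```go
// updateRoot walks from the client's trusted root to the latest version,
// letting each version's keys verify the next.
func updateRoot(trusted *Root, latest int) (*Root, error) {
	for v := trusted.Version + 1; v <= latest; v++ {
		candidate, err := fetchRoot(v)
		if err != nil {
			return nil, err
		}
		if err := verifyNext(trusted, candidate); err != nil {
			return nil, err
		}
		trusted = candidate // this version's keys verify the next one
	}
	return trusted, nil
}
```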
Client logic looks sound to me!
Based on the current version number the client has and the latest version on the server, the client should have expectations about the number of files it will be downloading. In general I don't think you can make claims about the size of metadata files, though; it may make sense to have a client setting for maximum metadata size - there's probably some reasonable cap per notary deployment. (Aside: it's interesting that TUF specifies file sizes for targets and not metadata; I don't see why an endless data attack couldn't be performed against the metadata as well.)
This is a good point and I think something to bring up with the TUF people. What stops me from sending an endless stream of data when you request any metadata file? (i.e. this isn't an issue with this proposal but with TUF in general)
This is what I was thinking, with the following reasoning: clients that stay close to the latest version only ever download a few root files, so the full catch-up cost is paid rarely, and only by very stale clients (basically an amortization argument).

There's a lot of room for optimization: perhaps the client sends a header (…)
The spec includes a length for metadata checksums, although not for the timestamp, but that should just be a very small file. Our download logic currently specifies a constant max size for the timestamp, and then downloads the other metadata one file at a time, cutting off the response body after the specified file size. When we download a root for the first time, though, or a fresh root (since we need that to verify the timestamp), we just go with the max size. We can probably do the same when downloading historical roots.

But a simple "download every version between mine and the latest" with limited parallelism seems like it may be a good first draft implementation. We can also do something where, if there are only 3 versions between the current version and the latest, we just download all of them, but if there are 100 we download every 10th version, so we can narrow down where we may need to download extra roots - kind of like a more elaborate git bisect.
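A minimal sketch of that cutoff behavior, assuming imports of `fmt`, `io`, and `net/http`; `io.LimitReader` enforces the cap, and `maxSize` would be the per-deployment setting mentioned above:

```go
// fetchRootCapped downloads one root file but refuses to read more
// than maxSize bytes, as a guard against an endless-data response.
func fetchRootCapped(url string, maxSize int64) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Read at most maxSize+1 bytes so "exactly at the cap" can be
	// distinguished from "over the cap".
	data, err := io.ReadAll(io.LimitReader(resp.Body, maxSize+1))
	if err != nil {
		return nil, err
	}
	if int64(len(data)) > maxSize {
		return nil, fmt.Errorf("metadata exceeded %d-byte limit", maxSize)
	}
	return data, nil
}
```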
@ecordell Apologies for not following up on this sooner! After discussing as a team, this sounds like a good plan, if you would like to work on it (or already have). :)
@cyli No worries! My delay has been making sure we work with TUF to clarify this issue in the spec and update it in the reference implementation. After appropriate review here (theupdateframework/python-tuf#341), I think it makes sense to start working on this. Happy to take that on if it hasn't already been started.
Oops, sorry, I thought I had hit "comment" - no one is working on it yet. :)
Currently, rotating root keys versions the role and keeps the old keys around in the metadata indefinitely: after adding two root keys, `root.json` still lists the old keys and versioned `root` roles alongside the current ones.
I think the older keys should be available in `root.2.json` and `root.1.json`, not as additional roles on a single `root.json` file. This way clients that have older metadata can request intermediate metadata to become current.

I saw the discussion on #267 and #648, but I don't think this implementation follows TUF. I wanted to bring this up for discussion before working on it, since it would be a breaking change.
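For illustration, the proposed layout might look like the following; the version numbers are arbitrary:

```
metadata/
├── root.json     <- latest (version 3)
├── root.2.json   <- previous version
└── root.1.json   <- initial version
```

Each `root.N.json` would be immutable once published, so a stale client can fetch exactly the intermediate versions it is missing.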