Add TAP for client initialization and metadata backstop #128
base: master
Conversation
This is very interesting, thanks for taking a stab at it. I'll leave some comments based on a first read. My POV here is thinking about similar issues with pip, where I imagine multiple pip instances that may have different bootstrap/backstop metadata (some of them relatively safe, some of them not) but share the same metadata cache.

Some of the comments could be misguided because I had a bit of trouble understanding which source of metadata is being referred to in some cases: e.g. steps 5.0 - 5.2 talk about loading initial root metadata, then loading backstop metadata, then loading the root metadata role, and finally mention trusted root metadata in a way that is not crystal clear to me.

For the rationale section, I would also mention the simplest case: the original out-of-band metadata may be protected by filesystem permissions or ACLs in a way that makes it more trustworthy than the cached runtime metadata. No hardware protection is needed to make this TAP a reasonable idea.

My impression is that the TAP has a root update loop that verifies the chain of trust from the backstop root to the most recent cached root (step 5.1), and then later a root update loop from the current cached root to the current remote repository root (step 5.7). Is there a reason not to consider the backstop root as trusted at the beginning of the process, and do a single "update the root metadata file" loop using both the local metadata cache and the remote repository as potential sources for new root versions (let's assume the client caches all root versions)?

At first read the process feels complex: for the non-root cases like timestamp, is it not possible to first establish the current "trusted local metadata" (either backstop, cached, or nothing, based on what's available) and then do the updates from the remote repository using the current update process, making the backstop as much an optional preliminary step to the update as possible? As an example, isn't there redundancy in the rollback checks, or do I misunderstand something: if you make a rollback check for the cached timestamp (against the backstop timestamp, step 5.3.3) and later a rollback check for the downloaded timestamp (against the cached timestamp, step 5.8.3), do you really also need a rollback check for the downloaded timestamp against the backstop timestamp (step 5.8.2)?

There are multiple references to "step 5.6." but that step does not seem to exist. Step 5.2.7 seems to add a root freeze test that seems harsh: the update cycle MUST be aborted if the cached root is expired. That is probably not reasonable for some use cases.
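The single-loop alternative suggested above can be sketched roughly as follows. This is only an illustrative sketch of the idea, not python-tuf API or the TAP's normative workflow; `verify_root` and the two version-indexed sources are hypothetical placeholders.

```python
# Hypothetical sketch: treat the backstop root as the initial trust anchor
# and walk successive root versions, preferring the local metadata cache
# and falling back to the remote repository, in one loop.

def update_root(backstop_root, local_versions, remote_versions, verify_root):
    """Walk root versions N+1, N+2, ... from either source until neither
    can supply the next version; return the latest verified root."""
    trusted = backstop_root
    while True:
        next_version = trusted["version"] + 1
        # Prefer the local metadata cache, fall back to the remote repository.
        candidate = local_versions.get(next_version) or remote_versions.get(next_version)
        if candidate is None:
            return trusted  # no newer root available from either source
        # Each new root must verify against the previous trusted root
        # (normal TUF root-rotation rules); verify_root is a placeholder.
        if not verify_root(trusted, candidate):
            raise ValueError(f"root v{next_version} failed verification")
        trusted = candidate
```

Because version numbers only move forward from the trusted starting point, a separate rollback check between the two loops would become unnecessary in this shape.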
This looks great! I left a couple of comments
tap15.md (Outdated)
> * ...
>
> # Security Analysis
It might be worth mentioning a fast-forward attack on the backstop snapshot metadata (i.e. backstop metadata with very high version numbers). It looks like this is mitigated by deleting fast-forwarded snapshot metadata in section 5.7.10, but an explicit mention here might be helpful.
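The fast-forward recovery being referred to might look something like this in outline. This is a hedged sketch of the general TUF recovery pattern (discard cached timestamp/snapshot when their signing keys are rotated in a new root), not the TAP's exact wording; the metadata shapes and helper name are assumptions.

```python
# Illustrative sketch: after accepting a new root, drop cached
# timestamp/snapshot metadata whose signing keys were rotated, so that a
# fast-forwarded (attacker-inflated) version number in the cache cannot
# block future legitimate updates via the rollback check.

def maybe_discard_fast_forwarded(old_root, new_root, cache):
    """Delete cached timestamp/snapshot entries if their keyids changed."""
    for role in ("timestamp", "snapshot"):
        if old_root["roles"][role]["keyids"] != new_root["roles"][role]["keyids"]:
            cache.pop(role, None)  # drop possibly fast-forwarded metadata
```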
Thanks for writing this up @erickt. Backstop metadata is a great idea.
It took me a couple of attempts to realise what was going on in the rationale section, where the TAP proposes several backstop approaches. As these will likely be the canonical source for how to implement backstop for clients, it would be good to clean those up such that they are easier to reference. I think it would be good to break them out into their own section, number the approaches, and clearly list the pros and cons of each approach.
As @mnm678 already suggested, I think it would be good to resolve #121 first so that we can simplify the specification section of this TAP. Not least of all because the specification has changed enough that the TAP specification section is out of date.
tap15.md (Outdated)
> ## Resisting Rollback and Freeze Attacks with Offline Devices
>
> This TAP enables a TUF system to use a large expiry window to support offline
Do you anticipate backstop and live metadata having the same expiration values? i.e. it seems reasonable to have a large expiration in a backstop metadata file, but would it be problematic to have that same large expiration value in the live repository metadata which is fetched over the network?
tap15.md (Outdated)
> _**5.1**. **Load the backstop metadata versions and hashes**, if any. We assume these version numbers are provided by a trusted out-of-band process._
I'd love to see the TAP expand on this a bit more. At a high level, how do you imagine this being implemented?
Nice observation. It might be good to have a definitions section in the TAP, and then ensure we're being consistent in how we use those terms throughout.
💯
This question made me realise I'm making some assumptions about backstop vs. local cache that we might want the TAP to clarify. I don't know if it's what you are suggesting, but I am very wary of the idea of reaching out to the remote repository to fill in "gaps" in the backstop metadata while loading the backstop. It feels like having distinct "load backstop" and "fetch new" stages help keep rollback and freeze attack protection strong. Especially when using backstop metadata means having large expiration times on timestamp and snapshot.
Indeed, this is more like what I'd imagined too.
Well spotted. I wouldn't expect the backstop root having expired to be an issue, much as the bootstrap root expiration doesn't matter in the current client workflow. Nice review @jku, thanks for taking the time to do it.
For now, I want to focus on the root backstop process and comment on it.
For what it is worth, with respect to freeze attacks / metadata validity, the TUF spec was written with the expectation that clients would update and replace metadata within the expiration lifetime in the normal case. So, I would think it may be reasonable for backstop metadata to have its expiration information ignored. (If we wanted to be slightly more paranoid, we could check that some set of it was valid at all times, but is an attacker able to replace files in the backstop allowed in the threat model?)
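The policy being described, enforcing expiry only for live metadata and skipping it for out-of-band-protected backstop metadata, could be sketched like this. This is an assumption-laden illustration, not spec text; the `is_backstop` flag, the metadata shape, and the ISO-8601 `expires` format used here are all hypothetical.

```python
import datetime

# Sketch of "ignore expiration for backstop metadata": trust in the
# backstop comes from its out-of-band protection (e.g. filesystem ACLs or
# verified boot), not from its expiry date, while live metadata fetched
# over the network is still subject to the normal freeze-attack check.

def check_expiry(metadata, is_backstop, now=None):
    """Return True if this metadata may be used at time `now`."""
    if is_backstop:
        return True  # expiration deliberately ignored for backstop metadata
    if now is None:
        now = datetime.datetime.now(datetime.timezone.utc)
    expires = datetime.datetime.fromisoformat(metadata["expires"])
    return expires > now
```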
> On Wed, Dec 16, 2020 at 10:13 PM Martin Vrachev ***@***.***> wrote:
>
> For now, I want to focus on the root backstop process and comment on it.
>
> 1. On step 5.1: what happens if loading the backstop metadata versions and hashes fails? Probably end the whole process? What if it fails only for a certain role?
> 2. On step 5.2.2, "Try loading version N+1 of the root metadata file from the local repository": when we say local repository, it's good to leave a sentence somewhere explaining that local could mean any secure read-only storage you decide to use. In a perfect scenario you will not access the internet when loading any backstop metadata files, but should we enforce it? Probably the best place to clarify this will be a definitions section, as @joshuagl mentioned.
> 3. I agree with @joshuagl and I am too worried about the idea of larger expiration times for backstop files. This could easily backfire in multiple ways, and it adds its own complexity and limitations for TUF users. Some of the points I can think of:
>    - If we assume the backstop files have a longer expiration date, then doesn't this mean that every file in the backstop chain should have a longer expiration time?
>    - We check for a freeze attack when we have reached the maximum backstop root version locally available, but this means it could be any version. What do we get by checking for a freeze attack when the expiration time is made longer?
>    - If we give larger than usual expiration times to the backstop metadata files, then the user of TUF should be diligent and careful about which version of a metadata file they use as backstop.
> 4. On step 5.2.8: when you say delete "... and the backstop metadata versions and hashes", delete them from where?
Signed-off-by: Joshua Lock <[email protected]>
Nobody is actively leading this TAP, but it's full of good ideas. Assign it a number and mark it as deferred. Signed-off-by: Joshua Lock <[email protected]>
Following the discussion at the TUF working session in Paris, I've tried to update this PR to address review comments and make it mergeable as a Deferred TAP.
This TAP proposes extending the spec to describe how clients should initialize with local metadata, as well as allowing for a metadata backstop to protect against local rollback attacks.
This grew out of a proposal in theupdateframework/specification#107.