
Add TAP for client initialization and metadata backstop #128

Open · wants to merge 6 commits into base: master
Conversation

@erickt commented Nov 4, 2020

This TAP proposes extending the spec to describe how clients should initialize with local metadata, as well as allowing for a metadata backstop to protect against local rollback attacks.

This grew out of a proposal in theupdateframework/specification#107.

tap15.md (outdated review thread, resolved)
@jku (Member) commented Nov 5, 2020

This is very interesting, thanks for taking a stab at it. I'll leave some comments based on a first read. My POV here is thinking about similar issues with pip, where I imagine multiple pip instances that may have different bootstrap/backstop metadata (some of them relatively safe, some of them not) but share the same metadata cache.

Some of the comments could be misguided because I had a bit of trouble understanding which source of metadata is referred to in some cases: e.g. steps 5.0 - 5.2 talk about loading initial root metadata, then loading backstop metadata, then loading the root metadata role, and finally mention trusted root metadata in a way that is not crystal clear to me.

For the rationale section, I would also mention the simplest case: the original out-of-band metadata may be protected by filesystem permissions or ACLs in a way that makes it more trustworthy than the cached runtime metadata -- no hardware protection is needed to make this TAP a reasonable idea.

My impression is that in the TAP there's a root update loop that verifies the chain of trust from the backstop root to the most recent cached root (step 5.1), and then later a root update loop from the current cached root to the current remote repository root (step 5.7). Is there a reason not to consider the backstop root as trusted at the beginning of the process, and do a single "update the root metadata file" loop using both the local metadata cache and the remote repository as potential sources for new root versions (let's assume the client caches all root versions)?
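Roughly, the single loop I have in mind would look like this sketch (all helper and field names here are made up for illustration, not python-tuf API; the signature check is stubbed out):

```python
def verify_root(trusted, candidate):
    # Placeholder for the real TUF check: root N+1 must be signed by a
    # threshold of keys from both root N and root N+1.  Here we only check
    # that the version actually increments by one.
    if candidate["signed"]["version"] != trusted["signed"]["version"] + 1:
        raise ValueError("root version did not increment")

def update_root(backstop_root, local_cache, remote):
    # Treat the backstop root as trusted from the start, then advance one
    # version at a time, preferring the local metadata cache as a source
    # and falling back to the remote repository.
    trusted = backstop_root
    while True:
        next_version = trusted["signed"]["version"] + 1
        candidate = local_cache.get(next_version) or remote.get(next_version)
        if candidate is None:
            return trusted  # no newer root available from either source
        verify_root(trusted, candidate)
        trusted = candidate
```

With that shape there is only one chain-of-trust walk, and whether a given root version came from disk or from the network stops mattering.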

At first read the process feels complex: for the non-root cases like timestamp, is it not possible to first establish the current "trusted local metadata" (either backstop, cached, or nothing, based on what's available) and then do the updates from the remote repository using the current update process -- making the backstop as much an optional preliminary step to the update as possible? As an example, isn't there redundancy in e.g. the rollback checks, or do I misunderstand something: if you make a rollback check for the cached timestamp (against the backstop timestamp, step 5.3.3) and later a rollback check for the downloaded timestamp (against the cached timestamp, step 5.8.3), do you really also need to do a rollback check for the downloaded timestamp against the backstop timestamp (step 5.8.2)?
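The redundancy argument is just transitivity of the version comparison. As a toy illustration (step numbers refer to the current TAP draft; the function is illustrative, not python-tuf API):

```python
class RollbackError(Exception):
    pass

def check_rollback(trusted_version, new_version):
    # New metadata must not claim an older version than what we already trust.
    if new_version < trusted_version:
        raise RollbackError(f"rollback: {new_version} < {trusted_version}")

# If cached >= backstop (step 5.3.3) and downloaded >= cached (step 5.8.3)
# both hold, then downloaded >= backstop follows by transitivity -- which is
# why the direct backstop check on the download (step 5.8.2) looks redundant
# whenever a cached timestamp exists.
backstop, cached, downloaded = 3, 7, 9  # example version numbers
check_rollback(backstop, cached)      # step 5.3.3
check_rollback(cached, downloaded)    # step 5.8.3
assert downloaded >= backstop         # implied, no separate check needed
```

Presumably the direct backstop comparison would still be needed in the case where no cached timestamp exists at all.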

There are multiple references to "step 5.6." but that does not seem to exist.

5.2.7 seems to add a harsh root freeze test: the update cycle MUST be aborted if the cached root is expired. That is probably not reasonable for some use cases.

@mnm678 (Contributor) left a comment

This looks great! I left a couple of comments.

tap15.md (review comment on the Security Analysis section). A Contributor commented:

It might be worth mentioning a fast forward attack on the backstop snapshot metadata (ie backstop metadata with very high version numbers). It looks like this is mitigated by deleting fast-forwarded snapshot metadata in section 5.7.10, but an explicit mention here might be helpful.
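The 5.7.10 mitigation might look roughly like this sketch (hypothetical names, not python-tuf API): when a new root rotates a role's keys, the client discards its trusted metadata for that role, so a fast-forwarded version number can be recovered from.

```python
def discard_fast_forwarded(old_root, new_root, local_cache):
    # If the new root rotates the keys for timestamp or snapshot, delete the
    # locally trusted copy of that role's metadata.  This lets the repository
    # recover from a fast-forward attack: the attacker's absurdly high
    # version number no longer blocks future legitimate updates.
    for role in ("timestamp", "snapshot"):
        if old_root["roles"][role]["keyids"] != new_root["roles"][role]["keyids"]:
            local_cache.pop(role, None)
```

The same recovery path would presumably apply to fast-forwarded backstop snapshot metadata, which is why an explicit cross-reference in the Security Analysis seems worthwhile.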

@joshuagl (Member) left a comment

Thanks for writing this up @erickt. Backstop metadata is a great idea.

It took me a couple of attempts to realise what was going on in the rationale section, where the TAP proposes several backstop approaches. As these will likely be the canonical source for how to implement a backstop in clients, it would be good to clean them up so that they are easier to reference. I think it would be good to break them out into their own section, number the approaches, and clearly list the pros and cons of each approach.

As @mnm678 already suggested, I think it would be good to resolve #121 first so that we can simplify the specification section of this TAP. Not least of all because the specification has changed enough that the TAP specification section is out of date.

tap15.md (review comment on the "Resisting Rollback and Freeze Attacks with Offline Devices" section). A Member commented:

Do you anticipate backstop and live metadata having the same expiration values? i.e. it seems reasonable to have a large expiration in a backstop metadata file, but would it be problematic to have that same large expiration value in the live repository metadata which is fetched over the network?

tap15.md, review comment on lines 140 to 141:
_**5.1**. **Load the backstop metadata versions and hashes**, if any. We assume these
version numbers are provided by a trusted out-of-band process._
A Member commented:

I'd love to see the TAP expand on this a bit more. At a high level, how do you imagine this being implemented?

@joshuagl (Member) commented:
> Some of the comments could be misguided because I had a bit of trouble understanding which source of metadata is referred to in cases: e.g. steps 5.0 - 5.2 talk about loading initial root metadata, then loading backstop metadata, loading root metadata role and finally mentions trusted root metadata in a way that is not crystal clear to me.

Nice observation, it might be good to have a definitions section in the TAP? Then we can ensure we're being consistent in how we use terms throughout the TAP.

> For the rationale section, I would also mention the simplest case: the original out-of-band metadata may be protected by filesystem permissions or ACLs in a way that makes it more trustworthy than the cached runtime metadata -- no hardware protection is needed to make this TAP a reasonable idea.

💯

> My impression is that in the TAP there's a root update loop that verifies the chain of trust from backstop root to most recent cached root (step 5.1), and then later a root update loop from current cached root to current remote repository root (step 5.7). Is there a reason not to consider the backstop root as trusted in the beginning of the process, and do a single "update the root metadata file" loop using both the local metadata cache and the remote repository as potential sources for new root versions (let's assume client caches all root versions)?

This question made me realise I'm making some assumptions about backstop vs. local cache that we might want the TAP to clarify.

I don't know if it's what you are suggesting, but I am very wary of the idea of reaching out to the remote repository to fill in "gaps" in the backstop metadata while loading the backstop. It feels like having distinct "load backstop" and "fetch new" stages helps keep rollback and freeze attack protection strong, especially when using backstop metadata means having large expiration times on timestamp and snapshot.
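The two-stage split could be sketched like this (toy version numbers stand in for full metadata; the names are made up, not python-tuf API):

```python
class RollbackError(Exception):
    pass

def establish_trusted(backstop, cached):
    # Stage 1, fully offline: establish the trusted starting point from the
    # backstop and the runtime cache, with no network access at all.
    if cached is None:
        return backstop
    if cached["version"] < backstop["version"]:
        # The cache was rolled back behind the backstop: distrust it.
        raise RollbackError("cached metadata older than backstop")
    return cached

def refresh(trusted, remote_versions):
    # Stage 2, online: the ordinary TUF update flow, starting from the
    # trusted state produced by stage 1 (modeled here as picking the newest
    # version the remote repository offers).
    newest = max(remote_versions, default=trusted["version"])
    if newest < trusted["version"]:
        raise RollbackError("remote metadata older than trusted")
    return {"version": newest}
```

Keeping stage 1 strictly offline is what preserves the local-rollback protection even if the network is attacker-controlled.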

> At first read the process feels complex: for the non-root cases like timestamp, is it not possible to first establish the current "trusted local metadata" (either backstop or cached or nothing based on what's available) and then do the updates from remote repository using the current update process -- making the backstop as much an optional preliminary step to the update as possible? As an example, isn't there redundancy in e.g. the rollback checks or do I misunderstand something: if you make a rollback check for cached timestamp (against backstop timestamp, step 5.3.3) and later a rollback check for downloaded timestamp (against cached timestamp, step 5.8.3), do you really also need to do a rollback check for the downloaded timestamp (against backstop timestamp, step 5.8.2)?

Indeed, this is more like what I'd imagined too.

> 5.2.7 Seems to add a root freeze test that seems harsh: update cycle MUST be aborted if the cached root is expired. That is probably not reasonable for some use cases.

Well spotted. I wouldn't expect the backstop root having expired to be an issue, much as the bootstrap root expiration doesn't matter in the current client workflow.

Nice review @jku, thanks for taking the time to do it.

@MVrachev (Contributor) commented Dec 16, 2020

For now, I want to focus on the root backstop process and comment on it.

  1. On step 5.1: what happens if loading the backstop metadata versions and hashes fails? Probably end the whole process? What if it fails only for a certain role?

  2. On step 5.2.2, "Try loading version N+1 of the root metadata file from the local repository": when we say local repository, it's good to leave a sentence somewhere explaining that local could mean any secure read-only storage you decide to use. In a perfect scenario you will not access the internet when loading any backstop metadata files, but should we enforce that? Probably, the best place to clarify this will be a definitions section, as @joshuagl mentioned.

  3. I agree with @joshuagl and I too am worried about the idea of larger expiration times for backstop files. This could easily backfire in multiple ways, and it adds its own complexity and limitations for TUF users. Some of the points I can think of:

  • If we assume the backstop files have a longer expiration date, then doesn't this mean that every file in the backstop chain should have a longer expiration time? We check for a freeze attack when we have reached the maximum backstop root version locally available, but this means it could be any version, so every version should have a longer expiration date...
  • What do we get by checking for a freeze attack when the expiration time is made longer?

  4. On step 5.2.8, when you say "delete... and the backstop metadata versions and hashes": delete them from where?

@JustinCappos (Member) commented Dec 17, 2020 via email

Signed-off-by: Joshua Lock <[email protected]>
Nobody is actively leading this TAP, but it's full of good ideas. Assign it
a number and mark it as deferred.

Signed-off-by: Joshua Lock <[email protected]>
@joshuagl (Member) commented:
Following the discussion at the TUF working session in Paris I've tried to update this PR to address review comments and make it mergeable as a Deferred TAP.

7 participants