Annotate resources with URNs to prevent accidental deletion #2999

Draft
blampe wants to merge 4 commits into master
Conversation

@blampe (Contributor) commented May 7, 2024

Proposed changes

User feedback here indicates that upsert behavior is actually desirable, and the concern is more along the lines of preventing accidental resource deletions. I agree with all the concerns -- while it's good to be aware of aliases and deleteBeforeReplace, the fact remains that our default behavior is still very foot-shooty.

What if Pulumi wrote the Pulumi resource name as a pulumi.com/something annotation into the k8s resource. Upon deletion it checks whether the Pulumi resource name matches the value of this annotation, and only if that's the case can it go ahead and actually delete the resource.

I liked this proposal and wanted to see if it was feasible, because it could potentially provide a stricter/safer execution mode without requiring any additional effort from the user.

To quickly recap, #2948 describes a situation where a resource is renamed in Pulumi but unchanged in the cluster.

```diff
-const name1 = new k8s.apps.v1.Deployment("name1", { metadata: { name: "foo" }, ...})
+const name2 = new k8s.apps.v1.Deployment("name2", { metadata: { name: "foo" }, ...})
```

The engine plans this as `create default/foo` followed by `delete default/foo`, so Pulumi thinks it's created a new resource, but the end result is actually a deletion. Yikes!!

The proposal is to annotate the object in the cluster with its URN and then only perform deletion when the URN matches what we expect to be deleting. This is not hard to implement (a minimal sketch of the guard follows the caveats below), but there are a couple caveats:

  • Resources shared by multiple stacks. I don't know how common this is, but we have to assume some people are using multiple stacks to operate on the same cluster resource(s). In those situations this proposal would refuse to delete a resource if it was previously manipulated by another stack. If we want that behavior to continue to work, we can treat URNs belonging to different projects/stacks as valid deletions.
  • Field managers. The create step currently performs an Apply that doesn't actually patch the resource, but it does use a different field manager from what's already recorded in the cluster. With this proposal, the create step would modify the resource to update its URN annotation, and this generates an SSA conflict. We could handle this a few ways:
    1. Surface the SSA conflict. This is the easiest but also the least user-friendly option, since users want the rename to "just work".
    2. Modify how we assign field managers. Instead of giving every resource its own unique field manager, make the field manager consistent for every physical resource in the stack. This allows us to upsert a new URN annotation without a conflict. Patch resources still need unique managers. The way this is implemented, existing field managers would be unchanged, since we prefer to use whatever's already in state (see the apply sketch below).
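
For concreteness, here's a minimal sketch of the deletion guard, written against client-go's dynamic client. The `pulumi.com/resource-urn` annotation key, the helper names, and the cross-stack comparison are illustrative assumptions rather than the actual implementation in this PR:

```go
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/dynamic"
)

// Hypothetical annotation key holding the Pulumi URN of the managing resource.
const urnAnnotation = "pulumi.com/resource-urn"

// sameStackAndProject reports whether two URNs of the form
// urn:pulumi:<stack>::<project>::<type>::<name> share a stack and project.
func sameStackAndProject(a, b string) bool {
	pa, pb := strings.SplitN(a, "::", 3), strings.SplitN(b, "::", 3)
	return len(pa) >= 2 && len(pb) >= 2 && pa[0] == pb[0] && pa[1] == pb[1]
}

// guardedDelete deletes the live object only if its URN annotation matches the
// URN being deleted. URNs recorded by other stacks/projects are still allowed
// through so that multi-stack workflows keep working (the first caveat above).
func guardedDelete(ctx context.Context, client dynamic.ResourceInterface, name, urn string) error {
	live, err := client.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	recorded := live.GetAnnotations()[urnAnnotation]
	if recorded != "" && recorded != urn && sameStackAndProject(recorded, urn) {
		// Same stack/project but a different URN owns this object -- this is
		// the rename scenario, so refuse to delete it.
		return fmt.Errorf("refusing to delete %q: annotated with URN %q, expected %q", name, recorded, urn)
	}
	return client.Delete(ctx, name, metav1.DeleteOptions{})
}
```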

In any case, this is a big enough change that it would need to be opt-in -- what would we call the config to enable this?
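
And a rough sketch of option 2's annotation upsert via server-side apply with a stack-scoped field manager (again, the annotation key and the manager naming are assumptions for illustration, not this PR's exact wiring):

```go
package sketch

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// annotateWithURN applies a minimal patch that owns only the URN annotation.
// Using the same field manager for every physical resource in the stack means
// a rename (new URN, same object) updates the annotation instead of
// conflicting with a previous per-resource manager.
func annotateWithURN(ctx context.Context, client dynamic.ResourceInterface, apiVersion, kind, name, urn, stack string) error {
	patch := map[string]interface{}{
		"apiVersion": apiVersion,
		"kind":       kind,
		"metadata": map[string]interface{}{
			"name": name,
			"annotations": map[string]string{
				"pulumi.com/resource-urn": urn, // hypothetical annotation key
			},
		},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	fieldManager := fmt.Sprintf("pulumi-%s", stack) // one manager per stack, not per resource
	_, err = client.Patch(ctx, name, types.ApplyPatchType, data, metav1.PatchOptions{
		FieldManager: fieldManager,
	})
	return err
}
```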

Refs #2926
Refs #2948


github-actions bot commented May 7, 2024

Does the PR have any schema changes?

Looking good! No breaking changes found.
No new resources/functions.


codecov bot commented May 7, 2024

Codecov Report

Attention: Patch coverage is 3.57143%, with 27 lines in your changes missing coverage. Please review.

Project coverage is 29.82%. Comparing base (09be004) to head (391a3e1).

| Files | Patch % | Lines |
| --- | --- | --- |
| provider/pkg/provider/provider.go | 5.55% | 17 Missing ⚠️ |
| provider/pkg/await/await.go | 0.00% | 9 Missing ⚠️ |
| provider/pkg/await/util.go | 0.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2999      +/-   ##
==========================================
- Coverage   29.86%   29.82%   -0.05%     
==========================================
  Files          63       63              
  Lines        8353     8365      +12     
==========================================
  Hits         2495     2495              
- Misses       5634     5646      +12     
  Partials      224      224              


@blampe marked this pull request as draft on May 8, 2024 at 18:14