DXE-3042 Question about provider behaviour destroy then create - edgedns record #462
Comments
For GTM, there is a wait_on_complete property on akamai_gtm_property; I guess we need the same here.
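For reference, wait_on_complete makes the GTM resources block until the backend change has fully propagated before Terraform moves on. A trimmed-down, hypothetical sketch (the domain and property names are placeholders, and required blocks such as traffic_target and liveness_test are omitted):

```hcl
resource "akamai_gtm_property" "example" {
  domain                 = "example.akadns.net" # placeholder GTM domain
  name                   = "www"
  type                   = "weighted-round-robin"
  score_aggregation_type = "mean"
  handout_limit          = 5
  handout_mode           = "normal"

  # Wait for the change to be fully propagated on the GTM side before
  # Terraform reports the resource as created/updated.
  wait_on_complete = true

  # ... required traffic_target / liveness_test blocks omitted for brevity
}
```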
I am looking at this issue and trying to reproduce it, with no luck so far. Does this happen often? Were there any other records before the first apply? I wonder if maybe you had more than one record, of which one was the mentioned A record. You replaced this A record with a CNAME, completely removed the other records from the .tf file, and hit apply. I can see how this could potentially cause a synchronization issue, but having only one record and replacing it with another record should not have caused this error 🤔.
Hi @majakubiec Yes, the A record is replaced by a CNAME, so the older record type is no longer present in the tf inputs.
Hi, we had the case again yesterday. Here are some details, starting with the terraform plan before apply:
Then the terraform apply failure:
Hi,
I perfectly understand now! Thanks for the explanation. Yes, it would be very nice if we could do something on the provider side, since there are different record types that do not conflict and can appear twice with the same name (I cannot have only one ak-records structure, because I can have test.toto.tld TXT "something" together with test.toto.tld A X.X.X.X), and the conflict appears mainly between the CNAME type and every other record type when attempting a type conversion (any type to CNAME or CNAME to any type). I have (I think) a temporary workaround that consists in forcing the parallelism value to one on tf apply... Thanks for your help
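In case it helps others, the workaround referred to above presumably boils down to limiting Terraform's concurrency so resource operations run one at a time; a minimal sketch using the standard CLI flag (a mitigation reported here, not a fix):

```sh
# Run resource operations serially instead of the default 10 in parallel.
terraform apply -parallelism=1
```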
Do you think it could work if I materialize an explicit dependency:
https://developer.hashicorp.com/terraform/language/meta-arguments/depends_on

Edit: I think it would work for a CNAME to "another type" conversion, but not for "another type" to CNAME, because it will try to create the CNAME before deleting the base type.
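For context, the linked meta-argument declares an explicit ordering between resources; a generic, hypothetical sketch with placeholder names (not a fix for the conversion case discussed here, since the record being removed no longer exists in the config and therefore cannot be referenced):

```hcl
resource "akamai_dns_record" "api_a" {
  zone       = "domain.com"
  name       = "api.domain.com"
  recordtype = "A"
  ttl        = 300
  target     = ["192.0.2.10"] # placeholder address
}

resource "akamai_dns_record" "www_cname" {
  zone       = "domain.com"
  name       = "www.domain.com"
  recordtype = "CNAME"
  ttl        = 300
  target     = ["edge.domain.com"]

  # Explicit ordering: this record is only processed after api_a.
  depends_on = [akamai_dns_record.api_a]
}
```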
I think you are right with the above. We'll try to help you with this, but could you please provide a minimal version of your current configuration? This would allow us to understand your setup more clearly, and potentially reproduce and debug the issue you're facing. Please make sure to redact any sensitive information before sharing your configuration.
Hi @majakubiec I shared some tf HCL with you in a private GH repo.
Hi @hightoxicity
@hightoxicity, we're going to evaluate possible solutions that could mitigate the risk of running into the problem you described. This will require some time, but once we're set on anything we'll let you know. Unfortunately, we were not able to find any specific workarounds that would work with the config you shared with us.
Hi @dstopka, thanks for the update and the work you are about to do on that... To avoid facing this annoying issue again, a temporary fix has been added at the CI level on our side:
Something like this before applying:
Hi there,
I would like some details about the provider behaviour on terraform apply...
I know that terraform's default behaviour is to destroy things before creating new ones, and that is a welcome behaviour in the use case explained below...
Terraform version: 1.2.9
Akamai provider version: 3.2.1
We regularly encounter a typical issue with akamai_dns_record in the following situation: an existing A record (test.domain.com) is replaced in the .tf files by a CNAME with the same name, and the terraform apply that performs this change fails.
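A hypothetical before/after sketch of that change (the zone and targets are placeholders; only the record name comes from the report):

Before:

```hcl
resource "akamai_dns_record" "test_domain_com" {
  zone       = "domain.com"
  name       = "test.domain.com"
  recordtype = "A"
  ttl        = 300
  target     = ["192.0.2.10"] # placeholder address
}
```

After (same name, new record type):

```hcl
resource "akamai_dns_record" "test_domain_com" {
  zone       = "domain.com"
  name       = "test.domain.com"
  recordtype = "CNAME"
  ttl        = 300
  target     = ["target.domain.com"] # placeholder target
}
```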
If I re-run a tf plan after this failure, it now tells me that only a CNAME record will be created (test.domain.com), and after that the terraform apply runs properly.
My conclusion is that the provider does not synchronously destroy the previous A record before creating the new CNAME with the same name, which is a very annoying behaviour...
Is there any existing workaround or anything we can do about this (to get a synchronous destroy of resources), or is it a bug you should fix?
Thanks.