We are deleting the first read region, "eastus", via the following change. We also have to change the failover_priority of the other read regions, because "the maximum value for a failover priority = (total number of regions - 1)."
However, this causes all three read regions to be removed, because "Failover priority values must be unique for each of the regions in which the database account exists. Changing this causes the location to be re-provisioned and cannot be changed for the location with failover priority 0."
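The original configuration diff is not included here, but as a rough sketch (the resource name, regions, and settings below are assumptions, not the reporter's actual configuration), the change amounts to dropping the eastus geo_location block and shifting the failover_priority of the remaining read regions down by one:

```hcl
# Illustrative only: a write region plus three read regions before the change.
resource "azurerm_cosmosdb_account" "example" {
  name                = "example-cosmosdb"
  resource_group_name = "example-rg"
  location            = "westus"
  offer_type          = "Standard"

  consistency_policy {
    consistency_level = "Session"
  }

  geo_location {
    location          = "westus"
    failover_priority = 0 # write region
  }

  geo_location {
    location          = "eastus" # first read region, to be removed
    failover_priority = 1
  }

  geo_location {
    location          = "centralus"
    failover_priority = 2
  }

  geo_location {
    location          = "northeurope"
    failover_priority = 3
  }
}

# After the change, the eastus block is deleted and the remaining read
# regions must be renumbered so priorities stay within
# (total number of regions - 1):
#
#   geo_location {
#     location          = "centralus"
#     failover_priority = 1
#   }
#   geo_location {
#     location          = "northeurope"
#     failover_priority = 2
#   }
#
# It is this renumbering that causes the provider to re-provision every
# read region.
```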
Although the other two read regions are eventually added back, this is not what we expect. We want the Cosmos DB account to always have more than two regions so it remains highly available. In addition, when the regions are removed, private endpoint changes are triggered on the Azure side, and this causes the applications to fail to connect to Cosmos DB, since they connect through these private endpoints.
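For context, the applications reach the account through private endpoints; a minimal sketch of that wiring (the names and the subnet reference are assumptions, not the reporter's actual configuration) looks like the following, and these are the endpoints that are disturbed when the regions are dropped and re-created:

```hcl
# Illustrative only: an application-side private endpoint to the
# Cosmos DB account. The subnet and account references are assumptions.
resource "azurerm_private_endpoint" "cosmosdb" {
  name                = "example-cosmosdb-pe"
  resource_group_name = "example-rg"
  location            = "westus"
  subnet_id           = azurerm_subnet.app.id

  private_service_connection {
    name                           = "example-cosmosdb-psc"
    private_connection_resource_id = azurerm_cosmosdb_account.example.id
    subresource_names              = ["Sql"]
    is_manual_connection           = false
  }
}
```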
We tried the same operation from the Azure portal, and the portal completes it without removing any regions. We are not sure which API endpoint the portal uses, but according to the change history there is no region removal: only the failover priority is changed for all three read regions and, at the same time, the first read region is deleted as requested.
Terraform should use the same approach or endpoints as the Azure portal does for this scenario.
### Expected Behaviour
The other read regions should not be removed, to ensure high availability. The service should not be interrupted.
### Actual Behaviour
The other read regions are removed. On the Azure side, private endpoint changes are triggered, which causes the applications to fail to connect to Cosmos DB, since they use these private endpoints.
While the regions were being removed, the service was interrupted; the service connects to Cosmos DB through private endpoints.
### Steps to Reproduce
_No response_
### Important Factoids
_No response_
### References
_No response_
### Terraform Version
1.7.3
### AzureRM Provider Version
3.97.1
### Affected Resource(s)/Data Source(s)
azurerm_cosmosdb_account