
HA Host Tag Behavior Inconsistency with multiple tags #10070

Open
mredaelli02 opened this issue Dec 9, 2024 · 1 comment · May be fixed by #10240

@mredaelli02

ISSUE TYPE
  • Bug Report
COMPONENT NAME
UI, HOST
CLOUDSTACK VERSION
4.19.1.3, 4.19.1.2

I'm currently using 4.19.1.3, but I noticed the same problem on 4.19.1.2.

CONFIGURATION

advanced networking

OS / ENVIRONMENT
Oracle Linux 8.8; Oracle Linux 8.10 on the host set as ha_host
SUMMARY

I have two different clusters:

  1. In the first cluster, none of the hosts have tags configured, except for one host dedicated to HA with the ha_host tag. As expected, this host is completely unable to start or migrate VMs.
    Example of nodes belonging to the first cluster (the working one):
    Screenshot 2024-12-09 152516
    Screenshot 2024-12-09 152449

Information retrieved through cmk:

  "host": [
    {
      "hahost": true,
      "hostha": {
        "haenable": true,
        "haprovider": "kvmhaprovider",
        "hastate": "Ineligible"
      },
      "hosttags": "ha_host",
      "name": "cloud02"
    },
    {
      "hahost": false,
      "hostha": {
        "haenable": true,
        "haprovider": "kvmhaprovider",
        "hastate": "Ineligible"
      },
      "name": "cloud01"
    }
  ]
  2. In the second cluster, all hosts have multiple tags configured. One of these hosts is dedicated to HA and, like the other hosts in the same cluster, it carries the same tags plus ha_host. It should not be able to start or migrate VMs, but in this case I can use it like a normal host.
    Example of nodes belonging to the second cluster:
    Screenshot 2024-12-09 151919
    Screenshot 2024-12-09 151825

Information retrieved through cmk:

  "host": [
    {
      "hahost": false,
      "hostha": {
        "haenable": true,
        "haprovider": "kvmhaprovider",
        "hastate": "Ineligible"
      },
      "hosttags": "KVM,STORAGE1",
      "name": "cloud09"
    },
    {
      "hahost": true,
      "hostha": {
        "haenable": true,
        "haprovider": "kvmhaprovider",
        "hastate": "Ineligible"
      },
      "hosttags": "KVM,ha_host,STORAGE1",
      "name": "cloudha"
    }
  ]
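For reference, host details like the ones above can be pulled with a cmk call along these lines (the cluster UUID is a placeholder and the filter list is just an example of listHosts response fields):

  # list the routing hosts of a cluster, showing only the fields relevant here
  cmk list hosts type=Routing clusterid=<cluster-uuid> filter=name,hosttags,hahost,hostha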

As you can see, the host is correctly recognised as hahost.
The only difference I noticed is the multiple tags; this should have been fixed by #4789, but apparently it does not work for me.
Is there any possible solution?
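For what it's worth, the behaviour looks like what you would get if the HA tag were compared against the raw hosttags string instead of against each individual tag. A minimal sketch of the two approaches, just to illustrate the suspicion (the class and method names are hypothetical, not taken from the CloudStack code base):

  import java.util.Arrays;

  public class HaTagCheck {
      // Naive check: only matches when ha_host is the host's sole tag,
      // i.e. the single-tag case that works in the first cluster.
      static boolean isHaHostNaive(String hostTags, String haTag) {
          return haTag.equals(hostTags);
      }

      // Tag-aware check: splits the comma-separated list, so a host tagged
      // "KVM,ha_host,STORAGE1" is still recognised as an HA-dedicated host.
      static boolean isHaHostTagAware(String hostTags, String haTag) {
          if (hostTags == null || haTag == null) {
              return false;
          }
          return Arrays.stream(hostTags.split(","))
                  .map(String::trim)
                  .anyMatch(haTag::equals);
      }

      public static void main(String[] args) {
          System.out.println(isHaHostNaive("KVM,ha_host,STORAGE1", "ha_host"));    // false
          System.out.println(isHaHostTagAware("KVM,ha_host,STORAGE1", "ha_host")); // true
      }
  }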

STEPS TO REPRODUCE
  1. In Global Settings, set host.ha=ha_host.
  2. Restart the CloudStack management service.
  3. Set different tags on the hosts.
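For completeness, the steps above map roughly to cmk calls like the following (the setting key is the one referenced above and the host UUID is a placeholder; please verify both in your environment before running anything):

  # 1. point the HA host tag setting at ha_host
  cmk update configuration name=host.ha value=ha_host

  # 2. restart the management server so the setting is picked up
  systemctl restart cloudstack-management

  # 3. tag the HA-dedicated host with ha_host in addition to its existing tags
  cmk update host id=<host-uuid> hosttags=KVM,ha_host,STORAGE1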
EXPECTED RESULTS
Example from the first cluster:
Screenshot 2024-12-09 152837
ACTUAL RESULTS
Second cluster: a VM is running on the ha_host-tagged host, which should not happen.
Screenshot 2024-12-09 153223
Screenshot 2024-12-09 153553


boring-cyborg bot commented Dec 9, 2024

Thanks for opening your first issue here! Be sure to follow the issue template!
