Node selector / Node affinity behavior is inconsistent with Kubernetes Scheduler #1957

Open
nonylene opened this issue Feb 1, 2024 · 0 comments · May be fixed by #1958

Comments

nonylene commented Feb 1, 2024

What steps did you take and what happened:

  1. Set up a node A and set two labels on A: a: b and c: d
  2. Set up another node B and set one label on B: a: b
  3. Set up a sonobuoy plugin with the DaemonSet driver
  • We want to avoid running the plugin on node A
  4. Run the sonobuoy daemonset plugin with either PodSpec configuration below:
  • Case X: Put a node selector that runs Pods only on nodes A and B, and a node affinity that avoids running Pods on node A
  • Case Y: Put a node affinity with two match expressions in a single nodeSelectorTerms entry: one restricts Pods to nodes A and B, the other prevents Pods from running on node A

Case X

podSpec:
  nodeSelector:
    a: b
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: c
            operator: NotIn
            values:
            - d

Case Y

podSpec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: a
            operator: In
            values:
            - b
          - key: c
            operator: NotIn
            values:
            - d
  5. Run the sonobuoy daemonset plugin with the above plugin configuration
  6. Sonobuoy counts both node A and node B as available nodes

In our environment, node A is a Fargate node and node B is a normal node. DaemonSets cannot run on Fargate nodes, so the plugin always fails with a "No pod was scheduled on node A" error.

What did you expect to happen:

Sonobuoy should not count node A as an available node.

This issue is caused by an inconsistency between the Kubernetes scheduler and Sonobuoy in how nodeSelector and nodeAffinity are handled.

Case X: Kubernetes schedules a Pod on a node that satisfies both nodeSelector and nodeAffinity.

If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Currently, Sonobuoy effectively ORs nodeSelector and nodeAffinity (code).
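
For reference, a minimal Go sketch (not Sonobuoy's actual implementation) of the AND semantics the Kubernetes scheduler applies when both fields are set. It uses the upstream k8s.io/api/core/v1 types; the package and function names here are illustrative, and the affinity matcher is passed in as a parameter whose possible implementation is sketched under Case Y below.

package nodefilter

import corev1 "k8s.io/api/core/v1"

// matchesNodeSelector reports whether every nodeSelector entry is present
// on the node as a label with the same value.
func matchesNodeSelector(selector map[string]string, node *corev1.Node) bool {
	for k, v := range selector {
		if node.Labels[k] != v {
			return false
		}
	}
	return true
}

// nodeEligible mirrors the scheduler: when both nodeSelector and a required
// nodeAffinity are set on the PodSpec, the node must satisfy BOTH of them,
// not just one. affinityMatches stands in for the term/expression matching
// sketched under Case Y.
func nodeEligible(spec corev1.PodSpec, node *corev1.Node,
	affinityMatches func(*corev1.NodeSelector, *corev1.Node) bool) bool {
	if !matchesNodeSelector(spec.NodeSelector, node) {
		return false // nodeSelector alone rules the node out
	}
	if a := spec.Affinity; a != nil && a.NodeAffinity != nil &&
		a.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution != nil {
		required := a.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution
		if !affinityMatches(required, node) {
			return false // the required affinity must ALSO match
		}
	}
	return true
}

With the Case X PodSpec above, node A passes the nodeSelector check but fails the affinity check, so it should not be counted as available.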

Case Y: In Kubernetes, match expressions in a single matchExpressions field are ANDed.

If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed).

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Currently, Sonobuoy counts a node as available if at least one expression matches (code).
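
A sketch of the matching the scheduler applies, again using the corev1 types rather than Sonobuoy's own code, with illustrative names and covering only the In and NotIn operators used in this report: terms in nodeSelectorTerms are ORed, while expressions within one term are ANDed.

package nodefilter

import corev1 "k8s.io/api/core/v1"

// matchesRequiredAffinity evaluates requiredDuringSchedulingIgnoredDuringExecution
// the way the scheduler does: nodeSelectorTerms are ORed, but the
// matchExpressions inside a single term are ANDed.
func matchesRequiredAffinity(sel *corev1.NodeSelector, node *corev1.Node) bool {
	for _, term := range sel.NodeSelectorTerms {
		if matchesTerm(term, node) {
			return true // any single term matching is enough (OR across terms)
		}
	}
	return false
}

// matchesTerm returns true only if EVERY expression in the term matches.
// Operators other than In/NotIn are omitted for brevity.
func matchesTerm(term corev1.NodeSelectorTerm, node *corev1.Node) bool {
	for _, expr := range term.MatchExpressions {
		val, ok := node.Labels[expr.Key]
		switch expr.Operator {
		case corev1.NodeSelectorOpIn:
			if !ok || !contains(expr.Values, val) {
				return false
			}
		case corev1.NodeSelectorOpNotIn:
			if ok && contains(expr.Values, val) {
				return false
			}
		default:
			return false
		}
	}
	return true
}

func contains(values []string, v string) bool {
	for _, s := range values {
		if s == v {
			return true
		}
	}
	return false
}

With the Case Y PodSpec above, node A satisfies the In expression but not the NotIn expression, so the single term fails and node A should not be counted; node B satisfies both expressions and should.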

Environment:

  • Sonobuoy version: 0.57.1 (go1.21.4)
  • Kubernetes version (use kubectl version): confirmed with multiple versions, 1.25–1.27
  • Kubernetes installer & version: AWS EKS
  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release): n/a
  • Sonobuoy tarball (which contains * below): Please request if needed
nonylene changed the title from "Node selector / Node affinity behavior is inconsistent with Kubernetes" to "Node selector / Node affinity behavior is inconsistent with Kubernetes Scheduler" on Feb 1, 2024
nonylene added a commit to nonylene/sonobuoy that referenced this issue Feb 5, 2024
Align node filter behavior with the Kubernetes scheduler
to avoid errors when both nodeSelector and nodeAffinity are set
on the PodSpec.

> If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.
>
> https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Issue: vmware-tanzu#1957
Signed-off-by: nonylene <[email protected]>
nonylene added a commit to nonylene/sonobuoy that referenced this issue Feb 5, 2024
Align nodeAffinity matching behavior with the Kubernetes scheduler.

> If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed).
>
> https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Close vmware-tanzu#1957

Signed-off-by: nonylene <[email protected]>