
KEP-4381: DRA update for 1.31 #4709

Merged Jun 13, 2024 (10 commits)

Conversation

@pohly (Contributor) commented Jun 7, 2024

This is based on #4667, which has already been merged.

If you review this anew, start with 4381, then circle back to 3063.

This update is the result of the prototyping in
https://github.com/kubernetes-sigs/wg-device-management/tree/main/dra-evolution.

The key changes compared to 1.30 are:

  • Removal of CRD support. Vendor configuration parameters
    are still supported as raw extensions of the in-tree types.
  • Removal of immediate allocation.
  • Removal of "claim.status.sharable": all claims can be shared when multiple pods reference them.
  • Removal of the notion of a "device model" and a selection mechanism
    that only works for one such model. This simplifies the API.
  • Instead of one claim referencing a ResourceClass, individual
    requests inside a claim now reference a DeviceClass (a rough sketch follows this list).
  • Support for "match attributes".
  • Support for "admin access".
  • Support for "partititional devices".
  • Support for more than one device per request ("all" and fixed amount).
  • Support for selecting devices from a request in a claim for a
    container instead of exposing all devices of a claim.
  • Support for network-attached devices with structured parameters (not a priority, came out of generalizing the pool concept)
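
A rough sketch of the resulting claim shape, tying together the request/DeviceClass, selector, and count bullets above; type and field names are illustrative and may differ from the final API:

type ResourceClaimSpec struct {
    // Requests lists what is being asked for. Each request may be
    // satisfied by one or more devices.
    Requests []Request
}

type Request struct {
    // Name is referenced by containers which want access to the
    // devices allocated for this request.
    Name string

    // DeviceClassName optionally narrows down candidate devices via a
    // pre-defined DeviceClass.
    DeviceClassName string

    // Selectors holds CEL expressions; a device must satisfy all of
    // them to be a candidate.
    Selectors []Selector

    // Count asks for a fixed number of devices; requesting "all"
    // matching devices is the other supported mode.
    Count int64
}

type Selector struct {
    // Expression has access to a `device` variable, for example:
    //   "memory.dra.example.com" in device.attributes
    Expression string
}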

Editorial changes:

  • Reduced usage of the word "resource" in favor of "devices" and
    "DRA drivers".
  • Removal of some outdated text.

This revision of the KEP roughly corresponds to
https://github.com/kubernetes-sigs/wg-device-management/tree/258d109f54d3baaa48719519dec37a450a1dd5a1/dra-evolution
(= kubernetes-sigs/wg-device-management#14 with some minor edits).

Differences:

What was added from kubernetes-sigs/wg-device-management#26:

  • ResourceClaim.Status.Allocation.NodeName -> NodeSelector
  • flattening of the allocation result

Undecided:

@k8s-ci-robot added labels on Jun 7, 2024: cncf-cla: yes, kind/kep, sig/node, size/XXL.
Comment on lines 513 to 526
In addition, whenever kubelet starts, it first deletes all `ResourceSlices`
belonging to the node with a `DeleteCollection` call that uses the node name in
a field filter. This ensures that no pods depending on DRA get scheduled to the
node until the required DRA drivers have started up again (node reboot) and
reconnected to kubelet (kubelet restart). It also ensures that drivers which
don't get started up again at all don't leave stale `ResourceSlices`
behind. Garbage collection does not help in this case because the node object
still exists. For the same reasons, the ResourceSlices belonging to a driver
get removed when the driver unregisters, this time with a field filter for node
name and driver name.
Contributor:

As mentioned elsewhere, does this then imply that we will never be able to store a status on these objects?

Contributor Author:

We can add a status. Once we do, we need to become more careful with deletion and add more rules around it.

Member:

So we are not doing status in 1.31? I thought we were? It's on the cusp, I guess. The more important status is the one in #4681 which lets users see why their pods are failing.

Contributor Author:

Right, it's on the cusp. Someone still needs to define what that status looks like and who populates it (kubernetes-sigs/wg-device-management#19).

I prefer to merge this without a status, just to be sure that we have something that is agreed upon for 1.31. Perhaps we can get an extension to add the status.

Contributor Author:

I looked into this today (see issue comments), but it's not really obvious what the exact use cases are and how to model them in the API. Let's not try for 1.31.

Comment on lines 577 to 594
A ResourceClaim is a request to allocate one or more devices. Each request in a
claim may reference a pre-defined DeviceClass to narrow down which devices are
desired or describe that directly in the request itself through a device
selector. A device selector is a CEL expression that must evaluate to true if a
device satisfies the request. A special `device` variable provides access to
the attributes of a device.
Contributor:

Is there just a single selector per request? At one point we had a design for implicitly "anding" multiple deviceSelector strings. Is that now not part of the design?

Contributor Author:

A request has multiple requirements, and each requirement is a CEL selector. So you can AND them together.

I need to change "a device selector" into "one or more device selectors".
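
A rough sketch of the AND semantics; `evalCEL` is only a stand-in for real CEL evaluation against a device, not actual code:

// deviceMatches returns true only if every selector expression evaluates
// to true for the given device, i.e. the selectors are implicitly ANDed.
func deviceMatches(device map[string]any, selectors []string,
    evalCEL func(expr string, device map[string]any) bool) bool {
    for _, expr := range selectors {
        if !evalCEL(expr, device) {
            return false
        }
    }
    return true
}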

Member:

Maybe selectors is a better name for requirements? So, we would have selectors and constraints.

Contributor Author:

I like that. Changed, with "expression" instead of "deviceSelector".

Contributor Author:

The text still talks about "device selector" because that is what that expression is.

Contributor:

What other fields do we expect to live alongside expression as a one-of in the future?

Contributor Author:

Variable definitions for the CEL expressions was mentioned in one thread somewhere - found it: https://docs.google.com/document/d/1i0xVx884vsWzVaVb72LtEqjsVILmNkeZimAyz4Bj-m0/edit?disco=AAABLY07Bjc

Comment on lines 986 to 1003
// This field can be used to limit access from nodes to slices with
// the same node name. It also indicates to autoscalers that adding
// new nodes of the same type as some old node might also make new
// devices available.
//
// NodeName and PoolName are mutually exclusive.
//
// +optional
NodeName string

// PoolName can be used to describe pools of devices which are not local
// to a node. It can be left blank if the device names are unique in the cluster.
//
// It must not be longer than 253 characters and must consist of one or more DNS sub-domains
// separated by slashes.
//
// +optional
PoolName string
Contributor:

Do we really need two names? Can't PoolName just be the NodeName in cases where all devices are node local?

Contributor:

In general, it's really confusing to see things like:

  • only field1 OR field2 must be set
  • only if field2 is set, then you can set field3

Contributor Author (@pohly, Jun 9, 2024):

We can group "PoolName" and "NodeSelector" under "Pool", a struct containing "Name" and "NodeSelector".

Then "NodeName" and "Pool" are properly mutually exclusive.

Contributor:

I guess the confusing thing to me is that whether we are node-local or not, we still talk about having a "pool" of devices. Which (putting on my hat as a new-comer to this) would imply to me that I should have a PoolName in both cases.

I don't have a suggestion off the top of my head to avoid this (or an alternate name for this), but it's worth noting how confusing this is.

Contributor (@klueska, Jun 10, 2024):

Could we require both PoolName and NodeSelector and in the single node case, set NodeSelector to the one and only node where the devices are available (as well as recommend that PoolName be set to the node name to ensure its uniqueness)?

Contributor Author:

Does PoolName really have to be required? It might not be needed when device names are globally unique.

Contributor Author:

I'm fine with requiring it, driver authors can always provide "global" if they cannot leave it empty - I just want to be sure.

Contributor:

I would say PoolName == NodeName is a suggestion (but it doesn’t have to be so long as it is globally unique) and yes it should be required so we don’t have to over explain the canonical naming convention.

Member:

+1 to this plan and to requiring PoolName, I think this is a lot clearer

Contributor Author:

Okay, changed.

I kept the part which says that a pool name may contain slashes, so we could end up with `dra.example.com/foo/bar/gpu` if a driver wants to separate different parts of its pool name.

Comment on lines 1571 to 1542
// Admins is true if the source of the piece was a class and thus
// not something that a normal user would have been able to set.
Contributor:

source of the piece?

Member:

If this means "came from the class", maybe we should just name it that way: ClassConfig or something?

Contributor Author:

"configuration piece", or just "configuration".

Contributor:

I guess I don't know what a "configuration piece" is...

Contributor Author:

Let's drop the "piece".

Contributor Author:

I'm using "configuration parameters" when I need a plural form. "Configurations" looked odd.

Contributor Author:

I've not renamed it to ClassConfig because I prefer to describe the semantics instead of the mechanism.


AllocationResultModel
DriverConfigurationParameters
Contributor:

Why the extra level of indirection?

Contributor Author:

We agreed that "configuration" is a one-of, with one option right now the opaque vendor configuration. Standardized alternatives could get added later.

Contributor:

I guess I'm still confused as to what was decided for when we embed a struct vs. when we just put the one-of parameters inline. My understanding was that we decided to always put the one-of parameters inline...

Contributor Author:

The ConfigurationParameters is a one-of struct because it gets embedded in a slice where we need a struct. I'm using DriverConfigurationParameters as a struct because it mirrors that other struct.

Inlining a single field also has another challenge. It has to be required, but at the same time has to be a pointer because it might become optional:

type DeviceConfiguration struct {
    Admin bool

    // Currently required, might become optional when alternatives are added. Check for nil!
    Opaque *runtime.RawExtension
}

I find the struct easier to understand.
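
A rough sketch of that slice-of-one-of shape; the names mirror the discussion above and are illustrative:

import "k8s.io/apimachinery/pkg/runtime"

type DeviceClassSpec struct {
    // Config entries are applied in order; each entry is a one-of struct.
    Config []ConfigurationParameters
}

// ConfigurationParameters is a one-of: exactly one member must be set.
// Standardized alternatives to the opaque vendor configuration could be
// added later.
type ConfigurationParameters struct {
    Opaque *DriverConfigurationParameters
}

type DriverConfigurationParameters struct {
    // DriverName tells DRA drivers whether this configuration is meant
    // for them, without having to decode Parameters.
    DriverName string

    // Parameters is an opaque, vendor-defined configuration blob.
    Parameters runtime.RawExtension
}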

// The generation gets bumped in all slices of a pool whenever device
// definitions change. A consumer must only use device definitions from the slices
// with the highest generation number and ignore the others.
PoolGeneration int64
Contributor Author (@pohly, Jun 9, 2024):

Can metadata.generation be used instead of creating a new field?

I tried with the current ResourceSlice implementation in 1.30 and could set metadata.generation on create, but not bump it during patching (tried kubectl apply and kubectl apply --server-side). Perhaps it would work if the type handled the field differently.

Contributor Author:

No, metadata.generation on updates is not possible:
https://github.com/kubernetes/kubernetes/blob/6ba9fa89fb5889550649bfde847c742a55d3d29c/staging/src/k8s.io/apiserver/pkg/registry/rest/update.go#L122-L127

That seems mandatory for all resources, so we have to use a separate field for this cross-resources generation.
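
A rough sketch of the consumer-side rule for that field; ResourceSlice here is a stand-in for the real API type, not actual scheduler code:

type ResourceSlice struct {
    PoolName       string
    PoolGeneration int64
}

// currentSlices keeps only the slices of one pool which carry the highest
// generation; slices with older generations must be ignored.
func currentSlices(pool string, slices []ResourceSlice) []ResourceSlice {
    var highest int64
    for _, s := range slices {
        if s.PoolName == pool && s.PoolGeneration > highest {
            highest = s.PoolGeneration
        }
    }
    var current []ResourceSlice
    for _, s := range slices {
        if s.PoolName == pool && s.PoolGeneration == highest {
            current = append(current, s)
        }
    }
    return current
}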

Member (@thockin) left a comment:

Sending comments after a full day of reading. I stopped at ClaimStatus L1632. This is what you missed by me being sick, but the bill always comes due :) Really, though, I am sorry to have so many thoughts and keep pulling threads you thought were resolved.

FTR: I am not treating this as a FULL review of the API. I'll save that for the PR, but I will flag anything that seems like it will have major impact. I encourage you to send an API review WELL BEFORE the full impl is done, in case we make any structural changes (I know how annoying that is).

// access is requested. Admin access is granted also for
// devices which are in use.
//
// The default if `all` and `count` are unset is to allocate one device.
Member:

Same trap as above - you can't have nothing mean something. If we ever wanted to add a range option, but the client was back-rev, it would see all and count as unset and assume 1, which is wrong.

You can't even get away with saying that the apiserver will default count to 1 if no other field is set. If you have a modern client and a back-rev apiserver, and the client passes range (which the apiserver doesn't know about), the default will kick in, corrupting the client's intention.

AFAICT your choices are:

  1. never add to this one-of and rely on "nothing set" = default

  2. require exactly 1 field, which means trivial use-cases have to say count: 1

  3. add a discriminator, if the discriminator is not set it gets defaulted to "Exact". Any other mode REQUIRES the discriminator be set.

We don't often do the discriminator thing in k8s. We have made the mistake several times and found ourselves in case 1 (e.g. NetworkPolicy). We usually can make case 2 work. In this case I think you might want 3.

Simple case: say nothing, get default CountMode: Exact and Count: 1

If I want 2: set CountMode: Exact and Count: 2

If I want all: set CountMode: All

If I want a range: set CountMode: Range and Range: { min: 1, max: 4 }
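
A rough sketch of that discriminated union (names are illustrative):

type CountMode string

const (
    CountModeExact CountMode = "Exact"
    CountModeAll   CountMode = "All"
    // Hypothetical future addition: a back-rev consumer sees an unknown
    // mode and rejects it instead of silently assuming one device.
    CountModeRange CountMode = "Range"
)

type Amount struct {
    // CountMode is the discriminator. Unset defaults to Exact.
    CountMode CountMode

    // Count is used when CountMode is Exact. Unset defaults to 1.
    Count int64

    // Range would only be used when CountMode is Range (future).
    Range *CountRange
}

type CountRange struct {
    Min int64
    Max int64
}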

Member:

Ahh. That makes sense. I think the discriminator is probably what we want here.

Contributor Author:

The Amount struct is referenced via a pointer in RequestDetail. Therefore if we add a range, the back-rev client will see a pointer to an empty struct, which is safe.

The "if all and count are unset" is how the users sees it and therefore how the documentation is written. What the apiserver and client see is a nil *Amount pointer.

Member:

So, you are saying, "nil" means 1, but "empty" means "there must be something I don't know"; that is, we can differentiate as it is written, no need to change?

Member:

I don't think so. It's too clever and depends on the content-type and language/library details.

E.g. we support JSON as a transport. A forward-rev sender who sends range will unmarshal as a nil Amount in the receiver: https://go.dev/play/p/eQGyV5_UsIV

Also what does it mean if I pass all: false and nothing else?
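
A minimal reproduction of that information loss, assuming illustrative field shapes: a back-rev receiver which only knows `all` and `count` silently drops an unknown `range` field and is left with nothing set.

package main

import (
    "encoding/json"
    "fmt"
)

// Amount is the back-rev receiver's view of the type.
type Amount struct {
    All   *bool  `json:"all,omitempty"`
    Count *int64 `json:"count,omitempty"`
}

func main() {
    sent := []byte(`{"range":{"min":1,"max":4}}`) // forward-rev sender
    var got Amount
    _ = json.Unmarshal(sent, &got) // unknown "range" key is ignored
    fmt.Printf("all=%v count=%v\n", got.All, got.Count) // all=<nil> count=<nil>
}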

Contributor Author:

This count "type" would have to be stored as a string to avoid issues with round-tripping.

But as we probably want to allow count: 1, an IntOrString is a better choice.

Member:

IntOrString is generally seen as a bad idea. JSON supports it, but protobuf (and any other reasonable transport) does not. We should not repeat that, I think.

if we allow countMode: exact and countMode: all in 1.31 and validation rejects everything else, how do we add countMode: range later?

API changes require an alpha. Assuming that we are successful in getting DRA to beta in 1.31:

1.31: we have only the main DRA gate
DRA gate (alpha) off: NA, not usable <-- default
DRA gate (alpha) on: would see "countMode: range" and reject it

1.32: we add a new gate for DRA count ranges
DRA gate (beta) off: NA, not usable
DRA gate (beta) on + range gate (alpha) off: would see "countMode: range" and reject it <-- default
DRA gate (beta) on + range gate (alpha) on: would see "countMode: range" and allow it

1.33: we have 2 gates
DRA gate (beta) off: NA, not usable
DRA gate (beta) on + range gate (beta) off: would see "countMode: range" and reject it
DRA gate (beta) on + range gate (beta) on: would see "countMode: range" and allow it <-- default

Any back-rev scheduler or autoscaler would say "I don't know what to do with that" and reject it.

The important piece is that API gate-checks are always "gate_enabled || already_in_use". In 1.32 you would have to manually enable the range gate (alpha), which waives rollback guarantees. In 1.33 the range gate (beta) would be on by default. If you used a range in 1.33, then rolled back to 1.32, it meets the "already_in_use" criterion and would not fail validation.

Embedding the data into a string could ALSO satisfy this pattern, but we rarely, if ever, do formatted strings this way. The few places we have done it tend to be awkward (e.g. container image tags forcing consumers to parse for :latest). This would either abuse IntOrString (bad) or carry numbers as strings (bad, and yes I hate Quantity for this) and require a grammar and parsing (bad).

Or you can just require that clients pass exactly one option, making count: 1 explicit (no default).
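
A rough sketch of the "gate_enabled || already_in_use" ratchet described above; illustrative, not actual apiserver validation code:

// allowGatedValue accepts a gated value either when its feature gate is
// enabled or when the stored (old) object already used it, so that
// rollbacks and gate flips do not invalidate existing objects.
func allowGatedValue(gateEnabled bool, oldValue, newValue, gatedValue string) bool {
    if newValue != gatedValue {
        return true // the new object does not use the gated value at all
    }
    alreadyInUse := oldValue == gatedValue
    return gateEnabled || alreadyInUse
}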

Contributor Author:

So the already_in_use part is what makes validation not break when downgrading or disabling feature gates. That applies to adding new enum values as well as strings which need to be parsed. But amount: "1" (now) and amount: "1-2" (future extension) require parsing, so that's out.

I think that leaves @thockin's original proposal in this thread as the most viable approach (discriminator with additional fields). I'll keep the current simplistic one-of struct in my implementation PR for now, but will switch to the discriminator approach soon.

Member:

Thanks for working thru it. I am trying to collect information to write a clearer API convention on this specifically.

//
// Must be a DNS subdomain and should end with a DNS domain owned by the
// vendor of the driver.
DriverName string
Member:

If this moved up to type ConfigurationParameters (and was optional) then I think it would be simpler:

type ConfigurationParameters struct {
    DriverName string // optional

    // one-of
    Opaque *runtime.RawExtension
}

Contributor Author (@pohly, Jun 17, 2024):

The driver name for a runtime.RawExtension config cannot be optional, otherwise it is unclear, without peeking into the content, which driver it is for.

Member:

If you don't specify a drivername, pass it to all of them?

Contributor Author:

And then they have to decode the configuration of some other driver to determine whether it is "for them". That seems fragile and error prone: decoding errors would have to be treated as "must be for someone else" and ignored when it really was meant for the driver and just has a typo somewhere.
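
A rough sketch of why a required DriverName keeps that matching cheap (types are illustrative): a driver never has to decode another driver's payload just to find out it was not meant for it.

type OpaqueConfig struct {
    DriverName string
    Data       []byte // vendor-defined payload, decoded only by its own driver
}

func configsFor(driver string, all []OpaqueConfig) []OpaqueConfig {
    var mine []OpaqueConfig
    for _, c := range all {
        if c.DriverName == driver {
            mine = append(mine, c)
        }
    }
    return mine
}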

Member:

Or it causes a decode error which causes the pod to fail, and the user should fix the YAML? I guess that's pretty ugly, but needs to be possible anyway. OK.


The `DeviceClassName` field may be left empty. The request itself can specify which
driver needs to provide devices. There are some corner cases:
- Empty `claim.requests`: this is a "null request" which can be satisfied without
Member:

Why is this useful? Again, don't make nothing mean something. How would you add that hypothetical non-device resource if this field being empty means "null"?

Contributor Author:

The intention behind allowing an empty claim.requests was that it's akin to malloc(0): it's not really wrong and may avoid having to code special cases in code which generates ResourceClaims for a pod.

Not particularly important, so if we feel that it helps avoid user mistakes ("forgot to ask for something by mistake") we can also make empty request an error.

How would you add that hypothetical non-device resource if this field being empty means "null" ?

I wouldn't - those would still be in requests IMHO.

Member:

Now that we agree on requests[*].device, this COULD be empty, but I still don't see why we want to allow it.

Contributor Author:

Okay.

Member (@thockin) left a comment:

Sending replies to comments

devices:
- name: gpu-0
attributes:
- name: memory
Member:

In that proposal I actually suggested that we remove all Quantity fields from DeviceAttributes

If we move allocatable things out of attributes, then this seems reasonable to me, though we would need to bring back IntValue, probably. I doubt that there are non-capacity attributes that benefit from quantity notation, but if that is wrong, we could bring it back.

Users could then request these capacities in a Claim

I think that is ultimately coming, without much doubt. I also think it makes sense to drop it from this release. That doesn't mean that we shouldn't start segregating "attribs" from "capacity". In this release people can treat capacity[] as a 2nd set of attributes and constrain against it. Later they make requests against it.

I guess we could still create the API for partial allocation now, even if actual allocation is whole device. So if the user makes a claim with requests for "10Gi", we will still grant them the whole GPU.

We don't need to put the requests API in yet - just let people reference dev.capacity["memory"] in CEL. "I want a GPU with at least 20Gi of memory, but I get the whole device". When we add requests the "at least" will be implied, but you will get only 20Gi.
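
An illustrative selector along those lines; the exact CEL surface for capacity access and quantity comparisons was still being settled at this point:

// "I want a GPU with at least 20Gi of memory, but I get the whole device."
const gpuMemorySelector = `device.capacity["memory"].isGreaterThan(quantity("20Gi"))`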

Member (@thockin) left a comment:

Mostly resolved comments, just a few left

// and reports the error.
//
// Some examples:
// "memory.dra.example.com" in device.attributes && # Is the attribute available?
Member:

I thought we had landed on NOT qualifying them, which implied the drivername prefix?


pohly added commits to pohly/kubernetes that referenced this pull request (Jun 18 to Jul 21, 2024); k8s-publishing-bot pushed the change to kubernetes/api, kubernetes/client-go, and kubernetes/dynamic-resource-allocation on Jul 23, 2024 (Kubernetes-commit: de5742ae83c8d77268a7caf5f3b1f418c4a13a84), and it was later picked up in several forks. All share the same commit message:

As agreed in kubernetes/enhancements#4709, immediate allocation is one of those features which can be removed because it makes no sense for structured parameters and the justification for classic DRA is weak.
Labels: approved, cncf-cla: yes, kind/kep, lgtm, sig/node, size/XXL