Allow reporting cpu physical core count. #4402

Open
wants to merge 1 commit into master

Conversation

andrewd-zededa
Contributor

cpu.Info() reports a list of CPUs, and that list doubles in length when hyperthreading is enabled. This difference also scales CPU utilization.

New config property: cpu.stats.physicalcore.enabled
Default is set to false to allow opt-in usage.
When set to true, domainmgr will:
- take the last cpu.InfoStat's CoreID,
- add 1 (since CoreID starts at 0), and
- set HostMemory.Ncpus to that value.

Signed-off-by: Andrew Durbin <[email protected]>
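
For illustration only, here is a minimal sketch of the approach described above, assuming the gopsutil cpu package that provides cpu.Info(). This is not the actual diff from the PR, and the function name physicalCoreCount is made up; note that gopsutil exposes CoreID as a string, so it has to be parsed.

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/shirou/gopsutil/v3/cpu"
)

// physicalCoreCount sketches the logic described above: take the CoreID
// of the last cpu.InfoStat, add 1 (CoreIDs start at 0), and treat the
// result as the physical core count (the value HostMemory.Ncpus would
// be set to).
func physicalCoreCount() (int, error) {
	infos, err := cpu.Info()
	if err != nil {
		return 0, err
	}
	if len(infos) == 0 {
		return 0, fmt.Errorf("cpu.Info() returned no entries")
	}
	// gopsutil reports CoreID as a string, so parse it first.
	lastCoreID, err := strconv.Atoi(infos[len(infos)-1].CoreID)
	if err != nil {
		return 0, err
	}
	return lastCoreID + 1, nil
}

func main() {
	n, err := physicalCoreCount()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("physical cores:", n)
}
```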
Member

@OhmSpectator left a comment

I need to review this PR in more detail. As far as I remember, Core IDs are not always continuous in modern CPUs. Also, I want to understand if it affects the CPU allocator logic (part of CPU pinning).

Contributor

@eriknordmark left a comment

Note that this will affect the reported scaled CPU usage. Is that the intent? As opposed to reporting cores instead of hyperthreads as the CPU count shown in the UI.

That means that with hyperthreading the UI can calculate a CPU usage which exceeds 100% (if reportPhyCores is set).

@andrewd-zededa
Contributor Author

@eriknordmark yeah, the intent of this PR is to scale CPU utilization differently. The use case is an EVE node that has 4 physical cores and hyperthreading enabled, giving 8 threads. A VM app is deployed with 2 CPUs and runs a CPU stress utility such as "stress-ng --cpu 2". In this case the VM sees 100% CPU utilization, but in the existing CPU accounting the utilization is scaled by 8 threads, so the top-level EVE node reports 25% utilization. With this config property enabled, the utilization would instead be scaled by 4 cores and report 50% utilization.
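
To make the arithmetic concrete, a tiny illustrative snippet (the numbers are just the hypothetical 4-core/8-thread node from the example above):

```go
package main

import "fmt"

func main() {
	// Hypothetical node from the example above: a VM keeps 2 vCPUs fully
	// busy on a host with 4 physical cores and 8 hyperthreads.
	busyCPUs := 2.0
	threads := 8.0
	cores := 4.0

	fmt.Printf("scaled by threads: %.0f%%\n", busyCPUs/threads*100) // 25%
	fmt.Printf("scaled by cores:   %.0f%%\n", busyCPUs/cores*100)   // 50%
}
```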

This PR is not linked to the existing kubevirt PRs.

@OhmSpectator I started with an alternative option that did a simple scan through the list to find the highest CoreID. I removed that to hopefully skip a loop; it could certainly be reimplemented.
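
For reference, that scan-based alternative might look roughly like this (reusing the gopsutil imports from the sketch above; maxCoreID is an illustrative name, not code from the PR):

```go
// maxCoreID scans every cpu.InfoStat entry and returns the highest
// CoreID plus 1, instead of assuming the last entry carries the
// highest ID. CoreID is a string in gopsutil, so it is parsed here.
func maxCoreID(infos []cpu.InfoStat) (int, error) {
	highest := -1
	for _, info := range infos {
		id, err := strconv.Atoi(info.CoreID)
		if err != nil {
			return 0, err
		}
		if id > highest {
			highest = id
		}
	}
	if highest < 0 {
		return 0, fmt.Errorf("no CPU info entries")
	}
	return highest + 1, nil
}
```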

@eriknordmark
Contributor

> With this config property enabled, the utilization would instead be scaled by 4 cores and report 50% utilization.

But if everything is busy, including the hyperthreads, then it might report up to 200% CPU usage for the edge node. Isn't that going to be confusing?
Note that users who don't care about hyperthreading can disable it in the BIOS AFAIK.
