
feat(single-node): add memory allocation #19895

Open · wants to merge 1 commit into base: main
Conversation

@fuyufjh (Member) commented Dec 23, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Follow up of #19477. This makes it work in single-node mode.

Checklist

  • I have written necessary rustdoc comments.
  • I have added necessary unit tests and integration tests.
  • I have added test labels as necessary.
  • I have added fuzzing tests or opened an issue to track them.
  • My PR contains breaking changes.
  • My PR changes performance-critical code, so I will run (micro) benchmarks and present the results.
  • My PR contains critical fixes that are necessary to be merged into the latest release.

Documentation

  • My PR needs documentation updates.
Release note

compactor_opts.compactor_total_memory_bytes = memory_for_compactor(system_total_mem);
compute_opts.total_memory_bytes = system_total_mem
- memory_for_frontend(system_total_mem)
- memory_for_compactor(system_total_mem);
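Read as plain arithmetic, the snippet above gives the compute node whatever remains after the frontend and compactor take their shares, so the three budgets sum exactly to the total. A minimal sketch of that arithmetic (the `memory_for_*` bodies below are hypothetical stand-ins; the real fractions are defined in the PR):

```rust
// Hypothetical stand-ins for the PR's memory_for_frontend /
// memory_for_compactor helpers; the fractions are illustrative only.
fn memory_for_frontend(total: u64) -> u64 {
    total / 10 // assume the frontend gets 10% (invented for this sketch)
}

fn memory_for_compactor(total: u64) -> u64 {
    total / 5 // assume the compactor gets 20% (invented for this sketch)
}

/// Single-node allocation as in the diff: the compute node receives
/// the remainder, so frontend + compactor + compute == total.
fn single_node_allocation(system_total_mem: u64) -> (u64, u64, u64) {
    let frontend = memory_for_frontend(system_total_mem);
    let compactor = memory_for_compactor(system_total_mem);
    let compute = system_total_mem - frontend - compactor;
    (frontend, compactor, compute)
}
```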
@hzxa21 (Collaborator) commented Dec 23, 2024

I had a discussion with @Li0k recently about the memory allocation between CN and compactor in standalone mode.
Let's say CN_MEM = x * Total_MEM, Compactor_MEM = y * Total_MEM, FE_MEM = z * Total_MEM. The choices are:

  1. x + y + z = 1

    • Pros: more likely to avoid OOM.
    • Cons: less efficient memory usage, because when the load on compaction/FE is light, the reserved memory is wasted and cannot be used by CN.
  2. x + y + z > 1

    • Pros: more efficient memory usage, because the CN operator cache can use the idle memory when the compactor/FE is not fully loaded.
    • Cons: more vulnerable to OOM, because CN operator cache eviction is lazy.

I lean more towards 2 because it fits better with the CN dynamic operator cache design. For example, we could have x = y = z = 0.8, as well as gradient allocation. WDYT?
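For comparison, option 2 can be sketched as follows (the 0.8 fraction is just the example value from this comment, not a committed default):

```rust
/// Sketch of option 2: each component is sized against the whole
/// machine, so the nominal budgets overcommit (their sum exceeds the
/// total), relying on the CN's lazy, jemalloc-driven cache eviction
/// to keep actual usage in bounds.
fn overcommit_allocation(total_mem: u64) -> (u64, u64, u64) {
    let frac = |f: f64| (total_mem as f64 * f) as u64;
    // x = y = z = 0.8, the example values from the comment above.
    (frac(0.8), frac(0.8), frac(0.8)) // (CN, compactor, FE)
}
```

The trade-off named above shows up directly: the nominal budgets no longer partition the machine, so correctness depends on the eviction mechanism rather than on the arithmetic.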

@fuyufjh (Member, Author) commented Dec 23, 2024

The CN operator cache is special - keep in mind that it's controlled according to the process-level jemalloc statistics. (See memory_manager_target_bytes in the next line.)

Here, the 3 options:

  • frontend_opts.frontend_total_memory_bytes
  • compactor_opts.compactor_total_memory_bytes
  • compute_opts.total_memory_bytes

mostly decide the storage memory, including the upload buffer and the meta & block caches:

  • frontend_opts.frontend_total_memory_bytes (code here)
    • batch_memory_limit
  • compactor_opts.compactor_total_memory_bytes (code here)
    • meta_cache_capacity_bytes
    • compactor_memory_limit_bytes
  • compute_opts.total_memory_bytes (code here)
    • block_cache_capacity_mb
    • meta_cache_capacity_mb
    • shared_buffer_capacity_mb
    • (embedded) compactor_memory_limit_mb
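To make the fan-out concrete, here is a hedged sketch of how a compute node's total memory budget might be divided into the storage sub-budgets listed above (the fractions are invented for illustration; the actual policy lives in the linked code):

```rust
/// Hypothetical split of compute_opts.total_memory_bytes into the
/// storage sub-budgets named above. Fractions are illustrative only,
/// not RisingWave's actual defaults.
fn split_compute_memory(total_bytes: u64) -> (u64, u64, u64, u64) {
    let frac = |f: f64| (total_bytes as f64 * f) as u64;
    let block_cache = frac(0.3);      // block_cache_capacity_mb
    let meta_cache = frac(0.1);       // meta_cache_capacity_mb
    let shared_buffer = frac(0.2);    // shared_buffer_capacity_mb
    let compactor_limit = frac(0.1);  // (embedded) compactor_memory_limit_mb
    // The remainder is left for the operator cache and everything else.
    (block_cache, meta_cache, shared_buffer, compactor_limit)
}
```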

As you are more familiar with storage, I'll follow your decision.

Besides, this doesn't count Meta's memory usage (because there is no such option), so actually the sum is already > 1.
