
DO NOT MERGE: cherry pick https://github.com/neondatabase/neon/pull/8763 onto release branch for testing in pre-prod #8857

Closed


@yliang412 (Contributor) commented Aug 28, 2024

Part of #8130, closes #8719.

## Problem

Currently, vectored blob IO only coalesces blocks if they are immediately
adjacent to each other. When we switch to direct IO, we need a way to
coalesce blobs that fall within the same dio-aligned boundary but have gaps
between them.
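
To make the coalescing condition concrete, here is a minimal Rust sketch, not the pageserver's actual code: `DIO_ALIGN`, `chunk_index`, and `same_chunk` are hypothetical names invented for the example. The idea is that two blobs can share one aligned read when the last byte of the first and the first byte of the second land in the same dio-aligned chunk, even with a gap between them.

```rust
/// Hypothetical alignment value for this example; in the real system the
/// requirement comes from `stx_dio_offset_align` reported by statx(2).
const DIO_ALIGN: u64 = 512;

/// Index of the DIO_ALIGN-sized chunk containing `offset`.
fn chunk_index(offset: u64) -> u64 {
    offset / DIO_ALIGN
}

/// Two blobs can be served by one aligned read when the previous blob's last
/// byte and the next blob's first byte fall in the same chunk, gap or not.
fn same_chunk(prev_end_exclusive: u64, next_start: u64) -> bool {
    chunk_index(prev_end_exclusive.saturating_sub(1)) == chunk_index(next_start)
}

fn main() {
    // Blob A occupies [0, 100) and blob B occupies [300, 400): a 200-byte
    // gap, but both fit inside the first 512-byte chunk, so one read covers
    // them; adjacent-only coalescing would have issued two reads.
    assert!(same_chunk(100, 300));
    // A blob starting at 600 lives in the second chunk, so it is not
    // coalesced with blob A by this rule.
    assert!(!same_chunk(100, 600));
}
```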

## Summary of changes

- Introduces a `VectoredReadCoalesceMode` for `VectoredReadPlanner` and
  `StreamingVectoredReadPlanner`, with two modes (see the sketch after this
  list):
  - `AdjacentOnly` (the current implementation)
  - `Chunked(<alignment requirement>)`
- A new `ChunkedVectorBuilder` that batches `dio-align`-sized reads; the
  start and end of the vectored read respect `stx_dio_offset_align` /
  `stx_dio_mem_align`, so `vectored_read.start` and
  `vectored_read.blobs_at.first().start_offset` can be two different values.
- Since we break the assumption that blobs within a single `VectoredRead`
  are adjacent to each other (which previously made each blob's end offset
  implicit), we now store blob end offsets explicitly in the `VectoredRead`.
- Adapted existing tests to run in both `VectoredReadCoalesceMode` variants.
- The IO alignment can also be configured live at runtime.
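
As a rough illustration of the mode and the aligned read boundaries described above, here is a hedged Rust sketch; the enum name mirrors the PR description, but the fields, signatures, and the `BlobAt` struct are illustrative assumptions, not the actual pageserver API.

```rust
/// Illustrative mirror of the mode described above; not the actual type.
#[derive(Clone, Copy)]
enum VectoredReadCoalesceMode {
    /// Only merge blobs that are immediately adjacent (previous behavior).
    AdjacentOnly,
    /// Merge blobs within the same chunk of the given alignment, which must
    /// satisfy `stx_dio_offset_align` / `stx_dio_mem_align`.
    Chunked(u64),
}

/// Round the read boundaries outward to the alignment. This is why
/// `vectored_read.start` and the first blob's `start_offset` can differ:
/// the read start is aligned down, while the blob keeps its true offset.
fn aligned_read_range(first_blob_start: u64, last_blob_end: u64, align: u64) -> (u64, u64) {
    let start = first_blob_start / align * align;          // round down
    let end = (last_blob_end + align - 1) / align * align; // round up
    (start, end)
}

/// With gaps allowed inside one read, a blob's end can no longer be
/// inferred from the next blob's start, so it is stored explicitly
/// (hypothetical struct, standing in for the blob metadata in the PR).
struct BlobAt {
    start_offset: u64,
    end_offset: u64,
}

fn main() {
    let _mode = VectoredReadCoalesceMode::Chunked(512);
    // A blob at [700, 900) with 512-byte alignment: the read covers
    // [512, 1024) while the blob itself still starts at 700.
    assert_eq!(aligned_read_range(700, 900, 512), (512, 1024));
    let _blob = BlobAt { start_offset: 700, end_offset: 900 };
}
```

The design point worth noting: aligning the read outward trades a few extra bytes of IO for reads that direct IO will accept at all, since O_DIRECT rejects unaligned offsets and lengths with EINVAL.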

Signed-off-by: Yuchen Liang <[email protected]>
@problame changed the title from "pageserver: do vectored read on each dio-aligned section once (#8763)" to "DO NOT MERGE: cherry pick https://github.com/neondatabase/neon/pull/8763 onto release branch for testing in pre-prod" on Aug 28, 2024

github-actions bot commented Aug 28, 2024

3792 tests run: 3682 passed, 4 failed, 106 skipped (full report)


Failures on Postgres 14

  • test_sql_regress[4]: release-arm64
  • test_remote_storage_backup_and_restore[False-local_fs]: release-arm64
  • test_delete_timeline_exercise_crash_safety_failpoints[Check.RETRY_WITH_RESTART-timeline-delete-before-rm]: release-arm64
  • test_ancestor_detach_branched_from[False-True-at]: release-arm64
```bash
# Run all failed tests locally:
scripts/pytest -vv -n $(nproc) -k "test_sql_regress[release-pg14-4] or test_remote_storage_backup_and_restore[release-pg14-False-local_fs] or test_delete_timeline_exercise_crash_safety_failpoints[release-pg14-Check.RETRY_WITH_RESTART-timeline-delete-before-rm] or test_ancestor_detach_branched_from[release-pg14-False-True-at]"
```
Flaky tests (10)

Postgres 14

  • test_sql_regress[4]: release-arm64
  • test_pg_regress[4]: release-arm64
  • test_remote_storage_backup_and_restore[False-local_fs]: release-arm64
  • test_delete_timeline_exercise_crash_safety_failpoints[Check.RETRY_WITH_RESTART-timeline-delete-before-index-delete]: release-arm64
  • test_delete_timeline_exercise_crash_safety_failpoints[Check.RETRY_WITH_RESTART-timeline-delete-after-index-delete]: release-arm64
  • test_delete_timeline_exercise_crash_safety_failpoints[Check.RETRY_WITH_RESTART-timeline-delete-before-rm]: release-arm64
  • test_ancestor_detach_branched_from[True-True-at]: release-arm64
  • test_ancestor_detach_branched_from[True-True-after]: release-arm64
  • test_ancestor_detach_branched_from[False-True-at]: release-arm64
  • test_pull_timeline_partial_segment_integrity: release-arm64

Test coverage report is not available

The comment gets automatically updated with the latest test results
714594c at 2024-08-28T18:51:50.923Z :recycle:

@yliang412 (Author) commented:

Done with the experiment.

@yliang412 closed this Sep 3, 2024