What type of bug is this?
Unexpected error
What subsystems are affected?
Distributed Cluster
Minimal reproduce step
This error occasionally happens while migrating a region that is receiving a large volume of writes.
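The report does not include the exact command used to start the migration, so the following is only a sketch of the scenario, assuming the migrate_region admin function from the region-migration documentation (syntax may differ between versions); only the region id is taken from the error below, the peer ids and timeout are placeholders:

-- Sketch only, not the exact command used in this report.
-- 1599385691488256 is the region id from the error below; the peer ids (2 -> 1)
-- and the 120s replay timeout are placeholders for illustration.
-- Keep a heavy, continuous insert load on the table owning this region while the migration runs.
ADMIN migrate_region(1599385691488256, 2, 1, 120);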
What did you expect to see?
The region is available for writes after the migration completes
What did you see instead?
Unable to write to the region:
{"timestamp":"2024-11-13T17:49:16.585732Z","level":"ERROR","fields":{"message":"Failed to handle request","err":"0: Execute gRPC request error, at greptimedb/src/datanode/src/region_server.rs:381:18\n1: Failed to handle request for region 1599385691488256(372386, 0), at greptimedb/src/datanode/src/region_server.rs:770:14\n2: Failed to write region\n3: Failed to write WAL, at /home/runner/work/greptimedb-cloud/greptimedb-cloud/greptimedb/src/mito2/src/wal.rs:213:14\n4: Attempt to append discontinuous log entry, region: 1599385691488256(372386, 0), last index: 6245314, attempt index: 21596978"},"target":"servers::grpc::region_server"}
What operating system did you use?
Unrelated
What version of GreptimeDB did you use?
0.9.5
Relevant log output and stack trace
Error:
{"timestamp":"2024-11-13T17:49:16.585732Z","level":"ERROR","fields":{"message":"Failed to handle request","err":"0: Execute gRPC request error, at greptimedb/src/datanode/src/region_server.rs:381:18\n1: Failed to handle request for region 1599385691488256(372386, 0), at greptimedb/src/datanode/src/region_server.rs:770:14\n2: Failed to write region\n3: Failed to write WAL, at /home/runner/work/greptimedb-cloud/greptimedb-cloud/greptimedb/src/mito2/src/wal.rs:213:14\n4: Attempt to append discontinuous log entry, region: 1599385691488256(372386, 0), last index: 6245314, attempt index: 21596978"},"target":"servers::grpc::region_server"}
dn-2:
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072618Z","level":"INFO","fields":{"message":"Deregister alive countdown for region 1599385691488256(372386, 0)"},"target":"datanode::alive_keeper"}
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072556Z","level":"INFO","fields":{"message":"Region 1599385691488256(372386, 0) is deregistered from engine mito"},"target":"datanode::region_server","span":{"request_type":"Close","name":"handle_request"},"spans":[{"request_type":"Close","name":"handle_request"}]}
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072199Z","level":"INFO","fields":{"message":"Region 1599385691488256(372386, 0) closed, worker: 0"},"target":"mito2::worker::handle_close"}
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072193Z","level":"INFO","fields":{"message":"Stopped region manifest manager, region_id: 1599385691488256(372386, 0)"},"target":"mito2::region"}
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072182Z","level":"INFO","fields":{"message":"Try to close region 1599385691488256(372386, 0), worker: 0"},"target":"mito2::worker::handle_close"}
2024-11-14T01:41:50+08:00 {"timestamp":"2024-11-13T17:41:50.072054Z","level":"INFO","fields":{"message":"Closing staled region: 1599385691488256(372386, 0)"},"target":"datanode::alive_keeper"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.315438Z","level":"INFO","fields":{"message":"Namespace 1599385691488256 obsoleted 215 entries, compacted index: 21596977, span: (None, None)"},"target":"log_store::raft_engine::log_store"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.315385Z","level":"INFO","fields":{"message":"Region 1599385691488256(372386, 0) flush finished, tries to bump wal to 21596977"},"target":"mito2::worker::handle_flush"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.315116Z","level":"INFO","fields":{"message":"Successfully update manifest version to 20492, region: 1599385691488256(372386, 0), reason: Downgrading"},"target":"mito2::flush"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.261712Z","level":"INFO","fields":{"message":"Applying RegionEdit { files_to_add: [FileMeta { region_id: 1599385691488256(372386, 0), file_id: FileId(42ab1c3f-06d3-4ae1-84bf-abaef1d6d6e1), time_range: (1731519432292::Millisecond, 1731519688257::Millisecond), level: 0, file_size: 1349474, available_indexes: [], index_file_size: 0, num_rows: 13893, num_row_groups: 1 }], files_to_remove: [], compaction_time_window: None, flushed_entry_id: Some(21596977), flushed_sequence: Some(2589152525) } to region 1599385691488256(372386, 0)"},"target":"mito2::flush"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.261682Z","level":"INFO","fields":{"message":"Successfully flush memtables, region: 1599385691488256(372386, 0), reason: Downgrading, files: [FileId(42ab1c3f-06d3-4ae1-84bf-abaef1d6d6e1)], cost: 0.203302411s"},"target":"mito2::flush"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.058247Z","level":"INFO","fields":{"message":"Flush region: 1599385691488256(372386, 0) before converting region to follower"},"target":"datanode::heartbeat::handler::downgrade_region"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.058184Z","level":"INFO","fields":{"message":"Convert region 1599385691488256(372386, 0) to downgrading region, previous role state: Leader(Writable)"},"target":"mito2::region"}
2024-11-14T01:41:49+08:00 {"timestamp":"2024-11-13T17:41:49.058047Z","level":"INFO","fields":{"message":"Received mailbox message: MailboxMessage { id: 8447, subject: \"Downgrade leader region: 1599385691488256(372386, 0)\", from: \"[email protected]:3002\", to: \"[email protected]:4001\", timestamp_millis: 1731519709057, payload: Some(Json(\"{\\\"DowngradeRegion\\\":{\\\"region_id\\\":1599385691488256,\\\"flush_timeout\\\":{\\\"secs\\\":120,\\\"nanos\\\":0},\\\"reject_write\\\":true}}\")) }, meta_client id: (0, 2)"},"target":"datanode::heartbeat"}
2024-11-14T01:40:15+08:00 {"timestamp":"2024-11-13T17:40:15.351106Z","level":"INFO","fields":{"message":"Namespace 1599385691488256 obsoleted 1323 entries, compacted index: 21596762, span: (Some(21596763), Some(21596763))"},"target":"log_store::raft_engine::log_store"}