
Releases: tikv/tikv

tikv-server v6.1.1

01 Sep 03:39
c518118

Improvements

  • Support compressing the metrics response using gzip to reduce the HTTP body size #12355 @winoros
  • Support reducing the amount of data returned for each request by filtering out some metrics using the server.simplify-metrics configuration item #12355 @glorv
  • Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (rocksdb.max-sub-compactions) #13145 @ethercflow (see the configuration sketch after this list)
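
A minimal tikv.toml sketch covering the two configuration items above, assuming only that they live in the [server] and [rocksdb] sections their dotted names suggest; the values are illustrative, not recommendations:

```toml
# Illustrative tikv.toml excerpt; values are examples only.

[server]
# Filter out some metrics to shrink each metrics response (#12355).
simplify-metrics = true

[rocksdb]
# Concurrent sub-compaction jobs; dynamically modifiable as of this release (#13145).
max-sub-compactions = 3
```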

Bug fixes

  • Fix a bug that Regions might overlap if Raftstore is busy #13160 @5kbpers
  • Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies
  • Fix the issue that TiKV panics when performing type conversion for an empty string #12673 @wshwsh12
  • Fix the issue of inconsistent Region size configuration between TiKV and PD #12518 @5kbpers
  • Fix the issue that encryption keys are not cleaned up when Raft Engine is enabled #12890 @tabokie
  • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825 @BusyJay
  • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663 @BusyJay
  • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345 @Connor1996
  • Fix potential panic when parallel recovery is enabled for Raft Engine #13123 @tabokie
  • Fix the issue that the Commit Log Duration of a new Region is too high, which causes QPS to drop #13077 @Connor1996
  • Fix rare panics when Raft Engine is enabled #12698 @tabokie
  • Avoid redundant log warnings when proc filesystem (procfs) cannot be found #13116 @tabokie
  • Fix the wrong expression of Unified Read Pool CPU in dashboard #13086 @glorv
  • Fix the issue that when a Region is large, the default region-split-check-diff might be larger than the bucket size #12598 @tonyxuqqi
  • Fix the issue that TiKV might panic when Apply Snapshot is aborted and Raft Engine is enabled #12470 @tabokie
  • Fix the issue that the PD client might cause deadlocks #13191 @bufferflies #12933 @BurtonQin (Boqin Qin)

tikv-server v6.2.0

23 Aug 00:25

For the complete and official release notes, see https://docs.pingcap.com/tidb/v6.2/release-6.2.0.

Improvements

  • Support compressing the metrics response using gzip to reduce the HTTP body size #12355 @glorv
  • Improve the readability of the TiKV panel in Grafana Dashboard #12007 @kevin-xianliu
  • Optimize the commit pipeline performance of the Apply operator #12898 @ethercflow
  • Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (rocksdb.max-sub-compactions) #13145 @ethercflow

Bug fixes

  • Avoid reporting WriteConflict errors in pessimistic transactions #11612 @sticnarf
  • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615 @sticnarf
  • Fix the issue that TiKV panics when modifying the storage.api-version from 1 to 2 #12600 @pingyu
  • Fix the issue of inconsistent Region size configuration between TiKV and PD #12518 @5kbpers
  • Fix the issue that TiKV keeps reconnecting PD clients #12506, #12827 @Connor1996
  • Fix the issue that TiKV panics when performing type conversion for an empty string #12673 @wshwsh12
  • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739 @gengliqi
  • Fix the issue that the perf context written by the Apply operator to TiKV RocksDB is coarse-grained #11044 @LykxSassinator
  • Fix the issue that TiKV fails to start when the configuration of backup/import/cdc is invalid #12771 @3pointer
  • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825 @BusyJay
  • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663 @BusyJay
  • Fix the panic issue caused by analyzing statistics when max_sample_size is set to 0 #11192 @LykxSassinator
  • Fix the issue that encryption keys are not cleaned up when Raft Engine is enabled #12890 @tabokie
  • Fix the issue that the get_valid_int_prefix function is incompatible with TiDB. For example, the FLOAT type was incorrectly converted to INT #13045 @guo-shaoge
  • Fix the issue that the Commit Log Duration of a new Region is too high, which causes QPS to drop #13077 @Connor1996
  • Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies

tikv-server v5.4.2

08 Jul 01:43

Improvements

  • Automatically reload the TLS certificate on each update to improve availability (see the sketch after this list) #12546
  • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
  • Transfer the leadership to the CDC observer to reduce latency jitter #12111
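
For context, a sketch of the standard TiKV TLS settings that the automatic reload in #12546 applies to; the paths are placeholders:

```toml
# Illustrative tikv.toml excerpt; paths are placeholders.
[security]
ca-path = "/path/to/ca.pem"
cert-path = "/path/to/tikv.pem"
key-path = "/path/to/tikv-key.pem"
```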

Bug Fixes

  • Fix the panic issue caused by analyzing statistics when max_sample_size is set to 0 #11192
  • Fix the potential issue of mistakenly reporting TiKV panics when exiting TiKV #12231
  • Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663
  • Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825
  • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345
  • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739
  • Fix the issue that TiKV panics when performing type conversion for an empty string #12673
  • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615
  • Fix the issue that TiKV reports the invalid store ID 0 error when using Follower Read #12478
  • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
  • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12329
  • Fix the issue of failing to start TiKV on AUFS #12543

tikv-server v5.3.2

29 Jun 02:29

Improvements

  • Reduce the number of system calls made by the Raft client to increase CPU efficiency #11309
  • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
  • Transfer the leadership to the CDC observer to reduce latency jitter #12111
  • Add more metrics for the garbage collection module of Raft logs to locate performance problems in the module #11374

Bug Fixes

  • Fix the issue of frequent PD client reconnection that occurs when the PD client meets an error #12345
  • Fix the issue of time parsing error that occurs when the DATETIME values contain a fraction and Z #12739
  • Fix the issue that TiKV panics when performing type conversion for an empty string #12673
  • Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615
  • Fix the bug that TiKV reports the invalid store ID 0 error when using Follower Read #12478
  • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
  • Fix the issue that successfully committed optimistic transactions may report the Write Conflict error when the network is poor #34066
  • Fix the issue that TiKV panics and destroys peers unexpectedly when the target Region to be merged is invalid #12232
  • Fix a bug that stale messages cause TiKV to panic #12023
  • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
  • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
  • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12329
  • Fix a bug that replica reads might violate linearizability #12109
  • Fix the TiKV panic issue that occurs when the target peer is replaced with the peer that is destroyed without being initialized when merging a Region #12048
  • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940

tikv-server v6.1.0

13 Jun 03:00

Improvements

  • Improve the old value hit rate of CDC when using in-memory pessimistic lock #12279
  • Improve the health check to detect unavailable Raftstore, so that the TiKV client can update Region Cache in time #12398
  • Support setting memory limit on Raft Engine #12255
  • TiKV automatically detects and deletes damaged SST files to improve product availability #10578
  • CDC supports RawKV #11965
  • Support splitting a large snapshot file into multiple files #11595
  • Move the snapshot garbage collection from Raftstore to a background thread to prevent snapshot GC from blocking Raftstore message loops #11966
  • Support dynamic setting of the maximum message length (max-grpc-send-msg-len) and the maximum batch size of gRPC messages (raft-msg-max-batch-size) #12334 (see the configuration sketch after this list)
  • Support executing online unsafe recovery plan through Raft #10483
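
A minimal tikv.toml sketch of the new knobs above; the values are examples only, and the section placement is inferred from the dotted item names:

```toml
# Illustrative tikv.toml excerpt; values are examples, not recommendations.

[raft-engine]
# Memory limit on Raft Engine (#12255).
memory-limit = "1GB"

[server]
# Both items became dynamically settable in this release (#12334).
max-grpc-send-msg-len = "10MB"
raft-msg-max-batch-size = 128
```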

Bug fixes

  • Fix the issue that the Raft log lag is increasing when a TiKV instance is taken offline #12161
  • Fix the issue that TiKV panics and destroys peers unexpectedly because the target Region to be merged is invalid #12232
  • Fix the issue that TiKV reports the failed to load_latest_options error when upgrading from v5.3.1 or v5.4.0 to v6.0.0 #12269
  • Fix the issue of OOM caused by appending Raft logs when the memory resource is insufficient #11379
  • Fix the issue of TiKV panic caused by the race between destroying peers and batch splitting Regions #12368
  • Fix the issue of TiKV memory usage spike in a short time after stats_monitor falls into a dead loop #12416
  • Fix the issue that TiKV reports the invalid store ID 0 error when using Follower Read #12478

tikv-server v5.4.1

13 May 04:49

Improvements

  • Support displaying multiple Kubernetes clusters in the Grafana dashboard #12104

Bug Fixes

  • Fix the issue that TiKV panics and destroys peers unexpectedly because the target Region to be merged is invalid #12232
  • Fix a bug that stale messages cause TiKV to panic #12023
  • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
  • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
  • Fix a bug that replica reads might violate linearizability #12109
  • Fix the TiKV panic issue that occurs when the target peer is replaced with the peer that is destroyed without being initialized when merging a Region #12048
  • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
  • Reduce the TiCDC recovery time by reducing the number of Regions that require the Resolve Locks step #11993
  • Fix the panic issue caused by deleting snapshot files when the peer status is Applying #11746
  • Fix the issue that destroying a peer might cause high latency #10210
  • Fix the panic issue caused by invalid assertion in resource metering #12234
  • Fix the issue that slow score calculation is inaccurate in some corner cases #12254
  • Fix the OOM issue caused by the resolved_ts module and add more metrics #12159
  • Fix the issue that successfully committed optimistic transactions may report the Write Conflict error when the network is poor #34066
  • Fix the TiKV panic issue that occurs when replica read is enabled on a poor network #12046

tikv-server v5.2.4

26 Apr 06:51

There is no release note for this version.

tikv-server v6.0.0

06 Apr 02:55

Improvements

  • Improve the Raftstore sampling accuracy for large key range batches #11039
  • Add the correct "Content-Type" for debug/pprof/profile so that the profile can be identified more easily #11521
  • Renew the lease time of the leader infinitely when the Raftstore has heartbeats or handles read requests, which helps reduce latency jitter #11579
  • Choose the store with the least cost when switching the leader, which helps improve performance stability #10602
  • Fetch Raft logs asynchronously to reduce the performance jitter caused by blocking the Raftstore #11320
  • Support the QUARTER function in vector calculation #5751
  • Support pushing down the BIT data type to TiKV #30738
  • Support pushing down the MOD function and the SYSDATE function to TiKV #11916
  • Reduce the TiCDC recovery time by reducing the number of Regions that require the Resolve Locks step #11993
  • Support dynamically modifying raftstore.raft-max-inflight-msgs #11865
  • Support EXTRA_PHYSICAL_TABLE_ID_COL_ID to enable dynamic pruning mode #11888
  • Support calculation in buckets #11759
  • Encode the keys of RawKV API V2 as user-key + memcomparable-padding + timestamp #11965
  • Encode the values of RawKV API V2 as user-value + ttl + ValueMeta and encode delete in ValueMeta #11965
  • TiKV Coprocessor supports the Projection operator #12114
  • Support dynamically modifying raftstore.raft-max-size-per-msg #12017
  • Support monitoring multi-k8s in Grafana #12014
  • Transfer the leadership to CDC observer to reduce latency jitter #12111
  • Support dynamically modifying raftstore.apply_max_batch_size and raftstore.store_max_batch_size #11982
  • RawKV V2 returns the latest version upon receiving the raw_get or raw_scan request #11965
  • Support the RCCheckTS consistency reads #12097
  • Support dynamically modifying storage.scheduler-worker-pool-size (the thread count of the Scheduler pool) #12067
  • Control the use of CPU and bandwidth by using the global foreground flow controller to improve the performance stability of TiKV #11855
  • Support dynamically modifying readpool.unified.max-thread-count (the thread count of the UnifyReadPool) #11781 (see the configuration sketch after this list)
  • Use the TiKV internal pipeline to replace the RocksDB pipeline and deprecate the rocksdb.enable-multibatch-write parameter #12059
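
A hedged tikv.toml sketch of the dynamically settable keys named above; the values are examples only, and the apply/store batch-size items from #11982 would follow the same [raftstore] pattern:

```toml
# Illustrative tikv.toml excerpt; values are examples, not recommendations.

[raftstore]
# Both became dynamically settable in this release (#11865, #12017).
raft-max-inflight-msgs = 256
raft-max-size-per-msg = "1MB"

[storage]
# Thread count of the Scheduler pool (#12067).
scheduler-worker-pool-size = 8

[readpool.unified]
# Thread count of the UnifyReadPool (#11781).
max-thread-count = 10
```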

Bug Fixes

  • Fix the panic issue caused by deleting snapshot files when the peer status is Applying #11746
  • Fix the issue of QPS drop when flow control is enabled and level0_slowdown_trigger is set explicitly #11424
  • Fix the issue that destroying a peer might cause high latency #10210
  • Fix a bug that TiKV cannot delete a range of data (unsafe_destroy_range cannot be executed) when the GC worker is busy #11903
  • Fix a bug that TiKV panics when the data in StoreMeta is accidentally deleted in some corner cases #11852
  • Fix a bug that TiKV panics when performing profiling on an ARM platform #10658
  • Fix a bug that TiKV might panic if it has been running for 2 years or more #11940
  • Fix the compilation issue on the ARM64 architecture caused by missing SSE instruction set #12034
  • Fix the issue that deleting an uninitialized replica might cause an old replica to be recreated #10533
  • Fix the bug that stale messages cause TiKV to panic #12023
  • Fix the issue that undefined behavior (UB) might occur in TsSet conversions #12070
  • Fix a bug that replica reads might violate linearizability #12109
  • Fix the potential panic issue that occurs when TiKV performs profiling on Ubuntu 18.04 #9765
  • Fix the issue that tikv-ctl returns an incorrect result due to its wrong string match #12049
  • Fix the issue of intermittent packet loss and out of memory (OOM) caused by the overflow of memory metrics #12160
  • Fix the potential issue of mistakenly reporting TiKV panics when exiting TiKV #12231

v6.0.0-alpha: *: fix some typos (#12066)

04 Mar 05:34
18a119c

Signed-off-by: cuishuang <[email protected]>

Co-authored-by: Ti Chi Robot <[email protected]>

tikv-server v5.3.1

03 Mar 10:17

Feature enhancements

  • Update the proc filesystem (procfs) to v0.12.0 #11702
  • Improve the error log report in the Raft client #11959
  • Increase the speed of inserting SST files by moving the verification process to the Import thread pool from the Apply thread pool #11239

Bug fixes

  • Fix a bug that TiKV cannot delete a range of data (unsafe_destroy_range cannot be executed) when the GC worker is busy #11903
  • Fix the issue that destroying a peer might cause high latency #10210
  • Fix a bug that the any_value function returns a wrong result when Regions are empty #11735
  • Fix the issue that deleting an uninitialized replica might cause an old replica to be recreated #10533
  • Fix the metadata corruption issue when Prepare Merge is triggered after a new election is finished but the isolated peer is not informed #11526
  • Fix the deadlock issue that happens occasionally when coroutines run too fast #11549
  • Fix the potential deadlock and memory leak issues when profiling flame graphs #11108
  • Fix the rare data inconsistency issue when retrying a prewrite request in pessimistic transactions #11187
  • Fix a bug that the configuration resource-metering.enabled does not work #11235
  • Fix the issue that some coroutines leak in resolved_ts #10965
  • Fix the issue of reporting false "GC can not work" alert under low write flow #9910
  • Fix a bug that tikv-ctl cannot return the correct Region-related information #11393
  • Fix the issue that a down TiKV node causes the resolved timestamp to lag #11351
  • Fix a panic issue that occurs when Region merge, ConfChange, and Snapshot happen at the same time in extreme conditions #11475
  • Fix the issue that TiKV cannot detect the memory lock when TiKV performs a reverse table scan #11440
  • Fix the issue of negative sign when the decimal divide result is zero #29586
  • Fix a memory leak caused by the monitoring data of statistics threads #11195
  • Fix the issue of TiCDC panic that occurs when the downstream database is missing #11123
  • Fix the issue that TiCDC adds scan retries frequently due to the Congest error #11082
  • Fix the issue that batch messages are too large in Raft client implementation #9714
  • Collapse some uncommon storage-related metrics in Grafana dashboard #11681