random_page_cost and effective_io_concurrency are hardcoded to SSD values #70
I'm going to second this issue based on behavior I've seen in my own environment. It works fine when operating on an SSD, but on a magnetic drive (or a RAID array thereof) the system completely grinds to a halt with relatively small amounts of data (200-300 MB), resulting in multi-second query latency as psql spikes cores to 100% (for context, this is the timescaledb Docker container, which automatically runs timescaledb-tune on initialization). When I go into the configuration and hand-change these values to what pgtune would recommend for the drives, CPU usage drops to single digits and the queries become nearly instantaneous. The rest of the tuning seems fine, but these particular values will absolutely destroy performance on a magnetic drive.
This can probably be worked around with, e.g., ALTER SYSTEM, but I'd love to have tstune detect the storage type (SSD/HDD) automatically.
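For reference, a minimal sketch of that ALTER SYSTEM workaround, assuming the pgtune-style HDD values of 4 and 2 that come up later in this thread (tune to your own hardware):

```sql
-- Override the SSD-oriented defaults written by timescaledb-tune with
-- HDD-oriented values (values assumed from pgtune's HDD recommendation).
ALTER SYSTEM SET random_page_cost = 4;
ALTER SYSTEM SET effective_io_concurrency = 2;

-- Both settings take effect without a restart; reload the configuration
-- so running backends pick up the new values.
SELECT pg_reload_conf();
```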
Hi @matthock, are you still affected by this issue? Thanks to @Kazmirchuk bumping my old issue, I read your other issue on timescaledb-docker and realized that having the wrong values in my deployments probably explains a LOT of bad behavior. And even then, according to Bruce Momjian, 16 would be a better number for HDD, as referenced in tstune's own codebase: timescaledb-tune/pkg/pgtune/misc.go, line 30 at commit 154501b.
The only question I have is whether there's also a penalty for using HDD values on an SSD, or whether you'd just sacrifice some gains.
@jflambert Yes, I still run into this. I've got workarounds in place where I hook in and overwrite the values with ALTER SYSTEM after initial setup... but it would be nice if it worked out of the box, especially since I've seen others get caught by the same thing, and it's a HUGE gotcha if you're doing anything with remotely legacy hardware! I've got mine set up to apply the HDD values automatically. You lose a bit of performance on SSDs, but it's not a huge amount in my experience (low double-digit percent), as opposed to the penalty of using SSD values on an HDD, which is several orders of magnitude of performance degradation.
@matthock happy to share, I run this in an init-container post timescaledb-tune (which values do you use for
There are two ways timescaledb-tune could handle this:
1. Detect the storage type (SSD/HDD) automatically and choose the recommendations accordingly.
2. Leave the choice to the user and expose an option to apply the HDD-oriented values instead.
If #2 is acceptable to @jnidzwetzki or @svenklemm (i.e. the onus is on the user to determine their drive type) then I could pitch in a quick PR (I've done a few before). |
Having an option to override the built-ins is fine. Maybe use the -profile switch to trigger this.
Interesting. I looked into this:
Using pgtune, I get random_page_cost/effective_io_concurrency recommendations of 1.1/200 for SSD and 4/2 for HDD. I see these recommendations in other tools or websites as well.
timescaledb-tune hardcodes its recommendations to 1.1/200 (SSD)
How important is it for these two settings to be set according to your drive type?
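For anyone comparing their own deployment, here is a quick way to check which values are actually in effect and where they came from (a generic query against pg_settings, nothing specific to timescaledb-tune):

```sql
-- Show the active values and their source
-- (postgresql.conf, postgresql.auto.conf via ALTER SYSTEM, etc.).
SELECT name, setting, source, sourcefile
FROM pg_settings
WHERE name IN ('random_page_cost', 'effective_io_concurrency');
```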