command substitution vs read benchmarks #8556
Replies: 6 comments
-
The `read` performance is very dependent on length, because `read` has to, in the general case, read byte-for-byte. This is so

```fish
echo foo\nbar | begin
    read -l foo # consumes the "foo" and nothing more
    read -l bar
    echo $bar
end
```

prints `bar`. It's probably possible to change this if the read is directly redirected like yours.

The command substitution performance here is caused by fish's parser being overly simplistic. It doesn't recurse into command substitutions, so it re-parses them again and again, every time through the loop. This means there's a high setup cost for command substitutions, but once they do read, they read quickly, in chunks. I've messed with a command substitution "cache" as a band-aid but have not had great results, so the proper solution is to make the parser recurse. This is a bigger task.
-
Also: if you're benchmarking this, you definitely want longer benchmarks, to remove system noise from the equation. I've seen that randomly add 200ms (although this was on WSL, where you'd expect it, what with virus scanners and other background jobs). I would shoot for a per-run time of a second or more, and reduce the number of runs accordingly. Warmup runs especially aren't worth all that much, so you only want 2 or 3 of those. Once things are in cache, they are in cache. Can't be more in-cachier than that. This also removes other influences like startup costs.
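To put a number on that advice, here is a small arithmetic sketch (the noise figure is the 200ms mentioned above; the per-run times are illustrative) showing how much a single noise spike can distort short runs compared to second-long runs:

```python
# How badly a one-off noise spike distorts a measurement at different
# per-run times. 200ms is the spike observed above; run times are
# illustrative, not measured.
noise_ms = 200
for per_run_ms in (10, 100, 1000):
    distortion = noise_ms / per_run_ms * 100
    print(f"{per_run_ms:>5} ms/run -> up to {distortion:.0f}% error from one spike")
```

At 10ms per run, one spike can dwarf the effect you're measuring; at a second per run it shrinks to the point where averaging a handful of runs handles it.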
-
Also, to put this in context: this is 500 runs, and the difference is 10ms. That suggests the overhead of the command substitution is about 0.02ms, compared to piping to `read`. As an alternative benchmark:

```fish
for f in (seq 10000); true (true); end
```

runs about 1s slower than

```fish
for f in (seq 10000); true; true; end
```

This seems like a big deal until you realize 1s for 10000 runs is 0.1ms per run. That matches up with other attempts with more command substitutions.
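The per-run figures above can be reproduced with a couple of divisions (same numbers as in the comment, just written out):

```python
# Overhead per command substitution, from the figures quoted above.
runs = 500
diff_ms = 10
per_cmdsub_ms = diff_ms / runs
print(per_cmdsub_ms)  # 0.02

# The alternative benchmark: 1s extra over 10000 loop iterations.
runs = 10_000
diff_s = 1.0
per_run_ms = diff_s * 1000 / runs
print(per_run_ms)  # 0.1
```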
-
It's true that this isn't a big deal. Ultimately external commands and globbing will take the most time. But Tide, even after lots of optimization, still probably has something like 30 command substitutions/reads per prompt render. And this could theoretically go a lot higher with tight loops like so:

```fish
while set -l truncation_length (math $truncation_length + 1) &&
        set -l truncated (string sub --length $truncation_length -- $dir_section) &&
        test $truncated != $dir_section -a (count $parent_dir/$truncated*/) -gt 1
end
```

which could theoretically add 3 command substitutions per character in `$dir_section`.
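For readers unfamiliar with that loop, here is a Python analog (a hypothetical helper, not actual Tide code) of what it computes: the shortest prefix of a directory-name segment that still matches only one sibling. Each pass through the fish loop costs three command substitutions (`math`, `string sub`, `count`), one set per character tried:

```python
def shortest_unique_prefix(dir_section, siblings):
    """Grow the truncation length one character at a time until the
    prefix matches at most one sibling directory -- the same work the
    fish loop above does with three command substitutions per step."""
    for length in range(1, len(dir_section) + 1):
        truncated = dir_section[:length]
        matches = [s for s in siblings if s.startswith(truncated)]
        if len(matches) <= 1:
            return truncated
    return dir_section

# "proj" is the shortest prefix distinguishing "projects" from "programs".
print(shortest_unique_prefix("projects", ["projects", "programs", "photos"]))
```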
-
This thread is actually an example of something that would probably be better as a discussion. It's not really a bug report or an enhancement request: nothing was asked for, nothing was complained about, but it is interesting. I'm converting it.
-
Also of note is that if you are already piping,
-
fish, version 3.3.1-701-gceade1629

`read` performance is highly dependent on string length, however. If we set the string like so:

The following happens:

The equivalence point is around 32 on my machine.
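A crossover around some fixed length is what you'd expect from the cost structure described earlier in the thread: `read` pays a per-byte cost, while a command substitution pays a large fixed setup cost plus a much smaller per-byte cost for chunked reads. A toy model (all constants invented for illustration, not measured) makes the shape of the crossover visible:

```python
# Toy cost model: read is linear per byte, command substitution is a
# fixed setup plus cheap chunked reads. Constants are made up.
setup_us = 30.0         # assumed fixed cost of one command substitution
read_us_per_byte = 1.0  # assumed cost of a one-byte read round trip
chunk_us_per_byte = 0.05

def read_cost(n):
    return read_us_per_byte * n

def cmdsub_cost(n):
    return setup_us + chunk_us_per_byte * n

# First string length at which read becomes the slower option.
crossover = next(n for n in range(1, 1000) if read_cost(n) > cmdsub_cost(n))
print(crossover)
```

Below the crossover, `read`'s lack of setup cost wins; above it, the command substitution's chunked reads win. The actual equivalence point depends entirely on the real constants on a given machine.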