name                                   diff %   speedup
slice::sort_large_random              -65.49%   x 2.90
slice::sort_large_strings             -37.75%   x 1.61
slice::sort_medium_random             -47.89%   x 1.92
slice::sort_small_random              +11.11%   x 0.90
slice::sort_unstable_large_random     -47.57%   x 1.91
slice::sort_unstable_large_strings    -25.19%   x 1.34
slice::sort_unstable_medium_random    -22.15%   x 1.28
slice::sort_unstable_small_random     -15.79%   x 1.19
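For anyone who wants to get a feel for the stable-vs-unstable split in the table above on their own machine, here is a minimal sketch of a micro-benchmark using only the standard library. The LCG constants and input size are arbitrary assumptions, not taken from the benchmark suite, and a one-shot `Instant` timing like this is far noisier than the suite's numbers — it's only meant to illustrate which APIs the benchmark names refer to.

```rust
use std::time::Instant;

// Simple linear congruential generator so the example needs no external
// crates; the constants are arbitrary, not from the benchmark suite.
fn pseudo_random(n: usize) -> Vec<u64> {
    let mut state: u64 = 0x9E3779B97F4A7C15;
    (0..n)
        .map(|_| {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            state
        })
        .collect()
}

fn main() {
    let data = pseudo_random(100_000);

    // Stable sort: the `slice::sort_*` rows above.
    let mut a = data.clone();
    let t = Instant::now();
    a.sort();
    println!("sort:          {:?}", t.elapsed());

    // Unstable sort: the `slice::sort_unstable_*` rows above.
    let mut b = data.clone();
    let t = Instant::now();
    b.sort_unstable();
    println!("sort_unstable: {:?}", t.elapsed());

    // Sanity checks: both orderings agree for plain integer keys.
    assert!(a.windows(2).all(|w| w[0] <= w[1]));
    assert_eq!(a, b);
}
```

Note that a single run like this mostly measures cache state and allocator warmup; the suite's percentages come from many iterations over different sizes and key types.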
I did not read the whole conversation, but sorting seems like a very common use case (not mine personally, but a lot of people sort data), so this looks like quite a broad improvement to me.
Note, though, as mentioned in the issue, that the survey showed people still prioritize runtime performance over compilation performance in general, so this tradeoff seems warranted.
It’s not unheard of for regressions to be undone later on, so here’s hoping :)
Yeah, sorting is definitely a common use case, but note that it also didn’t improve every sorting use case. Anyway, even if I’m a bit skeptical, I trust that the Rust team doesn’t take these decisions lightly.
But the thing that led to my original question was: if the compiler itself uses the std sorting internally, there’s additional reason to hope for transitive performance benefits. So even if compiling the Rust compiler with this PR was actually slower, compiling again with the resulting compiler could be faster, since that compiler benefits from the faster sorting. So yeah, fingers crossed 🤞
I would have assumed the benchmark suite accounts for that, otherwise the results aren’t quite as meaningful. Which ties back to your second sentence: I certainly trust the Rust team more than myself on these things :)