author     Matt Turner <mattst88@gmail.com>             2020-04-19 16:01:22 -0700
committer  David Oberhollenzer <goliath@infraroot.at>   2020-04-22 14:48:46 +0200
commit     20756f4354f333005bc59a2d07593d5d1429d287
tree       df1e0475927fa7b395c20ed9e777011f9f592b63  /lib/util/fast_urem_by_const.h
parent     566f67ce915f9175ed9075bb1d6c553249c9a426
Import and use Mesa's hash table
Profiling tar2sqfs with `perf record`/`perf report` on a tarball of the
Gentoo ebuild repository (many thousands of small files) showed that 30%
of the time was spent in `sqfs_frag_table_find_tail_end`.
The reason was the bucketing hash table in frag_table.c: too many
elements in too few buckets meant lots of walking over the linked lists.
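
For context, a separate-chaining table resolves collisions by walking a
per-bucket linked list, so once there are far more elements than buckets,
every lookup pays for a long pointer chase. A minimal sketch of the
pattern (hypothetical names, not the actual frag_table.c code):

#include <stddef.h>
#include <stdint.h>

struct entry {
   uint32_t key;
   struct entry *next; /* collision chain */
};

/* Every extra element in a bucket's chain costs another pointer
 * chase and comparison, which is what dominated the profile here. */
static struct entry *
chain_lookup(struct entry **buckets, size_t num_buckets, uint32_t key)
{
   for (struct entry *e = buckets[key % num_buckets]; e; e = e->next) {
      if (e->key == key)
         return e;
   }
   return NULL;
}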
This patch replaces that hash table with the hash table implementation
from Mesa. Its implementation is more complex (it is an open-addressing,
linear-reprobing hash table), but it is much better suited for the task.
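
An open-addressing table instead stores entries in one flat array and
resolves collisions by probing further slots. Below is a minimal sketch
of plain linear probing with hypothetical types; Mesa's real table
additionally handles deleted entries, resizing, and a more elaborate
reprobe sequence:

#include <stddef.h>
#include <stdint.h>

struct slot {
   uint32_t key;
   int occupied;
};

/* Probing steps through the flat array instead of chasing list
 * pointers, so collisions stay cache-local. The table is assumed to
 * be kept below full occupancy, so an empty slot always ends the scan. */
static struct slot *
probe_lookup(struct slot *table, uint32_t size, uint32_t key)
{
   uint32_t i = key % size;

   while (table[i].occupied) {
      if (table[i].key == key)
         return &table[i];
      i = (i + 1) % size; /* linear reprobe */
   }
   return NULL; /* hit an empty slot: key is absent */
}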
On my 4c/8t Skylake, the time to run tar2sqfs drops from 7.5s to less
than 3s. CPU usage increases from ~207% to ~356%, presumably indicating
an increase in available parallelism due to the removal of the hash
table as a bottleneck. The `perf report` profile with this patch shows
that the time spent in `sqfs_frag_table_find_tail_end` has dropped from
~30% to 0.01%.
Output from ministat (wall-clock time in seconds):
x before
+ after
    N         Min         Max      Median         Avg       Stddev
x  20       7.476       7.685      7.5725      7.5615  0.051254268
+  20        2.79       2.901       2.846     2.84475   0.03543842
Difference at 95.0% confidence
-4.71675 +/- 0.0282015
-62.3785% +/- 0.241477%
(Student's t, pooled s = 0.0440618)
I imported only the bits of the hash table implementation that were
needed for frag_table.c. Among the changes I made after importing are:
- Removed usage of ralloc, Mesa's recursive memory allocator, replacing
      ralloc        -> malloc
      ralloc_free   -> free
      rzalloc_array -> calloc
  (see the sketch after this list)
- Removed mem_ctx parameters
- Added free()s in the appropriate places (valgrind confirms there
  are no leaks)
- Removed the _mesa_ prefix from function names
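
As a rough before/after sketch of the allocation change (hypothetical
struct names, not the imported code):

#include <stdint.h>
#include <stdlib.h>

struct hash_entry { uint32_t hash; const void *key; void *data; };
struct hash_table { struct hash_entry *table; uint32_t size; };

/* With ralloc, the entry array was parented to the table and a single
 * ralloc_free(ht) released both:
 *
 *    ht        = ralloc(mem_ctx, struct hash_table);
 *    ht->table = rzalloc_array(ht, struct hash_entry, size);
 *    ...
 *    ralloc_free(ht);
 *
 * With plain libc, each allocation is released explicitly: */
static struct hash_table *
table_create(uint32_t size)
{
   struct hash_table *ht = malloc(sizeof(*ht));
   if (!ht)
      return NULL;
   ht->size = size;
   ht->table = calloc(size, sizeof(ht->table[0])); /* zeroed, like rzalloc */
   if (!ht->table) {
      free(ht);
      return NULL;
   }
   return ht;
}

static void
table_destroy(struct hash_table *ht)
{
   free(ht->table); /* now needs its own free() */
   free(ht);
}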
Fixes: #40
Signed-off-by: Matt Turner <mattst88@gmail.com>
Diffstat (limited to 'lib/util/fast_urem_by_const.h')
-rw-r--r--   lib/util/fast_urem_by_const.h   75
1 file changed, 75 insertions, 0 deletions
diff --git a/lib/util/fast_urem_by_const.h b/lib/util/fast_urem_by_const.h
new file mode 100644
index 0000000..62b22a6
--- /dev/null
+++ b/lib/util/fast_urem_by_const.h
@@ -0,0 +1,75 @@
+/*
+ * Copyright © 2010 Valve Software
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <assert.h>
+#include <stdint.h>
+
+/*
+ * Code for fast 32-bit unsigned remainder, based off of "Faster Remainder by
+ * Direct Computation: Applications to Compilers and Software Libraries,"
+ * available at https://arxiv.org/pdf/1902.01961.pdf.
+ *
+ * util_fast_urem32(n, d, REMAINDER_MAGIC(d)) returns the same thing as
+ * n % d for any unsigned n and d, however it compiles down to only a few
+ * multiplications, so it should be faster than plain uint32_t modulo if the
+ * same divisor is used many times.
+ */
+
+#define REMAINDER_MAGIC(divisor) \
+   ((uint64_t) ~0ull / (divisor) + 1)
+
+/*
+ * Get bits 64-96 of a 32x64-bit multiply. If __int128_t is available, we use
+ * it, which usually compiles down to one instruction on 64-bit architectures.
+ * Otherwise on 32-bit architectures we usually get four instructions (one
+ * 32x32->64 multiply, one 32x32->32 multiply, and one 64-bit add).
+ */
+
+static inline uint32_t
+_mul32by64_hi(uint32_t a, uint64_t b)
+{
+#ifdef HAVE_UINT128
+   return ((__uint128_t) b * a) >> 64;
+#else
+   /*
+    * Let b = b0 + 2^32 * b1. Then a * b = a * b0 + 2^32 * a * b1. We would
+    * have to do a 96-bit addition to get the full result, except that only
+    * one term has non-zero lower 32 bits, which means that to get the high 32
+    * bits, we only have to add the high 64 bits of each term. Unfortunately,
+    * we have to do the 64-bit addition in case the low 32 bits overflow.
+    */
+   uint32_t b0 = (uint32_t) b;
+   uint32_t b1 = b >> 32;
+   return ((((uint64_t) a * b0) >> 32) + (uint64_t) a * b1) >> 32;
+#endif
+}
+
+static inline uint32_t
+util_fast_urem32(uint32_t n, uint32_t d, uint64_t magic)
+{
+   uint64_t lowbits = magic * n;
+   uint32_t result = _mul32by64_hi(d, lowbits);
+   assert(result == n % d);
+   return result;
+}
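
The magic constant is computed once per divisor and reused for every
remainder, which matches a hash table's access pattern (one table size,
many lookups). A minimal, hypothetical usage sketch:

#include <stdint.h>
#include <stdio.h>

#include "fast_urem_by_const.h" /* the header added by this commit */

int main(void)
{
   uint32_t d = 37;                     /* e.g. a table's bucket count */
   uint64_t magic = REMAINDER_MAGIC(d); /* one division, computed once */

   for (uint32_t i = 1; i <= 5; i++) {
      uint32_t h = 0x9e3779b9u * i;     /* arbitrary hash values */
      /* a couple of multiplies instead of a hardware divide */
      printf("%u %% %u = %u\n", h, d, util_fast_urem32(h, d, magic));
   }
   return 0;
}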