author     David Oberhollenzer <david.oberhollenzer@sigma-star.at>  2020-11-06 15:26:14 +0100
committer  David Oberhollenzer <david.oberhollenzer@sigma-star.at>  2020-11-06 15:26:14 +0100
commit     73e853f9660072abf0ae68cbb5d9753ac6e9034a
tree       3a155dd7e14f62e39b1597075de591c3a211d9fb
parent     4661c6ebae3662d3e30349138546689b07c21076
Minor "late night typing" fixes in documentation
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
-rw-r--r--  doc/benchmark.txt | 11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/doc/benchmark.txt b/doc/benchmark.txt
index 4b5e01e..407cb26 100644
--- a/doc/benchmark.txt
+++ b/doc/benchmark.txt
@@ -53,8 +53,9 @@
 The repacking was repeated 4 times and the worst wall-clock time ("real") was
 used for comparison.
 
-Altough not relevant for this benchmark, the resulting image sizes where
-for a specific compressor, so that the compression ratio could be estimated:
+Altough not relevant for this benchmark, the resulting image sizes were
+measured once for each compressor, so that the compression ratio could
+be estimated:
 
   $ stat test.tar
   $ stat test.sqfs
@@ -84,7 +85,7 @@
 In addition, relative and absolute efficiency of the parellel implementation
-was determined:
+were determined:
 
                                         speedup_rel(compressor, num_cpu)
   efficiency_rel(compressor, num_cpu) = --------------------------------
@@ -238,8 +239,8 @@
 decompression and beating the others in compression speed by orders of
 magnitudes, has by far the worst compression ratio.
 
-It should be noted that the actual number of actually compressed blocks has not
-been determined. A worse compression ratio can lead to more blocks being stored
+It should be noted that the number of actually compressed blocks has not been
+determined. A worse compression ratio can lead to more blocks being stored
 uncompressed, reducing the workload and thus affecting decompression time.
 
 However, since zstd has a better compression ratio than gzip, takes only 30% of
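
For readers skimming the patched benchmark text, the two calculations it refers
to are easy to reproduce by hand. The sketch below is not part of the commit or
of the project itself; the function names, the placeholder sizes and timings,
the ratio convention (packed size over unpacked size) and the num_cpu
denominator of the efficiency fraction (whose line is cut off in the hunk
above) are all assumptions that merely mirror the quoted formulas.

  # Minimal sketch, NOT part of the commit: reproduces the compression-ratio
  # and efficiency calculations described in the patched benchmark.txt.
  # All names and numbers below are placeholders / assumptions.

  def compression_ratio(packed_bytes: int, unpacked_bytes: int) -> float:
      # One common convention: packed size divided by unpacked size,
      # i.e. what comparing `stat test.sqfs` with `stat test.tar` yields.
      return packed_bytes / unpacked_bytes

  def speedup_rel(t_one_thread: float, t_num_cpu: float) -> float:
      # Relative speedup: wall-clock time of the parallel implementation
      # with one thread divided by its time with num_cpu threads.
      return t_one_thread / t_num_cpu

  def efficiency_rel(t_one_thread: float, t_num_cpu: float, num_cpu: int) -> float:
      # Relative efficiency as in the quoted fraction, assuming the usual
      # num_cpu denominator: speedup_rel(compressor, num_cpu) / num_cpu.
      return speedup_rel(t_one_thread, t_num_cpu) / num_cpu

  if __name__ == "__main__":
      # Hypothetical sizes (bytes) and wall-clock times ("real", seconds).
      print(f"ratio          = {compression_ratio(2_300_000_000, 6_500_000_000):.3f}")
      print(f"speedup_rel    = {speedup_rel(120.0, 35.0):.2f}")
      print(f"efficiency_rel = {efficiency_rel(120.0, 35.0, num_cpu=4):.2f}")

The "absolute" speedup and efficiency mentioned in the same sentence of the
document would presumably use the serial implementation's wall-clock time as
the baseline instead of the parallel implementation run with a single thread;
since that formula is not visible in this hunk, it is not reproduced here.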