author    | David Oberhollenzer <david.oberhollenzer@sigma-star.at> | 2019-09-17 14:29:29 +0200
committer | David Oberhollenzer <david.oberhollenzer@sigma-star.at> | 2019-09-20 03:18:47 +0200
commit    | 9d7d0a84a2017af2e70cc0f33bfbce0b59470e62 (patch)
tree      | f06ddabcebc1210d3764ada396284b46cebedc8d /doc
parent    | 544f8f6dfd2f61fd1d2ab7a9a955e63d4b416dcc (diff)
Remove parallel unpacking
Parallel unpacking didn't really improve the speed that much. Actually,
sorting the files for an optimized unpack order improved speed much more
than the parallel unpacker did.
Furthermore, the fork-based parallel unpacker was pretty messy to begin
with.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
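The "optimized unpack order" mentioned above boils down to sorting the files by where their data sits in the image, so the unpacker reads the archive mostly front to back. Below is a minimal sketch of that idea, assuming a hypothetical file_entry_t record with a start_block field; the real data structures in squashfs-tools-ng differ.

```c
#include <stdlib.h>

/* Hypothetical per-file record for illustration; not the tool's actual type. */
typedef struct {
	const char *path;               /* destination path of the file            */
	unsigned long long start_block; /* on-disk offset of its first data block  */
} file_entry_t;

/* Compare files by the location of their data, so unpacking walks the
 * image front to back instead of seeking back and forth. */
static int compare_by_location(const void *lhs, const void *rhs)
{
	const file_entry_t *a = lhs, *b = rhs;

	if (a->start_block < b->start_block)
		return -1;
	if (a->start_block > b->start_block)
		return 1;
	return 0;
}

/* Sort the table of files once before unpacking their data. */
static void sort_for_unpack(file_entry_t *files, size_t count)
{
	qsort(files, count, sizeof(files[0]), compare_by_location);
}
```

Sorting by on-disk location turns the unpack into a largely sequential read, which is why it can beat spreading the work across processes that each seek independently.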
Diffstat (limited to 'doc')
-rw-r--r-- | doc/rdsquashfs.1 | 7
1 file changed, 0 insertions, 7 deletions
diff --git a/doc/rdsquashfs.1 b/doc/rdsquashfs.1
index 72c602a..338138a 100644
--- a/doc/rdsquashfs.1
+++ b/doc/rdsquashfs.1
@@ -57,13 +57,6 @@ Skip directories that would end up empty after applying the above rules.
 The following options are specific to unpacking files from a SquashFS
 image to disk:
 .TP
-\fB\-\-jobs\fR, \fB\-j\fR <count>
-Specify a number of parallel jobs to spawn for unpacking file data.
-The file hierarchy is created sequentially but the data unpacking is
-distributed over the given number of jobs so that each job has to unpack
-roughly the same amount of data. This can be used to speed up unpacking
-of large SquashFS archives.
-.TP
 \fB\-\-no\-sparse\fR, \fB\-Z\fR
 Do not create sparse files. Always unpack sparse files by writing blocks
 of zeros to disk.
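The removed \-\-jobs documentation describes a split where "each job has to unpack roughly the same amount of data". A rough sketch of such a fork-based split follows, using hypothetical file_entry_t and unpack_file() names; it illustrates the behaviour described in the man page text, not the code that was removed.

```c
#include <stddef.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical type and helper for illustration only. */
typedef struct {
	const char *path;          /* destination path   */
	unsigned long long size;   /* bytes of file data */
} file_entry_t;

extern int unpack_file(const file_entry_t *ent);

/* Greedily assign each file to the job with the smallest byte total so far,
 * then fork one child per job that unpacks only its share.
 * Assumes jobs > 0 and count > 0. */
static int unpack_parallel(const file_entry_t *files, size_t count, int jobs)
{
	unsigned long long totals[jobs];
	int owner[count];
	int spawned = 0, status, ret = 0;

	for (int j = 0; j < jobs; ++j)
		totals[j] = 0;

	for (size_t i = 0; i < count; ++i) {
		int best = 0;

		for (int j = 1; j < jobs; ++j) {
			if (totals[j] < totals[best])
				best = j;
		}
		owner[i] = best;
		totals[best] += files[i].size;
	}

	for (int j = 0; j < jobs; ++j) {
		pid_t pid = fork();

		if (pid < 0) {
			ret = -1;
			break;
		}
		if (pid == 0) {
			/* child: unpack only the files assigned to job j */
			for (size_t i = 0; i < count; ++i) {
				if (owner[i] == j && unpack_file(&files[i]))
					_exit(EXIT_FAILURE);
			}
			_exit(EXIT_SUCCESS);
		}
		++spawned;
	}

	/* collect the children that were actually spawned */
	while (spawned-- > 0) {
		if (wait(&status) < 0 || !WIFEXITED(status) ||
		    WEXITSTATUS(status) != 0) {
			ret = -1;
		}
	}
	return ret;
}
```

A greedy least-loaded assignment balances bytes per child, but each child still reads from arbitrary positions in the image, which is consistent with the commit message's point that read ordering mattered more than parallelism.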