From 9d7d0a84a2017af2e70cc0f33bfbce0b59470e62 Mon Sep 17 00:00:00 2001
From: David Oberhollenzer
Date: Tue, 17 Sep 2019 14:29:29 +0200
Subject: Remove parallel unpacking

Parallel unpacking didn't really improve speed that much. Sorting the
files into an optimized unpack order improved speed far more than the
parallel unpacker did.

Furthermore, the fork based parallel unpacker was fairly messy to begin
with.

Signed-off-by: David Oberhollenzer
---
 doc/rdsquashfs.1 | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/doc/rdsquashfs.1 b/doc/rdsquashfs.1
index 72c602a..338138a 100644
--- a/doc/rdsquashfs.1
+++ b/doc/rdsquashfs.1
@@ -57,13 +57,6 @@ Skip directories that would end up empty after applying the above rules.
 The following options are specific to unpacking files from a SquashFS image
 to disk:
 .TP
-\fB\-\-jobs\fR, \fB\-j\fR
-Specify a number of parallel jobs to spawn for unpacking file data.
-The file hierarchy is created sequentially but the data unpacking is
-distributed over the given number of jobs so that each job has to unpack
-roughly the same amount of data. This can be used to speed up unpacking
-of large SquashFS archives.
-.TP
 \fB\-\-no\-sparse\fR, \fB\-Z\fR
 Do not create sparse files. Always unpack sparse files by writing blocks
 of zeros to disk.
--
cgit v1.2.3
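
For context, the \-\-jobs mechanism the deleted man page text describes
amounts to forking N children and handing each a contiguous slice of the
file list, balanced by size. A minimal C sketch of that scheme, assuming
a hypothetical file_job array and unpack_file() helper rather than
anything from the actual rdsquashfs sources:

/*
 * Hypothetical sketch, not the removed rdsquashfs code: fork one child
 * per job and hand each child a contiguous slice of the file list so
 * that every child unpacks roughly the same number of bytes. The
 * file_job type and unpack_file() helper are assumptions made up for
 * this example.
 */
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

struct file_job {
	const char *path; /* destination path of the file */
	size_t size;      /* uncompressed size in bytes */
};

/* assumed helper that unpacks one file's data to disk */
extern int unpack_file(const struct file_job *job);

static int unpack_parallel(const struct file_job *list, size_t count,
                           unsigned int num_jobs)
{
	size_t i, start = 0, total = 0, share;
	unsigned int j;
	int status, ret = 0;
	pid_t pid;

	if (num_jobs == 0)
		num_jobs = 1;

	for (i = 0; i < count; ++i)
		total += list[i].size;

	share = total / num_jobs;

	for (j = 0; j < num_jobs; ++j) {
		size_t end = start, bytes = 0;

		/* the last child simply takes everything that is left */
		while (end < count &&
		       (bytes < share || j == num_jobs - 1)) {
			bytes += list[end++].size;
		}

		pid = fork();
		if (pid == -1) {
			perror("fork");
			return -1;
		}
		if (pid == 0) {
			for (i = start; i < end; ++i) {
				if (unpack_file(&list[i]))
					exit(EXIT_FAILURE);
			}
			exit(EXIT_SUCCESS);
		}
		start = end;
	}

	/* collect all children; fail if any of them failed */
	while (wait(&status) > 0) {
		if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
			ret = -1;
	}
	return ret;
}

Per the commit message, simply sorting the file list into an optimized
unpack order turned out to yield a bigger speedup than this kind of fork
based splitting, without any of the bookkeeping above.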