Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
Parallel unpacking didn't really improve speed that much. Sorting the
files into an optimized unpack order improved speed far more than the
parallel unpacker did. Furthermore, the fork-based parallel unpacker
was fairly messy to begin with.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
As a side effect, this requires the data writer to keep track of
statistics.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
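
For illustration, the kind of counters a data writer might accumulate
while writing blocks could look roughly like the sketch below. The type
and field names (data_writer_stats_t, blocks_written, ...) are invented
for this example and are not the actual squashfs-tools-ng interface.

    /* Hypothetical statistics struct; names are illustrative only,
     * not the real squashfs-tools-ng types. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint64_t bytes_read;        /* uncompressed input consumed */
        uint64_t bytes_written;     /* compressed output produced */
        uint64_t blocks_written;    /* full data blocks emitted */
        uint64_t frag_blocks;       /* fragment blocks emitted */
        uint64_t duplicate_blocks;  /* blocks elided by deduplication */
    } data_writer_stats_t;

    /* Update the counters after a block has been compressed and flushed. */
    static void stats_account_block(data_writer_stats_t *s,
                                    size_t in_size, size_t out_size)
    {
        s->bytes_read += in_size;
        s->bytes_written += out_size;
        s->blocks_written += 1;
    }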
This commit moves the file unpacking order & job scheduling into a
libfstree function. The ordering is improved by making sure that
fragment blocks are not extracted more than once and that files with
data blocks are extracted in order. This way, serial unpacking of a
2 GiB Debian live image dropped from ~5 minutes on my test machine to
~3.5 minutes, whereas parallel unpacking stays roughly the same
(~3 minutes with -j 4).
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
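
To make the ordering described above concrete, here is a minimal C
sketch of a comparator that groups fragment-only files by the fragment
block they share (so each fragment block only needs to be read and
decompressed once) and sorts files with data blocks by the on-disk
position of their first block. The type and field names (unpack_file_t,
fragment_index, data_block_start, ...) are made up for this example and
are not the actual libfstree interface.

    /* Illustrative only; not the real libfstree data structures. */
    #include <stdlib.h>
    #include <stdint.h>

    typedef struct {
        const char *path;
        size_t data_block_count;    /* number of full data blocks */
        uint64_t data_block_start;  /* on-disk offset of the first data block */
        uint32_t fragment_index;    /* fragment block holding the file's tail */
    } unpack_file_t;

    static int compare_unpack_order(const void *lhs, const void *rhs)
    {
        const unpack_file_t *a = lhs, *b = rhs;
        int a_frag_only = (a->data_block_count == 0);
        int b_frag_only = (b->data_block_count == 0);

        /* Fragment-only files go last, grouped by the fragment block
           they share, so each fragment block is extracted only once. */
        if (a_frag_only != b_frag_only)
            return a_frag_only ? 1 : -1;

        if (a_frag_only) {
            if (a->fragment_index != b->fragment_index)
                return a->fragment_index < b->fragment_index ? -1 : 1;
            return 0;
        }

        /* Files with data blocks are extracted in on-disk order. */
        if (a->data_block_start != b->data_block_start)
            return a->data_block_start < b->data_block_start ? -1 : 1;
        return 0;
    }

    /* Sort the file list once before starting to unpack. */
    static void sort_for_unpack(unpack_file_t *files, size_t count)
    {
        qsort(files, count, sizeof(files[0]), compare_unpack_order);
    }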