Age | Commit message | Author |
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
On Windows, long is a 32-bit integer, so we cannot check whether the long
value is greater than UINT32_MAX. Instead, check whether strtol sets errno.
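A rough sketch of the parsing logic described above (the helper name and
exact checks are illustrative, not the actual tool code):

    #include <errno.h>
    #include <limits.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Parse an unsigned 32 bit value. Where long is only 32 bits wide,
       out-of-range input cannot be detected by comparing against
       UINT32_MAX, but strtol reports it by setting errno to ERANGE. */
    static int parse_u32(const char *str, uint32_t *out)
    {
        char *end;
        long value;

        errno = 0;
        value = strtol(str, &end, 10);

        if (errno != 0 || end == str || *end != '\0' || value < 0)
            return -1;
    #if LONG_MAX > UINT32_MAX
        if (value > (long)UINT32_MAX)
            return -1;
    #endif
        *out = (uint32_t)value;
        return 0;
    }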
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
We do not allow hard links to directories, so we can toss the special-case
handling code for that. The visited mechanism was pointless anyway,
because we don't even descend into hard links in the recursive tree
handling functions.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
For some reason, the recursive hard link resolution ended up in the
post-processing code, calling into the non-recursive one in hardlink.c
that wasn't used anywhere else.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The idea of the block align feature was to allow micro-managing that
some files are forcefully aligned to 1k/4k ("device block") boundaries,
hoping to improve access time at the cost of data density. The feature
was not exposed in the tools for a long time and was eventually added
to the sort file. Measurement and experimentation showed that it in
fact worsened the read performance on a test system with an old micro
SD card as the bottleneck.
The feature is removed; if needed, it can be brought back simply by
wrapping/sub-classing the default block writer.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
This basically re-uses the libxfrm pack/unpack tests, but runs
the data through a stream wrapper.
When de-compressing, we use a ridiculously tiny input buffer,
to force the wrapper to snort up the data in several attempts
until it can decompress something. We read the data byte-by-byte
to force the wrapper to internally cache the uncompressed data and
spoon-feed it to us. It has to be completely transparent to us that
it internally decompresses and also reads transparently across
concatenated streams.
When compressing, we only require that the output is smaller than
the input and not equal to it. We also require the wrapper to flush
the wrapped stream when it is flushed. We then test if the compressed
data can be unpacked again.
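Schematically, the decompression side of the test behaves like this
(the stream and wrapper helpers shown here are placeholders, not the
real libxfrm/libio API):

    /* Hypothetical helpers: memory_istream() exposes a buffer as an
       input stream, wrap_istream() layers a decompressor on top of it,
       read_byte() pulls exactly one byte from the wrapped stream. */
    stream_t *raw = memory_istream(packed_data, packed_size, TINY_BUFFER);
    stream_t *strm = wrap_istream(raw, decompressor);
    size_t i = 0;
    int c;

    /* Reading byte-by-byte must transparently yield the original data,
       even across concatenated compressed streams. */
    while ((c = read_byte(strm)) >= 0) {
        assert(i < sizeof(orig_data));
        assert((unsigned char)c == orig_data[i++]);
    }
    assert(i == sizeof(orig_data));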
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
We were previously building pack for unpack and vice versa.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Having it all in one buffer allows us to re-use the "generate GNU
record" function.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
By cobbling together the xattr lines manually in libtar, the need for
the helper (and thus the function itself) is removed.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Generate a simple tarball and compare it with a reference.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
In the istream implementation, automatically skip the padding when we
reach end-of-file. Also skip file data AND padding when we destroy the
object. Replace the remaining instances with a simple istream_skip and
remove the wrapper from libtar.
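For context, the padding is just the distance to the next 512-byte tar
record boundary, roughly like this (assuming istream_skip returns zero
on success; the exact signature may differ):

    /* Tar stores file data in 512 byte records; skip the padding that
       follows "size" payload bytes to reach the next record boundary. */
    static int skip_padding(istream_t *strm, uint64_t size)
    {
        uint64_t padding = (512 - (size % 512)) % 512;

        return padding > 0 ? istream_skip(strm, padding) : 0;
    }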
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The tar_istream_t reads the data of a file from a tar archive, given
its parsed header, and synthesizes zero bytes for sparse regions.
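Conceptually, the read path looks something like this (the helper names
are made up for illustration):

    /* If the current position falls into a hole of the sparse map,
       synthesize zeros; otherwise forward the read to the parent stream. */
    if (region_is_hole(tar->sparse, tar->offset)) {
        size_t count = hole_bytes_left(tar->sparse, tar->offset, want);

        memset(buffer, 0, count);
        tar->offset += count;
        return count;
    }

    return read_from_parent(tar, buffer, want);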
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Use read_number in the places that remain.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
There was some code duplication for extracting the sparse entry
fields from the start record and the subsequent extended record.
This commit introduces a data structure for both and unifies
the parsing code paths.
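The shared representation only needs the region offset and length per
entry, along the lines of (type and field names approximate the code,
they are not copied from it):

    /* One entry of the sparse map: a data region that is actually
       stored in the archive; everything in between is a hole. */
    typedef struct sparse_map_t {
        struct sparse_map_t *next;
        uint64_t offset;    /* position of the region in the file */
        uint64_t count;     /* number of bytes stored for the region */
    } sparse_map_t;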
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
- Use is_memory_zero from libutil
- Move checksum update function to tar writer code
- Move checksum verify function to tar reader code
- Only export the function to compute the checksum (see the sketch below)
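For reference, the checksum itself is the byte sum of the header with
the checksum field counted as spaces; a generic version, not the exact
exported function:

    /* Sum of all 512 header bytes, with the 8 byte checksum field
       itself (offsets 148-155) treated as ASCII spaces. */
    static unsigned int tar_compute_checksum(const unsigned char *header)
    {
        unsigned int sum = 0;
        size_t i;

        for (i = 0; i < 512; ++i)
            sum += (i >= 148 && i < 156) ? ' ' : header[i];

        return sum;
    }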
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
We have an "append_sparse" function in libio, with a fallback,
so use that.
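The idea behind such a helper, in rough form (POSIX sketch; the config
macro and the exact fallback strategy are assumptions, not the libio
code):

    #include <sys/types.h>
    #include <unistd.h>

    /* Append "size" bytes of hole at the end of the file. Extending the
       file with ftruncate() creates a sparse region where supported;
       the fallback simply writes literal zero bytes. */
    static int append_sparse(int fd, size_t size)
    {
        off_t end = lseek(fd, 0, SEEK_END);

        if (end < 0)
            return -1;
    #ifdef HAVE_FTRUNCATE
        return ftruncate(fd, end + (off_t)size);
    #else
        char zeros[1024] = { 0 };

        while (size > 0) {
            size_t diff = size < sizeof(zeros) ? size : sizeof(zeros);

            if (write(fd, zeros, diff) != (ssize_t)diff)
                return -1;
            size -= diff;
        }
        return 0;
    #endif
    }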
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Move it to a separate libxfrm library, where it can be independently
tested as well. The bulk of the new code is also mainly test cases
for the compressors.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Implement grab/drop functions to increase/decrease the reference count
and destroy the object if the count drops to 0.
Make sure that all objects that maintain internal references actually
grab that reference, duplicate it in the copy function and drop it in
the destroy handler.
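The scheme boils down to the usual pattern (generic sketch, not the
library's actual object type):

    typedef struct object_t {
        size_t refcount;
        void (*destroy)(struct object_t *obj);
    } object_t;

    /* Take an additional reference on an object. */
    static object_t *grab(object_t *obj)
    {
        if (obj != NULL)
            obj->refcount += 1;
        return obj;
    }

    /* Release a reference; destroy the object when the count hits 0. */
    static void drop(object_t *obj)
    {
        if (obj != NULL && --obj->refcount == 0)
            obj->destroy(obj);
    }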
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
mksquashfs generates extended inodes if a directory contains 256
entries. libsquashfs so far only generated extended inodes if there
was no other way to encode it. Mimic the behaviour of mksquashfs by
adding a threshold.
For this to work, the "sqfs_inode_set_xattr_index" function has to
be changed to not immediately try to demote inodes to basic types.
The fstree serialization is modified to do that itself if the index
is 0xFFFFFFFF and the target is not a directory inode.
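Schematically, the type selection becomes something like this (the
threshold check and condition names follow the description above, not
the exact library code):

    /* Follow mksquashfs: switch to an extended directory inode once the
       entry count reaches the threshold, even if a basic inode could
       still encode the directory. */
    if (dir->entry_count >= 256 || requires_extended_encoding(dir))
        inode->base.type = SQFS_INODE_EXT_DIR;
    else
        inode->base.type = SQFS_INODE_DIR;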
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Properly set the parent eof flag and not a local one that isn't
accessed at all.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The "from dir" and from "from file" code, as well as the "sort file"
code is specific to gensquashfs, so move them there and the test
cases as well.
The medium term idea is to reduce libfstree to a stub, merge it into
the generic writer and ultimately hoist that into libsquashfs.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
On Linux or BSD distributions we have a native version installed
via the package manager. On Windows, we can just build it from source
like the other libraries.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
When (during fragment deduplication) a fragment block is read back
from disk and unpacked, it can happen that it is _exactly_ the
given block size. The bounds check did '>=' instead of '>' and
failed in that case with a "data corruption" error.
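In essence, the fix is the strictness of a single comparison (the
variable names here are illustrative):

    /* A fragment block that unpacks to exactly the block size is valid;
       only a larger result indicates corruption. Previously: ">=". */
    if (unpacked_size > block_size)
        return SQFS_ERROR_CORRUPTED;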
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Initialize the output compressor pointer to NULL, so if the function
fails, the value is properly initialized to a NULL pointer instead
of relying on the caller to initialize it.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
As with the hex blob decoder, we need this once for tar and
once for the filemap xattr parser.
Simply add a single, central implementation to libutil, with a
simple unit test, and then use it in both libtar and gensquashfs.
|
|
Since we need it twice (once for tar, once for the filemap xattr
parser), add a single, central implementation to libutil, add a
unit test for that implementation and then use it in both libtar
and gensquashfs.
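Assuming this is the hex blob decoder referenced by the entry above, a
minimal version looks roughly like this (the libutil function's actual
name and signature may differ):

    #include <stddef.h>
    #include <stdint.h>

    static int hex_digit(int c)
    {
        if (c >= '0' && c <= '9')
            return c - '0';
        if (c >= 'a' && c <= 'f')
            return c - 'a' + 10;
        if (c >= 'A' && c <= 'F')
            return c - 'A' + 10;
        return -1;
    }

    /* Decode 2 * size hex characters from "in" into "out". Returns 0 on
       success, -1 if a non-hex character is encountered. */
    static int hex_decode(const char *in, uint8_t *out, size_t size)
    {
        while (size--) {
            int hi = hex_digit(*(in++));
            int lo = hex_digit(*(in++));

            if (hi < 0 || lo < 0)
                return -1;

            *(out++) = (uint8_t)((hi << 4) | lo);
        }
        return 0;
    }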
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Not all CPUs may be available for the current process. Some CPUs
may be offline, others may not be included in the process affinity
mask. In such cases too many threads will be created, which will
then compete unnecessarily for CPU time.
Use sched_getaffinity() to determine the correct number of threads
to create.
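On Linux, this boils down to counting the CPUs in the affinity mask
(sketch; the single-thread fallback on error is an assumption):

    #define _GNU_SOURCE
    #include <sched.h>

    /* Number of CPUs this process may actually run on, which can be
       smaller than the number of online CPUs. */
    static unsigned int usable_cpu_count(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);

        if (sched_getaffinity(0, sizeof(set), &set) != 0)
            return 1;

        return (unsigned int)CPU_COUNT(&set);
    }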
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Slightly modify the byte-for-byte comparison function to compare an
arbitrary range in a file and move it to libutil. Instead of calling
it for each block, let the block writer check an entire range at once
and compute the position and size of the reference range up front,
before looking for potential matches.
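Such a range comparison can be as simple as this (generic POSIX sketch,
not the exact libutil function):

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* Compare "size" bytes of two files starting at the given offsets.
       Returns 0 if equal, 1 if they differ, -1 on I/O error. */
    static int compare_range(int afd, uint64_t aoff,
                             int bfd, uint64_t boff, uint64_t size)
    {
        unsigned char abuf[4096], bbuf[4096];

        while (size > 0) {
            size_t diff = size < sizeof(abuf) ? (size_t)size : sizeof(abuf);

            if (pread(afd, abuf, diff, (off_t)aoff) != (ssize_t)diff)
                return -1;
            if (pread(bfd, bbuf, diff, (off_t)boff) != (ssize_t)diff)
                return -1;
            if (memcmp(abuf, bbuf, diff) != 0)
                return 1;

            aoff += diff;
            boff += diff;
            size -= diff;
        }
        return 0;
    }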
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Instead of open coding it, use the array_t type from libutil.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The zlib library uses a typedef'd uInt type for ranges, which may be
smaller than size_t.
The complicated clamping to the uInt range is technically not needed,
as the buffer size can grow at most to BUFSZ anyway, and it also
confuses Coverity.
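With the clamping gone, the conversion is a plain cast, which is safe
because the amount can never exceed BUFSZ (fragment using the zlib
stream API; variable names are illustrative):

    /* "used" is at most BUFSZ, which always fits into zlib's uInt, so a
       plain cast is sufficient and easier on static analysis than the
       old clamping logic. */
    strm->next_in = buffer;
    strm->avail_in = (uInt)used;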
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Instead of printing error messages to stderr, simply return an error
number that the caller then prints out using sqfs_perror.
The underlying rbtree already uses sqfs error numbers, so little
change is needed here.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Test against various invariants:
- Every non-root node must have a name
- The root node must not have a name
- The name must not be ".." or "."
- The name must not contain '/'
- The loop that chases parent pointers must terminate, i.e. we must
  never reach the starting state again (link loop); see the sketch
  below.
Furthermore, make sure the sum of all path components plus separators
does not overflow.
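One way to check the link loop invariant (hypothetical node type with a
parent pointer; the actual test exercises the library's own tree
functions):

    /* Floyd-style cycle detection on the parent chain: advance two
       cursors at different speeds; if they ever meet, there is a loop. */
    static int has_parent_loop(const node_t *n)
    {
        const node_t *slow = n;
        const node_t *fast = n->parent;

        while (fast != NULL && fast->parent != NULL) {
            if (fast == slow || fast->parent == slow)
                return 1;

            slow = slow->parent;
            fast = fast->parent->parent;
        }
        return 0;
    }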
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|