The strategy is as follows:
- At the beginning of every file, remember the current position
- Once a file is done, scan the list of existing files for the following:
- Look for an existing file that has a block with the same size and
checksum as the first non-sparse block of the current file
- After that, every block in the current file has to match in size and
checksum the ones in the file that we found, from that point onward
- sparse blocks in either file are skipped
- If we found a match, we update the current file to point to the first
matching block and rewind the squashfs image to remove the newly written
data
This strategy should in theory be able to find an existing file where the
on-disk data *contains* the on-disk data of the current file.
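
Very roughly, and with purely illustrative identifiers (blk_info_t,
file_info_t and find_match are not the real names from the code base),
the scan could look like this:

    /* Purely illustrative sketch of the block matching scan. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
    	uint32_t size;      /* on-disk size of the block, 0 if sparse */
    	uint32_t checksum;  /* checksum of the block data */
    } blk_info_t;

    typedef struct file_info_t {
    	struct file_info_t *next;
    	size_t num_blocks;
    	blk_info_t *blocks;
    } file_info_t;

    static bool blocks_match(const blk_info_t *a, const blk_info_t *b)
    {
    	return a->size == b->size && a->checksum == b->checksum;
    }

    /* Find a previously written file whose block sequence contains the
     * block sequence of 'cur', starting at index '*idx_out'. Sparse
     * blocks on either side are skipped. Returns NULL if no match. */
    static file_info_t *find_match(file_info_t *list, const file_info_t *cur,
    			       size_t *idx_out)
    {
    	size_t first = 0;

    	/* skip leading sparse blocks of the current file */
    	while (first < cur->num_blocks && cur->blocks[first].size == 0)
    		++first;
    	if (first == cur->num_blocks)
    		return NULL;

    	for (file_info_t *it = list; it != NULL; it = it->next) {
    		for (size_t i = 0; i < it->num_blocks; ++i) {
    			if (!blocks_match(&it->blocks[i], &cur->blocks[first]))
    				continue;

    			/* from here on, every non-sparse block must match */
    			size_t j = i, k = first;

    			while (k < cur->num_blocks && j < it->num_blocks) {
    				if (cur->blocks[k].size == 0) { ++k; continue; }
    				if (it->blocks[j].size == 0) { ++j; continue; }
    				if (!blocks_match(&it->blocks[j], &cur->blocks[k]))
    					break;
    				++j;
    				++k;
    			}
    			/* trailing sparse blocks of the current file are fine */
    			while (k < cur->num_blocks && cur->blocks[k].size == 0)
    				++k;

    			if (k == cur->num_blocks) {
    				*idx_out = i;
    				return it;
    			}
    		}
    	}
    	return NULL;
    }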
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The strategy is simple:
- The data writer functions that write data/fragment blocks get
  access to the list of files.
- When writing a fragment, we look for an already written file that has
a fragment with the same size and checksum.
- If we find one, we throw away the fragment and reuse the existing one.
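
A rough sketch of that lookup, again with made-up names (frag_info_t and
find_fragment are illustrative, not the actual identifiers):

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: record of an already written tail-end fragment. */
    typedef struct {
    	uint32_t size;         /* fragment size in bytes */
    	uint32_t checksum;     /* checksum of the fragment data */
    	uint32_t frag_index;   /* fragment block it was packed into */
    	uint32_t frag_offset;  /* byte offset inside that block */
    } frag_info_t;

    /* Return the index of an existing fragment with the same size and
     * checksum, or -1 if there is none and the new one must be stored. */
    static long find_fragment(const frag_info_t *frags, size_t count,
    			  uint32_t size, uint32_t checksum)
    {
    	for (size_t i = 0; i < count; ++i) {
    		if (frags[i].size == size && frags[i].checksum == checksum)
    			return (long)i;
    	}
    	return -1;
    }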
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
This commit attempts to make the generic table writer more readable.
A few changes are made, including heap allocation of the block list.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Requires that config.h be included before other headers, since the macro
_FILE_OFFSET_BITS changes the definitions of things like 'struct stat'.
I chose to simply include it at the top of every C file and
immediately after the double-inclusion guards of every header.
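
The resulting include pattern looks roughly like this ("foo" is just a
placeholder name):

    /* foo.h -- example header */
    #ifndef FOO_H
    #define FOO_H

    #include "config.h"     /* first thing after the inclusion guard:
    			       _FILE_OFFSET_BITS must be set before any
    			       system header defines struct stat & friends */
    #include <sys/stat.h>

    /* ... declarations ... */

    #endif /* FOO_H */

    /* foo.c -- example source file */
    #include "config.h"     /* first include in every C file */
    #include "foo.h"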
Signed-off-by: Matt Turner <mattst88@gmail.com>
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
If read_retry fails to read the expected amount of data (EOF or otherwise),
it is almost always an error.
This commit renames read_retry to read_data and moves error handling
into the function, making a lot of error handling code redundant.
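The intended shape of such a wrapper is roughly the following; the actual
prototype and error reporting in the code base may differ:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch: read exactly 'size' bytes or report an error.
     * 'errstr' names the file/object for the error message. */
    static int read_data(const char *errstr, int fd, void *buffer, size_t size)
    {
    	char *ptr = buffer;

    	while (size > 0) {
    		ssize_t ret = read(fd, ptr, size);

    		if (ret < 0) {
    			if (errno == EINTR)
    				continue;
    			perror(errstr);
    			return -1;
    		}
    		if (ret == 0) {
    			fprintf(stderr, "%s: unexpected end of file\n", errstr);
    			return -1;
    		}
    		ptr += ret;
    		size -= (size_t)ret;
    	}
    	return 0;
    }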
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
If write_retry fails to write everything, it is *always* an error.
This commit renames write_retry to write_data and moves error handling
into the function, making a lot of error handling code redundant.
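The write-side wrapper follows the same pattern; a sketch, again with no
claim to match the real prototype:

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch: write exactly 'size' bytes or report an error. */
    static int write_data(const char *errstr, int fd, const void *buffer,
    		      size_t size)
    {
    	const char *ptr = buffer;

    	while (size > 0) {
    		ssize_t ret = write(fd, ptr, size);

    		if (ret < 0) {
    			if (errno == EINTR)
    				continue;
    			perror(errstr);
    			return -1;
    		}
    		if (ret == 0) {
    			fprintf(stderr, "%s: truncated write\n", errstr);
    			return -1;
    		}
    		ptr += ret;
    		size -= (size_t)ret;
    	}
    	return 0;
    }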
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The added flags allow controlling the following on a per-file level:
- forcing a file to be written uncompressed
- forcing a file to not have a fragment, i.e. the last truncated block
  actually being written as a block
- padding a file so that it is aligned to the device block size
The flags are not yet exposed to anything user controllable (such as
command line flags).
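
One plausible shape for such flags, with invented names and values:

    #include <stdint.h>

    /* Invented names/values; the real flags in the code base may differ. */
    typedef enum {
    	FILE_FLAG_FORCE_UNCOMPRESSED = 0x01, /* store the data uncompressed */
    	FILE_FLAG_NO_FRAGMENT        = 0x02, /* write the truncated tail as a
    						regular block, not a fragment */
    	FILE_FLAG_ALIGN_TO_BLOCK     = 0x04, /* pad so the data is aligned to
    						the device block size */
    } file_flags_t;

    typedef struct {
    	uint32_t flags;   /* bitwise OR of file_flags_t values */
    	/* ... other per-file settings ... */
    } file_settings_t;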
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
The data writer sparse block code can take advantage of the fact that it
can add a block size instead of a fragment and doesn't have to initialize
the fragment location.
In return, the tree node to inode serialization code doesn't need a
special case for sparse files anymore and can now also handle files that
are forced to not have a fragment.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
This commit broadly does the following things:
- Rename and move the sparse mapping structure to libutil
- Add a function to the data writer for writing condensed versions
of sparse files, given the mapping.
- This shares code with the already existing function for regular
files. The shared code is moved to a common helper function.
- Add support to tar2sqfs for repacking sparse files.
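
A sparse mapping of the kind described might look something like this
(the names and layout are illustrative):

    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative sparse map: each entry describes one run of actual data
     * in the logical file; everything between runs is implicitly zero. */
    typedef struct sparse_map_t {
    	struct sparse_map_t *next;
    	uint64_t offset;   /* logical offset of the data run in the file */
    	uint64_t count;    /* number of bytes of real data at that offset */
    } sparse_map_t;

    /* Append a run to the map; returns the (possibly new) list head or
     * NULL if allocation failed. */
    static sparse_map_t *sparse_map_append(sparse_map_t *list, uint64_t offset,
    				       uint64_t count)
    {
    	sparse_map_t *ent = calloc(1, sizeof(*ent)), *it;

    	if (ent == NULL)
    		return NULL;

    	ent->offset = offset;
    	ent->count = count;

    	if (list == NULL)
    		return ent;

    	for (it = list; it->next != NULL; it = it->next)
    		;
    	it->next = ent;
    	return list;
    }

With such a map and the condensed data stream, a writer can walk the file
block by block and substitute zero blocks wherever no map entry overlaps
the current range.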
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
This commit adds support for packing sparse files into squashfs images
as follows:
- In the data writer: simply detect zero blocks and write a zero to the
block size field and don't emit any data. Record the number of bytes
saved this way. For fragments, set the fragment offset to invalid.
- In the inode writer: write out the number of bytes saved for sparse
files. If there should be a fragment but there is none, append a block
count of 0.
- In the data reader: if the block size is 0, read nothing from disk and
emit an empty block. Do the same if the fragment is missing.
- In the inode reader: restore the number of bytes saved for sparse files.
Sparse files can be packed and unpacked, but unpacking does not yet
create sparse output files.
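
For the data writer side, the zero-block detection boils down to something
like the following hypothetical helper:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper: decide whether a block consists entirely of
     * zero bytes. If it does, the writer stores a block size of 0, adds
     * the block size to the per-file count of sparse bytes and emits no
     * data at all. */
    static bool is_zero_block(const uint8_t *data, size_t size)
    {
    	size_t i;

    	for (i = 0; i < size; ++i) {
    		if (data[i] != 0)
    			return false;
    	}
    	return true;
    }

On the reading side, a stored block size of 0 is the cue to produce a
zero-filled block without touching the image.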
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|
|
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
|