This was already in the original block processor but got dropped by
accident when restructuring it.
The problem manifests itself when manually submitting fragment blocks: they no longer receive correct I/O queue tickets, clog up the queue, and the processor eventually throws an internal error.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
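A minimal sketch of the kind of I/O queue ticket assignment the fix restores; the structure and field names here are assumptions made for illustration, not the actual libsquashfs API:

```c
#include <stdint.h>

struct block {
	uint32_t io_seq_num;   /* position in the I/O queue */
	/* ... payload, flags ... */
};

struct processor {
	uint32_t io_seq_num;   /* next ticket to hand out */
	/* ... thread pool, queues ... */
};

/* Every block handed to the thread pool carries a monotonically
 * increasing sequence number ("I/O queue ticket"), so completed blocks
 * can be written out in submission order.  Manually submitted fragment
 * blocks need a ticket as well, otherwise they pile up in the queue. */
static void assign_io_ticket(struct processor *proc, struct block *blk)
{
	blk->io_seq_num = proc->io_seq_num++;
}
```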
Only clean up the fragment if it hasn't been re-assigned as the current fragment block. The NULL check is definitely wrong, because we no longer set the pointer to NULL when it is re-assigned.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
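A hedged sketch of what such a cleanup guard can look like; the names (`frag_block`, `release_block`) are assumptions for illustration only:

```c
struct block;

struct processor {
	struct block *frag_block;   /* current, unfinished fragment block */
	/* ... */
};

void release_block(struct processor *proc, struct block *blk);

/* A fragment that has been promoted to be the current fragment block is
 * still referenced and must not be freed here.  Comparing pointers
 * catches that case; a plain NULL check no longer does, because the
 * pointer isn't set to NULL when it is re-assigned. */
void cleanup_fragment(struct processor *proc, struct block *frag)
{
	if (frag != NULL && frag != proc->frag_block)
		release_block(proc, frag);
}
```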
Dequeuing won't work if we have a backlog of 1 or 2 and those blocks are in use for internal buffering. Take that into account, similar to what the sync code already does. Also bump the minimum backlog to 3, just to make absolutely sure we cannot end up in a dequeue loop when trying to allocate a block.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
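A sketch of an allocation path with that guard, under assumed names (`backlog`, `frag_block`, `blk_current`, `dequeue_one`); this is an illustration of the idea, not the actual libsquashfs code:

```c
#include <stddef.h>

struct block;

struct processor {
	struct block *frag_block;    /* unfinished fragment block buffer */
	struct block *blk_current;   /* current data block buffer */
	size_t backlog;              /* blocks handed out, not yet recycled */
	size_t max_backlog;          /* must be at least 3 */
};

int dequeue_one(struct processor *proc);   /* wait for one completed block */
struct block *get_block(struct processor *proc);

/* Blocks parked as the current data block or the unfinished fragment
 * block are counted in the backlog, but they never come back from the
 * thread pool.  Ignoring that, a backlog of 1 or 2 made up entirely of
 * those buffers would make the dequeue loop spin forever.  Requiring
 * max_backlog >= 3 guarantees room for at least one block that actually
 * goes through the pool. */
struct block *allocate_block(struct processor *proc)
{
	size_t held = (proc->frag_block != NULL) + (proc->blk_current != NULL);

	while (proc->backlog >= proc->max_backlog && proc->backlog > held) {
		if (dequeue_one(proc))
			return NULL;
	}

	return get_block(proc);
}
```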
In the hash-table equals callback, if the hash and size match, do an exact, byte-for-byte comparison of the fragment in question. The fragment can either be in a fragment block that is in flight (for which we have the in-flight list), in the current, unfinished fragment block, or it can already be on disk.
In the latter case, the fragment block is resolved through the fragment table, read back from disk into a scratch buffer and decompressed. After that, the fragment is checked for byte-for-byte equality with the one we resolved through the hash table.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
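A rough sketch of such an equals callback; everything here (the names, the `locate_fragment_data` helper, the key layout) is an assumption made for illustration:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct processor;

/* Hypothetical helper: find the payload of an already-known fragment,
 * either in an in-flight fragment block, in the current unfinished
 * fragment block, or by reading the fragment block back from disk into
 * a scratch buffer and decompressing it.  Returns NULL on failure. */
const uint8_t *locate_fragment_data(struct processor *proc,
				    uint32_t index, uint32_t offset,
				    size_t size);

struct frag_key {
	uint32_t index;    /* fragment block index */
	uint32_t offset;   /* offset within the block */
	uint32_t size;
	uint32_t hash;
};

/* Equals callback: only runs when the hash table found a bucket match.
 * If size and hash agree, fall through to a byte-for-byte comparison of
 * the actual fragment data. */
int frag_equals(struct processor *proc, const struct frag_key *existing,
		const uint8_t *candidate, size_t cand_size, uint32_t cand_hash)
{
	const uint8_t *data;

	if (existing->size != cand_size || existing->hash != cand_hash)
		return 0;

	data = locate_fragment_data(proc, existing->index, existing->offset,
				    existing->size);
	if (data == NULL)
		return 0;

	return memcmp(data, candidate, cand_size) == 0;
}
```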
If we want full, byte-for-byte verification of fragments during de-duplication, we need to check back with the blocks already written to disk, or with the ones that are still in flight.
The previous, extremely hacky approach simply locked up the thread pool and inspected its queues. For the new approach, we treat the thread pool as completely opaque and don't try to touch it.
This commit modifies the block processor to keep a duplicate copy of each submitted fragment block around; the copies are cleaned up once the block is dequeued and written to disk. So instead of touching the thread pool, we can simply inspect the in-flight block list and the current block, before resorting to reading fragment blocks back from the file.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
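A sketch of the in-flight bookkeeping described above, with made-up names (`inflight`, `num_inflight`, `remember_inflight`); treat it as an illustration of the idea, not the actual implementation:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct block {
	uint32_t io_seq_num;
	size_t size;
	uint8_t data[];
};

struct processor {
	struct block **inflight;   /* copies of submitted fragment blocks */
	size_t num_inflight;
	/* ... */
};

/* On submission: keep a private copy of the fragment block so the
 * de-duplication code can inspect its payload without touching the
 * (opaque) thread pool. */
static int remember_inflight(struct processor *proc, const struct block *blk)
{
	struct block *copy = malloc(sizeof(*blk) + blk->size);
	struct block **arr;

	if (copy == NULL)
		return -1;
	memcpy(copy, blk, sizeof(*blk) + blk->size);

	arr = realloc(proc->inflight,
		      sizeof(arr[0]) * (proc->num_inflight + 1));
	if (arr == NULL) {
		free(copy);
		return -1;
	}
	arr[proc->num_inflight++] = copy;
	proc->inflight = arr;
	return 0;
}

/* On dequeue: once a block has been written to disk, its copy is no
 * longer needed and is dropped from the in-flight list. */
static void forget_inflight(struct processor *proc, uint32_t io_seq_num)
{
	size_t i;

	for (i = 0; i < proc->num_inflight; ++i) {
		if (proc->inflight[i]->io_seq_num == io_seq_num) {
			free(proc->inflight[i]);
			proc->inflight[i] = proc->inflight[--proc->num_inflight];
			return;
		}
	}
}
```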
Simply count the number of blocks we hand out (malloc'ed or recycled) and decrease the counter when we put blocks back for recycling.
The sync() part becomes a little more complicated: we can get stuck with a backlog of 1 or 2 because a fragment or current block buffer is still in use. We also need to account for this when creating the processor, because we need to be able to request at least 2 blocks without stalling.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
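A minimal sketch of the counting scheme and the sync loop it implies; the names (`backlog`, `recycle_block`, `dequeue_one`) are illustrative assumptions, not the actual API:

```c
#include <stddef.h>

struct block;

struct processor {
	struct block *frag_block;    /* unfinished fragment block, if any */
	struct block *blk_current;   /* current data block, if any */
	size_t backlog;              /* blocks handed out, not yet recycled */
};

struct block *fetch_or_alloc_block(struct processor *proc);
void store_for_reuse(struct processor *proc, struct block *blk);
int dequeue_one(struct processor *proc);   /* wait for one completed block */

/* Handing a block out (freshly malloc'ed or recycled) bumps the counter... */
struct block *get_block(struct processor *proc)
{
	struct block *blk = fetch_or_alloc_block(proc);

	if (blk != NULL)
		proc->backlog += 1;
	return blk;
}

/* ...and putting one back for recycling decreases it again. */
void recycle_block(struct processor *proc, struct block *blk)
{
	store_for_reuse(proc, blk);
	proc->backlog -= 1;
}

/* sync(): drain everything still in the thread pool.  Blocks held as the
 * fragment or current block buffer stay counted in the backlog, so the
 * loop has to stop once only those remain. */
int processor_sync(struct processor *proc)
{
	for (;;) {
		size_t held = (proc->frag_block != NULL) +
			      (proc->blk_current != NULL);

		if (proc->backlog <= held)
			break;
		if (dequeue_one(proc))
			return -1;
	}
	return 0;
}
```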
This makes a cleaner separation between common code, frontend code and backend code.
The "is this byte blob zero" function is moved out to libutil (with a test case and everything), with a more optimized implementation.
Signed-off-by: David Oberhollenzer <david.oberhollenzer@sigma-star.at>
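A sketch of an optimized "is this byte blob zero" check of the kind described, comparing word-sized chunks before falling back to single bytes; the function name is made up and not necessarily the one libutil exports:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns true if all `size` bytes at `blob` are zero.  The bulk of the
 * buffer is checked one 64-bit word at a time; memcpy is used so
 * unaligned buffers are handled safely, and the tail is checked byte
 * by byte. */
bool blob_is_zero(const void *blob, size_t size)
{
	const unsigned char *ptr = blob;
	uint64_t word;

	while (size >= sizeof(word)) {
		memcpy(&word, ptr, sizeof(word));
		if (word != 0)
			return false;
		ptr += sizeof(word);
		size -= sizeof(word);
	}

	while (size--) {
		if (*(ptr++) != 0)
			return false;
	}

	return true;
}
```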