\input texinfo   @c -*-texinfo-*-
@c %**start of header
@setfilename plzip.info
@documentencoding ISO-8859-15
@settitle Plzip Manual
@finalout
@c %**end of header

@set UPDATED 12 April 2017
@set VERSION 1.6

@dircategory Data Compression
@direntry
* Plzip: (plzip).               Parallel compressor compatible with lzip
@end direntry

@ifnothtml
@titlepage
@title Plzip
@subtitle Parallel compressor compatible with lzip
@subtitle for Plzip version @value{VERSION}, @value{UPDATED}
@author by Antonio Diaz Diaz

@page
@vskip 0pt plus 1filll
@end titlepage

@contents
@end ifnothtml

@node Top
@top

This manual is for Plzip (version @value{VERSION}, @value{UPDATED}).

@menu
* Introduction::          Purpose and features of plzip
* Invoking plzip::        Command line interface
* Program design::        Internal structure of plzip
* File format::           Detailed format of the compressed file
* Memory requirements::   Memory required to compress and decompress
* Minimum file sizes::    Minimum file sizes required for full speed
* Trailing data::         Extra data appended to the file
* Examples::              A small tutorial with examples
* Problems::              Reporting bugs
* Concept index::         Index of concepts
@end menu

@sp 1
Copyright @copyright{} 2009-2017 Antonio Diaz Diaz.

This manual is free documentation: you have unlimited permission to copy, distribute and modify it.


@node Introduction
@chapter Introduction
@cindex introduction

Plzip is a massively parallel (multi-threaded) lossless data compressor based on the lzlib compression library, with a user interface similar to that of lzip, bzip2, or gzip.

Plzip can compress/decompress large files on multiprocessor machines much faster than lzip, at the cost of a slightly reduced compression ratio (0.4 to 2 percent larger compressed files). Note that the number of usable threads is limited by file size; on files larger than a few GB plzip can use hundreds of processors, but on files of only a few MB plzip is no faster than lzip (@pxref{Minimum file sizes}).

Plzip uses the lzip file format; the files produced by plzip are fully compatible with lzip-1.4 or newer, and can be rescued with lziprecover. The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability:

@itemize @bullet
@item
The lzip format provides very safe integrity checking and some data recovery means. The @uref{http://www.nongnu.org/lzip/manual/lziprecover_manual.html#Data-safety,,lziprecover} program can repair bit-flip errors (one of the most common forms of data corruption) in lzip files, and provides data recovery capabilities, including error-checked merging of damaged copies of a file.
@ifnothtml
@xref{Data safety,,,lziprecover}.
@end ifnothtml

@item
The lzip format is as simple as possible (but not simpler). The lzip manual provides the source code of a simple decompressor along with a detailed explanation of how it works, so that with the help of the lzip manual alone it would be possible for a digital archaeologist to extract the data from a lzip file long after quantum computers eventually render LZMA obsolete.

@item
Additionally the lzip reference implementation is copylefted, which guarantees that it will remain free forever.
@end itemize

A nice feature of the lzip format is that a corrupt byte is easier to repair the nearer it is to the beginning of the file. Therefore, with the help of lziprecover, losing an entire archive just because of a corrupt byte near the beginning is a thing of the past.
Plzip uses the same well-defined exit status values used by lzip and bzip2, which makes it safer than compressors returning ambiguous warning values (like gzip) when it is used as a back end for other programs like tar or zutils.

Plzip will automatically use the smallest possible dictionary size for each file without exceeding the given limit. Keep in mind that the decompression memory requirement is affected at compression time by the choice of dictionary size limit (@pxref{Memory requirements}).

When compressing, plzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz". When decompressing, plzip attempts to guess the name for the decompressed file from that of the compressed file as follows:

@multitable {anyothername} {becomes} {anyothername.out}
@item filename.lz  @tab becomes @tab filename
@item filename.tlz @tab becomes @tab filename.tar
@item anyothername @tab becomes @tab anyothername.out
@end multitable

(De)compressing a file is much like copying or moving it; therefore plzip preserves the access and modification dates, permissions, and, when possible, ownership of the file just as "cp -p" does. (If the user ID or the group ID can't be duplicated, the file permission bits S_ISUID and S_ISGID are cleared).

Plzip is able to read from some types of non-regular files if the @samp{--stdout} option is specified.

If no file names are specified, plzip compresses (or decompresses) from standard input to standard output. In this case, plzip will decline to write compressed output to a terminal, as this would be entirely incomprehensible and therefore pointless.

Plzip will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding uncompressed files. Integrity testing of concatenated compressed files is also supported.

LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never have been compressed. Decompressed is used to refer to data which have undergone the process of decompression.


@node Invoking plzip
@chapter Invoking plzip
@cindex invoking
@cindex options
@cindex usage
@cindex version

The format for running plzip is:

@example
plzip [@var{options}] [@var{files}]
@end example

@noindent
@samp{-} used as a @var{file} argument means standard input. It can be mixed with other @var{files} and is read just once, the first time it appears in the command line.

Plzip supports the following options:

@table @code
@item -h
@itemx --help
Print an informative help message describing the options and exit.

@item -V
@itemx --version
Print the version number of plzip on the standard output and exit.

@anchor{--trailing-error}
@item -a
@itemx --trailing-error
Exit with error status 2 if any remaining input is detected after decompressing the last member. Such remaining input is usually trailing garbage that can be safely ignored. @xref{concat-example}.

@anchor{--data-size}
@item -B @var{bytes}
@itemx --data-size=@var{bytes}
Set the size of the input data blocks, in bytes. The input file will be divided into chunks of this size before compression is performed. Valid values range from 8 KiB to 1 GiB. The default value is two times the dictionary size, except for option @samp{-0} where it defaults to 1 MiB. Plzip will reduce the dictionary size if it is larger than the chosen data size.
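As an illustration, the following minimal C sketch (written for this manual, not part of plzip; the function names are ours) restates the relation just described between the data size and the dictionary size:

@example
/* Illustrative sketch only: how the default data size is derived from
   the dictionary size, and how the dictionary size is reduced when it
   exceeds the chosen data size, per the description above. */
long default_data_size( const long dictionary_size, const int level )
  {
  if( level == 0 ) return 1L << 20;       /* 1 MiB for option '-0' */
  return 2 * dictionary_size;             /* twice the dictionary size */
  }

long effective_dictionary_size( const long dictionary_size,
                                const long data_size )
  {
  if( dictionary_size > data_size ) return data_size;
  return dictionary_size;
  }
@end example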
@item -c
@itemx --stdout
Compress or decompress to standard output; keep input files unchanged. If compressing several files, each file is compressed independently. This option is needed when reading from a named pipe (fifo) or from a device.

@item -d
@itemx --decompress
Decompress the specified file(s). If a file does not exist or can't be opened, plzip continues decompressing the rest of the files. If a file fails to decompress, plzip exits immediately without decompressing the rest of the files.

@item -f
@itemx --force
Force overwrite of output files.

@item -F
@itemx --recompress
Force re-compression of files whose name already has the @samp{.lz} or @samp{.tlz} suffix.

@item -k
@itemx --keep
Keep (don't delete) input files during compression or decompression.

@item -l
@itemx --list
Print the uncompressed size, compressed size, and percentage saved of the specified file(s). Trailing data are ignored. The values produced are correct even for multimember files. If more than one file is given, a final line containing the cumulative sizes is printed. With @samp{-v}, the dictionary size, the number of members in the file, and the amount of trailing data (if any) are also printed. With @samp{-vv}, the positions and sizes of each member in multimember files are also printed.

@samp{-lq} can be used to verify quickly (without decompressing) the structural integrity of the specified files. (Use @samp{--test} to verify the data integrity). @samp{-alq} additionally verifies that none of the specified files contain trailing data.

@item -m @var{bytes}
@itemx --match-length=@var{bytes}
Set the match length limit in bytes. After a match this long is found, the search is finished. Valid values range from 5 to 273. Larger values usually give better compression ratios but longer compression times.

@item -n @var{n}
@itemx --threads=@var{n}
Set the number of worker threads. Valid values range from 1 to "as many as your system can support". If this option is not used, plzip tries to detect the number of processors in the system and use it as the default value. @w{@samp{plzip --help}} shows the system's default value.

Note that the number of usable threads is limited to @w{ceil( file_size / data_size )} during compression (@pxref{Minimum file sizes}), and to the number of members in the input during decompression.

@item -o @var{file}
@itemx --output=@var{file}
When reading from standard input and @samp{--stdout} has not been specified, use @samp{@var{file}} as the virtual name of the uncompressed file. This produces a file named @samp{@var{file}} when decompressing, and a file named @samp{@var{file}.lz} when compressing.

@item -q
@itemx --quiet
Quiet operation. Suppress all messages.

@item -s @var{bytes}
@itemx --dictionary-size=@var{bytes}
Set the dictionary size limit in bytes. Plzip will use the smallest possible dictionary size for each file without exceeding this limit. Valid values range from 4 KiB to 512 MiB. Values 12 to 29 are interpreted as powers of two, meaning 2^12 to 2^29 bytes. Note that dictionary sizes are quantized. If the specified size does not match one of the valid sizes, it will be rounded upwards by adding up to @w{(@var{bytes} / 8)} to it, as sketched below.

For maximum compression you should use a dictionary size limit as large as possible, but keep in mind that the decompression memory requirement is affected at compression time by the choice of dictionary size limit.
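The following minimal C sketch (illustrative only, not taken from the plzip sources) shows one way to round a requested size up to the nearest valid dictionary size. Valid sizes are a power of two (the base size) minus a multiple of 1/16 of it, within the 4 KiB to 512 MiB range (@pxref{File format}); for the exact behaviour see the plzip source code:

@example
/* Round 'size' up to the nearest valid lzip dictionary size, assuming
   the quantization described above. */
unsigned long round_up_dictionary_size( const unsigned long size )
  {
  if( size <= 1UL << 12 ) return 1UL << 12;         /* 4 KiB minimum */
  if( size >= 1UL << 29 ) return 1UL << 29;         /* 512 MiB maximum */
  for( int log2_base = 12; log2_base <= 29; ++log2_base )
    for( int fraction = 7; fraction >= 0; --fraction )   /* sixteenths */
      {
      const unsigned long valid_size =
        ( 1UL << log2_base ) - fraction * ( 1UL << ( log2_base - 4 ) );
      if( valid_size >= size ) return valid_size;   /* smallest >= size */
      }
  return 1UL << 29;                                 /* not reached */
  }
@end example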
@item -t
@itemx --test
Check integrity of the specified file(s), but don't decompress them. This really performs a trial decompression and throws away the result. Use it together with @samp{-v} to see information about the file(s). If a file does not exist, can't be opened, or is a terminal, plzip continues checking the rest of the files. If a file fails the test, plzip may be unable to check the rest of the files.

@item -v
@itemx --verbose
Verbose mode.@*
When compressing, show the compression ratio for each file processed. A second @samp{-v} shows the progress of compression.@*
When decompressing or testing, further @samp{-v}s (up to 4) increase the verbosity level, showing status, compression ratio, dictionary size, decompressed size, and compressed size.

@item -0 .. -9
Set the compression parameters (dictionary size and match length limit) as shown in the table below. The default compression level is @samp{-6}. Note that @samp{-9} can be much slower than @samp{-0}. These options have no effect when decompressing.

The two-dimensional parameter space of LZMA can't be mapped to a linear scale optimal for all files. If your files are large, very repetitive, etc., you may need to use the @samp{--dictionary-size} and @samp{--match-length} options directly to achieve optimal performance.

@multitable {Level} {Dictionary size} {Match length limit}
@item Level @tab Dictionary size @tab Match length limit
@item -0 @tab 64 KiB  @tab 16 bytes
@item -1 @tab 1 MiB   @tab 5 bytes
@item -2 @tab 1.5 MiB @tab 6 bytes
@item -3 @tab 2 MiB   @tab 8 bytes
@item -4 @tab 3 MiB   @tab 12 bytes
@item -5 @tab 4 MiB   @tab 20 bytes
@item -6 @tab 8 MiB   @tab 36 bytes
@item -7 @tab 16 MiB  @tab 68 bytes
@item -8 @tab 24 MiB  @tab 132 bytes
@item -9 @tab 32 MiB  @tab 273 bytes
@end multitable

@item --fast
@itemx --best
Aliases for GNU gzip compatibility.
@end table

Numbers given as arguments to options may be followed by a multiplier and an optional @samp{B} for "byte".

Table of SI and binary prefixes (unit multipliers):

@multitable {Prefix} {kilobyte (10^3 = 1000)} {|} {Prefix} {kibibyte (2^10 = 1024)}
@item Prefix @tab Value               @tab | @tab Prefix @tab Value
@item k @tab kilobyte (10^3 = 1000)   @tab | @tab Ki @tab kibibyte (2^10 = 1024)
@item M @tab megabyte (10^6)          @tab | @tab Mi @tab mebibyte (2^20)
@item G @tab gigabyte (10^9)          @tab | @tab Gi @tab gibibyte (2^30)
@item T @tab terabyte (10^12)         @tab | @tab Ti @tab tebibyte (2^40)
@item P @tab petabyte (10^15)         @tab | @tab Pi @tab pebibyte (2^50)
@item E @tab exabyte (10^18)          @tab | @tab Ei @tab exbibyte (2^60)
@item Z @tab zettabyte (10^21)        @tab | @tab Zi @tab zebibyte (2^70)
@item Y @tab yottabyte (10^24)        @tab | @tab Yi @tab yobibyte (2^80)
@end multitable

@sp 1
Exit status: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, etc.), 2 to indicate a corrupt or invalid input file, 3 for an internal consistency error (e.g., bug) which caused plzip to panic.


@node Program design
@chapter Program design
@cindex program design

When compressing, plzip divides the input file into chunks and compresses as many chunks simultaneously as there are worker threads, creating a multimember compressed file.

When decompressing, plzip decompresses as many members simultaneously as there are worker threads. Files that were compressed with lzip will not be decompressed faster than using lzip (unless lzip's @samp{-b} option was used) because lzip usually produces single-member files, which can't be decompressed in parallel.

For each input file, a splitter thread and several worker threads are created, with the main thread acting as muxer (multiplexer) thread. A "packet courier" takes care of data transfers among threads and limits the maximum number of data blocks (packets) being processed simultaneously.

The splitter reads data blocks from the input file, and distributes them to the workers. The workers (de)compress the blocks received from the splitter. The muxer collects processed packets from the workers, and writes them to the output file.

When decompressing from a regular file, the splitter is removed and the workers read directly from the input file. If the output file is also a regular file, the muxer is also removed and the workers write directly to the output file. With these optimizations, the use of RAM is greatly reduced and the decompression speed of large files with many members is only limited by the number of processors available and by I/O speed.
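The following minimal C sketch (illustrative only; the in-flight limit and the function names are invented for this example, not taken from the plzip sources) shows the flow-control role played by the packet courier: a counting semaphore limits how many packets may exist at any one time, so that a fast splitter can't run arbitrarily far ahead of the workers and the muxer. The real packet courier also queues packets and preserves their order for the muxer.

@example
#include <semaphore.h>

enum { max_in_flight_packets = 64 };    /* assumed limit, not plzip's value */
static sem_t free_packet_slots;

void courier_init( void )
  { sem_init( &free_packet_slots, 0, max_in_flight_packets ); }

/* Called by the splitter before reading and queueing one more data block. */
void courier_reserve_slot( void )
  { sem_wait( &free_packet_slots ); }

/* Called by the muxer after a finished block has been written out. */
void courier_release_slot( void )
  { sem_post( &free_packet_slots ); }
@end example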
@node File format
@chapter File format
@cindex file format

Perfection is reached, not when there is no longer anything to add, but when there is no longer anything to take away.@*
--- Antoine de Saint-Exupery

@sp 1
In the diagram below, a box like this:

@verbatim
+---+
|   | <-- the vertical bars might be missing
+---+
@end verbatim

represents one byte; a box like this:

@verbatim
+==============+
|              |
+==============+
@end verbatim

represents a variable number of bytes.

@sp 1
A lzip file consists of a series of "members" (compressed data sets). The members simply appear one after another in the file, with no additional information before, between, or after them.

Each member has the following structure:

@verbatim
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ID string | VN | DS | LZMA stream | CRC32 |   Data size   |  Member size  |
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@end verbatim

All multibyte values are stored in little endian order.

@table @samp
@item ID string (the "magic" bytes)
A four byte string, identifying the lzip format, with the value "LZIP" (0x4C, 0x5A, 0x49, 0x50).

@item VN (version number, 1 byte)
Just in case something needs to be modified in the future. 1 for now.

@item DS (coded dictionary size, 1 byte)
The dictionary size is calculated by taking a power of 2 (the base size) and subtracting from it a fraction between 0/16 and 7/16 of the base size.@*
Bits 4-0 contain the base 2 logarithm of the base size (12 to 29).@*
Bits 7-5 contain the numerator of the fraction (0 to 7) to subtract from the base size to obtain the dictionary size.@*
Example: 0xD3 = 2^19 - 6 * 2^15 = 512 KiB - 6 * 32 KiB = 320 KiB@*
Valid values for dictionary size range from 4 KiB to 512 MiB. (See the decoding sketch after this table.)

@item LZMA stream
The LZMA stream, finished by an end-of-stream marker. Uses default values for encoder properties.
@ifnothtml
@xref{Stream format,,,lzip},
@end ifnothtml
@ifhtml
See @uref{http://www.nongnu.org/lzip/manual/lzip_manual.html#Stream-format,,Stream format}
@end ifhtml
for a complete description.

@item CRC32 (4 bytes)
CRC of the uncompressed original data.

@item Data size (8 bytes)
Size of the uncompressed original data.

@item Member size (8 bytes)
Total size of the member, including header and trailer. This field acts as a distributed index, allows the verification of stream integrity, and facilitates safe recovery of undamaged members from multimember files.
@end table
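As a worked example of the DS field, the following small C program (a sketch written for this manual, not part of plzip) decodes a coded dictionary size and reproduces the 0xD3 example above:

@example
#include <stdio.h>

/* Decode the DS byte: bits 4-0 hold the base 2 logarithm of the base
   size, bits 7-5 hold the numerator (in sixteenths of the base size)
   to subtract from it. */
unsigned long decode_dictionary_size( const unsigned char ds )
  {
  const unsigned long base_size = 1UL << ( ds & 0x1F );
  return base_size - ( ( ds >> 5 ) & 0x07 ) * ( base_size / 16 );
  }

int main( void )
  {
  /* Prints "0xD3 --> 327680 bytes" (320 KiB), matching the example above. */
  printf( "0xD3 --> %lu bytes\n", decode_dictionary_size( 0xD3 ) );
  return 0;
  }
@end example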
@node Memory requirements
@chapter Memory required to compress and decompress
@cindex memory requirements

The amount of memory required @strong{per thread} is approximately the following:

@itemize @bullet
@item
For compression at level -0: 1.5 MiB plus 3 times the data size (@pxref{--data-size}). Default is 4.5 MiB.

@item
For compression at other levels: 11 times the dictionary size plus 3 times the data size. Default is 136 MiB.

@item
For decompression of a regular (seekable) file to another regular file, or for testing of a regular file: the dictionary size.

@item
For testing of a non-seekable file or of standard input: the dictionary size plus up to 5 MiB.

@item
For decompression of a regular file to a non-seekable file or to standard output: the dictionary size plus up to 32 MiB.

@item
For decompression of a non-seekable file or of standard input: the dictionary size plus up to 35 MiB.
@end itemize


@node Minimum file sizes
@chapter Minimum file sizes required for full compression speed
@cindex minimum file sizes

When compressing, plzip divides the input file into chunks and compresses as many chunks simultaneously as there are worker threads, creating a multimember compressed file.

For this to work as expected (and roughly multiply the compression speed by the number of available processors), the uncompressed file must be at least as large as the number of worker threads times the chunk size (@pxref{--data-size}). Otherwise some processors will not get any data to compress, and compression will be proportionally slower. The maximum speed increase achievable on a given file is limited by the ratio @w{(file_size / data_size)}.

The following table shows the minimum uncompressed file size needed for full use of N processors at a given compression level, using the default data size for each level:

@multitable {Processors} {512 MiB} {512 MiB} {512 MiB} {512 MiB} {512 MiB} {512 MiB}
@headitem Processors @tab 2 @tab 4 @tab 8 @tab 16 @tab 64 @tab 256
@item Level
@item -0 @tab 2 MiB   @tab 4 MiB   @tab 8 MiB   @tab 16 MiB  @tab 64 MiB  @tab 256 MiB
@item -1 @tab 4 MiB   @tab 8 MiB   @tab 16 MiB  @tab 32 MiB  @tab 128 MiB @tab 512 MiB
@item -2 @tab 6 MiB   @tab 12 MiB  @tab 24 MiB  @tab 48 MiB  @tab 192 MiB @tab 768 MiB
@item -3 @tab 8 MiB   @tab 16 MiB  @tab 32 MiB  @tab 64 MiB  @tab 256 MiB @tab 1 GiB
@item -4 @tab 12 MiB  @tab 24 MiB  @tab 48 MiB  @tab 96 MiB  @tab 384 MiB @tab 1.5 GiB
@item -5 @tab 16 MiB  @tab 32 MiB  @tab 64 MiB  @tab 128 MiB @tab 512 MiB @tab 2 GiB
@item -6 @tab 32 MiB  @tab 64 MiB  @tab 128 MiB @tab 256 MiB @tab 1 GiB   @tab 4 GiB
@item -7 @tab 64 MiB  @tab 128 MiB @tab 256 MiB @tab 512 MiB @tab 2 GiB   @tab 8 GiB
@item -8 @tab 96 MiB  @tab 192 MiB @tab 384 MiB @tab 768 MiB @tab 3 GiB   @tab 12 GiB
@item -9 @tab 128 MiB @tab 256 MiB @tab 512 MiB @tab 1 GiB   @tab 4 GiB   @tab 16 GiB
@end multitable
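The relation behind the table can be written as two one-line helpers. The following C sketch (illustrative only; the names are ours, not from the plzip sources) computes the number of usable threads and the minimum file size for full speed. For example, at level -6 the default data size is 16 MiB (twice the 8 MiB dictionary), so full use of 4 processors needs a file of at least 4 * 16 MiB = 64 MiB, as shown in the table:

@example
/* Number of threads that can get work during compression:
   ceil( file_size / data_size ). */
long long usable_threads( const long long file_size,
                          const long long data_size )
  { return ( file_size + data_size - 1 ) / data_size; }

/* Smallest uncompressed file size that keeps num_threads processors busy. */
long long min_file_size_for_full_speed( const int num_threads,
                                        const long long data_size )
  { return (long long)num_threads * data_size; }
@end example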
@node Trailing data
@chapter Extra data appended to the file
@cindex trailing data

Sometimes extra data are found appended to a lzip file after the last member. Such trailing data may be:

@itemize @bullet
@item
Padding added to make the file size a multiple of some block size, for example when writing to a tape. It is safe to append any amount of padding zero bytes to a lzip file.

@item
Useful data added by the user; a cryptographically secure hash, a description of file contents, etc. It is safe to append any amount of text to a lzip file as long as the text does not begin with the string "LZIP", and does not contain any zero bytes (null characters). Nonzero bytes and zero bytes can't be safely mixed in trailing data.

@item
Garbage added by some not totally successful copy operation.

@item
Malicious data added to the file in order to make its total size and hash value (for a chosen hash) coincide with those of another file.

@item
In very rare cases, trailing data could be the corrupt header of another member. In multimember or concatenated files the probability of corruption happening in the magic bytes is 5 times smaller than the probability of getting a false positive caused by the corruption of the integrity information itself. Therefore it can be considered to be below the noise level.
@end itemize

Trailing data are in no way part of the lzip file format, but tools reading lzip files are expected to behave as correctly and usefully as possible in the presence of trailing data.

Trailing data can be safely ignored in most cases. In some cases, like that of user-added data, they are expected to be ignored. In those cases where a file containing trailing data must be rejected, the option @samp{--trailing-error} can be used. @xref{--trailing-error}.


@node Examples
@chapter A small tutorial with examples
@cindex examples

WARNING! Even if plzip is bug-free, other causes may result in a corrupt compressed file (bugs in the system libraries, memory errors, etc.). Therefore, if the data you are going to compress are important, give the @samp{--keep} option to plzip and don't remove the original file until you verify the compressed file with a command like @w{@samp{plzip -cd file.lz | cmp file -}}.

@sp 1
@noindent
Example 1: Replace a regular file with its compressed version @samp{file.lz} and show the compression ratio.

@example
plzip -v file
@end example

@sp 1
@noindent
Example 2: Like example 1 but the created @samp{file.lz} has a block size of 1 MiB. The compression ratio is not shown.

@example
plzip -B 1MiB file
@end example

@sp 1
@noindent
Example 3: Restore a regular file from its compressed version @samp{file.lz}. If the operation is successful, @samp{file.lz} is removed.

@example
plzip -d file.lz
@end example

@sp 1
@noindent
Example 4: Verify the integrity of the compressed file @samp{file.lz} and show status.

@example
plzip -tv file.lz
@end example

@sp 1
@noindent
Example 5: Compress the whole device @samp{/dev/sdc} and send the output to @samp{file.lz}.

@example
plzip -c /dev/sdc > file.lz
@end example

@sp 1
@anchor{concat-example}
@noindent
Example 6: The right way of concatenating the decompressed output of two or more compressed files. @xref{Trailing data}.

@example
Don't do this
  cat file1.lz file2.lz file3.lz | plzip -d

Do this instead
  plzip -cd file1.lz file2.lz file3.lz
@end example

@sp 1
@noindent
Example 7: Decompress @samp{file.lz} partially until 10 KiB of decompressed data are produced.

@example
plzip -cd file.lz | dd bs=1024 count=10
@end example

@sp 1
@noindent
Example 8: Decompress @samp{file.lz} partially from decompressed byte 10000 to decompressed byte 15000 (5000 bytes are produced).

@example
plzip -cd file.lz | dd bs=1000 skip=10 count=5
@end example


@node Problems
@chapter Reporting bugs
@cindex bugs
@cindex getting help

There are probably bugs in plzip. There are certainly errors and omissions in this manual. If you report them, they will get fixed. If you don't, no one will ever know about them and they will remain unfixed for all eternity, if not longer.

If you find a bug in plzip, please send electronic mail to @email{lzip-bug@@nongnu.org}. Include the version number, which you can find by running @w{@code{plzip --version}}.


@node Concept index
@unnumbered Concept index
@printindex cp

@bye