author    Daniel Baumann <mail@daniel-baumann.ch>  2015-11-06 11:24:27 +0000
committer Daniel Baumann <mail@daniel-baumann.ch>  2015-11-06 11:24:27 +0000
commit    f86afc6aa8317a9ed84be9f8c719dcc64cbd22d1
tree      1e005d900e65bf8ef011e8c613b9cb6ca6fea24e /doc/clzip.texinfo
parent    Initial commit.
Adding upstream version 1.0~rc2. (tag: upstream/1.0_rc2)
Signed-off-by: Daniel Baumann <mail@daniel-baumann.ch>
Diffstat (limited to 'doc/clzip.texinfo')
-rw-r--r-- doc/clzip.texinfo | 462
1 file changed, 462 insertions(+), 0 deletions(-)
diff --git a/doc/clzip.texinfo b/doc/clzip.texinfo
new file mode 100644
index 0000000..e9e6339
--- /dev/null
+++ b/doc/clzip.texinfo
@@ -0,0 +1,462 @@
+\input texinfo @c -*-texinfo-*-
+@c %**start of header
+@setfilename clzip.info
+@settitle Clzip Manual
+@finalout
+@c %**end of header
+
+@set UPDATED 21 February 2010
+@set VERSION 1.0-rc2
+
+@dircategory Data Compression
+@direntry
+* Clzip: (clzip). Data compressor based on the LZMA algorithm
+@end direntry
+
+
+@titlepage
+@title Clzip
+@subtitle A data compressor based on the LZMA algorithm
+@subtitle for Clzip version @value{VERSION}, @value{UPDATED}
+@author by Antonio Diaz Diaz
+
+@page
+@vskip 0pt plus 1filll
+@end titlepage
+
+@contents
+
+@node Top
+@top
+
+This manual is for Clzip (version @value{VERSION}, @value{UPDATED}).
+
+@menu
+* Introduction:: Purpose and features of clzip
+* Algorithm:: How clzip compresses the data
+* Invoking Clzip:: Command line interface
+* File Format:: Detailed format of the compressed file
+* Examples:: A small tutorial with examples
+* Problems:: Reporting bugs
+* Concept Index:: Index of concepts
+@end menu
+
+@sp 1
+Copyright @copyright{} 2010 Antonio Diaz Diaz.
+
+This manual is free documentation: you have unlimited permission
+to copy, distribute and modify it.
+
+
+@node Introduction
+@chapter Introduction
+@cindex introduction
+
+Clzip is a lossless data compressor based on the LZMA algorithm, with
+very safe integrity checking and a user interface similar to that of
+gzip or bzip2. Clzip decompresses almost as fast as gzip and compresses
+better than bzip2, which makes it well suited for software distribution
+and data archiving.
+
+Clzip replaces every file given in the command line with a compressed
+version of itself, with the name "original_name.lz". Each compressed
+file has the same modification date, permissions, and, when possible,
+ownership as the corresponding original, so that these properties can be
+correctly restored at decompression time. Clzip is able to read from some
+types of non-regular files if the @samp{--stdout} option is specified.
+
+If no file names are specified, clzip compresses (or decompresses) from
+standard input to standard output. In this case, clzip will decline to
+write compressed output to a terminal, as this would be entirely
+incomprehensible and therefore pointless.
+
+Clzip will correctly decompress a file which is the concatenation of two
+or more compressed files. The result is the concatenation of the
+corresponding uncompressed files. Integrity testing of concatenated
+compressed files is also supported.
+
+Clzip can produce multimember files and safely recover, with lziprecover,
+the undamaged members in case of file damage. Clzip can also split the
+compressed output into volumes of a given size, even when reading from
+standard input. This allows the direct creation of multivolume
+compressed tar archives.
+
+The amount of memory required for compression is about 5 MiB plus 1 or 2
+times the dictionary size limit (1 if the input file size is less than
+the dictionary size limit, else 2), plus 8 times the dictionary size
+really used. For decompression it is a little more than the dictionary
+size really used. Clzip will automatically use the smallest possible
+dictionary size without exceeding the given limit. It is important to
+appreciate that the decompression memory requirement is affected at
+compression time by the choice of dictionary size limit.
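+
+As a rough illustration of this rule of thumb, the following sketch (in
+C, purely illustrative; the function and argument names are not part of
+clzip) computes the estimate described above:
+
+@verbatim
+#include <stdint.h>
+
+/* Approximate compression memory in bytes, per the rule of thumb above:
+   about 5 MiB, plus 1 or 2 times the dictionary size limit, plus 8
+   times the dictionary size really used. */
+uint64_t approx_compression_memory( const uint64_t file_size,
+                                    const uint64_t dict_size_limit,
+                                    const uint64_t dict_size_used )
+  {
+  const uint64_t base = 5ULL << 20;             /* about 5 MiB */
+  const int factor = ( file_size < dict_size_limit ) ? 1 : 2;
+  return base + factor * dict_size_limit + 8 * dict_size_used;
+  }
+@end verbatim
+
+For example, compressing a file larger than 32 MiB with @samp{-9}
+(32 MiB dictionary size limit) needs roughly 5 + 2*32 + 8*32 = 325 MiB,
+while decompressing the result needs little more than 32 MiB.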
+
+When decompressing, clzip attempts to guess the name for the decompressed
+file from that of the compressed file as follows:
+
+@multitable {anyothername} {becomes} {anyothername.out}
+@item filename.lz @tab becomes @tab filename
+@item filename.tlz @tab becomes @tab filename.tar
+@item anyothername @tab becomes @tab anyothername.out
+@end multitable
+
+As a self-check for your protection, clzip stores in the member trailer
+the 32-bit CRC of the original data and the size of the original data,
+to make sure that the decompressed version of the data is identical to
+the original. This guards against corruption of the compressed data, and
+against undetected bugs in clzip (hopefully very unlikely). The chances
+of data corruption going undetected are microscopic, less than one
+chance in 4000 million for each member processed. Be aware, though, that
+the check occurs upon decompression, so it can only tell you that
+something is wrong. It can't help you recover the original uncompressed
+data.
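+
+The following sketch shows the kind of check involved; it is
+illustrative C, not code from clzip, and assumes the CRC used is the
+standard CRC-32 (polynomial 0xEDB88320, the same one used by gzip):
+
+@verbatim
+#include <stddef.h>
+#include <stdint.h>
+
+/* Bitwise CRC-32 of a buffer. Comparing this value (and the buffer
+   size) with the values stored in the member trailer is, in essence,
+   the integrity check performed after decompression. A real
+   implementation would use a lookup table for speed. */
+uint32_t crc32_of_buffer( const uint8_t * const buf, const size_t size )
+  {
+  uint32_t crc = 0xFFFFFFFFU;
+  for( size_t i = 0; i < size; ++i )
+    {
+    crc ^= buf[i];
+    for( int k = 0; k < 8; ++k )
+      crc = ( crc & 1 ) ? ( crc >> 1 ) ^ 0xEDB88320U : crc >> 1;
+    }
+  return crc ^ 0xFFFFFFFFU;
+  }
+@end verbatim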
+
+Return values: 0 for a normal exit, 1 for environmental problems (file
+not found, invalid flags, I/O errors, etc.), 2 to indicate a corrupt or
+invalid input file, 3 for an internal consistency error (e.g., a bug)
+which
+caused clzip to panic.
+
+
+@node Algorithm
+@chapter Algorithm
+@cindex algorithm
+
+Clzip implements a simplified version of LZMA (the Lempel-Ziv-Markov
+chain algorithm). The original LZMA algorithm was designed by
+Igor Pavlov.
+
+The high compression of LZMA comes from combining two basic, well-proven
+compression ideas: sliding dictionaries (LZ77/78) and Markov models (the
+kind used by every compression algorithm that uses a range encoder or
+similar order-0 entropy coder as its last stage), with segregation of
+contexts according to what the bits are used for.
+
+Clzip is a two-stage compressor. The first stage is a Lempel-Ziv coder,
+which reduces redundancy by translating chunks of data into their
+corresponding distance-length pairs. The second stage is a range encoder
+that uses a different probability model for each type of data:
+distances, lengths, literal bytes, etc.
+
+The match finder, part of the LZ coder, is the most important piece of
+the LZMA algorithm, as it is in many Lempel-Ziv based algorithms. Most
+of clzip's execution time is spent in the match finder, and it has the
+greatest influence on the compression ratio.
+
+Here is how it works, step by step (a schematic sketch of the whole
+loop follows the steps):
+
+1) The member header is written to the output stream.
+
+2) The first byte is coded literally, because there are no previous
+bytes to which the match finder can refer.
+
+3) The main encoder advances to the next byte in the input data and
+calls the match finder.
+
+4) The match finder fills an array with the minimum distances before the
+current byte where a match of a given length can be found.
+
+5) Go back to step 3 until a sequence (formed of pairs, repeated
+distances, and literal bytes) of minimum price has been formed, where
+the price represents the number of output bits produced.
+
+6) The range encoder encodes the sequence produced by the main encoder
+and sends the produced bytes to the output stream.
+
+7) Go back to step 3 until the input data is finished or until the
+member or volume size limits are reached.
+
+8) The range encoder is flushed.
+
+9) The member trailer is written to the output stream.
+
+10) If there are more data to compress, go back to step 1.
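+
+The loop described above can be summarized schematically. The sketch
+below is illustrative C only; the function names are placeholders and
+do not exist in clzip under these names:
+
+@verbatim
+/* Placeholder declarations standing in for the real encoder parts. */
+int  more_input_data( void );
+int  member_or_volume_limit_reached( void );
+void write_member_header( void );
+void code_first_byte_literally( void );
+void find_and_encode_minimum_price_sequence( void );  /* steps 3 to 6 */
+void flush_range_encoder( void );
+void write_member_trailer( void );
+
+/* Schematic control flow of the compressor described above. */
+void compress_stream( void )
+  {
+  while( more_input_data() )                        /* step 10 */
+    {
+    write_member_header();                          /* step 1 */
+    code_first_byte_literally();                    /* step 2 */
+    while( more_input_data() &&                     /* step 7 */
+           !member_or_volume_limit_reached() )
+      find_and_encode_minimum_price_sequence();     /* steps 3 to 6 */
+    flush_range_encoder();                          /* step 8 */
+    write_member_trailer();                         /* step 9 */
+    }
+  }
+@end verbatim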
+
+
+@node Invoking Clzip
+@chapter Invoking Clzip
+@cindex invoking
+@cindex options
+@cindex usage
+@cindex version
+
+The format for running clzip is:
+
+@example
+clzip [@var{options}] [@var{files}]
+@end example
+
+Clzip supports the following options:
+
+@table @samp
+@item --help
+@itemx -h
+Print an informative help message describing the options and exit.
+
+@item --version
+@itemx -V
+Print the version number of clzip on the standard output and exit.
+
+@item --member-size=@var{size}
+@itemx -b @var{size}
+Produce a multimember file and set the member size limit to @var{size}
+bytes. The minimum member size limit is 100kB. A small member size may
+degrade the compression ratio, so use it only when needed. The default
+is to produce single-member files.
+
+@item --stdout
+@itemx -c
+Compress or decompress to standard output. Needed when reading from a
+named pipe (fifo) or from a device. Use it to recover as much of the
+uncompressed data as possible when decompressing a corrupt file.
+
+@item --decompress
+@itemx -d
+Decompress.
+
+@item --force
+@itemx -f
+Force overwrite of output file.
+
+@item --keep
+@itemx -k
+Keep (don't delete) input files during compression or decompression.
+
+@item --match-length=@var{length}
+@itemx -m @var{length}
+Set the match length limit in bytes. Valid values range from 5 to 273.
+Larger values usually give better compression ratios but longer
+compression times.
+
+@item --output=@var{file}
+@itemx -o @var{file}
+When reading from standard input and @samp{--stdout} has not been
+specified, use @samp{@var{file}} as the virtual name of the uncompressed
+file. This produces a file named @samp{@var{file}} when decompressing, a
+file named @samp{@var{file}.lz} when compressing, and several files
+named @samp{@var{file}00001.lz}, @samp{@var{file}00002.lz}, etc, when
+compressing and splitting the output in volumes.
+
+@item --quiet
+@itemx -q
+Quiet operation. Suppress all messages.
+
+@item --dictionary-size=@var{size}
+@itemx -s @var{size}
+Set the dictionary size limit in bytes. Valid values range from 4KiB to
+512MiB. Clzip will use the smallest possible dictionary size for each
+member without exceeding this limit. Note that dictionary sizes are
+quantized. If the specified size does not match one of the valid sizes,
+it will be rounded upwards.
+
+@item --volume-size=@var{size}
+@itemx -S @var{size}
+Split the compressed output into several volume files with names
+@samp{original_name00001.lz}, @samp{original_name00002.lz}, etc, and set
+the volume size limit to @var{size} bytes. Each volume is a complete,
+maybe multimember, lzip file. The minimum volume size limit is 100kB. A
+small volume size may degrade the compression ratio, so use it only when
+needed.
+
+@item --test
+@itemx -t
+Check integrity of the specified file(s), but don't decompress them.
+This really performs a trial decompression and throws away the result.
+Use @samp{-tvv} or @samp{-tvvv} to see information about the file.
+
+@item --verbose
+@itemx -v
+Verbose mode. Show the compression ratio for each file processed.
+Further -v's increase the verbosity level.
+
+@item -1 .. -9
+Set the compression parameters (dictionary size and match length limit)
+as shown in the table below. Note that @samp{-9} can be much slower than
+@samp{-1}. These options have no effect when decompressing.
+
+@multitable {Level} {Dictionary size} {Match length limit}
+@item Level @tab Dictionary size @tab Match length limit
+@item -1 @tab 1 MiB @tab 10 bytes
+@item -2 @tab 1.5 MiB @tab 12 bytes
+@item -3 @tab 2 MiB @tab 17 bytes
+@item -4 @tab 3 MiB @tab 26 bytes
+@item -5 @tab 4 MiB @tab 44 bytes
+@item -6 @tab 8 MiB @tab 80 bytes
+@item -7 @tab 16 MiB @tab 108 bytes
+@item -8 @tab 24 MiB @tab 163 bytes
+@item -9 @tab 32 MiB @tab 273 bytes
+@end multitable
+
+@item --fast
+@itemx --best
+Aliases for GNU gzip compatibility.
+
+@end table
+
+@sp 1
+Numbers given as arguments to options may be followed by a multiplier
+and an optional @samp{B} for "byte".
+
+Table of SI and binary prefixes (unit multipliers):
+
+@multitable {Prefix} {kilobyte (10^3 = 1000)} {|} {Prefix} {kibibyte (2^10 = 1024)}
+@item Prefix @tab Value @tab | @tab Prefix @tab Value
+@item k @tab kilobyte (10^3 = 1000) @tab | @tab Ki @tab kibibyte (2^10 = 1024)
+@item M @tab megabyte (10^6) @tab | @tab Mi @tab mebibyte (2^20)
+@item G @tab gigabyte (10^9) @tab | @tab Gi @tab gibibyte (2^30)
+@item T @tab terabyte (10^12) @tab | @tab Ti @tab tebibyte (2^40)
+@item P @tab petabyte (10^15) @tab | @tab Pi @tab pebibyte (2^50)
+@item E @tab exabyte (10^18) @tab | @tab Ei @tab exbibyte (2^60)
+@item Z @tab zettabyte (10^21) @tab | @tab Zi @tab zebibyte (2^70)
+@item Y @tab yottabyte (10^24) @tab | @tab Yi @tab yobibyte (2^80)
+@end multitable
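+
+As an illustration, a hypothetical helper (illustrative C, not how clzip
+actually parses its arguments; only a few prefixes are shown) could
+scale an argument as follows:
+
+@verbatim
+#include <stdint.h>
+#include <string.h>
+
+/* Scale 'value' by the SI or binary prefix named in 'suffix', as in the
+   table above. A trailing 'B' is ignored. Returns 0 on an unknown
+   prefix. Illustrative only. */
+uint64_t apply_prefix( const uint64_t value, const char * const suffix )
+  {
+  static const struct { const char * name; uint64_t factor; } table[] =
+    { { "k", 1000ULL },       { "Ki", 1ULL << 10 },
+      { "M", 1000000ULL },    { "Mi", 1ULL << 20 },
+      { "G", 1000000000ULL }, { "Gi", 1ULL << 30 } };
+  size_t len = strlen( suffix );
+  if( len > 0 && suffix[len-1] == 'B' ) --len;     /* optional 'B' */
+  if( len == 0 ) return value;                     /* no multiplier */
+  for( unsigned i = 0; i < sizeof table / sizeof table[0]; ++i )
+    if( strlen( table[i].name ) == len &&
+        strncmp( suffix, table[i].name, len ) == 0 )
+      return value * table[i].factor;
+  return 0;                                        /* unknown prefix */
+  }
+@end verbatim
+
+With these definitions, apply_prefix( 1, "MiB" ) gives 1048576 bytes and
+apply_prefix( 650, "MB" ) gives 650000000 bytes.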
+
+
+@node File Format
+@chapter File Format
+@cindex file format
+
+In the diagram below, a box like this:
+@verbatim
++---+
+| | <-- the vertical bars might be missing
++---+
+@end verbatim
+
+represents one byte; a box like this:
+@verbatim
++==============+
+| |
++==============+
+@end verbatim
+
+represents a variable number of bytes.
+
+@sp 1
+A lzip file consists of a series of "members" (compressed data sets).
+The members simply appear one after another in the file, with no
+additional information before, between, or after them.
+
+Each member has the following structure:
+@verbatim
++--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+| ID string | VN | DS | Lzma stream | CRC32 | Data size | Member size |
++--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+@end verbatim
+
+All multibyte values are stored in little endian order.
+
+@table @samp
+@item ID string
+A four byte string, identifying the member type, with the value "LZIP".
+
+@item VN (version number, 1 byte)
+Just in case something needs to be modified in the future. Valid values
+are 0 and 1. Version 0 files have only one member and lack @samp{Member
+size}.
+
+@item DS (coded dictionary size, 1 byte)
+Bits 4-0 contain the base 2 logarithm of the base dictionary size.@*
+Bits 7-5 contain the number of "wedges" to subtract from the base
+dictionary size to obtain the dictionary size. The size of a wedge is
+(base dictionary size / 16).@*
+Valid values for dictionary size range from 4KiB to 512MiB. (A small
+decoding sketch follows this table.)
+
+@item Lzma stream
+The lzma stream, finished by an end of stream marker. Uses default values
+for encoder properties.
+
+@item CRC32 (4 bytes)
+CRC of the uncompressed original data.
+
+@item Data size (8 bytes)
+Size of the uncompressed original data.
+
+@item Member size (8 bytes)
+Total size of the member, including header and trailer. This facilitates
+safe recovery of undamaged members from multimember files.
+
+@end table
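+
+The DS byte and the little endian trailer fields can be decoded with a
+few lines of code. The sketch below is illustrative C following the
+field descriptions above; it is not code from clzip:
+
+@verbatim
+#include <stdint.h>
+
+/* Decode the DS byte: bits 4-0 hold the base 2 logarithm of the base
+   dictionary size, bits 7-5 the number of "wedges" (of size base/16)
+   to subtract from it. */
+uint64_t decoded_dictionary_size( const uint8_t ds )
+  {
+  uint64_t size = 1ULL << ( ds & 0x1F );          /* base dictionary size */
+  size -= ( size / 16 ) * ( ( ds >> 5 ) & 0x07 ); /* subtract the wedges */
+  return size;
+  }
+
+/* Read a little endian unsigned integer of 'len' bytes (4 for CRC32,
+   8 for the data size and member size fields). */
+uint64_t read_le( const uint8_t * const buf, const int len )
+  {
+  uint64_t value = 0;
+  for( int i = len - 1; i >= 0; --i ) value = ( value << 8 ) + buf[i];
+  return value;
+  }
+@end verbatim
+
+For instance, a DS byte of 0x14 (log2 = 20, wedges = 0) codes a 1 MiB
+dictionary, while 0xB4 (log2 = 20, wedges = 5) codes 1 MiB minus
+5 * 64 KiB = 704 KiB.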
+
+
+@node Examples
+@chapter A small tutorial with examples
+@cindex examples
+
+WARNING! If your data is important, give the @samp{--keep} option to
+clzip and do not remove the original file until you verify the compressed
+file with a command like @samp{clzip -cd file.lz | cmp file -}.
+
+@sp 1
+@noindent
+Example 1: Replace a regular file with its compressed version file.lz
+and show the compression ratio.
+
+@example
+clzip -v file
+@end example
+
+@sp 1
+@noindent
+Example 2: Like example 1, but the created file.lz is a multimember file
+with a member size of 1MiB.
+
+@example
+clzip -b 1MiB file
+@end example
+
+@sp 1
+@noindent
+Example 3: Compress a whole floppy in /dev/fd0 and send the output to
+file.lz.
+
+@example
+clzip -c /dev/fd0 > file.lz
+@end example
+
+@sp 1
+@noindent
+Example 4: Create a multivolume compressed tar archive with a volume
+size of 1440KiB.
+
+@example
+tar -c some_directory | clzip -S 1440KiB -o volume_name
+@end example
+
+@sp 1
+@noindent
+Example 5: Extract a multivolume compressed tar archive.
+
+@example
+clzip -cd volume_name*.lz | tar -xf -
+@end example
+
+@sp 1
+@noindent
+Example 6: Create a multivolume compressed backup of a big database file
+with a volume size of 650MB, where each volume is a multimember file
+with a member size of 32MiB.
+
+@example
+clzip -b 32MiB -S 650MB big_database
+@end example
+
+
+@node Problems
+@chapter Reporting Bugs
+@cindex bugs
+@cindex getting help
+
+There are probably bugs in clzip. There are certainly errors and
+omissions in this manual. If you report them, they will get fixed. If
+you don't, no one will ever know about them and they will remain unfixed
+for all eternity, if not longer.
+
+If you find a bug in clzip, please send electronic mail to
+@email{lzip-bug@@nongnu.org}. Include the version number, which you can
+find by running @w{@samp{clzip --version}}.
+
+
+@node Concept Index
+@unnumbered Concept Index
+
+@printindex cp
+
+@bye