Description

Clzip is a lossless data compressor with a user interface similar to
that of gzip or bzip2. Clzip decompresses almost as fast as gzip,
compresses most files more than bzip2, and is better than both from a
data recovery perspective. Clzip is a clean implementation of the LZMA
"algorithm".

Clzip uses the lzip file format; the files produced by clzip are fully
compatible with lzip-1.4 or newer, and can be rescued with lziprecover.
Clzip is in fact a C language version of lzip, intended for embedded
devices or systems lacking a C++ compiler.

The lzip file format is designed for long-term data archiving, taking
into account both data integrity and decoder availability:

   * The lzip format provides very safe integrity checking and some data
     recovery means. The lziprecover program can repair bit-flip errors
     (one of the most common forms of data corruption) in lzip files,
     and provides data recovery capabilities, including error-checked
     merging of damaged copies of a file. (The integrity fields that
     make this checking possible are sketched just after this list.)

   * The lzip format is as simple as possible (but not simpler). The
     lzip manual provides the code of a simple decompressor along with a
     detailed explanation of how it works, so that with the help of the
     lzip manual alone it would be possible for a digital archaeologist
     to extract the data from a lzip file long after quantum computers
     eventually render LZMA obsolete.

   * Additionally lzip is copylefted, which guarantees that it will
     remain free forever.
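
The fixed parts of an lzip member can be pictured with the following C
sketch. The field names are illustrative; on disk the fields are packed
without padding and multi-byte values are stored in little endian
order, so a real decoder reads them byte by byte.

  /* Sketch of the fixed parts of one lzip member (see the lzip manual
     for the authoritative description). */
  #include <stdint.h>

  struct Lzip_header
    {
    uint8_t magic[4];          /* "LZIP" */
    uint8_t version;           /* 1 */
    uint8_t coded_dict_size;   /* dictionary size, coded in one byte */
    };

  struct Lzip_trailer
    {
    uint32_t data_crc;         /* CRC32 of the uncompressed data */
    uint64_t data_size;        /* size of the uncompressed data */
    uint64_t member_size;      /* member size, header and trailer included */
    };

The CRC and the two sizes in the trailer are the integrity information
referred to in the first item above.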

A nice feature of the lzip format is that a corrupt byte is easier to
repair the nearer it is to the beginning of the file. Therefore, with
the help of lziprecover, losing an entire archive just because of a
corrupt byte near the beginning is a thing of the past.

Clzip uses the same well-defined exit status values used by lzip and
bzip2, which makes it safer than compressors returning ambiguous warning
values (like gzip) when it is used as a back end for other programs like
tar or zutils.
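
For example, a program driving clzip can test an archive and branch on
the exit status, along the lines of the C sketch below. The status
meanings assumed here are 0 for success, 1 for environmental problems,
and 2 for a corrupt or invalid input file; check the manual for the
authoritative list. The file name is hypothetical.

  /* Test "archive.tar.lz" with clzip and act on the exit status. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/wait.h>

  int main( void )
    {
    const int status = system( "clzip -t archive.tar.lz" );
    if( status < 0 || !WIFEXITED( status ) )
      { fprintf( stderr, "could not run clzip\n" ); return 1; }
    switch( WEXITSTATUS( status ) )
      {
      case 0: puts( "archive tests fine" ); break;
      case 2: puts( "archive is corrupt or invalid" ); break;
      default: puts( "clzip reported an error" ); break;
      }
    return 0;
    }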

Clzip will automatically use the smallest possible dictionary size for
each file without exceeding the given limit. Keep in mind that the
decompression memory requirement is affected at compression time by the
choice of dictionary size limit.
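
The selection can be pictured with the following sketch. It is
illustrative only: the real encoder also rounds the result to a size
representable in the format, a detail omitted here, and 4 KiB is the
smallest dictionary size the format allows.

  /* Pick the dictionary size used for one file: never above the limit
     requested at compression time, never below the format minimum,
     and no larger than the file itself. Illustrative sketch. */
  #include <stdint.h>

  enum { min_dictionary_size = 1 << 12 };        /* 4 KiB */

  static unsigned pick_dictionary_size( const uint64_t file_size,
                                         const unsigned limit )
    {
    if( file_size >= limit ) return limit;
    if( file_size <= min_dictionary_size ) return min_dictionary_size;
    return (unsigned)file_size;
    }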

When compressing, clzip replaces every file given in the command line
with a compressed version of itself, with the name "original_name.lz".
When decompressing, clzip attempts to guess the name for the decompressed
file from that of the compressed file as follows:

filename.lz    becomes   filename
filename.tlz   becomes   filename.tar
anyothername   becomes   anyothername.out
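
A rough C rendering of that guessing rule is shown below. It is a
sketch, not the routine clzip uses; the output buffer is assumed to be
large enough.

  /* Derive the decompressed name from the compressed name as in the
     table above. Illustrative sketch. */
  #include <string.h>

  static int ends_with( const char * name, const char * ext )
    {
    const size_t nlen = strlen( name ), elen = strlen( ext );
    return nlen > elen && strcmp( name + nlen - elen, ext ) == 0;
    }

  static void output_name( const char * name, char * out )
    {
    const size_t len = strlen( name );
    if( ends_with( name, ".lz" ) )            /* filename.lz -> filename */
      { memcpy( out, name, len - 3 ); out[len-3] = 0; }
    else if( ends_with( name, ".tlz" ) )      /* filename.tlz -> filename.tar */
      { memcpy( out, name, len - 4 ); strcpy( out + len - 4, ".tar" ); }
    else                                      /* anyothername -> anyothername.out */
      { strcpy( out, name ); strcat( out, ".out" ); }
    }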

(De)compressing a file is much like copying or moving it; therefore clzip
preserves the access and modification dates, permissions, and, when
possible, ownership of the file just as "cp -p" does. (If the user ID or
the group ID can't be duplicated, the file permission bits S_ISUID and
S_ISGID are cleared).
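
The bookkeeping involved is roughly the following; the sketch copies
the metadata of the input file onto an already open output descriptor
and clears the set-ID bits when the ownership can't be duplicated.
Error reporting is omitted, and the functions used are standard POSIX
calls, not clzip internals.

  /* Copy dates, permissions and (when possible) ownership from the
     input file to the output file, much as "cp -p" does. Sketch. */
  #include <sys/stat.h>
  #include <unistd.h>

  static void clone_attributes( const char * in_name, const int out_fd )
    {
    struct stat st;
    struct timespec times[2];
    mode_t mode;
    if( stat( in_name, &st ) != 0 ) return;
    mode = st.st_mode;
    times[0] = st.st_atim; times[1] = st.st_mtim;
    futimens( out_fd, times );                  /* access and modification dates */
    if( fchown( out_fd, st.st_uid, st.st_gid ) != 0 )
      mode &= ~( S_ISUID | S_ISGID );           /* owner not duplicated: drop set-ID bits */
    fchmod( out_fd, mode );                     /* permission bits */
    }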

Clzip is able to read from some types of non-regular files if the
"--stdout" option is specified.

If no file names are specified, clzip compresses (or decompresses) from
standard input to standard output. In this case, clzip will decline to
write compressed output to a terminal, as this would be entirely
incomprehensible and therefore pointless.
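
The check behind that refusal amounts to something like this sketch:

  /* Refuse to write compressed data to a terminal. Illustrative. */
  #include <stdio.h>
  #include <unistd.h>

  static int outputs_to_terminal( void )
    {
    if( !isatty( STDOUT_FILENO ) ) return 0;
    fprintf( stderr, "clzip: I won't write compressed data to a terminal.\n" );
    return 1;                        /* the caller should then abort */
    }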

Clzip will correctly decompress a file which is the concatenation of two
or more compressed files. The result is the concatenation of the
corresponding uncompressed files. Integrity testing of concatenated
compressed files is also supported.
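
That behaviour falls out of the decoder's main loop, which keeps
decoding members until end of file, roughly as below. 'decode_member'
is a hypothetical placeholder for the real LZMA decoder.

  /* Decompress a concatenation of compressed members: decode members
     one after another until end of file. Illustrative sketch. */
  #include <stdio.h>
  #include <string.h>

  int decode_member( FILE * in, FILE * out, const unsigned char header[6] );

  static int decode_stream( FILE * in, FILE * out )
    {
    unsigned char header[6];            /* "LZIP", version, dictionary size */
    while( fread( header, 1, 6, in ) == 6 )
      {
      if( memcmp( header, "LZIP", 4 ) != 0 ) return 2;  /* not lzip data */
      if( decode_member( in, out, header ) != 0 ) return 2;
      }
    return 0;
    }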

Clzip can produce multi-member files and safely recover, with
lziprecover, the undamaged members in case of file damage. Clzip can
also split the compressed output into volumes of a given size, even
when reading from standard input. This allows the direct creation of
multivolume compressed tar archives.
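
For example, assuming the '-S, --volume-size' and '-o, --output'
options described in the manual, a multivolume compressed tar archive
can be created directly from a pipe with a command like:

  tar -c some_directory | clzip -S 650000000 -o some_directory.tar

which writes the compressed output as numbered volume files of at most
650 MB each.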

Clzip is able to compress and decompress streams of unlimited size by
automatically creating multi-member output. The members so created are
large, about 64 PiB each.

There is no such thing as a "LZMA algorithm"; it is more like a "LZMA
coding scheme". For example, the option '-0' of lzip uses the scheme in
almost the simplest way possible: issuing the longest match it can find,
or a literal byte if it can't find a match. Conversely, a much more
elaborate way of finding coding sequences of minimum price than the one
currently used by lzip could be developed, and the resulting sequence
could also be coded using the LZMA coding scheme.
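
The fast end of that range is easy to sketch. The following shows the
greedy decision rule in spirit, not lzip's actual code;
'find_longest_match' and the two emit functions are hypothetical
placeholders, and the minimum match length of 2 is the usual LZMA
convention.

  /* Greedy LZ parse: at each position emit the longest match found in
     the sliding window, or a literal byte if there is no usable match. */
  #include <stddef.h>
  #include <stdint.h>

  struct Match { size_t len; size_t distance; };

  struct Match find_longest_match( const uint8_t * data, size_t pos, size_t size );
  void emit_match( size_t len, size_t distance );
  void emit_literal( uint8_t byte );

  static void greedy_parse( const uint8_t * data, const size_t size )
    {
    size_t pos = 0;
    while( pos < size )
      {
      const struct Match m = find_longest_match( data, pos, size );
      if( m.len >= 2 ) { emit_match( m.len, m.distance ); pos += m.len; }
      else { emit_literal( data[pos] ); pos += 1; }
      }
    }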

Lzip currently implements two variants of the LZMA algorithm: fast
(used by option '-0') and normal (used by all other compression
levels). Clzip implements only the "normal" variant.

The high compression of LZMA comes from combining two basic, well-proven
compression ideas: sliding dictionaries (LZ77/78) and Markov models (the
thing used by every compression algorithm that uses a range encoder or
similar order-0 entropy coder as its last stage) with segregation of
contexts according to what the bits are used for.
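
In code, the "Markov models with segregation of contexts" part amounts
to keeping one adaptive probability per context and feeding it to a
binary range encoder, roughly as sketched below. The constants follow
the usual LZMA conventions (11-bit probabilities, an adaptation shift
of 5); the sketch is illustrative, not clzip's encoder, and omits
renormalisation and output of the encoded bytes.

  /* One adaptive bit model per context, encoded with a range encoder. */
  #include <stdint.h>

  enum { bit_model_total_bits = 11, bit_model_move_bits = 5,
         bit_model_total = 1 << bit_model_total_bits };

  struct Bit_model { int probability; };        /* probability of a 0 bit */

  static void Bm_init( struct Bit_model * const bm )
    { bm->probability = bit_model_total / 2; }

  struct Range_encoder { uint64_t low; uint32_t range; };

  static void Re_encode_bit( struct Range_encoder * const re,
                             struct Bit_model * const bm, const int bit )
    {
    /* split the current range in proportion to the model's probability */
    const uint32_t bound = ( re->range >> bit_model_total_bits ) * bm->probability;
    if( bit == 0 )
      {
      re->range = bound;
      bm->probability += ( bit_model_total - bm->probability ) >> bit_model_move_bits;
      }
    else
      {
      re->low += bound; re->range -= bound;
      bm->probability -= bm->probability >> bit_model_move_bits;
      }
    /* renormalisation of 'low' and 'range' omitted */
    }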

The ideas embodied in clzip are due to (at least) the following people:
Abraham Lempel and Jacob Ziv (for the LZ algorithm), Andrey Markov (for
the definition of Markov chains), G.N.N. Martin (for the definition of
range encoding), Igor Pavlov (for putting all the above together in
LZMA), and Julian Seward (for bzip2's CLI).


Copyright (C) 2010-2014 Antonio Diaz Diaz.

This file is free documentation: you have unlimited permission to copy,
distribute and modify it.

The file Makefile.in is a data file used by configure to produce the
Makefile. It has the same copyright owner and permissions as configure
itself.