\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename clzip.info
@documentencoding ISO-8859-15
@settitle Clzip Manual
@finalout
@c %**end of header
@set UPDATED 7 July 2015
@set VERSION 1.7
@dircategory Data Compression
@direntry
* Clzip: (clzip). LZMA lossless data compressor
@end direntry
@ifnothtml
@titlepage
@title Clzip
@subtitle LZMA lossless data compressor
@subtitle for Clzip version @value{VERSION}, @value{UPDATED}
@author by Antonio Diaz Diaz
@page
@vskip 0pt plus 1filll
@end titlepage
@contents
@end ifnothtml
@node Top
@top
This manual is for Clzip (version @value{VERSION}, @value{UPDATED}).
@menu
* Introduction:: Purpose and features of clzip
* Invoking clzip:: Command line interface
* File format:: Detailed format of the compressed file
* Algorithm:: How clzip compresses the data
* Examples:: A small tutorial with examples
* Problems:: Reporting bugs
* Concept index:: Index of concepts
@end menu
@sp 1
Copyright @copyright{} 2010-2015 Antonio Diaz Diaz.
This manual is free documentation: you have unlimited permission
to copy, distribute and modify it.
@node Introduction
@chapter Introduction
@cindex introduction
Clzip is a lossless data compressor with a user interface similar to
that of gzip or bzip2. Clzip is about as fast as gzip, compresses most
files more than bzip2, and is better than both from a data recovery
perspective.
Clzip uses the lzip file format; the files produced by clzip are fully
compatible with lzip-1.4 or newer, and can be rescued with lziprecover.
Clzip is in fact a C language version of lzip, intended for embedded
devices or systems lacking a C++ compiler.
The lzip file format is designed for data sharing and long-term
archiving, taking into account both data integrity and decoder
availability:
@itemize @bullet
@item
The lzip format provides very safe integrity checking and some data
recovery means. The
@uref{http://www.nongnu.org/lzip/manual/lziprecover_manual.html#Data-safety,,lziprecover}
program can repair bit-flip errors (one of the most common forms of data
corruption) in lzip files, and provides data recovery capabilities,
including error-checked merging of damaged copies of a file.
@ifnothtml
@ref{Data safety,,,lziprecover}.
@end ifnothtml
@item
The lzip format is as simple as possible (but not simpler). The lzip
manual provides the code of a simple decompressor along with a detailed
explanation of how it works, so that, with only the help of the lzip
manual, it would be possible for a digital archaeologist to extract the
data from a lzip file long after quantum computers eventually render
LZMA obsolete.
@item
Additionally the lzip reference implementation is copylefted, which
guarantees that it will remain free forever.
@end itemize
A nice feature of the lzip format is that a corrupt byte is easier to
repair the nearer it is to the beginning of the file. Therefore, with
the help of lziprecover, losing an entire archive just because of a
corrupt byte near the beginning is a thing of the past.
The member trailer stores the 32-bit CRC of the original data, the size
of the original data, and the size of the member. These values, together
with the value remaining in the range decoder and the end-of-stream
marker, provide four-factor integrity checking, which guarantees that the
decompressed version of the data is identical to the original. This
guards against corruption of the compressed data, and against undetected
bugs in clzip (hopefully very unlikely). The chances of data corruption
going undetected are microscopic. Be aware, though, that the check
occurs upon decompression, so it can only tell you that something is
wrong. It can't help you recover the original uncompressed data.
Clzip uses the same well-defined exit status values used by lzip and
bzip2, which makes it safer than compressors returning ambiguous warning
values (like gzip) when it is used as a back end for other programs like
tar or zutils.
Clzip will automatically use the smallest possible dictionary size for
each file without exceeding the given limit. Keep in mind that the
decompression memory requirement is affected at compression time by the
choice of dictionary size limit.
The amount of memory required for compression is about 1 or 2 times the
dictionary size limit (1 if input file size is less than dictionary size
limit, else 2) plus 9 times the dictionary size really used. The option
@samp{-0} is special and only requires about 1.5 MiB at most. The amount
of memory required for decompression is about 46 kB larger than the
dictionary size really used.
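The sketch below is not part of clzip; it merely turns the estimates
above into numbers, using a hypothetical 100 MiB file compressed with a
32 MiB dictionary.
@verbatim
/* Illustrative only, not part of clzip: rough memory estimates using
   the factors quoted above.  Compression needs 1 or 2 times the
   dictionary size limit plus 9 times the dictionary size really used;
   decompression needs about 46 kB more than the dictionary size used. */
#include <stdio.h>

static long long compression_memory( long long file_size,
                                     long long dict_limit,
                                     long long dict_used )
  {
  const int factor = ( file_size < dict_limit ) ? 1 : 2;
  return factor * dict_limit + 9 * dict_used;
  }

static long long decompression_memory( long long dict_used )
  { return dict_used + 46 * 1000LL; }

int main( void )
  {
  const long long MiB = 1024LL * 1024LL;
  /* hypothetical case: a 100 MiB file compressed with a 32 MiB dictionary */
  printf( "compression:   about %lld MiB\n",
          compression_memory( 100 * MiB, 32 * MiB, 32 * MiB ) / MiB );
  printf( "decompression: about %lld MiB\n",
          decompression_memory( 32 * MiB ) / MiB );
  return 0;
  }
@end verbatim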
When compressing, clzip replaces every file given in the command line
with a compressed version of itself, with the name "original_name.lz".
When decompressing, clzip attempts to guess the name for the decompressed
file from that of the compressed file as follows:
@multitable {anyothername} {becomes} {anyothername.out}
@item filename.lz @tab becomes @tab filename
@item filename.tlz @tab becomes @tab filename.tar
@item anyothername @tab becomes @tab anyothername.out
@end multitable
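As a rough illustration (this is not clzip's actual code, and the helper
@code{output_name} is hypothetical), the name mapping above can be
derived as follows:
@verbatim
/* Sketch only, not clzip's actual code: derive the decompressed file
   name from the compressed one as shown in the table above. */
#include <stdio.h>
#include <string.h>

static void output_name( const char * name, char * out, size_t out_size )
  {
  const int len = (int)strlen( name );
  if( len > 3 && strcmp( name + len - 3, ".lz" ) == 0 )
    snprintf( out, out_size, "%.*s", len - 3, name );       /* strip ".lz" */
  else if( len > 4 && strcmp( name + len - 4, ".tlz" ) == 0 )
    snprintf( out, out_size, "%.*s.tar", len - 4, name );   /* ".tlz" -> ".tar" */
  else
    snprintf( out, out_size, "%s.out", name );              /* anything else */
  }

int main( void )
  {
  const char * names[3] = { "filename.lz", "filename.tlz", "anyothername" };
  char out[256];
  for( int i = 0; i < 3; ++i )
    {
    output_name( names[i], out, sizeof out );
    printf( "%-16s becomes %s\n", names[i], out );
    }
  return 0;
  }
@end verbatim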
(De)compressing a file is much like copying or moving it; therefore clzip
preserves the access and modification dates, permissions, and, when
possible, ownership of the file just as "cp -p" does. (If the user ID or
the group ID can't be duplicated, the file permission bits S_ISUID and
S_ISGID are cleared).
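The sketch below is not clzip's actual code; it only shows, with
standard POSIX calls, one way of obtaining the behaviour just described
(error handling is mostly omitted).
@verbatim
/* Sketch only, not clzip's actual code: copy dates, permissions and,
   when possible, ownership from 'from' to 'to'.  If the owner or group
   can't be duplicated, the S_ISUID and S_ISGID bits are cleared, as
   described above. */
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <utime.h>

static void clone_attributes( const char * from, const char * to )
  {
  struct stat st;
  struct utimbuf t;
  if( stat( from, &st ) != 0 ) return;
  t.actime = st.st_atime; t.modtime = st.st_mtime;
  utime( to, &t );                              /* access and modification dates */
  mode_t mode = st.st_mode;
  if( chown( to, st.st_uid, st.st_gid ) != 0 )  /* may fail without privileges */
    mode &= ~( S_ISUID | S_ISGID );             /* clear as 'cp -p' does */
  chmod( to, mode );                            /* permission bits */
  }

int main( int argc, char * argv[] )
  {
  if( argc == 3 ) clone_attributes( argv[1], argv[2] );
  return 0;
  }
@end verbatim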
Clzip is able to read from some types of non-regular files if the
@samp{--stdout} option is specified.
If no file names are specified, clzip compresses (or decompresses) from
standard input to standard output. In this case, clzip will decline to
write compressed output to a terminal, as this would be entirely
incomprehensible and therefore pointless.
Clzip will correctly decompress a file which is the concatenation of two
or more compressed files. The result is the concatenation of the
corresponding uncompressed files. Integrity testing of concatenated
compressed files is also supported.
Clzip can produce multi-member files and safely recover, with
lziprecover, the undamaged members in case of file damage. Clzip can
also split the compressed output in volumes of a given size, even when
reading from standard input. This allows the direct creation of
multivolume compressed tar archives.
Clzip is able to compress and decompress streams of unlimited size by
automatically creating multi-member output. The members so created are
large, about 2 PiB each.
@node Invoking clzip
@chapter Invoking clzip
@cindex invoking
@cindex options
@cindex usage
@cindex version
The format for running clzip is:
@example
clzip [@var{options}] [@var{files}]
@end example
Clzip supports the following options:
@table @code
@item -h
@itemx --help
Print an informative help message describing the options and exit.
@item -V
@itemx --version
Print the version number of clzip on the standard output and exit.
@item -b @var{bytes}
@itemx --member-size=@var{bytes}
Set the member size limit to @var{bytes}. A small member size may
degrade compression ratio, so use it only when needed. Valid values
range from 100 kB to 2 PiB. Defaults to 2 PiB.
@item -c
@itemx --stdout
Compress or decompress to standard output. Needed when reading from a
named pipe (fifo) or from a device. Use it to recover as much of the
uncompressed data as possible when decompressing a corrupt file.
@item -d
@itemx --decompress
Decompress.
@item -f
@itemx --force
Force overwrite of output files.
@item -F
@itemx --recompress
Force re-compression of files whose name already has the @samp{.lz} or
@samp{.tlz} suffix.
@item -k
@itemx --keep
Keep (don't delete) input files during compression or decompression.
@item -m @var{bytes}
@itemx --match-length=@var{bytes}
Set the match length limit in bytes. After a match this long is found,
the search is finished. Valid values range from 5 to 273. Larger values
usually give better compression ratios but longer compression times.
@item -o @var{file}
@itemx --output=@var{file}
When reading from standard input and @samp{--stdout} has not been
specified, use @samp{@var{file}} as the virtual name of the uncompressed
file. This produces a file named @samp{@var{file}} when decompressing, a
file named @samp{@var{file}.lz} when compressing, and several files
named @samp{@var{file}00001.lz}, @samp{@var{file}00002.lz}, etc, when
compressing and splitting the output in volumes.
@item -q
@itemx --quiet
Quiet operation. Suppress all messages.
@item -s @var{bytes}
@itemx --dictionary-size=@var{bytes}
Set the dictionary size limit in bytes. Valid values range from 4 KiB to
512 MiB. Clzip will use the smallest possible dictionary size for each
file without exceeding this limit. Note that dictionary sizes are
quantized. If the specified size does not match one of the valid sizes,
it will be rounded upwards by adding up to (@var{bytes} / 16) to it.
For maximum compression you should use a dictionary size limit as large
as possible, but keep in mind that the decompression memory requirement
is affected at compression time by the choice of dictionary size limit.
@item -S @var{bytes}
@itemx --volume-size=@var{bytes}
Split the compressed output into several volume files with names
@samp{original_name00001.lz}, @samp{original_name00002.lz}, etc, and set
the volume size limit to @var{bytes}. Each volume is a complete, maybe
multi-member, lzip file. A small volume size may degrade compression
ratio, so use it only when needed. Valid values range from 100 kB to 4
EiB.
@item -t
@itemx --test
Check integrity of the specified file(s), but don't decompress them.
This really performs a trial decompression and throws away the result.
Use it together with @samp{-v} to see information about the file.
@item -v
@itemx --verbose
Verbose mode.@*
When compressing, show the compression ratio for each file processed. A
second @samp{-v} shows the progress of compression.@*
When decompressing or testing, further -v's (up to 4) increase the
verbosity level, showing status, compression ratio, dictionary size,
and trailer contents (CRC, data size, member size).
@item -0 .. -9
Set the compression parameters (dictionary size and match length limit)
as shown in the table below. Note that @samp{-9} can be much slower than
@samp{-0}. These options have no effect when decompressing.
The bidimensional parameter space of LZMA can't be mapped to a linear
scale optimal for all files. If your files are large, very repetitive,
etc, you may need to use the @samp{--match-length} and
@samp{--dictionary-size} options directly to achieve optimal
performance.
@multitable {Level} {Dictionary size} {Match length limit}
@item Level @tab Dictionary size @tab Match length limit
@item -0 @tab 64 KiB @tab 16 bytes
@item -1 @tab 1 MiB @tab 5 bytes
@item -2 @tab 1.5 MiB @tab 6 bytes
@item -3 @tab 2 MiB @tab 8 bytes
@item -4 @tab 3 MiB @tab 12 bytes
@item -5 @tab 4 MiB @tab 20 bytes
@item -6 @tab 8 MiB @tab 36 bytes
@item -7 @tab 16 MiB @tab 68 bytes
@item -8 @tab 24 MiB @tab 132 bytes
@item -9 @tab 32 MiB @tab 273 bytes
@end multitable
@item --fast
@itemx --best
Aliases for GNU gzip compatibility.
@end table
Numbers given as arguments to options may be followed by a multiplier
and an optional @samp{B} for "byte".
Table of SI and binary prefixes (unit multipliers):
@multitable {Prefix} {kilobyte (10^3 = 1000)} {|} {Prefix} {kibibyte (2^10 = 1024)}
@item Prefix @tab Value @tab | @tab Prefix @tab Value
@item k @tab kilobyte (10^3 = 1000) @tab | @tab Ki @tab kibibyte (2^10 = 1024)
@item M @tab megabyte (10^6) @tab | @tab Mi @tab mebibyte (2^20)
@item G @tab gigabyte (10^9) @tab | @tab Gi @tab gibibyte (2^30)
@item T @tab terabyte (10^12) @tab | @tab Ti @tab tebibyte (2^40)
@item P @tab petabyte (10^15) @tab | @tab Pi @tab pebibyte (2^50)
@item E @tab exabyte (10^18) @tab | @tab Ei @tab exbibyte (2^60)
@item Z @tab zettabyte (10^21) @tab | @tab Zi @tab zebibyte (2^70)
@item Y @tab yottabyte (10^24) @tab | @tab Yi @tab yobibyte (2^80)
@end multitable
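The sketch below is not clzip's actual parser; it only illustrates how a
number followed by one of the above prefixes (and an optional @samp{B})
can be converted to bytes. Overflow and error checking are omitted.
@verbatim
/* Illustrative only, not clzip's actual code: interpret a number with
   an optional SI or binary prefix, e.g. "1440KiB", "650MB", "32MiB".
   Overflow and error checking are omitted. */
#include <stdio.h>
#include <stdlib.h>

static long long parse_size( const char * arg )
  {
  char * tail;
  long long value = strtoll( arg, &tail, 10 );
  long long factor = 1000;                        /* SI prefix (powers of 10) */
  int exponent = 0;
  if( tail[0] && tail[1] == 'i' ) factor = 1024;  /* binary prefix (powers of 2) */
  switch( tail[0] )
    {
    case 'Y': exponent = 8; break;
    case 'Z': exponent = 7; break;
    case 'E': exponent = 6; break;
    case 'P': exponent = 5; break;
    case 'T': exponent = 4; break;
    case 'G': exponent = 3; break;
    case 'M': exponent = 2; break;
    case 'K': case 'k': exponent = 1; break;
    }
  while( exponent-- > 0 ) value *= factor;
  return value;
  }

int main( void )
  {
  printf( "1440KiB = %lld bytes\n", parse_size( "1440KiB" ) );
  printf( "650MB   = %lld bytes\n", parse_size( "650MB" ) );
  return 0;
  }
@end verbatim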
@sp 1
Exit status: 0 for a normal exit, 1 for environmental problems (file not
found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or
invalid input file, 3 for an internal consistency error (eg, bug) which
caused clzip to panic.
@node File format
@chapter File format
@cindex file format
Perfection is reached, not when there is no longer anything to add, but
when there is no longer anything to take away.@*
--- Antoine de Saint-Exupery
@sp 1
In the diagram below, a box like this:
@verbatim
+---+
| | <-- the vertical bars might be missing
+---+
@end verbatim
represents one byte; a box like this:
@verbatim
+==============+
| |
+==============+
@end verbatim
represents a variable number of bytes.
@sp 1
A lzip file consists of a series of "members" (compressed data sets).
The members simply appear one after another in the file, with no
additional information before, between, or after them.
Each member has the following structure:
@verbatim
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| ID string | VN | DS | Lzma stream | CRC32 | Data size | Member size |
+--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@end verbatim
All multibyte values are stored in little endian order.
@table @samp
@item ID string
A four byte string, identifying the lzip format, with the value "LZIP"
(0x4C, 0x5A, 0x49, 0x50).
@item VN (version number, 1 byte)
Just in case something needs to be modified in the future. 1 for now.
@item DS (coded dictionary size, 1 byte)
The dictionary size is calculated by taking a power of 2 (the base size)
and subtracting from it a fraction between 0/16 and 7/16 of the base
size.@*
Bits 4-0 contain the base 2 logarithm of the base size (12 to 29).@*
Bits 7-5 contain the numerator of the fraction (0 to 7) to subtract
from the base size to obtain the dictionary size.@*
Example: 0xD3 = 2^19 - 6 * 2^15 = 512 KiB - 6 * 32 KiB = 320 KiB@*
Valid values for dictionary size range from 4 KiB to 512 MiB.
@item Lzma stream
The lzma stream, finished by an end of stream marker. Uses default
values for encoder properties.
@ifnothtml
@xref{Stream format,,,lzip},
@end ifnothtml
@ifhtml
See
@uref{http://www.nongnu.org/lzip/manual/lzip_manual.html#Stream-format,,Stream format}
@end ifhtml
for a complete description.
@item CRC32 (4 bytes)
CRC of the uncompressed original data.
@item Data size (8 bytes)
Size of the uncompressed original data.
@item Member size (8 bytes)
Total size of the member, including header and trailer. This field acts
as a distributed index, allows the verification of stream integrity, and
facilitates safe recovery of undamaged members from multi-member files.
@end table
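The sketch below is not part of clzip; it reads the fixed fields just
described from a single-member file and decodes the DS byte, assuming a
well-formed file and doing no CRC verification.
@verbatim
/* Sketch only, not part of clzip: print the header and trailer fields
   of a single-member lzip file, laid out as in the diagram above.
   Assumes a well-formed file; the CRC is printed but not verified. */
#include <stdio.h>
#include <string.h>

static unsigned long long get_le( const unsigned char * p, int size )
  {
  unsigned long long v = 0;
  while( --size >= 0 ) v = ( v << 8 ) + p[size];        /* little endian */
  return v;
  }

int main( int argc, char * argv[] )
  {
  unsigned char header[6], trailer[20];
  if( argc != 2 ) return 1;
  FILE * f = fopen( argv[1], "rb" );
  if( !f || fread( header, 1, 6, f ) != 6 ||
      memcmp( header, "LZIP", 4 ) != 0 )
    { fputs( "not a lzip file\n", stderr ); return 2; }
  /* DS byte: bits 4-0 = log2 of base size, bits 7-5 = fraction to subtract */
  const unsigned ds = header[5];
  unsigned long long dict_size = 1ULL << ( ds & 0x1F );
  dict_size -= ( dict_size / 16 ) * ( ds >> 5 );
  /* the 20-byte trailer (CRC32 + data size + member size) ends the member */
  if( fseek( f, -20, SEEK_END ) != 0 || fread( trailer, 1, 20, f ) != 20 )
    { fputs( "can't read trailer\n", stderr ); return 2; }
  printf( "version number   %u\n", header[4] );
  printf( "dictionary size  %llu bytes\n", dict_size );
  printf( "CRC32            0x%08llX\n", get_le( trailer, 4 ) );
  printf( "data size        %llu bytes\n", get_le( trailer + 4, 8 ) );
  printf( "member size      %llu bytes\n", get_le( trailer + 12, 8 ) );
  fclose( f );
  return 0;
  }
@end verbatim
For a single-member file, the values printed should agree with those
shown by @w{@samp{clzip -tvvvv file.lz}}.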
@node Algorithm
@chapter Algorithm
@cindex algorithm
In spite of its name (Lempel-Ziv-Markov chain-Algorithm), LZMA is not a
concrete algorithm; it is more like "any algorithm using the LZMA coding
scheme". For example, the option @samp{-0} of lzip uses the scheme in almost
the simplest way possible: issuing the longest match it can find, or a
literal byte if it can't find a match. Conversely, a much more elaborate
way of finding coding sequences of minimum size than the one currently
used by lzip could be developed, and the resulting sequence could also
be coded using the LZMA coding scheme.
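The toy program below is not clzip's encoder; it only illustrates the
greedy strategy just described: at each position, emit the longest match
found in the data already seen, or a literal byte if no usable match
exists.
@verbatim
/* Toy greedy Lempel-Ziv parse, only to illustrate the "longest match
   or literal" strategy described above.  It is not clzip's match
   finder and produces no compressed output. */
#include <stdio.h>
#include <string.h>

int main( void )
  {
  const char * data = "abcabcabcabdabcabc";
  const int size = (int)strlen( data );
  int pos = 0;
  while( pos < size )
    {
    int best_len = 0, best_dist = 0;
    for( int i = 0; i < pos; ++i )              /* search the bytes already seen */
      {
      int len = 0;
      while( pos + len < size && data[i+len] == data[pos+len] ) ++len;
      if( len > 0 && len >= best_len ) { best_len = len; best_dist = pos - i; }
      }
    if( best_len >= 2 )                         /* emit a distance-length pair */
      { printf( "match   distance %d  length %d\n", best_dist, best_len );
        pos += best_len; }
    else                                        /* emit a literal byte */
      { printf( "literal '%c'\n", data[pos] ); ++pos; }
    }
  return 0;
  }
@end verbatim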
Clzip currently implements two variants of the LZMA algorithm; fast
(used by option @samp{-0}) and normal (used by all other compression levels).
The high compression of LZMA comes from combining two basic, well-proven
compression ideas: sliding dictionaries (LZ77/78) and Markov models (the
thing used by every compression algorithm that uses a range encoder or
similar order-0 entropy coder as its last stage) with segregation of
contexts according to what the bits are used for.
Clzip is a two stage compressor. The first stage is a Lempel-Ziv coder,
which reduces redundancy by translating chunks of data to their
corresponding distance-length pairs. The second stage is a range encoder
that uses a different probability model for each type of data;
distances, lengths, literal bytes, etc.
Here is how it works, step by step:
1) The member header is written to the output stream.
2) The first byte is coded literally, because there are no previous
bytes to which the match finder can refer.
3) The main encoder advances to the next byte in the input data and
calls the match finder.
4) The match finder fills an array with the minimum distances before the
current byte where a match of a given length can be found.
5) Go back to step 3 until a sequence (formed of pairs, repeated
distances and literal bytes) of minimum price has been formed, where the
price represents the number of output bits produced.
6) The range encoder encodes the sequence produced by the main encoder
and sends the produced bytes to the output stream.
7) Go back to step 3 until the input data are finished or until the
member or volume size limits are reached.
8) The range encoder is flushed.
9) The member trailer is written to the output stream.
10) If there are more data to compress, go back to step 1.
@sp 1
@noindent
The ideas embodied in clzip are due to (at least) the following people:
Abraham Lempel and Jacob Ziv (for the LZ algorithm), Andrey Markov (for
the definition of Markov chains), G.N.N. Martin (for the definition of
range encoding), Igor Pavlov (for putting all the above together in
LZMA), and Julian Seward (for bzip2's CLI).
@node Examples
@chapter A small tutorial with examples
@cindex examples
WARNING! Even if clzip is bug-free, other causes may result in a corrupt
compressed file (bugs in the system libraries, memory errors, etc).
Therefore, if the data you are going to compress are important, give the
@samp{--keep} option to clzip and do not remove the original file until
you verify the compressed file with a command like
@w{@samp{clzip -cd file.lz | cmp file -}}.
@sp 1
@noindent
Example 1: Replace a regular file with its compressed version
@samp{file.lz} and show the compression ratio.
@example
clzip -v file
@end example
@sp 1
@noindent
Example 2: Like example 1 but the created @samp{file.lz} is multi-member
with a member size of 1 MiB. The compression ratio is not shown.
@example
clzip -b 1MiB file
@end example
@sp 1
@noindent
Example 3: Restore a regular file from its compressed version
@samp{file.lz}. If the operation is successful, @samp{file.lz} is
removed.
@example
clzip -d file.lz
@end example
@sp 1
@noindent
Example 4: Verify the integrity of the compressed file @samp{file.lz}
and show status.
@example
clzip -tv file.lz
@end example
@sp 1
@noindent
Example 5: Compress a whole floppy in /dev/fd0 and send the output to
@samp{file.lz}.
@example
clzip -c /dev/fd0 > file.lz
@end example
@sp 1
@noindent
Example 6: Decompress @samp{file.lz} partially until 10 KiB of
decompressed data are produced.
@example
clzip -cd file.lz | dd bs=1024 count=10
@end example
@sp 1
@noindent
Example 7: Decompress @samp{file.lz} partially from decompressed byte
10000 to decompressed byte 15000 (5000 bytes are produced).
@example
clzip -cd file.lz | dd bs=1000 skip=10 count=5
@end example
@sp 1
@noindent
Example 8: Create a multivolume compressed tar archive with a volume
size of 1440 KiB.
@example
tar -c some_directory | clzip -S 1440KiB -o volume_name
@end example
@sp 1
@noindent
Example 9: Extract a multivolume compressed tar archive.
@example
clzip -cd volume_name*.lz | tar -xf -
@end example
@sp 1
@noindent
Example 10: Create a multivolume compressed backup of a large database
file with a volume size of 650 MB, where each volume is a multi-member
file with a member size of 32 MiB.
@example
clzip -b 32MiB -S 650MB big_db
@end example
@node Problems
@chapter Reporting bugs
@cindex bugs
@cindex getting help
There are probably bugs in clzip. There are certainly errors and
omissions in this manual. If you report them, they will get fixed. If
you don't, no one will ever know about them and they will remain unfixed
for all eternity, if not longer.
If you find a bug in clzip, please send electronic mail to
@email{lzip-bug@@nongnu.org}. Include the version number, which you can
find by running @w{@code{clzip --version}}.
@node Concept index
@unnumbered Concept index
@printindex cp
@bye