From 79b2a35a210a7565b877f0c023d6c9fceab9219f Mon Sep 17 00:00:00 2001 From: Daniel Baumann Date: Wed, 27 Jan 2021 16:59:02 +0100 Subject: Adding upstream version 1.9. Signed-off-by: Daniel Baumann --- ChangeLog | 104 ++++--- INSTALL | 37 ++- Makefile.in | 8 +- NEWS | 71 +++-- README | 106 ++++---- arg_parser.cc | 30 +- arg_parser.h | 69 ++--- compress.cc | 237 ++++++++-------- configure | 27 +- dec_stdout.cc | 211 ++++++++------- dec_stream.cc | 643 ++++++++++++++++++++++++------------------- decompress.cc | 220 +++++++++------ doc/plzip.1 | 50 ++-- doc/plzip.info | 692 ++++++++++++++++++++++++----------------------- doc/plzip.texi | 325 +++++++++++++--------- list.cc | 58 ++-- lzip.h | 78 ++++-- lzip_index.cc | 110 +++++--- lzip_index.h | 32 ++- main.cc | 355 +++++++++++++----------- testsuite/check.sh | 128 ++++++--- testsuite/fox_bcrc.lz | Bin 0 -> 80 bytes testsuite/fox_crc0.lz | Bin 0 -> 80 bytes testsuite/fox_das46.lz | Bin 0 -> 80 bytes testsuite/fox_de20.lz | Bin 0 -> 80 bytes testsuite/fox_mes81.lz | Bin 0 -> 80 bytes testsuite/fox_s11.lz | Bin 0 -> 80 bytes testsuite/fox_v2.lz | Bin 0 -> 80 bytes testsuite/test_em.txt.lz | Bin 0 -> 14024 bytes 29 files changed, 2014 insertions(+), 1577 deletions(-) create mode 100644 testsuite/fox_bcrc.lz create mode 100644 testsuite/fox_crc0.lz create mode 100644 testsuite/fox_das46.lz create mode 100644 testsuite/fox_de20.lz create mode 100644 testsuite/fox_mes81.lz create mode 100644 testsuite/fox_s11.lz create mode 100644 testsuite/fox_v2.lz create mode 100644 testsuite/test_em.txt.lz diff --git a/ChangeLog b/ChangeLog index cbbf413..29ee28a 100644 --- a/ChangeLog +++ b/ChangeLog @@ -1,15 +1,42 @@ +2021-01-03 Antonio Diaz Diaz + + * Version 1.9 released. + * main.cc (main): Report an error if a file name is empty. + Make '-o' behave like '-c', but writing to file instead of stdout. + Make '-c' and '-o' check whether the output is a terminal only once. + Do not open output if input is a terminal. 
+ * main.cc: New option '--check-lib'. + * Replace 'decompressed', 'compressed' with 'out', 'in' in output. + * decompress.cc, dec_stream.cc, dec_stdout.cc: + Continue testing if any input file fails the test. + Show the largest dictionary size in a multimember file. + * main.cc: Show final diagnostic when testing multiple files. + * decompress.cc, dec_stream.cc [LZ_API_VERSION >= 1012]: Avoid + copying decompressed data when testing with lzlib 1.12 or newer. + * compress.cc, dec_stream.cc: Start only the worker threads required. + * dec_stream.cc: Splitter stops reading when trailing data is found. + Don't include trailing data in the compressed size shown. + Use plain comparison instead of Boyer-Moore to search for headers. + * lzip_index.cc: Improve messages for corruption in last header. + * decompress.cc: Shorten messages 'Data error' and 'Unexpected EOF'. + * main.cc: Set a valid invocation_name even if argc == 0. + * Document extraction from tar.lz in manual, '--help', and man page. + * plzip.texi (Introduction): Mention tarlz as an alternative. + * plzip.texi: Several fixes and improvements. + * testsuite: Add 8 new test files. + 2019-01-05 Antonio Diaz Diaz * Version 1.8 released. - * File_* renamed to Lzip_*. - * main.cc: Added new options '--in-slots' and '--out-slots'. - * main.cc: Increased default in_slots per worker from 2 to 4. - * main.cc: Increased default out_slots per worker from 32 to 64. + * Rename File_* to Lzip_*. + * main.cc: New options '--in-slots' and '--out-slots'. + * main.cc: Increase default in_slots per worker from 2 to 4. + * main.cc: Increase default out_slots per worker from 32 to 64. * lzip.h (Lzip_trailer): New function 'verify_consistency'. * lzip_index.cc: Detect some kinds of corrupt trailers. * main.cc (main): Check return value of close( infd ). - * plzip.texi: Improved description of '-0..-9', '-m' and '-s'. - * configure: Added new option '--with-mingw'. + * plzip.texi: Improve description of '-0..-9', '-m', and '-s'. 
+ * configure: New option '--with-mingw'. * configure: Accept appending to CXXFLAGS, 'CXXFLAGS+=OPTIONS'. * INSTALL: Document use of CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO'. @@ -20,19 +47,19 @@ packet queue by a circular buffer to reduce memory fragmentation. * compress.cc: Return one empty packet at a time to reduce mem use. * main.cc: Reduce threads on 32 bit systems to use under 2.22 GiB. - * main.cc: Added new option '--loose-trailing'. - * Improved corrupt header detection to HD=3 on seekable files. + * main.cc: New option '--loose-trailing'. + * Improve corrupt header detection to HD = 3 on seekable files. (On all files with lzlib 1.10 or newer). - * Replaced 'bits/byte' with inverse compression ratio in output. + * Replace 'bits/byte' with inverse compression ratio in output. * Show progress of decompression at verbosity level 2 (-vv). * Show progress of (de)compression only if stderr is a terminal. * main.cc: Do not add a second .lz extension to the arg of -o. * Show dictionary size at verbosity level 4 (-vvvv). * main.cc (cleanup_and_fail): Suppress messages from other threads. - * list.cc: Added missing '#include '. - * plzip.texi: Added chapter 'Output'. - * plzip.texi (Memory requirements): Added table. - * plzip.texi (Program design): Added a block diagram. + * list.cc: Add missing '#include '. + * plzip.texi: New chapter 'Output'. + * plzip.texi (Memory requirements): Add table. + * plzip.texi (Program design): Add a block diagram. 2017-04-12 Antonio Diaz Diaz @@ -41,15 +68,15 @@ * Don't allow mixing different operations (-d, -l or -t). * main.cc: Continue testing if any input file is a terminal. * lzip_index.cc: Improve detection of bad dict and trailing data. - * lzip.h: Unified messages for bad magic, trailing data, etc. + * lzip.h: Unify messages for bad magic, trailing data, etc. 2016-05-14 Antonio Diaz Diaz * Version 1.5 released. - * main.cc: Added new option '-a, --trailing-error'. + * main.cc: New option '-a, --trailing-error'. 
* main.cc (main): Delete '--output' file if infd is a terminal. * main.cc (main): Don't use stdin more than once. - * plzip.texi: Added chapters 'Trailing data' and 'Examples'. + * plzip.texi: New chapters 'Trailing data' and 'Examples'. * configure: Avoid warning on some shells when testing for g++. * Makefile.in: Detect the existence of install-info. * check.sh: A POSIX shell is required to run the tests. @@ -65,20 +92,20 @@ * Version 1.3 released. * dec_stream.cc: Don't use output packets or muxer when testing. * Make '-dvvv' and '-tvvv' show dictionary size like lzip. - * lzip.h: Added missing 'const' to the declaration of 'compress'. - * plzip.texi: Added chapters 'Memory requirements' and + * lzip.h: Add missing 'const' to the declaration of 'compress'. + * plzip.texi: New chapters 'Memory requirements' and 'Minimum file sizes'. - * Makefile.in: Added new targets 'install*-compress'. + * Makefile.in: New targets 'install*-compress'. 2014-08-29 Antonio Diaz Diaz * Version 1.2 released. * main.cc (close_and_set_permissions): Behave like 'cp -p'. - * dec_stdout.cc dec_stream.cc: Make 'slot_av' a vector to limit + * dec_stdout.cc, dec_stream.cc: Make 'slot_av' a vector to limit the number of packets produced by each worker individually. - * plzip.texinfo: Renamed to plzip.texi. - * plzip.texi: Documented the approximate amount of memory required. - * License changed to GPL version 2 or later. + * plzip.texinfo: Rename to plzip.texi. + * plzip.texi: Document the approximate amount of memory required. + * Change license to GPL version 2 or later. 2013-09-17 Antonio Diaz Diaz @@ -89,14 +116,13 @@ 2013-05-29 Antonio Diaz Diaz * Version 1.0 released. - * compress.cc: 'deliver_packet' changed to 'deliver_packets'. + * compress.cc: Change 'deliver_packet' to 'deliver_packets'. * Scalability of decompression from/to regular files has been increased by removing splitter and muxer when not needed. 
* The number of worker threads is now limited to the number of members when decompressing from a regular file. * configure: Options now accept a separate argument. - * Makefile.in: Added new target 'install-as-lzip'. - * Makefile.in: Added new target 'install-bin'. + * Makefile.in: New targets 'install-as-lzip' and 'install-bin'. * main.cc: Use 'setmode' instead of '_setmode' on Windows and OS/2. * main.cc: Define 'strtoull' to 'std::strtoul' on Windows. @@ -104,17 +130,17 @@ * Version 0.9 released. * Minor fixes and cleanups. - * configure: 'datadir' renamed to 'datarootdir'. + * configure: Rename 'datadir' to 'datarootdir'. 2012-01-17 Antonio Diaz Diaz * Version 0.8 released. - * main.cc: Added new option '-F, --recompress'. + * main.cc: New option '-F, --recompress'. * decompress.cc (decompress): Show compression ratio. * main.cc (close_and_set_permissions): Inability to change output file attributes has been downgraded from error to warning. * Small change in '--help' output and man page. - * Changed quote characters in messages as advised by GNU Standards. + * Change quote characters in messages as advised by GNU Standards. * main.cc: Set stdin/stdout in binary mode on OS2. * compress.cc: Reduce memory use of compressed packets. * decompress.cc: Use Boyer-Moore algorithm to search for headers. @@ -128,15 +154,16 @@ produced by workers to limit the amount of memory used. * main.cc (open_instream): Don't show the message " and '--stdout' was not specified" for directories, etc. - * main.cc: Fixed warning about fchown return value being ignored. - * testsuite: 'test1' renamed to 'test.txt'. Added new tests. + Exit with status 1 if any output file exists and is skipped. + * main.cc: Fix warning about fchown return value being ignored. + * testsuite: Rename 'test1' to 'test.txt'. New tests. 2010-03-20 Antonio Diaz Diaz * Version 0.6 released. * Small portability fixes. - * plzip.texinfo: Added chapter 'Program Design' and description - of option '--threads'. 
+ * plzip.texinfo: New chapter 'Program Design'. + Add missing description of option '-n, --threads'. * Debug stats have been fixed. 2010-02-10 Antonio Diaz Diaz @@ -154,7 +181,7 @@ 2010-01-24 Antonio Diaz Diaz * Version 0.3 released. - * Implemented option '--data-size'. + * New option '-B, --data-size'. * Output file is now removed if plzip is interrupted. * This version automatically chooses the smallest possible dictionary size for each member during compression, saving @@ -164,15 +191,14 @@ 2010-01-17 Antonio Diaz Diaz * Version 0.2 released. - * Implemented option '--dictionary-size'. - * Implemented option '--match-length'. + * New options '-s, --dictionary-size' and '-m, --match-length'. * 'lacos_rbtree' has been replaced with a circular buffer. 2009-12-05 Antonio Diaz Diaz * Version 0.1 released. * This version is based on llzip-0.03 (2009-11-21), written by - Laszlo Ersek . + Laszlo Ersek . Thanks Laszlo! From llzip-0.03/README: llzip is a hack on my lbzip2-0.17 release. I ripped out the @@ -184,8 +210,8 @@ until something better appears on the net. -Copyright (C) 2009-2019 Antonio Diaz Diaz. +Copyright (C) 2009-2021 Antonio Diaz Diaz. This file is a collection of facts, and thus it is not copyrightable, -but just in case, you have unlimited permission to copy, distribute and +but just in case, you have unlimited permission to copy, distribute, and modify it. diff --git a/INSTALL b/INSTALL index fb86f55..aa2858a 100644 --- a/INSTALL +++ b/INSTALL @@ -1,11 +1,15 @@ Requirements ------------ -You will need a C++ compiler and the lzlib compression library installed. -I use gcc 5.3.0 and 4.1.2, but the code should compile with any standards +You will need a C++11 compiler and the compression library lzlib installed. +(gcc 3.3.6 or newer is recommended). +I use gcc 6.1.0 and 4.1.2, but the code should compile with any standards compliant compiler. 
-Lzlib must be version 1.0 or newer, but the fast encoder is only available -in lzlib 1.7 or newer, and the HD = 3 detection of corrupt headers on -non-seekable multimember files is only available in lzlib 1.10 or newer. + +Lzlib must be version 1.0 or newer, but the fast encoder requires lzlib 1.7 +or newer, the Hamming distance (HD) = 3 detection of corrupt headers in +non-seekable multimember files requires lzlib 1.10 or newer, and the 'no +copy' optimization for testing requires lzlib 1.12 or newer. + Gcc is available at http://gcc.gnu.org. Lzlib is available at http://www.nongnu.org/lzip/lzlib.html. @@ -33,7 +37,10 @@ the main archive. To link against a lzlib not installed in a standard place, use: - ./configure CPPFLAGS='-I<includedir>' LDFLAGS='-L<libdir>' + ./configure CPPFLAGS='-I <includedir>' LDFLAGS='-L <libdir>' + + (Replace <includedir> with the directory containing the file lzlib.h, + and <libdir> with the directory containing the file liblz.a). If you are compiling on MinGW, use --with-mingw (note that the Windows I/O functions used with MinGW are not guaranteed to be thread safe): @@ -50,11 +57,11 @@ the main archive. documentation. Or type 'make install-compress', which additionally compresses the - info manual and the man page after installation. (Installing - compressed docs may become the default in the future). + info manual and the man page after installation. + (Installing compressed docs may become the default in the future). - You can install only the program, the info manual or the man page by - typing 'make install-bin', 'make install-info' or 'make install-man' + You can install only the program, the info manual, or the man page by + typing 'make install-bin', 'make install-info', or 'make install-man' respectively. Instead of 'make install', you can type 'make install-as-lzip' to @@ -65,10 +72,10 @@ the main archive. Another way ----------- You can also compile plzip into a separate directory. -To do this, you must use a version of 'make' that supports the 'VPATH' -variable, such as GNU 'make'. 
'cd' to the directory where you want the +To do this, you must use a version of 'make' that supports the variable +'VPATH', such as GNU 'make'. 'cd' to the directory where you want the object files and executables to go and run the 'configure' script. -'configure' automatically checks for the source code in '.', in '..' and +'configure' automatically checks for the source code in '.', in '..', and in the directory that 'configure' is in. 'configure' recognizes the option '--srcdir=DIR' to control where to @@ -79,7 +86,7 @@ After running 'configure', you can run 'make' and 'make install' as explained above. -Copyright (C) 2009-2019 Antonio Diaz Diaz. +Copyright (C) 2009-2021 Antonio Diaz Diaz. This file is free documentation: you have unlimited permission to copy, -distribute and modify it. +distribute, and modify it. diff --git a/Makefile.in b/Makefile.in index 6d40fd1..40e75a1 100644 --- a/Makefile.in +++ b/Makefile.in @@ -79,7 +79,7 @@ install-info : -rm -f "$(DESTDIR)$(infodir)/$(pkgname).info"* $(INSTALL_DATA) $(VPATH)/doc/$(pkgname).info "$(DESTDIR)$(infodir)/$(pkgname).info" -if $(CAN_RUN_INSTALLINFO) ; then \ - install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$(pkgname).info" ; \ + install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$(pkgname).info" ; \ fi install-info-compress : install-info @@ -104,7 +104,7 @@ uninstall-bin : uninstall-info : -if $(CAN_RUN_INSTALLINFO) ; then \ - install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$(pkgname).info" ; \ + install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$(pkgname).info" ; \ fi -rm -f "$(DESTDIR)$(infodir)/$(pkgname).info"* @@ -129,7 +129,9 @@ dist : doc $(DISTNAME)/*.cc \ $(DISTNAME)/testsuite/check.sh \ $(DISTNAME)/testsuite/test.txt \ - $(DISTNAME)/testsuite/test.txt.lz + $(DISTNAME)/testsuite/fox_*.lz \ + $(DISTNAME)/testsuite/test.txt.lz \ + $(DISTNAME)/testsuite/test_em.txt.lz rm -f $(DISTNAME) lzip -v -9 
$(DISTNAME).tar diff --git a/NEWS b/NEWS index bfe26e6..b01da26 100644 --- a/NEWS +++ b/NEWS @@ -1,31 +1,58 @@ -Changes in version 1.8: +Changes in version 1.9: -The new options '--in-slots' and '--out-slots', setting the number of input -and output packets buffered during streamed decompression, have been added. -Increasing the number of packets may increase decompression speed, but -requires more memory. +Plzip now reports an error if a file name is empty (plzip -t ""). -The default number of input packets buffered per worker thread when -decompressing from non-seekable input has been increased from 2 to 4. +Option '-o, --output' now behaves like '-c, --stdout', but sending the +output unconditionally to a file instead of to standard output. See the new +description of '-o' in the manual. This change is backwards compatible only +when (de)compressing from standard input alone. Therefore commands like: + plzip -o foo.lz - bar < foo +must now be split into: + plzip -o foo.lz - < foo + plzip bar +or rewritten as: + plzip - bar < foo > foo.lz -The default number of output packets buffered per worker thread when -decompressing to non-seekable output has been increased from 32 to 64. +When using '-c' or '-o', plzip now checks whether the output is a terminal +only once. -Detection of forbidden combinations of characters in trailing data has been -improved. +Plzip now does not even open the output file if the input file is a terminal. -Errors are now also checked when closing the input file. +The new option '--check-lib', which compares the version of lzlib used to +compile plzip with the version actually being used at run time, has been added. -The descriptions of '-0..-9', '-m' and '-s' in the manual have been -improved. +The words 'decompressed' and 'compressed' have been replaced with the +shorter 'out' and 'in' in the verbose output when decompressing or testing. 
-The configure script now accepts the option '--with-mingw' to enable the -compilation of plzip under MS Windows (with the MinGW compiler). Use with -care. The Windows I/O functions used are not guaranteed to be thread safe. -(Code based on a patch by Hannes Domani). +When checking the integrity of multiple files, plzip is now able to continue +checking the rest of the files (instead of exiting) if some of them fail the +test, allowing 'plzip --test' to show a final diagnostic with the number of +files that failed (just as 'lzip --test'). -The configure script now accepts appending options to CXXFLAGS using the -syntax 'CXXFLAGS+=OPTIONS'. +Testing is now slightly (1.6%) faster when using lzlib 1.12. -It has been documented in INSTALL the use of -CXXFLAGS+='-D __USE_MINGW_ANSI_STDIO' when compiling on MinGW. +When compressing, or when decompressing or testing from a non-seekable file +or from standard input, plzip now starts only the number of worker threads +required. + +When decompressing or testing from a non-seekable file or from standard +input, trailing data are now not counted in the compressed size shown. + +When decompressing or testing a multimember file, plzip now shows the +largest dictionary size of all members in the file instead of showing the +dictionary size of the first member. + +Option '--list' now reports corruption or truncation of the last header in a +multimember file specifically instead of showing the generic message "Last +member in input file is truncated or corrupt." + +The error messages for 'Data error' and 'Unexpected EOF' have been shortened. + +The commands needed to extract files from a tar.lz archive have been +documented in the manual, in the output of '--help', and in the man page. + +Tarlz is mentioned in the manual as an alternative to tar + plzip. + +Several fixes and improvements have been made to the manual. + +8 new test files have been added to the testsuite. 
diff --git a/README b/README index 695aa6c..ad008c2 100644 --- a/README +++ b/README @@ -1,30 +1,36 @@ Description Plzip is a massively parallel (multi-threaded) implementation of lzip, fully -compatible with lzip 1.4 or newer. Plzip uses the lzlib compression library. - -Lzip is a lossless data compressor with a user interface similar to the -one of gzip or bzip2. Lzip can compress about as fast as gzip (lzip -0) -or compress most files more than bzip2 (lzip -9). Decompression speed is -intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 -from a data recovery perspective. Lzip has been designed, written and -tested with great care to replace gzip and bzip2 as the standard -general-purpose compressed format for unix-like systems. - -Plzip can compress/decompress large files on multiprocessor machines -much faster than lzip, at the cost of a slightly reduced compression -ratio (0.4 to 2 percent larger compressed files). Note that the number -of usable threads is limited by file size; on files larger than a few GB -plzip can use hundreds of processors, but on files of only a few MB -plzip is no faster than lzip. - -When compressing, plzip divides the input file into chunks and -compresses as many chunks simultaneously as worker threads are chosen, -creating a multimember compressed file. +compatible with lzip 1.4 or newer. Plzip uses the compression library lzlib. + +Lzip is a lossless data compressor with a user interface similar to the one +of gzip or bzip2. Lzip uses a simplified form of the 'Lempel-Ziv-Markov +chain-Algorithm' (LZMA) stream format, chosen to maximize safety and +interoperability. Lzip can compress about as fast as gzip (lzip -0) or +compress most files more than bzip2 (lzip -9). Decompression speed is +intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 from +a data recovery perspective. 
Lzip has been designed, written, and tested +with great care to replace gzip and bzip2 as the standard general-purpose +compressed format for unix-like systems. + +Plzip can compress/decompress large files on multiprocessor machines much +faster than lzip, at the cost of a slightly reduced compression ratio (0.4 +to 2 percent larger compressed files). Note that the number of usable +threads is limited by file size; on files larger than a few GB plzip can use +hundreds of processors, but on files of only a few MB plzip is no faster +than lzip. + +For creation and manipulation of compressed tar archives tarlz can be more +efficient than using tar and plzip because tarlz is able to keep the +alignment between tar members and lzip members. + +When compressing, plzip divides the input file into chunks and compresses as +many chunks simultaneously as worker threads are chosen, creating a +multimember compressed file. When decompressing, plzip decompresses as many members simultaneously as worker threads are chosen. Files that were compressed with lzip will not -be decompressed faster than using lzip (unless the '-b' option was used) +be decompressed faster than using lzip (unless the option '-b' was used) because lzip usually produces single-member files, which can't be decompressed in parallel. @@ -32,34 +38,34 @@ The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability: * The lzip format provides very safe integrity checking and some data - recovery means. The lziprecover program can repair bit flip errors - (one of the most common forms of data corruption) in lzip files, - and provides data recovery capabilities, including error-checked - merging of damaged copies of a file. - - * The lzip format is as simple as possible (but not simpler). 
The - lzip manual provides the source code of a simple decompressor - along with a detailed explanation of how it works, so that with - the only help of the lzip manual it would be possible for a - digital archaeologist to extract the data from a lzip file long - after quantum computers eventually render LZMA obsolete. + recovery means. The program lziprecover can repair bit flip errors + (one of the most common forms of data corruption) in lzip files, and + provides data recovery capabilities, including error-checked merging + of damaged copies of a file. + + * The lzip format is as simple as possible (but not simpler). The lzip + manual provides the source code of a simple decompressor along with a + detailed explanation of how it works, so that with the only help of the + lzip manual it would be possible for a digital archaeologist to extract + the data from a lzip file long after quantum computers eventually + render LZMA obsolete. * Additionally the lzip reference implementation is copylefted, which guarantees that it will remain free forever. -A nice feature of the lzip format is that a corrupt byte is easier to -repair the nearer it is from the beginning of the file. Therefore, with -the help of lziprecover, losing an entire archive just because of a -corrupt byte near the beginning is a thing of the past. +A nice feature of the lzip format is that a corrupt byte is easier to repair +the nearer it is from the beginning of the file. Therefore, with the help of +lziprecover, losing an entire archive just because of a corrupt byte near +the beginning is a thing of the past. Plzip uses the same well-defined exit status values used by lzip, which makes it safer than compressors returning ambiguous warning values (like gzip) when it is used as a back end for other programs like tar or zutils. -Plzip will automatically use for each file the largest dictionary size -that does not exceed neither the file size nor the limit given. 
Keep in -mind that the decompression memory requirement is affected at -compression time by the choice of dictionary size limit. +Plzip will automatically use for each file the largest dictionary size that +does not exceed neither the file size nor the limit given. Keep in mind that +the decompression memory requirement is affected at compression time by the +choice of dictionary size limit. When compressing, plzip replaces every file given in the command line with a compressed version of itself, with the name "original_name.lz". @@ -76,28 +82,28 @@ possible, ownership of the file just as 'cp -p' does. (If the user ID or the group ID can't be duplicated, the file permission bits S_ISUID and S_ISGID are cleared). -Plzip is able to read from some types of non regular files if the -'--stdout' option is specified. +Plzip is able to read from some types of non-regular files if either the +option '-c' or the option '-o' is specified. If no file names are specified, plzip compresses (or decompresses) from -standard input to standard output. In this case, plzip will decline to -write compressed output to a terminal, as this would be entirely -incomprehensible and therefore pointless. +standard input to standard output. Plzip will refuse to read compressed data +from a terminal or write compressed data to a terminal, as this would be +entirely incomprehensible and might leave the terminal in an abnormal state. Plzip will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding decompressed files. Integrity testing of concatenated compressed files is also supported. -LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never -have been compressed. Decompressed is used to refer to data which have -undergone the process of decompression. +LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never have +been compressed. 
Decompressed is used to refer to data which have undergone +the process of decompression. -Copyright (C) 2009-2019 Antonio Diaz Diaz. +Copyright (C) 2009-2021 Antonio Diaz Diaz. This file is free documentation: you have unlimited permission to copy, -distribute and modify it. +distribute, and modify it. The file Makefile.in is a data file used by configure to produce the Makefile. It has the same copyright owner and permissions that configure diff --git a/arg_parser.cc b/arg_parser.cc index ea32fde..2e40a13 100644 --- a/arg_parser.cc +++ b/arg_parser.cc @@ -1,20 +1,20 @@ -/* Arg_parser - POSIX/GNU command line argument parser. (C++ version) - Copyright (C) 2006-2019 Antonio Diaz Diaz. +/* Arg_parser - POSIX/GNU command line argument parser. (C++ version) + Copyright (C) 2006-2021 Antonio Diaz Diaz. - This library is free software. Redistribution and use in source and - binary forms, with or without modification, are permitted provided - that the following conditions are met: + This library is free software. Redistribution and use in source and + binary forms, with or without modification, are permitted provided + that the following conditions are met: - 1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions, and the following disclaimer. - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions, and the following disclaimer in the + documentation and/or other materials provided with the distribution. 
- This library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. */ #include @@ -167,7 +167,7 @@ Arg_parser::Arg_parser( const int argc, const char * const argv[], else non_options.push_back( argv[argind++] ); } } - if( error_.size() ) data.clear(); + if( !error_.empty() ) data.clear(); else { for( unsigned i = 0; i < non_options.size(); ++i ) @@ -190,7 +190,7 @@ Arg_parser::Arg_parser( const char * const opt, const char * const arg, { if( opt[2] ) parse_long_option( opt, arg, options, argind ); } else parse_short_option( opt, arg, options, argind ); - if( error_.size() ) data.clear(); + if( !error_.empty() ) data.clear(); } else data.push_back( Record( opt ) ); } diff --git a/arg_parser.h b/arg_parser.h index ceb9933..5629b90 100644 --- a/arg_parser.h +++ b/arg_parser.h @@ -1,43 +1,43 @@ -/* Arg_parser - POSIX/GNU command line argument parser. (C++ version) - Copyright (C) 2006-2019 Antonio Diaz Diaz. +/* Arg_parser - POSIX/GNU command line argument parser. (C++ version) + Copyright (C) 2006-2021 Antonio Diaz Diaz. - This library is free software. Redistribution and use in source and - binary forms, with or without modification, are permitted provided - that the following conditions are met: + This library is free software. Redistribution and use in source and + binary forms, with or without modification, are permitted provided + that the following conditions are met: - 1. Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. + 1. Redistributions of source code must retain the above copyright + notice, this list of conditions, and the following disclaimer. - 2. 
Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions, and the following disclaimer in the + documentation and/or other materials provided with the distribution. - This library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. */ -/* Arg_parser reads the arguments in 'argv' and creates a number of - option codes, option arguments and non-option arguments. +/* Arg_parser reads the arguments in 'argv' and creates a number of + option codes, option arguments, and non-option arguments. - In case of error, 'error' returns a non-empty error message. + In case of error, 'error' returns a non-empty error message. - 'options' is an array of 'struct Option' terminated by an element - containing a code which is zero. A null name means a short-only - option. A code value outside the unsigned char range means a - long-only option. + 'options' is an array of 'struct Option' terminated by an element + containing a code which is zero. A null name means a short-only + option. A code value outside the unsigned char range means a + long-only option. - Arg_parser normally makes it appear as if all the option arguments - were specified before all the non-option arguments for the purposes - of parsing, even if the user of your program intermixed option and - non-option arguments. If you want the arguments in the exact order - the user typed them, call 'Arg_parser' with 'in_order' = true. 
+ Arg_parser normally makes it appear as if all the option arguments + were specified before all the non-option arguments for the purposes + of parsing, even if the user of your program intermixed option and + non-option arguments. If you want the arguments in the exact order + the user typed them, call 'Arg_parser' with 'in_order' = true. - The argument '--' terminates all options; any following arguments are - treated as non-option arguments, even if they begin with a hyphen. + The argument '--' terminates all options; any following arguments are + treated as non-option arguments, even if they begin with a hyphen. - The syntax for optional option arguments is '-<short_option><argument>' - (without whitespace), or '--<long_option>=<argument>'. + The syntax for optional option arguments is '-<short_option><argument>' + (without whitespace), or '--<long_option>=<argument>'. */ class Arg_parser @@ -61,6 +61,7 @@ private: explicit Record( const char * const arg ) : code( 0 ), argument( arg ) {} }; + const std::string empty_arg; std::string error_; std::vector< Record > data; @@ -73,17 +74,17 @@ public: Arg_parser( const int argc, const char * const argv[], const Option options[], const bool in_order = false ); - // Restricted constructor. Parses a single token and argument (if any) + // Restricted constructor. Parses a single token and argument (if any). Arg_parser( const char * const opt, const char * const arg, const Option options[] ); const std::string & error() const { return error_; } - // The number of arguments parsed (may be different from argc) + // The number of arguments parsed. May be different from argc. int arguments() const { return data.size(); } - // If code( i ) is 0, argument( i ) is a non-option. - // Else argument( i ) is the option's argument (or empty). + /* If code( i ) is 0, argument( i ) is a non-option. + Else argument( i ) is the option's argument (or empty).
*/ int code( const int i ) const { if( i >= 0 && i < arguments() ) return data[i].code; @@ -93,6 +94,6 @@ public: const std::string & argument( const int i ) const { if( i >= 0 && i < arguments() ) return data[i].argument; - else return error_; + else return empty_arg; } }; diff --git a/compress.cc b/compress.cc index af36f95..d8e2536 100644 --- a/compress.cc +++ b/compress.cc @@ -1,19 +1,19 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009 Laszlo Ersek. - Copyright (C) 2009-2019 Antonio Diaz Diaz. - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see <http://www.gnu.org/licenses/>. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009 Laszlo Ersek. + Copyright (C) 2009-2021 Antonio Diaz Diaz. + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ #define _FILE_OFFSET_BITS 64 @@ -27,7 +27,6 @@ #include #include #include -#include #include #include #include @@ -39,9 +38,9 @@ #endif -// Returns the number of bytes really read. -// If (returned value < size) and (errno == 0), means EOF was reached. -// +/* Returns the number of bytes really read. + If (returned value < size) and (errno == 0), means EOF was reached. +*/ int readblock( const int fd, uint8_t * const buf, const int size ) { int sz = 0; @@ -58,9 +57,9 @@ int readblock( const int fd, uint8_t * const buf, const int size ) } -// Returns the number of bytes really written. -// If (returned value < size), it is always an error. -// +/* Returns the number of bytes really written. + If (returned value < size), it is always an error. +*/ int writeblock( const int fd, const uint8_t * const buf, const int size ) { int sz = 0; @@ -150,7 +149,7 @@ namespace { unsigned long long in_size = 0; unsigned long long out_size = 0; -const char * const mem_msg = "Not enough memory. Try a smaller dictionary size"; +const char * const mem_msg2 = "Not enough memory. 
Try a smaller dictionary size."; struct Packet // data block with a serial number @@ -235,8 +234,7 @@ public: xunlock( &imutex ); if( !ipacket ) // EOF { - // notify muxer when last worker exits - xlock( &omutex ); + xlock( &omutex ); // notify muxer when last worker exits if( --num_working == 0 ) xsignal( &oav_or_exit ); xunlock( &omutex ); } @@ -284,12 +282,16 @@ public: void return_empty_packet() // return a slot to the tally { slot_tally.leave_slot(); } - void finish() // splitter has no more packets to send + void finish( const int workers_spared ) { - xlock( &imutex ); + xlock( &imutex ); // splitter has no more packets to send eof = true; xbroadcast( &iav_or_eof ); xunlock( &imutex ); + xlock( &omutex ); // notify muxer if all workers have exited + num_working -= workers_spared; + if( num_working <= 0 ) xsignal( &oav_or_exit ); + xunlock( &omutex ); } bool finished() // all packets delivered to muxer @@ -303,52 +305,6 @@ public: }; -struct Splitter_arg - { - Packet_courier * courier; - const Pretty_print * pp; - int infd; - int data_size; - int offset; - }; - - - // split data from input file into chunks and pass them to - // courier for packaging and distribution to workers. 
-extern "C" void * csplitter( void * arg ) - { - const Splitter_arg & tmp = *(const Splitter_arg *)arg; - Packet_courier & courier = *tmp.courier; - const Pretty_print & pp = *tmp.pp; - const int infd = tmp.infd; - const int data_size = tmp.data_size; - const int offset = tmp.offset; - - for( bool first_post = true; ; first_post = false ) - { - uint8_t * const data = new( std::nothrow ) uint8_t[offset+data_size]; - if( !data ) { pp( mem_msg ); cleanup_and_fail(); } - const int size = readblock( infd, data + offset, data_size ); - if( size != data_size && errno ) - { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } - - if( size > 0 || first_post ) // first packet may be empty - { - in_size += size; - courier.receive_packet( data, size ); - if( size < data_size ) break; // EOF - } - else - { - delete[] data; - break; - } - } - courier.finish(); // no more packets to send - return 0; - } - - struct Worker_arg { Packet_courier * courier; @@ -358,9 +314,18 @@ struct Worker_arg int offset; }; +struct Splitter_arg + { + struct Worker_arg worker_arg; + pthread_t * worker_threads; + int infd; + int data_size; + int num_workers; // returned by splitter to main thread + }; + - // get packets from courier, replace their contents, and return - // them to courier. +/* Get packets from courier, replace their contents, and return them to + courier. */ extern "C" void * cworker( void * arg ) { const Worker_arg & tmp = *(const Worker_arg *)arg; @@ -386,7 +351,7 @@ extern "C" void * cworker( void * arg ) if( !encoder || LZ_compress_errno( encoder ) != LZ_ok ) { if( !encoder || LZ_compress_errno( encoder ) == LZ_mem_error ) - pp( mem_msg ); + pp( mem_msg2 ); else internal_error( "invalid argument to encoder." ); cleanup_and_fail(); @@ -435,8 +400,57 @@ extern "C" void * cworker( void * arg ) } - // get from courier the processed and sorted packets, and write - // their contents to the output file. 
+/* Split data from input file into chunks and pass them to courier for + packaging and distribution to workers. + Start a worker per packet up to a maximum of num_workers. +*/ +extern "C" void * csplitter( void * arg ) + { + Splitter_arg & tmp = *(Splitter_arg *)arg; + Packet_courier & courier = *tmp.worker_arg.courier; + const Pretty_print & pp = *tmp.worker_arg.pp; + pthread_t * const worker_threads = tmp.worker_threads; + const int offset = tmp.worker_arg.offset; + const int infd = tmp.infd; + const int data_size = tmp.data_size; + int i = 0; // number of workers started + + for( bool first_post = true; ; first_post = false ) + { + uint8_t * const data = new( std::nothrow ) uint8_t[offset+data_size]; + if( !data ) { pp( mem_msg2 ); cleanup_and_fail(); } + const int size = readblock( infd, data + offset, data_size ); + if( size != data_size && errno ) + { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } + + if( size > 0 || first_post ) // first packet may be empty + { + in_size += size; + courier.receive_packet( data, size ); + if( i < tmp.num_workers ) // start a new worker + { + const int errcode = + pthread_create( &worker_threads[i++], 0, cworker, &tmp.worker_arg ); + if( errcode ) { show_error( "Can't create worker threads", errcode ); + cleanup_and_fail(); } + } + if( size < data_size ) break; // EOF + } + else + { + delete[] data; + break; + } + } + courier.finish( tmp.num_workers - i ); // no more packets to send + tmp.num_workers = i; + return 0; + } + + +/* Get from courier the processed and sorted packets, and write their + contents to the output file. 
+*/ void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) { std::vector< const Packet * > packet_vector; @@ -450,8 +464,7 @@ void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) const Packet * const opacket = packet_vector[i]; out_size += opacket->size; - const int wr = writeblock( outfd, opacket->data, opacket->size ); - if( wr != opacket->size ) + if( writeblock( outfd, opacket->data, opacket->size ) != opacket->size ) { pp(); show_error( "Write error", errno ); cleanup_and_fail(); } delete[] opacket->data; courier.return_empty_packet(); @@ -462,8 +475,8 @@ void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) } // end namespace - // init the courier, then start the splitter and the workers and - // call the muxer. +/* Init the courier, then start the splitter and the workers and call the + muxer. */ int compress( const unsigned long long cfile_size, const int data_size, const int dictionary_size, const int match_len_limit, const int num_workers, @@ -478,50 +491,44 @@ int compress( const unsigned long long cfile_size, out_size = 0; Packet_courier courier( num_workers, num_slots ); + if( debug_level & 2 ) std::fputs( "compress.\n", stderr ); + + pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; + if( !worker_threads ) { pp( mem_msg ); return 1; } + Splitter_arg splitter_arg; - splitter_arg.courier = &courier; - splitter_arg.pp = &pp; + splitter_arg.worker_arg.courier = &courier; + splitter_arg.worker_arg.pp = &pp; + splitter_arg.worker_arg.dictionary_size = dictionary_size; + splitter_arg.worker_arg.match_len_limit = match_len_limit; + splitter_arg.worker_arg.offset = offset; + splitter_arg.worker_threads = worker_threads; splitter_arg.infd = infd; splitter_arg.data_size = data_size; - splitter_arg.offset = offset; + splitter_arg.num_workers = num_workers; pthread_t splitter_thread; int errcode = pthread_create( &splitter_thread, 0, csplitter, &splitter_arg ); 
if( errcode ) - { show_error( "Can't create splitter thread", errcode ); cleanup_and_fail(); } + { show_error( "Can't create splitter thread", errcode ); + delete[] worker_threads; return 1; } if( verbosity >= 1 ) pp(); show_progress( 0, cfile_size, &pp ); // init - Worker_arg worker_arg; - worker_arg.courier = &courier; - worker_arg.pp = &pp; - worker_arg.dictionary_size = dictionary_size; - worker_arg.match_len_limit = match_len_limit; - worker_arg.offset = offset; - - pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; - if( !worker_threads ) { pp( mem_msg ); cleanup_and_fail(); } - for( int i = 0; i < num_workers; ++i ) - { - errcode = pthread_create( worker_threads + i, 0, cworker, &worker_arg ); - if( errcode ) - { show_error( "Can't create worker threads", errcode ); cleanup_and_fail(); } - } - muxer( courier, pp, outfd ); - for( int i = num_workers - 1; i >= 0; --i ) - { + errcode = pthread_join( splitter_thread, 0 ); + if( errcode ) { show_error( "Can't join splitter thread", errcode ); + cleanup_and_fail(); } + + for( int i = splitter_arg.num_workers; --i >= 0; ) + { // join only the workers started errcode = pthread_join( worker_threads[i], 0 ); - if( errcode ) - { show_error( "Can't join worker threads", errcode ); cleanup_and_fail(); } + if( errcode ) { show_error( "Can't join worker threads", errcode ); + cleanup_and_fail(); } } delete[] worker_threads; - errcode = pthread_join( splitter_thread, 0 ); - if( errcode ) - { show_error( "Can't join splitter thread", errcode ); cleanup_and_fail(); } - if( verbosity >= 1 ) { if( in_size == 0 || out_size == 0 ) @@ -537,14 +544,14 @@ int compress( const unsigned long long cfile_size, if( debug_level & 1 ) std::fprintf( stderr, + "workers started %8u\n" "any worker tried to consume from splitter %8u times\n" "any worker had to wait %8u times\n" "muxer tried to consume from workers %8u times\n" "muxer had to wait %8u times\n", - courier.icheck_counter, - courier.iwait_counter, - 
courier.ocheck_counter, - courier.owait_counter ); + splitter_arg.num_workers, + courier.icheck_counter, courier.iwait_counter, + courier.ocheck_counter, courier.owait_counter ); if( !courier.finished() ) internal_error( "courier not finished." ); return 0; diff --git a/configure b/configure index c26658b..6045ad4 100755 --- a/configure +++ b/configure @@ -1,12 +1,12 @@ #! /bin/sh # configure script for Plzip - Massively parallel implementation of lzip -# Copyright (C) 2009-2019 Antonio Diaz Diaz. +# Copyright (C) 2009-2021 Antonio Diaz Diaz. # # This configure script is free software: you have unlimited permission -# to copy, distribute and modify it. +# to copy, distribute, and modify it. pkgname=plzip -pkgversion=1.8 +pkgversion=1.9 progname=plzip with_mingw= srctrigger=doc/${pkgname}.texi @@ -27,11 +27,7 @@ CXXFLAGS='-Wall -W -O2' LDFLAGS= # checking whether we are using GNU C++. -/bin/sh -c "${CXX} --version" > /dev/null 2>&1 || - { - CXX=c++ - CXXFLAGS=-O2 - } +/bin/sh -c "${CXX} --version" > /dev/null 2>&1 || { CXX=c++ ; CXXFLAGS=-O2 ; } # Loop over all args args= @@ -43,11 +39,12 @@ while [ $# != 0 ] ; do shift # Add the argument quoted to args - args="${args} \"${option}\"" + if [ -z "${args}" ] ; then args="\"${option}\"" + else args="${args} \"${option}\"" ; fi # Split out the argument for options that take them case ${option} in - *=*) optarg=`echo ${option} | sed -e 's,^[^=]*=,,;s,/$,,'` ;; + *=*) optarg=`echo "${option}" | sed -e 's,^[^=]*=,,;s,/$,,'` ;; esac # Process the options @@ -128,7 +125,7 @@ if [ -z "${srcdir}" ] ; then if [ ! -r "${srcdir}/${srctrigger}" ] ; then srcdir=.. ; fi if [ ! -r "${srcdir}/${srctrigger}" ] ; then ## the sed command below emulates the dirname command - srcdir=`echo $0 | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` + srcdir=`echo "$0" | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` fi fi @@ -151,7 +148,7 @@ if [ -z "${no_create}" ] ; then # Run this file to recreate the current configuration. 
# # This script is free software: you have unlimited permission -# to copy, distribute and modify it. +# to copy, distribute, and modify it. exec /bin/sh $0 ${args} --no-create EOF @@ -174,11 +171,11 @@ echo "LDFLAGS = ${LDFLAGS}" rm -f Makefile cat > Makefile << EOF # Makefile for Plzip - Massively parallel implementation of lzip -# Copyright (C) 2009-2019 Antonio Diaz Diaz. +# Copyright (C) 2009-2021 Antonio Diaz Diaz. # This file was generated automatically by configure. Don't edit. # # This Makefile is free software: you have unlimited permission -# to copy, distribute and modify it. +# to copy, distribute, and modify it. pkgname = ${pkgname} pkgversion = ${pkgversion} @@ -199,5 +196,5 @@ EOF cat "${srcdir}/Makefile.in" >> Makefile echo "OK. Now you can run make." -echo "If make fails, verify that the lzlib compression library is correctly" +echo "If make fails, verify that the compression library lzlib is correctly" echo "installed (see INSTALL)." diff --git a/dec_stdout.cc b/dec_stdout.cc index 2a85009..de45a86 100644 --- a/dec_stdout.cc +++ b/dec_stdout.cc @@ -1,19 +1,19 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009 Laszlo Ersek. - Copyright (C) 2009-2019 Antonio Diaz Diaz. - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see . +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009 Laszlo Ersek. + Copyright (C) 2009-2021 Antonio Diaz Diaz. 
+ + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. */ #define _FILE_OFFSET_BITS 64 @@ -28,7 +28,6 @@ #include #include #include -#include #include #include #include @@ -44,10 +43,13 @@ enum { max_packet_size = 1 << 20 }; struct Packet // data block { - uint8_t * data; // data == 0 means end of member + uint8_t * data; // data may be null if size == 0 int size; // number of bytes in data (if any) - explicit Packet( uint8_t * const d = 0, const int s = 0 ) - : data( d ), size( s ) {} + bool eom; // end of member + Packet() : data( 0 ), size( 0 ), eom( true ) {} + Packet( uint8_t * const d, const int s, const bool e ) + : data( d ), size( s ), eom ( e ) {} + ~Packet() { if( data ) delete[] data; } }; @@ -58,23 +60,25 @@ public: unsigned owait_counter; private: int deliver_worker_id; // worker queue currently delivering packets - std::vector< std::queue< Packet * > > opacket_queues; + std::vector< std::queue< const Packet * > > opacket_queues; int num_working; // number of workers still running const int num_workers; // number of workers const unsigned out_slots; // max output packets per queue pthread_mutex_t omutex; pthread_cond_t oav_or_exit; // output packet available or all workers exited std::vector< pthread_cond_t > slot_av; // output slot available + const Shared_retval & shared_retval; // discard new packets on error Packet_courier( const Packet_courier & ); // declared as private void operator=( const Packet_courier
& ); // declared as private public: - Packet_courier( const int workers, const int slots ) - : ocheck_counter( 0 ), owait_counter( 0 ), - deliver_worker_id( 0 ), + Packet_courier( const Shared_retval & sh_ret, const int workers, + const int slots ) + : ocheck_counter( 0 ), owait_counter( 0 ), deliver_worker_id( 0 ), opacket_queues( workers ), num_working( workers ), - num_workers( workers ), out_slots( slots ), slot_av( workers ) + num_workers( workers ), out_slots( slots ), slot_av( workers ), + shared_retval( sh_ret ) { xinit_mutex( &omutex ); xinit_cond( &oav_or_exit ); for( unsigned i = 0; i < slot_av.size(); ++i ) xinit_cond( &slot_av[i] ); @@ -82,6 +86,10 @@ public: ~Packet_courier() { + if( shared_retval() ) // cleanup to avoid memory leaks + for( int i = 0; i < num_workers; ++i ) + while( !opacket_queues[i].empty() ) + { delete opacket_queues[i].front(); opacket_queues[i].pop(); } for( unsigned i = 0; i < slot_av.size(); ++i ) xdestroy_cond( &slot_av[i] ); xdestroy_cond( &oav_or_exit ); xdestroy_mutex( &omutex ); } @@ -94,25 +102,28 @@ public: xunlock( &omutex ); } - // collect a packet from a worker - void collect_packet( Packet * const opacket, const int worker_id ) + // collect a packet from a worker, discard packet on error + void collect_packet( const Packet * const opacket, const int worker_id ) { xlock( &omutex ); if( opacket->data ) - { while( opacket_queues[worker_id].size() >= out_slots ) + { + if( shared_retval() ) { delete opacket; goto done; } xwait( &slot_av[worker_id], &omutex ); - } + } opacket_queues[worker_id].push( opacket ); if( worker_id == deliver_worker_id ) xsignal( &oav_or_exit ); +done: xunlock( &omutex ); } - // deliver a packet to muxer - // if packet data == 0, move to next queue and wait again - Packet * deliver_packet() + /* deliver a packet to muxer + if packet->eom, move to next queue + if packet data == 0, wait again */ + const Packet * deliver_packet() { - Packet * opacket = 0; + const Packet * opacket = 0; xlock( &omutex 
); ++ocheck_counter; while( true ) @@ -127,8 +138,9 @@ public: opacket_queues[deliver_worker_id].pop(); if( opacket_queues[deliver_worker_id].size() + 1 == out_slots ) xsignal( &slot_av[deliver_worker_id] ); + if( opacket->eom && ++deliver_worker_id >= num_workers ) + deliver_worker_id = 0; if( opacket->data ) break; - if( ++deliver_worker_id >= num_workers ) deliver_worker_id = 0; delete opacket; opacket = 0; } xunlock( &omutex ); @@ -150,32 +162,34 @@ struct Worker_arg const Lzip_index * lzip_index; Packet_courier * courier; const Pretty_print * pp; + Shared_retval * shared_retval; int worker_id; int num_workers; int infd; }; - // read members from file, decompress their contents, and - // give the produced packets to courier. +/* Read members from file, decompress their contents, and give to courier + the packets produced. +*/ extern "C" void * dworker_o( void * arg ) { const Worker_arg & tmp = *(const Worker_arg *)arg; const Lzip_index & lzip_index = *tmp.lzip_index; Packet_courier & courier = *tmp.courier; const Pretty_print & pp = *tmp.pp; + Shared_retval & shared_retval = *tmp.shared_retval; const int worker_id = tmp.worker_id; const int num_workers = tmp.num_workers; const int infd = tmp.infd; const int buffer_size = 65536; - uint8_t * new_data = new( std::nothrow ) uint8_t[max_packet_size]; + int new_pos = 0; + uint8_t * new_data = 0; uint8_t * const ibuffer = new( std::nothrow ) uint8_t[buffer_size]; LZ_Decoder * const decoder = LZ_decompress_open(); - if( !new_data || !ibuffer || !decoder || - LZ_decompress_errno( decoder ) != LZ_ok ) - { pp( "Not enough memory." 
); cleanup_and_fail(); } - int new_pos = 0; + if( !ibuffer || !decoder || LZ_decompress_errno( decoder ) != LZ_ok ) + { if( shared_retval.set_value( 1 ) ) { pp( mem_msg ); } goto done; } for( long i = worker_id; i < lzip_index.members(); i += num_workers ) { @@ -184,6 +198,7 @@ extern "C" void * dworker_o( void * arg ) while( member_rest > 0 ) { + if( shared_retval() ) goto done; // other worker found a problem while( LZ_decompress_write_size( decoder ) > 0 ) { const int size = std::min( LZ_decompress_write_size( decoder ), @@ -191,7 +206,8 @@ extern "C" void * dworker_o( void * arg ) if( size > 0 ) { if( preadblock( infd, ibuffer, size, member_pos ) != size ) - { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { pp(); show_error( "Read error", errno ); } goto done; } member_pos += size; member_rest -= size; if( LZ_decompress_write( decoder, ibuffer, size ) != size ) @@ -201,60 +217,60 @@ extern "C" void * dworker_o( void * arg ) } while( true ) // read and pack decompressed data { + if( !new_data && + !( new_data = new( std::nothrow ) uint8_t[max_packet_size] ) ) + { if( shared_retval.set_value( 1 ) ) { pp( mem_msg ); } goto done; } const int rd = LZ_decompress_read( decoder, new_data + new_pos, max_packet_size - new_pos ); if( rd < 0 ) - cleanup_and_fail( decompress_read_error( decoder, pp, worker_id ) ); + { decompress_error( decoder, pp, shared_retval, worker_id ); + goto done; } new_pos += rd; if( new_pos > max_packet_size ) internal_error( "opacket size exceeded in worker." 
); - if( new_pos == max_packet_size || - LZ_decompress_finished( decoder ) == 1 ) + const bool eom = LZ_decompress_finished( decoder ) == 1; + if( new_pos == max_packet_size || eom ) // make data packet { - if( new_pos > 0 ) // make data packet - { - Packet * const opacket = new Packet( new_data, new_pos ); - courier.collect_packet( opacket, worker_id ); - new_pos = 0; - new_data = new( std::nothrow ) uint8_t[max_packet_size]; - if( !new_data ) { pp( "Not enough memory." ); cleanup_and_fail(); } - } - if( LZ_decompress_finished( decoder ) == 1 ) - { // end of member token - courier.collect_packet( new Packet, worker_id ); - LZ_decompress_reset( decoder ); // prepare for new member - break; - } + const Packet * const opacket = + new Packet( ( new_pos > 0 ) ? new_data : 0, new_pos, eom ); + courier.collect_packet( opacket, worker_id ); + if( new_pos > 0 ) { new_pos = 0; new_data = 0; } + if( eom ) + { LZ_decompress_reset( decoder ); // prepare for new member + break; } } if( rd == 0 ) break; } } show_progress( lzip_index.mblock( i ).size() ); } - - delete[] ibuffer; delete[] new_data; - if( LZ_decompress_member_position( decoder ) != 0 ) - { pp( "Error, some data remains in decoder." ); cleanup_and_fail(); } - if( LZ_decompress_close( decoder ) < 0 ) - { pp( "LZ_decompress_close failed." ); cleanup_and_fail(); } +done: + delete[] ibuffer; if( new_data ) delete[] new_data; + if( LZ_decompress_member_position( decoder ) != 0 && + shared_retval.set_value( 1 ) ) + pp( "Error, some data remains in decoder." ); + if( LZ_decompress_close( decoder ) < 0 && shared_retval.set_value( 1 ) ) + pp( "LZ_decompress_close failed." ); courier.worker_finished(); return 0; } - // get from courier the processed and sorted packets, and write - // their contents to the output file. -void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) +/* Get from courier the processed and sorted packets, and write their + contents to the output file. Drain queue on error. 
+*/ +void muxer( Packet_courier & courier, const Pretty_print & pp, + Shared_retval & shared_retval, const int outfd ) { while( true ) { - Packet * const opacket = courier.deliver_packet(); + const Packet * const opacket = courier.deliver_packet(); if( !opacket ) break; // queue is empty. all workers exited - const int wr = writeblock( outfd, opacket->data, opacket->size ); - if( wr != opacket->size ) - { pp(); show_error( "Write error", errno ); cleanup_and_fail(); } - delete[] opacket->data; + if( shared_retval() == 0 && + writeblock( outfd, opacket->data, opacket->size ) != opacket->size && + shared_retval.set_value( 1 ) ) + { pp(); show_error( "Write error", errno ); } delete opacket; } } @@ -262,66 +278,59 @@ void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) } // end namespace - // init the courier, then start the workers and call the muxer. +// init the courier, then start the workers and call the muxer. int dec_stdout( const int num_workers, const int infd, const int outfd, const Pretty_print & pp, const int debug_level, const int out_slots, const Lzip_index & lzip_index ) { - Packet_courier courier( num_workers, out_slots ); + Shared_retval shared_retval; + Packet_courier courier( shared_retval, num_workers, out_slots ); Worker_arg * worker_args = new( std::nothrow ) Worker_arg[num_workers]; pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; if( !worker_args || !worker_threads ) - { pp( "Not enough memory." 
); cleanup_and_fail(); } - for( int i = 0; i < num_workers; ++i ) + { pp( mem_msg ); delete[] worker_threads; delete[] worker_args; return 1; } + + int i = 0; // number of workers started + for( ; i < num_workers; ++i ) { worker_args[i].lzip_index = &lzip_index; worker_args[i].courier = &courier; worker_args[i].pp = &pp; + worker_args[i].shared_retval = &shared_retval; worker_args[i].worker_id = i; worker_args[i].num_workers = num_workers; worker_args[i].infd = infd; const int errcode = pthread_create( &worker_threads[i], 0, dworker_o, &worker_args[i] ); if( errcode ) - { show_error( "Can't create worker threads", errcode ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { show_error( "Can't create worker threads", errcode ); } break; } } - muxer( courier, pp, outfd ); + muxer( courier, pp, shared_retval, outfd ); - for( int i = num_workers - 1; i >= 0; --i ) + while( --i >= 0 ) { const int errcode = pthread_join( worker_threads[i], 0 ); - if( errcode ) - { show_error( "Can't join worker threads", errcode ); cleanup_and_fail(); } + if( errcode && shared_retval.set_value( 1 ) ) + show_error( "Can't join worker threads", errcode ); } delete[] worker_threads; delete[] worker_args; - if( verbosity >= 2 ) - { - if( verbosity >= 4 ) show_header( lzip_index.dictionary_size( 0 ) ); - const unsigned long long in_size = lzip_index.cdata_size(); - const unsigned long long out_size = lzip_index.udata_size(); - if( out_size == 0 || in_size == 0 ) - std::fputs( "no data compressed. ", stderr ); - else - std::fprintf( stderr, "%6.3f:1, %5.2f%% ratio, %5.2f%% saved. ", - (double)out_size / in_size, - ( 100.0 * in_size ) / out_size, - 100.0 - ( ( 100.0 * in_size ) / out_size ) ); - if( verbosity >= 3 ) - std::fprintf( stderr, "decompressed %9llu, compressed %8llu. 
", - out_size, in_size ); - } - if( verbosity >= 1 ) std::fputs( "done\n", stderr ); + if( shared_retval() ) return shared_retval(); // some thread found a problem + + if( verbosity >= 1 ) + show_results( lzip_index.cdata_size(), lzip_index.udata_size(), + lzip_index.dictionary_size(), false ); if( debug_level & 1 ) std::fprintf( stderr, + "workers started %8u\n" "muxer tried to consume from workers %8u times\n" "muxer had to wait %8u times\n", - courier.ocheck_counter, - courier.owait_counter ); + num_workers, courier.ocheck_counter, courier.owait_counter ); if( !courier.finished() ) internal_error( "courier not finished." ); return 0; diff --git a/dec_stream.cc b/dec_stream.cc index 2e1f752..a23d5e9 100644 --- a/dec_stream.cc +++ b/dec_stream.cc @@ -1,19 +1,19 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009 Laszlo Ersek. - Copyright (C) 2009-2019 Antonio Diaz Diaz. - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see . +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009 Laszlo Ersek. + Copyright (C) 2009-2021 Antonio Diaz Diaz. + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. 
+ + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. */ #define _FILE_OFFSET_BITS 64 @@ -28,13 +28,19 @@ #include #include #include -#include #include #include #include #include "lzip.h" +/* When a problem is detected by any thread: + - the thread sets shared_retval to 1 or 2. + - the splitter sets eof and returns. + - the courier discards new packets received or collected. + - the workers drain the queue and return. + - the muxer drains the queue and returns. + (Draining seems to be faster than cleaning up later). */ namespace { @@ -45,10 +51,13 @@ unsigned long long out_size = 0; struct Packet // data block { - uint8_t * data; // data == 0 means end of member + uint8_t * data; // data may be null if size == 0 int size; // number of bytes in data (if any) - explicit Packet( uint8_t * const d = 0, const int s = 0 ) - : data( d ), size( s ) {} + bool eom; // end of member + Packet() : data( 0 ), size( 0 ), eom( true ) {} + Packet( uint8_t * const d, const int s, const bool e ) + : data( d ), size( s ), eom ( e ) {} + ~Packet() { if( data ) delete[] data; } }; @@ -63,8 +72,8 @@ private: int receive_worker_id; // worker queue currently receiving packets int deliver_worker_id; // worker queue currently delivering packets Slot_tally slot_tally; // limits the number of input packets - std::vector< std::queue< Packet * > > ipacket_queues; - std::vector< std::queue< Packet * > > opacket_queues; + std::vector< std::queue< const Packet * > > ipacket_queues; + std::vector< std::queue< const Packet * > > opacket_queues; int num_working; // number of workers still running const int num_workers; // number of workers const unsigned out_slots; // max output packets per queue @@ -73,20
+82,23 @@ private: pthread_mutex_t omutex; pthread_cond_t oav_or_exit; // output packet available or all workers exited std::vector< pthread_cond_t > slot_av; // output slot available + const Shared_retval & shared_retval; // discard new packets on error bool eof; // splitter done + bool trailing_data_found_; // a worker found trailing data Packet_courier( const Packet_courier & ); // declared as private void operator=( const Packet_courier & ); // declared as private public: - Packet_courier( const int workers, const int in_slots, const int oslots ) + Packet_courier( const Shared_retval & sh_ret, const int workers, + const int in_slots, const int oslots ) : icheck_counter( 0 ), iwait_counter( 0 ), ocheck_counter( 0 ), owait_counter( 0 ), receive_worker_id( 0 ), deliver_worker_id( 0 ), slot_tally( in_slots ), ipacket_queues( workers ), opacket_queues( workers ), num_working( workers ), num_workers( workers ), out_slots( oslots ), slot_av( workers ), - eof( false ) + shared_retval( sh_ret ), eof( false ), trailing_data_found_( false ) { xinit_mutex( &imutex ); xinit_cond( &iav_or_eof ); xinit_mutex( &omutex ); xinit_cond( &oav_or_exit ); @@ -95,30 +107,37 @@ public: ~Packet_courier() { + if( shared_retval() ) // cleanup to avoid memory leaks + for( int i = 0; i < num_workers; ++i ) + { + while( !ipacket_queues[i].empty() ) + { delete ipacket_queues[i].front(); ipacket_queues[i].pop(); } + while( !opacket_queues[i].empty() ) + { delete opacket_queues[i].front(); opacket_queues[i].pop(); } + } for( unsigned i = 0; i < slot_av.size(); ++i ) xdestroy_cond( &slot_av[i] ); xdestroy_cond( &oav_or_exit ); xdestroy_mutex( &omutex ); xdestroy_cond( &iav_or_eof ); xdestroy_mutex( &imutex ); } - // make a packet with data received from splitter - // if data == 0 (end of member token), move to next queue - void receive_packet( uint8_t * const data, const int size ) + /* Make a packet with data received from splitter. + If eom == true (end of member), move to next queue. 
*/ + void receive_packet( uint8_t * const data, const int size, const bool eom ) { - Packet * const ipacket = new Packet( data, size ); - if( data ) - { in_size += size; slot_tally.get_slot(); } // wait for a free slot + if( shared_retval() ) { delete[] data; return; } // discard packet on error + const Packet * const ipacket = new Packet( data, size, eom ); + slot_tally.get_slot(); // wait for a free slot xlock( &imutex ); ipacket_queues[receive_worker_id].push( ipacket ); xbroadcast( &iav_or_eof ); xunlock( &imutex ); - if( !data && ++receive_worker_id >= num_workers ) - receive_worker_id = 0; + if( eom && ++receive_worker_id >= num_workers ) receive_worker_id = 0; } // distribute a packet to a worker - Packet * distribute_packet( const int worker_id ) + const Packet * distribute_packet( const int worker_id ) { - Packet * ipacket = 0; + const Packet * ipacket = 0; xlock( &imutex ); ++icheck_counter; while( ipacket_queues[worker_id].empty() && !eof ) @@ -132,37 +151,38 @@ public: ipacket_queues[worker_id].pop(); } xunlock( &imutex ); - if( ipacket ) - { if( ipacket->data ) slot_tally.leave_slot(); } - else + if( ipacket ) slot_tally.leave_slot(); + else // no more packets { - // notify muxer when last worker exits - xlock( &omutex ); + xlock( &omutex ); // notify muxer when last worker exits if( --num_working == 0 ) xsignal( &oav_or_exit ); xunlock( &omutex ); } return ipacket; } - // collect a packet from a worker - void collect_packet( Packet * const opacket, const int worker_id ) + // collect a packet from a worker, discard packet on error + void collect_packet( const Packet * const opacket, const int worker_id ) { xlock( &omutex ); if( opacket->data ) - { while( opacket_queues[worker_id].size() >= out_slots ) + { + if( shared_retval() ) { delete opacket; goto done; } xwait( &slot_av[worker_id], &omutex ); - } + } opacket_queues[worker_id].push( opacket ); if( worker_id == deliver_worker_id ) xsignal( &oav_or_exit ); +done: xunlock( &omutex ); } - // deliver a 
packet to muxer - // if packet data == 0, move to next queue and wait again - Packet * deliver_packet() + /* deliver a packet to muxer + if packet->eom, move to next queue + if packet data == 0, wait again */ + const Packet * deliver_packet() { - Packet * opacket = 0; + const Packet * opacket = 0; xlock( &omutex ); ++ocheck_counter; while( true ) @@ -177,27 +197,37 @@ public: opacket_queues[deliver_worker_id].pop(); if( opacket_queues[deliver_worker_id].size() + 1 == out_slots ) xsignal( &slot_av[deliver_worker_id] ); + if( opacket->eom && ++deliver_worker_id >= num_workers ) + deliver_worker_id = 0; if( opacket->data ) break; - if( ++deliver_worker_id >= num_workers ) deliver_worker_id = 0; delete opacket; opacket = 0; } xunlock( &omutex ); return opacket; } - void add_out_size( const unsigned long long partial_out_size ) + void add_sizes( const unsigned long long partial_in_size, + const unsigned long long partial_out_size ) { - xlock( &omutex ); + xlock( &imutex ); + in_size += partial_in_size; out_size += partial_out_size; - xunlock( &omutex ); + xunlock( &imutex ); } - void finish() // splitter has no more packets to send + void set_trailing_flag() { trailing_data_found_ = true; } + bool trailing_data_found() { return trailing_data_found_; } + + void finish( const int workers_started ) { - xlock( &imutex ); + xlock( &imutex ); // splitter has no more packets to send eof = true; xbroadcast( &iav_or_eof ); xunlock( &imutex ); + xlock( &omutex ); // notify muxer if all workers have exited + num_working -= num_workers - workers_started; // workers spared + if( num_working <= 0 ) xsignal( &oav_or_exit ); + xunlock( &omutex ); } bool finished() // all packets delivered to muxer @@ -212,100 +242,261 @@ public: }; -// Search forward from 'pos' for "LZIP" (Boyer-Moore algorithm) -// Returns pos of found string or 'pos+size' if not found. 
-// -int find_magic( const uint8_t * const buffer, const int pos, const int size ) +struct Worker_arg { - const uint8_t table[256] = { - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,1,4,4,3,4,4,4,4,4,4,4,4,4,4,4,4,4,2,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, - 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4 }; - - for( int i = pos; i <= pos + size - 4; i += table[buffer[i+3]] ) - if( buffer[i] == 'L' && buffer[i+1] == 'Z' && - buffer[i+2] == 'I' && buffer[i+3] == 'P' ) - return i; // magic string found - return pos + size; - } - + Packet_courier * courier; + const Pretty_print * pp; + Shared_retval * shared_retval; + int worker_id; + bool ignore_trailing; + bool loose_trailing; + bool testing; + bool nocopy; // avoid copying decompressed data when testing + }; struct Splitter_arg { + struct Worker_arg worker_arg; + Worker_arg * worker_args; + pthread_t * worker_threads; unsigned long long cfile_size; - Packet_courier * courier; - const Pretty_print * pp; int infd; unsigned dictionary_size; // returned by splitter to main thread + int num_workers; // returned by splitter to main thread }; - // split data from input file into chunks and pass them to - // courier for packaging and distribution to workers. -extern "C" void * dsplitter_s( void * arg ) +/* Consume packets from courier, decompress their contents and, if not + testing, give to courier the packets produced. 
+*/ +extern "C" void * dworker_s( void * arg ) { - Splitter_arg & tmp = *(Splitter_arg *)arg; + const Worker_arg & tmp = *(const Worker_arg *)arg; Packet_courier & courier = *tmp.courier; const Pretty_print & pp = *tmp.pp; + Shared_retval & shared_retval = *tmp.shared_retval; + const int worker_id = tmp.worker_id; + const bool ignore_trailing = tmp.ignore_trailing; + const bool loose_trailing = tmp.loose_trailing; + const bool testing = tmp.testing; + const bool nocopy = tmp.nocopy; + + unsigned long long partial_in_size = 0, partial_out_size = 0; + int new_pos = 0; + bool draining = false; // either trailing data or an error were found + uint8_t * new_data = 0; + LZ_Decoder * const decoder = LZ_decompress_open(); + if( !decoder || LZ_decompress_errno( decoder ) != LZ_ok ) + { draining = true; if( shared_retval.set_value( 1 ) ) pp( mem_msg ); } + + while( true ) + { + const Packet * const ipacket = courier.distribute_packet( worker_id ); + if( !ipacket ) break; // no more packets to process + + int written = 0; + while( !draining ) // else discard trailing data or drain queue + { + if( LZ_decompress_write_size( decoder ) > 0 && written < ipacket->size ) + { + const int wr = LZ_decompress_write( decoder, ipacket->data + written, + ipacket->size - written ); + if( wr < 0 ) internal_error( "library error (LZ_decompress_write)." ); + written += wr; + if( written > ipacket->size ) + internal_error( "ipacket size exceeded in worker." ); + } + if( ipacket->eom && written == ipacket->size ) + LZ_decompress_finish( decoder ); + unsigned long long total_in = 0; // detect empty member + corrupt header + while( !draining ) // read and pack decompressed data + { + if( !nocopy && !new_data && + !( new_data = new( std::nothrow ) uint8_t[max_packet_size] ) ) + { draining = true; if( shared_retval.set_value( 1 ) ) pp( mem_msg ); + break; } + const int rd = LZ_decompress_read( decoder, + nocopy ? 
0 : new_data + new_pos, + max_packet_size - new_pos ); + if( rd < 0 ) // trailing data or decoder error + { + draining = true; + const enum LZ_Errno lz_errno = LZ_decompress_errno( decoder ); + if( lz_errno == LZ_header_error ) + { + courier.set_trailing_flag(); + if( !ignore_trailing ) + { if( shared_retval.set_value( 2 ) ) pp( trailing_msg ); } + } + else if( lz_errno == LZ_data_error && + LZ_decompress_member_position( decoder ) == 0 ) + { + courier.set_trailing_flag(); + if( !loose_trailing ) + { if( shared_retval.set_value( 2 ) ) pp( corrupt_mm_msg ); } + else if( !ignore_trailing ) + { if( shared_retval.set_value( 2 ) ) pp( trailing_msg ); } + } + else + decompress_error( decoder, pp, shared_retval, worker_id ); + } + else new_pos += rd; + if( new_pos > max_packet_size ) + internal_error( "opacket size exceeded in worker." ); + if( LZ_decompress_member_finished( decoder ) == 1 ) + { + partial_in_size += LZ_decompress_member_position( decoder ); + partial_out_size += LZ_decompress_data_position( decoder ); + } + const bool eom = draining || LZ_decompress_finished( decoder ) == 1; + if( new_pos == max_packet_size || eom ) + { + if( !testing ) // make data packet + { + const Packet * const opacket = + new Packet( ( new_pos > 0 ) ? new_data : 0, new_pos, eom ); + courier.collect_packet( opacket, worker_id ); + if( new_pos > 0 ) new_data = 0; + } + new_pos = 0; + if( eom ) + { LZ_decompress_reset( decoder ); // prepare for new member + break; } + } + if( rd == 0 ) + { + const unsigned long long size = LZ_decompress_total_in_size( decoder ); + if( total_in == size ) break; else total_in = size; + } + } + if( !ipacket->data || written == ipacket->size ) break; + } + delete ipacket; + } + + if( new_data ) delete[] new_data; + courier.add_sizes( partial_in_size, partial_out_size ); + if( LZ_decompress_member_position( decoder ) != 0 && + shared_retval.set_value( 1 ) ) + pp( "Error, some data remains in decoder." 
); + if( LZ_decompress_close( decoder ) < 0 && shared_retval.set_value( 1 ) ) + pp( "LZ_decompress_close failed." ); + return 0; + } + + +bool start_worker( const Worker_arg & worker_arg, + Worker_arg * const worker_args, + pthread_t * const worker_threads, const int worker_id, + Shared_retval & shared_retval ) + { + worker_args[worker_id] = worker_arg; + worker_args[worker_id].worker_id = worker_id; + const int errcode = pthread_create( &worker_threads[worker_id], 0, + dworker_s, &worker_args[worker_id] ); + if( errcode && shared_retval.set_value( 1 ) ) + show_error( "Can't create worker threads", errcode ); + return errcode == 0; + } + + +/* Split data from input file into chunks and pass them to courier for + packaging and distribution to workers. + Start a worker per member up to a maximum of num_workers. +*/ +extern "C" void * dsplitter_s( void * arg ) + { + Splitter_arg & tmp = *(Splitter_arg *)arg; + const Worker_arg & worker_arg = tmp.worker_arg; + Packet_courier & courier = *worker_arg.courier; + const Pretty_print & pp = *worker_arg.pp; + Shared_retval & shared_retval = *worker_arg.shared_retval; + Worker_arg * const worker_args = tmp.worker_args; + pthread_t * const worker_threads = tmp.worker_threads; const int infd = tmp.infd; + int worker_id = 0; // number of workers started const int hsize = Lzip_header::size; const int tsize = Lzip_trailer::size; const int buffer_size = max_packet_size; - const int base_buffer_size = tsize + buffer_size + hsize; + // buffer with room for trailer, header, data, and sentinel "LZIP" + const int base_buffer_size = tsize + hsize + buffer_size + 4; uint8_t * const base_buffer = new( std::nothrow ) uint8_t[base_buffer_size]; - if( !base_buffer ) { pp( "Not enough memory." 
); cleanup_and_fail(); } + if( !base_buffer ) + { +mem_fail: + if( shared_retval.set_value( 1 ) ) pp( mem_msg ); +fail: + delete[] base_buffer; + courier.finish( worker_id ); // no more packets to send + tmp.num_workers = worker_id; + return 0; + } uint8_t * const buffer = base_buffer + tsize; int size = readblock( infd, buffer, buffer_size + hsize ) - hsize; bool at_stream_end = ( size < buffer_size ); if( size != buffer_size && errno ) - { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { pp(); show_error( "Read error", errno ); } goto fail; } if( size + hsize < min_member_size ) - { show_file_error( pp.name(), "Input file is too short." ); - cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) show_file_error( pp.name(), + ( size <= 0 ) ? "File ends unexpectedly at member header." : + "Input file is too short." ); goto fail; } const Lzip_header & header = *(const Lzip_header *)buffer; if( !header.verify_magic() ) - { show_file_error( pp.name(), bad_magic_msg ); cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) + { show_file_error( pp.name(), bad_magic_msg ); } goto fail; } if( !header.verify_version() ) - { pp( bad_version( header.version() ) ); cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) + { pp( bad_version( header.version() ) ); } goto fail; } tmp.dictionary_size = header.dictionary_size(); if( !isvalid_ds( tmp.dictionary_size ) ) - { pp( bad_dict_msg ); cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) { pp( bad_dict_msg ); } goto fail; } if( verbosity >= 1 ) pp(); show_progress( 0, tmp.cfile_size, &pp ); // init unsigned long long partial_member_size = 0; + bool worker_pending = true; // start 1 worker per first packet of member while( true ) { - int pos = 0; + if( shared_retval() ) break; // stop sending packets on error + int pos = 0; // current searching position + std::memcpy( buffer + hsize + size, lzip_magic, 4 ); // sentinel for( int 
newpos = 1; newpos <= size; ++newpos ) { - newpos = find_magic( buffer, newpos, size + 4 - newpos ); + while( buffer[newpos] != lzip_magic[0] || + buffer[newpos+1] != lzip_magic[1] || + buffer[newpos+2] != lzip_magic[2] || + buffer[newpos+3] != lzip_magic[3] ) ++newpos; if( newpos <= size ) { const Lzip_trailer & trailer = *(const Lzip_trailer *)(buffer + newpos - tsize); const unsigned long long member_size = trailer.member_size(); - if( partial_member_size + newpos - pos == member_size ) + if( partial_member_size + newpos - pos == member_size && + trailer.verify_consistency() ) { // header found const Lzip_header & header = *(const Lzip_header *)(buffer + newpos); if( !header.verify_version() ) - { pp( bad_version( header.version() ) ); cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) + { pp( bad_version( header.version() ) ); } goto fail; } const unsigned dictionary_size = header.dictionary_size(); if( !isvalid_ds( dictionary_size ) ) - { pp( bad_dict_msg ); cleanup_and_fail( 2 ); } + { if( shared_retval.set_value( 2 ) ) pp( bad_dict_msg ); + goto fail; } + if( tmp.dictionary_size < dictionary_size ) + tmp.dictionary_size = dictionary_size; uint8_t * const data = new( std::nothrow ) uint8_t[newpos - pos]; - if( !data ) { pp( "Not enough memory." 
); cleanup_and_fail(); } + if( !data ) goto mem_fail; std::memcpy( data, buffer + pos, newpos - pos ); - courier.receive_packet( data, newpos - pos ); - courier.receive_packet( 0, 0 ); // end of member token + courier.receive_packet( data, newpos - pos, true ); // eom partial_member_size = 0; pos = newpos; + if( worker_pending ) + { if( !start_worker( worker_arg, worker_args, worker_threads, + worker_id, shared_retval ) ) goto fail; + ++worker_id; } + worker_pending = worker_id < tmp.num_workers; show_progress( member_size ); } } @@ -314,160 +505,56 @@ extern "C" void * dsplitter_s( void * arg ) if( at_stream_end ) { uint8_t * data = new( std::nothrow ) uint8_t[size + hsize - pos]; - if( !data ) { pp( "Not enough memory." ); cleanup_and_fail(); } + if( !data ) goto mem_fail; std::memcpy( data, buffer + pos, size + hsize - pos ); - courier.receive_packet( data, size + hsize - pos ); - courier.receive_packet( 0, 0 ); // end of member token + courier.receive_packet( data, size + hsize - pos, true ); // eom + if( worker_pending && + start_worker( worker_arg, worker_args, worker_threads, + worker_id, shared_retval ) ) ++worker_id; break; } if( pos < buffer_size ) { partial_member_size += buffer_size - pos; uint8_t * data = new( std::nothrow ) uint8_t[buffer_size - pos]; - if( !data ) { pp( "Not enough memory." 
); cleanup_and_fail(); } + if( !data ) goto mem_fail; std::memcpy( data, buffer + pos, buffer_size - pos ); - courier.receive_packet( data, buffer_size - pos ); + courier.receive_packet( data, buffer_size - pos, false ); + if( worker_pending ) + { if( !start_worker( worker_arg, worker_args, worker_threads, + worker_id, shared_retval ) ) break; + ++worker_id; worker_pending = false; } } + if( courier.trailing_data_found() ) break; std::memcpy( base_buffer, base_buffer + buffer_size, tsize + hsize ); size = readblock( infd, buffer + hsize, buffer_size ); at_stream_end = ( size < buffer_size ); if( size != buffer_size && errno ) - { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { pp(); show_error( "Read error", errno ); } break; } } delete[] base_buffer; - courier.finish(); // no more packets to send - return 0; - } - - -struct Worker_arg - { - Packet_courier * courier; - const Pretty_print * pp; - int worker_id; - bool ignore_trailing; - bool loose_trailing; - bool testing; - }; - - - // consume packets from courier, decompress their contents and, - // if not testing, give the produced packets to courier. -extern "C" void * dworker_s( void * arg ) - { - const Worker_arg & tmp = *(const Worker_arg *)arg; - Packet_courier & courier = *tmp.courier; - const Pretty_print & pp = *tmp.pp; - const int worker_id = tmp.worker_id; - const bool ignore_trailing = tmp.ignore_trailing; - const bool loose_trailing = tmp.loose_trailing; - const bool testing = tmp.testing; - - uint8_t * new_data = new( std::nothrow ) uint8_t[max_packet_size]; - LZ_Decoder * const decoder = LZ_decompress_open(); - if( !new_data || !decoder || LZ_decompress_errno( decoder ) != LZ_ok ) - { pp( "Not enough memory." 
); cleanup_and_fail(); } - unsigned long long partial_out_size = 0; - int new_pos = 0; - bool trailing_data_found = false; - - while( true ) - { - const Packet * const ipacket = courier.distribute_packet( worker_id ); - if( !ipacket ) break; // no more packets to process - if( !ipacket->data ) LZ_decompress_finish( decoder ); - - int written = 0; - while( !trailing_data_found ) - { - if( LZ_decompress_write_size( decoder ) > 0 && written < ipacket->size ) - { - const int wr = LZ_decompress_write( decoder, ipacket->data + written, - ipacket->size - written ); - if( wr < 0 ) internal_error( "library error (LZ_decompress_write)." ); - written += wr; - if( written > ipacket->size ) - internal_error( "ipacket size exceeded in worker." ); - } - while( !trailing_data_found ) // read and pack decompressed data - { - const int rd = LZ_decompress_read( decoder, new_data + new_pos, - max_packet_size - new_pos ); - if( rd < 0 ) - { - const enum LZ_Errno lz_errno = LZ_decompress_errno( decoder ); - if( lz_errno == LZ_header_error ) - { - trailing_data_found = true; - if( !ignore_trailing ) - { pp( trailing_msg ); cleanup_and_fail( 2 ); } - } - else if( lz_errno == LZ_data_error && - LZ_decompress_member_position( decoder ) == 0 ) - { - trailing_data_found = true; - if( !loose_trailing ) - { pp( corrupt_mm_msg ); cleanup_and_fail( 2 ); } - else if( !ignore_trailing ) - { pp( trailing_msg ); cleanup_and_fail( 2 ); } - } - else - cleanup_and_fail( decompress_read_error( decoder, pp, worker_id ) ); - } - else new_pos += rd; - if( new_pos > max_packet_size ) - internal_error( "opacket size exceeded in worker." ); - if( new_pos == max_packet_size || trailing_data_found || - LZ_decompress_finished( decoder ) == 1 ) - { - if( !testing && new_pos > 0 ) // make data packet - { - Packet * const opacket = new Packet( new_data, new_pos ); - courier.collect_packet( opacket, worker_id ); - new_data = new( std::nothrow ) uint8_t[max_packet_size]; - if( !new_data ) { pp( "Not enough memory." 
); cleanup_and_fail(); } - } - partial_out_size += new_pos; - new_pos = 0; - if( trailing_data_found || LZ_decompress_finished( decoder ) == 1 ) - { - if( !testing ) // end of member token - courier.collect_packet( new Packet, worker_id ); - LZ_decompress_reset( decoder ); // prepare for new member - break; - } - } - if( rd == 0 ) break; - } - if( !ipacket->data || written == ipacket->size ) break; - } - if( ipacket->data ) delete[] ipacket->data; - delete ipacket; - } - - delete[] new_data; - courier.add_out_size( partial_out_size ); - if( LZ_decompress_member_position( decoder ) != 0 ) - { pp( "Error, some data remains in decoder." ); cleanup_and_fail(); } - if( LZ_decompress_close( decoder ) < 0 ) - { pp( "LZ_decompress_close failed." ); cleanup_and_fail(); } + courier.finish( worker_id ); // no more packets to send + tmp.num_workers = worker_id; return 0; } - // get from courier the processed and sorted packets, and write - // their contents to the output file. -void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) +/* Get from courier the processed and sorted packets, and write their + contents to the output file. Drain queue on error. +*/ +void muxer( Packet_courier & courier, const Pretty_print & pp, + Shared_retval & shared_retval, const int outfd ) { while( true ) { - Packet * const opacket = courier.deliver_packet(); + const Packet * const opacket = courier.deliver_packet(); if( !opacket ) break; // queue is empty. 
all workers exited - const int wr = writeblock( outfd, opacket->data, opacket->size ); - if( wr != opacket->size ) - { pp(); show_error( "Write error", errno ); cleanup_and_fail(); } - delete[] opacket->data; + if( shared_retval() == 0 && + writeblock( outfd, opacket->data, opacket->size ) != opacket->size && + shared_retval.set_value( 1 ) ) + { pp(); show_error( "Write error", errno ); } delete opacket; } } @@ -475,8 +562,9 @@ void muxer( Packet_courier & courier, const Pretty_print & pp, const int outfd ) } // end namespace - // init the courier, then start the splitter and the workers and, - // if not testing, call the muxer. +/* Init the courier, then start the splitter and the workers and, if not + testing, call the muxer. +*/ int dec_stream( const unsigned long long cfile_size, const int num_workers, const int infd, const int outfd, const Pretty_print & pp, const int debug_level, @@ -487,77 +575,76 @@ int dec_stream( const unsigned long long cfile_size, num_workers * in_slots : INT_MAX; in_size = 0; out_size = 0; - Packet_courier courier( num_workers, total_in_slots, out_slots ); + Shared_retval shared_retval; + Packet_courier courier( shared_retval, num_workers, total_in_slots, out_slots ); + + if( debug_level & 2 ) std::fputs( "decompress stream.\n", stderr ); + + Worker_arg * worker_args = new( std::nothrow ) Worker_arg[num_workers]; + pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; + if( !worker_args || !worker_threads ) + { pp( mem_msg ); delete[] worker_threads; delete[] worker_args; return 1; } + +#if defined LZ_API_VERSION && LZ_API_VERSION >= 1012 + const bool nocopy = ( outfd < 0 && LZ_api_version() >= 1012 ); +#else + const bool nocopy = false; +#endif Splitter_arg splitter_arg; + splitter_arg.worker_arg.courier = &courier; + splitter_arg.worker_arg.pp = &pp; + splitter_arg.worker_arg.shared_retval = &shared_retval; + splitter_arg.worker_arg.worker_id = 0; + splitter_arg.worker_arg.ignore_trailing = ignore_trailing; + 
splitter_arg.worker_arg.loose_trailing = loose_trailing; + splitter_arg.worker_arg.testing = ( outfd < 0 ); + splitter_arg.worker_arg.nocopy = nocopy; + splitter_arg.worker_args = worker_args; + splitter_arg.worker_threads = worker_threads; splitter_arg.cfile_size = cfile_size; - splitter_arg.courier = &courier; - splitter_arg.pp = &pp; splitter_arg.infd = infd; + splitter_arg.num_workers = num_workers; pthread_t splitter_thread; int errcode = pthread_create( &splitter_thread, 0, dsplitter_s, &splitter_arg ); if( errcode ) - { show_error( "Can't create splitter thread", errcode ); cleanup_and_fail(); } + { show_error( "Can't create splitter thread", errcode ); + delete[] worker_threads; delete[] worker_args; return 1; } - Worker_arg * worker_args = new( std::nothrow ) Worker_arg[num_workers]; - pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; - if( !worker_args || !worker_threads ) - { pp( "Not enough memory." ); cleanup_and_fail(); } - for( int i = 0; i < num_workers; ++i ) - { - worker_args[i].courier = &courier; - worker_args[i].pp = &pp; - worker_args[i].worker_id = i; - worker_args[i].ignore_trailing = ignore_trailing; - worker_args[i].loose_trailing = loose_trailing; - worker_args[i].testing = ( outfd < 0 ); - errcode = pthread_create( &worker_threads[i], 0, dworker_s, &worker_args[i] ); - if( errcode ) - { show_error( "Can't create worker threads", errcode ); cleanup_and_fail(); } - } + if( outfd >= 0 ) muxer( courier, pp, shared_retval, outfd ); - if( outfd >= 0 ) muxer( courier, pp, outfd ); + errcode = pthread_join( splitter_thread, 0 ); + if( errcode && shared_retval.set_value( 1 ) ) + show_error( "Can't join splitter thread", errcode ); - for( int i = num_workers - 1; i >= 0; --i ) - { + for( int i = splitter_arg.num_workers; --i >= 0; ) + { // join only the workers started errcode = pthread_join( worker_threads[i], 0 ); - if( errcode ) - { show_error( "Can't join worker threads", errcode ); cleanup_and_fail(); } + if( errcode && 
shared_retval.set_value( 1 ) ) + show_error( "Can't join worker threads", errcode ); } delete[] worker_threads; delete[] worker_args; - errcode = pthread_join( splitter_thread, 0 ); - if( errcode ) - { show_error( "Can't join splitter thread", errcode ); cleanup_and_fail(); } + if( shared_retval() ) return shared_retval(); // some thread found a problem - if( verbosity >= 2 ) - { - if( verbosity >= 4 ) show_header( splitter_arg.dictionary_size ); - if( out_size == 0 || in_size == 0 ) - std::fputs( "no data compressed. ", stderr ); - else - std::fprintf( stderr, "%6.3f:1, %5.2f%% ratio, %5.2f%% saved. ", - (double)out_size / in_size, - ( 100.0 * in_size ) / out_size, - 100.0 - ( ( 100.0 * in_size ) / out_size ) ); - if( verbosity >= 3 ) - std::fprintf( stderr, "decompressed %9llu, compressed %8llu. ", - out_size, in_size ); - } - if( verbosity >= 1 ) std::fputs( (outfd < 0) ? "ok\n" : "done\n", stderr ); + show_results( in_size, out_size, splitter_arg.dictionary_size, outfd < 0 ); if( debug_level & 1 ) + { std::fprintf( stderr, + "workers started %8u\n" "any worker tried to consume from splitter %8u times\n" - "any worker had to wait %8u times\n" - "muxer tried to consume from workers %8u times\n" - "muxer had to wait %8u times\n", - courier.icheck_counter, - courier.iwait_counter, - courier.ocheck_counter, - courier.owait_counter ); + "any worker had to wait %8u times\n", + splitter_arg.num_workers, + courier.icheck_counter, courier.iwait_counter ); + if( outfd >= 0 ) + std::fprintf( stderr, + "muxer tried to consume from workers %8u times\n" + "muxer had to wait %8u times\n", + courier.ocheck_counter, courier.owait_counter ); + } if( !courier.finished() ) internal_error( "courier not finished." ); return 0; diff --git a/decompress.cc b/decompress.cc index 19cb1df..6765582 100644 --- a/decompress.cc +++ b/decompress.cc @@ -1,19 +1,19 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009 Laszlo Ersek. 
- Copyright (C) 2009-2019 Antonio Diaz Diaz. - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see . +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009 Laszlo Ersek. + Copyright (C) 2009-2021 Antonio Diaz Diaz. + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . */ #define _FILE_OFFSET_BITS 64 @@ -27,7 +27,6 @@ #include #include #include -#include #include #include #include @@ -37,8 +36,9 @@ #include "lzip_index.h" -// This code is based on a patch by Hannes Domani, ssbssa@yahoo.de -// to be able to compile plzip under MS Windows (with MINGW compiler). +/* This code is based on a patch by Hannes Domani, to make + possible compiling plzip under MS Windows (with MINGW compiler). +*/ #if defined(__MSVCRT__) && defined(WITH_MINGW) #include #warning "Parallel I/O is not guaranteed to work on Windows." 
@@ -76,9 +76,9 @@ ssize_t pwrite( int fd, const void *buf, size_t count, uint64_t offset ) #endif // __MSVCRT__ -// Returns the number of bytes really read. -// If (returned value < size) and (errno == 0), means EOF was reached. -// +/* Returns the number of bytes really read. + If (returned value < size) and (errno == 0), means EOF was reached. +*/ int preadblock( const int fd, uint8_t * const buf, const int size, const long long pos ) { @@ -96,9 +96,9 @@ int preadblock( const int fd, uint8_t * const buf, const int size, } -// Returns the number of bytes really written. -// If (returned value < size), it is always an error. -// +/* Returns the number of bytes really written. + If (returned value < size), it is always an error. +*/ int pwriteblock( const int fd, const uint8_t * const buf, const int size, const long long pos ) { @@ -115,18 +115,39 @@ int pwriteblock( const int fd, const uint8_t * const buf, const int size, } -int decompress_read_error( struct LZ_Decoder * const decoder, - const Pretty_print & pp, const int worker_id ) +void decompress_error( struct LZ_Decoder * const decoder, + const Pretty_print & pp, + Shared_retval & shared_retval, const int worker_id ) { const LZ_Errno errcode = LZ_decompress_errno( decoder ); + const int retval = ( errcode == LZ_header_error || errcode == LZ_data_error || + errcode == LZ_unexpected_eof ) ? 
2 : 1; + if( !shared_retval.set_value( retval ) ) return; pp(); if( verbosity >= 0 ) - std::fprintf( stderr, "LZ_decompress_read error in worker %d: %s\n", - worker_id, LZ_strerror( errcode ) ); - if( errcode == LZ_header_error || errcode == LZ_unexpected_eof || - errcode == LZ_data_error ) - return 2; - return 1; + std::fprintf( stderr, "%s in worker %d\n", LZ_strerror( errcode ), + worker_id ); + } + + +void show_results( const unsigned long long in_size, + const unsigned long long out_size, + const unsigned dictionary_size, const bool testing ) + { + if( verbosity >= 2 ) + { + if( verbosity >= 4 ) show_header( dictionary_size ); + if( out_size == 0 || in_size == 0 ) + std::fputs( "no data compressed. ", stderr ); + else + std::fprintf( stderr, "%6.3f:1, %5.2f%% ratio, %5.2f%% saved. ", + (double)out_size / in_size, + ( 100.0 * in_size ) / out_size, + 100.0 - ( ( 100.0 * in_size ) / out_size ) ); + if( verbosity >= 3 ) + std::fprintf( stderr, "%9llu out, %8llu in. ", out_size, in_size ); + } + if( verbosity >= 1 ) std::fputs( testing ? "ok\n" : "done\n", stderr ); } @@ -136,32 +157,38 @@ struct Worker_arg { const Lzip_index * lzip_index; const Pretty_print * pp; + Shared_retval * shared_retval; int worker_id; int num_workers; int infd; int outfd; + bool nocopy; // avoid copying decompressed data when testing }; - // read members from file, decompress their contents, and - // write the produced data to file. +/* Read members from input file, decompress their contents, and write to + output file the data produced. 
+*/ extern "C" void * dworker( void * arg ) { const Worker_arg & tmp = *(const Worker_arg *)arg; const Lzip_index & lzip_index = *tmp.lzip_index; const Pretty_print & pp = *tmp.pp; + Shared_retval & shared_retval = *tmp.shared_retval; const int worker_id = tmp.worker_id; const int num_workers = tmp.num_workers; const int infd = tmp.infd; const int outfd = tmp.outfd; + const bool nocopy = tmp.nocopy; const int buffer_size = 65536; uint8_t * const ibuffer = new( std::nothrow ) uint8_t[buffer_size]; - uint8_t * const obuffer = new( std::nothrow ) uint8_t[buffer_size]; + uint8_t * const obuffer = + nocopy ? 0 : new( std::nothrow ) uint8_t[buffer_size]; LZ_Decoder * const decoder = LZ_decompress_open(); - if( !ibuffer || !obuffer || !decoder || + if( !ibuffer || ( !nocopy && !obuffer ) || !decoder || LZ_decompress_errno( decoder ) != LZ_ok ) - { pp( "Not enough memory." ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) { pp( mem_msg ); } goto done; } for( long i = worker_id; i < lzip_index.members(); i += num_workers ) { @@ -172,6 +199,7 @@ extern "C" void * dworker( void * arg ) while( member_rest > 0 ) { + if( shared_retval() ) goto done; // other worker found a problem while( LZ_decompress_write_size( decoder ) > 0 ) { const int size = std::min( LZ_decompress_write_size( decoder ), @@ -179,7 +207,8 @@ extern "C" void * dworker( void * arg ) if( size > 0 ) { if( preadblock( infd, ibuffer, size, member_pos ) != size ) - { pp(); show_error( "Read error", errno ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { pp(); show_error( "Read error", errno ); } goto done; } member_pos += size; member_rest -= size; if( LZ_decompress_write( decoder, ibuffer, size ) != size ) @@ -191,17 +220,18 @@ extern "C" void * dworker( void * arg ) { const int rd = LZ_decompress_read( decoder, obuffer, buffer_size ); if( rd < 0 ) - cleanup_and_fail( decompress_read_error( decoder, pp, worker_id ) ); + { decompress_error( decoder, pp, shared_retval, worker_id ); + 
goto done; } if( rd > 0 && outfd >= 0 ) { const int wr = pwriteblock( outfd, obuffer, rd, data_pos ); if( wr != rd ) { - pp(); - if( verbosity >= 0 ) - std::fprintf( stderr, "Write error in worker %d: %s\n", - worker_id, std::strerror( errno ) ); - cleanup_and_fail(); + if( shared_retval.set_value( 1 ) ) { pp(); + if( verbosity >= 0 ) + std::fprintf( stderr, "Write error in worker %d: %s\n", + worker_id, std::strerror( errno ) ); } + goto done; } } if( rd > 0 ) @@ -221,98 +251,114 @@ extern "C" void * dworker( void * arg ) } show_progress( lzip_index.mblock( i ).size() ); } - - delete[] obuffer; delete[] ibuffer; - if( LZ_decompress_member_position( decoder ) != 0 ) - { pp( "Error, some data remains in decoder." ); cleanup_and_fail(); } - if( LZ_decompress_close( decoder ) < 0 ) - { pp( "LZ_decompress_close failed." ); cleanup_and_fail(); } +done: + if( obuffer ) { delete[] obuffer; } delete[] ibuffer; + if( LZ_decompress_member_position( decoder ) != 0 && + shared_retval.set_value( 1 ) ) + pp( "Error, some data remains in decoder." ); + if( LZ_decompress_close( decoder ) < 0 && shared_retval.set_value( 1 ) ) + pp( "LZ_decompress_close failed." ); return 0; } } // end namespace - // start the workers and wait for them to finish. +// start the workers and wait for them to finish. 
int decompress( const unsigned long long cfile_size, int num_workers, const int infd, const int outfd, const Pretty_print & pp, const int debug_level, const int in_slots, const int out_slots, const bool ignore_trailing, - const bool loose_trailing, const bool infd_isreg ) + const bool loose_trailing, const bool infd_isreg, + const bool one_to_one ) { if( !infd_isreg ) return dec_stream( cfile_size, num_workers, infd, outfd, pp, debug_level, in_slots, out_slots, ignore_trailing, loose_trailing ); const Lzip_index lzip_index( infd, ignore_trailing, loose_trailing ); - if( lzip_index.retval() == 1 ) + if( lzip_index.retval() == 1 ) // decompress as stream if seek fails { lseek( infd, 0, SEEK_SET ); return dec_stream( cfile_size, num_workers, infd, outfd, pp, debug_level, in_slots, out_slots, ignore_trailing, loose_trailing ); } - if( lzip_index.retval() != 0 ) - { show_file_error( pp.name(), lzip_index.error().c_str() ); - return lzip_index.retval(); } + if( lzip_index.retval() != 0 ) // corrupt or invalid input file + { + if( lzip_index.bad_magic() ) + show_file_error( pp.name(), lzip_index.error().c_str() ); + else pp( lzip_index.error().c_str() ); + return lzip_index.retval(); + } - if( num_workers > lzip_index.members() ) - num_workers = lzip_index.members(); - if( verbosity >= 1 ) pp(); - show_progress( 0, cfile_size, &pp ); // init + if( num_workers > lzip_index.members() ) num_workers = lzip_index.members(); if( outfd >= 0 ) { struct stat st; - if( fstat( outfd, &st ) != 0 || !S_ISREG( st.st_mode ) || + if( !one_to_one || fstat( outfd, &st ) != 0 || !S_ISREG( st.st_mode ) || lseek( outfd, 0, SEEK_CUR ) < 0 ) - return dec_stdout( num_workers, infd, outfd, pp, debug_level, out_slots, - lzip_index ); + { + if( debug_level & 2 ) std::fputs( "decompress file to stdout.\n", stderr ); + if( verbosity >= 1 ) pp(); + show_progress( 0, cfile_size, &pp ); // init + return dec_stdout( num_workers, infd, outfd, pp, debug_level, out_slots, + lzip_index ); + } } + if( 
debug_level & 2 ) std::fputs( "decompress file to file.\n", stderr ); + if( verbosity >= 1 ) pp(); + show_progress( 0, cfile_size, &pp ); // init + Worker_arg * worker_args = new( std::nothrow ) Worker_arg[num_workers]; pthread_t * worker_threads = new( std::nothrow ) pthread_t[num_workers]; if( !worker_args || !worker_threads ) - { pp( "Not enough memory." ); cleanup_and_fail(); } - for( int i = 0; i < num_workers; ++i ) + { pp( mem_msg ); delete[] worker_threads; delete[] worker_args; return 1; } + +#if defined LZ_API_VERSION && LZ_API_VERSION >= 1012 + const bool nocopy = ( outfd < 0 && LZ_api_version() >= 1012 ); +#else + const bool nocopy = false; +#endif + + Shared_retval shared_retval; + int i = 0; // number of workers started + for( ; i < num_workers; ++i ) { worker_args[i].lzip_index = &lzip_index; worker_args[i].pp = &pp; + worker_args[i].shared_retval = &shared_retval; worker_args[i].worker_id = i; worker_args[i].num_workers = num_workers; worker_args[i].infd = infd; worker_args[i].outfd = outfd; + worker_args[i].nocopy = nocopy; const int errcode = pthread_create( &worker_threads[i], 0, dworker, &worker_args[i] ); if( errcode ) - { show_error( "Can't create worker threads", errcode ); cleanup_and_fail(); } + { if( shared_retval.set_value( 1 ) ) + { show_error( "Can't create worker threads", errcode ); } break; } } - for( int i = num_workers - 1; i >= 0; --i ) + while( --i >= 0 ) { const int errcode = pthread_join( worker_threads[i], 0 ); - if( errcode ) - { show_error( "Can't join worker threads", errcode ); cleanup_and_fail(); } + if( errcode && shared_retval.set_value( 1 ) ) + show_error( "Can't join worker threads", errcode ); } delete[] worker_threads; delete[] worker_args; - if( verbosity >= 2 ) - { - if( verbosity >= 4 ) show_header( lzip_index.dictionary_size( 0 ) ); - const unsigned long long in_size = lzip_index.cdata_size(); - const unsigned long long out_size = lzip_index.udata_size(); - if( out_size == 0 || in_size == 0 ) - std::fputs( "no 
data compressed. ", stderr ); - else - std::fprintf( stderr, "%6.3f:1, %5.2f%% ratio, %5.2f%% saved. ", - (double)out_size / in_size, - ( 100.0 * in_size ) / out_size, - 100.0 - ( ( 100.0 * in_size ) / out_size ) ); - if( verbosity >= 3 ) - std::fprintf( stderr, "decompressed %9llu, compressed %8llu. ", - out_size, in_size ); - } - if( verbosity >= 1 ) std::fputs( (outfd < 0) ? "ok\n" : "done\n", stderr ); + if( shared_retval() ) return shared_retval(); // some thread found a problem + + if( verbosity >= 1 ) + show_results( lzip_index.cdata_size(), lzip_index.udata_size(), + lzip_index.dictionary_size(), outfd < 0 ); + + if( debug_level & 1 ) + std::fprintf( stderr, + "workers started %8u\n", num_workers ); return 0; } diff --git a/doc/plzip.1 b/doc/plzip.1 index 694a99d..deb0ea5 100644 --- a/doc/plzip.1 +++ b/doc/plzip.1 @@ -1,5 +1,5 @@ -.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.46.1. -.TH PLZIP "1" "January 2019" "plzip 1.8" "User Commands" +.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.47.16. +.TH PLZIP "1" "January 2021" "plzip 1.9" "User Commands" .SH NAME plzip \- reduces the size of files .SH SYNOPSIS @@ -7,22 +7,24 @@ plzip \- reduces the size of files [\fI\,options\/\fR] [\fI\,files\/\fR] .SH DESCRIPTION Plzip is a massively parallel (multi\-threaded) implementation of lzip, fully -compatible with lzip 1.4 or newer. Plzip uses the lzlib compression library. +compatible with lzip 1.4 or newer. Plzip uses the compression library lzlib. .PP -Lzip is a lossless data compressor with a user interface similar to the -one of gzip or bzip2. Lzip can compress about as fast as gzip (lzip \fB\-0\fR) -or compress most files more than bzip2 (lzip \fB\-9\fR). Decompression speed is -intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 -from a data recovery perspective. 
Lzip has been designed, written and -tested with great care to replace gzip and bzip2 as the standard -general\-purpose compressed format for unix\-like systems. +Lzip is a lossless data compressor with a user interface similar to the one +of gzip or bzip2. Lzip uses a simplified form of the 'Lempel\-Ziv\-Markov +chain\-Algorithm' (LZMA) stream format, chosen to maximize safety and +interoperability. Lzip can compress about as fast as gzip (lzip \fB\-0\fR) or +compress most files more than bzip2 (lzip \fB\-9\fR). Decompression speed is +intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 from +a data recovery perspective. Lzip has been designed, written, and tested +with great care to replace gzip and bzip2 as the standard general\-purpose +compressed format for unix\-like systems. .PP -Plzip can compress/decompress large files on multiprocessor machines -much faster than lzip, at the cost of a slightly reduced compression -ratio (0.4 to 2 percent larger compressed files). Note that the number -of usable threads is limited by file size; on files larger than a few GB -plzip can use hundreds of processors, but on files of only a few MB -plzip is no faster than lzip. +Plzip can compress/decompress large files on multiprocessor machines much +faster than lzip, at the cost of a slightly reduced compression ratio (0.4 +to 2 percent larger compressed files). Note that the number of usable +threads is limited by file size; on files larger than a few GB plzip can use +hundreds of processors, but on files of only a few MB plzip is no faster +than lzip. 
.SH OPTIONS .TP \fB\-h\fR, \fB\-\-help\fR @@ -62,7 +64,7 @@ set match length limit in bytes [36] set number of (de)compression threads [2] .TP \fB\-o\fR, \fB\-\-output=\fR -if reading standard input, write to +write to , keep input files .TP \fB\-q\fR, \fB\-\-quiet\fR suppress all messages @@ -93,6 +95,9 @@ number of 1 MiB input packets buffered [4] .TP \fB\-\-out\-slots=\fR number of 1 MiB output packets buffered [64] +.TP +\fB\-\-check\-lib\fR +compare version of lzlib.h with liblz.{a,so} .PP If no file names are given, or if a file is '\-', plzip compresses or decompresses from standard input to standard output. @@ -103,8 +108,11 @@ to 2^29 bytes. .PP The bidimensional parameter space of LZMA can't be mapped to a linear scale optimal for all files. If your files are large, very repetitive, -etc, you may need to use the \fB\-\-dictionary\-size\fR and \fB\-\-match\-length\fR -options directly to achieve optimal performance. +etc, you may need to use the options \fB\-\-dictionary\-size\fR and \fB\-\-match\-length\fR +directly to achieve optimal performance. +.PP +To extract all the files from archive 'foo.tar.lz', use the commands +\&'tar \fB\-xf\fR foo.tar.lz' or 'plzip \fB\-cd\fR foo.tar.lz | tar \fB\-xf\fR \-'. .PP Exit status: 0 for a normal exit, 1 for environmental problems (file not found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or @@ -117,8 +125,8 @@ Plzip home page: http://www.nongnu.org/lzip/plzip.html .SH COPYRIGHT Copyright \(co 2009 Laszlo Ersek. .br -Copyright \(co 2019 Antonio Diaz Diaz. -Using lzlib 1.11 +Copyright \(co 2021 Antonio Diaz Diaz. +Using lzlib 1.12 License GPLv2+: GNU GPL version 2 or later .br This is free software: you are free to change and redistribute it. 
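Several hunks in the decompress.cc diff above replace `cleanup_and_fail()` calls with a `Shared_retval` object: the first worker thread to hit a problem records the exit status and prints its diagnostic, later `set_value()` calls return false, and other workers poll `shared_retval()` to stop early. A minimal illustrative sketch of that idea follows — the real class is presumably defined in lzip.h (changed by this patch but not shown in this excerpt), so the member names here are assumptions:

```cpp
#include <cassert>
#include <pthread.h>

/* Sketch of the Shared_retval idea used in the decompress.cc hunks above:
   only the first error wins, so only one diagnostic is shown and every
   worker agrees on the exit status. Illustrative reimplementation. */
class Shared_retval_sketch
  {
  int retval_;
  pthread_mutex_t mutex;

public:
  Shared_retval_sketch() : retval_( 0 ) { pthread_mutex_init( &mutex, 0 ); }

  bool set_value( const int retval )    // true only for the first caller
    {
    pthread_mutex_lock( &mutex );
    const bool first = ( retval_ == 0 );
    if( first ) retval_ = retval;
    pthread_mutex_unlock( &mutex );
    return first;
    }

  /* Unlocked read; mirrors the diff's cheap polling check
     'if( shared_retval() ) goto done;' in the worker loop. */
  int operator()() const { return retval_; }
  };
```

This is why the patched workers can say `if( shared_retval.set_value( 1 ) ) { pp(); show_error(...); }` and then `goto done`: the error message is printed at most once, regardless of how many threads fail.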
diff --git a/doc/plzip.info b/doc/plzip.info index 2b7aa52..d70163e 100644 --- a/doc/plzip.info +++ b/doc/plzip.info @@ -11,7 +11,7 @@ File: plzip.info, Node: Top, Next: Introduction, Up: (dir) Plzip Manual ************ -This manual is for Plzip (version 1.8, 5 January 2019). +This manual is for Plzip (version 1.9, 3 January 2021). * Menu: @@ -28,10 +28,10 @@ This manual is for Plzip (version 1.8, 5 January 2019). * Concept index:: Index of concepts - Copyright (C) 2009-2019 Antonio Diaz Diaz. + Copyright (C) 2009-2021 Antonio Diaz Diaz. - This manual is free documentation: you have unlimited permission to -copy, distribute and modify it. + This manual is free documentation: you have unlimited permission to copy, +distribute, and modify it.  File: plzip.info, Node: Introduction, Next: Output, Prev: Top, Up: Top @@ -39,88 +39,89 @@ File: plzip.info, Node: Introduction, Next: Output, Prev: Top, Up: Top 1 Introduction ************** -Plzip is a massively parallel (multi-threaded) implementation of lzip, -fully compatible with lzip 1.4 or newer. Plzip uses the lzlib -compression library. - - Lzip is a lossless data compressor with a user interface similar to -the one of gzip or bzip2. Lzip can compress about as fast as gzip -(lzip -0) or compress most files more than bzip2 (lzip -9). -Decompression speed is intermediate between gzip and bzip2. Lzip is -better than gzip and bzip2 from a data recovery perspective. Lzip has -been designed, written and tested with great care to replace gzip and -bzip2 as the standard general-purpose compressed format for unix-like -systems. - - Plzip can compress/decompress large files on multiprocessor machines -much faster than lzip, at the cost of a slightly reduced compression -ratio (0.4 to 2 percent larger compressed files). Note that the number -of usable threads is limited by file size; on files larger than a few GB -plzip can use hundreds of processors, but on files of only a few MB -plzip is no faster than lzip. 
*Note Minimum file sizes::. +Plzip is a massively parallel (multi-threaded) implementation of lzip, fully +compatible with lzip 1.4 or newer. Plzip uses the compression library lzlib. + + Lzip is a lossless data compressor with a user interface similar to the +one of gzip or bzip2. Lzip uses a simplified form of the 'Lempel-Ziv-Markov +chain-Algorithm' (LZMA) stream format, chosen to maximize safety and +interoperability. Lzip can compress about as fast as gzip (lzip -0) or +compress most files more than bzip2 (lzip -9). Decompression speed is +intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 from +a data recovery perspective. Lzip has been designed, written, and tested +with great care to replace gzip and bzip2 as the standard general-purpose +compressed format for unix-like systems. + + Plzip can compress/decompress large files on multiprocessor machines much +faster than lzip, at the cost of a slightly reduced compression ratio (0.4 +to 2 percent larger compressed files). Note that the number of usable +threads is limited by file size; on files larger than a few GB plzip can use +hundreds of processors, but on files of only a few MB plzip is no faster +than lzip. *Note Minimum file sizes::. + + For creation and manipulation of compressed tar archives tarlz can be +more efficient than using tar and plzip because tarlz is able to keep the +alignment between tar members and lzip members. *Note tarlz manual: +(tarlz)Top. The lzip file format is designed for data sharing and long-term -archiving, taking into account both data integrity and decoder -availability: +archiving, taking into account both data integrity and decoder availability: * The lzip format provides very safe integrity checking and some data - recovery means. The lziprecover program can repair bit flip errors - (one of the most common forms of data corruption) in lzip files, - and provides data recovery capabilities, including error-checked - merging of damaged copies of a file. 
*Note Data safety: - (lziprecover)Data safety. - - * The lzip format is as simple as possible (but not simpler). The - lzip manual provides the source code of a simple decompressor - along with a detailed explanation of how it works, so that with - the only help of the lzip manual it would be possible for a - digital archaeologist to extract the data from a lzip file long - after quantum computers eventually render LZMA obsolete. + recovery means. The program lziprecover can repair bit flip errors + (one of the most common forms of data corruption) in lzip files, and + provides data recovery capabilities, including error-checked merging + of damaged copies of a file. *Note Data safety: (lziprecover)Data + safety. + + * The lzip format is as simple as possible (but not simpler). The lzip + manual provides the source code of a simple decompressor along with a + detailed explanation of how it works, so that with the only help of the + lzip manual it would be possible for a digital archaeologist to extract + the data from a lzip file long after quantum computers eventually + render LZMA obsolete. * Additionally the lzip reference implementation is copylefted, which guarantees that it will remain free forever. A nice feature of the lzip format is that a corrupt byte is easier to -repair the nearer it is from the beginning of the file. Therefore, with -the help of lziprecover, losing an entire archive just because of a -corrupt byte near the beginning is a thing of the past. +repair the nearer it is from the beginning of the file. Therefore, with the +help of lziprecover, losing an entire archive just because of a corrupt +byte near the beginning is a thing of the past. - Plzip uses the same well-defined exit status values used by lzip, -which makes it safer than compressors returning ambiguous warning -values (like gzip) when it is used as a back end for other programs -like tar or zutils. 
+ Plzip uses the same well-defined exit status values used by lzip, which +makes it safer than compressors returning ambiguous warning values (like +gzip) when it is used as a back end for other programs like tar or zutils. - Plzip will automatically use for each file the largest dictionary -size that does not exceed neither the file size nor the limit given. -Keep in mind that the decompression memory requirement is affected at -compression time by the choice of dictionary size limit. *Note Memory -requirements::. + Plzip will automatically use for each file the largest dictionary size +that does not exceed neither the file size nor the limit given. Keep in +mind that the decompression memory requirement is affected at compression +time by the choice of dictionary size limit. *Note Memory requirements::. When compressing, plzip replaces every file given in the command line -with a compressed version of itself, with the name "original_name.lz". -When decompressing, plzip attempts to guess the name for the -decompressed file from that of the compressed file as follows: +with a compressed version of itself, with the name "original_name.lz". When +decompressing, plzip attempts to guess the name for the decompressed file +from that of the compressed file as follows: filename.lz becomes filename filename.tlz becomes filename.tar anyothername becomes anyothername.out - (De)compressing a file is much like copying or moving it; therefore -plzip preserves the access and modification dates, permissions, and, -when possible, ownership of the file just as 'cp -p' does. (If the user -ID or the group ID can't be duplicated, the file permission bits -S_ISUID and S_ISGID are cleared). + (De)compressing a file is much like copying or moving it; therefore plzip +preserves the access and modification dates, permissions, and, when +possible, ownership of the file just as 'cp -p' does. 
(If the user ID or +the group ID can't be duplicated, the file permission bits S_ISUID and +S_ISGID are cleared). - Plzip is able to read from some types of non regular files if the -'--stdout' option is specified. + Plzip is able to read from some types of non-regular files if either the +option '-c' or the option '-o' is specified. - If no file names are specified, plzip compresses (or decompresses) -from standard input to standard output. In this case, plzip will -decline to write compressed output to a terminal, as this would be -entirely incomprehensible and therefore pointless. + Plzip will refuse to read compressed data from a terminal or write +compressed data to a terminal, as this would be entirely incomprehensible +and might leave the terminal in an abnormal state. - Plzip will correctly decompress a file which is the concatenation of -two or more compressed files. The result is the concatenation of the + Plzip will correctly decompress a file which is the concatenation of two +or more compressed files. The result is the concatenation of the corresponding decompressed files. Integrity testing of concatenated compressed files is also supported. @@ -135,41 +136,40 @@ The output of plzip looks like this: plzip -v foo foo: 6.676:1, 14.98% ratio, 85.02% saved, 450560 in, 67493 out. - plzip -tvv foo.lz - foo.lz: 6.676:1, 14.98% ratio, 85.02% saved. ok + plzip -tvvv foo.lz + foo.lz: 6.676:1, 14.98% ratio, 85.02% saved. 450560 out, 67493 in. ok The meaning of each field is as follows: 'N:1' - The compression ratio (uncompressed_size / compressed_size), shown - as N to 1. + The compression ratio (uncompressed_size / compressed_size), shown as + N to 1. 'ratio' - The inverse compression ratio - (compressed_size / uncompressed_size), shown as a percentage. A - decimal ratio is easily obtained by moving the decimal point two - places to the left; 14.98% = 0.1498. + The inverse compression ratio (compressed_size / uncompressed_size), + shown as a percentage. 
A decimal ratio is easily obtained by moving the + decimal point two places to the left; 14.98% = 0.1498. 'saved' The space saved by compression (1 - ratio), shown as a percentage. 'in' - The size of the uncompressed data. When decompressing or testing, - it is shown as 'decompressed'. Note that plzip always prints the - uncompressed size before the compressed size when compressing, - decompressing, testing or listing. + Size of the input data. This is the uncompressed size when + compressing, or the compressed size when decompressing or testing. + Note that plzip always prints the uncompressed size before the + compressed size when compressing, decompressing, testing, or listing. 'out' - The size of the compressed data. When decompressing or testing, it - is shown as 'compressed'. + Size of the output data. This is the compressed size when compressing, + or the decompressed size when decompressing or testing. When decompressing or testing at verbosity level 4 (-vvvv), the dictionary size used to compress the file is also shown. - LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may -never have been compressed. Decompressed is used to refer to data which -have undergone the process of decompression. + LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never +have been compressed. Decompressed is used to refer to data which have +undergone the process of decompression.  File: plzip.info, Node: Invoking plzip, Next: Program design, Prev: Output, Up: Top @@ -181,11 +181,13 @@ The format for running plzip is: plzip [OPTIONS] [FILES] -'-' used as a FILE argument means standard input. It can be mixed with -other FILES and is read just once, the first time it appears in the -command line. +If no file names are specified, plzip compresses (or decompresses) from +standard input to standard output. A hyphen '-' used as a FILE argument +means standard input. 
It can be mixed with other FILES and is read just +once, the first time it appears in the command line. - plzip supports the following options: + plzip supports the following options: *Note Argument syntax: +(arg_parser)Argument syntax. '-h' '--help' @@ -199,32 +201,33 @@ command line. '-a' '--trailing-error' Exit with error status 2 if any remaining input is detected after - decompressing the last member. Such remaining input is usually - trailing garbage that can be safely ignored. *Note - concat-example::. + decompressing the last member. Such remaining input is usually trailing + garbage that can be safely ignored. *Note concat-example::. '-B BYTES' '--data-size=BYTES' - When compressing, set the size of the input data blocks in bytes. - The input file will be divided in chunks of this size before - compression is performed. Valid values range from 8 KiB to 1 GiB. - Default value is two times the dictionary size, except for option - '-0' where it defaults to 1 MiB. Plzip will reduce the dictionary - size if it is larger than the chosen data size. + When compressing, set the size of the input data blocks in bytes. The + input file will be divided in chunks of this size before compression is + performed. Valid values range from 8 KiB to 1 GiB. Default value is + two times the dictionary size, except for option '-0' where it + defaults to 1 MiB. Plzip will reduce the dictionary size if it is + larger than the data size specified. *Note Minimum file sizes::. '-c' '--stdout' - Compress or decompress to standard output; keep input files - unchanged. If compressing several files, each file is compressed - independently. This option is needed when reading from a named - pipe (fifo) or from a device. + Compress or decompress to standard output; keep input files unchanged. + If compressing several files, each file is compressed independently. + This option (or '-o') is needed when reading from a named pipe (fifo) + or from a device. 
Use 'lziprecover -cd -i' to recover as much of the + decompressed data as possible when decompressing a corrupt file. '-c' + overrides '-o'. '-c' has no effect when testing or listing. '-d' '--decompress' - Decompress the specified files. If a file does not exist or can't - be opened, plzip continues decompressing the rest of the files. If - a file fails to decompress, or is a terminal, plzip exits - immediately without decompressing the rest of the files. + Decompress the files specified. If a file does not exist or can't be + opened, plzip continues decompressing the rest of the files. If a file + fails to decompress, or is a terminal, plzip exits immediately without + decompressing the rest of the files. '-f' '--force' @@ -232,59 +235,69 @@ command line. '-F' '--recompress' - When compressing, force re-compression of files whose name already - has the '.lz' or '.tlz' suffix. + When compressing, force re-compression of files whose name already has + the '.lz' or '.tlz' suffix. '-k' '--keep' - Keep (don't delete) input files during compression or - decompression. + Keep (don't delete) input files during compression or decompression. '-l' '--list' - Print the uncompressed size, compressed size and percentage saved - of the specified files. Trailing data are ignored. The values - produced are correct even for multimember files. If more than one - file is given, a final line containing the cumulative sizes is - printed. With '-v', the dictionary size, the number of members in - the file, and the amount of trailing data (if any) are also - printed. With '-vv', the positions and sizes of each member in - multimember files are also printed. '-lq' can be used to verify - quickly (without decompressing) the structural integrity of the - specified files. (Use '--test' to verify the data integrity). - '-alq' additionally verifies that none of the specified files - contain trailing data. 
+ Print the uncompressed size, compressed size, and percentage saved of + the files specified. Trailing data are ignored. The values produced + are correct even for multimember files. If more than one file is + given, a final line containing the cumulative sizes is printed. With + '-v', the dictionary size, the number of members in the file, and the + amount of trailing data (if any) are also printed. With '-vv', the + positions and sizes of each member in multimember files are also + printed. + + '-lq' can be used to verify quickly (without decompressing) the + structural integrity of the files specified. (Use '--test' to verify + the data integrity). '-alq' additionally verifies that none of the + files specified contain trailing data. '-m BYTES' '--match-length=BYTES' - When compressing, set the match length limit in bytes. After a - match this long is found, the search is finished. Valid values - range from 5 to 273. Larger values usually give better compression - ratios but longer compression times. + When compressing, set the match length limit in bytes. After a match + this long is found, the search is finished. Valid values range from 5 + to 273. Larger values usually give better compression ratios but longer + compression times. '-n N' '--threads=N' - Set the number of worker threads, overriding the system's default. - Valid values range from 1 to "as many as your system can support". - If this option is not used, plzip tries to detect the number of - processors in the system and use it as default value. When - compressing on a 32 bit system, plzip tries to limit the memory - use to under 2.22 GiB (4 worker threads at level -9) by reducing - the number of threads below the system's default. 'plzip --help' - shows the system's default value. - - Note that the number of usable threads is limited to - ceil( file_size / data_size ) during compression (*note Minimum - file sizes::), and to the number of members in the input during - decompression. 
+ Set the maximum number of worker threads, overriding the system's + default. Valid values range from 1 to "as many as your system can + support". If this option is not used, plzip tries to detect the number + of processors in the system and use it as default value. When + compressing on a 32 bit system, plzip tries to limit the memory use to + under 2.22 GiB (4 worker threads at level -9) by reducing the number + of threads below the system's default. 'plzip --help' shows the + system's default value. + + Plzip starts the number of threads required by each file without + exceeding the value specified. Note that the number of usable threads + is limited to ceil( file_size / data_size ) during compression (*note + Minimum file sizes::), and to the number of members in the input + during decompression. You can find the number of members in a lzip + file by running 'plzip -lv file.lz'. '-o FILE' '--output=FILE' - When reading from standard input and '--stdout' has not been - specified, use 'FILE' as the virtual name of the uncompressed - file. This produces a file named 'FILE' when decompressing, or a - file named 'FILE.lz' when compressing. A second '.lz' extension is - not added if 'FILE' already ends in '.lz' or '.tlz'. + If '-c' has not been also specified, write the (de)compressed output to + FILE; keep input files unchanged. If compressing several files, each + file is compressed independently. This option (or '-c') is needed when + reading from a named pipe (fifo) or from a device. '-o -' is + equivalent to '-c'. '-o' has no effect when testing or listing. + + In order to keep backward compatibility with plzip versions prior to + 1.9, when compressing from standard input and no other file names are + given, the extension '.lz' is appended to FILE unless it already ends + in '.lz' or '.tlz'. This feature will be removed in a future version + of plzip. 
Meanwhile, redirection may be used instead of '-o' to write
+ the compressed output to a file without the extension '.lz' in its
+ name: 'plzip < file > foo'.

'-q'
'--quiet'
@@ -292,30 +305,28 @@ command line.

'-s BYTES'
'--dictionary-size=BYTES'
- When compressing, set the dictionary size limit in bytes. Plzip
- will use for each file the largest dictionary size that does not
- exceed neither the file size nor this limit. Valid values range
- from 4 KiB to 512 MiB. Values 12 to 29 are interpreted as powers
- of two, meaning 2^12 to 2^29 bytes. Dictionary sizes are quantized
- so that they can be coded in just one byte (*note
- coded-dict-size::). If the specified size does not match one of
- the valid sizes, it will be rounded upwards by adding up to
- (BYTES / 8) to it.
-
- For maximum compression you should use a dictionary size limit as
- large as possible, but keep in mind that the decompression memory
- requirement is affected at compression time by the choice of
- dictionary size limit.
+ When compressing, set the dictionary size limit in bytes. Plzip will
+ use for each file the largest dictionary size that exceeds neither
+ the file size nor this limit. Valid values range from 4 KiB to
+ 512 MiB. Values 12 to 29 are interpreted as powers of two, meaning
+ 2^12 to 2^29 bytes. Dictionary sizes are quantized so that they can be
+ coded in just one byte (*note coded-dict-size::). If the size specified
+ does not match one of the valid sizes, it will be rounded upwards by
+ adding up to (BYTES / 8) to it.
+
+ For maximum compression you should use a dictionary size limit as large
+ as possible, but keep in mind that the decompression memory requirement
+ is affected at compression time by the choice of dictionary size limit.

'-t'
'--test'
- Check integrity of the specified files, but don't decompress them.
- This really performs a trial decompression and throws away the
- result. Use it together with '-v' to see information about the
- files.
If a file does not exist, can't be opened, or is a - terminal, plzip continues checking the rest of the files. If a - file fails the test, plzip may be unable to check the rest of the - files. + Check integrity of the files specified, but don't decompress them. This + really performs a trial decompression and throws away the result. Use + it together with '-v' to see information about the files. If a file + fails the test, does not exist, can't be opened, or is a terminal, + plzip continues checking the rest of the files. A final diagnostic is + shown at verbosity level 1 or higher if any file fails the test when + testing multiple files. '-v' '--verbose' @@ -323,26 +334,26 @@ command line. When compressing, show the compression ratio and size for each file processed. When decompressing or testing, further -v's (up to 4) increase the - verbosity level, showing status, compression ratio, dictionary - size, decompressed size, and compressed size. - Two or more '-v' options show the progress of (de)compression, - except for single-member files. + verbosity level, showing status, compression ratio, dictionary size, + decompressed size, and compressed size. + Two or more '-v' options show the progress of (de)compression, except + for single-member files. '-0 .. -9' - Compression level. Set the compression parameters (dictionary size - and match length limit) as shown in the table below. The default - compression level is '-6', equivalent to '-s8MiB -m36'. Note that - '-9' can be much slower than '-0'. These options have no effect - when decompressing, testing or listing. + Compression level. Set the compression parameters (dictionary size and + match length limit) as shown in the table below. The default + compression level is '-6', equivalent to '-s8MiB -m36'. Note that '-9' + can be much slower than '-0'. These options have no effect when + decompressing, testing, or listing. 
- The bidimensional parameter space of LZMA can't be mapped to a - linear scale optimal for all files. If your files are large, very - repetitive, etc, you may need to use the '--dictionary-size' and - '--match-length' options directly to achieve optimal performance. + The bidimensional parameter space of LZMA can't be mapped to a linear + scale optimal for all files. If your files are large, very repetitive, + etc, you may need to use the options '--dictionary-size' and + '--match-length' directly to achieve optimal performance. - If several compression levels or '-s' or '-m' options are given, - the last setting is used. For example '-9 -s64MiB' is equivalent - to '-s64MiB -m273' + If several compression levels or '-s' or '-m' options are given, the + last setting is used. For example '-9 -s64MiB' is equivalent to + '-s64MiB -m273' Level Dictionary size (-s) Match length limit (-m) -0 64 KiB 16 bytes @@ -361,23 +372,33 @@ command line. Aliases for GNU gzip compatibility. '--loose-trailing' - When decompressing, testing or listing, allow trailing data whose - first bytes are so similar to the magic bytes of a lzip header - that they can be confused with a corrupt header. Use this option - if a file triggers a "corrupt header" error and the cause is not - indeed a corrupt header. + When decompressing, testing, or listing, allow trailing data whose + first bytes are so similar to the magic bytes of a lzip header that + they can be confused with a corrupt header. Use this option if a file + triggers a "corrupt header" error and the cause is not indeed a + corrupt header. '--in-slots=N' Number of 1 MiB input packets buffered per worker thread when - decompressing from non-seekable input. Increasing the number of - packets may increase decompression speed, but requires more - memory. Valid values range from 1 to 64. The default value is 4. + decompressing from non-seekable input. 
Increasing the number of packets + may increase decompression speed, but requires more memory. Valid + values range from 1 to 64. The default value is 4. '--out-slots=N' Number of 1 MiB output packets buffered per worker thread when - decompressing to non-seekable output. Increasing the number of - packets may increase decompression speed, but requires more - memory. Valid values range from 1 to 1024. The default value is 64. + decompressing to non-seekable output. Increasing the number of packets + may increase decompression speed, but requires more memory. Valid + values range from 1 to 1024. The default value is 64. + +'--check-lib' + Compare the version of lzlib used to compile plzip with the version + actually being used at run time and exit. Report any differences + found. Exit with error status 1 if differences are found. A mismatch + may indicate that lzlib is not correctly installed or that a different + version of lzlib has been installed after compiling plzip. + 'plzip -v --check-lib' shows the version of lzlib being used and the + value of 'LZ_API_VERSION' (if defined). *Note Library version: + (lzlib)Library version. Numbers given as arguments to options may be followed by a multiplier @@ -396,36 +417,36 @@ Z zettabyte (10^21) | Zi zebibyte (2^70) Y yottabyte (10^24) | Yi yobibyte (2^80) - Exit status: 0 for a normal exit, 1 for environmental problems (file -not found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or -invalid input file, 3 for an internal consistency error (eg, bug) which -caused plzip to panic. + Exit status: 0 for a normal exit, 1 for environmental problems (file not +found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or invalid +input file, 3 for an internal consistency error (eg, bug) which caused +plzip to panic.  
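The multiplier table above maps each suffix to a power of 10^3 (SI) or 2^10 (binary). The sketch below decodes the documented suffixes; it is illustrative only, not plzip's actual argument parser:

```python
import re

# Multipliers from the table above: 'k' = 10^3, 'Ki' = 2^10, 'M' = 10^6,
# 'Mi' = 2^20, and so on up to 'Y' = 10^24 and 'Yi' = 2^80.
SI = {s: 10 ** (3 * (i + 1)) for i, s in enumerate("kMGTPEZY")}
IEC = {s + "i": 2 ** (10 * (i + 1)) for i, s in enumerate("KMGTPEZY")}

def to_bytes(arg: str) -> int:
    """Convert an option argument such as '8MiB' or '2k' to bytes."""
    m = re.fullmatch(r"(\d+)([A-Za-z]*?)B?", arg)
    if not m:
        raise ValueError("bad size: " + arg)
    num, suf = m.groups()
    if suf == "":
        return int(num)
    for table in (IEC, SI):          # try binary suffixes first
        if suf in table:
            return int(num) * table[suf]
    raise ValueError("unknown multiplier: " + suf)

print(to_bytes("8MiB"))    # 8388608, the dictionary size of level -6
print(to_bytes("512KiB"))  # 524288
print(to_bytes("2k"))      # 2000
```

Note the case distinction kept from the table: decimal 'k' is lower case, while binary 'Ki' is capitalized.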
File: plzip.info, Node: Program design, Next: File format, Prev: Invoking plzip, Up: Top -4 Program design -**************** +4 Internal structure of plzip +***************************** -When compressing, plzip divides the input file into chunks and -compresses as many chunks simultaneously as worker threads are chosen, -creating a multimember compressed file. +When compressing, plzip divides the input file into chunks and compresses as +many chunks simultaneously as worker threads are chosen, creating a +multimember compressed file. - When decompressing, plzip decompresses as many members -simultaneously as worker threads are chosen. Files that were compressed -with lzip will not be decompressed faster than using lzip (unless the -'-b' option was used) because lzip usually produces single-member -files, which can't be decompressed in parallel. + When decompressing, plzip decompresses as many members simultaneously as +worker threads are chosen. Files that were compressed with lzip will not be +decompressed faster than using lzip (unless the option '-b' was used) +because lzip usually produces single-member files, which can't be +decompressed in parallel. For each input file, a splitter thread and several worker threads are created, acting the main thread as muxer (multiplexer) thread. A "packet -courier" takes care of data transfers among threads and limits the -maximum number of data blocks (packets) being processed simultaneously. +courier" takes care of data transfers among threads and limits the maximum +number of data blocks (packets) being processed simultaneously. - The splitter reads data blocks from the input file, and distributes -them to the workers. The workers (de)compress the blocks received from -the splitter. The muxer collects processed packets from the workers, and -writes them to the output file. + The splitter reads data blocks from the input file, and distributes them +to the workers. 
The workers (de)compress the blocks received from the +splitter. The muxer collects processed packets from the workers, and writes +them to the output file. ,------------, ,-->| worker 0 |--, @@ -438,13 +459,12 @@ writes them to the output file. `-->| worker N-1 |--' `------------' - When decompressing from a regular file, the splitter is removed and -the workers read directly from the input file. If the output file is -also a regular file, the muxer is also removed and the workers write -directly to the output file. With these optimizations, the use of RAM -is greatly reduced and the decompression speed of large files with many -members is only limited by the number of processors available and by -I/O speed. + When decompressing from a regular file, the splitter is removed and the +workers read directly from the input file. If the output file is also a +regular file, the muxer is also removed and the workers write directly to +the output file. With these optimizations, the use of RAM is greatly +reduced and the decompression speed of large files with many members is +only limited by the number of processors available and by I/O speed.  File: plzip.info, Node: File format, Next: Memory requirements, Prev: Program design, Up: Top @@ -458,11 +478,13 @@ when there is no longer anything to take away. In the diagram below, a box like this: + +---+ | | <-- the vertical bars might be missing +---+ represents one byte; a box like this: + +==============+ | | +==============+ @@ -471,10 +493,11 @@ when there is no longer anything to take away. A lzip file consists of a series of "members" (compressed data sets). -The members simply appear one after another in the file, with no -additional information before, between, or after them. +The members simply appear one after another in the file, with no additional +information before, between, or after them. 
Each member has the following structure: + +--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | ID string | VN | DS | LZMA stream | CRC32 | Data size | Member size | +--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ @@ -482,17 +505,16 @@ additional information before, between, or after them. All multibyte values are stored in little endian order. 'ID string (the "magic" bytes)' - A four byte string, identifying the lzip format, with the value - "LZIP" (0x4C, 0x5A, 0x49, 0x50). + A four byte string, identifying the lzip format, with the value "LZIP" + (0x4C, 0x5A, 0x49, 0x50). 'VN (version number, 1 byte)' - Just in case something needs to be modified in the future. 1 for - now. + Just in case something needs to be modified in the future. 1 for now. 'DS (coded dictionary size, 1 byte)' The dictionary size is calculated by taking a power of 2 (the base - size) and subtracting from it a fraction between 0/16 and 7/16 of - the base size. + size) and subtracting from it a fraction between 0/16 and 7/16 of the + base size. Bits 4-0 contain the base 2 logarithm of the base size (12 to 29). Bits 7-5 contain the numerator of the fraction (0 to 7) to subtract from the base size to obtain the dictionary size. @@ -501,20 +523,20 @@ additional information before, between, or after them. 'LZMA stream' The LZMA stream, finished by an end of stream marker. Uses default - values for encoder properties. *Note Stream format: (lzip)Stream + values for encoder properties. *Note Stream format: (lzip)Stream format, for a complete description. 'CRC32 (4 bytes)' - CRC of the uncompressed original data. + Cyclic Redundancy Check (CRC) of the uncompressed original data. 'Data size (8 bytes)' Size of the uncompressed original data. 'Member size (8 bytes)' - Total size of the member, including header and trailer. 
This field - acts as a distributed index, allows the verification of stream - integrity, and facilitates safe recovery of undamaged members from - multimember files. + Total size of the member, including header and trailer. This field acts + as a distributed index, allows the verification of stream integrity, + and facilitates safe recovery of undamaged members from multimember + files.  @@ -526,20 +548,20 @@ File: plzip.info, Node: Memory requirements, Next: Minimum file sizes, Prev: The amount of memory required *per worker thread* for decompression or testing is approximately the following: - * For decompression of a regular (seekable) file to another regular - file, or for testing of a regular file; the dictionary size. + * For decompression of a regular (seekable) file to another regular file, + or for testing of a regular file; the dictionary size. - * For testing of a non-seekable file or of standard input; the - dictionary size plus 1 MiB plus up to the number of 1 MiB input - packets buffered (4 by default). + * For testing of a non-seekable file or of standard input; the dictionary + size plus 1 MiB plus up to the number of 1 MiB input packets buffered + (4 by default). * For decompression of a regular file to a non-seekable file or to standard output; the dictionary size plus up to the number of 1 MiB output packets buffered (64 by default). * For decompression of a non-seekable file or of standard input; the - dictionary size plus 1 MiB plus up to the number of 1 MiB input - and output packets buffered (68 by default). + dictionary size plus 1 MiB plus up to the number of 1 MiB input and + output packets buffered (68 by default). The amount of memory required *per worker thread* for compression is approximately the following: @@ -550,9 +572,8 @@ approximately the following: * For compression at other levels; 11 times the dictionary size plus 3.375 times the data size. Default is 142 MiB. 
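The per-thread figures above can be checked numerically. In this sketch, the 16 MiB data size at level -6 (twice the 8 MiB dictionary) is an assumption taken from the '--data-size' default, which is not restated in this section; only the 142 MiB result is stated here:

```python
MiB = 1 << 20

def comp_mem(dict_size, data_size):
    # Compression at levels other than -0 and -1:
    # 11 times the dictionary size plus 3.375 times the data size.
    return 11 * dict_size + 3.375 * data_size

def decomp_mem_nonseekable(dict_size, in_slots=4, out_slots=64):
    # Decompression of non-seekable input to non-seekable output:
    # dictionary size plus 1 MiB plus the buffered 1 MiB input and
    # output packets (4 + 64 = 68 by default).
    return dict_size + MiB + (in_slots + out_slots) * MiB

# Level -6: 8 MiB dictionary, assumed 16 MiB data size.
print(comp_mem(8 * MiB, 16 * MiB) / MiB)  # 142.0, matching the text
```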
-The following table shows the memory required *per thread* for -compression at a given level, using the default data size for each -level: +The following table shows the memory required *per thread* for compression +at a given level, using the default data size for each level: Level Memory required -0 4.875 MiB @@ -572,22 +593,22 @@ File: plzip.info, Node: Minimum file sizes, Next: Trailing data, Prev: Memory 7 Minimum file sizes required for full compression speed ******************************************************** -When compressing, plzip divides the input file into chunks and -compresses as many chunks simultaneously as worker threads are chosen, -creating a multimember compressed file. +When compressing, plzip divides the input file into chunks and compresses +as many chunks simultaneously as worker threads are chosen, creating a +multimember compressed file. - For this to work as expected (and roughly multiply the compression -speed by the number of available processors), the uncompressed file -must be at least as large as the number of worker threads times the -chunk size (*note --data-size::). Else some processors will not get any -data to compress, and compression will be proportionally slower. The -maximum speed increase achievable on a given file is limited by the -ratio (file_size / data_size). For example, a tarball the size of gcc or -linux will scale up to 8 processors at level -9. + For this to work as expected (and roughly multiply the compression speed +by the number of available processors), the uncompressed file must be at +least as large as the number of worker threads times the chunk size (*note +--data-size::). Else some processors will not get any data to compress, and +compression will be proportionally slower. The maximum speed increase +achievable on a given file is limited by the ratio (file_size / data_size). +For example, a tarball the size of gcc or linux will scale up to 10 or 14 +processors at level -9. 
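The bound on usable threads described above, ceil( file_size / data_size ), can be sketched directly. The 64 MiB data size at level -9 (twice a 32 MiB dictionary) is an assumption about the '--data-size' default, not stated in this excerpt:

```python
import math

MiB = 1 << 20

def usable_threads(file_size, data_size, n_threads):
    # Compression cannot keep more threads busy than there are chunks.
    return min(n_threads, math.ceil(file_size / data_size))

data_size = 64 * MiB  # assumed default data size at level -9

print(usable_threads(640 * MiB, data_size, 16))  # 10 chunks -> 10 threads
print(usable_threads(100 * MiB, data_size, 16))  # 2 chunks  -> 2 threads
```

A 100 MiB file thus gains almost nothing from a 16-core machine at level -9, while a 640 MiB tarball keeps 10 workers busy.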
- The following table shows the minimum uncompressed file size needed -for full use of N processors at a given compression level, using the -default data size for each level: + The following table shows the minimum uncompressed file size needed for +full use of N processors at a given compression level, using the default +data size for each level: Processors 2 4 8 16 64 256 ------------------------------------------------------------------ @@ -612,43 +633,40 @@ File: plzip.info, Node: Trailing data, Next: Examples, Prev: Minimum file siz Sometimes extra data are found appended to a lzip file after the last member. Such trailing data may be: - * Padding added to make the file size a multiple of some block size, - for example when writing to a tape. It is safe to append any - amount of padding zero bytes to a lzip file. + * Padding added to make the file size a multiple of some block size, for + example when writing to a tape. It is safe to append any amount of + padding zero bytes to a lzip file. * Useful data added by the user; a cryptographically secure hash, a - description of file contents, etc. It is safe to append any amount - of text to a lzip file as long as none of the first four bytes of - the text match the corresponding byte in the string "LZIP", and - the text does not contain any zero bytes (null characters). - Nonzero bytes and zero bytes can't be safely mixed in trailing - data. + description of file contents, etc. It is safe to append any amount of + text to a lzip file as long as none of the first four bytes of the text + match the corresponding byte in the string "LZIP", and the text does + not contain any zero bytes (null characters). Nonzero bytes and zero + bytes can't be safely mixed in trailing data. * Garbage added by some not totally successful copy operation. - * Malicious data added to the file in order to make its total size - and hash value (for a chosen hash) coincide with those of another - file. 
+ * Malicious data added to the file in order to make its total size and + hash value (for a chosen hash) coincide with those of another file. * In rare cases, trailing data could be the corrupt header of another member. In multimember or concatenated files the probability of corruption happening in the magic bytes is 5 times smaller than the - probability of getting a false positive caused by the corruption - of the integrity information itself. Therefore it can be - considered to be below the noise level. Additionally, the test - used by plzip to discriminate trailing data from a corrupt header - has a Hamming distance (HD) of 3, and the 3 bit flips must happen - in different magic bytes for the test to fail. In any case, the - option '--trailing-error' guarantees that any corrupt header will - be detected. + probability of getting a false positive caused by the corruption of the + integrity information itself. Therefore it can be considered to be + below the noise level. Additionally, the test used by plzip to + discriminate trailing data from a corrupt header has a Hamming + distance (HD) of 3, and the 3 bit flips must happen in different magic + bytes for the test to fail. In any case, the option '--trailing-error' + guarantees that any corrupt header will be detected. Trailing data are in no way part of the lzip file format, but tools reading lzip files are expected to behave as correctly and usefully as possible in the presence of trailing data. - Trailing data can be safely ignored in most cases. In some cases, -like that of user-added data, they are expected to be ignored. In those -cases where a file containing trailing data must be rejected, the option + Trailing data can be safely ignored in most cases. In some cases, like +that of user-added data, they are expected to be ignored. In those cases +where a file containing trailing data must be rejected, the option '--trailing-error' can be used. *Note --trailing-error::.  
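The rules above for safe user-added trailing data can be expressed as a small check (an illustrative sketch, not a function of plzip itself):

```python
MAGIC = b"LZIP"  # the four magic bytes of a lzip header

def safe_trailing_data(data: bytes) -> bool:
    """Sketch of the rules above for data appended after the last member."""
    if not data:
        return True
    if all(b == 0 for b in data):  # any amount of zero padding is safe
        return True
    if 0 in data:                  # nonzero and zero bytes must not be mixed
        return False
    # None of the first four bytes may match the corresponding magic byte.
    return all(a != b for a, b in zip(data[:4], MAGIC))

print(safe_trailing_data(b"\x00" * 512))          # True: tape padding
print(safe_trailing_data(b"sha256: 9f86d08..."))  # True: user-added text
print(safe_trailing_data(b"LZ corrupt header"))   # False: matches magic
```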
@@ -660,62 +678,70 @@ File: plzip.info, Node: Examples, Next: Problems, Prev: Trailing data, Up: T WARNING! Even if plzip is bug-free, other causes may result in a corrupt compressed file (bugs in the system libraries, memory errors, etc). Therefore, if the data you are going to compress are important, give the -'--keep' option to plzip and don't remove the original file until you +option '--keep' to plzip and don't remove the original file until you verify the compressed file with a command like 'plzip -cd file.lz | cmp file -'. Most RAM errors happening during -compression can only be detected by comparing the compressed file with -the original because the corruption happens before plzip compresses the -RAM contents, resulting in a valid compressed file containing wrong -data. +compression can only be detected by comparing the compressed file with the +original because the corruption happens before plzip compresses the RAM +contents, resulting in a valid compressed file containing wrong data. + + +Example 1: Extract all the files from archive 'foo.tar.lz'. + + tar -xf foo.tar.lz + or + plzip -cd foo.tar.lz | tar -xf - -Example 1: Replace a regular file with its compressed version 'file.lz' -and show the compression ratio. +Example 2: Replace a regular file with its compressed version 'file.lz' and +show the compression ratio. plzip -v file -Example 2: Like example 1 but the created 'file.lz' has a block size of +Example 3: Like example 1 but the created 'file.lz' has a block size of 1 MiB. The compression ratio is not shown. plzip -B 1MiB file -Example 3: Restore a regular file from its compressed version -'file.lz'. If the operation is successful, 'file.lz' is removed. +Example 4: Restore a regular file from its compressed version 'file.lz'. If +the operation is successful, 'file.lz' is removed. plzip -d file.lz -Example 4: Verify the integrity of the compressed file 'file.lz' and -show status. 
+Example 5: Verify the integrity of the compressed file 'file.lz' and show +status. plzip -tv file.lz -Example 5: Compress a whole device in /dev/sdc and send the output to +Example 6: Compress a whole device in /dev/sdc and send the output to 'file.lz'. - plzip -c /dev/sdc > file.lz + plzip -c /dev/sdc > file.lz + or + plzip /dev/sdc -o file.lz -Example 6: The right way of concatenating the decompressed output of two -or more compressed files. *Note Trailing data::. +Example 7: The right way of concatenating the decompressed output of two or +more compressed files. *Note Trailing data::. Don't do this - cat file1.lz file2.lz file3.lz | plzip -d + cat file1.lz file2.lz file3.lz | plzip -d - Do this instead plzip -cd file1.lz file2.lz file3.lz -Example 7: Decompress 'file.lz' partially until 10 KiB of decompressed -data are produced. +Example 8: Decompress 'file.lz' partially until 10 KiB of decompressed data +are produced. plzip -cd file.lz | dd bs=1024 count=10 -Example 8: Decompress 'file.lz' partially from decompressed byte 10000 -to decompressed byte 15000 (5000 bytes are produced). +Example 9: Decompress 'file.lz' partially from decompressed byte at offset +10000 to decompressed byte at offset 14999 (5000 bytes are produced). plzip -cd file.lz | dd bs=1000 skip=10 count=5 @@ -725,14 +751,14 @@ File: plzip.info, Node: Problems, Next: Concept index, Prev: Examples, Up: T 10 Reporting bugs ***************** -There are probably bugs in plzip. There are certainly errors and -omissions in this manual. If you report them, they will get fixed. If -you don't, no one will ever know about them and they will remain unfixed -for all eternity, if not longer. +There are probably bugs in plzip. There are certainly errors and omissions +in this manual. If you report them, they will get fixed. If you don't, no +one will ever know about them and they will remain unfixed for all +eternity, if not longer. If you find a bug in plzip, please send electronic mail to -. 
Include the version number, which you can find -by running 'plzip --version'. +. Include the version number, which you can find by +running 'plzip --version'.  File: plzip.info, Node: Concept index, Prev: Problems, Up: Top @@ -743,40 +769,40 @@ Concept index [index] * Menu: -* bugs: Problems. (line 6) -* examples: Examples. (line 6) -* file format: File format. (line 6) -* getting help: Problems. (line 6) -* introduction: Introduction. (line 6) -* invoking: Invoking plzip. (line 6) -* memory requirements: Memory requirements. (line 6) -* minimum file sizes: Minimum file sizes. (line 6) -* options: Invoking plzip. (line 6) -* output: Output. (line 6) -* program design: Program design. (line 6) -* trailing data: Trailing data. (line 6) -* usage: Invoking plzip. (line 6) -* version: Invoking plzip. (line 6) +* bugs: Problems. (line 6) +* examples: Examples. (line 6) +* file format: File format. (line 6) +* getting help: Problems. (line 6) +* introduction: Introduction. (line 6) +* invoking: Invoking plzip. (line 6) +* memory requirements: Memory requirements. (line 6) +* minimum file sizes: Minimum file sizes. (line 6) +* options: Invoking plzip. (line 6) +* output: Output. (line 6) +* program design: Program design. (line 6) +* trailing data: Trailing data. (line 6) +* usage: Invoking plzip. (line 6) +* version: Invoking plzip. 
(line 6)  Tag Table: Node: Top222 -Node: Introduction1158 -Node: Output5456 -Node: Invoking plzip6936 -Ref: --trailing-error7563 -Ref: --data-size7806 -Node: Program design16267 -Node: File format18419 -Ref: coded-dict-size19719 -Node: Memory requirements20849 -Node: Minimum file sizes22531 -Node: Trailing data24540 -Node: Examples26823 -Ref: concat-example28238 -Node: Problems28813 -Node: Concept index29341 +Node: Introduction1159 +Node: Output5788 +Node: Invoking plzip7351 +Ref: --trailing-error8146 +Ref: --data-size8384 +Node: Program design18364 +Node: File format20542 +Ref: coded-dict-size21840 +Node: Memory requirements22995 +Node: Minimum file sizes24677 +Node: Trailing data26693 +Node: Examples28961 +Ref: concat-example30556 +Node: Problems31153 +Node: Concept index31681  End Tag Table diff --git a/doc/plzip.texi b/doc/plzip.texi index b5469b9..26c0820 100644 --- a/doc/plzip.texi +++ b/doc/plzip.texi @@ -6,8 +6,8 @@ @finalout @c %**end of header -@set UPDATED 5 January 2019 -@set VERSION 1.8 +@set UPDATED 3 January 2021 +@set VERSION 1.9 @dircategory Data Compression @direntry @@ -29,6 +29,7 @@ @contents @end ifnothtml +@ifnottex @node Top @top @@ -49,35 +50,47 @@ This manual is for Plzip (version @value{VERSION}, @value{UPDATED}). @end menu @sp 1 -Copyright @copyright{} 2009-2019 Antonio Diaz Diaz. +Copyright @copyright{} 2009-2021 Antonio Diaz Diaz. -This manual is free documentation: you have unlimited permission -to copy, distribute and modify it. +This manual is free documentation: you have unlimited permission to copy, +distribute, and modify it. +@end ifnottex @node Introduction @chapter Introduction @cindex introduction -@uref{http://www.nongnu.org/lzip/plzip.html,,Plzip} is a massively parallel -(multi-threaded) implementation of lzip, fully compatible with lzip 1.4 or -newer. Plzip uses the lzlib compression library. 
- -@uref{http://www.nongnu.org/lzip/lzip.html,,Lzip} is a lossless data -compressor with a user interface similar to the one of gzip or bzip2. Lzip -can compress about as fast as gzip @w{(lzip -0)} or compress most files more -than bzip2 @w{(lzip -9)}. Decompression speed is intermediate between gzip -and bzip2. Lzip is better than gzip and bzip2 from a data recovery -perspective. Lzip has been designed, written and tested with great care to -replace gzip and bzip2 as the standard general-purpose compressed format for -unix-like systems. - -Plzip can compress/decompress large files on multiprocessor machines -much faster than lzip, at the cost of a slightly reduced compression -ratio (0.4 to 2 percent larger compressed files). Note that the number -of usable threads is limited by file size; on files larger than a few GB -plzip can use hundreds of processors, but on files of only a few MB -plzip is no faster than lzip. @xref{Minimum file sizes}. +@uref{http://www.nongnu.org/lzip/plzip.html,,Plzip} +is a massively parallel (multi-threaded) implementation of lzip, fully +compatible with lzip 1.4 or newer. Plzip uses the compression library +@uref{http://www.nongnu.org/lzip/lzlib.html,,lzlib}. + +@uref{http://www.nongnu.org/lzip/lzip.html,,Lzip} +is a lossless data compressor with a user interface similar to the one +of gzip or bzip2. Lzip uses a simplified form of the 'Lempel-Ziv-Markov +chain-Algorithm' (LZMA) stream format, chosen to maximize safety and +interoperability. Lzip can compress about as fast as gzip @w{(lzip -0)} or +compress most files more than bzip2 @w{(lzip -9)}. Decompression speed is +intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 from +a data recovery perspective. Lzip has been designed, written, and tested +with great care to replace gzip and bzip2 as the standard general-purpose +compressed format for unix-like systems. 
+ +Plzip can compress/decompress large files on multiprocessor machines much +faster than lzip, at the cost of a slightly reduced compression ratio (0.4 +to 2 percent larger compressed files). Note that the number of usable +threads is limited by file size; on files larger than a few GB plzip can use +hundreds of processors, but on files of only a few MB plzip is no faster +than lzip. @xref{Minimum file sizes}. + +For creation and manipulation of compressed tar archives +@uref{http://www.nongnu.org/lzip/manual/tarlz_manual.html,,tarlz} can be +more efficient than using tar and plzip because tarlz is able to keep the +alignment between tar members and lzip members. +@ifnothtml +@xref{Top,tarlz manual,,tarlz}. +@end ifnothtml The lzip file format is designed for data sharing and long-term archiving, taking into account both data integrity and decoder availability: @@ -85,11 +98,11 @@ taking into account both data integrity and decoder availability: @itemize @bullet @item The lzip format provides very safe integrity checking and some data -recovery means. The +recovery means. The program @uref{http://www.nongnu.org/lzip/manual/lziprecover_manual.html#Data-safety,,lziprecover} -program can repair bit flip errors (one of the most common forms of data -corruption) in lzip files, and provides data recovery capabilities, -including error-checked merging of damaged copies of a file. +can repair bit flip errors (one of the most common forms of data corruption) +in lzip files, and provides data recovery capabilities, including +error-checked merging of damaged copies of a file. @ifnothtml @xref{Data safety,,,lziprecover}. @end ifnothtml @@ -107,10 +120,10 @@ Additionally the lzip reference implementation is copylefted, which guarantees that it will remain free forever. @end itemize -A nice feature of the lzip format is that a corrupt byte is easier to -repair the nearer it is from the beginning of the file. 
Therefore, with -the help of lziprecover, losing an entire archive just because of a -corrupt byte near the beginning is a thing of the past. +A nice feature of the lzip format is that a corrupt byte is easier to repair +the nearer it is from the beginning of the file. Therefore, with the help of +lziprecover, losing an entire archive just because of a corrupt byte near +the beginning is a thing of the past. Plzip uses the same well-defined exit status values used by lzip, which makes it safer than compressors returning ambiguous warning values (like @@ -138,13 +151,12 @@ possible, ownership of the file just as @samp{cp -p} does. (If the user ID or the group ID can't be duplicated, the file permission bits S_ISUID and S_ISGID are cleared). -Plzip is able to read from some types of non regular files if the -@samp{--stdout} option is specified. +Plzip is able to read from some types of non-regular files if either the +option @samp{-c} or the option @samp{-o} is specified. -If no file names are specified, plzip compresses (or decompresses) from -standard input to standard output. In this case, plzip will decline to -write compressed output to a terminal, as this would be entirely -incomprehensible and therefore pointless. +Plzip will refuse to read compressed data from a terminal or write compressed +data to a terminal, as this would be entirely incomprehensible and might +leave the terminal in an abnormal state. Plzip will correctly decompress a file which is the concatenation of two or more compressed files. The result is the concatenation of the corresponding @@ -162,16 +174,16 @@ The output of plzip looks like this: plzip -v foo foo: 6.676:1, 14.98% ratio, 85.02% saved, 450560 in, 67493 out. -plzip -tvv foo.lz - foo.lz: 6.676:1, 14.98% ratio, 85.02% saved. ok +plzip -tvvv foo.lz + foo.lz: 6.676:1, 14.98% ratio, 85.02% saved. 450560 out, 67493 in. 
ok @end example The meaning of each field is as follows: @table @code @item N:1 -The compression ratio @w{(uncompressed_size / compressed_size)}, shown -as N to 1. +The compression ratio @w{(uncompressed_size / compressed_size)}, shown as +@w{N to 1}. @item ratio The inverse compression ratio @w{(compressed_size / uncompressed_size)}, @@ -182,23 +194,23 @@ decimal point two places to the left; @w{14.98% = 0.1498}. The space saved by compression @w{(1 - ratio)}, shown as a percentage. @item in -The size of the uncompressed data. When decompressing or testing, it is -shown as @code{decompressed}. Note that plzip always prints the -uncompressed size before the compressed size when compressing, -decompressing, testing or listing. +Size of the input data. This is the uncompressed size when compressing, or +the compressed size when decompressing or testing. Note that plzip always +prints the uncompressed size before the compressed size when compressing, +decompressing, testing, or listing. @item out -The size of the compressed data. When decompressing or testing, it is -shown as @code{compressed}. +Size of the output data. This is the compressed size when compressing, or +the decompressed size when decompressing or testing. @end table -When decompressing or testing at verbosity level 4 (-vvvv), the -dictionary size used to compress the file is also shown. +When decompressing or testing at verbosity level 4 (-vvvv), the dictionary +size used to compress the file is also shown. -LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never -have been compressed. Decompressed is used to refer to data which have -undergone the process of decompression. +LANGUAGE NOTE: Uncompressed = not compressed = plain data; it may never have +been compressed. Decompressed is used to refer to data which have undergone +the process of decompression. 
@node Invoking plzip @@ -215,11 +227,16 @@ plzip [@var{options}] [@var{files}] @end example @noindent -@samp{-} used as a @var{file} argument means standard input. It can be -mixed with other @var{files} and is read just once, the first time it -appears in the command line. +If no file names are specified, plzip compresses (or decompresses) from +standard input to standard output. A hyphen @samp{-} used as a @var{file} +argument means standard input. It can be mixed with other @var{files} and is +read just once, the first time it appears in the command line. -plzip supports the following options: +plzip supports the following +@uref{http://www.nongnu.org/arg-parser/manual/arg_parser_manual.html#Argument-syntax,,options}: +@ifnothtml +@xref{Argument syntax,,,arg_parser}. +@end ifnothtml @table @code @item -h @@ -246,18 +263,20 @@ input file will be divided in chunks of this size before compression is performed. Valid values range from @w{8 KiB} to @w{1 GiB}. Default value is two times the dictionary size, except for option @samp{-0} where it defaults to @w{1 MiB}. Plzip will reduce the dictionary size if it is -larger than the chosen data size. +larger than the data size specified. @xref{Minimum file sizes}. @item -c @itemx --stdout -Compress or decompress to standard output; keep input files unchanged. -If compressing several files, each file is compressed independently. -This option is needed when reading from a named pipe (fifo) or from a -device. +Compress or decompress to standard output; keep input files unchanged. If +compressing several files, each file is compressed independently. This +option (or @samp{-o}) is needed when reading from a named pipe (fifo) or +from a device. Use @w{@samp{lziprecover -cd -i}} to recover as much of the +decompressed data as possible when decompressing a corrupt file. @samp{-c} +overrides @samp{-o}. @samp{-c} has no effect when testing or listing. @item -d @itemx --decompress -Decompress the specified files. 
If a file does not exist or can't be +Decompress the files specified. If a file does not exist or can't be opened, plzip continues decompressing the rest of the files. If a file fails to decompress, or is a terminal, plzip exits immediately without decompressing the rest of the files. @@ -277,17 +296,18 @@ Keep (don't delete) input files during compression or decompression. @item -l @itemx --list -Print the uncompressed size, compressed size and percentage saved of the -specified files. Trailing data are ignored. The values produced are -correct even for multimember files. If more than one file is given, a -final line containing the cumulative sizes is printed. With @samp{-v}, -the dictionary size, the number of members in the file, and the amount -of trailing data (if any) are also printed. With @samp{-vv}, the -positions and sizes of each member in multimember files are also -printed. @samp{-lq} can be used to verify quickly (without -decompressing) the structural integrity of the specified files. (Use -@samp{--test} to verify the data integrity). @samp{-alq} additionally -verifies that none of the specified files contain trailing data. +Print the uncompressed size, compressed size, and percentage saved of the +files specified. Trailing data are ignored. The values produced are correct +even for multimember files. If more than one file is given, a final line +containing the cumulative sizes is printed. With @samp{-v}, the dictionary +size, the number of members in the file, and the amount of trailing data (if +any) are also printed. With @samp{-vv}, the positions and sizes of each +member in multimember files are also printed. + +@samp{-lq} can be used to verify quickly (without decompressing) the +structural integrity of the files specified. (Use @samp{--test} to verify +the data integrity). @samp{-alq} additionally verifies that none of the +files specified contain trailing data. 
@item -m @var{bytes} @itemx --match-length=@var{bytes} @@ -298,27 +318,36 @@ compression times. @item -n @var{n} @itemx --threads=@var{n} -Set the number of worker threads, overriding the system's default. Valid -values range from 1 to "as many as your system can support". If this -option is not used, plzip tries to detect the number of processors in -the system and use it as default value. When compressing on a @w{32 bit} -system, plzip tries to limit the memory use to under @w{2.22 GiB} (4 -worker threads at level -9) by reducing the number of threads below the -system's default. @w{@samp{plzip --help}} shows the system's default -value. - -Note that the number of usable threads is limited to @w{ceil( file_size -/ data_size )} during compression (@pxref{Minimum file sizes}), and to -the number of members in the input during decompression. +Set the maximum number of worker threads, overriding the system's default. +Valid values range from 1 to "as many as your system can support". If this +option is not used, plzip tries to detect the number of processors in the +system and use it as default value. When compressing on a @w{32 bit} system, +plzip tries to limit the memory use to under @w{2.22 GiB} (4 worker threads +at level -9) by reducing the number of threads below the system's default. +@w{@samp{plzip --help}} shows the system's default value. + +Plzip starts the number of threads required by each file without exceeding +the value specified. Note that the number of usable threads is limited to +@w{ceil( file_size / data_size )} during compression (@pxref{Minimum file +sizes}), and to the number of members in the input during decompression. You +can find the number of members in a lzip file by running +@w{@samp{plzip -lv file.lz}}. @item -o @var{file} @itemx --output=@var{file} -When reading from standard input and @samp{--stdout} has not been -specified, use @samp{@var{file}} as the virtual name of the uncompressed -file. 
This produces a file named @samp{@var{file}} when decompressing, -or a file named @samp{@var{file}.lz} when compressing. A second -@samp{.lz} extension is not added if @samp{@var{file}} already ends in -@samp{.lz} or @samp{.tlz}. +If @samp{-c} has not been also specified, write the (de)compressed output to +@var{file}; keep input files unchanged. If compressing several files, each +file is compressed independently. This option (or @samp{-c}) is needed when +reading from a named pipe (fifo) or from a device. @w{@samp{-o -}} is +equivalent to @samp{-c}. @samp{-o} has no effect when testing or listing. + +In order to keep backward compatibility with plzip versions prior to 1.9, +when compressing from standard input and no other file names are given, the +extension @samp{.lz} is appended to @var{file} unless it already ends in +@samp{.lz} or @samp{.tlz}. This feature will be removed in a future version +of plzip. Meanwhile, redirection may be used instead of @samp{-o} to write +the compressed output to a file without the extension @samp{.lz} in its +name: @w{@samp{plzip < file > foo}}. @item -q @itemx --quiet @@ -331,7 +360,7 @@ for each file the largest dictionary size that does not exceed neither the file size nor this limit. Valid values range from @w{4 KiB} to @w{512 MiB}. Values 12 to 29 are interpreted as powers of two, meaning 2^12 to 2^29 bytes. Dictionary sizes are quantized so that they can be -coded in just one byte (@pxref{coded-dict-size}). If the specified size +coded in just one byte (@pxref{coded-dict-size}). If the size specified does not match one of the valid sizes, it will be rounded upwards by adding up to @w{(@var{bytes} / 8)} to it. @@ -341,12 +370,13 @@ is affected at compression time by the choice of dictionary size limit. @item -t @itemx --test -Check integrity of the specified files, but don't decompress them. This +Check integrity of the files specified, but don't decompress them. 
This really performs a trial decompression and throws away the result. Use it together with @samp{-v} to see information about the files. If a file -does not exist, can't be opened, or is a terminal, plzip continues -checking the rest of the files. If a file fails the test, plzip may be -unable to check the rest of the files. +fails the test, does not exist, can't be opened, or is a terminal, plzip +continues checking the rest of the files. A final diagnostic is shown at +verbosity level 1 or higher if any file fails the test when testing +multiple files. @item -v @itemx --verbose @@ -364,12 +394,12 @@ Compression level. Set the compression parameters (dictionary size and match length limit) as shown in the table below. The default compression level is @samp{-6}, equivalent to @w{@samp{-s8MiB -m36}}. Note that @samp{-9} can be much slower than @samp{-0}. These options have no -effect when decompressing, testing or listing. +effect when decompressing, testing, or listing. The bidimensional parameter space of LZMA can't be mapped to a linear scale optimal for all files. If your files are large, very repetitive, -etc, you may need to use the @samp{--dictionary-size} and -@samp{--match-length} options directly to achieve optimal performance. +etc, you may need to use the options @samp{--dictionary-size} and +@samp{--match-length} directly to achieve optimal performance. If several compression levels or @samp{-s} or @samp{-m} options are given, the last setting is used. For example @w{@samp{-9 -s64MiB}} is @@ -394,7 +424,7 @@ equivalent to @w{@samp{-s64MiB -m273}} Aliases for GNU gzip compatibility. @item --loose-trailing -When decompressing, testing or listing, allow trailing data whose first +When decompressing, testing, or listing, allow trailing data whose first bytes are so similar to the magic bytes of a lzip header that they can be confused with a corrupt header. 
Use this option if a file triggers a "corrupt header" error and the cause is not indeed a corrupt header. @@ -411,6 +441,19 @@ decompressing to non-seekable output. Increasing the number of packets may increase decompression speed, but requires more memory. Valid values range from 1 to 1024. The default value is 64. +@item --check-lib +Compare the +@uref{http://www.nongnu.org/lzip/manual/lzlib_manual.html#Library-version,,version of lzlib} +used to compile plzip with the version actually being used at run time and +exit. Report any differences found. Exit with error status 1 if differences +are found. A mismatch may indicate that lzlib is not correctly installed or +that a different version of lzlib has been installed after compiling plzip. +@w{@samp{plzip -v --check-lib}} shows the version of lzlib being used and +the value of @samp{LZ_API_VERSION} (if defined). +@ifnothtml +@xref{Library version,,,lzlib}. +@end ifnothtml + @end table Numbers given as arguments to options may be followed by a multiplier @@ -438,16 +481,16 @@ caused plzip to panic. @node Program design -@chapter Program design +@chapter Internal structure of plzip @cindex program design -When compressing, plzip divides the input file into chunks and -compresses as many chunks simultaneously as worker threads are chosen, -creating a multimember compressed file. +When compressing, plzip divides the input file into chunks and compresses as +many chunks simultaneously as worker threads are chosen, creating a +multimember compressed file. When decompressing, plzip decompresses as many members simultaneously as worker threads are chosen. Files that were compressed with lzip will not -be decompressed faster than using lzip (unless the @samp{-b} option was used) +be decompressed faster than using lzip (unless the option @samp{-b} was used) because lzip usually produces single-member files, which can't be decompressed in parallel. 
@@ -492,6 +535,7 @@ when there is no longer anything to take away.@* @sp 1 In the diagram below, a box like this: + @verbatim +---+ | | <-- the vertical bars might be missing @@ -499,6 +543,7 @@ In the diagram below, a box like this: @end verbatim represents one byte; a box like this: + @verbatim +==============+ | | @@ -513,6 +558,7 @@ The members simply appear one after another in the file, with no additional information before, between, or after them. Each member has the following structure: + @verbatim +--+--+--+--+----+----+=============+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | ID string | VN | DS | LZMA stream | CRC32 | Data size | Member size | @@ -532,8 +578,7 @@ Just in case something needs to be modified in the future. 1 for now. @anchor{coded-dict-size} @item DS (coded dictionary size, 1 byte) The dictionary size is calculated by taking a power of 2 (the base size) -and subtracting from it a fraction between 0/16 and 7/16 of the base -size.@* +and subtracting from it a fraction between 0/16 and 7/16 of the base size.@* Bits 4-0 contain the base 2 logarithm of the base size (12 to 29).@* Bits 7-5 contain the numerator of the fraction (0 to 7) to subtract from the base size to obtain the dictionary size.@* @@ -541,8 +586,8 @@ Example: 0xD3 = 2^19 - 6 * 2^15 = 512 KiB - 6 * 32 KiB = 320 KiB@* Valid values for dictionary size range from 4 KiB to 512 MiB. @item LZMA stream -The LZMA stream, finished by an end of stream marker. Uses default -values for encoder properties. +The LZMA stream, finished by an end of stream marker. Uses default values +for encoder properties. @ifnothtml @xref{Stream format,,,lzip}, @end ifnothtml @@ -553,7 +598,7 @@ See for a complete description. @item CRC32 (4 bytes) -CRC of the uncompressed original data. +Cyclic Redundancy Check (CRC) of the uncompressed original data. @item Data size (8 bytes) Size of the uncompressed original data. @@ -570,8 +615,8 @@ facilitates safe recovery of undamaged members from multimember files. 
@chapter Memory required to compress and decompress @cindex memory requirements -The amount of memory required @strong{per worker thread} for -decompression or testing is approximately the following: +The amount of memory required @strong{per worker thread} for decompression +or testing is approximately the following: @itemize @bullet @item @@ -610,8 +655,7 @@ times the data size. Default is @w{142 MiB}. @noindent The following table shows the memory required @strong{per thread} for -compression at a given level, using the default data size for each -level: +compression at a given level, using the default data size for each level: @multitable {Level} {Memory required} @item Level @tab Memory required @@ -643,7 +687,7 @@ least as large as the number of worker threads times the chunk size compress, and compression will be proportionally slower. The maximum speed increase achievable on a given file is limited by the ratio @w{(file_size / data_size)}. For example, a tarball the size of gcc or -linux will scale up to 8 processors at level -9. +linux will scale up to 10 or 14 processors at level -9. The following table shows the minimum uncompressed file size needed for full use of N processors at a given compression level, using the default @@ -723,7 +767,7 @@ where a file containing trailing data must be rejected, the option WARNING! Even if plzip is bug-free, other causes may result in a corrupt compressed file (bugs in the system libraries, memory errors, etc). Therefore, if the data you are going to compress are important, give the -@samp{--keep} option to plzip and don't remove the original file until you +option @samp{--keep} to plzip and don't remove the original file until you verify the compressed file with a command like @w{@samp{plzip -cd file.lz | cmp file -}}. Most RAM errors happening during compression can only be detected by comparing the compressed file with the @@ -732,8 +776,18 @@ contents, resulting in a valid compressed file containing wrong data. 

@sp 1
@noindent
-Example 1: Replace a regular file with its compressed version
-@samp{file.lz} and show the compression ratio.
+Example 1: Extract all the files from archive @samp{foo.tar.lz}.
+
+@example
+  tar -xf foo.tar.lz
+or
+  plzip -cd foo.tar.lz | tar -xf -
+@end example
+
+@sp 1
+@noindent
+Example 2: Replace a regular file with its compressed version @samp{file.lz}
+and show the compression ratio.

@example
plzip -v file
@@ -741,8 +795,8 @@ plzip -v file

@sp 1
@noindent
-Example 2: Like example 1 but the created @samp{file.lz} has a block
-size of @w{1 MiB}. The compression ratio is not shown.
+Example 3: Like example 2 but the created @samp{file.lz} has a block size of
+@w{1 MiB}. The compression ratio is not shown.

@example
plzip -B 1MiB file
@@ -750,9 +804,8 @@ plzip -B 1MiB file

@sp 1
@noindent
-Example 3: Restore a regular file from its compressed version
-@samp{file.lz}. If the operation is successful, @samp{file.lz} is
-removed.
+Example 4: Restore a regular file from its compressed version
+@samp{file.lz}. If the operation is successful, @samp{file.lz} is removed.

@example
plzip -d file.lz
@@ -760,8 +813,8 @@ plzip -d file.lz

@sp 1
@noindent
-Example 4: Verify the integrity of the compressed file @samp{file.lz}
-and show status.
+Example 5: Verify the integrity of the compressed file @samp{file.lz} and
+show status.

@example
plzip -tv file.lz
@@ -769,29 +822,31 @@ plzip -tv file.lz

@sp 1
@noindent
-Example 5: Compress a whole device in /dev/sdc and send the output to
+Example 6: Compress a whole device in /dev/sdc and send the output to
@samp{file.lz}.

@example
-plzip -c /dev/sdc > file.lz
+  plzip -c /dev/sdc > file.lz
+or
+  plzip /dev/sdc -o file.lz
@end example

@sp 1
@anchor{concat-example}
@noindent
-Example 6: The right way of concatenating the decompressed output of two
-or more compressed files. @xref{Trailing data}.
+Example 7: The right way of concatenating the decompressed output of two or
+more compressed files. @xref{Trailing data}.
@example Don't do this - cat file1.lz file2.lz file3.lz | plzip -d + cat file1.lz file2.lz file3.lz | plzip -d - Do this instead plzip -cd file1.lz file2.lz file3.lz @end example @sp 1 @noindent -Example 7: Decompress @samp{file.lz} partially until @w{10 KiB} of +Example 8: Decompress @samp{file.lz} partially until @w{10 KiB} of decompressed data are produced. @example @@ -800,8 +855,8 @@ plzip -cd file.lz | dd bs=1024 count=10 @sp 1 @noindent -Example 8: Decompress @samp{file.lz} partially from decompressed byte -10000 to decompressed byte 15000 (5000 bytes are produced). +Example 9: Decompress @samp{file.lz} partially from decompressed byte at +offset 10000 to decompressed byte at offset 14999 (5000 bytes are produced). @example plzip -cd file.lz | dd bs=1000 skip=10 count=5 @@ -820,7 +875,7 @@ for all eternity, if not longer. If you find a bug in plzip, please send electronic mail to @email{lzip-bug@@nongnu.org}. Include the version number, which you can -find by running @w{@code{plzip --version}}. +find by running @w{@samp{plzip --version}}. @node Concept index diff --git a/list.cc b/list.cc index d3e4908..cc8c6da 100644 --- a/list.cc +++ b/list.cc @@ -1,18 +1,18 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009-2019 Antonio Diaz Diaz. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009-2021 Antonio Diaz Diaz. - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. 
- This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. - You should have received a copy of the GNU General Public License - along with this program. If not, see . + You should have received a copy of the GNU General Public License + along with this program. If not, see . */ #define _FILE_OFFSET_BITS 64 @@ -21,7 +21,6 @@ #include #include #include -#include #include #include #include @@ -37,11 +36,11 @@ void list_line( const unsigned long long uncomp_size, const char * const input_filename ) { if( uncomp_size > 0 ) - std::printf( "%15llu %15llu %6.2f%% %s\n", uncomp_size, comp_size, + std::printf( "%14llu %14llu %6.2f%% %s\n", uncomp_size, comp_size, 100.0 - ( ( 100.0 * comp_size ) / uncomp_size ), input_filename ); else - std::printf( "%15llu %15llu -INF%% %s\n", uncomp_size, comp_size, + std::printf( "%14llu %14llu -INF%% %s\n", uncomp_size, comp_size, input_filename ); } @@ -63,15 +62,15 @@ int list_files( const std::vector< std::string > & filenames, from_stdin ? "(stdin)" : filenames[i].c_str(); struct stat in_stats; // not used const int infd = from_stdin ? 
STDIN_FILENO : - open_instream( input_filename, &in_stats, true, true ); - if( infd < 0 ) { if( retval < 1 ) retval = 1; continue; } + open_instream( input_filename, &in_stats, false, true ); + if( infd < 0 ) { set_retval( retval, 1 ); continue; } const Lzip_index lzip_index( infd, ignore_trailing, loose_trailing ); close( infd ); if( lzip_index.retval() != 0 ) { show_file_error( input_filename, lzip_index.error().c_str() ); - if( retval < lzip_index.retval() ) retval = lzip_index.retval(); + set_retval( retval, lzip_index.retval() ); continue; } if( verbosity >= 0 ) @@ -79,32 +78,27 @@ int list_files( const std::vector< std::string > & filenames, const unsigned long long udata_size = lzip_index.udata_size(); const unsigned long long cdata_size = lzip_index.cdata_size(); total_comp += cdata_size; total_uncomp += udata_size; ++files; + const long members = lzip_index.members(); if( first_post ) { first_post = false; if( verbosity >= 1 ) std::fputs( " dict memb trail ", stdout ); - std::fputs( " uncompressed compressed saved name\n", stdout ); + std::fputs( " uncompressed compressed saved name\n", stdout ); } if( verbosity >= 1 ) - { - unsigned dictionary_size = 0; - for( long i = 0; i < lzip_index.members(); ++i ) - dictionary_size = - std::max( dictionary_size, lzip_index.dictionary_size( i ) ); - const long long trailing_size = lzip_index.file_size() - cdata_size; - std::printf( "%s %5ld %6lld ", format_ds( dictionary_size ), - lzip_index.members(), trailing_size ); - } + std::printf( "%s %5ld %6lld ", + format_ds( lzip_index.dictionary_size() ), members, + lzip_index.file_size() - cdata_size ); list_line( udata_size, cdata_size, input_filename ); - if( verbosity >= 2 && lzip_index.members() > 1 ) + if( verbosity >= 2 && members > 1 ) { - std::fputs( " member data_pos data_size member_pos member_size\n", stdout ); - for( long i = 0; i < lzip_index.members(); ++i ) + std::fputs( " member data_pos data_size member_pos member_size\n", stdout ); + for( long i = 0; i < 
members; ++i ) { const Block & db = lzip_index.dblock( i ); const Block & mb = lzip_index.mblock( i ); - std::printf( "%5ld %15llu %15llu %15llu %15llu\n", + std::printf( "%6ld %14llu %14llu %14llu %14llu\n", i + 1, db.pos(), db.size(), mb.pos(), mb.size() ); } first_post = true; // reprint heading after list of members diff --git a/lzip.h b/lzip.h index dfbf4f7..be64e1b 100644 --- a/lzip.h +++ b/lzip.h @@ -1,27 +1,25 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009-2019 Antonio Diaz Diaz. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009-2021 Antonio Diaz Diaz. - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. - You should have received a copy of the GNU General Public License - along with this program. If not, see . + You should have received a copy of the GNU General Public License + along with this program. If not, see . 
*/ -#ifndef LZ_API_VERSION -#define LZ_API_VERSION 1 -#endif +#include enum { min_dictionary_bits = 12, - min_dictionary_size = 1 << min_dictionary_bits, + min_dictionary_size = 1 << min_dictionary_bits, // >= modeled_distances max_dictionary_bits = 29, max_dictionary_size = 1 << max_dictionary_bits, min_member_size = 36 }; @@ -88,7 +86,7 @@ struct Lzip_header { uint8_t data[6]; // 0-3 magic bytes // 4 version - // 5 coded_dict_size + // 5 coded dictionary size enum { size = 6 }; void set_magic() { std::memcpy( data, lzip_magic, 4 ); data[4] = 1; } @@ -134,6 +132,10 @@ struct Lzip_header } return true; } + + bool verify() const + { return verify_magic() && verify_version() && + isvalid_ds( dictionary_size() ); } }; @@ -190,10 +192,14 @@ struct Lzip_trailer }; +inline void set_retval( int & retval, const int new_val ) + { if( retval < new_val ) retval = new_val; } + const char * const bad_magic_msg = "Bad magic number (file not in lzip format)."; const char * const bad_dict_msg = "Invalid dictionary size in member header."; const char * const corrupt_mm_msg = "Corrupt header in multimember file."; const char * const trailing_msg = "Trailing data not allowed."; +const char * const mem_msg = "Not enough memory."; // defined in compress.cc int readblock( const int fd, uint8_t * const buf, const int size ); @@ -231,13 +237,19 @@ int dec_stream( const unsigned long long cfile_size, // defined in decompress.cc int preadblock( const int fd, uint8_t * const buf, const int size, const long long pos ); -int decompress_read_error( struct LZ_Decoder * const decoder, - const Pretty_print & pp, const int worker_id ); +class Shared_retval; +void decompress_error( struct LZ_Decoder * const decoder, + const Pretty_print & pp, + Shared_retval & shared_retval, const int worker_id ); +void show_results( const unsigned long long in_size, + const unsigned long long out_size, + const unsigned dictionary_size, const bool testing ); int decompress( const unsigned long long cfile_size, int 
num_workers, const int infd, const int outfd, const Pretty_print & pp, const int debug_level, const int in_slots, const int out_slots, const bool ignore_trailing, - const bool loose_trailing, const bool infd_isreg ); + const bool loose_trailing, const bool infd_isreg, + const bool one_to_one ); // defined in list.cc int list_files( const std::vector< std::string > & filenames, @@ -249,7 +261,7 @@ const char * bad_version( const unsigned version ); const char * format_ds( const unsigned dictionary_size ); void show_header( const unsigned dictionary_size ); int open_instream( const char * const name, struct stat * const in_statsp, - const bool no_ofile, const bool reg_only = false ); + const bool one_to_one, const bool reg_only = false ); void cleanup_and_fail( const int retval = 1 ); // terminate the program void show_error( const char * const msg, const int errcode = 0, const bool help = false ); @@ -295,3 +307,27 @@ public: xunlock( &mutex ); } }; + + +class Shared_retval // shared return value protected by a mutex + { + int retval; + pthread_mutex_t mutex; + + Shared_retval( const Shared_retval & ); // declared as private + void operator=( const Shared_retval & ); // declared as private + +public: + Shared_retval() : retval( 0 ) { xinit_mutex( &mutex ); } + + bool set_value( const int val ) // only one thread can set retval > 0 + { // (and print an error message) + xlock( &mutex ); + const bool done = ( retval == 0 && val > 0 ); + if( done ) retval = val; + xunlock( &mutex ); + return done; + } + + int operator()() const { return retval; } + }; diff --git a/lzip_index.cc b/lzip_index.cc index d9c810c..fe79f5b 100644 --- a/lzip_index.cc +++ b/lzip_index.cc @@ -1,18 +1,18 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009-2019 Antonio Diaz Diaz. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009-2021 Antonio Diaz Diaz. 
- This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. - You should have received a copy of the GNU General Public License - along with this program. If not, see <http://www.gnu.org/licenses/>. + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. 
*/ #define _FILE_OFFSET_BITS 64 @@ -23,7 +23,6 @@ #include #include #include -#include #include #include @@ -44,6 +43,19 @@ int seek_read( const int fd, uint8_t * const buf, const int size, } // end namespace +bool Lzip_index::check_header_error( const Lzip_header & header, + const bool first ) + { + if( !header.verify_magic() ) + { error_ = bad_magic_msg; retval_ = 2; if( first ) bad_magic_ = true; + return true; } + if( !header.verify_version() ) + { error_ = bad_version( header.version() ); retval_ = 2; return true; } + if( !isvalid_ds( header.dictionary_size() ) ) + { error_ = bad_dict_msg; retval_ = 2; return true; } + return false; + } + void Lzip_index::set_errno_error( const char * const msg ) { error_ = msg; error_ += std::strerror( errno ); @@ -59,14 +71,24 @@ void Lzip_index::set_num_error( const char * const msg, unsigned long long num ) } +bool Lzip_index::read_header( const int fd, Lzip_header & header, + const long long pos ) + { + if( seek_read( fd, header.data, Lzip_header::size, pos ) != Lzip_header::size ) + { set_errno_error( "Error reading member header: " ); return false; } + return true; + } + + // If successful, push last member and set pos to member header. 
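`skip_trailing_data` below scans backwards through the buffer for a plausible 20-byte member trailer. For reference, the lzip trailer stores the CRC32 of the uncompressed data (bytes 0-3), the uncompressed data size (bytes 4-11), and the member size (bytes 12-19), all little endian. This standalone sketch (helper names are ours) decodes the two sizes the way `Lzip_trailer::data_size()`/`member_size()` do, and applies the same plausibility check that rejects candidates whose member size cannot fit before the trailer:

```cpp
#include <cstdint>

// The 20-byte lzip member trailer: CRC32 of the uncompressed data (bytes
// 0-3), uncompressed data size (bytes 4-11), member size including header
// and trailer (bytes 12-19); all fields little endian.

// Little-endian read of n bytes, equivalent to what Lzip_trailer::data_size()
// and Lzip_trailer::member_size() compute byte by byte.
unsigned long long le_read( const uint8_t * const p, int n )
  {
  unsigned long long v = 0;
  while( --n >= 0 ) v = ( v << 8 ) + p[n];
  return v;
  }

// A candidate trailer ending at file offset 'pos' is only plausible if the
// whole member fits before it and is at least min_member_size (36) bytes;
// this mirrors the 'member_size > ipos + i' rejection in skip_trailing_data.
bool plausible_member_size( const uint8_t * const trailer,
                            const unsigned long long pos )
  {
  const unsigned long long member_size = le_read( trailer + 12, 8 );
  return member_size >= 36 && member_size <= pos;
  }
```

Candidates that pass this size check are then confirmed by seeking back `member_size` bytes and verifying a full header there, which is what the new `read_header`/`check_header_error` helpers factor out.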
-bool Lzip_index::skip_trailing_data( const int fd, long long & pos, - const bool ignore_trailing, const bool loose_trailing ) +bool Lzip_index::skip_trailing_data( const int fd, unsigned long long & pos, + const bool ignore_trailing, + const bool loose_trailing ) { + if( pos < min_member_size ) return false; enum { block_size = 16384, buffer_size = block_size + Lzip_trailer::size - 1 + Lzip_header::size }; uint8_t buffer[buffer_size]; - if( pos < min_member_size ) return false; int bsize = pos % block_size; // total bytes in buffer if( bsize <= buffer_size - block_size ) bsize += block_size; int search_size = bsize; // bytes to search for trailer @@ -89,26 +111,30 @@ bool Lzip_index::skip_trailing_data( const int fd, long long & pos, if( member_size > ipos + i || !trailer.verify_consistency() ) continue; Lzip_header header; - if( seek_read( fd, header.data, Lzip_header::size, - ipos + i - member_size ) != Lzip_header::size ) - { set_errno_error( "Error reading member header: " ); return false; } - const unsigned dictionary_size = header.dictionary_size(); - if( !header.verify_magic() || !header.verify_version() || - !isvalid_ds( dictionary_size ) ) continue; - if( (*(const Lzip_header *)( buffer + i )).verify_prefix( bsize - i ) ) - { error_ = "Last member in input file is truncated or corrupt."; - retval_ = 2; return false; } - if( !loose_trailing && bsize - i >= Lzip_header::size && - (*(const Lzip_header *)( buffer + i )).verify_corrupt() ) + if( !read_header( fd, header, ipos + i - member_size ) ) return false; + if( !header.verify() ) continue; + const Lzip_header & header2 = *(const Lzip_header *)( buffer + i ); + const bool full_h2 = bsize - i >= Lzip_header::size; + if( header2.verify_prefix( bsize - i ) ) // last member + { + if( !full_h2 ) error_ = "Last member in input file is truncated."; + else if( !check_header_error( header2, false ) ) + error_ = "Last member in input file is truncated or corrupt."; + retval_ = 2; return false; + } + if( 
!loose_trailing && full_h2 && header2.verify_corrupt() ) { error_ = corrupt_mm_msg; retval_ = 2; return false; } if( !ignore_trailing ) { error_ = trailing_msg; retval_ = 2; return false; } pos = ipos + i - member_size; + const unsigned dictionary_size = header.dictionary_size(); member_vector.push_back( Member( 0, trailer.data_size(), pos, member_size, dictionary_size ) ); + if( dictionary_size_ < dictionary_size ) + dictionary_size_ = dictionary_size; return true; } - if( ipos <= 0 ) + if( ipos == 0 ) { set_num_error( "Bad trailer at pos ", pos - Lzip_trailer::size ); return false; } bsize = buffer_size; @@ -122,7 +148,8 @@ bool Lzip_index::skip_trailing_data( const int fd, long long & pos, Lzip_index::Lzip_index( const int infd, const bool ignore_trailing, const bool loose_trailing ) - : insize( lseek( infd, 0, SEEK_END ) ), retval_( 0 ) + : insize( lseek( infd, 0, SEEK_END ) ), retval_( 0 ), dictionary_size_( 0 ), + bad_magic_( false ) { if( insize < 0 ) { set_errno_error( "Input file is not seekable: " ); return; } @@ -133,16 +160,10 @@ Lzip_index::Lzip_index( const int infd, const bool ignore_trailing, retval_ = 2; return; } Lzip_header header; - if( seek_read( infd, header.data, Lzip_header::size, 0 ) != Lzip_header::size ) - { set_errno_error( "Error reading member header: " ); return; } - if( !header.verify_magic() ) - { error_ = bad_magic_msg; retval_ = 2; return; } - if( !header.verify_version() ) - { error_ = bad_version( header.version() ); retval_ = 2; return; } - if( !isvalid_ds( header.dictionary_size() ) ) - { error_ = bad_dict_msg; retval_ = 2; return; } + if( !read_header( infd, header, 0 ) ) return; + if( check_header_error( header, true ) ) return; - long long pos = insize; // always points to a header or to EOF + unsigned long long pos = insize; // always points to a header or to EOF while( pos >= min_member_size ) { Lzip_trailer trailer; @@ -150,7 +171,7 @@ Lzip_index::Lzip_index( const int infd, const bool ignore_trailing, pos - 
Lzip_trailer::size ) != Lzip_trailer::size ) { set_errno_error( "Error reading member trailer: " ); break; } const unsigned long long member_size = trailer.member_size(); - if( member_size > (unsigned long long)pos || !trailer.verify_consistency() ) + if( member_size > pos || !trailer.verify_consistency() ) // bad trailer { if( member_vector.empty() ) { if( skip_trailing_data( infd, pos, ignore_trailing, loose_trailing ) ) @@ -158,12 +179,8 @@ Lzip_index::Lzip_index( const int infd, const bool ignore_trailing, set_num_error( "Bad trailer at pos ", pos - Lzip_trailer::size ); break; } - if( seek_read( infd, header.data, Lzip_header::size, - pos - member_size ) != Lzip_header::size ) - { set_errno_error( "Error reading member header: " ); break; } - const unsigned dictionary_size = header.dictionary_size(); - if( !header.verify_magic() || !header.verify_version() || - !isvalid_ds( dictionary_size ) ) + if( !read_header( infd, header, pos - member_size ) ) break; + if( !header.verify() ) // bad header { if( member_vector.empty() ) { if( skip_trailing_data( infd, pos, ignore_trailing, loose_trailing ) ) @@ -172,8 +189,11 @@ Lzip_index::Lzip_index( const int infd, const bool ignore_trailing, break; } pos -= member_size; + const unsigned dictionary_size = header.dictionary_size(); member_vector.push_back( Member( 0, trailer.data_size(), pos, member_size, dictionary_size ) ); + if( dictionary_size_ < dictionary_size ) + dictionary_size_ = dictionary_size; } if( pos != 0 || member_vector.empty() ) { diff --git a/lzip_index.h b/lzip_index.h index 3775446..601b32a 100644 --- a/lzip_index.h +++ b/lzip_index.h @@ -1,18 +1,18 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009-2019 Antonio Diaz Diaz. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009-2021 Antonio Diaz Diaz. 
- This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. - You should have received a copy of the GNU General Public License - along with this program. If not, see <http://www.gnu.org/licenses/>. + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. 
*/ #ifndef INT64_MAX @@ -52,10 +52,14 @@ class Lzip_index std::string error_; const long long insize; int retval_; + unsigned dictionary_size_; // largest dictionary size in the file + bool bad_magic_; // bad magic in first header + bool check_header_error( const Lzip_header & header, const bool first ); void set_errno_error( const char * const msg ); void set_num_error( const char * const msg, unsigned long long num ); - bool skip_trailing_data( const int fd, long long & pos, + bool read_header( const int fd, Lzip_header & header, const long long pos ); + bool skip_trailing_data( const int fd, unsigned long long & pos, const bool ignore_trailing, const bool loose_trailing ); public: @@ -65,6 +69,8 @@ public: long members() const { return member_vector.size(); } const std::string & error() const { return error_; } int retval() const { return retval_; } + unsigned dictionary_size() const { return dictionary_size_; } + bool bad_magic() const { return bad_magic_; } long long udata_size() const { if( member_vector.empty() ) return 0; diff --git a/main.cc b/main.cc index 5eab9f9..6eae5c1 100644 --- a/main.cc +++ b/main.cc @@ -1,25 +1,25 @@ -/* Plzip - Massively parallel implementation of lzip - Copyright (C) 2009 Laszlo Ersek. - Copyright (C) 2009-2019 Antonio Diaz Diaz. - - This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program. If not, see <http://www.gnu.org/licenses/>. +/* Plzip - Massively parallel implementation of lzip + Copyright (C) 2009 Laszlo Ersek. 
+ Copyright (C) 2009-2021 Antonio Diaz Diaz. + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. */ /* - Exit status: 0 for a normal exit, 1 for environmental problems - (file not found, invalid flags, I/O errors, etc), 2 to indicate a - corrupt or invalid input file, 3 for an internal consistency error - (eg, bug) which caused plzip to panic. + Exit status: 0 for a normal exit, 1 for environmental problems + (file not found, invalid flags, I/O errors, etc), 2 to indicate a + corrupt or invalid input file, 3 for an internal consistency error + (eg, bug) which caused plzip to panic. */ #define _FILE_OFFSET_BITS 64 @@ -34,7 +34,6 @@ #include #include #include -#include #include #include #include @@ -73,8 +72,8 @@ int verbosity = 0; namespace { const char * const program_name = "plzip"; -const char * const program_year = "2019"; -const char * invocation_name = 0; +const char * const program_year = "2021"; +const char * invocation_name = program_name; // default value const struct { const char * from; const char * to; } known_extensions[] = { { ".lz", "" }, @@ -99,20 +98,22 @@ bool delete_output_on_interrupt = false; void show_help( const long num_online ) { std::printf( "Plzip is a massively parallel (multi-threaded) implementation of lzip, fully\n" - "compatible with lzip 1.4 or newer. 
Plzip uses the lzlib compression library.\n" - "\nLzip is a lossless data compressor with a user interface similar to the\n" - "one of gzip or bzip2. Lzip can compress about as fast as gzip (lzip -0)\n" - "or compress most files more than bzip2 (lzip -9). Decompression speed is\n" - "intermediate between gzip and bzip2. Lzip is better than gzip and bzip2\n" - "from a data recovery perspective. Lzip has been designed, written and\n" - "tested with great care to replace gzip and bzip2 as the standard\n" - "general-purpose compressed format for unix-like systems.\n" - "\nPlzip can compress/decompress large files on multiprocessor machines\n" - "much faster than lzip, at the cost of a slightly reduced compression\n" - "ratio (0.4 to 2 percent larger compressed files). Note that the number\n" - "of usable threads is limited by file size; on files larger than a few GB\n" - "plzip can use hundreds of processors, but on files of only a few MB\n" - "plzip is no faster than lzip.\n" + "compatible with lzip 1.4 or newer. Plzip uses the compression library lzlib.\n" + "\nLzip is a lossless data compressor with a user interface similar to the one\n" + "of gzip or bzip2. Lzip uses a simplified form of the 'Lempel-Ziv-Markov\n" + "chain-Algorithm' (LZMA) stream format, chosen to maximize safety and\n" + "interoperability. Lzip can compress about as fast as gzip (lzip -0) or\n" + "compress most files more than bzip2 (lzip -9). Decompression speed is\n" + "intermediate between gzip and bzip2. Lzip is better than gzip and bzip2 from\n" + "a data recovery perspective. Lzip has been designed, written, and tested\n" + "with great care to replace gzip and bzip2 as the standard general-purpose\n" + "compressed format for unix-like systems.\n" + "\nPlzip can compress/decompress large files on multiprocessor machines much\n" + "faster than lzip, at the cost of a slightly reduced compression ratio (0.4\n" + "to 2 percent larger compressed files). 
Note that the number of usable\n" + "threads is limited by file size; on files larger than a few GB plzip can use\n" + "hundreds of processors, but on files of only a few MB plzip is no faster\n" + "than lzip.\n" "\nUsage: %s [options] [files]\n", invocation_name ); std::printf( "\nOptions:\n" " -h, --help display this help and exit\n" @@ -127,7 +128,7 @@ void show_help( const long num_online ) " -l, --list print (un)compressed file sizes\n" " -m, --match-length= set match length limit in bytes [36]\n" " -n, --threads= set number of (de)compression threads [%ld]\n" - " -o, --output= if reading standard input, write to \n" + " -o, --output= write to , keep input files\n" " -q, --quiet suppress all messages\n" " -s, --dictionary-size= set dictionary size limit in bytes [8 MiB]\n" " -t, --test test compressed file integrity\n" @@ -138,12 +139,13 @@ void show_help( const long num_online ) " --loose-trailing allow trailing data seeming corrupt header\n" " --in-slots= number of 1 MiB input packets buffered [4]\n" " --out-slots= number of 1 MiB output packets buffered [64]\n" - , num_online ); + " --check-lib compare version of lzlib.h with liblz.{a,so}\n", + num_online ); if( verbosity >= 1 ) { - std::printf( " --debug= (0-1) print debug statistics to stderr\n" ); + std::printf( " --debug= print mode(2), debug statistics(1) to stderr\n" ); } - std::printf( "If no file names are given, or if a file is '-', plzip compresses or\n" + std::printf( "\nIf no file names are given, or if a file is '-', plzip compresses or\n" "decompresses from standard input to standard output.\n" "Numbers may be followed by a multiplier: k = kB = 10^3 = 1000,\n" "Ki = KiB = 2^10 = 1024, M = 10^6, Mi = 2^20, G = 10^9, Gi = 2^30, etc...\n" @@ -151,8 +153,10 @@ void show_help( const long num_online ) "to 2^29 bytes.\n" "\nThe bidimensional parameter space of LZMA can't be mapped to a linear\n" "scale optimal for all files. 
If your files are large, very repetitive,\n" - "etc, you may need to use the --dictionary-size and --match-length\n" - "options directly to achieve optimal performance.\n" + "etc, you may need to use the options --dictionary-size and --match-length\n" + "directly to achieve optimal performance.\n" + "\nTo extract all the files from archive 'foo.tar.lz', use the commands\n" + "'tar -xf foo.tar.lz' or 'plzip -cd foo.tar.lz | tar -xf -'.\n" "\nExit status: 0 for a normal exit, 1 for environmental problems (file\n" "not found, invalid flags, I/O errors, etc), 2 to indicate a corrupt or\n" "invalid input file, 3 for an internal consistency error (eg, bug) which\n" @@ -173,6 +177,37 @@ void show_version() "There is NO WARRANTY, to the extent permitted by law.\n" ); } + +int check_lib() + { + bool warning = false; + if( std::strcmp( LZ_version_string, LZ_version() ) != 0 ) + { warning = true; + if( verbosity >= 0 ) + std::printf( "warning: LZ_version_string != LZ_version() (%s vs %s)\n", + LZ_version_string, LZ_version() ); } +#if defined LZ_API_VERSION && LZ_API_VERSION >= 1012 + if( LZ_API_VERSION != LZ_api_version() ) + { warning = true; + if( verbosity >= 0 ) + std::printf( "warning: LZ_API_VERSION != LZ_api_version() (%u vs %u)\n", + LZ_API_VERSION, LZ_api_version() ); } +#endif + if( verbosity >= 1 ) + { + std::printf( "Using lzlib %s\n", LZ_version() ); +#if !defined LZ_API_VERSION + std::fputs( "LZ_API_VERSION is not defined.\n", stdout ); +#elif LZ_API_VERSION >= 1012 + std::printf( "Using LZ_API_VERSION = %u\n", LZ_api_version() ); +#else + std::printf( "Compiled with LZ_API_VERSION = %u. 
" + "Using an unknown LZ_API_VERSION\n", LZ_API_VERSION ); +#endif + } + return warning; + } + } // end namespace void Pretty_print::operator()( const char * const msg ) const @@ -220,7 +255,7 @@ const char * format_ds( const unsigned dictionary_size ) void show_header( const unsigned dictionary_size ) { - std::fprintf( stderr, "dictionary %s, ", format_ds( dictionary_size ) ); + std::fprintf( stderr, "dict %s, ", format_ds( dictionary_size ) ); } namespace { @@ -313,10 +348,14 @@ int extension_index( const std::string & name ) } -void set_c_outname( const std::string & name, const bool force_ext ) +void set_c_outname( const std::string & name, const bool filenames_given, + const bool force_ext ) { + /* zupdate < 1.9 depends on lzip adding the extension '.lz' to name when + reading from standard input. */ output_filename = name; - if( force_ext || extension_index( output_filename ) < 0 ) + if( force_ext || + ( !filenames_given && extension_index( output_filename ) < 0 ) ) output_filename += known_extensions[0].from; } @@ -342,7 +381,7 @@ void set_d_outname( const std::string & name, const int eindex ) } // end namespace int open_instream( const char * const name, struct stat * const in_statsp, - const bool no_ofile, const bool reg_only ) + const bool one_to_one, const bool reg_only ) { int infd = open( name, O_RDONLY | O_BINARY ); if( infd < 0 ) @@ -354,13 +393,12 @@ int open_instream( const char * const name, struct stat * const in_statsp, const bool can_read = ( i == 0 && !reg_only && ( S_ISBLK( mode ) || S_ISCHR( mode ) || S_ISFIFO( mode ) || S_ISSOCK( mode ) ) ); - if( i != 0 || ( !S_ISREG( mode ) && ( !can_read || !no_ofile ) ) ) + if( i != 0 || ( !S_ISREG( mode ) && ( !can_read || one_to_one ) ) ) { if( verbosity >= 0 ) std::fprintf( stderr, "%s: Input file '%s' is not a regular file%s.\n", - program_name, name, - ( can_read && !no_ofile ) ? - ",\n and '--stdout' was not specified" : "" ); + program_name, name, ( can_read && one_to_one ) ? 
+ ",\n and neither '-c' nor '-o' were specified" : "" ); close( infd ); infd = -1; } @@ -372,7 +410,7 @@ namespace { int open_instream2( const char * const name, struct stat * const in_statsp, const Mode program_mode, const int eindex, - const bool recompress, const bool to_stdout ) + const bool one_to_one, const bool recompress ) { if( program_mode == m_compress && !recompress && eindex >= 0 ) { @@ -381,16 +419,15 @@ int open_instream2( const char * const name, struct stat * const in_statsp, program_name, name, known_extensions[eindex].from ); return -1; } - const bool no_ofile = ( to_stdout || program_mode == m_test ); - return open_instream( name, in_statsp, no_ofile, false ); + return open_instream( name, in_statsp, one_to_one, false ); } -bool open_outstream( const bool force, const bool from_stdin ) +bool open_outstream( const bool force, const bool protect ) { const mode_t usr_rw = S_IRUSR | S_IWUSR; const mode_t all_rw = usr_rw | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH; - const mode_t outfd_mode = from_stdin ? all_rw : usr_rw; + const mode_t outfd_mode = protect ? usr_rw : all_rw; int flags = O_CREAT | O_WRONLY | O_BINARY; if( force ) flags |= O_TRUNC; else flags |= O_EXCL; @@ -409,25 +446,6 @@ bool open_outstream( const bool force, const bool from_stdin ) } -bool check_tty( const char * const input_filename, const int infd, - const Mode program_mode ) - { - if( program_mode == m_compress && isatty( outfd ) ) - { - show_error( "I won't write compressed data to a terminal.", 0, true ); - return false; - } - if( ( program_mode == m_decompress || program_mode == m_test ) && - isatty( infd ) ) - { - show_file_error( input_filename, - "I won't read compressed data from a terminal." 
); - return false; - } - return true; - } - - void set_signals( void (*action)(int) ) { std::signal( SIGHUP, action ); @@ -437,10 +455,10 @@ void set_signals( void (*action)(int) ) } // end namespace -// This can be called from any thread, main thread or sub-threads alike, -// since they all call common helper functions that call cleanup_and_fail() -// in case of an error. -// +/* This can be called from any thread, main thread or sub-threads alike, + since they all call common helper functions like 'xlock' that call + cleanup_and_fail() in case of an error. +*/ void cleanup_and_fail( const int retval ) { // only one thread can delete and exit @@ -474,7 +492,31 @@ extern "C" void signal_handler( int ) } - // Set permissions, owner and times. +bool check_tty_in( const char * const input_filename, const int infd, + const Mode program_mode, int & retval ) + { + if( ( program_mode == m_decompress || program_mode == m_test ) && + isatty( infd ) ) // for example /dev/tty + { show_file_error( input_filename, + "I won't read compressed data from a terminal." ); + close( infd ); set_retval( retval, 1 ); + if( program_mode != m_test ) cleanup_and_fail( retval ); + return false; } + return true; + } + +bool check_tty_out( const Mode program_mode ) + { + if( program_mode == m_compress && isatty( outfd ) ) + { show_file_error( output_filename.size() ? + output_filename.c_str() : "(stdout)", + "I won't write compressed data to a terminal." ); + return false; } + return true; + } + + +// Set permissions, owner, and times. void close_and_set_permissions( const struct stat * const in_statsp ) { bool warning = false; @@ -622,24 +664,20 @@ int main( const int argc, const char * const argv[] ) bool loose_trailing = false; bool recompress = false; bool to_stdout = false; - invocation_name = argv[0]; - - if( LZ_version()[0] < '1' ) - { show_error( "Bad library version. At least lzlib 1.0 is required." 
); - return 1; } + if( argc > 0 ) invocation_name = argv[0]; - enum { opt_dbg = 256, opt_in, opt_lt, opt_out }; + enum { opt_chk = 256, opt_dbg, opt_in, opt_lt, opt_out }; const Arg_parser::Option options[] = { { '0', "fast", Arg_parser::no }, - { '1', 0, Arg_parser::no }, - { '2', 0, Arg_parser::no }, - { '3', 0, Arg_parser::no }, - { '4', 0, Arg_parser::no }, - { '5', 0, Arg_parser::no }, - { '6', 0, Arg_parser::no }, - { '7', 0, Arg_parser::no }, - { '8', 0, Arg_parser::no }, + { '1', 0, Arg_parser::no }, + { '2', 0, Arg_parser::no }, + { '3', 0, Arg_parser::no }, + { '4', 0, Arg_parser::no }, + { '5', 0, Arg_parser::no }, + { '6', 0, Arg_parser::no }, + { '7', 0, Arg_parser::no }, + { '8', 0, Arg_parser::no }, { '9', "best", Arg_parser::no }, { 'a', "trailing-error", Arg_parser::no }, { 'b', "member-size", Arg_parser::yes }, @@ -660,11 +698,12 @@ int main( const int argc, const char * const argv[] ) { 't', "test", Arg_parser::no }, { 'v', "verbose", Arg_parser::no }, { 'V', "version", Arg_parser::no }, + { opt_chk, "check-lib", Arg_parser::no }, { opt_dbg, "debug", Arg_parser::yes }, { opt_in, "in-slots", Arg_parser::yes }, { opt_lt, "loose-trailing", Arg_parser::no }, { opt_out, "out-slots", Arg_parser::yes }, - { 0 , 0, Arg_parser::no } }; + { 0, 0, Arg_parser::no } }; const Arg_parser parser( argc, argv, options ); if( parser.error().size() ) // bad option @@ -702,7 +741,8 @@ int main( const int argc, const char * const argv[] ) getnum( arg, LZ_min_match_len_limit(), LZ_max_match_len_limit() ); break; case 'n': num_workers = getnum( arg, 1, max_workers ); break; - case 'o': default_output_filename = sarg; break; + case 'o': if( sarg == "-" ) to_stdout = true; + else { default_output_filename = sarg; } break; case 'q': verbosity = -1; break; case 's': encoder_options.dictionary_size = get_dict_size( arg ); break; @@ -710,6 +750,7 @@ int main( const int argc, const char * const argv[] ) case 't': set_mode( program_mode, m_test ); break; case 'v': if( verbosity 
< 4 ) ++verbosity; break; case 'V': show_version(); return 0; + case opt_chk: return check_lib(); case opt_dbg: debug_level = getnum( arg, 0, 3 ); break; case opt_in: in_slots = getnum( arg, 1, 64 ); break; case opt_lt: loose_trailing = true; break; @@ -718,6 +759,10 @@ int main( const int argc, const char * const argv[] ) } } // end process options + if( LZ_version()[0] < '1' ) + { show_error( "Wrong library version. At least lzlib 1.0 is required." ); + return 1; } + #if defined(__MSVCRT__) || defined(__OS2__) setmode( STDIN_FILENO, O_BINARY ); setmode( STDOUT_FILENO, O_BINARY ); @@ -734,9 +779,6 @@ int main( const int argc, const char * const argv[] ) if( program_mode == m_list ) return list_files( filenames, ignore_trailing, loose_trailing ); - if( program_mode == m_test ) - outfd = -1; - const bool fast = encoder_options.dictionary_size == 65535 && encoder_options.match_len_limit == 16; if( data_size <= 0 ) @@ -762,112 +804,99 @@ int main( const int argc, const char * const argv[] ) num_workers = std::min( num_online, max_workers ); } - if( !to_stdout && program_mode != m_test && - ( filenames_given || default_output_filename.size() ) ) + if( program_mode == m_test ) to_stdout = false; // apply overrides + if( program_mode == m_test || to_stdout ) default_output_filename.clear(); + + if( to_stdout && program_mode != m_test ) // check tty only once + { outfd = STDOUT_FILENO; if( !check_tty_out( program_mode ) ) return 1; } + else outfd = -1; + + const bool to_file = !to_stdout && program_mode != m_test && + default_output_filename.size(); + if( !to_stdout && program_mode != m_test && ( filenames_given || to_file ) ) set_signals( signal_handler ); Pretty_print pp( filenames ); int failed_tests = 0; int retval = 0; + const bool one_to_one = !to_stdout && program_mode != m_test && !to_file; bool stdin_used = false; for( unsigned i = 0; i < filenames.size(); ++i ) { std::string input_filename; int infd; struct stat in_stats; - output_filename.clear(); - if( 
filenames[i].empty() || filenames[i] == "-" ) + pp.set_name( filenames[i] ); + if( filenames[i] == "-" ) { if( stdin_used ) continue; else stdin_used = true; infd = STDIN_FILENO; - if( program_mode != m_test ) - { - if( to_stdout || default_output_filename.empty() ) - outfd = STDOUT_FILENO; - else - { - if( program_mode == m_compress ) - set_c_outname( default_output_filename, false ); - else output_filename = default_output_filename; - if( !open_outstream( force, true ) ) - { - if( retval < 1 ) retval = 1; - close( infd ); - continue; - } - } - } + if( !check_tty_in( pp.name(), infd, program_mode, retval ) ) continue; + if( one_to_one ) { outfd = STDOUT_FILENO; output_filename.clear(); } } else { const int eindex = extension_index( input_filename = filenames[i] ); infd = open_instream2( input_filename.c_str(), &in_stats, program_mode, - eindex, recompress, to_stdout ); - if( infd < 0 ) { if( retval < 1 ) retval = 1; continue; } - if( program_mode != m_test ) + eindex, one_to_one, recompress ); + if( infd < 0 ) { set_retval( retval, 1 ); continue; } + if( !check_tty_in( pp.name(), infd, program_mode, retval ) ) continue; + if( one_to_one ) // open outfd after verifying infd { - if( to_stdout ) outfd = STDOUT_FILENO; - else - { - if( program_mode == m_compress ) - set_c_outname( input_filename, true ); - else set_d_outname( input_filename, eindex ); - if( !open_outstream( force, false ) ) - { - if( retval < 1 ) retval = 1; - close( infd ); - continue; - } - } + if( program_mode == m_compress ) + set_c_outname( input_filename, true, true ); + else set_d_outname( input_filename, eindex ); + if( !open_outstream( force, true ) ) + { close( infd ); set_retval( retval, 1 ); continue; } } } - pp.set_name( input_filename ); - if( !check_tty( pp.name(), infd, program_mode ) ) + if( one_to_one && !check_tty_out( program_mode ) ) + { set_retval( retval, 1 ); return retval; } // don't delete a tty + + if( to_file && outfd < 0 ) // open outfd after verifying infd { - if( retval 
< 1 ) retval = 1; - if( program_mode == m_test ) { close( infd ); continue; } - cleanup_and_fail( retval ); + if( program_mode == m_compress ) set_c_outname( default_output_filename, + filenames_given, false ); + else output_filename = default_output_filename; + if( !open_outstream( force, false ) || !check_tty_out( program_mode ) ) + return 1; // check tty only once and don't try to delete a tty } - const struct stat * const in_statsp = input_filename.size() ? &in_stats : 0; - const bool infd_isreg = in_statsp && S_ISREG( in_statsp->st_mode ); + const struct stat * const in_statsp = + ( input_filename.size() && one_to_one ) ? &in_stats : 0; + const bool infd_isreg = input_filename.size() && S_ISREG( in_stats.st_mode ); const unsigned long long cfile_size = - infd_isreg ? ( in_statsp->st_size + 99 ) / 100 : 0; + infd_isreg ? ( in_stats.st_size + 99 ) / 100 : 0; int tmp; if( program_mode == m_compress ) tmp = compress( cfile_size, data_size, encoder_options.dictionary_size, - encoder_options.match_len_limit, - num_workers, infd, outfd, pp, debug_level ); + encoder_options.match_len_limit, num_workers, + infd, outfd, pp, debug_level ); else - tmp = decompress( cfile_size, num_workers, infd, outfd, pp, debug_level, - in_slots, out_slots, ignore_trailing, loose_trailing, - infd_isreg ); + tmp = decompress( cfile_size, num_workers, infd, outfd, pp, + debug_level, in_slots, out_slots, ignore_trailing, + loose_trailing, infd_isreg, one_to_one ); if( close( infd ) != 0 ) - { - show_error( input_filename.size() ? 
"Error closing input file" : - "Error closing stdin", errno ); - if( tmp < 1 ) tmp = 1; - } - if( tmp > retval ) retval = tmp; + { show_file_error( pp.name(), "Error closing input file", errno ); + set_retval( tmp, 1 ); } + set_retval( retval, tmp ); if( tmp ) { if( program_mode != m_test ) cleanup_and_fail( retval ); else ++failed_tests; } - if( delete_output_on_interrupt ) + if( delete_output_on_interrupt && one_to_one ) close_and_set_permissions( in_statsp ); - if( input_filename.size() ) - { - if( !keep_input_files && !to_stdout && program_mode != m_test ) - std::remove( input_filename.c_str() ); - } + if( input_filename.size() && !keep_input_files && one_to_one ) + std::remove( input_filename.c_str() ); } - if( outfd >= 0 && close( outfd ) != 0 ) + if( delete_output_on_interrupt ) close_and_set_permissions( 0 ); // -o + else if( outfd >= 0 && close( outfd ) != 0 ) // -c { show_error( "Error closing stdout", errno ); - if( retval < 1 ) retval = 1; + set_retval( retval, 1 ); } if( failed_tests > 0 && verbosity >= 1 && filenames.size() > 1 ) std::fprintf( stderr, "%s: warning: %d %s failed the test.\n", diff --git a/testsuite/check.sh b/testsuite/check.sh index 59fe3f5..d4ee57e 100755 --- a/testsuite/check.sh +++ b/testsuite/check.sh @@ -1,9 +1,9 @@ #! /bin/sh # check script for Plzip - Massively parallel implementation of lzip -# Copyright (C) 2009-2019 Antonio Diaz Diaz. +# Copyright (C) 2009-2021 Antonio Diaz Diaz. # # This script is free software: you have unlimited permission -# to copy, distribute and modify it. +# to copy, distribute, and modify it. 
LC_ALL=C export LC_ALL @@ -30,6 +30,7 @@ cd "${objdir}"/tmp || framework_failure cat "${testdir}"/test.txt > in || framework_failure in_lz="${testdir}"/test.txt.lz +in_em="${testdir}"/test_em.txt.lz fail=0 lwarn8=0 lwarn10=0 @@ -41,6 +42,7 @@ lzlib_1_10() { [ ${lwarn10} = 0 ] && printf "\nwarning: header HD=3 detection requires lzlib 1.10 or newer" lwarn10=1 ; } +"${LZIP}" --check-lib # just print warning printf "testing plzip-%s..." "$2" "${LZIP}" -fkqm4 in @@ -66,6 +68,14 @@ done [ $? = 2 ] || test_failed $LINENO "${LZIP}" -dq -o in < "${in_lz}" [ $? = 1 ] || test_failed $LINENO +"${LZIP}" -dq -o in "${in_lz}" +[ $? = 1 ] || test_failed $LINENO +"${LZIP}" -dq -o out nx_file.lz +[ $? = 1 ] || test_failed $LINENO +[ ! -e out ] || test_failed $LINENO +"${LZIP}" -q -o out.lz nx_file +[ $? = 1 ] || test_failed $LINENO +[ ! -e out.lz ] || test_failed $LINENO # these are for code coverage "${LZIP}" -lt "${in_lz}" 2> /dev/null [ $? = 1 ] || test_failed $LINENO @@ -73,7 +83,9 @@ done [ $? = 1 ] || test_failed $LINENO "${LZIP}" -cdt "${in_lz}" > out 2> /dev/null [ $? = 1 ] || test_failed $LINENO -"${LZIP}" -t -- nx_file 2> /dev/null +"${LZIP}" -t -- nx_file.lz 2> /dev/null +[ $? = 1 ] || test_failed $LINENO +"${LZIP}" -t "" < /dev/null 2> /dev/null [ $? = 1 ] || test_failed $LINENO "${LZIP}" --help > /dev/null || test_failed $LINENO "${LZIP}" -n1 -V > /dev/null || test_failed $LINENO @@ -97,12 +109,26 @@ printf "LZIP\001+.............................." | "${LZIP}" -t 2> /dev/null printf "\ntesting decompression..." 
-"${LZIP}" -lq "${in_lz}" || test_failed $LINENO -"${LZIP}" -t "${in_lz}" || test_failed $LINENO -"${LZIP}" -cd "${in_lz}" > copy || test_failed $LINENO -cmp in copy || test_failed $LINENO +for i in "${in_lz}" "${in_em}" ; do + "${LZIP}" -lq "$i" || test_failed $LINENO "$i" + "${LZIP}" -t "$i" || test_failed $LINENO "$i" + "${LZIP}" -d "$i" -o copy || test_failed $LINENO "$i" + cmp in copy || test_failed $LINENO "$i" + "${LZIP}" -cd "$i" > copy || test_failed $LINENO "$i" + cmp in copy || test_failed $LINENO "$i" + "${LZIP}" -d "$i" -o - > copy || test_failed $LINENO "$i" + cmp in copy || test_failed $LINENO "$i" + "${LZIP}" -d < "$i" > copy || test_failed $LINENO "$i" + cmp in copy || test_failed $LINENO "$i" + rm -f copy || framework_failure +done + +lines=$("${LZIP}" -tvv "${in_em}" 2>&1 | wc -l) || test_failed $LINENO +[ "${lines}" -eq 1 ] || test_failed $LINENO "${lines}" + +lines=$("${LZIP}" -lvv "${in_em}" | wc -l) || test_failed $LINENO +[ "${lines}" -eq 11 ] || test_failed $LINENO "${lines}" -rm -f copy || framework_failure cat "${in_lz}" > copy.lz || framework_failure "${LZIP}" -dk copy.lz || test_failed $LINENO cmp in copy || test_failed $LINENO @@ -113,19 +139,19 @@ printf "to be overwritten" > copy || framework_failure [ ! -e copy.lz ] || test_failed $LINENO cmp in copy || test_failed $LINENO -rm -f copy || framework_failure -cat "${in_lz}" > copy.lz || framework_failure -"${LZIP}" -d -S100k copy.lz || test_failed $LINENO # ignore -S -[ ! 
-e copy.lz ] || test_failed $LINENO -cmp in copy || test_failed $LINENO - printf "to be overwritten" > copy || framework_failure "${LZIP}" -df -o copy < "${in_lz}" || test_failed $LINENO cmp in copy || test_failed $LINENO +rm -f out copy || framework_failure +"${LZIP}" -d -o ./- "${in_lz}" || test_failed $LINENO +cmp in ./- || test_failed $LINENO +rm -f ./- || framework_failure +"${LZIP}" -d -o ./- < "${in_lz}" || test_failed $LINENO +cmp in ./- || test_failed $LINENO +rm -f ./- || framework_failure -rm -f copy || framework_failure -"${LZIP}" < in > anyothername || test_failed $LINENO -"${LZIP}" -dv --output copy - anyothername - < "${in_lz}" 2> /dev/null || +cat "${in_lz}" > anyothername || framework_failure +"${LZIP}" -dv - anyothername - < "${in_lz}" > copy 2> /dev/null || test_failed $LINENO cmp in copy || test_failed $LINENO cmp in anyothername.out || test_failed $LINENO @@ -166,21 +192,19 @@ done cmp in copy || test_failed $LINENO cat in in > in2 || framework_failure -cat "${in_lz}" "${in_lz}" > in2.lz || framework_failure -"${LZIP}" -lq in2.lz || test_failed $LINENO -"${LZIP}" -t in2.lz || test_failed $LINENO -"${LZIP}" -cd in2.lz > copy2 || test_failed $LINENO +"${LZIP}" -lq "${in_lz}" "${in_lz}" || test_failed $LINENO +"${LZIP}" -t "${in_lz}" "${in_lz}" || test_failed $LINENO +"${LZIP}" -cd "${in_lz}" "${in_lz}" -o out > copy2 || test_failed $LINENO +[ ! 
-e out ] || test_failed $LINENO # override -o cmp in2 copy2 || test_failed $LINENO - -"${LZIP}" --output=copy2.lz < in2 || test_failed $LINENO -"${LZIP}" -lq copy2.lz || test_failed $LINENO -"${LZIP}" -t copy2.lz || test_failed $LINENO -"${LZIP}" -cd copy2.lz > copy2 || test_failed $LINENO +rm -f copy2 || framework_failure +"${LZIP}" -d "${in_lz}" "${in_lz}" -o copy2 || test_failed $LINENO cmp in2 copy2 || test_failed $LINENO +rm -f copy2 || framework_failure +cat "${in_lz}" "${in_lz}" > copy2.lz || framework_failure printf "\ngarbage" >> copy2.lz || framework_failure "${LZIP}" -tvvvv copy2.lz 2> /dev/null || test_failed $LINENO -rm -f copy2 || framework_failure "${LZIP}" -alq copy2.lz [ $? = 2 ] || test_failed $LINENO "${LZIP}" -atq copy2.lz @@ -202,37 +226,46 @@ printf "\ntesting compression..." "${LZIP}" -cf "${in_lz}" > out 2> /dev/null # /dev/null is a tty on OS/2 [ $? = 1 ] || test_failed $LINENO -"${LZIP}" -cFvvm36 "${in_lz}" > out 2> /dev/null || test_failed $LINENO +"${LZIP}" -Fvvm36 -o - "${in_lz}" > out 2> /dev/null || test_failed $LINENO "${LZIP}" -cd out | "${LZIP}" -d > copy || test_failed $LINENO cmp in copy || test_failed $LINENO +"${LZIP}" -0 -o ./- in || test_failed $LINENO +"${LZIP}" -cd ./- | cmp in - || test_failed $LINENO +rm -f ./- || framework_failure +"${LZIP}" -0 -o ./- < in || test_failed $LINENO # add .lz +[ ! -e ./- ] || test_failed $LINENO +"${LZIP}" -cd -- -.lz | cmp in - || test_failed $LINENO +rm -f ./-.lz || framework_failure + for i in s4Ki 0 1 2 3 4 5 6 7 8 9 ; do "${LZIP}" -k -$i in || test_failed $LINENO $i mv -f in.lz copy.lz || test_failed $LINENO $i printf "garbage" >> copy.lz || framework_failure "${LZIP}" -df copy.lz || test_failed $LINENO $i cmp in copy || test_failed $LINENO $i -done -for i in s4Ki 0 1 2 3 4 5 6 7 8 9 ; do - "${LZIP}" -c -$i in > out || test_failed $LINENO $i + "${LZIP}" -$i in -c > out || test_failed $LINENO $i + "${LZIP}" -$i in -o o_out || test_failed $LINENO $i # don't add .lz + [ ! 
-e o_out.lz ] || test_failed $LINENO + cmp out o_out || test_failed $LINENO $i + rm -f o_out || framework_failure printf "g" >> out || framework_failure "${LZIP}" -cd out > copy || test_failed $LINENO $i cmp in copy || test_failed $LINENO $i -done -for i in s4Ki 0 1 2 3 4 5 6 7 8 9 ; do "${LZIP}" -$i < in > out || test_failed $LINENO $i "${LZIP}" -d < out > copy || test_failed $LINENO $i cmp in copy || test_failed $LINENO $i -done -for i in s4Ki 0 1 2 3 4 5 6 7 8 9 ; do - "${LZIP}" -f -$i -o out < in || test_failed $LINENO $i + rm -f out || framework_failure + printf "to be overwritten" > out.lz || framework_failure + "${LZIP}" -f -$i -o out < in || test_failed $LINENO $i # add .lz + [ ! -e out ] || test_failed $LINENO "${LZIP}" -df -o copy < out.lz || test_failed $LINENO $i cmp in copy || test_failed $LINENO $i done -rm -f out.lz || framework_failure +rm -f out out.lz || framework_failure cat in in in in > in4 || framework_failure for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ; do @@ -317,6 +350,13 @@ else fi rm -f int.lz || framework_failure +for i in fox_v2.lz fox_s11.lz fox_de20.lz \ + fox_bcrc.lz fox_crc0.lz fox_das46.lz fox_mes81.lz ; do + "${LZIP}" -tq "${testdir}"/$i + [ $? = 2 ] || test_failed $LINENO $i +done + +cat "${in_lz}" "${in_lz}" > in2.lz || framework_failure cat "${in_lz}" "${in_lz}" "${in_lz}" > in3.lz || framework_failure if dd if=in3.lz of=trunc.lz bs=14752 count=1 2> /dev/null && [ -e trunc.lz ] && cmp in2.lz trunc.lz > /dev/null 2>&1 ; then @@ -343,14 +383,22 @@ printf "g" >> ingin.lz || framework_failure cat "${in_lz}" >> ingin.lz || framework_failure "${LZIP}" -lq ingin.lz [ $? = 2 ] || test_failed $LINENO -"${LZIP}" -tq ingin.lz +"${LZIP}" -atq ingin.lz +[ $? = 2 ] || test_failed $LINENO +"${LZIP}" -atq < ingin.lz [ $? = 2 ] || test_failed $LINENO -"${LZIP}" -cdq ingin.lz > out +"${LZIP}" -acdq ingin.lz > out +[ $? = 2 ] || test_failed $LINENO +"${LZIP}" -adq < ingin.lz > out +[ $? 
= 2 ] || test_failed $LINENO
+"${LZIP}" -tq ingin.lz
[ $? = 2 ] || test_failed $LINENO
"${LZIP}" -t < ingin.lz || test_failed $LINENO
+"${LZIP}" -cdq ingin.lz > copy
+[ $? = 2 ] || test_failed $LINENO
"${LZIP}" -d < ingin.lz > copy || test_failed $LINENO
cmp in copy || test_failed $LINENO
-rm -f copy ingin.lz || framework_failure
+rm -f copy ingin.lz out || framework_failure
echo
if [ ${fail} = 0 ] ; then
diff --git a/testsuite/fox_bcrc.lz b/testsuite/fox_bcrc.lz
new file mode 100644
index 0000000..8f6a7c4
Binary files /dev/null and b/testsuite/fox_bcrc.lz differ
diff --git a/testsuite/fox_crc0.lz b/testsuite/fox_crc0.lz
new file mode 100644
index 0000000..1abe926
Binary files /dev/null and b/testsuite/fox_crc0.lz differ
diff --git a/testsuite/fox_das46.lz b/testsuite/fox_das46.lz
new file mode 100644
index 0000000..43ed9f9
Binary files /dev/null and b/testsuite/fox_das46.lz differ
diff --git a/testsuite/fox_de20.lz b/testsuite/fox_de20.lz
new file mode 100644
index 0000000..10949d8
Binary files /dev/null and b/testsuite/fox_de20.lz differ
diff --git a/testsuite/fox_mes81.lz b/testsuite/fox_mes81.lz
new file mode 100644
index 0000000..d50ef2e
Binary files /dev/null and b/testsuite/fox_mes81.lz differ
diff --git a/testsuite/fox_s11.lz b/testsuite/fox_s11.lz
new file mode 100644
index 0000000..dca909c
Binary files /dev/null and b/testsuite/fox_s11.lz differ
diff --git a/testsuite/fox_v2.lz b/testsuite/fox_v2.lz
new file mode 100644
index 0000000..8620981
Binary files /dev/null and b/testsuite/fox_v2.lz differ
diff --git a/testsuite/test_em.txt.lz b/testsuite/test_em.txt.lz
new file mode 100644
index 0000000..7e96250
Binary files /dev/null and b/testsuite/test_em.txt.lz differ