Commit Graph

39 Commits

Author SHA1 Message Date
Zbigniew Jędrzejewski-Szmek 349cc4a507 build-sys: use #if Y instead of #ifdef Y everywhere
The advantage is that if the name is misspelt, cpp will warn us.

$ git grep -Ee "conf.set\('(HAVE|ENABLE)_" -l|xargs sed -r -i "s/conf.set\('(HAVE|ENABLE)_/conf.set10('\1_/"
$ git grep -Ee '#ifn?def (HAVE|ENABLE)' -l|xargs sed -r -i 's/#ifdef (HAVE|ENABLE)/#if \1/; s/#ifndef (HAVE|ENABLE)/#if ! \1/;'
$ git grep -Ee 'if.*defined\(HAVE' -l|xargs sed -i -r 's/defined\((HAVE_[A-Z0-9_]*)\)/\1/g'
$ git grep -Ee 'if.*defined\(ENABLE' -l|xargs sed -i -r 's/defined\((ENABLE_[A-Z0-9_]*)\)/\1/g'
+ manual changes to meson.build

squash! build-sys: use #if Y instead of #ifdef Y everywhere

v2:
- fix incorrect setting of HAVE_LIBIDN2
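
For illustration, a minimal sketch of why the new spelling catches typos, assuming
-Wundef is enabled and conf.set10() always defines the macro to 0 or 1 (the *_TYPO
name below is deliberately made up):

/* illustration only: HAVE_LIBIDN2 stands in for what conf.set10() emits
 * into config.h; HAVE_LIBIDN2_TYPO is an intentionally misspelled name */
#define HAVE_LIBIDN2 1

#if HAVE_LIBIDN2              /* new style: expands to 1, block is built */
int have_idn2(void) { return 1; }
#endif

#if HAVE_LIBIDN2_TYPO         /* misspelled macro is undefined: -Wundef warns here */
int have_idn2_typo(void) { return 1; }
#endif

#ifdef HAVE_LIBIDN2_TYPO      /* old style: the same misspelling is silently false */
#endif
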
2017-10-04 12:09:29 +02:00
Zbigniew Jędrzejewski-Szmek 691b90d465 journal: fix warning about LZ4_compress_limitedOutput 2016-12-10 13:52:49 -05:00
Zbigniew Jędrzejewski-Szmek 8e170d2909 compress: fix gcc warnings about void* used in arithmetic
src/journal/compress.c: In function ‘compress_blob_lz4’:
src/journal/compress.c:115:49: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
         r = LZ4_compress_limitedOutput(src, dst + 8, src_size, (int) dst_alloc_size - 8);
                                                 ^
src/journal/compress.c: In function ‘decompress_blob_xz’:
src/journal/compress.c:179:35: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
                 s.next_out = *dst + used;
                                   ^
src/journal/compress.c: In function ‘decompress_blob_lz4’:
src/journal/compress.c:218:37: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
         r = LZ4_decompress_safe(src + 8, out, src_size - 8, size);
                                     ^
src/journal/compress.c: In function ‘decompress_startswith_xz’:
src/journal/compress.c:294:38: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
                 s.next_out = *buffer + *buffer_size - s.avail_out;
                                      ^
src/journal/compress.c:294:53: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
                 s.next_out = *buffer + *buffer_size - s.avail_out;
                                                     ^
src/journal/compress.c: In function ‘decompress_startswith_lz4’:
src/journal/compress.c:327:45: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
         r = LZ4_decompress_safe_partial(src + 8, *buffer, src_size - 8,
                                             ^

LZ4 and XZ functions use char* and unsigned char*, respectively,
so keep void* in our internal APIs and add casts.
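
As a minimal sketch of the pattern (a hypothetical helper, not the exact
compress.c code): keep void* in the signature and cast to char* where the
arithmetic happens, since -Wpointer-arith flags arithmetic on void*.

#include <stddef.h>

/* skip the 8-byte size prefix stored in front of a compressed blob */
static const void* blob_payload(const void *src, size_t src_size, size_t *payload_size) {
        if (src_size < 8)
                return NULL;

        *payload_size = src_size - 8;
        return (const char*) src + 8;   /* cast first, then do the arithmetic */
}
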
2016-04-02 18:58:21 -04:00
Elias Probst 82e24b0068 Use `PRIu64` to print `uint64_t` in log msgs 2016-02-29 23:00:21 +01:00
Daniel Mack b26fa1a2fb tree-wide: remove Emacs lines from all files
This should be handled fine now by .dir-locals.el, so there's no need to
carry that stuff in every file.
2016-02-10 13:41:57 +01:00
Lennart Poettering afd806fc48 Merge pull request #1607 from keszybz/lz4-remove-v1
Remove the old version of the lz4 stream compressor
2016-01-20 17:24:59 +01:00
Zbigniew Jędrzejewski-Szmek d487b81513 journal: fix reporting of output size in compress_stream_lz4
The header is 7 bytes, and this size was not accounted for in
total_out. This means that we could create a file that was 7 bytes
longer than requested, and the debug output was also inconsistent.
2015-12-13 15:00:19 -05:00
Zbigniew Jędrzejewski-Szmek 5d6f46b6bf journal: add dst_allocated_size parameter for compress_blob
compress_blob took src, src_size, dst and *dst_size, but dst_size
wasn't used as an input parameter with the size of dst, but only as an
output parameter. dst was implicitly assumed to be at least src_size-1.

This code wasn't *wrong*, because the only real caller in
journal-file.c got it right. But it was misleading, and the tests in
test-compress.c got it wrong, and worked only because the output
buffer happened to be the same size as the input buffer. So add a separate
dst_allocated_size parameter to make it explicit what the size of the
buffer is, and to allow the tests to proceed with different output buffer
sizes.
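
Roughly, the change to the signature looks like this (a sketch; the names
only approximate the internal API):

#include <stdint.h>
#include <stddef.h>

/* before: the size of dst was only implied, *dst_size was purely an output */
int compress_blob_old(const void *src, uint64_t src_size,
                      void *dst, size_t *dst_size);

/* after: the caller states explicitly how large dst really is */
int compress_blob(const void *src, uint64_t src_size,
                  void *dst, size_t dst_alloc_size, size_t *dst_size);
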
2015-12-13 14:54:47 -05:00
Zbigniew Jędrzejewski-Szmek 1f4b467daa journal: in some cases we have to decompress the full lz4 field
lz4 has to decompress a whole "sequence" at a time. When the compressed
data is composed of a repeating pattern, the whole set of repeats has
to be decompressed, and the output buffer has to be big enough.

This is unfortunate, because potentially the slowdown is very big. We
are only interested in the field name, but we might have to decompress
the whole thing. But the full cost will be borne only when the
full entry is a repeating pattern. In practice this shouldn't happen
(apart from tests and the like). Hopefully lz4 will be fixed to avoid
this problem, or it will grow a new function which we can use [1], so
this fix should be temporary.

[1] https://groups.google.com/d/msg/lz4c/_3kkz5N6n00/oTahzqErCgAJ
2015-12-13 14:54:47 -05:00
Zbigniew Jędrzejewski-Szmek b3aa622929 lz4: fix size check which had no chance of working on big-endian 2015-12-02 09:50:01 -05:00
Lennart Poettering b5efdb8af4 util-lib: split out allocation calls into alloc-util.[ch] 2015-10-27 13:45:53 +01:00
Lennart Poettering 8b43440b7e util-lib: move string table stuff into its own string-table.[ch] 2015-10-27 13:25:56 +01:00
Lennart Poettering c004493cde util-lib: split out IO related calls to io-util.[ch] 2015-10-26 01:24:38 +01:00
Tom Gundersen 7c8871d315 Merge pull request #1654 from poettering/util-lib
Various changes to src/basic/
2015-10-25 14:22:43 +01:00
Lennart Poettering 3ffd4af220 util-lib: split out fd-related operations into fd-util.[ch]
There are more than enough to deserve their own .c file, hence move them
over.
2015-10-25 13:19:18 +01:00
Lennart Poettering 07630cea1f util-lib: split out string-related calls from util.[ch] into their own file string-util.[ch]
There are more than enough calls doing string manipulation to deserve
their own file, hence do something about it.

This patch also sorts the #include blocks of all files that needed to be
updated, according to the sorting suggestions from CODING_STYLE. Since
pretty much every file needs our string manipulation functions this
effectively means that most files have sorted #include blocks now.

Also touches a few unrelated include files.
2015-10-24 23:05:02 +02:00
Lennart Poettering 0240c60369 journal: irrelevant coding style fixes 2015-10-24 15:08:15 +02:00
Zbigniew Jędrzejewski-Szmek 8e64dd1ecf compress: remove the lz4 v1 compression
This was the original lz4 file header, custom in systemd, that was
not compatible with the lz4 binary. It was not compiled in by default,
and was only used for coredumps stored as files on disk. It is safe to
remove it after a transition period in which coredumps have been
rotated.
2015-10-23 09:46:23 -04:00
Zbigniew Jędrzejewski-Szmek 5146f9f065 compress: return errors without logging, do not fake errno
Logging for compression and decompression is asymmetrical on purpose:
if compiled without some type of compression, those compression code
paths should never be invoked. OTOH, it is possible to encounter
unsupported format on decompression, so leave those log_debug statements
in, to make it easier to diagnose stuff.
2015-10-14 21:24:36 -04:00
Zbigniew Jędrzejewski-Szmek e068517205 compress: fix mmap error handling 2015-10-14 10:15:27 -04:00
Zbigniew Jędrzejewski-Szmek 4b5bc5396c coredump: use lz4frame api to compress coredumps
This converts the stream compression to use the new lz4frame api,
compatible with lz4cat. Previous code used custom headers, so the
compressed file was not compatible with lz4 command line tools.
I considered this the last blocker to using lz4 by default.

Speed seems to be reasonable, although a bit (a few percent) slower
than the lz4 binary, even though compression is the same. I don't
consider this important. It could be caused by the overhead of library
calls, but is probably caused by slightly different buffer sizes or
such. The code in this patch uses mmap, since this allows the
buffer to be reused while not making the code more complicated at all.
In my testing, this version is noticeably faster (~20%) than a naive
single-buffered version. mmap can cause the program to be killed with
SIGBUS, if the underlying file is truncated or a disk error occurs. We
only use this from within coredump and coredumpctl, so I don't
consider this an issue.

Old decompression code is retained and is used if the new code fails
indicating a format error. There have been reports of various smaller
distributions using previous lz4 code, i.e. the old format, and it is
nice to provide backwards compatibility. We can remove the legacy code
in a few versions.

The way that blobs are compressed in the journal is not affected.
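
A minimal sketch of the lz4frame sequence described above, with one input
buffer instead of mmap and most error handling trimmed (the function name is
made up; the LZ4F_* calls are the public lz4frame.h API):

#include <lz4frame.h>
#include <stdio.h>
#include <stdlib.h>

static int lz4_frame_compress_to_file(const void *src, size_t src_size, FILE *out) {
        LZ4F_compressionContext_t ctx;
        char *buf = NULL;
        size_t out_cap, n;
        int r = -1;

        if (LZ4F_isError(LZ4F_createCompressionContext(&ctx, LZ4F_VERSION)))
                return -1;

        /* one worst-case sized buffer is large enough for each step below */
        out_cap = LZ4F_compressBound(src_size, NULL);
        buf = malloc(out_cap);
        if (!buf)
                goto finish;

        /* frame header, then the payload, then the end mark: the result is a
         * regular .lz4 frame that lz4cat understands */
        n = LZ4F_compressBegin(ctx, buf, out_cap, NULL);
        if (LZ4F_isError(n) || fwrite(buf, 1, n, out) != n)
                goto finish;

        n = LZ4F_compressUpdate(ctx, buf, out_cap, src, src_size, NULL);
        if (LZ4F_isError(n) || fwrite(buf, 1, n, out) != n)
                goto finish;

        n = LZ4F_compressEnd(ctx, buf, out_cap, NULL);
        if (LZ4F_isError(n) || fwrite(buf, 1, n, out) != n)
                goto finish;

        r = 0;
finish:
        free(buf);
        LZ4F_freeCompressionContext(ctx);
        return r;
}
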
2015-10-10 23:05:21 -04:00
Lennart Poettering 59f448cf15 tree-wide: never use the off_t unless glibc makes us use it
off_t is a really weird type as it is usually 64bit these days (at least
in sane programs), but could theoretically be 32 bit. We don't support
builds with a 32-bit off_t though, yet we still constantly deal with safely
converting from off_t to other types and back for no good reason.

Hence, never use the type anymore. Always use uint64_t instead. This has
various benefits, including that we can expose these values directly as
D-Bus properties, and also that the values parse the same in all cases.
2015-09-10 18:16:18 +02:00
Zbigniew Jędrzejewski-Szmek a6dcc7e592 Introduce loop_read_exact helper
Usually when using loop_read(), we want to read the full buffer.
Add a helper that mirrors loop_write() and returns 0 when the full buffer
was read, and an error otherwise.

Use -ENODATA for the short read, to distinguish it from a read error.
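
A hedged sketch of the helper (the loop_read() prototype below only
approximates the one in src/basic, and do_poll semantics are elided):

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>

ssize_t loop_read(int fd, void *buf, size_t nbytes, bool do_poll);   /* returns bytes read or -errno */

static int loop_read_exact(int fd, void *buf, size_t nbytes, bool do_poll) {
        ssize_t n;

        n = loop_read(fd, buf, nbytes, do_poll);
        if (n < 0)
                return (int) n;       /* genuine read error, pass it through */
        if ((size_t) n != nbytes)
                return -ENODATA;      /* short read gets its own, distinguishable code */

        return 0;
}
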
2015-03-09 22:10:54 -04:00
Thomas Hindoe Paaboel Andersen 2eec67acbb remove unused includes
This patch removes includes that are not used. The removals were found with
include-what-you-use which checks if any of the symbols from a header is
in use.
2015-02-23 23:53:42 +01:00
Zbigniew Jędrzejewski-Szmek 1fa2f38f0f Assorted format fixes
Types used for pids and uids in various interfaces are unpredictable.
Too bad.
2015-01-22 01:14:52 -05:00
Zbigniew Jędrzejewski-Szmek 553acb7b6b treewide: sanitize loop_write
loop_write() didn't follow the usual systemd rules and returned status
partially in errno and required extensive checks from callers. Some of
the callers dealt with this properly, but many did not, treating
partial writes as successful. Simplify things by conforming to the usual rules.
2014-12-09 21:36:08 -05:00
Evangelos Foutras b4232628f3 journal/compress: use LZ4_compress_continue()
We can't use LZ4_compress_limitedOutput_continue() because in the
worst-case scenario the compressed output can be slightly bigger than
the input block. This generally affects very few blocks and is no reason
to abort the compression process.

I ran into this when I noticed that Chromium core dumps weren't being
compressed. After switching to LZ4_compress_continue() a ~330MB Chromium
core dump gets compressed to ~17M.
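
A hedged sketch of the resulting pattern (the r119-era streaming API, since
deprecated; the wrapper name is made up): size the output with
LZ4_compressBound() so an incompressible block cannot make the call fail.

#include <lz4.h>

/* returns bytes written to out, or -1 if the caller under-sized out */
static int compress_block(LZ4_stream_t *ls, const char *in, int in_size,
                          char *out, int out_capacity) {
        /* LZ4_compress_continue() has no output limit, so reserve the worst
         * case up front instead of aborting on a block that compresses to
         * slightly more than its input */
        if (out_capacity < LZ4_compressBound(in_size))
                return -1;

        return LZ4_compress_continue(ls, in, out, in_size);
}
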
2014-08-30 17:41:15 -04:00
Zbigniew Jędrzejewski-Szmek fa1c4b518e Fix misuse of uint64_t as size_t
They have different sizes on 32 bit, so they are really not interchangeable.
2014-08-03 23:53:49 -04:00
Zbigniew Jędrzejewski-Szmek 01c3322e01 compress: fix return value 2014-07-18 21:44:36 -04:00
Zbigniew Jędrzejewski-Szmek 3b1a55e110 Fix build without any compression enabled 2014-07-11 10:42:27 -04:00
Jon Severinsson 1930eed2a7 journal/compress: improve xz compression performance
The new lzma2 compression options at the top of compress_blob_xz are
equivalent to using preset "0", except for using a 1 MiB dictionary
(the same as preset "1"). This makes the memory usage at most 7.5 MiB
in the compressor, and 1 MiB in the decompressor, instead of the
previous 92 MiB in the compressor and 8 MiB in the decompressor.

According to test-compress-benchmark this commit makes XZ compression
20 times faster, with no increase in compressed data size.
Using more realistic test data (an ELF binary rather than repeating
ASCII letters 'a' through 'z' in order) it only provides a factor 10
speedup, at the cost of a 10% increase in compressed data size.
But that is still a worthwhile trade-off.

According to test-compress-benchmark XZ compression is still 25 times
slower than LZ4, but the compressed data is one eighth the size.
Using more realistic test data XZ compression is only 18 times slower
than LZ4, and the compressed data is only one quarter the size.

$ ./test-compress-benchmark
XZ: compressed & decompressed 2535300963 bytes in 42.30s (57.15MiB/s), mean compression 99.95%, skipped 3570 bytes
LZ4: compressed & decompressed 2535303543 bytes in 1.60s (1510.60MiB/s), mean compression 99.60%, skipped 990 bytes
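
A hedged sketch of the tuning (liblzma's one-shot buffer API, not the exact
compress_blob_xz code): start from preset 0 and only grow the dictionary to
1 MiB.

#include <lzma.h>
#include <stddef.h>
#include <stdint.h>

static int xz_compress(const uint8_t *src, size_t src_size,
                       uint8_t *dst, size_t dst_size, size_t *dst_pos) {
        lzma_options_lzma opt;

        if (lzma_lzma_preset(&opt, 0))   /* preset 0: small encoder memory use */
                return -1;
        opt.dict_size = 1U << 20;        /* 1 MiB dictionary, as in preset 1 */

        lzma_filter filters[] = {
                { .id = LZMA_FILTER_LZMA2, .options = &opt },
                { .id = LZMA_VLI_UNKNOWN,  .options = NULL },
        };

        *dst_pos = 0;
        return lzma_stream_buffer_encode(filters, LZMA_CHECK_NONE, NULL,
                                         src, src_size,
                                         dst, dst_pos, dst_size) == LZMA_OK ? 0 : -1;
}
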
2014-07-08 23:16:21 -04:00
Zbigniew Jędrzejewski-Szmek d89c8fdf48 journal: add LZ4 as optional compressor
Add liblz4 as an optional dependency when requested with --enable-lz4,
and use it in preference to liblzma for journal blob and coredump
compression. To retain backwards compatibility, XZ is used to
decompress old blobs.

Things will function correctly only with lz4-119.

Based on the benchmarks found on the web, lz4 seems to be the best
choice for "quick" compressors atm.

For pkg-config status, see http://code.google.com/p/lz4/issues/detail?id=135.
2014-07-06 19:06:03 -04:00
Zbigniew Jędrzejewski-Szmek 5e592c66bd journal/compress: return early in uncompress_startswith
uncompress_startswith would always decode the whole stream, even
if it did not start with the given prefix.

Reallocation policy was also strange.
2014-07-06 19:06:02 -04:00
Zbigniew Jędrzejewski-Szmek 347272731e coredump: make compression configurable
Add Compression={none,xz} and CompressionLevel=0-9 settings. Defaults
are xz/6.

Compression=filesystem may be added later.

I picked "xz" for the compression "type", since we might want to add
different compressors later on. XZ is fairly memory and CPU intensive, and
embedded users will likely want to use LZO or some other lightweight compression
mechanism.
2014-06-26 01:41:04 -04:00
Zbigniew Jędrzejewski-Szmek 355b59e252 journal/compress: add stream compression/decompression functions 2014-06-26 01:41:04 -04:00
Zbigniew Jędrzejewski-Szmek 76cc0bf682 journal/compress: simplify compress_blob 2014-06-26 01:41:04 -04:00
Lennart Poettering 93b73b064c journal: by default do not decompress data objects larger than 64K
This introduces a new data threshold setting for sd_journal objects
which controls the maximum size of objects to decompress. This is
relieves the library from having to decompress full data objects even
if a client program is only interested in the initial part of them.

This speeds up "systemd-coredumpctl" drastically when invoked without
parameters.
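
A hedged sketch of the client side (public sd-journal API, error handling
trimmed):

#include <systemd/sd-journal.h>
#include <stdio.h>

int main(void) {
        sd_journal *j;
        const void *data;
        size_t length;

        if (sd_journal_open(&j, SD_JOURNAL_LOCAL_ONLY) < 0)
                return 1;

        /* only the start of each field is interesting here, so cap how much
         * of a (possibly compressed) data object the library has to return */
        sd_journal_set_data_threshold(j, 64 * 1024);

        SD_JOURNAL_FOREACH(j)
                if (sd_journal_get_data(j, "MESSAGE", &data, &length) >= 0)
                        printf("%.*s\n", (int) length, (const char*) data);

        sd_journal_close(j);
        return 0;
}
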
2012-11-21 00:28:00 +01:00
Lennart Poettering 5430f7f2bc relicense to LGPLv2.1 (with exceptions)
We finally got the OK from all contributors with non-trivial commits to
relicense systemd from GPL2+ to LGPL2.1+.

Some udev bits continue to be GPL2+ for now, but we are looking into
relicensing them too, to allow free copy/paste of all code within
systemd.

The bits that used to be MIT continue to be MIT.

The big benefit of the relicensing is that closed source code may now
link against libsystemd-login.so and friends.
2012-04-12 00:24:39 +02:00
Lennart Poettering e4e61fdbed journal: add missing compress.[ch] 2011-12-21 19:00:10 +01:00