This is useful both for development, where overwriting files outside
the configured prefix would affect the host, and for stateless systems
such as NixOS that don't let packages install to /etc but handle
configuration on their own.
Alternative to https://github.com/systemd/systemd/pull/17501
tested with:
$ mkdir inst build && cd build
$ meson \
-Dcreate-log-dirs=false \
-Dsysvrcnd-path=$(realpath ../inst)/etc/rc.d \
-Dsysvinit-path=$(realpath ../inst)/etc/init.d \
-Drootprefix=$(realpath ../inst) \
-Dinstall-sysconfdir=false \
--prefix=$(realpath ../inst) ..
$ ninja install
We had two copies of each: both homectl and journalctl carried the whole
dlopen() wrapper, and journalctl had two (slightly different)
implementations of the code to print the fss:// pattern.
print_qrcode() now returns -EOPNOTSUPP when compiled without qrcode
support. Both callers ignore the return value, so this changes nothing.
No functional change.
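
For illustration, the resulting stub pattern looks roughly like this,
assuming the HAVE_QRENCODE build macro; the parameter list of
print_qrcode() is an assumption, not taken from the actual header:

#include <errno.h>
#include <stdio.h>

#if HAVE_QRENCODE
int print_qrcode(FILE *out, const char *header, const char *string);
#else
/* Without qrcode support the call becomes a no-op stub. */
static inline int print_qrcode(FILE *out, const char *header, const char *string) {
        return -EOPNOTSUPP;
}
#endif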
Let's not have #ifdeffery in both the consumers and the providers of the
selinux glue code. Unless the code is particularly complex, let's do the
ifdeffery only in the provider, and keep the consumers simple: they just
invoke it.
I find this version much more readable.
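
As a sketch of the pattern (illustrative names, assuming a HAVE_SELINUX
build macro; not necessarily the actual selinux-util API):

/* Provider: the selinux glue header is the only place with #ifdefs. */
#if HAVE_SELINUX
int mac_selinux_apply(const char *path, const char *label);
#else
static inline int mac_selinux_apply(const char *path, const char *label) {
        return 0; /* no-op when built without SELinux support */
}
#endif

Consumers then call mac_selinux_apply() unconditionally, with no #ifdef
at the call site.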
Add replacement defines so that when acl/libacl.h is not available, the
ACL_{READ,WRITE,EXECUTE} constants are also defined. Those constants have
been declared in the kernel headers since 1da177e4c3f41524e886b7f1b8a0c1f,
so they should be the same pretty much everywhere.
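
A sketch of the replacement defines, assuming a HAVE_ACL build macro and
the constant values from the kernel's posix_acl.h:

#if HAVE_ACL
#include <sys/acl.h>      /* provides ACL_READ/WRITE/EXECUTE */
#include <acl/libacl.h>
#else
#define ACL_READ    0x04
#define ACL_WRITE   0x02
#define ACL_EXECUTE 0x01
#endif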
This comes up occasionally with new users. The phrase "Logs begin ..." is
ambiguous because it can be taken to refer either to the logs being
displayed or to all logs (the intended meaning). Let's rephrase this as
"Journal begins ..." to make this clearer.
We talked about the verification key, then about sealing keys, and then
about the verification key again. Let's shorten things a bit, and divide
the output into three paragraphs: one about the machine, one about the
sealing keys, and one about the verification keys and the QR code with them.
decompress_blob_zstd() would allocate ever bigger buffers in a loop trying to
get a buffer big enough to decompress the input data. This is wasteful, since
we can just query the size of the decompressed data from the compressed header.
Worse, it doesn't work when the output size is capped, i.e. when dst_max != 0.
If the decompressed blob happened to be bigger than dst_max, decompression
would fail with -ENOBUFS. We need to use "stream decompression" instead, and
only get min(uncompressed size, dst_max) bytes of output.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1856037 in a second way.
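
A rough sketch of the streaming approach, using the public libzstd
streaming API (decompress_capped() and its exact error mapping are
illustrative, not the actual decompress_blob_zstd()):

#include <errno.h>
#include <stdlib.h>
#include <zstd.h>

static int decompress_capped(const void *src, size_t src_size,
                             void **ret, size_t *ret_size, size_t dst_max) {
        /* The frame header tells us the uncompressed size up front. */
        unsigned long long content = ZSTD_getFrameContentSize(src, src_size);
        if (content == ZSTD_CONTENTSIZE_ERROR || content == ZSTD_CONTENTSIZE_UNKNOWN)
                return -EBADMSG;

        /* Allocate min(uncompressed size, dst_max) bytes, with
         * dst_max == 0 meaning "no cap". */
        size_t want = (size_t) content;
        if (dst_max != 0 && want > dst_max)
                want = dst_max;

        void *buf = malloc(want > 0 ? want : 1);
        if (!buf)
                return -ENOMEM;

        ZSTD_DCtx *dctx = ZSTD_createDCtx();
        if (!dctx) {
                free(buf);
                return -ENOMEM;
        }

        ZSTD_inBuffer in = { src, src_size, 0 };
        ZSTD_outBuffer out = { buf, want, 0 };

        /* Stream-decompress until the (possibly capped) buffer is full
         * or the frame ends. */
        while (out.pos < out.size) {
                size_t k = ZSTD_decompressStream(dctx, &out, &in);
                if (ZSTD_isError(k)) {
                        ZSTD_freeDCtx(dctx);
                        free(buf);
                        return -EBADMSG;
                }
                if (k == 0)
                        break; /* frame fully decoded */
                if (in.pos == in.size) {
                        /* input exhausted but frame incomplete */
                        ZSTD_freeDCtx(dctx);
                        free(buf);
                        return -EBADMSG;
                }
        }

        ZSTD_freeDCtx(dctx);
        *ret = buf;
        *ret_size = out.pos;
        return 0;
}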
SD_JOURNAL_FOREACH_DATA() and SD_JOURNAL_FOREACH_UNIQUE() would immediately
terminate when a field couldn't be accessed. This can happen for example when a
field is compressed with an unavailable compression format. But this is
likely the wrong thing to do: the caller might want to iterate over the
fields without being interested in all of them. coredumpctl is like this:
it uses SD_JOURNAL_FOREACH_DATA() but only needs a subset of the fields.
Add two new functions sd_journal_enumerate_good_data() and
sd_journal_enumerate_good_unique() that retry sd_journal_enumerate_data() and
sd_journal_enumerate_unique() if the return value is something that applies to
a single field: ENOBUFS, E2BIG, EOPNOTSUPP.
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1856037.
An alternative would be to make the macros themselves smarter instead of adding
new symbols, and do the looping internally in the macro. I don't like that
approach for two reasons. First, it would embed the logic in the macro, so
recompilation would be required if we decide to update the logic. With the
current version of the patch, recompilation is required to use the new symbols,
but after that, library upgrades are enough. So the current approach is safer
in case further updates are needed. Second, our headers use primitive C,
and it is hard to write such macros without relying on newer language
features.
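
A minimal sketch of the retry logic, using the names from this message
(it assumes, as the actual change ensures, that a failed
sd_journal_enumerate_data() still advances past the bad field):

#include <errno.h>
#include <systemd/sd-journal.h>

int sd_journal_enumerate_good_data(sd_journal *j, const void **data, size_t *size) {
        for (;;) {
                int r = sd_journal_enumerate_data(j, data, size);
                /* Skip errors that apply to a single field only; return
                 * everything else (1 = got a field, 0 = done, other
                 * negative values = real errors) as-is. */
                if (r != -ENOBUFS && r != -E2BIG && r != -EOPNOTSUPP)
                        return r;
        }
}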
Even with the new keyed hash table journal feature: if an attacker
manages to get access to the journal file ID, they could synthesize
records that result in hash collisions. Let's rotate automatically when we
notice that, so that a new journal file ID is generated, our performance
is restored and the attacker has to guess a new file ID before being
able to trigger the issue again.
That said, untrusted peers should never get access to journal files in
the first place...