Previously, we already had explicit logging for the case where
$XDG_RUNTIME_DIR is not set. Let's also add some explicit logging for
the EPERM/EACCES case. In both cases, let's also suggest the
--machine=<user>@.host syntax.
And while we are at it, let's remove side-effects from the macro.
By checking for both the EPERM/EACCES case and the $XDG_RUNTIME_DIR case
we will now catch both the case where people use "su" to issue a
"systemctl --user" operation, and the one where they (more correctly, but
still not good enough) call "su -".
Fixes: #17901
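To illustrate the idea, a minimal sketch using plain libc (not the actual
systemctl code; connect_fn and the exact message wording are made up):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical wrapper around a user-bus connect attempt, turning the
     * two error cases described above into an explicit hint. */
    int connect_user_bus_with_hint(int (*connect_fn)(void)) {
            int r;

            if (!getenv("XDG_RUNTIME_DIR")) {
                    fprintf(stderr,
                            "$XDG_RUNTIME_DIR is not set, cannot connect to the user bus.\n"
                            "Hint: to operate on the user instance of another user, "
                            "try --machine=<user>@.host\n");
                    return -ENXIO;
            }

            r = connect_fn();
            if (r == -EPERM || r == -EACCES)
                    fprintf(stderr,
                            "Access to the user bus denied.\n"
                            "Hint: to operate on the user instance of another user, "
                            "try --machine=<user>@.host\n");

            return r;
    }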
This is unfortunately harder to implement than it sounds. The user's bus
is bound to the user's lifecycle after all (i.e. it only exists as long
as the user has at least one PAM session), and the path is dynamically
generated (at least theoretically; in practice it's always going to be
the same) via $XDG_RUNTIME_DIR in /run/.
To fix this properly, we'll thus go through PAM before connecting to a
user bus. Which is hard since we cannot just link against libpam in the
container, since the container might have been compiled entirely
differently. So our way out is to use systemd-run from outside, which
invokes a transient unit that does PAM from outside, doing so via D-Bus.
Inside the transient unit we then invoke systemd-stdio-bridge which
forwards D-Bus from the user bus to us. The systemd-stdio-bridge makes
up the PAM session and thus we can be sure that the bus exists at least
as long as the bus connection is kept.
Or to say this differently: if you use "systemctl -M lennart@foobar"
now, the bus connection works like this:
1. sd-bus on the host forks off:
systemd-run -M foobar -PGq --wait -pUser=lennart -pPAMName=login systemd-stdio-bridge
2. systemd-run gets a connection to the "foobar" container's
system bus, and invokes the "systemd-stdio-bridge" binary as
transient service inside a PAM session for the user "lennart"
3. The systemd-stdio-bridge then proxies our D-Bus traffic to
the user bus.
sd-bus (on host) → systemd-run (on host) → systemd-stdio-bridge (in container)
Complicated? Well, to some point yes, but otoh it's actually nice in
various other ways, primarily as it makes the -H and -M codepaths more
alike. In the -H case (i.e. connect to remote host via SSH) a very
similar three steps are used. The only difference is that instead of
"systemd-run" the "ssh" binary is used to invoke the stdio bridge in a
PAM session of some other system. Thus we get similar implementation and
isolation for similar operations.
Fixes: #14580
The immediately following container_get_leader() call validates the name
anyway, so there is no need to validate it twice in exactly the same way
immediately after each other.
When something fails, we need some logs to figure out what happened.
This is primarily relevant for connection errors, but in general we
want to log about all errors, even if they are relatively unlikely.
We want one log on failure, and generally no logs on success.
The general idea is to not log in static functions, and to log in the
non-static functions. Non-static functions which call other functions
may thus log or not log as appropriate to have just one log entry in the
end.
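Roughly, the convention looks like this (an illustrative sketch with
made-up names, not code from the tree):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Static helper: returns an error code, but does not log. */
    static int parse_thing(const char *s, int *ret) {
            if (!s || !*s)
                    return -EINVAL;

            *ret = 42;
            return 0;
    }

    /* Non-static entry point: the one place that logs on failure, so that a
     * failed call produces exactly one log entry. */
    int load_thing(const char *s, int *ret) {
            int r;

            r = parse_thing(s, ret);
            if (r < 0) {
                    fprintf(stderr, "Failed to parse thing '%s': %s\n",
                            s ? s : "(null)", strerror(-r));
                    return r;
            }

            return 0;   /* success: no log */
    }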
Normally, the udev rules operate on "change" events. But when
coldplugging, there's an "add" event present. The udev rules have to
recognize this and do some actions in this particular situation, too.
Also, we don't want the nodes to be created prematurely on "add"
events while not coldplugging. The udev rules will check
DM_UDEV_PRIMARY_SOURCE_FLAG to see if the device was activated
correctly before and, if not, they ignore the "add" event entirely.
This way the udev rules can support udev triggers generating "add"
events (e.g. "udevadm trigger --action=add" or
"echo add > /sys/block/<dm_device>/uevent").
In this case, the udevd service is started after
systemd-cryptsetup@config.service, which causes the udevd service to
miss the "change" uevent with the DM_UDEV_PRIMARY_SOURCE_FLAG flag
generated by systemd-cryptsetup@config.service. To solve this issue, we
let the cryptsetup service be started after the udevd service.
When seccomp_restrict_archs is called, architectures that are blocked
are replaced by the SECCOMP_LOCAL_ARCH_BLOCKED marker so that they are
not disabled again and filters are not installed for them.
This can make some services that use SystemCallArchitectures= and
SystemCallFilter= start faster.
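Conceptually it works like this (a simplified sketch; here
SECCOMP_LOCAL_ARCH_BLOCKED is just an assumed sentinel value in our own
list of architectures, not a libseccomp identifier):

    #include <stddef.h>
    #include <stdint.h>

    #define SECCOMP_LOCAL_ARCH_BLOCKED UINT32_MAX   /* assumed marker value */

    /* Once an architecture has been blocked, overwrite its entry with the
     * marker so that later passes neither disable it again nor install
     * filters for it. */
    static void mark_arch_blocked(uint32_t *archs, size_t n, uint32_t blocked) {
            for (size_t i = 0; i < n; i++)
                    if (archs[i] == blocked)
                            archs[i] = SECCOMP_LOCAL_ARCH_BLOCKED;
    }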
E.g. in nss-resolve it is still useful to print the location of the error:
src/test/test-nss.c:231: dlsym(0x0x1dc6fb0, _nss_resolve_gethostbyname2_r) → 0x0x7fdbfc53f626
(string):1:40: JSON field ifindex is out of bounds for an interface index.
I opted to use a partially duplicated if condition to avoid nesting. It's nice
to have the log calls vertically aligned. The compiler will optimize this nicely.
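The pattern in question looks roughly like this (names invented for
illustration; the point is that the partially duplicated condition keeps
both log calls flat and vertically aligned instead of nested):

    #include <stdio.h>

    static void log_json_error(const char *source, unsigned line, unsigned column,
                               const char *message) {
            if (source && line > 0)
                    fprintf(stderr, "%s:%u:%u: %s\n", source, line, column, message);
            if (!source && line > 0)
                    fprintf(stderr, "(string):%u:%u: %s\n", line, column, message);
    }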
Let's add a dlopen_qrencode() function that does the actual dlopen()
stuff and caches the result.
This is useful so that we can later automatically test that all dlopen()
hookups work correctly.
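The general shape of such a cached dlopen() helper (a sketch only; the
soname is an assumption, and the real function also resolves the
libqrencode symbols it needs):

    #include <dlfcn.h>
    #include <errno.h>
    #include <stddef.h>

    static void *qrencode_dl = NULL;

    /* Open libqrencode once, cache the handle, and reuse it on later calls. */
    int dlopen_qrencode(void) {
            if (qrencode_dl)
                    return 0;   /* already loaded */

            qrencode_dl = dlopen("libqrencode.so.4", RTLD_NOW);
            if (!qrencode_dl)
                    return -EOPNOTSUPP;

            /* The real implementation would now dlsym() the needed symbols
             * and store them in function pointers for later use. */
            return 0;
    }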
Similar to the previous commit. All callers pass NULL. This will
ease initial nftables backend implementation (less features to cover).
Add the function parameters as local variables and let the compiler
remove the branches. A follow-up patch can remove the if (NULL)
conditionals.
All users pass NULL/0 for those; things haven't changed since 2015
when this was originally added, so remove the arguments.
The parameters are re-added as local function variables, initialised
to NULL or 0. A follow-up patch can then manually remove all
if (NULL) branches rather than leaving the dead-branch optimization to
the compiler. The reason for not doing it here is to ease patch review.
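Schematically, the change looks like this (a made-up function, purely to
illustrate the pattern):

    /* Before: int add_rule(int fd, const char *name, const char *extra);
     * with every caller passing NULL for "extra". */

    /* After: the argument is gone, but re-added as a local initialised to
     * NULL, so the body stays untouched and the if (extra) branch is
     * trivially dead; a follow-up patch removes it by hand. */
    int add_rule(int fd, const char *name) {
            const char *extra = NULL;

            if (extra) {
                    /* dead branch, removed in a follow-up patch */
            }

            (void) fd;
            (void) name;
            return 0;
    }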
Not requiring support for this will ease initial nftables backend
implementation.
In case a use-case comes up later this feature can be re-added.
Less 568 properly shows urlified strings.
Putative NEWS entry:
* Urlification is now enabled by default even when a pager is used.
Previously it was disabled, because less would not show such markup
properly. This has been fixed in less 568.
Please either upgrade less, or use SYSTEMD_URLIFY=0 to disable the
feature.
This reverts the gist of da1921a5c3 and
0d9fca76bb (for ppc).
Quoting #17559:
> libseccomp 2.5 added socket syscall multiplexing on ppc64(el):
> https://github.com/seccomp/libseccomp/pull/229
>
> Like with i386, s390 and s390x this breaks socket argument filtering, so
> RestrictAddressFamilies doesn't work.
>
> This causes the unit test to fail:
> /* test_restrict_address_families */
> Operating on architecture: ppc
> Failed to install socket family rules for architecture ppc, skipping: Operation canceled
> Operating on architecture: ppc64
> Failed to add socket() rule for architecture ppc64, skipping: Invalid argument
> Operating on architecture: ppc64-le
> Failed to add socket() rule for architecture ppc64-le, skipping: Invalid argument
> Assertion 'fd < 0' failed at src/test/test-seccomp.c:424, function test_restrict_address_families(). Aborting.
>
> The socket filters can't be added so `socket(AF_UNIX, SOCK_DGRAM, 0);` still
> works, triggering the assertion.
Fixes #17559.
In many cases the tables are largely the same, hence define a common set
of macros to generate the common parts.
This adds a couple of missing specifiers here and there, so it is more
than just refactoring: it actually fixes accidental omissions.
Note that some entries that look like they could be unified under these
macros can't really be unified, since they are slightly different. For
example in the DNSSD service logic we want to use the DNSSD hostname for
%H rather than the unmodified kernel one.
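A sketch of the idea (types, names and entries invented for illustration,
not the actual specifier machinery):

    typedef struct Specifier {
            char specifier;
            const char *(*lookup)(void);
    } Specifier;

    static const char *lookup_hostname(void)   { return "myhost"; }
    static const char *lookup_machine_id(void) { return "0123456789abcdef"; }
    static const char *lookup_unit_name(void)  { return "foo.service"; }

    /* Entries shared by several tables live in one macro... */
    #define COMMON_SYSTEM_SPECIFIERS          \
            { 'H', lookup_hostname },         \
            { 'm', lookup_machine_id }

    /* ...and each concrete table combines them with its own extras. */
    static const Specifier unit_specifier_table[] = {
            COMMON_SYSTEM_SPECIFIERS,
            { 'n', lookup_unit_name },
            { 0 }
    };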
These three syscalls are internally used by libc's memory allocation
logic, i.e. ultimately back malloc(). Allocating a bit of memory is so
basic, it should just be in the default set.
This fixes a couple of issues with asan/msan and the seccomp tests: when
asan/msan is used some additional, large memory allocations take place
in the background, and unless mmap/mmap2/brk are allowlisted these will
fail, aborting the test prematurely.
There are downsides to using fexecve:
when fexecve is used (for normal executables), /proc/pid/status shows Name: 3,
which means that ps -C foobar doesn't work. pidof works, because it checks
/proc/self/cmdline. /proc/self/exe also shows the correct link, but requires
privileges to read. /proc/self/comm also shows "3".
I think this can be considered a kernel deficiency: when O_CLOEXEC is used, this
"3" is completely meaningless. It could be any number. The kernel should use
argv[0] instead, which at least has *some* meaning.
I think the approach with fexecve/execveat is interesting, so let's
provide it as opt-in.
For scripts, when we call fexecve(), on new kernels glibc calls execveat(),
which fails with ENOENT, and then we fall back to execve() which succeeds:
[pid 63039] execveat(3, "", ["/home/zbyszek/src/systemd/test/test-path-util/script.sh", "--version"], 0x7ffefa3633f0 /* 0 vars */, AT_EMPTY_PATH) = -1 ENOENT (No such file or directory)
[pid 63039] execve("/home/zbyszek/src/systemd/test/test-path-util/script.sh", ["/home/zbyszek/src/systemd/test/test-path-util/script.sh", "--version"], 0x7ffefa3633f0 /* 0 vars */) = 0
But on older kernels glibc (some versions?) implement a fallback which falls
into the same trap with bash $0:
[pid 13534] execve("/proc/self/fd/3", ["/home/test/systemd/test/test-path-util/script.sh", "--version"], 0x7fff84995870 /* 0 vars */) = 0
We don't want that, so let's call execveat() ourselves. Then we can do the
execve() fallback as we want.
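In rough terms, doing the fallback ourselves looks like this (a simplified
sketch using the raw syscall; the real helper handles more cases):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Try to execute the program referred to by fd; if the kernel cannot
     * handle scripts via the O_CLOEXEC fd (ENOENT), fall back to plain
     * execve() with the original path, so that $0 stays meaningful. */
    int exec_fd_with_fallback(int fd, const char *path,
                              char *const argv[], char *const envp[]) {
            syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
            if (errno == ENOENT && path)
                    execve(path, argv, envp);

            return -errno;   /* only reached if both attempts failed */
    }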
FixedRandomDelay=yes will use
`siphash24(sd_id128_get_machine() || MANAGER_IS_SYSTEM(m) || getuid() || u->id)`,
where || is concatenation, instead of a random number to choose a value between
0 and RandomizedDelaySec= as the timer delay.
This essentially sets up a fixed, but seemingly random, offset for each timer
iteration rather than having a random offset recalculated each time it fires.
Closes #10355
Co-author: Anita Zhang <the.anitazha@gmail.com>
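To illustrate the FixedRandomDelay= scheme, here is a sketch that uses a
stand-in hash instead of systemd's keyed siphash24(), with made-up
parameter names, just to show how a stable but random-looking offset
inside [0, RandomizedDelaySec) can be derived:

    #include <stdint.h>
    #include <string.h>

    /* FNV-1a, standing in for siphash24() keyed with the machine ID. */
    static uint64_t hash_bytes(uint64_t h, const void *p, size_t n) {
            const uint8_t *b = p;

            for (size_t i = 0; i < n; i++) {
                    h ^= b[i];
                    h *= UINT64_C(0x100000001b3);
            }
            return h;
    }

    /* Derive a fixed offset in [0, max_delay_usec) from stable identifiers. */
    uint64_t fixed_random_delay(const char *machine_id, int is_system,
                                uint32_t uid, const char *unit_id,
                                uint64_t max_delay_usec) {
            uint64_t h = UINT64_C(0xcbf29ce484222325);

            h = hash_bytes(h, machine_id, strlen(machine_id));
            h = hash_bytes(h, &is_system, sizeof is_system);
            h = hash_bytes(h, &uid, sizeof uid);
            h = hash_bytes(h, unit_id, strlen(unit_id));

            return max_delay_usec > 0 ? h % max_delay_usec : 0;
    }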
Quoting the manual page of stime(2): "Starting with glibc 2.31, this function
is no longer available to newly linked applications and is no longer declared
in <time.h>."
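For reference, the obvious replacement is clock_settime(2); whether the
patch uses exactly this is an assumption on my part, but the equivalent of
stime(&t) looks like:

    #include <time.h>

    /* Set the system clock to the given UNIX time, like stime(&t) used to. */
    int set_system_time(time_t t) {
            struct timespec ts = { .tv_sec = t, .tv_nsec = 0 };

            return clock_settime(CLOCK_REALTIME, &ts);
    }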
This beefs up the READ_FULL_FILE_CONNECT_SOCKET logic of
read_full_file_full() a bit: when it is used, a sender socket name may be
specified. If specified as NULL behaviour is as before: the client
socket name is picked by the kernel. But if specified as non-NULL the
client can pick a socket name to use when connecting. This is useful to
communicate a minimal amount of metainformation from client to server,
outside of the transport payload.
Specifically, this beefs up the service credential logic to pass an
abstract AF_UNIX socket name as client socket name when connecting via
READ_FULL_FILE_CONNECT_SOCKET, that includes the requesting unit name
and the eventual credential name. This allows servers implementing the
trivial credential socket logic to distinguish clients: via a simple
getpeername() it can be determined which unit is requesting a
credential, and which credential specifically.
Example: with this patch in place, in a unit file "waldo.service" a
configuration line like the following:
LoadCredential=foo:/run/quux/creds.sock
will result in a connection to the AF_UNIX socket /run/quux/creds.sock,
originating from an abstract namespace AF_UNIX socket:
@$RANDOM/unit/waldo.service/foo
(The $RANDOM is replaced by some randomized string. This is included in
the socket name in order to avoid namespace squatting issues: the
abstract socket namespace is open to unprivileged users after all, and
care needs to be taken not to use guessable names.)
The services listening on the /run/quux/creds.sock socket may thus
easily retrieve the name of the unit the credential is requested for
plus the credential name, via a simple getpeername(), discarding the
random prefix and the /unit/ string.
This logic uses "/" as separator between the fields, since both unit
names and credential names appear in the file system, and thus are
designed to use "/" as outer separators. Given that, it's a good safe
choice to use as separator here too, avoiding any conflicts.
This is a minimal patch only: the new logic is used only for the unit
file credential logic. For other places where we use
READ_FULL_FILE_CONNECT_SOCKET it is probably a good idea to use this
scheme too, but this should be done carefully in later patches, since
the socket names become API that way, and we should determine the right
amount of info to pass over.
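For a server implementing the trivial credential socket logic, parsing the
client name boils down to something like this (a sketch with invented
helper and buffer names; the abstract name format is as described above):

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Given a connected AF_UNIX fd, extract "<unit>" and "<credential>" from
     * an abstract peer name of the form "$RANDOM/unit/<unit>/<credential>". */
    int identify_credential_client(int fd, char *unit, size_t unit_size,
                                   char *credential, size_t cred_size) {
            struct sockaddr_un sa;
            socklen_t salen = sizeof(sa);

            if (getpeername(fd, (struct sockaddr*) &sa, &salen) < 0)
                    return -1;
            if (salen <= offsetof(struct sockaddr_un, sun_path) || sa.sun_path[0] != '\0')
                    return -1;   /* not an abstract socket */

            /* Abstract names are not NUL-terminated; copy them out first. */
            size_t n = salen - offsetof(struct sockaddr_un, sun_path) - 1;
            char name[sizeof(sa.sun_path) + 1];
            memcpy(name, sa.sun_path + 1, n);
            name[n] = 0;

            const char *p = strstr(name, "/unit/");
            if (!p)
                    return -1;
            p += strlen("/unit/");

            const char *slash = strrchr(p, '/');
            if (!slash)
                    return -1;

            snprintf(unit, unit_size, "%.*s", (int) (slash - p), p);
            snprintf(credential, cred_size, "%s", slash + 1);
            return 0;
    }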
* Existing valid rule files written with KEY="value" are not affected
* Now, KEY=e"value\n" becomes valid, where `\n` is a newline character
* Escape sequences supported by src/basic/escape.h:cunescape() are
  supported
We had two of each: both homectl and journalctl had the whole dlopen()
wrapper, and journalctl had two implementations (slightly different) of the
code to print the fss:// pattern.
print_qrcode() now returns -EOPNOTSUPP when compiled without qrcode
support. Both callers ignore the return value, so this changes nothing.
No functional change.
This adds a way to control SO_TIMESTAMP/SO_TIMESTAMPNS socket options
for sockets PID 1 binds to.
This is useful in journald so that we get proper timestamps even for
ingress log messages that are submitted before journald is running.
We recently turned on packet info metadata from PID 1 for these sockets,
but the timestamping info was still missing. Let's correct that.
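For reference, on the receiving side the options look roughly like this (a
generic sketch, not the PID 1 code):

    #include <sys/socket.h>

    /* Ask the kernel to attach a timestamp to every datagram queued on the
     * socket; the receiver then reads it from an SCM_TIMESTAMP (struct
     * timeval) or SCM_TIMESTAMPNS (struct timespec) control message. */
    int enable_rx_timestamps(int fd, int nanoseconds) {
            int one = 1;

            return setsockopt(fd, SOL_SOCKET,
                              nanoseconds ? SO_TIMESTAMPNS : SO_TIMESTAMP,
                              &one, sizeof(one));
    }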
Let's try to make collisions when multiple clients want to use the same
device less likely, by sleeping a random time on collision.
The loop device allocation protocol is inherently collision prone:
first, a program asks which is the next free loop device, then it tries
to acquire it, in a separate, unsynchronized step. If many peers do this
all at the same time, they'll likely all collide when trying to
acquire the device, so that they need to ask for a free device again and
again.
Let's make this a little less prone to collisions, reducing the number
of failing attempts: whenever we notice a collision we'll now wait a
short, randomized time, making it more likely that another peer succeeds.
(This also adds a similar logic when retrying LOOP_SET_STATUS64, but
with a slightly altered calculation, since there we definitely want to
wait a bit, under all cases)
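Condensed, the allocation protocol with the new back-off looks roughly
like this (a sketch; the real code also deals with LOOP_CONFIGURE, event
handling, and a more careful sleep calculation):

    #include <errno.h>
    #include <fcntl.h>
    #include <linux/loop.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Ask for a free loop device and try to take it; on collision (EBUSY)
     * sleep a short, randomized time before asking again. */
    int allocate_loop(int ctrl_fd, int backing_fd) {
            for (;;) {
                    int nr = ioctl(ctrl_fd, LOOP_CTL_GET_FREE);
                    if (nr < 0)
                            return -errno;

                    char path[64];
                    snprintf(path, sizeof(path), "/dev/loop%i", nr);

                    int loop_fd = open(path, O_RDWR|O_CLOEXEC);
                    if (loop_fd < 0)
                            return -errno;

                    if (ioctl(loop_fd, LOOP_SET_FD, backing_fd) >= 0)
                            return loop_fd;   /* success, we own the device now */

                    close(loop_fd);
                    if (errno != EBUSY)
                            return -errno;

                    /* Collision: somebody else grabbed the device first. Back
                     * off a randomized couple of milliseconds before retrying. */
                    usleep((useconds_t) (rand() % 5000));
            }
    }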
On systems that have a udev from before
a7fdc6cbd3, uevents would sometimes be
eaten because of device node collisions that caused the ruleset to fail.
Let's add an (ugly) work-around for this, so that we can even work with
such an older udev.
On current kernels (5.8 for example), under some conditions I don't fully
grok, it might happen that a detached loopback block device still has
partition block devices around. Accessing these partition block devices
results in EIO errors (that also fill up dmesg). These devices cannot be
cleaned up with LOOP_CLR_FD (since the main device already is officially
detached), nor with LOOP_CTL_DELETE (returns EBUSY as long as the
partitions still exist). This is a kernel bug. But it appears to apply
to all recent kernels. I cannot really pin down what triggers this,
suffice to say our heavy-duty test can trigger it.
Either way, let's do something about it: when we notice this state we'll
attach an empty file to it, which is guaranteed to have no partition
table. This makes the partitions go away. After closing/reopening the
device we hence are good to go again. Ugly workaround, but I think OK
enough to use.
The net result is: with this commit, we'll guarantee that by the time we
attach a file to the loopback device we have zero kernel partitions
associated with it. Thus, if we then wait for the kernel partitions we
need to appear, we should have entirely reliable behaviour even if
loopback devices of the same name are heavily recycled and udev events
reach us very late.
Fixes: #16858
Previously, we'd just wait for the first moment where the kernel exposes
the same number of partitions as libblkid tells us. After that point we
enumerate kernel partitions and look for matching libblkid partitions.
With this change we'll instead enumerate with libblkid only, and then
wait for each kernel partition to show up with the exact parameters we
expect them to show up. Once that happens we are happy.
Care is taken to use the udev device notification messages only as hint
to recheck what the kernel actually says. That's because we are
otherwise subject to a race: we might see udev events from an earlier
use of a loopback device. After all these devices are heavily recycled.
Under the assumption that we'll get udev events for *at least* all
partitions we care about (but possibly more) we can fix the race
entirely with one more fix coming in a later commit: if we make sure
that a loopback block device has zero kernel partitions when we take
possession of it, it doesn't matter anymore if we get spurious udev
events from a previous use. All we have to do is notice when the devices
we need have all popped up.
When we fall back to classic LOOP_SET_FD logic in case LOOP_CONFIGURE
didn't work we issue LOOP_CLR_FD first. But that call turns out to be
potentially async in the kernel: if something else (let's say
udev/blkid) is accessing the device the ioctl just sets the autoclear
flag and exits. Hence quite often the LOOP_SET_FD will subsequently
fail. Let's avoid the trouble, and immediately exit with EBUSY if
LOOP_CONFIGURE fails, but remember that LOOP_CONFIGURE is not
available, so that on the next iteration we go directly for LOOP_SET_FD
instead.
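Sketched out, the new behaviour is roughly the following (simplified;
loop_configure_supported stands in for however the real code remembers
this across iterations):

    #include <errno.h>
    #include <linux/loop.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int loop_configure_supported = -1;   /* -1 = unknown, 0 = no, 1 = yes */

    int attach_backing_file(int loop_fd, int backing_fd) {
            if (loop_configure_supported != 0) {
                    struct loop_config config;
                    memset(&config, 0, sizeof(config));
                    config.fd = backing_fd;

                    if (ioctl(loop_fd, LOOP_CONFIGURE, &config) >= 0) {
                            loop_configure_supported = 1;
                            return 0;
                    }
                    if (errno == EBUSY)
                            return -EBUSY;           /* propagate, don't fall back */
                    if (errno != EINVAL && errno != ENOTTY)
                            return -errno;

                    loop_configure_supported = 0;    /* old kernel, skip it next time */
            }

            /* Classic fallback for kernels without LOOP_CONFIGURE. */
            if (ioctl(loop_fd, LOOP_SET_FD, backing_fd) < 0)
                    return -errno;

            return 0;
    }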
v246 is long released. Hence the new scheme should be named v247.
(Interesting how, for the last few releases, we pretty systematically
changed the scheme only every second release.)