* Use more secure https://www.uefi.org
http://www.uefi.org redirects to https://uefi.org/, so this saves one
redirect.
$ curl -I http://www.uefi.org
HTTP/1.1 302 Found
Server: nginx
Date: Tue, 09 Apr 2019 14:54:46 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
X-Content-Type-Options: nosniff
Location: https://uefi.org/
Cache-Control: max-age=1209600
Expires: Tue, 23 Apr 2019 14:54:46 GMT
Run the command below to update all occurrences.
git grep -l http://www.uefi.org | xargs sed -i 's,http://www.uefi.org,https://www.uefi.org,'
* Use https://uefi.org to save a redirect
Save one redirect by using the target location.
$ curl -I https://www.uefi.org
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Tue, 09 Apr 2019 14:55:42 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
X-Content-Type-Options: nosniff
Location: https://uefi.org/
Cache-Control: max-age=1209600
Expires: Tue, 23 Apr 2019 14:55:42 GMT
Run the command below to update all occurrences.
git grep -l https://www.uefi.org | xargs sed -i 's,https://www.uefi.org,https://uefi.org,'
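To double-check that no further redirect remains, one would expect the
final URL to answer directly (a 200 response without a Location header):
$ curl -I https://uefi.org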
This should make PID collisions a tiny bit less likely, and thus improve
security and robustness.
2^22 isn't particularly a lot either, but it's the highest value the
current kernel allows.
Bumping this limit was suggested by Linus himself:
https://lwn.net/ml/linux-kernel/CAHk-=wiZ40LVjnXSi9iHLE_-ZBsWFGCgdmNiYZUXn1-V5YBg2g@mail.gmail.com/
Let's experiment with this in systemd upstream first. Downstreams and
users can, after all, still easily comment this out.
Besides compatibility concerns, the most frequently heard issue with such
high PIDs is usability, since they are potentially hard to type. I am not
entirely sure, though, whether 4194304 (as the largest new PID) is that
much worse to type or to copy than 65536.
This should also simplify the management of per-system task limits, since
with this change the sysctl /proc/sys/kernel/threads-max becomes the
primary knob for controlling how many processes may run in parallel.
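As a hedged sketch of how a downstream or user could inspect or override
the new default, assuming it is shipped as a regular sysctl.d snippet
(the drop-in file name below is illustrative):
# Inspect the current limits:
$ sysctl kernel.pid_max kernel.threads-max
# Override the shipped default with a local drop-in:
$ echo 'kernel.pid_max = 65536' > /etc/sysctl.d/90-pid-max.conf
# Reload all sysctl configuration:
$ sysctl --system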
This adds a new per-service OOMPolicy= (along with a global
DefaultOOMPolicy=) that controls what to do if a process of the service
is killed by the kernel's OOM killer. It has three different values:
"continue" (old behaviour), "stop" (terminate the service), "kill" (let
the kernel kill all the service's processes).
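For illustration, a minimal sketch of setting the new option via a unit
drop-in (the service name "example.service" is hypothetical):
$ mkdir -p /etc/systemd/system/example.service.d
$ cat >/etc/systemd/system/example.service.d/oom.conf <<'EOF'
[Service]
OOMPolicy=kill
EOF
$ systemctl daemon-reload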
On top of that, track OOM killer events per unit: generate a per-unit
structured, recognizable log message when we see an OOM killer event,
and put the service in a failure state if an OOM killer event was seen
and the selected policy was not "continue". A new "result" is defined
for this case: "oom-kill".
All of this relies on new cgroupv2 kernel functionality: the
"memory.events" notification interface and the "memory.oom.group"
attribute (which makes the kernel kill all cgroup processes
automatically).
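As a sketch of the kernel side, these attributes can be inspected
directly under a unit's cgroup (the path and counter values are
illustrative; the fields match the cgroupv2 memory controller
documentation):
$ cat /sys/fs/cgroup/system.slice/example.service/memory.events
low 0
high 0
max 0
oom 0
oom_kill 0
$ cat /sys/fs/cgroup/system.slice/example.service/memory.oom.group
1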
Let's rename the .cgroup_inotify_wd field of the Unit object to
.cgroup_control_inotify_wd. Let's similarly rename the hashmap
.cgroup_inotify_wd_unit of the Manager object to
.cgroup_control_inotify_wd_unit.
Why? As preparation for a later commit that allows us to watch the
"memory.events" cgroup attribute file in addition to the "cgroup.events"
file we already watch with the fields above. In that later commit we'll
add new fields "cgroup_memory_inotify_wd" to Unit and
"cgroup_memory_inotify_wd_unit" to Manager, that are used to watch these
other events file.
No change in behaviour. Just some renaming.
So far the priorities for cgroup empty event handling were pretty weird.
The raw events (on cgroupsv2 from inotify, on cgroupsv1 from the agent
dgram socket) were scheduled at a lower priority than the cgroup empty
queue dispatcher. Let's swap that and ensure that we can coalesce events
more aggressively: let's process the raw events at a higher priority than
the cgroup empty event (which remains at the same priority).
The description of NamePolicy= implied this, but didn't spell it out. It's
a very common use case, so let's add a bit of explanation and enhance the
example.
Inspired by https://bugzilla.redhat.com/show_bug.cgi?id=1695894.
time-sync.target is supposed to indicate that the system clock is
synchronized with a remote clock, but as used through 241 it only provided
a system clock that was updated based on a locally maintained timestamp.
Systems that are powered off for extended periods would not come up with
accurate time.
Retain the existing behavior via a new time-set.target, leaving
time-sync.target for cases where accuracy is required.
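A minimal sketch of a consumer of the stronger guarantee, assuming a
hypothetical service that must only run once the clock is accurate:
$ cat /etc/systemd/system/needs-accurate-time.service
[Unit]
Description=Hypothetical job that requires a synchronized clock
Wants=time-sync.target
After=time-sync.target

[Service]
ExecStart=/usr/bin/true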
Closes #8861