/* SPDX-License-Identifier: LGPL-2.1+ */
#ifndef foosdeventhfoo
#define foosdeventhfoo

/***
  systemd is free software; you can redistribute it and/or modify it
  under the terms of the GNU Lesser General Public License as published by
  the Free Software Foundation; either version 2.1 of the License, or
  (at your option) any later version.

  systemd is distributed in the hope that it will be useful, but
  WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  Lesser General Public License for more details.

  You should have received a copy of the GNU Lesser General Public License
  along with systemd; If not, see <http://www.gnu.org/licenses/>.
***/

#include <inttypes.h>
#include <signal.h>
#include <sys/epoll.h>
#include <sys/inotify.h>
#include <sys/signalfd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>

#include "_sd-common.h"

/*
  Why is this better than pure epoll?

  - Supports event source prioritization
  - Scales better with a large number of time events because it does not require one timerfd each
  - Automatically tries to coalesce timer events system-wide
  - Handles signals, child PIDs, inotify events
  - Supports systemd-style automatic watchdog event generation
*/

_SD_BEGIN_DECLARATIONS;

#define SD_EVENT_DEFAULT ((sd_event *) 1)

typedef struct sd_event sd_event;
typedef struct sd_event_source sd_event_source;

enum {
        SD_EVENT_OFF = 0,
        SD_EVENT_ON = 1,
        SD_EVENT_ONESHOT = -1
};

enum {
        SD_EVENT_INITIAL,
        SD_EVENT_ARMED,
        SD_EVENT_PENDING,
        SD_EVENT_RUNNING,
        SD_EVENT_EXITING,
        SD_EVENT_FINISHED,
        SD_EVENT_PREPARING
};

enum {
        /* And everything in-between and outside is good too */
        SD_EVENT_PRIORITY_IMPORTANT = -100,
        SD_EVENT_PRIORITY_NORMAL = 0,
        SD_EVENT_PRIORITY_IDLE = 100
};

typedef int (*sd_event_handler_t)(sd_event_source *s, void *userdata);
typedef int (*sd_event_io_handler_t)(sd_event_source *s, int fd, uint32_t revents, void *userdata);
typedef int (*sd_event_time_handler_t)(sd_event_source *s, uint64_t usec, void *userdata);
typedef int (*sd_event_signal_handler_t)(sd_event_source *s, const struct signalfd_siginfo *si, void *userdata);
#if defined _GNU_SOURCE || (defined _POSIX_C_SOURCE && _POSIX_C_SOURCE >= 199309L)
typedef int (*sd_event_child_handler_t)(sd_event_source *s, const siginfo_t *si, void *userdata);
#else
typedef void* sd_event_child_handler_t;
#endif
typedef int (*sd_event_inotify_handler_t)(sd_event_source *s, const struct inotify_event *event, void *userdata);
typedef _sd_destroy_t sd_event_destroy_t;

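/* Usage sketch (illustrative, not part of the API): a minimal
   sd_event_io_handler_t implementation. The handler name and its fd-draining
   body are hypothetical and assume <unistd.h> and <errno.h>; returning 0
   keeps the event source installed, while a negative errno-style value
   signals failure to the event loop.

        static int on_io(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
                char buf[256];

                if (revents & EPOLLIN)
                        if (read(fd, buf, sizeof(buf)) < 0)
                                return -errno;

                return 0;
        }
*/
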
int sd_event_default(sd_event **e);

int sd_event_new(sd_event **e);
sd_event* sd_event_ref(sd_event *e);
sd_event* sd_event_unref(sd_event *e);

int sd_event_add_io(sd_event *e, sd_event_source **s, int fd, uint32_t events, sd_event_io_handler_t callback, void *userdata);
int sd_event_add_time(sd_event *e, sd_event_source **s, clockid_t clock, uint64_t usec, uint64_t accuracy, sd_event_time_handler_t callback, void *userdata);
int sd_event_add_time_relative(sd_event *e, sd_event_source **s, clockid_t clock, uint64_t usec, uint64_t accuracy, sd_event_time_handler_t callback, void *userdata);
int sd_event_add_signal(sd_event *e, sd_event_source **s, int sig, sd_event_signal_handler_t callback, void *userdata);
int sd_event_add_child(sd_event *e, sd_event_source **s, pid_t pid, int options, sd_event_child_handler_t callback, void *userdata);
int sd_event_add_child_pidfd(sd_event *e, sd_event_source **s, int pidfd, int options, sd_event_child_handler_t callback, void *userdata);
int sd_event_add_inotify(sd_event *e, sd_event_source **s, const char *path, uint32_t mask, sd_event_inotify_handler_t callback, void *userdata);
int sd_event_add_defer(sd_event *e, sd_event_source **s, sd_event_handler_t callback, void *userdata);
int sd_event_add_post(sd_event *e, sd_event_source **s, sd_event_handler_t callback, void *userdata);
int sd_event_add_exit(sd_event *e, sd_event_source **s, sd_event_handler_t callback, void *userdata);

int sd_event_prepare(sd_event *e);
int sd_event_wait(sd_event *e, uint64_t usec);
int sd_event_dispatch(sd_event *e);
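/* Usage sketch (illustrative): one iteration of the prepare/wait/dispatch
   cycle, as needed when embedding sd-event into a foreign event loop. The
   function name is hypothetical.

        static int run_one_iteration(sd_event *e, uint64_t timeout_usec) {
                int r;

                r = sd_event_prepare(e);                 // > 0 if sources are already pending
                if (r == 0)
                        r = sd_event_wait(e, timeout_usec);  // 0 on timeout, > 0 if ready
                if (r > 0)
                        r = sd_event_dispatch(e);        // dispatch pending sources

                return r;                                // < 0 on error
        }
*/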
int sd_event_run(sd_event *e, uint64_t usec);
int sd_event_loop(sd_event *e);
int sd_event_exit(sd_event *e, int code);

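/* Usage sketch (illustrative): a minimal program built on this API. It
   allocates the thread's default event loop, arms a timer one second in the
   future, and runs the loop until the handler requests termination via
   sd_event_exit(). All identifiers other than the sd_event_* calls are
   hypothetical; CLOCK_MONOTONIC comes from <time.h>.

        static int on_timer(sd_event_source *s, uint64_t usec, void *userdata) {
                return sd_event_exit(sd_event_source_get_event(s), 0);
        }

        int main(void) {
                sd_event *e = NULL;
                uint64_t now_usec;

                if (sd_event_default(&e) < 0)
                        return 1;

                (void) sd_event_now(e, CLOCK_MONOTONIC, &now_usec);
                if (sd_event_add_time(e, NULL, CLOCK_MONOTONIC, now_usec + 1000000, 0, on_timer, NULL) < 0)
                        return 1;

                if (sd_event_loop(e) < 0)   // returns the code passed to sd_event_exit()
                        return 1;

                sd_event_unref(e);
                return 0;
        }
*/
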
int sd_event_now(sd_event *e, clockid_t clock, uint64_t *usec);

int sd_event_get_fd(sd_event *e);
int sd_event_get_state(sd_event *e);
int sd_event_get_tid(sd_event *e, pid_t *tid);
int sd_event_get_exit_code(sd_event *e, int *code);
int sd_event_set_watchdog(sd_event *e, int b);
int sd_event_get_watchdog(sd_event *e);
int sd_event_get_iteration(sd_event *e, uint64_t *ret);

sd_event_source* sd_event_source_ref(sd_event_source *s);
sd_event_source* sd_event_source_unref(sd_event_source *s);
sd_event_source* sd_event_source_disable_unref(sd_event_source *s);

sd_event *sd_event_source_get_event(sd_event_source *s);
void* sd_event_source_get_userdata(sd_event_source *s);
void* sd_event_source_set_userdata(sd_event_source *s, void *userdata);

int sd_event_source_set_description(sd_event_source *s, const char *description);
int sd_event_source_get_description(sd_event_source *s, const char **description);
int sd_event_source_set_prepare(sd_event_source *s, sd_event_handler_t callback);
int sd_event_source_get_pending(sd_event_source *s);
int sd_event_source_get_priority(sd_event_source *s, int64_t *priority);
int sd_event_source_set_priority(sd_event_source *s, int64_t priority);
int sd_event_source_get_enabled(sd_event_source *s, int *enabled);
int sd_event_source_set_enabled(sd_event_source *s, int enabled);
int sd_event_source_get_io_fd(sd_event_source *s);
int sd_event_source_set_io_fd(sd_event_source *s, int fd);
int sd_event_source_get_io_fd_own(sd_event_source *s);
int sd_event_source_set_io_fd_own(sd_event_source *s, int own);
int sd_event_source_get_io_events(sd_event_source *s, uint32_t* events);
int sd_event_source_set_io_events(sd_event_source *s, uint32_t events);
int sd_event_source_get_io_revents(sd_event_source *s, uint32_t* revents);
int sd_event_source_get_time(sd_event_source *s, uint64_t *usec);
int sd_event_source_set_time(sd_event_source *s, uint64_t usec);
int sd_event_source_set_time_relative(sd_event_source *s, uint64_t usec);
int sd_event_source_get_time_accuracy(sd_event_source *s, uint64_t *usec);
int sd_event_source_set_time_accuracy(sd_event_source *s, uint64_t usec);
int sd_event_source_get_time_clock(sd_event_source *s, clockid_t *clock);
int sd_event_source_get_signal(sd_event_source *s);
int sd_event_source_get_child_pid(sd_event_source *s, pid_t *pid);
int sd_event_source_get_child_pidfd(sd_event_source *s);
int sd_event_source_get_child_pidfd_own(sd_event_source *s);
int sd_event_source_set_child_pidfd_own(sd_event_source *s, int own);
int sd_event_source_get_child_process_own(sd_event_source *s);
int sd_event_source_set_child_process_own(sd_event_source *s, int own);
#if defined _GNU_SOURCE || (defined _POSIX_C_SOURCE && _POSIX_C_SOURCE >= 199309L)
int sd_event_source_send_child_signal(sd_event_source *s, int sig, const siginfo_t *si, unsigned flags);
#else
int sd_event_source_send_child_signal(sd_event_source *s, int sig, const void *si, unsigned flags);
#endif
int sd_event_source_get_inotify_mask(sd_event_source *s, uint32_t *ret);
int sd_event_source_set_destroy_callback(sd_event_source *s, sd_event_destroy_t callback);
int sd_event_source_get_destroy_callback(sd_event_source *s, sd_event_destroy_t *ret);
int sd_event_source_get_floating(sd_event_source *s);
int sd_event_source_set_floating(sd_event_source *s, int b);
int sd_event_source_get_exit_on_failure(sd_event_source *s);
int sd_event_source_set_exit_on_failure(sd_event_source *s, int b);

/* Define helpers so that __attribute__((cleanup(sd_event_unrefp))) and similar may be used. */
_SD_DEFINE_POINTER_CLEANUP_FUNC(sd_event, sd_event_unref);
_SD_DEFINE_POINTER_CLEANUP_FUNC(sd_event_source, sd_event_source_unref);
_SD_DEFINE_POINTER_CLEANUP_FUNC(sd_event_source, sd_event_source_disable_unref);
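/* Usage sketch (illustrative): the *_unrefp() helpers defined above are meant
   for gcc/clang's cleanup attribute. Client code typically defines its own
   wrapper macro; the macro and function names below are hypothetical.

        #define _cleanup_(f) __attribute__((cleanup(f)))

        int use_default_loop(void) {
                _cleanup_(sd_event_unrefp) sd_event *e = NULL;
                int r;

                r = sd_event_default(&e);
                if (r < 0)
                        return r;

                return sd_event_loop(e);   // e is unref'ed automatically on scope exit
        }
*/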
|
tree-wide: expose "p"-suffix unref calls in public APIs to make gcc cleanup easy
GLIB has recently started to officially support the gcc cleanup
attribute in its public API, hence let's do the same for our APIs.
With this patch we'll define an xyz_unrefp() call for each public
xyz_unref() call, to make it easy to use inside a
__attribute__((cleanup())) expression. Then, all code is ported over to
make use of this.
The new calls are also documented in the man pages, with examples how to
use them (well, I only added docs where the _unref() call itself already
had docs, and the examples, only cover sd_bus_unrefp() and
sd_event_unrefp()).
This also renames sd_lldp_free() to sd_lldp_unref(), since that's how we
tend to call our destructors these days.
Note that this defines no public macro that wraps gcc's attribute and
makes it easier to use. While I think it's our duty in the library to
make our stuff easy to use, I figure it's not our duty to make gcc's own
features easy to use on its own. Most likely, client code which wants to
make use of this should define its own:
#define _cleanup_(function) __attribute__((cleanup(function)))
Or similar, to make the gcc feature easier to use.
Making this logic public has the benefit that we can remove three header
files whose only purpose was to define these functions internally.
See #2008.
2015-11-27 19:13:45 +01:00
|
|
|
|
2013-11-07 16:44:48 +01:00
|
|
|
_SD_END_DECLARATIONS;
#endif