[PATCH] split udev into pieces
On Thu, Jan 22, 2004 at 01:27:45AM +0100, Kay Sievers wrote:
> On Wed, Jan 21, 2004 at 02:38:25PM +0100, Kay Sievers wrote:
> > On Thu, Jan 15, 2004 at 01:45:10PM -0800, Greg KH wrote:
> > > On Thu, Jan 15, 2004 at 10:36:25PM +0800, Ling, Xiaofeng wrote:
> > > > Hi, Greg
> > > > I wrote a simple implementation of the two pieces
> > > > that send and receive the hotplug event,
> > > > using a message queue and a list for out-of-order
> > > > hotplug events. It also has a timeout timer of 3 seconds.
> > > > They are now separate programs; the file nseq is the test script.
> > > > Could you have a look to see whether it is feasible?
> > > > If so, I'll continue to merge it with udev.
> > >
> > > Yes, very nice start. Please continue on.
> > >
> > > One minor comment: please stick with the kernel coding style when you
> > > are writing new code for udev.
> >
> > I took the code from Xiaofeng, cleaned the whitespace, renamed some bits,
> > tweaked the debugging, added the udev exec and created a patch for the current tree.
> >
> > It seems functional now, simply executing our current udev (dirty hack).
> > It reorders the incoming events, and if one is missing it delays the
> > execution of the following ones up to a maximum of 10 seconds.
> >
> > A test script is included, but you can't mix hotplug sequence numbers and
> > test script numbers; that will result in waiting for the missing numbers :)
>
> Hey, nobody wants to play with me?
> So here I am, chatting with myself :)
>
> This is the next version, with signal handling for resetting the expected
> sequence number. I changed the behaviour of the timeout to skip all
> missing events at once and to proceed with the next event in the queue.
>
> So it's now possible to use the test script at any time, because it resets
> the daemon; if real hotplug events come in later, all missing numbers will
> be skipped after a timeout of 10 seconds and the queued events are applied.
Here is the next updated version, to apply to the latest udev.
I've added infrastructure for getting the state of the IPC queue in the
sender and set the program to be exec'd by the daemon. Also, the magic key id
is replaced by the usual key generation from path/nr.
It looks promising: I use it on my machine, and my 4-in-1 USB flash reader
emits connect/disconnect events "randomly", but udevd is able to
reorder them and call our normal udev in the right order.
2004-01-23 09:28:57 +01:00
/*
 * Copyright (C) 2004-2012 Kay Sievers <kay@vrfy.org>
 * Copyright (C) 2004 Chris Friesen <chris_friesen@sympatico.ca>
 * Copyright (C) 2009 Canonical Ltd.
 * Copyright (C) 2009 Scott James Remnant <scott@netsplit.com>
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#include <stddef.h>
#include <signal.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>
#include <fcntl.h>
#include <getopt.h>
#include <sys/file.h>
#include <sys/time.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/signalfd.h>
#include <sys/epoll.h>
#include <sys/mount.h>
#include <sys/wait.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <sys/inotify.h>

#include "sd-daemon.h"
#include "sd-event.h"

#include "terminal-util.h"
#include "signal-util.h"
#include "event-util.h"
#include "netlink-util.h"
#include "cgroup-util.h"
#include "process-util.h"
#include "dev-setup.h"
#include "fileio.h"
#include "selinux-util.h"
#include "udev.h"
#include "udev-util.h"
#include "formats-util.h"
#include "hashmap.h"

static bool arg_debug = false;
static int arg_daemonize = false;
static int arg_resolve_names = 1;
static unsigned arg_children_max;
static int arg_exec_delay;
static usec_t arg_event_timeout_usec = 180 * USEC_PER_SEC;
static usec_t arg_event_timeout_warn_usec = 180 * USEC_PER_SEC / 3;

typedef struct Manager {
        struct udev *udev;
        sd_event *event;
        Hashmap *workers;
        struct udev_list_node events;
        const char *cgroup;
        pid_t pid; /* the process that originally allocated the manager object */

        struct udev_rules *rules;
        struct udev_list properties;

        struct udev_monitor *monitor;
        struct udev_ctrl *ctrl;
        struct udev_ctrl_connection *ctrl_conn_blocking;
        int fd_inotify;
        int worker_watch[2];

        sd_event_source *ctrl_event;
        sd_event_source *uevent_event;
        sd_event_source *inotify_event;

        usec_t last_usec;

        bool stop_exec_queue:1;
        bool exit:1;
} Manager;

enum event_state {
        EVENT_UNDEF,
        EVENT_QUEUED,
        EVENT_RUNNING,
};

struct event {
        struct udev_list_node node;
        Manager *manager;
        struct udev *udev;
        struct udev_device *dev;
        struct udev_device *dev_kernel;
        struct worker *worker;
        enum event_state state;

        unsigned long long int delaying_seqnum;
        unsigned long long int seqnum;
        const char *devpath;
        size_t devpath_len;
        const char *devpath_old;
        dev_t devnum;
        int ifindex;
        bool is_block;

        sd_event_source *timeout_warning;
        sd_event_source *timeout;
};

static inline struct event *node_to_event(struct udev_list_node *node) {
        return container_of(node, struct event, node);
}

static void event_queue_cleanup(Manager *manager, enum event_state type);

enum worker_state {
        WORKER_UNDEF,
        WORKER_RUNNING,
        WORKER_IDLE,
        WORKER_KILLED,
};

struct worker {
        Manager *manager;
        struct udev_list_node node;
        int refcount;
        pid_t pid;
        struct udev_monitor *monitor;
        enum worker_state state;
        struct event *event;
};

/* passed from worker to main process */
struct worker_message {
};

static void event_free(struct event *event) {
        int r;

        if (!event)
                return;

        udev_list_node_remove(&event->node);
        udev_device_unref(event->dev);
        udev_device_unref(event->dev_kernel);

        sd_event_source_unref(event->timeout_warning);
        sd_event_source_unref(event->timeout);

        if (event->worker)
                event->worker->event = NULL;

        assert(event->manager);

        if (udev_list_node_is_empty(&event->manager->events)) {
                /* only clean up the queue from the process that created it */
                if (event->manager->pid == getpid()) {
                        r = unlink("/run/udev/queue");
                        if (r < 0)
                                log_warning_errno(errno, "could not unlink /run/udev/queue: %m");
                }
        }

        free(event);
}

static void worker_free(struct worker *worker) {
        if (!worker)
                return;

        assert(worker->manager);

        hashmap_remove(worker->manager->workers, UINT_TO_PTR(worker->pid));
        udev_monitor_unref(worker->monitor);
        event_free(worker->event);

        free(worker);
}
static void manager_workers_free(Manager *manager) {
|
2015-05-06 23:26:25 +02:00
|
|
|
struct worker *worker;
|
|
|
|
Iterator i;
|
2011-04-13 01:17:09 +02:00
|
|
|
|
2015-05-12 18:37:04 +02:00
|
|
|
assert(manager);
|
|
|
|
|
|
|
|
HASHMAP_FOREACH(worker, manager->workers, i)
|
2015-04-27 11:43:31 +02:00
|
|
|
worker_free(worker);
|
2015-05-06 23:26:25 +02:00
|
|
|
|
2015-05-12 18:37:04 +02:00
|
|
|
manager->workers = hashmap_free(manager->workers);
|
2005-11-07 14:10:09 +01:00
|
|
|
}
|
|
|
|
|
static int worker_new(struct worker **ret, Manager *manager, struct udev_monitor *worker_monitor, pid_t pid) {
        _cleanup_free_ struct worker *worker = NULL;
        int r;

        assert(ret);
        assert(manager);
        assert(worker_monitor);
        assert(pid > 1);

        worker = new0(struct worker, 1);
        if (!worker)
                return -ENOMEM;

        worker->refcount = 1;
        worker->manager = manager;
        /* close monitor, but keep address around */
        udev_monitor_disconnect(worker_monitor);
        worker->monitor = udev_monitor_ref(worker_monitor);
        worker->pid = pid;

        r = hashmap_ensure_allocated(&manager->workers, NULL);
        if (r < 0)
                return r;

        r = hashmap_put(manager->workers, UINT_TO_PTR(pid), worker);
        if (r < 0)
                return r;

        *ret = worker;
        worker = NULL;

        return 0;
}
static int on_event_timeout(sd_event_source *s, uint64_t usec, void *userdata) {
        struct event *event = userdata;

        assert(event);
        assert(event->worker);

        kill_and_sigcont(event->worker->pid, SIGKILL);
        event->worker->state = WORKER_KILLED;

        log_error("seq %llu '%s' killed", udev_device_get_seqnum(event->dev), event->devpath);

        return 1;
}

static int on_event_timeout_warning(sd_event_source *s, uint64_t usec, void *userdata) {
        struct event *event = userdata;

        assert(event);

        log_warning("seq %llu '%s' is taking a long time", udev_device_get_seqnum(event->dev), event->devpath);

        return 1;
}
static void worker_attach_event(struct worker *worker, struct event *event) {
        sd_event *e;
        uint64_t usec;

        assert(worker);
        assert(worker->manager);
        assert(event);
        assert(!event->worker);
        assert(!worker->event);

        worker->state = WORKER_RUNNING;
        worker->event = event;
        event->state = EVENT_RUNNING;
        event->worker = worker;

        e = worker->manager->event;

        /* sd_event_now() cannot fail anymore: if the event loop never ran,
         * it falls back to invoking now() internally (a return of > 0 means
         * the fallback was used, == 0 that the cached timestamp was
         * returned). Many callers used to fall back to now() manually
         * anyway, so in void functions like this one the call can simply be
         * protected with assert_se(), which only trips on programming
         * errors in the parameters. (Change inspired by #841.) */
        assert_se(sd_event_now(e, clock_boottime_or_monotonic(), &usec) >= 0);

        (void) sd_event_add_time(e, &event->timeout_warning, clock_boottime_or_monotonic(),
                                 usec + arg_event_timeout_warn_usec, USEC_PER_SEC, on_event_timeout_warning, event);

        (void) sd_event_add_time(e, &event->timeout, clock_boottime_or_monotonic(),
                                 usec + arg_event_timeout_usec, USEC_PER_SEC, on_event_timeout, event);
}
static void manager_free(Manager *manager) {
        if (!manager)
                return;

        udev_builtin_exit(manager->udev);

        sd_event_source_unref(manager->ctrl_event);
        sd_event_source_unref(manager->uevent_event);
        sd_event_source_unref(manager->inotify_event);

        udev_unref(manager->udev);
        sd_event_unref(manager->event);
        manager_workers_free(manager);
        event_queue_cleanup(manager, EVENT_UNDEF);

        udev_monitor_unref(manager->monitor);
        udev_ctrl_unref(manager->ctrl);
        udev_ctrl_connection_unref(manager->ctrl_conn_blocking);

        udev_list_cleanup(&manager->properties);
        udev_rules_unref(manager->rules);

        safe_close(manager->fd_inotify);
        safe_close_pair(manager->worker_watch);

        free(manager);
}

DEFINE_TRIVIAL_CLEANUP_FUNC(Manager*, manager_free);
static int worker_send_message(int fd) {
        struct worker_message message = {};

        return loop_write(fd, &message, sizeof(message), false);
}
static void worker_spawn(Manager *manager, struct event *event) {
        struct udev *udev = event->udev;
        _cleanup_udev_monitor_unref_ struct udev_monitor *worker_monitor = NULL;
        pid_t pid;
        int r = 0;

        /* listen for new events */
        worker_monitor = udev_monitor_new_from_netlink(udev, NULL);
        if (worker_monitor == NULL)
                return;
        /* allow the main daemon netlink address to send devices to the worker */
        udev_monitor_allow_unicast_sender(worker_monitor, manager->monitor);
        r = udev_monitor_enable_receiving(worker_monitor);
        if (r < 0)
                log_error_errno(r, "worker: could not enable receiving of device: %m");

        pid = fork();
        switch (pid) {
        case 0: {
                struct udev_device *dev = NULL;
                _cleanup_netlink_unref_ sd_netlink *rtnl = NULL;
                int fd_monitor;
                _cleanup_close_ int fd_signal = -1, fd_ep = -1;
                struct epoll_event ep_signal = { .events = EPOLLIN };
                struct epoll_event ep_monitor = { .events = EPOLLIN };
                sigset_t mask;

                /* take initial device from queue */
                dev = event->dev;
                event->dev = NULL;

                unsetenv("NOTIFY_SOCKET");

                manager_workers_free(manager);
                event_queue_cleanup(manager, EVENT_UNDEF);

                manager->monitor = udev_monitor_unref(manager->monitor);
                manager->ctrl_conn_blocking = udev_ctrl_connection_unref(manager->ctrl_conn_blocking);
                manager->ctrl = udev_ctrl_unref(manager->ctrl);
                manager->worker_watch[READ_END] = safe_close(manager->worker_watch[READ_END]);

                manager->ctrl_event = sd_event_source_unref(manager->ctrl_event);
                manager->uevent_event = sd_event_source_unref(manager->uevent_event);
                manager->inotify_event = sd_event_source_unref(manager->inotify_event);

                manager->event = sd_event_unref(manager->event);

                sigfillset(&mask);
                fd_signal = signalfd(-1, &mask, SFD_NONBLOCK|SFD_CLOEXEC);
                if (fd_signal < 0) {
                        r = log_error_errno(errno, "error creating signalfd: %m");
                        goto out;
                }
                ep_signal.data.fd = fd_signal;

                fd_monitor = udev_monitor_get_fd(worker_monitor);
                ep_monitor.data.fd = fd_monitor;

                fd_ep = epoll_create1(EPOLL_CLOEXEC);
                if (fd_ep < 0) {
                        r = log_error_errno(errno, "error creating epoll fd: %m");
                        goto out;
                }

                if (epoll_ctl(fd_ep, EPOLL_CTL_ADD, fd_signal, &ep_signal) < 0 ||
                    epoll_ctl(fd_ep, EPOLL_CTL_ADD, fd_monitor, &ep_monitor) < 0) {
                        r = log_error_errno(errno, "failed to add fds to epoll: %m");
                        goto out;
                }

                /* request TERM signal if parent exits */
                prctl(PR_SET_PDEATHSIG, SIGTERM);

                /* reset OOM score, we only protect the main daemon */
                write_string_file("/proc/self/oom_score_adj", "0", 0);

                for (;;) {
                        struct udev_event *udev_event;
                        int fd_lock = -1;

                        assert(dev);

                        log_debug("seq %llu running", udev_device_get_seqnum(dev));
                        udev_event = udev_event_new(dev);
                        if (udev_event == NULL) {
                                r = -ENOMEM;
                                goto out;
                        }

                        if (arg_exec_delay > 0)
                                udev_event->exec_delay = arg_exec_delay;

                        /*
                         * Take a shared lock on the device node; this establishes
                         * a concept of device "ownership" to serialize device
                         * access. External processes holding an exclusive lock will
                         * cause udev to skip the event handling; in the case udev
                         * acquired the lock, the external process can block until
                         * udev has finished its event handling.
                         */
                        if (!streq_ptr(udev_device_get_action(dev), "remove") &&
                            streq_ptr("block", udev_device_get_subsystem(dev)) &&
                            !startswith(udev_device_get_sysname(dev), "dm-") &&
                            !startswith(udev_device_get_sysname(dev), "md")) {
                                struct udev_device *d = dev;

                                if (streq_ptr("partition", udev_device_get_devtype(d)))
                                        d = udev_device_get_parent(d);

                                if (d) {
                                        fd_lock = open(udev_device_get_devnode(d), O_RDONLY|O_CLOEXEC|O_NOFOLLOW|O_NONBLOCK);
                                        if (fd_lock >= 0 && flock(fd_lock, LOCK_SH|LOCK_NB) < 0) {
                                                log_debug_errno(errno, "Unable to flock(%s), skipping event handling: %m", udev_device_get_devnode(d));
                                                fd_lock = safe_close(fd_lock);
                                                goto skip;
                                        }
                                }
                        }

                        /* needed for renaming netifs */
                        udev_event->rtnl = rtnl;

                        /* apply rules, create node, symlinks */
                        udev_event_execute_rules(udev_event,
                                                 arg_event_timeout_usec, arg_event_timeout_warn_usec,
                                                 &manager->properties,
                                                 manager->rules);

                        udev_event_execute_run(udev_event,
                                               arg_event_timeout_usec, arg_event_timeout_warn_usec);

                        if (udev_event->rtnl)
                                /* in case rtnl was initialized */
                                rtnl = sd_netlink_ref(udev_event->rtnl);

                        /* apply/restore inotify watch */
                        if (udev_event->inotify_watch) {
                                udev_watch_begin(udev, dev);
                                udev_device_update_db(dev);
                        }

                        safe_close(fd_lock);

                        /* send processed event back to libudev listeners */
                        udev_monitor_send_device(worker_monitor, NULL, dev);

        skip:
                        log_debug("seq %llu processed", udev_device_get_seqnum(dev));

                        /* send udevd the result of the event execution */
                        r = worker_send_message(manager->worker_watch[WRITE_END]);
                        if (r < 0)
                                log_error_errno(r, "failed to send result of seq %llu to main daemon: %m",
                                                udev_device_get_seqnum(dev));

                        udev_device_unref(dev);
                        dev = NULL;

                        udev_event_unref(udev_event);

                        /* wait for more device messages from main udevd, or term signal */
                        while (dev == NULL) {
                                struct epoll_event ev[4];
                                int fdcount;
                                int i;

                                fdcount = epoll_wait(fd_ep, ev, ELEMENTSOF(ev), -1);
                                if (fdcount < 0) {
                                        if (errno == EINTR)
                                                continue;
                                        r = log_error_errno(errno, "failed to poll: %m");
                                        goto out;
                                }

                                for (i = 0; i < fdcount; i++) {
                                        if (ev[i].data.fd == fd_monitor && ev[i].events & EPOLLIN) {
                                                dev = udev_monitor_receive_device(worker_monitor);
                                                break;
                                        } else if (ev[i].data.fd == fd_signal && ev[i].events & EPOLLIN) {
                                                struct signalfd_siginfo fdsi;
                                                ssize_t size;

                                                size = read(fd_signal, &fdsi, sizeof(struct signalfd_siginfo));
                                                if (size != sizeof(struct signalfd_siginfo))
                                                        continue;
                                                switch (fdsi.ssi_signo) {
                                                case SIGTERM:
                                                        goto out;
                                                }
                                        }
                                }
                        }
                }
out:
                udev_device_unref(dev);
                manager_free(manager);
                log_close();
                _exit(r < 0 ? EXIT_FAILURE : EXIT_SUCCESS);
        }
        case -1:
                event->state = EVENT_QUEUED;
                log_error_errno(errno, "fork of child failed: %m");
                break;
        default:
        {
                struct worker *worker;

                r = worker_new(&worker, manager, worker_monitor, pid);
                if (r < 0)
                        return;

                worker_attach_event(worker, event);

                log_debug("seq %llu forked new worker ["PID_FMT"]", udev_device_get_seqnum(event->dev), pid);
                break;
        }
        }
}
static void event_run(Manager *manager, struct event *event) {
        struct worker *worker;
        Iterator i;

        assert(manager);
        assert(event);

        HASHMAP_FOREACH(worker, manager->workers, i) {
                ssize_t count;

                if (worker->state != WORKER_IDLE)
                        continue;

                count = udev_monitor_send_device(manager->monitor, worker->monitor, event->dev);
                if (count < 0) {
                        log_error_errno(errno, "worker ["PID_FMT"] did not accept message %zi (%m), kill it",
                                        worker->pid, count);
                        kill(worker->pid, SIGKILL);
                        worker->state = WORKER_KILLED;
                        continue;
                }
                worker_attach_event(worker, event);
                return;
        }

        if (hashmap_size(manager->workers) >= arg_children_max) {
                if (arg_children_max > 1)
                        log_debug("maximum number (%i) of children reached", hashmap_size(manager->workers));
                return;
        }

        /* start new worker and pass initial device */
        worker_spawn(manager, event);
}
static int event_queue_insert(Manager *manager, struct udev_device *dev) {
        struct event *event;
        int r;

        assert(manager);
        assert(dev);

        /* only one process can add events to the queue */
        if (manager->pid == 0)
                manager->pid = getpid();

        assert(manager->pid == getpid());

        event = new0(struct event, 1);
        if (!event)
                return -ENOMEM;

        event->udev = udev_device_get_udev(dev);
        event->manager = manager;
        event->dev = dev;
        event->dev_kernel = udev_device_shallow_clone(dev);
        udev_device_copy_properties(event->dev_kernel, dev);
        event->seqnum = udev_device_get_seqnum(dev);
        event->devpath = udev_device_get_devpath(dev);
        event->devpath_len = strlen(event->devpath);
        event->devpath_old = udev_device_get_devpath_old(dev);
        event->devnum = udev_device_get_devnum(dev);
        event->is_block = streq("block", udev_device_get_subsystem(dev));
        event->ifindex = udev_device_get_ifindex(dev);

        log_debug("seq %llu queued, '%s' '%s'", udev_device_get_seqnum(dev),
                  udev_device_get_action(dev), udev_device_get_subsystem(dev));

        event->state = EVENT_QUEUED;

        if (udev_list_node_is_empty(&manager->events)) {
                r = touch("/run/udev/queue");
                if (r < 0)
                        log_warning_errno(r, "could not touch /run/udev/queue: %m");
        }

        udev_list_node_append(&event->node, &manager->events);

        return 0;
}
static void manager_kill_workers(Manager *manager) {
        struct worker *worker;
        Iterator i;

        assert(manager);

        HASHMAP_FOREACH(worker, manager->workers, i) {
                if (worker->state == WORKER_KILLED)
                        continue;

                worker->state = WORKER_KILLED;
                kill(worker->pid, SIGTERM);
        }
}
/* lookup event for identical, parent, child device */
static bool is_devpath_busy(Manager *manager, struct event *event) {
        struct udev_list_node *loop;
        size_t common;

        /* check if queue contains events we depend on */
        udev_list_node_foreach(loop, &manager->events) {
                struct event *loop_event = node_to_event(loop);

                /* we already found a later event, earlier can not block us, no need to check again */
                if (loop_event->seqnum < event->delaying_seqnum)
                        continue;

                /* event we checked earlier still exists, no need to check again */
                if (loop_event->seqnum == event->delaying_seqnum)
                        return true;

                /* found ourself, no later event can block us */
                if (loop_event->seqnum >= event->seqnum)
                        break;

                /* check major/minor */
                if (major(event->devnum) != 0 && event->devnum == loop_event->devnum && event->is_block == loop_event->is_block)
                        return true;

                /* check network device ifindex */
                if (event->ifindex != 0 && event->ifindex == loop_event->ifindex)
                        return true;

                /* check our old name */
                if (event->devpath_old != NULL && streq(loop_event->devpath, event->devpath_old)) {
                        event->delaying_seqnum = loop_event->seqnum;
                        return true;
                }

                /* compare devpath */
                common = MIN(loop_event->devpath_len, event->devpath_len);

                /* one devpath is contained in the other? */
                if (memcmp(loop_event->devpath, event->devpath, common) != 0)
                        continue;

                /* identical device event found */
                if (loop_event->devpath_len == event->devpath_len) {
                        /* devices names might have changed/swapped in the meantime */
                        if (major(event->devnum) != 0 && (event->devnum != loop_event->devnum || event->is_block != loop_event->is_block))
                                continue;
                        if (event->ifindex != 0 && event->ifindex != loop_event->ifindex)
                                continue;
                        event->delaying_seqnum = loop_event->seqnum;
                        return true;
                }

                /* parent device event found */
                if (event->devpath[common] == '/') {
                        event->delaying_seqnum = loop_event->seqnum;
                        return true;
                }

                /* child device event found */
                if (loop_event->devpath[common] == '/') {
                        event->delaying_seqnum = loop_event->seqnum;
                        return true;
                }

                /* no matching device */
                continue;
        }

        return false;
}
static int on_exit_timeout(sd_event_source *s, uint64_t usec, void *userdata) {
        Manager *manager = userdata;

        assert(manager);

        log_error_errno(ETIMEDOUT, "giving up waiting for workers to finish");

        sd_event_exit(manager->event, -ETIMEDOUT);

        return 1;
}
static void manager_exit(Manager *manager) {
        uint64_t usec;
        int r;

        assert(manager);

        manager->exit = true;

        sd_notify(false,
                  "STOPPING=1\n"
                  "STATUS=Starting shutdown...");

        /* close sources of new events and discard buffered events */
        manager->ctrl_event = sd_event_source_unref(manager->ctrl_event);
udev: don't close FDs before dropping them from epoll
Make sure we never close fds before we drop their related event source.
Closing first causes horrible disruptions if the fd number is re-used by
someone else. Under normal conditions this is not a problem, as close()
drops the fd from the epoll set automatically. However, that changes as
soon as a child process holds a copy of the fd.
This fixes issue #163.
Background:
If you create an epoll-set via epoll_create() (lets call it 'EFD')
you can add file-descriptors to it to watch for events. Whenever
you call EPOLL_CTL_ADD on a file-descriptor you want to watch, the
kernel looks up the attached "struct file" pointer, that this FD
refers to. This combination of the FD-number and the "struct file"
pointer is used as key to link it into the epoll-set (EFD).
This means, if you duplicate your file-descriptor, you can watch
the duplicate, too (because it has a different FD-number, hence the
combination of FD-number and "struct file" differs from before).
If you want to stop watching an FD, you use EPOLL_CTL_DEL and pass
the FD to the kernel. The kernel again looks up your
file-descriptor in your FD-table to find the linked "struct file".
This FD-number and "struct file" combination is then dropped from
the epoll-set (EFD).
Last, but not least: If you close a file-descriptor that is linked
to an epoll-set, the kernel does *NOTHING* regarding the
epoll-set. This is a vital observation! Because this means, your
epoll_wait() calls will still return the metadata you used to
watch/subscribe your file-descriptor to events.
There is one exception to this rule: If the file-descriptor that
you just close()ed was the last FD that referred to the underlying
"struct file", then _all_ epoll-set watches/subscriptions are
destroyed. Hence, if you never dup()ed your FD, then a simple
close() will also unsubscribe it from any epoll-set.
With this in mind, let's look at fork():
Assume you have an epoll-set (EFD) and a bunch of FDs
subscribed to events on that EFD. If you now call fork(),
the new process gets a copy of your file-descriptor table.
This means, the whole table is copied and the "struct
file" reference of each FD is increased by 1. It is
important to notice that the FD-numbers in the child are
exactly the same as in the parent (e.g., FD #5 in the child
refers to the same "struct file" as FD #5 in the parent).
This means, if the child calls EPOLL_CTL_DEL on an FD, the
kernel will look up the linked "struct file" and drop the
FD-number and "struct file" combination from the epoll-set
(EFD). However, this will effectively drop the
subscription that was installed by the parent.
To sum up: even though the child gets a duplicate of the
EFD and all FDs, the subscriptions in the EFD are *NOT*
duplicated!
Now, with this in mind, let's look at what udevd does:
Udevd has a bunch of file-descriptors that it watches in its
sd-event main-loop. Whenever a uevent is received, the event is
dispatched on its workers. If no suitable worker is present, a new
worker is fork()ed to handle the event. Inside of this worker, we
try to free all resources we inherited. However, the fork() call
is done from a call-stack that is never rewound. Therefore, this
call stack might own references that it drops once it is left.
Those references we cannot deduce from the fork()'ed process;
effectively causing us to leak objects in the worker (e.g., the
call to sd_event_dispatch() that dispatched our uevent owns a
reference to the sd_event object it used; and drops it again once
the function is left).
(Another example is udev_monitor_ref() for each 'worker' that is
also inherited by all children; thus keeping the udev-monitor and
the uevent-fd alive in all children (which is the real cause for
bug #163))
(The extreme variant is sd_event_source_unref(), which explicitly
keeps event-sources alive, if they're currently dispatched,
knowing that the dispatcher will free the event once done. But
if the dispatcher is in the parent, the child will never ever
free that object, thus leaking it)
This is usually not an issue. However, if such an object has a
file-descriptor embedded, this FD is left open and never closed in
the child.
In manager_exit(), if we now destroy an object (i.e., close its embedded
file-descriptor) before we destroy its related sd_event_source, then
sd-event will not be able to drop the FD from the epoll-set (EFD). This
is because the FD is no longer valid at the time we call EPOLL_CTL_DEL.
Hence, the kernel cannot figure out the linked "struct file" and thus
cannot remove the FD-number plus "struct file" combination; effectively
leaving the subscription in the epoll-set.
Since we leak the uevent-fd in the children, they retain a copy of the FD
pointing to the same "struct file". Thus, the EFD subscriptions are not
automatically removed by close() (as described above). Therefore, the main
daemon will still get its metadata back from epoll_wait() whenever an event
occurs (even though it already freed the metadata). This then causes the
use-after-free bug described in #163.
This patch fixes the order in which we destruct objects and related
sd-event-sources. Some open questions remain:
* Why does source_io_unregister() not warn on EPOLL_CTL_DEL failures?
This really needs to be turned into an assert_return().
* udevd really should not leak file-descriptors into its children. Fixing
this would *not* have prevented this bug, though (since the child-setup
is still async).
It's non-trivial to fix this, though. The stack-context of the caller
cannot be rewound, so we cannot figure out temporary refs. Maybe it's
time to exec() the udev-workers?
* Why does the kernel not copy FD-subscriptions across fork()?
Or at least drop subscriptions if you close() your FD (it uses the
FD-number as key, so it better subscribe to it)?
Or it had better use
FD+"struct file_table*"+"struct file*"
as key, to not allow the children to share the subscription table..
*sigh*
Seems like we have to live with that API forever.
        manager->ctrl = udev_ctrl_unref(manager->ctrl);

        manager->inotify_event = sd_event_source_unref(manager->inotify_event);
        manager->fd_inotify = safe_close(manager->fd_inotify);

        manager->uevent_event = sd_event_source_unref(manager->uevent_event);
        manager->monitor = udev_monitor_unref(manager->monitor);

        /* discard queued events and kill workers */
        event_queue_cleanup(manager, EVENT_QUEUED);
        manager_kill_workers(manager);

sd-event: make sure sd_event_now() cannot fail
Previously, if the event loop had never run, sd_event_now() would
fail. With this change it will instead fall back to invoking now(). This
way, the function cannot fail anymore, except for programming errors when
invoking it with wrong parameters.
This takes into account the fact that many callers did not handle the
error condition correctly, and if the callers did, then they kept simply
invoking now() as fall back on their own. Hence let's shorten the code
using this call, and make things more robust, and let's just fall back
to now() internally.
Whether now() or the cached timestamp was used may still be detected via
the return value of sd_event_now(): if > 0 is returned, the fall back to
now() was used; if == 0 is returned, the cached value was returned.
This patch also simplifies many of the invocations of sd_event_now():
the manual fall back to now() can be removed. Also, in cases where the
call is invoked within void functions we can now protect the invocation
via assert_se(), acknowledging the fact that the call cannot fail
anymore except for programming errors with the parameters.
This change is inspired by #841.
        assert_se(sd_event_now(manager->event, clock_boottime_or_monotonic(), &usec) >= 0);

        r = sd_event_add_time(manager->event, NULL, clock_boottime_or_monotonic(),
                              usec + 30 * USEC_PER_SEC, USEC_PER_SEC, on_exit_timeout, manager);
        if (r < 0)
                return;
}

/* reload requested, HUP signal received, rules changed, builtin changed */
static void manager_reload(Manager *manager) {

        assert(manager);

        sd_notify(false,
                  "RELOADING=1\n"
                  "STATUS=Flushing configuration...");

        manager_kill_workers(manager);
        manager->rules = udev_rules_unref(manager->rules);
        udev_builtin_exit(manager->udev);

        sd_notify(false,
                  "READY=1\n"
                  "STATUS=Processing...");
}

static void event_queue_start(Manager *manager) {
        struct udev_list_node *loop;
        usec_t usec;

        assert(manager);

        if (udev_list_node_is_empty(&manager->events) ||
            manager->exit || manager->stop_exec_queue)
                return;

        assert_se(sd_event_now(manager->event, clock_boottime_or_monotonic(), &usec) >= 0);

        /* check for changed config, every 3 seconds at most */
        if (manager->last_usec == 0 ||
            (usec - manager->last_usec) > 3 * USEC_PER_SEC) {
                if (udev_rules_check_timestamp(manager->rules) ||
                    udev_builtin_validate(manager->udev))
                        manager_reload(manager);
                manager->last_usec = usec;
        }

        udev_builtin_init(manager->udev);

        if (!manager->rules) {
                manager->rules = udev_rules_new(manager->udev, arg_resolve_names);
                if (!manager->rules)
                        return;
        }

        udev_list_node_foreach(loop, &manager->events) {
                struct event *event = node_to_event(loop);

                if (event->state != EVENT_QUEUED)
                        continue;

                /* do not start event if parent or child event is still running */
                if (is_devpath_busy(manager, event))
                        continue;

                event_run(manager, event);
        }
}

static void event_queue_cleanup(Manager *manager, enum event_state match_type) {
        struct udev_list_node *loop, *tmp;

        udev_list_node_foreach_safe(loop, tmp, &manager->events) {
                struct event *event = node_to_event(loop);

                if (match_type != EVENT_UNDEF && match_type != event->state)
                        continue;

                event_free(event);
        }
}

static int on_worker(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
        Manager *manager = userdata;

        assert(manager);

        for (;;) {
                struct worker_message msg;
                struct iovec iovec = {
                        .iov_base = &msg,
                        .iov_len = sizeof(msg),
                };
                union {
                        struct cmsghdr cmsghdr;
                        uint8_t buf[CMSG_SPACE(sizeof(struct ucred))];
                } control = {};
                struct msghdr msghdr = {
                        .msg_iov = &iovec,
                        .msg_iovlen = 1,
                        .msg_control = &control,
                        .msg_controllen = sizeof(control),
                };
                struct cmsghdr *cmsg;
                ssize_t size;
                struct ucred *ucred = NULL;
                struct worker *worker;

                size = recvmsg(fd, &msghdr, MSG_DONTWAIT);
                if (size < 0) {
                        if (errno == EINTR)
                                continue;
                        else if (errno == EAGAIN)
                                /* nothing more to read */
                                break;

                        return log_error_errno(errno, "failed to receive message: %m");
                } else if (size != sizeof(struct worker_message)) {
                        log_warning_errno(EIO, "ignoring worker message with invalid size %zi bytes", size);
                        continue;
                }

                CMSG_FOREACH(cmsg, &msghdr) {
                        if (cmsg->cmsg_level == SOL_SOCKET &&
                            cmsg->cmsg_type == SCM_CREDENTIALS &&
                            cmsg->cmsg_len == CMSG_LEN(sizeof(struct ucred)))
                                ucred = (struct ucred*) CMSG_DATA(cmsg);
                }

                if (!ucred || ucred->pid <= 0) {
                        log_warning_errno(EIO, "ignoring worker message without valid PID");
                        continue;
                }

                /* lookup worker who sent the signal */
                worker = hashmap_get(manager->workers, UINT_TO_PTR(ucred->pid));
                if (!worker) {
                        log_debug("worker ["PID_FMT"] returned, but is no longer tracked", ucred->pid);
                        continue;
                }

                if (worker->state != WORKER_KILLED)
                        worker->state = WORKER_IDLE;

                /* worker returned */
                event_free(worker->event);
        }

        /* we have free workers, try to schedule events */
        event_queue_start(manager);

        return 1;
}

static int on_uevent(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
        Manager *manager = userdata;
        struct udev_device *dev;
        int r;

        assert(manager);

        dev = udev_monitor_receive_device(manager->monitor);
        if (dev) {
                udev_device_ensure_usec_initialized(dev, NULL);
                r = event_queue_insert(manager, dev);
                if (r < 0)
                        udev_device_unref(dev);
                else
                        /* we have fresh events, try to schedule them */
                        event_queue_start(manager);
        }

        return 1;
}

/* receive the udevd message from userspace */
static int on_ctrl_msg(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
        Manager *manager = userdata;
        _cleanup_udev_ctrl_connection_unref_ struct udev_ctrl_connection *ctrl_conn = NULL;
        _cleanup_udev_ctrl_msg_unref_ struct udev_ctrl_msg *ctrl_msg = NULL;
        const char *str;
        int i;

        assert(manager);

        ctrl_conn = udev_ctrl_get_connection(manager->ctrl);
        if (!ctrl_conn)
                return 1;

        ctrl_msg = udev_ctrl_receive_msg(ctrl_conn);
        if (!ctrl_msg)
                return 1;

        i = udev_ctrl_get_set_log_level(ctrl_msg);
        if (i >= 0) {
                log_debug("udevd message (SET_LOG_LEVEL) received, log_priority=%i", i);
                log_set_max_level(i);
                manager_kill_workers(manager);
        }

        if (udev_ctrl_get_stop_exec_queue(ctrl_msg) > 0) {
                log_debug("udevd message (STOP_EXEC_QUEUE) received");
                manager->stop_exec_queue = true;
        }

        if (udev_ctrl_get_start_exec_queue(ctrl_msg) > 0) {
                log_debug("udevd message (START_EXEC_QUEUE) received");
                manager->stop_exec_queue = false;
                event_queue_start(manager);
        }

        if (udev_ctrl_get_reload(ctrl_msg) > 0) {
                log_debug("udevd message (RELOAD) received");
                manager_reload(manager);
        }

        str = udev_ctrl_get_set_env(ctrl_msg);
        if (str != NULL) {
                _cleanup_free_ char *key = NULL;

                key = strdup(str);
                if (key) {
                        char *val;

                        val = strchr(key, '=');
                        if (val != NULL) {
                                val[0] = '\0';
                                val = &val[1];
                                if (val[0] == '\0') {
                                        log_debug("udevd message (ENV) received, unset '%s'", key);
                                        udev_list_entry_add(&manager->properties, key, NULL);
                                } else {
                                        log_debug("udevd message (ENV) received, set '%s=%s'", key, val);
                                        udev_list_entry_add(&manager->properties, key, val);
                                }
                        } else
                                log_error("wrong key format '%s'", key);
                }
                manager_kill_workers(manager);
        }

        i = udev_ctrl_get_set_children_max(ctrl_msg);
        if (i >= 0) {
                log_debug("udevd message (SET_MAX_CHILDREN) received, children_max=%i", i);
                arg_children_max = i;
        }

        if (udev_ctrl_get_ping(ctrl_msg) > 0)
                log_debug("udevd message (SYNC) received");

        if (udev_ctrl_get_exit(ctrl_msg) > 0) {
                log_debug("udevd message (EXIT) received");
                manager_exit(manager);
                /* keep reference to block the client until we exit
                   TODO: deal with several blocking exit requests */
                manager->ctrl_conn_blocking = udev_ctrl_connection_ref(ctrl_conn);
        }

        return 1;
}

static int synthesize_change(struct udev_device *dev) {
        char filename[UTIL_PATH_SIZE];
        int r;

        if (streq_ptr("block", udev_device_get_subsystem(dev)) &&
            streq_ptr("disk", udev_device_get_devtype(dev)) &&
            !startswith(udev_device_get_sysname(dev), "dm-")) {
                bool part_table_read = false;
                bool has_partitions = false;
                int fd;
                struct udev *udev = udev_device_get_udev(dev);
                _cleanup_udev_enumerate_unref_ struct udev_enumerate *e = NULL;
                struct udev_list_entry *item;

                /*
                 * Try to re-read the partition table. This only succeeds if
                 * none of the devices is busy. The kernel returns 0 if no
                 * partition table is found, and we will not get an event for
                 * the disk.
                 */
                fd = open(udev_device_get_devnode(dev), O_RDONLY|O_CLOEXEC|O_NOFOLLOW|O_NONBLOCK);
                if (fd >= 0) {
                        r = flock(fd, LOCK_EX|LOCK_NB);
                        if (r >= 0)
                                r = ioctl(fd, BLKRRPART, 0);

                        close(fd);
                        if (r >= 0)
                                part_table_read = true;
                }

                /* search for partitions */
                e = udev_enumerate_new(udev);
                if (!e)
                        return -ENOMEM;

                r = udev_enumerate_add_match_parent(e, dev);
                if (r < 0)
                        return r;

                r = udev_enumerate_add_match_subsystem(e, "block");
                if (r < 0)
                        return r;

                r = udev_enumerate_scan_devices(e);
                if (r < 0)
                        return r;

                udev_list_entry_foreach(item, udev_enumerate_get_list_entry(e)) {
                        _cleanup_udev_device_unref_ struct udev_device *d = NULL;

                        d = udev_device_new_from_syspath(udev, udev_list_entry_get_name(item));
                        if (!d)
                                continue;

                        if (!streq_ptr("partition", udev_device_get_devtype(d)))
                                continue;

                        has_partitions = true;
                        break;
                }

                /*
                 * We have partitions and re-read the table, the kernel already sent
                 * out a "change" event for the disk, and "remove/add" for all
                 * partitions.
                 */
                if (part_table_read && has_partitions)
                        return 0;

                /*
                 * We have partitions but re-reading the partition table did not
                 * work, synthesize "change" for the disk and all partitions.
                 */
                log_debug("device %s closed, synthesising 'change'", udev_device_get_devnode(dev));
                strscpyl(filename, sizeof(filename), udev_device_get_syspath(dev), "/uevent", NULL);
                write_string_file(filename, "change", WRITE_STRING_FILE_CREATE);

                udev_list_entry_foreach(item, udev_enumerate_get_list_entry(e)) {
                        _cleanup_udev_device_unref_ struct udev_device *d = NULL;

                        d = udev_device_new_from_syspath(udev, udev_list_entry_get_name(item));
                        if (!d)
                                continue;

                        if (!streq_ptr("partition", udev_device_get_devtype(d)))
                                continue;

                        log_debug("device %s closed, synthesising partition '%s' 'change'",
                                  udev_device_get_devnode(dev), udev_device_get_devnode(d));
                        strscpyl(filename, sizeof(filename), udev_device_get_syspath(d), "/uevent", NULL);
                        write_string_file(filename, "change", WRITE_STRING_FILE_CREATE);
                }

                return 0;
        }

        log_debug("device %s closed, synthesising 'change'", udev_device_get_devnode(dev));
        strscpyl(filename, sizeof(filename), udev_device_get_syspath(dev), "/uevent", NULL);
        write_string_file(filename, "change", WRITE_STRING_FILE_CREATE);

        return 0;
}

static int on_inotify(sd_event_source *s, int fd, uint32_t revents, void *userdata) {
        Manager *manager = userdata;
        union inotify_event_buffer buffer;
        struct inotify_event *e;
        ssize_t l;

        assert(manager);

        l = read(fd, &buffer, sizeof(buffer));
        if (l < 0) {
                if (errno == EAGAIN || errno == EINTR)
                        return 1;

                return log_error_errno(errno, "Failed to read inotify fd: %m");
        }

        FOREACH_INOTIFY_EVENT(e, buffer, l) {
                _cleanup_udev_device_unref_ struct udev_device *dev = NULL;

                dev = udev_watch_lookup(manager->udev, e->wd);
                if (!dev)
                        continue;

                log_debug("inotify event: %x for %s", e->mask, udev_device_get_devnode(dev));
                if (e->mask & IN_CLOSE_WRITE) {
                        synthesize_change(dev);

                        /* settle might be waiting on us to determine the queue
                         * state. If we just handled an inotify event, we might have
                         * generated a "change" event, but we won't have queued up
                         * the resultant uevent yet. Do that.
                         */
                        on_uevent(NULL, -1, 0, manager);
                } else if (e->mask & IN_IGNORED)
                        udev_watch_end(manager->udev, dev);
        }

        return 1;
}

static int on_sigterm(sd_event_source *s, const struct signalfd_siginfo *si, void *userdata) {
        Manager *manager = userdata;

        assert(manager);

        manager_exit(manager);

        return 1;
}

static int on_sighup(sd_event_source *s, const struct signalfd_siginfo *si, void *userdata) {
        Manager *manager = userdata;

        assert(manager);

        manager_reload(manager);

        return 1;
}

static int on_sigchld(sd_event_source *s, const struct signalfd_siginfo *si, void *userdata) {
        Manager *manager = userdata;

        assert(manager);

        for (;;) {
                pid_t pid;
                int status;
                struct worker *worker;

                pid = waitpid(-1, &status, WNOHANG);
                if (pid <= 0)
                        break;

                worker = hashmap_get(manager->workers, UINT_TO_PTR(pid));
                if (!worker) {
                        log_warning("worker ["PID_FMT"] is unknown, ignoring", pid);
                        continue;
                }

                if (WIFEXITED(status)) {
                        if (WEXITSTATUS(status) == 0)
                                log_debug("worker ["PID_FMT"] exited", pid);
                        else
                                log_warning("worker ["PID_FMT"] exited with return code %i", pid, WEXITSTATUS(status));
                } else if (WIFSIGNALED(status)) {
                        log_warning("worker ["PID_FMT"] terminated by signal %i (%s)", pid, WTERMSIG(status), strsignal(WTERMSIG(status)));
                } else if (WIFSTOPPED(status)) {
                        log_info("worker ["PID_FMT"] stopped", pid);
                        continue;
                } else if (WIFCONTINUED(status)) {
                        log_info("worker ["PID_FMT"] continued", pid);
                        continue;
                } else
                        log_warning("worker ["PID_FMT"] exited with status 0x%04x", pid, status);

                if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
                        if (worker->event) {
                                log_error("worker ["PID_FMT"] failed while handling '%s'", pid, worker->event->devpath);
                                /* delete state from disk */
                                udev_device_delete_db(worker->event->dev);
                                udev_device_tag_index(worker->event->dev, NULL, false);
                                /* forward kernel event without amending it */
                                udev_monitor_send_device(manager->monitor, NULL, worker->event->dev_kernel);
                        }
                }

                worker_free(worker);
        }

        /* we can start new workers, try to schedule events */
        event_queue_start(manager);

        return 1;
}

static int on_post(sd_event_source *s, void *userdata) {
        Manager *manager = userdata;
        int r;

        assert(manager);

        if (udev_list_node_is_empty(&manager->events)) {
                /* no pending events */
                if (!hashmap_isempty(manager->workers)) {
                        /* there are idle workers */
                        log_debug("cleanup idle workers");
                        manager_kill_workers(manager);
                } else {
                        /* we are idle */
                        if (manager->exit) {
                                r = sd_event_exit(manager->event, 0);
                                if (r < 0)
                                        return r;
                        } else if (manager->cgroup)
                                /* cleanup possible left-over processes in our cgroup */
                                cg_kill(SYSTEMD_CGROUP_CONTROLLER, manager->cgroup, SIGKILL, false, true, NULL);
                }
        }

        return 1;
}

static int listen_fds(int *rctrl, int *rnetlink) {
        _cleanup_udev_unref_ struct udev *udev = NULL;
        int ctrl_fd = -1, netlink_fd = -1;
        int fd, n, r;

        assert(rctrl);
        assert(rnetlink);

        n = sd_listen_fds(true);
        if (n < 0)
                return n;

        for (fd = SD_LISTEN_FDS_START; fd < n + SD_LISTEN_FDS_START; fd++) {
                if (sd_is_socket(fd, AF_LOCAL, SOCK_SEQPACKET, -1)) {
                        if (ctrl_fd >= 0)
                                return -EINVAL;
                        ctrl_fd = fd;
                        continue;
                }

                if (sd_is_socket(fd, AF_NETLINK, SOCK_RAW, -1)) {
                        if (netlink_fd >= 0)
                                return -EINVAL;
                        netlink_fd = fd;
                        continue;
                }

                return -EINVAL;
        }

        if (ctrl_fd < 0) {
                _cleanup_udev_ctrl_unref_ struct udev_ctrl *ctrl = NULL;

                udev = udev_new();
                if (!udev)
                        return -ENOMEM;

                ctrl = udev_ctrl_new(udev);
                if (!ctrl)
                        return log_error_errno(EINVAL, "error initializing udev control socket");

                r = udev_ctrl_enable_receiving(ctrl);
                if (r < 0)
                        return log_error_errno(EINVAL, "error binding udev control socket");

                fd = udev_ctrl_get_fd(ctrl);
                if (fd < 0)
                        return log_error_errno(EIO, "could not get ctrl fd");

                ctrl_fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
                if (ctrl_fd < 0)
                        return log_error_errno(errno, "could not dup ctrl fd: %m");
        }

        if (netlink_fd < 0) {
                _cleanup_udev_monitor_unref_ struct udev_monitor *monitor = NULL;

                if (!udev) {
                        udev = udev_new();
                        if (!udev)
                                return -ENOMEM;
                }

                monitor = udev_monitor_new_from_netlink(udev, "kernel");
                if (!monitor)
                        return log_error_errno(EINVAL, "error initializing netlink socket");

                (void) udev_monitor_set_receive_buffer_size(monitor, 128 * 1024 * 1024);

                r = udev_monitor_enable_receiving(monitor);
                if (r < 0)
                        return log_error_errno(EINVAL, "error binding netlink socket");

                fd = udev_monitor_get_fd(monitor);
                if (fd < 0)
                        return log_error_errno(EIO, "could not get uevent fd");

                netlink_fd = fcntl(fd, F_DUPFD_CLOEXEC, 3);
                if (netlink_fd < 0)
                        return log_error_errno(errno, "could not dup netlink fd: %m");
        }

        *rctrl = ctrl_fd;
        *rnetlink = netlink_fd;

        return 0;
}

/*
 * read the kernel command line, in case we need to get into debug mode
 *   udev.log-priority=<level>                  syslog priority
 *   udev.children-max=<number of workers>      events are fully serialized if set to 1
 *   udev.exec-delay=<number of seconds>        delay execution of every executed program
 *   udev.event-timeout=<number of seconds>     seconds to wait before terminating an event
 */
static int parse_proc_cmdline_item(const char *key, const char *value) {
        const char *full_key = key;
        int r;

        assert(key);

        if (!value)
                return 0;

        if (startswith(key, "rd."))
                key += strlen("rd.");

        if (startswith(key, "udev."))
                key += strlen("udev.");
        else
                return 0;

        if (streq(key, "log-priority")) {
                int prio;

                prio = util_log_priority(value);
                if (prio < 0)
                        goto invalid;
                log_set_max_level(prio);
        } else if (streq(key, "children-max")) {
                r = safe_atou(value, &arg_children_max);
                if (r < 0)
                        goto invalid;
        } else if (streq(key, "exec-delay")) {
                r = safe_atoi(value, &arg_exec_delay);
                if (r < 0)
                        goto invalid;
        } else if (streq(key, "event-timeout")) {
                r = safe_atou64(value, &arg_event_timeout_usec);
                if (r < 0)
                        goto invalid;
                arg_event_timeout_usec *= USEC_PER_SEC;
                arg_event_timeout_warn_usec = (arg_event_timeout_usec / 3) ? : 1;
        }

        return 0;

invalid:
        log_warning("invalid %s ignored: %s", full_key, value);
        return 0;
}

static void help(void) {
        printf("%s [OPTIONS...]\n\n"
               "Manages devices.\n\n"
               "  -h --help                   Print this message\n"
               "     --version                Print version of the program\n"
               "     --daemon                 Detach and run in the background\n"
               "     --debug                  Enable debug output\n"
               "     --children-max=INT       Set maximum number of workers\n"
               "     --exec-delay=SECONDS     Seconds to wait before executing RUN=\n"
               "     --event-timeout=SECONDS  Seconds to wait before terminating an event\n"
               "     --resolve-names=early|late|never\n"
               "                              When to resolve users and groups\n"
               , program_invocation_short_name);
}

static int parse_argv(int argc, char *argv[]) {
        static const struct option options[] = {
                { "daemon",        no_argument,       NULL, 'd' },
                { "debug",         no_argument,       NULL, 'D' },
                { "children-max",  required_argument, NULL, 'c' },
                { "exec-delay",    required_argument, NULL, 'e' },
                { "event-timeout", required_argument, NULL, 't' },
                { "resolve-names", required_argument, NULL, 'N' },
                { "help",          no_argument,       NULL, 'h' },
                { "version",       no_argument,       NULL, 'V' },
                {}
        };

        int c;

        assert(argc >= 0);
        assert(argv);

        while ((c = getopt_long(argc, argv, "c:de:Dt:N:hV", options, NULL)) >= 0) {
                int r;

                switch (c) {

                case 'd':
                        arg_daemonize = true;
                        break;
                case 'c':
                        r = safe_atou(optarg, &arg_children_max);
                        if (r < 0)
                                log_warning("Invalid --children-max ignored: %s", optarg);
                        break;
                case 'e':
                        r = safe_atoi(optarg, &arg_exec_delay);
                        if (r < 0)
                                log_warning("Invalid --exec-delay ignored: %s", optarg);
                        break;
                case 't':
                        r = safe_atou64(optarg, &arg_event_timeout_usec);
                        if (r < 0)
                                log_warning("Invalid --event-timeout ignored: %s", optarg);
                        else {
                                arg_event_timeout_usec *= USEC_PER_SEC;
                                arg_event_timeout_warn_usec = (arg_event_timeout_usec / 3) ? : 1;
                        }
                        break;
                case 'D':
                        arg_debug = true;
                        break;
                case 'N':
                        if (streq(optarg, "early")) {
                                arg_resolve_names = 1;
                        } else if (streq(optarg, "late")) {
                                arg_resolve_names = 0;
                        } else if (streq(optarg, "never")) {
                                arg_resolve_names = -1;
                        } else {
                                log_error("resolve-names must be early, late or never");
                                return 0;
                        }
                        break;
                case 'h':
                        help();
                        return 0;
                case 'V':
                        printf("%s\n", VERSION);
                        return 0;
                case '?':
                        return -EINVAL;
                default:
                        assert_not_reached("Unhandled option");

                }
        }

        return 1;
}

static int manager_new(Manager **ret, int fd_ctrl, int fd_uevent, const char *cgroup) {
        _cleanup_(manager_freep) Manager *manager = NULL;
        int r, fd_worker, one = 1;

        assert(ret);
        assert(fd_ctrl >= 0);
        assert(fd_uevent >= 0);

        manager = new0(Manager, 1);
        if (!manager)
                return log_oom();

        manager->fd_inotify = -1;
        manager->worker_watch[WRITE_END] = -1;
        manager->worker_watch[READ_END] = -1;

        manager->udev = udev_new();
        if (!manager->udev)
                return log_error_errno(errno, "could not allocate udev context: %m");

        udev_builtin_init(manager->udev);

        manager->rules = udev_rules_new(manager->udev, arg_resolve_names);
        if (!manager->rules)
                return log_error_errno(ENOMEM, "error reading rules");

        udev_list_node_init(&manager->events);
        udev_list_init(manager->udev, &manager->properties, true);

        manager->cgroup = cgroup;

        manager->ctrl = udev_ctrl_new_from_fd(manager->udev, fd_ctrl);
        if (!manager->ctrl)
                return log_error_errno(EINVAL, "error taking over udev control socket");

        manager->monitor = udev_monitor_new_from_netlink_fd(manager->udev, "kernel", fd_uevent);
        if (!manager->monitor)
                return log_error_errno(EINVAL, "error taking over netlink socket");

        /* unnamed socket from workers to the main daemon */
        r = socketpair(AF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0, manager->worker_watch);
        if (r < 0)
                return log_error_errno(errno, "error creating socketpair: %m");

        fd_worker = manager->worker_watch[READ_END];

        r = setsockopt(fd_worker, SOL_SOCKET, SO_PASSCRED, &one, sizeof(one));
        if (r < 0)
                return log_error_errno(errno, "could not enable SO_PASSCRED: %m");

        manager->fd_inotify = udev_watch_init(manager->udev);
        if (manager->fd_inotify < 0)
                return log_error_errno(ENOMEM, "error initializing inotify");

        udev_watch_restore(manager->udev);

        /* block and listen to all signals on signalfd */
        assert_se(sigprocmask_many(SIG_BLOCK, NULL, SIGTERM, SIGINT, SIGHUP, SIGCHLD, -1) >= 0);

        r = sd_event_default(&manager->event);
        if (r < 0)
                return log_error_errno(errno, "could not allocate event loop: %m");

        r = sd_event_add_signal(manager->event, NULL, SIGINT, on_sigterm, manager);
        if (r < 0)
                return log_error_errno(r, "error creating sigint event source: %m");

        r = sd_event_add_signal(manager->event, NULL, SIGTERM, on_sigterm, manager);
        if (r < 0)
                return log_error_errno(r, "error creating sigterm event source: %m");

        r = sd_event_add_signal(manager->event, NULL, SIGHUP, on_sighup, manager);
        if (r < 0)
                return log_error_errno(r, "error creating sighup event source: %m");

        r = sd_event_add_signal(manager->event, NULL, SIGCHLD, on_sigchld, manager);
        if (r < 0)
                return log_error_errno(r, "error creating sigchld event source: %m");

        r = sd_event_set_watchdog(manager->event, true);
        if (r < 0)
                return log_error_errno(r, "error creating watchdog event source: %m");

        r = sd_event_add_io(manager->event, &manager->ctrl_event, fd_ctrl, EPOLLIN, on_ctrl_msg, manager);
        if (r < 0)
                return log_error_errno(r, "error creating ctrl event source: %m");

        /* This needs to be after the inotify and uevent handling, to make sure
         * that the ping is sent back after fully processing the pending uevents
         * (including the synthetic ones we may create due to inotify events).
         */
        r = sd_event_source_set_priority(manager->ctrl_event, SD_EVENT_PRIORITY_IDLE);
        if (r < 0)
                return log_error_errno(r, "could not set IDLE event priority for ctrl event source: %m");

        r = sd_event_add_io(manager->event, &manager->inotify_event, manager->fd_inotify, EPOLLIN, on_inotify, manager);
        if (r < 0)
                return log_error_errno(r, "error creating inotify event source: %m");

        r = sd_event_add_io(manager->event, &manager->uevent_event, fd_uevent, EPOLLIN, on_uevent, manager);
        if (r < 0)
                return log_error_errno(r, "error creating uevent event source: %m");

        r = sd_event_add_io(manager->event, NULL, fd_worker, EPOLLIN, on_worker, manager);
        if (r < 0)
                return log_error_errno(r, "error creating worker event source: %m");

        r = sd_event_add_post(manager->event, NULL, on_post, manager);
        if (r < 0)
                return log_error_errno(r, "error creating post event source: %m");

        *ret = manager;
        manager = NULL;

        return 0;
}

static int run(int fd_ctrl, int fd_uevent, const char *cgroup) {
        _cleanup_(manager_freep) Manager *manager = NULL;
        int r;

        r = manager_new(&manager, fd_ctrl, fd_uevent, cgroup);
        if (r < 0) {
                r = log_error_errno(r, "failed to allocate manager object: %m");
                goto exit;
        }

        r = udev_rules_apply_static_dev_perms(manager->rules);
        if (r < 0)
                log_error_errno(r, "failed to apply permissions on static device nodes: %m");

        (void) sd_notify(false,
                         "READY=1\n"
                         "STATUS=Processing...");

        r = sd_event_loop(manager->event);
        if (r < 0) {
                log_error_errno(r, "event loop failed: %m");
                goto exit;
        }

        sd_event_get_exit_code(manager->event, &r);

exit:
        sd_notify(false,
                  "STOPPING=1\n"
                  "STATUS=Shutting down...");
        if (manager)
                udev_ctrl_cleanup(manager->ctrl);
        return r;
}

int main(int argc, char *argv[]) {
        _cleanup_free_ char *cgroup = NULL;
        int r, fd_ctrl, fd_uevent;

        log_set_target(LOG_TARGET_AUTO);
        log_parse_environment();
        log_open();

        r = parse_argv(argc, argv);
        if (r <= 0)
                goto exit;

        r = parse_proc_cmdline(parse_proc_cmdline_item);
        if (r < 0)
                log_warning_errno(r, "failed to parse kernel command line, ignoring: %m");

        if (arg_debug) {
                log_set_target(LOG_TARGET_CONSOLE);
                log_set_max_level(LOG_DEBUG);
        }

        if (getuid() != 0) {
                r = log_error_errno(EPERM, "root privileges required");
                goto exit;
        }

        if (arg_children_max == 0) {
                cpu_set_t cpu_set;

                arg_children_max = 8;

                if (sched_getaffinity(0, sizeof(cpu_set), &cpu_set) == 0) {
                        arg_children_max += CPU_COUNT(&cpu_set) * 2;
                }

                log_debug("set children_max to %u", arg_children_max);
        }

        /* set umask before creating any file/directory */
        r = chdir("/");
        if (r < 0) {
                r = log_error_errno(errno, "could not change dir to /: %m");
                goto exit;
        }

        umask(022);

        r = mac_selinux_init("/dev");
        if (r < 0) {
                log_error_errno(r, "could not initialize labelling: %m");
                goto exit;
        }

        r = mkdir("/run/udev", 0755);
        if (r < 0 && errno != EEXIST) {
                r = log_error_errno(errno, "could not create /run/udev: %m");
                goto exit;
        }

        dev_setup(NULL, UID_INVALID, GID_INVALID);

        if (getppid() == 1) {
                /* get our own cgroup, we regularly kill everything udev has left behind
                   we only do this on systemd systems, and only if we are directly spawned
                   by PID1. otherwise we are not guaranteed to have a dedicated cgroup */
                r = cg_pid_get_path(SYSTEMD_CGROUP_CONTROLLER, 0, &cgroup);
                if (r < 0) {
                        if (r == -ENOENT)
                                log_debug_errno(r, "did not find dedicated cgroup: %m");
                        else
                                log_warning_errno(r, "failed to get cgroup: %m");
                }
        }

        r = listen_fds(&fd_ctrl, &fd_uevent);
        if (r < 0) {
                r = log_error_errno(r, "could not listen on fds: %m");
                goto exit;
        }

        if (arg_daemonize) {
                pid_t pid;

                log_info("starting version " VERSION);

                /* connect /dev/null to stdin, stdout, stderr */
                if (log_get_max_level() < LOG_DEBUG)
                        (void) make_null_stdio();

                pid = fork();
                switch (pid) {
                case 0:
                        break;
                case -1:
                        r = log_error_errno(errno, "fork of daemon failed: %m");
                        goto exit;
                default:
                        mac_selinux_finish();
                        log_close();
                        _exit(EXIT_SUCCESS);
                }

                setsid();

                write_string_file("/proc/self/oom_score_adj", "-1000", 0);
        }

        r = run(fd_ctrl, fd_uevent, cgroup);
[PATCH] udevd - cleanup and better timeout handling
On Thu, Jan 29, 2004 at 04:55:11PM +0100, Kay Sievers wrote:
> On Thu, Jan 29, 2004 at 02:56:25AM +0100, Kay Sievers wrote:
> > On Wed, Jan 28, 2004 at 10:47:36PM +0100, Kay Sievers wrote:
> > > Oh, couldn't resist trying threads.
> > > It's a multithreaded udevd that communicates through a localhost socket.
> > > The message includes a magic with the udev version, so we don't accept
> > > older udevsends.
> > >
> > > No need for locking, cause we can't bind two sockets on the same address.
> > > The sender tries to connect, and if that fails it starts the daemon.
> > >
> > > We create a thread for every incoming connection, hand over the socket,
> > > sort the message into the global message queue and exit the thread.
> > > Huh, that was easy with threads :)
> > >
> > > The addition of a message wakes up the queue manager thread, which
> > > handles timeouts or moves the message to the global exec list. This wakes
> > > up the exec list manager, who checks whether a process is already running
> > > for this device path.
> > > If yes, the exec is delayed; otherwise we create a thread that execs udev
> > > in the background. When udev returns we free the message and wake up
> > > the exec list manager to see if something is pending.
> > >
> > > It is just a quick shot, cause I couldn't solve the problems with fork and
> > > scheduling, and I wanted to see if I'm too stupid :)
> > > But if anybody has a better idea or more experience with I/O scheduling,
> > > we may go another way. The remaining problem is that klibc doesn't support
> > > threads.
> > >
> > > For now, we don't exec anything, it's just a sleep 3 for every exec,
> > > but you can see the queue management by watching syslog and doing:
> > >
> > > DEVPATH=/abc ACTION=add SEQNUM=0 ./udevsend /abc
>
> Next version, switched to unix domain sockets.
Next cleaned-up version. Hey, nobody wants to try it :)
Works for me. It's funny if I connect/disconnect my 4in1 USB flash reader
every two seconds. The 2.6 USB stack rocks! I can connect/disconnect a hub with 3
devices plugged in every second and don't run into any problem but a _very_
big udevd queue.
2004-02-01 18:12:36 +01:00
exit:
        mac_selinux_finish();
        log_close();
        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
I've added infrastructure for querying the state of the IPC queue from the
sender and for setting the program the daemon execs. Also, the magic key id
is replaced by the usual key generation by path/nr.
It looks promising: I use it on my machine, and while connecting/disconnecting
my 4in1 USB flash reader emits the events "randomly", udevd is able to
reorder them and calls our normal udev in the right order.
2004-01-23 09:28:57 +01:00
}