Commit graph

73 commits

Author SHA1 Message Date
Yu Watanabe 8d80f27530 sd-device: make TAGS= property prefixed and suffixed with ":"
Commit 6f3ac0d517 dropped the prefix and suffix in the TAGS= property.
But there exist several rules that contain matches like
`TAGS=="*:tag:*"`, so the property must always be prefixed and suffixed
with ":".

Fixes #17930.
2020-12-14 14:04:53 +09:00
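A minimal sketch of why the wrapping matters, assuming udev evaluates `TAGS=="*:tag:*"` as a glob along the lines of fnmatch() (the values below are illustrative, not udev's actual code):

    #include <fnmatch.h>
    #include <stdio.h>

    int main(void) {
            const char *wrapped = ":systemd:seat:";  /* TAGS= value with ":" prefix/suffix */
            const char *bare    = "systemd:seat";    /* same tags without the wrapping */

            /* Only the wrapped form contains the ":seat:" substring the glob needs. */
            printf("wrapped matches: %s\n", fnmatch("*:seat:*", wrapped, 0) == 0 ? "yes" : "no");
            printf("bare matches:    %s\n", fnmatch("*:seat:*", bare, 0) == 0 ? "yes" : "no");
            return 0;
    }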
Yu Watanabe 4dbce71787 set: introduce set_strjoin() 2020-12-08 12:28:54 +09:00
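A minimal sketch of the new helper's intended use, assuming the signature int set_strjoin(Set *s, const char *separator, char **ret) and the systemd-internal headers:

    #include <stdio.h>

    #include "alloc-util.h"
    #include "set.h"
    #include "string-util.h"

    static int print_tags(Set *tags) {
            _cleanup_free_ char *joined = NULL;
            int r;

            /* Joins all strings in the set, separated by ":". */
            r = set_strjoin(tags, ":", &joined);
            if (r < 0)
                    return r;

            printf("TAGS=%s\n", strempty(joined));
            return 0;
    }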
Yu Watanabe db9ecf0501 license: LGPL-2.1+ -> LGPL-2.1-or-later 2020-11-09 13:23:58 +09:00
Yu Watanabe 11e9fec259 hashmap: introduce {hashmap,set}_put_strdup_full()
They can take hash_ops.
2020-10-13 22:39:06 +09:00
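A sketch of the extended helper, assuming the new signature is int set_put_strdup_full(Set **s, const struct hash_ops *hash_ops, const char *p):

    #include "hash-funcs.h"
    #include "set.h"

    static int remember_name(Set **names, const char *name) {
            /* Allocates the set on first use and stores a strdup()'ed copy of name;
             * with string_hash_ops_free the copies are freed together with the set. */
            return set_put_strdup_full(names, &string_hash_ops_free, name);
    }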
Lennart Poettering ae0b700a85 hashmap: make sure to initialize shared hash key atomically
If we allocate a bunch of hash tables at the same time, with none
earlier than the others, there's a good chance we'll initialize the
shared hash key multiple times, so that some threads will see a
different shared hash key than others.

Let's fix that, and make sure really everyone sees the same hash key.

Fixes: #17007
2020-09-12 09:33:33 +02:00
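This is not the actual systemd change, just a generic sketch of race-free one-time initialization with C11 atomics, so that every thread observes the same key (random_bytes() stands in for an assumed RNG helper):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    static uint8_t shared_hash_key[16];
    static atomic_int key_state;   /* 0 = uninitialized, 1 = initializing, 2 = ready */

    extern void random_bytes(void *p, size_t n);   /* assumed RNG helper */

    static const uint8_t *get_shared_hash_key(void) {
            int expected = 0;

            if (atomic_compare_exchange_strong(&key_state, &expected, 1)) {
                    /* We won the race: fill in the key exactly once, then publish it. */
                    random_bytes(shared_hash_key, sizeof shared_hash_key);
                    atomic_store_explicit(&key_state, 2, memory_order_release);
            } else
                    /* Somebody else is initializing: wait until the key is published. */
                    while (atomic_load_explicit(&key_state, memory_order_acquire) != 2)
                            ;

            return shared_hash_key;
    }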
Zbigniew Jędrzejewski-Szmek 8a35af80fc basic/hashmap,set: move pointer symbol adjacent to the returned value
I think this is nicer in general, and here in particular we have a lot
of code like:
 static inline IteratedCache* hashmap_iterated_cache_new(Hashmap *h) {
         return (IteratedCache*) _hashmap_iterated_cache_new(HASHMAP_BASE(h));
 }
and it's visually appealing to use the same whitespace in the function
signature and the cast in the body of the function.
2020-09-01 13:45:51 +02:00
Zbigniew Jędrzejewski-Szmek e4126adf45 basic/hashmap,set: inline trivial set_iterate() wrapper
The compiler would do this too, esp. with LTO, but we can short-circuit
the whole process and make everything a bit simpler by avoiding the
separate definition.

(It would be nice to do the same for _set_new(), _set_ensure_allocated()
and other similar functions which are one-line trivial wrappers too. Unfortunately
that would require enum HashmapType to be made public, which we don't want
to do.)
2020-09-01 13:32:02 +02:00
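For reference, the resulting inline wrapper looks roughly like this (a sketch; the internal _hashmap_iterate()/HASHMAP_BASE() names are assumed from the surrounding commits and the exact signature may differ):

    static inline bool set_iterate(const Set *s, Iterator *i, void **value) {
            /* Sets have no separate keys, hence pass NULL for the key output. */
            return _hashmap_iterate(HASHMAP_BASE((Set*) s), i, value, NULL);
    }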
Susant Sahani b7847e05f5 basic: Introduce ordered_hashmap_ensure_put 2020-09-01 12:32:48 +02:00
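A sketch of the new helper's intended shape, assuming the signature int ordered_hashmap_ensure_put(OrderedHashmap **h, const struct hash_ops *hash_ops, const void *key, void *value), i.e. ordered_hashmap_ensure_allocated() plus ordered_hashmap_put() in one call:

    #include "hashmap.h"

    static int track_link(OrderedHashmap **links, const char *ifname, void *link) {
            /* Allocates *links with string hash ops if needed, then behaves like
             * ordered_hashmap_put() for the insertion itself. */
            return ordered_hashmap_ensure_put(links, &string_hash_ops, ifname, link);
    }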
Zbigniew Jędrzejewski-Szmek add74e8929 basic/hashmap,set: propagate allocation location info in _copy()
Also use a double space before the tracking args at the end. Without
the comma this looks ugly, but it's a bit better with the double space.
At least it doesn't look like a variable with a type.
2020-06-24 10:38:15 +02:00
Zbigniew Jędrzejewski-Szmek b8b46b1ce5 basic/set,hashmap: pass through allocation info in more cases 2020-06-24 10:38:15 +02:00
Zbigniew Jędrzejewski-Szmek fcc1d0315d basic/set: add set_ensure_consume()
This combines set_ensure_allocated() with set_consume(). The cool thing is that
because we know the hash ops, we can correctly free the item if appropriate.
Similarly to set_consume(), the goal is to simplify handling of the case where
the item needs to be freed on error or when it is already present in the set.
2020-06-24 10:38:15 +02:00
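A sketch of how the combined helper is meant to be used, assuming the signature int set_ensure_consume(Set **s, const struct hash_ops *hash_ops, void *key):

    #include "set.h"

    /* Takes ownership of p: on error, or if an equal string is already present,
     * the helper can free it itself, because the hash ops tell it how. */
    static int add_owned_string(Set **s, char *p) {
            return set_ensure_consume(s, &string_hash_ops_free, p);
    }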
Zbigniew Jędrzejewski-Szmek 0f9ccd9552 basic/set: add set_ensure_put()
It's such a common operation to allocate the set and put an item in it
that it deserves a helper. set_ensure_put() has the same return values
as set_put().

Comes with tests!
2020-06-22 16:32:37 +02:00
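A sketch of the helper described above, assuming the signature int set_ensure_put(Set **s, const struct hash_ops *hash_ops, const void *key) and set_put()-style return values:

    #include "set.h"

    static int mark_seen(Set **seen, const char *word) {
            /* Allocates *seen on first use, then inserts word (the string itself is
             * not copied). Returns 1 if newly added, 0 if it was already present,
             * negative on allocation failure. */
            return set_ensure_put(seen, &string_hash_ops, word);
    }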
Lennart Poettering 24bd74ae03
Merge pull request #15940 from keszybz/names-set-optimization
Try to optimize away Unit.names set
2020-06-10 18:52:08 +02:00
Zbigniew Jędrzejewski-Szmek 138f49e452 basic/hashmap,set: change "internal_" to "_" as the prefix
"internal" is a lot of characters. Let's take a leaf out of the Python's book
and simply use _ to mean private. Much less verbose, but the meaning is just as
clear, or even more.
2020-05-30 11:40:53 +02:00
Zbigniew Jędrzejewski-Szmek 55825de59b basic/hashmap: drop unneeded macro 2020-05-30 11:40:53 +02:00
Zbigniew Jędrzejewski-Szmek 43874aa7bb hashmap: don't allow hashmap_type_info table to be optimized away
Letting the table be optimized away makes debugging hashmaps harder, because
we can't query the size.  Make sure the table is always present.
2020-05-30 11:40:37 +02:00
Zbigniew Jędrzejewski-Szmek 9ff7c5b031 basic/hashmap: make _ensure_allocated return 1 on actual allocations
Also, make test_hashmap_ensure_allocated() actually test
hashmap_ensure_allocated().
2020-05-27 16:48:04 +02:00
Zbigniew Jędrzejewski-Szmek 25b3e2a835 basic/hashmap: allow NULL values in strdup hashmaps and add test 2020-05-06 16:56:42 +02:00
Zbigniew Jędrzejewski-Szmek be32732168 basic/set: let set_put_strdup() create the set with string hash ops
If we're using a set with _put_strdup(), most of the time we want to use
string hash ops on the set, and free the strings when done. This defines
a new string_hash_ops_free structure to automatically free the keys when
the set is freed, and makes set_put_strdup() and set_put_strdupv()
instantiate the set with those hash ops.

hashmap_put_strdup() was already doing something similar.

(It is OK to instantiate the set earlier, possibly with a different hash ops
structure. set_put_strdup() will then use the existing set. It is also OK
to call set_free_free() instead of set_free() on a set with
string_hash_ops_free, the effect is the same, we're just overriding the
override of the cleanup function.)

No functional change intended.
2020-05-06 16:54:06 +02:00
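A sketch of the simplification this enables, assuming set_put_strdup() now instantiates a NULL set with string_hash_ops_free, so that a plain set_free() also frees the stored strings:

    #include "set.h"

    static int collect_words(char **words) {
            _cleanup_set_free_ Set *s = NULL;
            int r;

            for (char **w = words; w && *w; w++) {
                    r = set_put_strdup(&s, *w);   /* allocates s on the first call */
                    if (r < 0)
                            return r;
            }

            return (int) set_size(s);   /* strings are freed by set_free() via _cleanup_ */
    }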
Yu Watanabe 455fa9610c tree-wide: drop string.h when string-util.h or friends are included 2019-11-04 00:30:32 +09:00
Yu Watanabe f5947a5e92 tree-wide: drop missing.h 2019-10-31 17:57:03 +09:00
Zbigniew Jędrzejewski-Szmek 3c4e303136 basic/set: constify operations which don't modify Set
No functional change, but it's nicer to the reader.
2019-07-19 16:51:14 +02:00
Zbigniew Jędrzejewski-Szmek 87da87846d basic/hashmap: add hashops variant that does strdup/freeing on its own
So far, we'd use hashmap_free_free to free both keys and values along with
the hashmap. I think it's better to make this more encapsulated: in this variant
the way contents are freed can be decided when the hashmap is created, and
users of the hashmap can always use hashmap_free.
2019-07-19 16:50:36 +02:00
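A sketch of the encapsulation described above, assuming the new ops are exposed as string_hash_ops_free_free (freeing both key and value on hashmap_clear()/hashmap_free()); the helper name remember_pair() is illustrative:

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    #include "alloc-util.h"
    #include "hashmap.h"

    static int remember_pair(Hashmap **h, const char *key, const char *value) {
            _cleanup_free_ char *k = NULL, *v = NULL;
            int r;

            r = hashmap_ensure_allocated(h, &string_hash_ops_free_free);
            if (r < 0)
                    return r;

            k = strdup(key);
            v = strdup(value);
            if (!k || !v)
                    return -ENOMEM;

            r = hashmap_put(*h, k, v);
            if (r < 0)
                    return r;

            /* The hashmap owns the strings now; plain hashmap_free() will free them. */
            TAKE_PTR(k);
            TAKE_PTR(v);
            return 0;
    }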
Frantisek Sumsal 31c9d74d50 hashmap: avoid using TLS in a destructor
Using C11 thread-local storage in destructors causes an uninitialized
read. Let's avoid that by using a direct comparison instead of the
cached values. As this code path is taken only when compiled
with -DVALGRIND=1, the performance cost shouldn't matter too much.

Fixes #12814
2019-06-18 13:59:12 +02:00
Lennart Poettering 0a9707187b util: split out memcmp()/memset() related calls into memory-util.[ch]
Just some source rearranging.
2019-03-13 12:16:43 +01:00
Zbigniew Jędrzejewski-Szmek b61658fd9a shared/hashmap: trivial style updates 2019-02-21 12:04:27 +01:00
Thomas Haller 51c682df38 hashmap: always set key output argument of internal_hashmap_first_key_and_value()
internal_hashmap_first_key_and_value() returns the first value, or %NULL
if the hashmap is empty.

However, hashmaps may contain %NULL values. That means a caller getting
%NULL doesn't know whether the hashmap is empty or whether the first
value is %NULL.

For example, a caller may be tempted to do something like:

    if ((val = hashmap_steal_first_key_and_value (h, (void **) key))) {
         // process first entry.
    }

But this is only correct if the caller made sure that the hash is either
not empty or contains no NULL values.

Anyway, since a %NULL return value can signal either an empty hashmap or
a %NULL value, it seems error-prone to leave the key output argument
uninitialized in situations that the caller cannot clearly distinguish
(without making additional assumptions).
2019-02-04 09:47:00 +01:00
Thomas Haller ca3237150e hashmap: avoid uninitialized variable warning in internal_hashmap_clear()
GCC 8.2 with LTO and -O2 emits a false warning:

    src/basic/hashmap.c: In function 'internal_hashmap_free.constprop':
    src/basic/hashmap.c:898:33: error: 'k' may be used uninitialized in this function [-Werror=maybe-uninitialized]
                      free_key(k);
                      ^

Avoid it by initializing the variable.
2019-02-04 09:36:08 +01:00
Topi Miettinen a1e92eee3e Remove 'inline' attributes from static functions in .c files (#11426)
Let the compiler perform inlining (see #11397).
2019-01-15 08:12:28 +01:00
Zbigniew Jędrzejewski-Szmek 70b400d9c2 hashmap: use ternary op to shorten code 2018-12-18 12:04:00 +01:00
Lennart Poettering c380b84d8b hashmap: rework hashmap_clear() to be more defensive
Let's first remove an item from the hashmap and only then destroy it.
This makes sure that destructor functions can modify the hash tables in
their own code and we won't be confused by that.
2018-12-18 11:28:10 +01:00
Lennart Poettering 63e688cc3b
Merge pull request #11031 from poettering/gcc-attr-cleanup
various gcc attribute clean-ups
2018-12-03 21:59:00 +01:00
Lennart Poettering d34dae1817 tree-wide: use gcc attribute macros where appropriate
We have these macros already, hence use them.
2018-12-03 13:28:26 +01:00
Yu Watanabe 59a5cda7b9 hash-func: add destructors for key and value
If they are set, then they are called in hashmap_clear() or
hashmap_free().
2018-12-02 11:59:07 +01:00
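A sketch of how such destructors could be wired up, assuming struct hash_ops gained free_key/free_value members invoked from hashmap_clear()/hashmap_free(); the macro name used here is an assumption and may differ from the real one:

    #include <stdlib.h>

    #include "hash-funcs.h"

    typedef struct Item {
            char *name;
    } Item;

    static void item_free(Item *i) {
            if (!i)
                    return;
            free(i->name);
            free(i);
    }

    /* Keys are strings, values are Items; the value destructor is called for
     * every entry when the hashmap is cleared or freed. */
    DEFINE_HASH_OPS_WITH_VALUE_DESTRUCTOR(item_hash_ops, char, string_hash_func, string_compare_func,
                                          Item, item_free);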
Yu Watanabe ee05335f43 hashmap: fix minor coding style issue 2018-12-02 06:34:25 +01:00
Yu Watanabe 7ef670c34a hashmap: introduce hashmap_first_key_and_value() and friends 2018-10-13 21:45:50 +09:00
Zbigniew Jędrzejewski-Szmek 7c48ea0280 Move use_pool() to mempool.c and rename to mempool_enabled()
The only user is in hashmap.c, but it's a mempool thing.
2018-10-11 10:55:41 +02:00
Lennart Poettering 205c085bc3 hashmap: add an explicit assert() for detecting when objects migrated between threads
When clients don't follow the protocol and use the same object from
different threads, we previously would silently corrupt memory. With
this change we'll fail with an assert(). This doesn't fix anything,
but it certainly makes misuses easier to detect and debug.

Triggered by https://bugzilla.redhat.com/show_bug.cgi?id=1609349
2018-08-03 17:36:11 +02:00
Lennart Poettering b4f607433c hashmap: add an environment variable to turn off the memory pool used by hashmaps
Triggered by https://bugzilla.redhat.com/show_bug.cgi?id=1609349
2018-08-03 17:36:11 +02:00
Zbigniew Jędrzejewski-Szmek d9b02e1697 tree-wide: drop copyright headers from frequent contributors
Fixes #9320.

for p in Shapovalov Chevalier Rozhkov Sievers Mack Herrmann Schmidt Rudenberg Sahani Landden Andersen Watanabe; do
  git grep -e 'Copyright.*'$p -l|xargs perl -i -0pe 's|/([*][*])?[*]\s+([*#]\s+)?Copyright[^\n]*'$p'[^\n]*\s*[*]([*][*])?/\n*|\n|gms; s|\s+([*#]\s+)?Copyright[^\n]*'$p'[^\n]*\n*|\n|gms'
done
2018-06-20 11:58:53 +02:00
Lennart Poettering 96b2fb93c5 tree-wide: beautify remaining copyright statements
Let's unify and beautify our remaining copyright statements, with a
unicode ©. This means our copyright statements are now always formatted
the same way. Yay.
2018-06-14 10:20:21 +02:00
Lennart Poettering 0c69794138 tree-wide: remove Lennart's copyright lines
These lines are generally out-of-date, incomplete and unnecessary. With
SPDX and the git repository, much more accurate and fine-grained
information about licensing and authorship is available, hence let's drop
the per-file copyright notice. Of course, removing copyright lines of
others is problematic, hence this commit only removes my own lines and
leaves all others untouched. It might be nicer if sooner or later those
could go away too, making git the only and accurate source of authorship
information.
2018-06-14 10:20:20 +02:00
Lennart Poettering 818bf54632 tree-wide: drop 'This file is part of systemd' blurb
This part of the copyright blurb stems from the GPL use recommendations:

https://www.gnu.org/licenses/gpl-howto.en.html

The concept appears to originate in times when version control was per
file, instead of per tree, and was a way to glue the files together.
We don't live in that world anymore, and this information is entirely
useless anyway, as people are very welcome to copy these files into any
projects they like, and they shouldn't have to change bits that are part
of our copyright header to do so.

Hence, let's just get rid of this old cruft, and shorten our codebase a
bit.
2018-06-14 10:20:20 +02:00
Zbigniew Jędrzejewski-Szmek d18cb3937b Turn VALGRIND variable into a meson configuration switch
Configuration through environment variables is inconvenient with meson,
because they cannot be conveniently changed and/or are not preserved during
reconfiguration (https://github.com/mesonbuild/meson/issues/1503).
This adds -Dvalgrind=true/false, which has the advantage that it can be set
at any time with meson configure -Dvalgrind=... and ninja will rebuild targets
as necessary. Additional minor advantages are better consistency with the
options for hashmap debugging, and typo avoidance with '#if' instead of '#ifdef'.
2018-05-17 09:54:36 -07:00
Zbigniew Jędrzejewski-Szmek 11a1589223 tree-wide: drop license boilerplate
Files which are installed as-is (any .service and other unit files, .conf
files, .policy files, etc), are left as is. My assumption is that SPDX
identifiers are not yet that well known, so it's better to retain the
extended header to avoid any doubt.

I also kept any copyright lines. We can probably remove them, but it'd nice to
obtain explicit acks from all involved authors before doing that.
2018-04-06 18:58:55 +02:00
Zbigniew Jędrzejewski-Szmek afbbc0682e basic/hashmap: tweak code to avoid pointless gcc warning
gcc says:
[196/1142] Compiling C object 'src/basic/basic@sta/hashmap.c.o'.
../src/basic/hashmap.c: In function ‘cachemem_maintain’:
../src/basic/hashmap.c:1913:17: warning: suggest parentheses around assignment used as truth value [-Wparentheses]
                 mem->active = r = true;
                 ^~~

which conflates two things: the first is a transitive assignment a = b = c = d;
the second is assignment of the value of an expression, which here happens to
be an assignment expression and a boolean. While the second _should_ be
parenthesized, the first should _not_, and it's more natural to understand
our code as the first, so gcc should treat this as an exception and not emit
the warning. But since it will be a while until that is fixed, let's update
our code too.
2018-02-02 14:34:00 +01:00
Vito Caputo 45ea84d8ed basic: implement the IteratedCache
Adds the basics of the IteratedCache and constructor support for the
Hashmap and OrderedHashmap types.

iterated_cache_get() is responsible for synchronizing the cache with
the associated Hashmap and making it available to the caller at the
supplied result pointers.  Since iterated_cache_get() may need to
allocate memory, it may fail, so callers must check the return value.

On success, pointer arrays containing pointers to the associated
Hashmap's keys and values, in as-iterated order, are returned in
res_keys and res_values, respectively.  Either may be supplied as NULL
to inhibit caching of the keys or values, respectively.

Note that if the cached Hashmap hasn't changed since the previous call
to iterated_cache_get(), and it's not a call activating caching of the
values or keys, the cost is effectively zero as the resulting pointers
will simply refer to the previously returned arrays as-is.

A cleanup function has also been added, iterated_cache_free().

This only frees the IteratedCache container and related arrays.  The
associated Hashmap, its keys, and values are not affected.  Also note
that the associated Hashmap does not automatically free its associated
IteratedCache when freed.

One could, in theory, safely access the arrays returned by a
successful iterated_cache_get() call after its associated Hashmap has
been freed, including the referenced values and keys, provided the
iterated_cache_get() was performed prior to freeing the hashmap and the
type of free performed didn't free the keys and/or values as well.
2018-01-27 13:11:50 -08:00
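A sketch of the usage pattern described above, assuming the constructor hashmap_iterated_cache_new() and a getter of the shape int iterated_cache_get(IteratedCache *c, const void ***res_keys, const void ***res_values, unsigned *n_entries):

    #include <stdio.h>

    #include "hashmap.h"

    static int dump_cached(IteratedCache *cache) {
            const void **keys, **values;
            unsigned n;
            int r;

            /* May need to (re)allocate the arrays, hence can fail. */
            r = iterated_cache_get(cache, &keys, &values, &n);
            if (r < 0)
                    return r;

            /* keys/values are in as-iterated order and owned by the cache. */
            for (unsigned i = 0; i < n; i++)
                    printf("%s -> %s\n", (const char*) keys[i], (const char*) values[i]);

            return 0;
    }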
Vito Caputo 84dcca75b4 basic: track dirty state in HashmapBase
This only adds marking the HashmapBase as dirty, no clearing of
the dirty state happens yet.

No functional changes.
2018-01-26 16:04:35 -08:00
Zbigniew Jędrzejewski-Szmek 53e1b68390 Add SPDX license identifiers to source files under the LGPL
This follows what the kernel is doing, c.f.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5fd54ace4721fc5ce2bb5aef6318fcf17f421460.
2017-11-19 19:08:15 +01:00
Zbigniew Jędrzejewski-Szmek 556c7bae0c basic/hashmap: add cleanup of memory pools (#7164)
It was dropped in 89439d4fc0. As a result, every
process that uses a hashmap allocates and then leaks the hashmap mempools.
The mempools are only allocated in the main thread, but we don't know where
the memory is used.

So let's check if we are the last thread and free the mempools then. This is
fairly heavy, because /proc/self/status has to be opened and parsed, but we do
it only when compiled for valgrind, i.e. not by default, and compared to running
under valgrind or asan, the extra cost is acceptable. The big advantage is that
we don't have to think or filter out this false positive.

As a micro-opt, cleanup is attempted only in the main thread. We could allow
any thread to check if it is the last one and perform cleanup, but that'd mean
that we'd have to _do_ the check in every thread. We don't use threads like
that, our non-main threads are always short-lived, so let's just accept the
possibility that we'll leak memory if a thread survives. The check is also
non-atomic, but it's called in a destructor of the main thread _and_ we do
cleanup only when there are no other threads, so the risk of some library
suddenly spawning another thread is very low. All in all, this is not perfect,
but should work in 999‰ of cases.

Fixes the following valgrind warning:

==22564== HEAP SUMMARY:
==22564==     in use at exit: 8,192 bytes in 2 blocks
==22564==   total heap usage: 243 allocs, 241 frees, 151,905 bytes allocated
==22564==
==22564== 4,096 bytes in 1 blocks are still reachable in loss record 1 of 2
==22564==    at 0x4C2FB6B: malloc (vg_replace_malloc.c:299)
==22564==    by 0x4F08A8C: mempool_alloc_tile (mempool.c:62)
==22564==    by 0x4F08B16: mempool_alloc0_tile (mempool.c:81)
==22564==    by 0x4EF8DE0: hashmap_base_new (hashmap.c:748)
==22564==    by 0x4EF8ED9: internal_hashmap_new (hashmap.c:782)
==22564==    by 0x11045D: test_hashmap_copy (test-hashmap-plain.c:87)
==22564==    by 0x115722: test_hashmap_funcs (test-hashmap-plain.c:914)
==22564==    by 0x10FC9D: main (test-hashmap.c:60)
==22564==
==22564== 4,096 bytes in 1 blocks are still reachable in loss record 2 of 2
==22564==    at 0x4C2FB6B: malloc (vg_replace_malloc.c:299)
==22564==    by 0x4F08A8C: mempool_alloc_tile (mempool.c:62)
==22564==    by 0x4F08B16: mempool_alloc0_tile (mempool.c:81)
==22564==    by 0x4EF8DE0: hashmap_base_new (hashmap.c:748)
==22564==    by 0x4EF8EF8: internal_ordered_hashmap_new (hashmap.c:786)
==22564==    by 0x10A2A0: test_ordered_hashmap_copy (test-hashmap-ordered.c:89)
==22564==    by 0x10F70F: test_ordered_hashmap_funcs (test-hashmap-ordered.c:916)
==22564==    by 0x10FCA2: main (test-hashmap.c:61)
==22564==
==22564== LEAK SUMMARY:
==22564==    definitely lost: 0 bytes in 0 blocks
==22564==    indirectly lost: 0 bytes in 0 blocks
==22564==      possibly lost: 0 bytes in 0 blocks
==22564==    still reachable: 8,192 bytes in 2 blocks
==22564==         suppressed: 0 bytes in 0 blocks

v2:
- check if we are the main thread

v3:
- check if there are no other threads
2017-11-10 15:44:58 +01:00