With the exponential backoff, we can perform more requests in the same amount of time,
so bump this a bit.
In case of large RTT this may be necessary in order not to regress, and in case
of large packet loss it will make us more robust. The latter is particularly
relevant once we start probing for features (and hence may see packet loss
until we settle on the right feature level).
Rather than fixing this to 5s for unicast DNS and 1s for LLMNR, start
at a tenth of those values and increase exponentially until the old
values are reached. For LLMNR the recommended timeout for IEEE 802
networks (which basically means all of the ones we care about) is 100ms,
so that should be uncontroversial. For unicast DNS I have found no
recommended value. However, it seems vastly more likely that hitting a
500ms timeout is caused by packet loss, rather than by the RTT genuinely
being greater than 500ms, so taking this as a starting value seems
reasonable to me.
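Roughly, the backoff then looks like the following sketch (the constant
and function names here are illustrative, not the actual code):

#include <stdint.h>

/* Start at a tenth of the old fixed timeouts... */
#define DNS_UDP_TIMEOUT_MIN_USEC   (500 * 1000ULL)       /* was fixed 5s */
#define LLMNR_TIMEOUT_MIN_USEC     (100 * 1000ULL)       /* was fixed 1s */

/* ...and double on every resend, saturating at the old values. */
static uint64_t next_resend_timeout(uint64_t current_usec, uint64_t max_usec) {
        if (current_usec >= max_usec / 2)
                return max_usec;
        return 2 * current_usec;
}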
In the common case this greatly reduces the latency due to normal packet
loss. Moreover, once we get support for probing for features, this means
that we can send more packets before degrading the feature level whilst
still allowing us to settle on the correct feature level in a reasonable
timeframe.
The timeouts are tracked per server (or per scope for the multicast
protocols), and once a server (or scope) receives a successful packet
the timeout is reset. We also track the largest RTT for the given
server/scope, and always start our timeouts at twice the largest
observed RTT.
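A sketch of the per-server bookkeeping (struct and field names are made
up for illustration, reusing the illustrative constant from the sketch
above):

#include <stdint.h>

struct server_stats {
        uint64_t resend_timeout_usec;   /* current retry timeout */
        uint64_t max_rtt_usec;          /* largest observed round trip */
};

static void on_successful_reply(struct server_stats *s, uint64_t rtt_usec) {
        if (rtt_usec > s->max_rtt_usec)
                s->max_rtt_usec = rtt_usec;

        /* Reset the backoff on success, but never start below twice
         * the largest RTT observed for this server so far. */
        s->resend_timeout_usec = 2 * s->max_rtt_usec > DNS_UDP_TIMEOUT_MIN_USEC ?
                        2 * s->max_rtt_usec : DNS_UDP_TIMEOUT_MIN_USEC;
}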
Let's optimize things a bit and properly compare DNS question arrays,
instead of checking if they are mutual supersets. This also makes ANY
query handling more accurate.
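Conceptually the comparison is now along these lines (helper and field
names are hypothetical; a question array is assumed to contain no
duplicate keys, so equal size plus one-way containment implies
equality):

static bool dns_question_is_equal(DnsQuestion *a, DnsQuestion *b) {
        unsigned i;

        if (a->n_keys != b->n_keys)
                return false;

        for (i = 0; i < a->n_keys; i++)
                if (!dns_question_contains(b, a->keys[i]))
                        return false;

        return true;
}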
As we have connect()ed to the desired DNS server, we no longer need to pass
control messages manually when sending packets. Simplify the logic accordingly.
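In other words, something like this is now sufficient for the UDP send
path (minimal sketch, not the actual function):

#include <sys/socket.h>

/* The kernel already knows the peer of a connect()ed UDP socket, so a
 * plain send() replaces the sendmsg() with explicit addressing and
 * pktinfo control data. */
static ssize_t emit_udp_packet(int connected_fd, const void *data, size_t size) {
        return send(connected_fd, data, size, 0);
}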
This function emits the UDP packet via the scope, but first it will
determine the current server (and connect to it) and store the
server in the transaction.
This should not change the behavior, but simplifies the code.
With access to the server when creating the socket, we can connect()
to the server and hence simplify message sending and receiving in
follow-up patches.
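A sketch of the idea (names assumed, error handling trimmed):

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create the socket with the server address already known and
 * connect() right away, so that later send()/recv() calls need no
 * explicit addressing. */
static int dns_scope_udp_socket(const struct sockaddr *sa, socklen_t salen) {
        int fd;

        fd = socket(sa->sa_family, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0);
        if (fd < 0)
                return -errno;

        if (connect(fd, sa, salen) < 0) {
                int r = -errno;
                close(fd);
                return r;
        }

        return fd;
}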
Close the socket when changing the server in a transaction, in
order for it to be reopened with the right server when we send
the next packet.
This fixes a regression where we could get stuck with a failing
server.
We can never read past the end of the packet, so this seems impossible
to exploit, but let's error out early as reading past the end of the
current RR is clearly an error.
Found by Lennart, based on patch by Daniel.
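The added check is conceptually just this (field names illustrative):

/* Refuse to read a field that would extend past the end of the
 * current RR's RDATA, even though the packet-wide bounds check keeps
 * us inside the packet buffer anyway. */
if (p->rindex + size > rdata_end)
        return -EBADMSG;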
Most blobs (keys, signatures, ...) should have a specific size given by
the relevant algorithm. However, as we don't use/verify the algorithms
yet, let's just ensure that we don't read out zero-length data in cases
where this does not make sense.
The only exceptions, where zero-length data is allowed, are the NSEC3
salt field and the generic data (which we don't know anything about,
so better not to make any assumptions).
Let's make dns_packet_read_public_key() more generic by renaming it to
dns_packet_read_memdup() (which more accurately describes what it
does...). Then, patch all cases where we memdup() RR data to use this
new call.
This specifically checks for zero-length objects and handles them
gracefully, setting the payload field to zero length as a result.
Special care should be taken to ensure that any code using this call
can handle the returned allocated field to be NULL if the size is
specified as 0!
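A sketch of the described semantics (not the verbatim implementation;
dns_packet_read() is assumed to bounds-check and return a pointer into
the packet buffer):

static int dns_packet_read_memdup(
                DnsPacket *p, size_t size,
                void **ret, size_t *ret_size) {

        const void *src;
        void *copy = NULL;
        int r;

        r = dns_packet_read(p, size, &src, NULL);
        if (r < 0)
                return r;

        if (size > 0) {
                copy = memdup(src, size);
                if (!copy)
                        return -ENOMEM;
        }

        *ret = copy;            /* NULL when size is 0! */
        *ret_size = size;

        return 0;
}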
strv_extend() already strdup()s internally, no need to do this twice.
(Also, the OOM check was missing...).
Use strv_consume() when we already have a string allocated whose
ownership we want to pass to the strv.
This fixes 50f1e641a9.
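In other words, the two patterns look like this (the surrounding
variables are made up for illustration):

r = strv_extend(&l, word);              /* copies: strdup()s internally */
if (r < 0)
        return r;

if (asprintf(&s, "%s-%u", base, n) < 0) /* we own s... */
        return -ENOMEM;
r = strv_consume(&l, s);                /* ...and hand ownership to the strv */
if (r < 0)
        return r;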
Reuse the Iterator object from hashmap.h and expose a similar API.
This allows us to do
{
        Iterator i;
        unsigned n;

        BITMAP_FOREACH(n, b, i) {
                Iterator j;
                unsigned m;

                BITMAP_FOREACH(m, b, j) {
                        ...
                }
        }
}
without getting confused. Requested by David.
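Roughly, the macro boils down to something like this (illustrative,
not the verbatim definition):

#define BITMAP_FOREACH(n, b, i) \
        for ((i) = ITERATOR_FIRST; bitmap_iterate((b), &(i), &(n)); )

Since all iteration state lives in the caller-provided Iterator,
nested loops over the same bitmap cannot interfere with each other.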
We used to have one global socket, use one per transaction instead. This
has the side-effect of giving us a random UDP port per transaction, and
hence increasing the entropy and making cache poisoning significantly
harder to achieve.
We still reuse the same port number for packets belonging to the same
transaction (resent packets).
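The crucial part is simply not bind()ing to a fixed local port
(sketch; the address variable is assumed to hold the current server):

fd = socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0);
if (fd < 0)
        return -errno;

/* No bind() to a fixed port: connect() makes the kernel pick a random
 * ephemeral source port, which an off-path attacker has to guess in
 * addition to the 16-bit transaction ID. */
if (connect(fd, (struct sockaddr*) &server_addr, sizeof(server_addr)) < 0) {
        int r = -errno;
        close(fd);
        return r;
}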
This improves the resilience against cache poisoning by being stricter
about only accepting responses that precisely match the request they
are in reply to.
It should be noted that we still only use one port (which is picked
at random), rather than one port for each transaction. Port
randomization would improve things further, but is not required by
the RFC.
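Schematically, a reply is only accepted if it passes checks along
these lines (macro and helper names as assumed in the sketches above):

static bool reply_matches_transaction(DnsTransaction *t, DnsPacket *p) {
        /* Must be a response, must echo our transaction ID, and must
         * carry exactly the question we sent; the connect()ed socket
         * already guarantees the peer address matches. */
        if (!DNS_PACKET_QR(p))
                return false;
        if (DNS_PACKET_ID(p) != t->id)
                return false;
        return dns_question_is_equal(p->question, t->question);
}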
Make all LLMNR-related packet inspections conditional on p->protocol.
Use switch-case statements while at it, which will make future additions
more readable.
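I.e., the checks now take this shape (enum values illustrative):

switch (p->protocol) {

case DNS_PROTOCOL_DNS:
        /* checks that only make sense for unicast DNS */
        break;

case DNS_PROTOCOL_LLMNR:
        /* LLMNR-specific inspections */
        break;

default:
        break;
}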
The C and T bits in the DNS packet header definitions are specific to LLMNR.
In regular DNS, they are called AA and RD instead. Reflect that by calling
the macros accordingly, and alias LLMNR specific macros.
While at it, define RA, AD and CD getters as well.
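A sketch of the getters, assuming the standard header flag layout from
RFC 1035/RFC 4795 (not necessarily the exact macros):

#include <endian.h>

#define DNS_PACKET_FLAG(p, bit) ((be16toh(DNS_PACKET_HEADER(p)->flags) >> (bit)) & 1)

#define DNS_PACKET_AA(p) DNS_PACKET_FLAG(p, 10)   /* Authoritative Answer */
#define DNS_PACKET_RD(p) DNS_PACKET_FLAG(p, 8)    /* Recursion Desired */
#define DNS_PACKET_RA(p) DNS_PACKET_FLAG(p, 7)    /* Recursion Available */
#define DNS_PACKET_AD(p) DNS_PACKET_FLAG(p, 5)    /* Authenticated Data */
#define DNS_PACKET_CD(p) DNS_PACKET_FLAG(p, 4)    /* Checking Disabled */

/* In LLMNR the same bit positions carry different meanings: */
#define LLMNR_PACKET_C(p) DNS_PACKET_AA(p)        /* Conflict */
#define LLMNR_PACKET_T(p) DNS_PACKET_RD(p)        /* Tentative */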