Compare commits


1 Commit

Author: Félix Baylac-Jacqué
SHA1: 8b73ff45cc
Message: Make curl threadsafe again.
Date: 2019-07-31 22:54:03 +02:00
63 changed files with 316 additions and 1259 deletions


@ -1 +1 @@
2.4
2.3


@ -1,38 +0,0 @@
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p python3 --pure
# To be used with `--trace-function-calls` and `flamegraph.pl`.
#
# For example:
#
# nix-instantiate --trace-function-calls '<nixpkgs>' -A hello 2> nix-function-calls.trace
# ./contrib/stack-collapse.py nix-function-calls.trace > nix-function-calls.folded
# nix-shell -p flamegraph --run "flamegraph.pl nix-function-calls.folded > nix-function-calls.svg"
import sys
from pprint import pprint
import fileinput
stack = []
timestack = []
for line in fileinput.input():
    components = line.strip().split(" ", 2)
    if components[0] != "function-trace":
        continue
    direction = components[1]
    components = components[2].rsplit(" ", 2)
    loc = components[0]
    _at = components[1]
    time = int(components[2])
    if direction == "entered":
        stack.append(loc)
        timestack.append(time)
    elif direction == "exited":
        dur = time - timestack.pop()
        vst = ";".join(stack)
        print(f"{vst} {dur}")
        stack.pop()


@ -9,6 +9,5 @@
<xi:include href="distributed-builds.xml" />
<xi:include href="cores-vs-jobs.xml" />
<xi:include href="diff-hook.xml" />
<xi:include href="post-build-hook.xml" />
</part>


@ -1,160 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-post-build-hook"
version="5.0"
>
<title>Using the <xref linkend="conf-post-build-hook" /></title>
<subtitle>Uploading to an S3-compatible binary cache after each build</subtitle>
<section xml:id="chap-post-build-hook-caveats">
<title>Implementation Caveats</title>
<para>Here we use the post-build hook to upload to a binary cache.
This is a simple and working example, but it is not suitable for all
use cases.</para>
<para>The post build hook program runs after each executed build,
and blocks the build loop. The build loop exits if the hook program
fails.</para>
<para>Concretely, this implementation will make Nix slow or unusable
when the internet is slow or unreliable.</para>
<para>A more advanced implementation might pass the store paths to a
user-supplied daemon or queue for processing the store paths outside
of the build loop.</para>
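As an illustration of that more advanced approach, here is a minimal sketch of a non-blocking hook. It is not part of the chapter: the spool file location, the flock-based locking, and the drain command are all hypothetical.

#!/bin/sh
# Hypothetical post-build hook: record the built paths instead of uploading inline.
set -eu
set -f                            # disable globbing; store paths may contain glob characters
queue=/var/lib/nix/upload-queue   # hypothetical spool file
{
  flock 9                         # serialise concurrent hook invocations
  printf '%s\n' $OUT_PATHS >&9    # word splitting is intentional here
} 9>>"$queue"
# A separate daemon or timer can then drain the queue outside the build loop,
# for example with: xargs nix copy --to 's3://example-nix-cache' < "$queue"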
</section>
<section>
<title>Prerequisites</title>
<para>
This tutorial assumes you have configured an S3-compatible binary cache
according to the instructions at
<xref linkend="ssec-s3-substituter-authenticated-writes" />, and
that the <literal>root</literal> user's default AWS profile can
upload to the bucket.
</para>
</section>
<section>
<title>Set up a Signing Key</title>
<para>Use <command>nix-store --generate-binary-cache-key</command> to
create our public and private signing keys. We will sign paths
with the private key, and distribute the public key for verifying
the authenticity of the paths.</para>
<screen>
# nix-store --generate-binary-cache-key example-nix-cache-1 /etc/nix/key.private /etc/nix/key.public
# cat /etc/nix/key.public
example-nix-cache-1:1/cKDz3QCCOmwcztD2eV6Coggp6rqc9DGjWv7C0G+rM=
</screen>
<para>Then, add the public key and the cache URL to your
<filename>nix.conf</filename>'s <xref linkend="conf-trusted-public-keys" />
and <xref linkend="conf-substituters" /> like:</para>
<programlisting>
substituters = https://cache.nixos.org/ s3://example-nix-cache
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= example-nix-cache-1:1/cKDz3QCCOmwcztD2eV6Coggp6rqc9DGjWv7C0G+rM=
</programlisting>
<para>We will restart the Nix daemon in a later step.</para>
</section>
<section>
<title>Implementing the build hook</title>
<para>Write the following script to
<filename>/etc/nix/upload-to-cache.sh</filename>:
</para>
<programlisting>
#!/bin/sh
set -eu
set -f # disable globbing
export IFS=' '
echo "Signing paths" $OUT_PATHS
nix sign-paths --key-file /etc/nix/key.private $OUT_PATHS
echo "Uploading paths" $OUT_PATHS
exec nix copy --to 's3://example-nix-cache' $OUT_PATHS
</programlisting>
<note>
<title>Should <literal>$OUT_PATHS</literal> be quoted?</title>
<para>
The <literal>$OUT_PATHS</literal> variable is a space-separated
list of Nix store paths. In this case, we expect and want the
shell to perform word splitting to make each output path its
own argument to <command>nix sign-paths</command>. Nix guarantees
the paths will not contain any spaces, however a store path
might contain glob characters. The <command>set -f</command>
disables globbing in the shell.
</para>
</note>
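The behaviour described in this note can be reproduced in any POSIX shell; the store paths below are made up for the demonstration:

$ set -f
$ OUT_PATHS='/nix/store/aaa-example /nix/store/bbb-example[0]'
$ printf '<%s>\n' $OUT_PATHS   # splits into two arguments, the brackets stay literal
</nix/store/aaa-example>
</nix/store/bbb-example[0]>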
<para>
Then make sure the hook program is executable by the <literal>root</literal> user:
<screen>
# chmod +x /etc/nix/upload-to-cache.sh
</screen></para>
</section>
<section>
<title>Updating Nix Configuration</title>
<para>Edit <filename>/etc/nix/nix.conf</filename> to run our hook,
by adding the following configuration snippet at the end:</para>
<programlisting>
post-build-hook = /etc/nix/upload-to-cache.sh
</programlisting>
<para>Then, restart the <command>nix-daemon</command>.</para>
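How the daemon is restarted depends on the platform; the following commands are a sketch, assuming the standard unit and service names used elsewhere in this change:

# systemd-based Linux
$ sudo systemctl restart nix-daemon
# macOS (the same command shown in the upgrade instructions elsewhere in this diff)
$ sudo launchctl kickstart -k system/org.nixos.nix-daemon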
</section>
<section>
<title>Testing</title>
<para>Build any derivation, for example:</para>
<screen>
$ nix-build -E '(import &lt;nixpkgs&gt; {}).writeText "example" (builtins.toString builtins.currentTime)'
these derivations will be built:
/nix/store/s4pnfbkalzy5qz57qs6yybna8wylkig6-example.drv
building '/nix/store/s4pnfbkalzy5qz57qs6yybna8wylkig6-example.drv'...
running post-build-hook '/home/grahamc/projects/github.com/NixOS/nix/post-hook.sh'...
post-build-hook: Signing paths /nix/store/ibcyipq5gf91838ldx40mjsp0b8w9n18-example
post-build-hook: Uploading paths /nix/store/ibcyipq5gf91838ldx40mjsp0b8w9n18-example
/nix/store/ibcyipq5gf91838ldx40mjsp0b8w9n18-example
</screen>
<para>Then delete the path from the store, and try substituting it from the binary cache:</para>
<screen>
$ rm ./result
$ nix-store --delete /nix/store/ibcyipq5gf91838ldx40mjsp0b8w9n18-example
</screen>
<para>Now, copy the path back from the cache:</para>
<screen>
$ nix-store --realise /nix/store/ibcyipq5gf91838ldx40mjsp0b8w9n18-example
copying path '/nix/store/m8bmqwrch6l3h8s0k3d673xpmipcdpsa-example' from 's3://example-nix-cache'...
warning: you did not specify '--add-root'; the result might be removed by the garbage collector
/nix/store/m8bmqwrch6l3h8s0k3d673xpmipcdpsa-example
</screen>
</section>
<section>
<title>Conclusion</title>
<para>
We now have a Nix installation configured to automatically sign and
upload every local build to a remote binary cache.
</para>
<para>
Before deploying this to production, be sure to consider the
implementation caveats in <xref linkend="chap-post-build-hook-caveats" />.
</para>
</section>
</chapter>


@ -483,10 +483,8 @@ builtins.fetchurl {
<varlistentry xml:id="conf-max-free"><term><literal>max-free</literal></term>
<listitem><para>When a garbage collection is triggered by the
<literal>min-free</literal> option, it stops as soon as
<literal>max-free</literal> bytes are available. The default is
infinity (i.e. delete all garbage).</para></listitem>
<listitem><para>This option defines how many bytes of disk space must be free before
garbage collection stops, once the <literal>min-free</literal> condition has been triggered.</para></listitem>
</varlistentry>
@ -530,11 +528,9 @@ builtins.fetchurl {
<varlistentry xml:id="conf-min-free"><term><literal>min-free</literal></term>
<listitem>
<para>When free disk space in <filename>/nix/store</filename>
drops below <literal>min-free</literal> during a build, Nix
performs a garbage-collection until <literal>max-free</literal>
bytes are available or there is no more garbage. A value of
<literal>0</literal> (the default) disables this feature.</para>
<para>When the disk reaches <literal>min-free</literal> bytes of free disk space during a build, Nix
starts garbage collection until <literal>max-free</literal> bytes are available on the disk.
A value of <literal>0</literal> (the default) means that this feature is disabled.</para>
</listitem>
</varlistentry>
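Taken together, the two settings might be combined in nix.conf as follows; the sizes are purely illustrative (start collecting below roughly 1 GiB free, stop once roughly 5 GiB are free):

min-free = 1073741824
max-free = 5368709120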
@ -664,62 +660,6 @@ password <replaceable>my-password</replaceable>
</varlistentry>
<varlistentry xml:id="conf-post-build-hook">
<term><literal>post-build-hook</literal></term>
<listitem>
<para>Optional. The path to a program to execute after each build.</para>
<para>This option is only settable in the global
<filename>nix.conf</filename>, or on the command line by trusted
users.</para>
<para>When using the nix-daemon, the daemon executes the hook as
<literal>root</literal>. If the nix-daemon is not involved, the
hook runs as the user executing the nix-build.</para>
<itemizedlist>
<listitem><para>The hook executes after an evaluation-time build.</para></listitem>
<listitem><para>The hook does not execute on substituted paths.</para></listitem>
<listitem><para>The hook's output always goes to the user's terminal.</para></listitem>
<listitem><para>If the hook fails, the build succeeds but no further builds execute.</para></listitem>
<listitem><para>The hook executes synchronously, and blocks other builds from progressing while it runs.</para></listitem>
</itemizedlist>
<para>The program executes with no arguments. The program's environment
contains the following environment variables:</para>
<variablelist>
<varlistentry>
<term><envar>DRV_PATH</envar></term>
<listitem>
<para>The derivation for the built paths.</para>
<para>Example:
<literal>/nix/store/5nihn1a7pa8b25l9zafqaqibznlvvp3f-bash-4.4-p23.drv</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term><envar>OUT_PATHS</envar></term>
<listitem>
<para>Output paths of the built derivation, separated by a space character.</para>
<para>Example:
<literal>/nix/store/zf5lbh336mnzf1nlswdn11g4n2m8zh3g-bash-4.4-p23-dev
/nix/store/rjxwxwv1fpn9wa2x5ssk5phzwlcv4mna-bash-4.4-p23-doc
/nix/store/6bqvbzjkcp9695dq0dpl5y43nvy37pq1-bash-4.4-p23-info
/nix/store/r7fng3kk3vlpdlh2idnrbn37vh4imlj2-bash-4.4-p23-man
/nix/store/xfghy8ixrhz3kyy6p724iv3cxji088dx-bash-4.4-p23</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>See <xref linkend="chap-post-build-hook" /> for an example
implementation.</para>
</listitem>
</varlistentry>
<varlistentry xml:id="conf-repeat"><term><literal>repeat</literal></term>
<listitem><para>How many times to repeat builds to check whether
@ -873,14 +813,6 @@ password <replaceable>my-password</replaceable>
</varlistentry>
<varlistentry xml:id="conf-stalled-download-timeout"><term><literal>stalled-download-timeout</literal></term>
<listitem>
<para>The timeout (in seconds) for receiving data from servers
during download. Nix cancels idle downloads after this timeout's
duration.</para>
</listitem>
</varlistentry>
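As a nix.conf fragment the setting looks like this; the value shown matches the 300-second default visible in src/libstore/download.hh later in this diff:

stalled-download-timeout = 300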
<varlistentry xml:id="conf-substituters"><term><literal>substituters</literal></term>
<listitem><para>A list of URLs of substituters, separated by
@ -981,34 +913,6 @@ requiredSystemFeatures = [ "kvm" ];
</varlistentry>
<varlistentry xml:id="conf-trace-function-calls"><term><literal>trace-function-calls</literal></term>
<listitem>
<para>Default: <literal>false</literal>.</para>
<para>If set to <literal>true</literal>, the Nix evaluator will
trace every function call. Nix will print a log message at the
"vomit" level for every function entrance and function exit.</para>
<informalexample><screen>
function-trace entered undefined position at 1565795816999559622
function-trace exited undefined position at 1565795816999581277
function-trace entered /nix/store/.../example.nix:226:41 at 1565795253249935150
function-trace exited /nix/store/.../example.nix:226:41 at 1565795253249941684
</screen></informalexample>
<para>The <literal>undefined position</literal> means the function
call is a builtin.</para>
<para>Use the <literal>contrib/stack-collapse.py</literal> script
distributed with the Nix source code to convert the trace logs
into a format suitable for <command>flamegraph.pl</command>.</para>
</listitem>
</varlistentry>
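The complete profiling pipeline, repeated here from the header of contrib/stack-collapse.py shown earlier in this diff, is:

$ nix-instantiate --trace-function-calls '<nixpkgs>' -A hello 2> nix-function-calls.trace
$ ./contrib/stack-collapse.py nix-function-calls.trace > nix-function-calls.folded
$ nix-shell -p flamegraph --run "flamegraph.pl nix-function-calls.folded > nix-function-calls.svg"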
<varlistentry xml:id="conf-trusted-public-keys"><term><literal>trusted-public-keys</literal></term>
<listitem><para>A whitespace-separated list of public keys. When


@ -221,53 +221,31 @@ also <xref linkend="sec-common-options" />.</phrase></para>
<varlistentry><term><filename>~/.nix-defexpr</filename></term>
<listitem><para>The source for the default Nix
<listitem><para>A directory that contains the default Nix
expressions used by the <option>--install</option>,
<option>--upgrade</option>, and <option>--query
--available</option> operations to obtain derivations. The
--available</option> operations to obtain derivations. The
<option>--file</option> option may be used to override this
default.</para>
<para>If <filename>~/.nix-defexpr</filename> is a file,
it is loaded as a Nix expression. If the expression
is a set, it is used as the default Nix expression.
If the expression is a function, an empty set is passed
as argument and the return value is used as
the default Nix expression.</para>
<para>If <filename>~/.nix-defexpr</filename> is a directory
containing a <filename>default.nix</filename> file, that file
is loaded as in the above paragraph.</para>
<para>If <filename>~/.nix-defexpr</filename> is a directory without
a <filename>default.nix</filename> file, then its contents
(both files and subdirectories) are loaded as Nix expressions.
The expressions are combined into a single set, each expression
under an attribute with the same name as the original file
or subdirectory.
</para>
<para>For example, if <filename>~/.nix-defexpr</filename> contains
two files, <filename>foo.nix</filename> and <filename>bar.nix</filename>,
<para>The Nix expressions in this directory are combined into a
single set, with each file as an attribute that has the name of
the file. Thus, if <filename>~/.nix-defexpr</filename> contains
two files, <filename>foo</filename> and <filename>bar</filename>,
then the default Nix expression will essentially be
<programlisting>
{
foo = import ~/.nix-defexpr/foo.nix;
bar = import ~/.nix-defexpr/bar.nix;
foo = import ~/.nix-defexpr/foo;
bar = import ~/.nix-defexpr/bar;
}</programlisting>
</para>
<para>The file <filename>manifest.nix</filename> is always ignored.
Subdirectories without a <filename>default.nix</filename> file
are traversed recursively in search of more Nix expressions,
but the names of these intermediate directories are not
added to the attribute paths of the default Nix expression.</para>
<para>The command <command>nix-channel</command> places symlinks
to the downloaded Nix expressions from each subscribed channel in
this directory.</para>
</listitem>
</varlistentry>
@ -1370,13 +1348,10 @@ profile. The generations can be a list of generation numbers, the
special value <literal>old</literal> to delete all non-current
generations, a value such as <literal>30d</literal> to delete all
generations older than the specified number of days (except for the
generation that was active at that point in time), or a value such as
<literal>+5</literal> to keep the last <literal>5</literal> generations
ignoring any newer than current, e.g., if <literal>30</literal> is the current
generation <literal>+5</literal> will delete generation <literal>25</literal>
and all older generations.
Periodically deleting old generations is important to make garbage collection
effective.</para>
generation that was active at that point in time), or a value such as
<literal>+5</literal> to only keep the specified items older than the
current generation. Periodically deleting old generations is important
to make garbage collection effective.</para>
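For example, each accepted form can be used as follows (the profile contents are hypothetical):

$ nix-env --delete-generations old   # delete all non-current generations
$ nix-env --delete-generations 30d   # delete generations older than 30 days
$ nix-env --delete-generations +5    # keep only the last 5 generations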
</refsection>


@ -170,6 +170,18 @@ if builtins ? getEnv then builtins.getEnv "PATH" else ""</programlisting>
</varlistentry>
<varlistentry xml:id='builtin-splitVersion'>
<term><function>builtins.splitVersion</function>
<replaceable>s</replaceable></term>
<listitem><para>Split a string representing a version into its
components, by the same version splitting logic underlying the
version comparison in <link linkend="ssec-version-comparisons">
<command>nix-env -u</command></link>.</para></listitem>
</varlistentry>
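A quick way to see the splitting behaviour is to evaluate the builtin directly; the version string is an arbitrary example:

$ nix-instantiate --eval -E 'builtins.splitVersion "2.1.3pre"'
[ "2" "1" "3" "pre" ]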
<varlistentry xml:id='builtin-concatLists'>
<term><function>builtins.concatLists</function>
<replaceable>lists</replaceable></term>
@ -436,7 +448,7 @@ stdenv.mkDerivation { … }
<example>
<title>Fetching an arbitrary ref</title>
<programlisting>builtins.fetchGit {
url = "https://github.com/NixOS/nix.git";
url = "https://gitub.com/NixOS/nix.git";
ref = "refs/heads/0.5-release";
}</programlisting>
</example>
@ -487,8 +499,11 @@ stdenv.mkDerivation { … }
<title>Fetching a tag</title>
<programlisting>builtins.fetchGit {
url = "https://github.com/nixos/nix.git";
ref = "refs/tags/1.9";
ref = "tags/1.9";
}</programlisting>
<note><para>Due to a bug (<link
xlink:href="https://github.com/NixOS/nix/issues/2385">#2385</link>),
only non-annotated tags can be fetched.</para></note>
</example>
<example>
@ -1260,19 +1275,6 @@ Evaluates to <literal>[ " " [ "FOO" ] " " ]</literal>.
</para></listitem>
</varlistentry>
<varlistentry xml:id='builtin-splitVersion'>
<term><function>builtins.splitVersion</function>
<replaceable>s</replaceable></term>
<listitem><para>Split a string representing a version into its
components, by the same version splitting logic underlying the
version comparison in <link linkend="ssec-version-comparisons">
<command>nix-env -u</command></link>.</para></listitem>
</varlistentry>
<varlistentry xml:id='builtin-stringLength'>
<term><function>builtins.stringLength</function>
<replaceable>e</replaceable></term>


@ -15,16 +15,13 @@ weakest binding).</para>
<tgroup cols='3'>
<thead>
<row>
<entry>Name</entry>
<entry>Syntax</entry>
<entry>Associativity</entry>
<entry>Description</entry>
<entry>Precedence</entry>
</row>
</thead>
<tbody>
<row>
<entry>Select</entry>
<entry><replaceable>e</replaceable> <literal>.</literal>
<replaceable>attrpath</replaceable>
[ <literal>or</literal> <replaceable>def</replaceable> ]
@ -36,25 +33,19 @@ weakest binding).</para>
dot-separated list of attribute names.) If the attribute
doesn't exist, return <replaceable>def</replaceable> if
provided, otherwise abort evaluation.</entry>
<entry>1</entry>
</row>
<row>
<entry>Application</entry>
<entry><replaceable>e1</replaceable> <replaceable>e2</replaceable></entry>
<entry>left</entry>
<entry>Call function <replaceable>e1</replaceable> with
argument <replaceable>e2</replaceable>.</entry>
<entry>2</entry>
</row>
<row>
<entry>Arithmetic Negation</entry>
<entry><literal>-</literal> <replaceable>e</replaceable></entry>
<entry>none</entry>
<entry>Arithmetic negation.</entry>
<entry>3</entry>
</row>
<row>
<entry>Has Attribute</entry>
<entry><replaceable>e</replaceable> <literal>?</literal>
<replaceable>attrpath</replaceable></entry>
<entry>none</entry>
@ -62,69 +53,34 @@ weakest binding).</para>
the attribute denoted by <replaceable>attrpath</replaceable>;
return <literal>true</literal> or
<literal>false</literal>.</entry>
<entry>4</entry>
</row>
<row>
<entry>List Concatenation</entry>
<entry><replaceable>e1</replaceable> <literal>++</literal> <replaceable>e2</replaceable></entry>
<entry>right</entry>
<entry>List concatenation.</entry>
<entry>5</entry>
</row>
<row>
<entry>Multiplication</entry>
<entry>
<replaceable>e1</replaceable> <literal>*</literal> <replaceable>e2</replaceable>,
</entry>
<entry>left</entry>
<entry>Arithmetic multiplication.</entry>
<entry>6</entry>
</row>
<row>
<entry>Division</entry>
<entry>
<replaceable>e1</replaceable> <literal>/</literal> <replaceable>e2</replaceable>
</entry>
<entry>left</entry>
<entry>Arithmetic division.</entry>
<entry>6</entry>
<entry>Arithmetic multiplication and division.</entry>
</row>
<row>
<entry>Addition</entry>
<entry>
<replaceable>e1</replaceable> <literal>+</literal> <replaceable>e2</replaceable>
</entry>
<entry>left</entry>
<entry>Arithmetic addition.</entry>
<entry>7</entry>
</row>
<row>
<entry>Subtraction</entry>
<entry>
<replaceable>e1</replaceable> <literal>+</literal> <replaceable>e2</replaceable>,
<replaceable>e1</replaceable> <literal>-</literal> <replaceable>e2</replaceable>
</entry>
<entry>left</entry>
<entry>Arithmetic subtraction.</entry>
<entry>7</entry>
<entry>Arithmetic addition and subtraction. String or path concatenation (only by <literal>+</literal>).</entry>
</row>
<row>
<entry>String Concatenation</entry>
<entry>
<replaceable>string1</replaceable> <literal>+</literal> <replaceable>string2</replaceable>
</entry>
<entry>left</entry>
<entry>String concatenation.</entry>
<entry>7</entry>
</row>
<row>
<entry>Not</entry>
<entry><literal>!</literal> <replaceable>e</replaceable></entry>
<entry>none</entry>
<entry>Boolean negation.</entry>
<entry>8</entry>
</row>
<row>
<entry>Update</entry>
<entry><replaceable>e1</replaceable> <literal>//</literal>
<replaceable>e2</replaceable></entry>
<entry>right</entry>
@ -133,90 +89,47 @@ weakest binding).</para>
<replaceable>e2</replaceable> (with the latter taking
precedence over the former in case of equally named
attributes).</entry>
<entry>9</entry>
</row>
<row>
<entry>Less Than</entry>
<entry>
<replaceable>e1</replaceable> <literal>&lt;</literal> <replaceable>e2</replaceable>,
</entry>
<entry>none</entry>
<entry>Arithmetic comparison.</entry>
<entry>10</entry>
</row>
<row>
<entry>Less Than or Equal To</entry>
<entry>
<replaceable>e1</replaceable> <literal>&lt;=</literal> <replaceable>e2</replaceable>
</entry>
<entry>none</entry>
<entry>Arithmetic comparison.</entry>
<entry>10</entry>
</row>
<row>
<entry>Greater Than</entry>
<entry>
<replaceable>e1</replaceable> <literal>&gt;</literal> <replaceable>e2</replaceable>
</entry>
<entry>none</entry>
<entry>Arithmetic comparison.</entry>
<entry>10</entry>
</row>
<row>
<entry>Greater Than or Equal To</entry>
<entry>
<replaceable>e1</replaceable> <literal>&gt;</literal> <replaceable>e2</replaceable>,
<replaceable>e1</replaceable> <literal>&lt;=</literal> <replaceable>e2</replaceable>,
<replaceable>e1</replaceable> <literal>&gt;=</literal> <replaceable>e2</replaceable>
</entry>
<entry>none</entry>
<entry>Arithmetic comparison.</entry>
<entry>10</entry>
</row>
<row>
<entry>Equality</entry>
<entry>
<replaceable>e1</replaceable> <literal>==</literal> <replaceable>e2</replaceable>
</entry>
<entry>none</entry>
<entry>Equality.</entry>
<entry>11</entry>
</row>
<row>
<entry>Inequality</entry>
<entry>
<replaceable>e1</replaceable> <literal>==</literal> <replaceable>e2</replaceable>,
<replaceable>e1</replaceable> <literal>!=</literal> <replaceable>e2</replaceable>
</entry>
<entry>none</entry>
<entry>Inequality.</entry>
<entry>11</entry>
<entry>Equality and inequality.</entry>
</row>
<row>
<entry>Logical AND</entry>
<entry><replaceable>e1</replaceable> <literal>&amp;&amp;</literal>
<replaceable>e2</replaceable></entry>
<entry>left</entry>
<entry>Logical AND.</entry>
<entry>12</entry>
</row>
<row>
<entry>Logical OR</entry>
<entry><replaceable>e1</replaceable> <literal>||</literal>
<replaceable>e2</replaceable></entry>
<entry>left</entry>
<entry>Logical OR.</entry>
<entry>13</entry>
</row>
<row>
<entry>Logical Implication</entry>
<entry><replaceable>e1</replaceable> <literal>-></literal>
<replaceable>e2</replaceable></entry>
<entry>none</entry>
<entry>Logical implication (equivalent to
<literal>!<replaceable>e1</replaceable> ||
<replaceable>e2</replaceable></literal>).</entry>
<entry>14</entry>
</row>
</tbody>
</tgroup>
</table>
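The precedences in the table can be checked by evaluating small expressions; for instance, multiplication binds more tightly than addition, and boolean negation more tightly than logical AND:

$ nix-instantiate --eval -E '1 + 2 * 3'
7
$ nix-instantiate --eval -E '!false && false'
false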
</section>
</section>


@ -67,23 +67,5 @@ $ sudo launchctl kickstart -k system/org.nixos.nix-daemon
</screen>
</section>
<section xml:id="sec-installer-proxy-settings">
<title>Proxy Environment Variables</title>
<para>The Nix installer has special handling for these proxy-related
environment variables:
<varname>http_proxy</varname>, <varname>https_proxy</varname>,
<varname>ftp_proxy</varname>, <varname>no_proxy</varname>,
<varname>HTTP_PROXY</varname>, <varname>HTTPS_PROXY</varname>,
<varname>FTP_PROXY</varname>, <varname>NO_PROXY</varname>.
</para>
<para>If any of these variables are set when running the Nix installer,
then the installer will create an override file at
<filename>/etc/systemd/system/nix-daemon.service.d/override.conf</filename>
so <command>nix-daemon</command> will use them.
</para>
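The override file created by the installer contains plain Environment= lines in a [Service] section, matching what create_systemd_override writes further down in this diff. For example, with only https_proxy set (the value is illustrative):

[Service]
Environment=https_proxy=http://proxy.example.com:3128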
</section>
</section>
</chapter>


@ -4,22 +4,9 @@
version="5.0"
xml:id="ssec-relnotes-2.3">
<title>Release 2.3 (2019-09-04)</title>
<title>Release 2.3 (????-??-??)</title>
<para>This is primarily a bug fix release. However, it makes some
incompatible changes:</para>
<itemizedlist>
<listitem>
<para>Nix now uses BSD file locks instead of POSIX file
locks. Because of this, you should not use Nix 2.3 and previous
releases at the same time on a Nix store.</para>
</listitem>
</itemizedlist>
<para>It also has the following changes:</para>
<para>This release contains the following changes:</para>
<itemizedlist>
@ -31,62 +18,5 @@ incompatible changes:</para>
already begin with <literal>refs/</literal>.
</para>
</listitem>
<listitem>
<para>The installer now enables sandboxing by default on
Linux. The <literal>max-jobs</literal> setting now defaults to
1.</para>
</listitem>
<listitem>
<para>New builtin functions:
<literal>builtins.isPath</literal>,
<literal>builtins.hashFile</literal>.
</para>
</listitem>
<listitem>
<para>The <command>nix</command> command has a new
<option>--print-build-logs</option> (<option>-L</option>) flag to
print build log output to stderr, rather than showing the last log
line in the progress bar. To distinguish between concurrent
builds, log lines are prefixed by the name of the package.
</para>
</listitem>
<listitem>
<para>Builds are now executed in a pseudo-terminal, and the
<envar>TERM</envar> environment variable is set to
<literal>xterm-256color</literal>. This allows many programs
(e.g. <command>gcc</command>, <command>clang</command>,
<command>cmake</command>) to print colorized log output.</para>
</listitem>
<listitem>
<para>Add <option>--no-net</option> convenience flag. This flag
disables substituters; sets the <literal>tarball-ttl</literal>
setting to infinity (ensuring that any previously downloaded files
are considered current); and disables retrying downloads and sets
the connection timeout to the minimum. This flag is enabled
automatically if there are no configured non-loopback network
interfaces.</para>
</listitem>
<listitem>
<para>Add a <literal>post-build-hook</literal> setting to run a
program after a build has succeeded.</para>
</listitem>
<listitem>
<para>Add a <literal>trace-function-calls</literal> setting to log
the duration of Nix function calls to stderr.</para>
</listitem>
<listitem>
<para>On Linux, sandboxing is now disabled by default on systems
that don't have the necessary kernel support.</para>
</listitem>
</itemizedlist>
</section>


@ -44,7 +44,7 @@ print STDERR "Nix revision is $nixRev, version is $version\n";
File::Path::make_path($releasesDir);
if (system("mountpoint -q $releasesDir") != 0) {
system("sshfs hydra-mirror\@nixos.org:/releases $releasesDir") == 0 or die;
system("sshfs hydra-mirror:/releases $releasesDir") == 0 or die;
}
my $releaseDir = "$releasesDir/nix/$releaseName";


@ -13,12 +13,8 @@
<true/>
<key>RunAtLoad</key>
<true/>
<key>ProgramArguments</key>
<array>
<string>/bin/sh</string>
<string>-c</string>
<string>/bin/wait4path @bindir@/nix-daemon &amp;&amp; @bindir@/nix-daemon</string>
</array>
<key>Program</key>
<string>@bindir@/nix-daemon</string>
<key>StandardErrorPath</key>
<string>/var/log/nix-daemon.log</string>
<key>StandardOutPath</key>

View File

@ -7,6 +7,3 @@ ConditionPathIsReadWrite=@localstatedir@/nix/daemon-socket
[Service]
ExecStart=@@bindir@/nix-daemon nix-daemon --daemon
KillMode=process
[Install]
WantedBy=multi-user.target


@ -72,12 +72,7 @@ let
# https://github.com/NixOS/nixpkgs/issues/45462
''
mkdir -p $out/lib
cp -pd ${boost}/lib/{libboost_context*,libboost_thread*,libboost_system*} $out/lib
rm -f $out/lib/*.a
${lib.optionalString stdenv.isLinux ''
chmod u+w $out/lib/*.so.*
patchelf --set-rpath $out/lib:${stdenv.cc.cc.lib}/lib $out/lib/libboost_thread.so.*
''}
cp ${boost}/lib/libboost_context* $out/lib
'';
configureFlags = configureFlags ++
@ -170,10 +165,10 @@ let
chmod +x $TMPDIR/install-systemd-multi-user.sh
chmod +x $TMPDIR/install-multi-user
dir=nix-${version}-${system}
fn=$out/$dir.tar.xz
fn=$out/$dir.tar.bz2
mkdir -p $out/nix-support
echo "file binary-dist $fn" >> $out/nix-support/hydra-build-products
tar cvfJ $fn \
tar cvfj $fn \
--owner=0 --group=0 --mode=u+rw,uga+r \
--absolute-names \
--hard-dereference \
@ -300,7 +295,7 @@ let
substitute ${./scripts/install.in} $out/install \
${pkgs.lib.concatMapStrings
(system: "--replace '@binaryTarball_${system}@' $(nix hash-file --base16 --type sha256 ${binaryTarball.${system}}/*.tar.xz) ")
(system: "--replace '@binaryTarball_${system}@' $(nix hash-file --base16 --type sha256 ${binaryTarball.${system}}/*.tar.bz2) ")
[ "x86_64-linux" "i686-linux" "x86_64-darwin" "aarch64-linux" ]
} \
--replace '@nixVersion@' ${build.x86_64-linux.src.version}


@ -330,7 +330,7 @@ EOF
fi
done
if [ -d /nix/store ] || [ -d /nix/var ]; then
if [ -d /nix ]; then
failure <<EOF
There are some relics of a previous installation of Nix at /nix, and
this script assumes Nix is _not_ yet installed. Please delete the old
@ -758,13 +758,9 @@ main() {
if [ "$(uname -s)" = "Darwin" ]; then
# shellcheck source=./install-darwin-multi-user.sh
. "$EXTRACTED_NIX_PATH/install-darwin-multi-user.sh"
elif [ "$(uname -s)" = "Linux" ]; then
if [ -e /run/systemd/system ]; then
# shellcheck source=./install-systemd-multi-user.sh
. "$EXTRACTED_NIX_PATH/install-systemd-multi-user.sh"
else
failure "Sorry, the multi-user installation requires systemd on Linux (detected using /run/systemd/system)"
fi
elif [ "$(uname -s)" = "Linux" ] && [ -e /run/systemd/system ]; then
# shellcheck source=./install-systemd-multi-user.sh
. "$EXTRACTED_NIX_PATH/install-systemd-multi-user.sh"
else
failure "Sorry, I don't know what to do on $(uname)"
fi

scripts/install-systemd-multi-user.sh (34 changed lines; Executable file → Normal file)

@ -9,38 +9,6 @@ readonly SERVICE_DEST=/etc/systemd/system/nix-daemon.service
readonly SOCKET_SRC=/lib/systemd/system/nix-daemon.socket
readonly SOCKET_DEST=/etc/systemd/system/nix-daemon.socket
# Path for the systemd override unit file to contain the proxy settings
readonly SERVICE_OVERRIDE=${SERVICE_DEST}.d/override.conf
create_systemd_override() {
header "Configuring proxy for the nix-daemon service"
_sudo "create directory for systemd unit override" mkdir -p "$(dirname $SERVICE_OVERRIDE)"
cat <<EOF | _sudo "create systemd unit override" tee "$SERVICE_OVERRIDE"
[Service]
$1
EOF
}
# Gather all non-empty proxy environment variables into a string
create_systemd_proxy_env() {
vars="http_proxy https_proxy ftp_proxy no_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY NO_PROXY"
for v in $vars; do
if [ "x${!v:-}" != "x" ]; then
echo "Environment=${v}=${!v}"
fi
done
}
handle_network_proxy() {
# Create a systemd unit override with proxy environment variables
# if any proxy environment variables are not empty.
PROXY_ENV_STRING=$(create_systemd_proxy_env)
if [ -n "${PROXY_ENV_STRING}" ]; then
create_systemd_override "${PROXY_ENV_STRING}"
fi
}
poly_validate_assumptions() {
if [ "$(uname -s)" != "Linux" ]; then
failure "This script is for use with Linux!"
@ -79,8 +47,6 @@ poly_configure_nix_daemon_service() {
_sudo "to set up the nix-daemon socket service" \
systemctl enable "/nix/var/nix/profiles/default$SOCKET_SRC"
handle_network_proxy
_sudo "to load the systemd unit for nix-daemon" \
systemctl daemon-reload


@ -30,11 +30,12 @@ case "$(uname -s).$(uname -m)" in
*) oops "sorry, there is no binary distribution of Nix for your platform";;
esac
url="https://nixos.org/releases/nix/nix-@nixVersion@/nix-@nixVersion@-$system.tar.xz"
url="https://nixos.org/releases/nix/nix-@nixVersion@/nix-@nixVersion@-$system.tar.bz2"
tarball="$tmpDir/$(basename "$tmpDir/nix-@nixVersion@-$system.tar.xz")"
tarball="$tmpDir/$(basename "$tmpDir/nix-@nixVersion@-$system.tar.bz2")"
require_util curl "download the binary tarball"
require_util bzcat "decompress the binary tarball"
require_util tar "unpack the binary tarball"
echo "downloading Nix @nixVersion@ binary tarball for $system from '$url' to '$tmpDir'..."
@ -56,7 +57,7 @@ fi
unpack=$tmpDir/unpack
mkdir -p "$unpack"
tar -xf "$tarball" -C "$unpack" || oops "failed to unpack '$url'"
< "$tarball" bzcat | tar -xf - -C "$unpack" || oops "failed to unpack '$url'"
script=$(echo "$unpack"/*/install)


@ -9,7 +9,6 @@
#include "json.hh"
#include <algorithm>
#include <chrono>
#include <cstring>
#include <unistd.h>
#include <sys/time.h>
@ -17,6 +16,7 @@
#include <iostream>
#include <fstream>
#include <sys/time.h>
#include <sys/resource.h>
#if HAVE_BOEHMGC
@ -1094,13 +1094,9 @@ void EvalState::callPrimOp(Value & fun, Value & arg, Value & v, const Pos & pos)
}
}
void EvalState::callFunction(Value & fun, Value & arg, Value & v, const Pos & pos)
{
std::optional<FunctionCallTrace> trace;
if (evalSettings.traceFunctionCalls) {
trace.emplace(pos);
}
forceValue(fun, pos);
if (fun.type == tPrimOp || fun.type == tPrimOpApp) {


@ -6,7 +6,6 @@
#include "symbol-table.hh"
#include "hash.hh"
#include "config.hh"
#include "function-trace.hh"
#include <map>
#include <unordered_map>
@ -350,9 +349,6 @@ struct EvalSettings : Config
Setting<Strings> allowedUris{this, {}, "allowed-uris",
"Prefixes of URIs that builtin functions such as fetchurl and fetchGit are allowed to fetch."};
Setting<bool> traceFunctionCalls{this, false, "trace-function-calls",
"Emit log messages for each function entry and exit at the 'vomit' log level (-vvvv)"};
};
extern EvalSettings evalSettings;


@ -1,24 +0,0 @@
#pragma once
#include "eval.hh"
#include <sys/time.h>
namespace nix {
struct FunctionCallTrace
{
const Pos & pos;
FunctionCallTrace(const Pos & pos) : pos(pos) {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace entered %1% at %2%", pos, ns.count());
}
~FunctionCallTrace() {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace exited %1% at %2%", pos, ns.count());
}
};
}


@ -111,9 +111,9 @@ static void parseJSON(EvalState & state, const char * & s, Value & v)
mkFloat(v, stod(tmp_number));
else
mkInt(v, stol(tmp_number));
} catch (std::invalid_argument & e) {
} catch (std::invalid_argument e) {
throw JSONParseError("invalid JSON number");
} catch (std::out_of_range & e) {
} catch (std::out_of_range e) {
throw JSONParseError("out-of-range JSON number");
}
}


@ -38,7 +38,7 @@ GitInfo exportGit(ref<Store> store, const std::string & uri,
try {
runProgram("git", true, { "-C", uri, "diff-index", "--quiet", "HEAD", "--" });
} catch (ExecError & e) {
} catch (ExecError e) {
if (!WIFEXITED(e.status) || WEXITSTATUS(e.status) != 1) throw;
clean = false;
}


@ -80,7 +80,6 @@ string getArg(const string & opt,
}
#if OPENSSL_VERSION_NUMBER < 0x10101000L
/* OpenSSL is not thread-safe by default - it will randomly crash
unless the user supplies a mutex locking function. So let's do
that. */
@ -93,7 +92,6 @@ static void opensslLockCallback(int mode, int type, const char * file, int line)
else
opensslLocks[type].unlock();
}
#endif
static void sigHandler(int signo) { }
@ -107,11 +105,9 @@ void initNix()
std::cerr.rdbuf()->pubsetbuf(buf, sizeof(buf));
#endif
#if OPENSSL_VERSION_NUMBER < 0x10101000L
/* Initialise OpenSSL locking. */
opensslLocks = std::vector<std::mutex>(CRYPTO_num_locks());
CRYPTO_set_locking_callback(opensslLockCallback);
#endif
loadConfFile();
@ -129,15 +125,6 @@ void initNix()
act.sa_handler = sigHandler;
if (sigaction(SIGUSR1, &act, 0)) throw SysError("handling SIGUSR1");
#if __APPLE__
/* HACK: on darwin, we can't use sigprocmask with SIGWINCH.
* Instead, add a dummy sigaction handler, and signalHandlerThread
* can handle the rest. */
struct sigaction sa;
sa.sa_handler = sigHandler;
if (sigaction(SIGWINCH, &sa, 0)) throw SysError("handling SIGWINCH");
#endif
/* Register a SIGSEGV handler to detect stack overflows. */
detectStackOverflow();


@ -10,13 +10,10 @@
#include "nar-info-disk-cache.hh"
#include "nar-accessor.hh"
#include "json.hh"
#include "thread-pool.hh"
#include <chrono>
#include <future>
#include <regex>
#include <nlohmann/json.hpp>
#include <future>
namespace nix {
@ -58,7 +55,7 @@ void BinaryCacheStore::init()
}
void BinaryCacheStore::getFile(const std::string & path,
Callback<std::shared_ptr<std::string>> callback) noexcept
Callback<std::shared_ptr<std::string>> callback)
{
try {
callback(getFile(path));
@ -142,11 +139,6 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
auto accessor_ = std::dynamic_pointer_cast<RemoteFSAccessor>(accessor);
auto narAccessor = makeNarAccessor(nar);
if (accessor_)
accessor_->addToCache(info.path, *nar, narAccessor);
/* Optionally write a JSON file containing a listing of the
contents of the NAR. */
if (writeNARListing) {
@ -156,6 +148,11 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
JSONObject jsonRoot(jsonOut);
jsonRoot.attr("version", 1);
auto narAccessor = makeNarAccessor(nar);
if (accessor_)
accessor_->addToCache(info.path, *nar, narAccessor);
{
auto res = jsonRoot.placeholder("root");
listNar(res, narAccessor, "", true);
@ -165,6 +162,11 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
upsertFile(storePathToHash(info.path) + ".ls", jsonOut.str(), "application/json");
}
else {
if (accessor_)
accessor_->addToCache(info.path, *nar, makeNarAccessor(nar));
}
/* Compress the NAR. */
narInfo->compression = compression;
auto now1 = std::chrono::steady_clock::now();
@ -179,70 +181,12 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
% ((1.0 - (double) narCompressed->size() / nar->size()) * 100.0)
% duration);
/* Atomically write the NAR file. */
narInfo->url = "nar/" + narInfo->fileHash.to_string(Base32, false) + ".nar"
+ (compression == "xz" ? ".xz" :
compression == "bzip2" ? ".bz2" :
compression == "br" ? ".br" :
"");
/* Optionally maintain an index of DWARF debug info files
consisting of JSON files named 'debuginfo/<build-id>' that
specify the NAR file and member containing the debug info. */
if (writeDebugInfo) {
std::string buildIdDir = "/lib/debug/.build-id";
if (narAccessor->stat(buildIdDir).type == FSAccessor::tDirectory) {
ThreadPool threadPool(25);
auto doFile = [&](std::string member, std::string key, std::string target) {
checkInterrupt();
nlohmann::json json;
json["archive"] = target;
json["member"] = member;
// FIXME: or should we overwrite? The previous link may point
// to a GC'ed file, so overwriting might be useful...
if (fileExists(key)) return;
printMsg(lvlTalkative, "creating debuginfo link from '%s' to '%s'", key, target);
upsertFile(key, json.dump(), "application/json");
};
std::regex regex1("^[0-9a-f]{2}$");
std::regex regex2("^[0-9a-f]{38}\\.debug$");
for (auto & s1 : narAccessor->readDirectory(buildIdDir)) {
auto dir = buildIdDir + "/" + s1;
if (narAccessor->stat(dir).type != FSAccessor::tDirectory
|| !std::regex_match(s1, regex1))
continue;
for (auto & s2 : narAccessor->readDirectory(dir)) {
auto debugPath = dir + "/" + s2;
if (narAccessor->stat(debugPath).type != FSAccessor::tRegular
|| !std::regex_match(s2, regex2))
continue;
auto buildId = s1 + s2;
std::string key = "debuginfo/" + buildId;
std::string target = "../" + narInfo->url;
threadPool.enqueue(std::bind(doFile, std::string(debugPath, 1), key, target));
}
}
threadPool.process();
}
}
/* Atomically write the NAR file. */
if (repair || !fileExists(narInfo->url)) {
stats.narWrite++;
upsertFile(narInfo->url, *narCompressed, "application/x-nix-nar");
@ -296,7 +240,7 @@ void BinaryCacheStore::narFromPath(const Path & storePath, Sink & sink)
}
void BinaryCacheStore::queryPathInfoUncached(const Path & storePath,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept
Callback<std::shared_ptr<ValidPathInfo>> callback)
{
auto uri = getUri();
auto act = std::make_shared<Activity>(*logger, lvlTalkative, actQueryPathInfo,
@ -305,23 +249,21 @@ void BinaryCacheStore::queryPathInfoUncached(const Path & storePath,
auto narInfoFile = narInfoFileFor(storePath);
auto callbackPtr = std::make_shared<decltype(callback)>(std::move(callback));
getFile(narInfoFile,
{[=](std::future<std::shared_ptr<std::string>> fut) {
try {
auto data = fut.get();
if (!data) return (*callbackPtr)(nullptr);
if (!data) return callback(nullptr);
stats.narInfoRead++;
(*callbackPtr)((std::shared_ptr<ValidPathInfo>)
callback((std::shared_ptr<ValidPathInfo>)
std::make_shared<NarInfo>(*this, *data, narInfoFile));
(void) act; // force Activity into this lambda to ensure it stays alive
} catch (...) {
callbackPtr->rethrow();
callback.rethrow();
}
}});
}


@ -17,7 +17,6 @@ public:
const Setting<std::string> compression{this, "xz", "compression", "NAR compression method ('xz', 'bzip2', or 'none')"};
const Setting<bool> writeNARListing{this, false, "write-nar-listing", "whether to write a JSON file listing the files in each NAR"};
const Setting<bool> writeDebugInfo{this, false, "index-debug-info", "whether to index DWARF debug info files by build ID"};
const Setting<Path> secretKeyFile{this, "", "secret-key", "path to secret key used to sign the binary cache"};
const Setting<Path> localNarCache{this, "", "local-nar-cache", "path to a local cache of NARs"};
const Setting<bool> parallelCompression{this, false, "parallel-compression",
@ -48,7 +47,7 @@ public:
/* Fetch the specified file and call the specified callback with
the result. A subclass may implement this asynchronously. */
virtual void getFile(const std::string & path,
Callback<std::shared_ptr<std::string>> callback) noexcept;
Callback<std::shared_ptr<std::string>> callback);
std::shared_ptr<std::string> getFile(const std::string & path);
@ -74,7 +73,7 @@ public:
bool isValidPathUncached(const Path & path) override;
void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept override;
Callback<std::shared_ptr<ValidPathInfo>> callback) override;
Path queryPathFromHashPart(const string & hashPart) override
{ unsupported("queryPathFromHashPart"); }


@ -1197,7 +1197,7 @@ void DerivationGoal::haveDerivation()
/* We are first going to try to create the invalid output paths
through substitutes. If that doesn't work, we'll build
them. */
if (settings.useSubstitutes && parsedDrv->substitutesAllowed())
if (settings.useSubstitutes && drv->substitutesAllowed())
for (auto & i : invalidOutputs)
addWaitee(worker.makeSubstitutionGoal(i, buildMode == bmRepair ? Repair : NoRepair));
@ -1629,61 +1629,6 @@ void DerivationGoal::buildDone()
being valid. */
registerOutputs();
if (settings.postBuildHook != "") {
Activity act(*logger, lvlInfo, actPostBuildHook,
fmt("running post-build-hook '%s'", settings.postBuildHook),
Logger::Fields{drvPath});
PushActivity pact(act.id);
auto outputPaths = drv->outputPaths();
std::map<std::string, std::string> hookEnvironment = getEnv();
hookEnvironment.emplace("DRV_PATH", drvPath);
hookEnvironment.emplace("OUT_PATHS", chomp(concatStringsSep(" ", outputPaths)));
RunOptions opts(settings.postBuildHook, {});
opts.environment = hookEnvironment;
struct LogSink : Sink {
Activity & act;
std::string currentLine;
LogSink(Activity & act) : act(act) { }
void operator() (const unsigned char * data, size_t len) override {
for (size_t i = 0; i < len; i++) {
auto c = data[i];
if (c == '\n') {
flushLine();
} else {
currentLine += c;
}
}
}
void flushLine() {
if (settings.verboseBuild) {
printError("post-build-hook: " + currentLine);
} else {
act.result(resPostBuildLogLine, currentLine);
}
currentLine.clear();
}
~LogSink() {
if (currentLine != "") {
currentLine += '\n';
flushLine();
}
}
};
LogSink sink(act);
opts.standardOut = &sink;
opts.mergeStderrToStdout = true;
runProgram2(opts);
}
if (buildMode == bmCheck) {
done(BuildResult::Built);
return;
@ -2357,37 +2302,17 @@ void DerivationGoal::startBuilder()
flags |= CLONE_NEWNET;
pid_t child = clone(childEntry, stack + stackSize, flags, this);
if (child == -1 && errno == EINVAL) {
if (child == -1 && errno == EINVAL)
/* Fallback for Linux < 2.13 where CLONE_NEWPID and
CLONE_PARENT are not allowed together. */
flags &= ~CLONE_NEWPID;
child = clone(childEntry, stack + stackSize, flags, this);
}
if (child == -1 && (errno == EPERM || errno == EINVAL)) {
/* Some distros patch Linux to not allow unprivileged
* user namespaces. If we get EPERM or EINVAL, try
* without CLONE_NEWUSER and see if that works.
*/
flags &= ~CLONE_NEWUSER;
child = clone(childEntry, stack + stackSize, flags, this);
}
/* Otherwise exit with EPERM so we can handle this in the
parent. This is only done when sandbox-fallback is set
to true (the default). */
if (child == -1 && (errno == EPERM || errno == EINVAL) && settings.sandboxFallback)
_exit(1);
child = clone(childEntry, stack + stackSize, flags & ~CLONE_NEWPID, this);
if (child == -1) throw SysError("cloning builder process");
writeFull(builderOut.writeSide.get(), std::to_string(child) + "\n");
_exit(0);
}, options);
int res = helper.wait();
if (res != 0 && settings.sandboxFallback) {
useChroot = false;
tmpDirInSandbox = tmpDir;
goto fallback;
} else if (res != 0)
if (helper.wait() != 0)
throw Error("unable to start build process");
userNamespaceSync.readSide = -1;
@ -2410,14 +2335,14 @@ void DerivationGoal::startBuilder()
writeFile("/proc/" + std::to_string(pid) + "/gid_map",
(format("%d %d 1") % sandboxGid % hostGid).str());
/* Signal the builder that we've updated its user namespace. */
/* Signal the builder that we've updated its user
namespace. */
writeFull(userNamespaceSync.writeSide.get(), "1");
userNamespaceSync.writeSide = -1;
} else
#endif
{
fallback:
options.allowVfork = !buildUser && !drv->isBuiltin();
pid = startProcess([&]() {
runChild();
@ -4059,6 +3984,17 @@ void SubstitutionGoal::tryToRun()
return;
}
/* If the store path is already locked (probably by a
DerivationGoal), then put this goal to sleep. Note: we don't
acquire a lock here since that breaks addToStore(), so below we
handle an AlreadyLocked exception from addToStore(). The check
here is just an optimisation to prevent having to redo a
download due to a locked path. */
if (pathIsLockedByMe(worker.store.toRealPath(storePath))) {
worker.waitForAWhile(shared_from_this());
return;
}
maintainRunningSubstitutions = std::make_unique<MaintainCount<uint64_t>>(worker.runningSubstitutions);
worker.updateProgress();
@ -4098,6 +4034,12 @@ void SubstitutionGoal::finished()
try {
promise.get_future().get();
} catch (AlreadyLocked & e) {
/* Probably a DerivationGoal is already building this store
path. Sleep for a while and try again. */
state = &SubstitutionGoal::init;
worker.waitForAWhile(shared_from_this());
return;
} catch (std::exception & e) {
printError(e.what());


@ -36,6 +36,12 @@ Path BasicDerivation::findOutput(const string & id) const
}
bool BasicDerivation::substitutesAllowed() const
{
return get(env, "allowSubstitutes", "1") == "1";
}
bool BasicDerivation::isBuiltin() const
{
return string(builder, 0, 8) == "builtin:";


@ -56,6 +56,8 @@ struct BasicDerivation
the given derivation. */
Path findOutput(const string & id) const;
bool substitutesAllowed() const;
bool isBuiltin() const;
/* Return true iff this is a fixed-output derivation. */


@ -77,13 +77,13 @@ struct CurlDownloader : public Downloader
DownloadItem(CurlDownloader & downloader,
const DownloadRequest & request,
Callback<DownloadResult> && callback)
Callback<DownloadResult> callback)
: downloader(downloader)
, request(request)
, act(*logger, lvlTalkative, actDownload,
fmt(request.data ? "uploading '%s'" : "downloading '%s'", request.uri),
{request.uri}, request.parentAct)
, callback(std::move(callback))
, callback(callback)
, finalSink([this](const unsigned char * data, size_t len) {
if (this->request.dataCallback) {
writtenToSink += len;
@ -236,6 +236,8 @@ struct CurlDownloader : public Downloader
return ((DownloadItem *) userp)->readCallback(buffer, size, nitems);
}
long lowSpeedTimeout = 300;
void init()
{
if (!req) req = curl_easy_init();
@ -295,7 +297,7 @@ struct CurlDownloader : public Downloader
curl_easy_setopt(req, CURLOPT_CONNECTTIMEOUT, downloadSettings.connectTimeout.get());
curl_easy_setopt(req, CURLOPT_LOW_SPEED_LIMIT, 1L);
curl_easy_setopt(req, CURLOPT_LOW_SPEED_TIME, downloadSettings.stalledDownloadTimeout.get());
curl_easy_setopt(req, CURLOPT_LOW_SPEED_TIME, lowSpeedTimeout);
/* If no file exist in the specified path, curl continues to work
anyway as if netrc support was disabled. */
@ -342,9 +344,15 @@ struct CurlDownloader : public Downloader
(httpStatus == 200 || httpStatus == 201 || httpStatus == 204 || httpStatus == 206 || httpStatus == 304 || httpStatus == 226 /* FTP */ || httpStatus == 0 /* other protocol */))
{
result.cached = httpStatus == 304;
act.progress(result.bodySize, result.bodySize);
done = true;
callback(std::move(result));
try {
act.progress(result.bodySize, result.bodySize);
callback(std::move(result));
} catch (...) {
done = true;
callback.rethrow();
}
}
else {
@ -497,6 +505,17 @@ struct CurlDownloader : public Downloader
stopWorkerThread();
});
/* NINJATRAPPEUR_TOREMOVE: quick test */
auto curlmn = curl_multi_init();
#if LIBCURL_VERSION_NUM >= 0x072b00 // Multiplex requires >= 7.43.0
curl_multi_setopt(curlm, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);
#endif
#if LIBCURL_VERSION_NUM >= 0x071e00 // Max connections requires >= 7.30.0
curl_multi_setopt(curlm, CURLMOPT_MAX_TOTAL_CONNECTIONS,
downloadSettings.httpConnections.get());
#endif
/* =================== */
std::map<CURL *, std::shared_ptr<DownloadItem>> items;
bool quit = false;
@ -508,19 +527,19 @@ struct CurlDownloader : public Downloader
/* Let curl do its thing. */
int running;
CURLMcode mc = curl_multi_perform(curlm, &running);
CURLMcode mc = curl_multi_perform(curlmn, &running);
if (mc != CURLM_OK)
throw nix::Error(format("unexpected error from curl_multi_perform(): %s") % curl_multi_strerror(mc));
/* Set the promises of any finished requests. */
CURLMsg * msg;
int left;
while ((msg = curl_multi_info_read(curlm, &left))) {
while ((msg = curl_multi_info_read(curlmn, &left))) {
if (msg->msg == CURLMSG_DONE) {
auto i = items.find(msg->easy_handle);
assert(i != items.end());
i->second->finish(msg->data.result);
curl_multi_remove_handle(curlm, i->second->req);
curl_multi_remove_handle(curlmn, i->second->req);
i->second->active = false;
items.erase(i);
}
@ -538,7 +557,7 @@ struct CurlDownloader : public Downloader
? std::max(0, (int) std::chrono::duration_cast<std::chrono::milliseconds>(nextWakeup - std::chrono::steady_clock::now()).count())
: maxSleepTimeMs;
vomit("download thread waiting for %d ms", sleepTimeMs);
mc = curl_multi_wait(curlm, extraFDs, 1, sleepTimeMs, &numfds);
mc = curl_multi_wait(curlmn, extraFDs, 1, sleepTimeMs, &numfds);
if (mc != CURLM_OK)
throw nix::Error(format("unexpected error from curl_multi_wait(): %s") % curl_multi_strerror(mc));
@ -577,11 +596,14 @@ struct CurlDownloader : public Downloader
for (auto & item : incoming) {
debug("starting %s of %s", item->request.verb(), item->request.uri);
item->init();
curl_multi_add_handle(curlm, item->req);
curl_multi_add_handle(curlmn, item->req);
item->active = true;
items[item->req] = item;
}
}
/* NINJATRPPEUR_TODO test stuff, toremove */
if (curlmn) curl_multi_cleanup(curlmn);
/* END */
debug("download thread shutting down");
}
@ -665,7 +687,7 @@ struct CurlDownloader : public Downloader
return;
}
enqueueItem(std::make_shared<DownloadItem>(*this, request, std::move(callback)));
enqueueItem(std::make_shared<DownloadItem>(*this, request, callback));
}
};


@ -24,9 +24,6 @@ struct DownloadSettings : Config
Setting<unsigned long> connectTimeout{this, 0, "connect-timeout",
"Timeout for connecting to servers during downloads. 0 means use curl's builtin default."};
Setting<unsigned long> stalledDownloadTimeout{this, 300, "stalled-download-timeout",
"Timeout (in seconds) for receiving data from servers during download. Nix cancels idle downloads after this timeout's duration."};
Setting<unsigned int> tries{this, 5, "download-attempts",
"How often Nix will attempt to download a file before giving up."};
};
@ -91,8 +88,6 @@ class Store;
struct Downloader
{
virtual ~Downloader() { }
/* Enqueue a download request, returning a future to the result of
the download. The future may throw a DownloadError
exception. */


@ -19,8 +19,6 @@ public:
uint64_t narOffset = 0; // regular files only
};
virtual ~FSAccessor() { }
virtual Stat stat(const Path & path) = 0;
virtual StringSet readDirectory(const Path & path) = 0;


@ -29,7 +29,7 @@ static string gcRootsDir = "gcroots";
read. To be precise: when they try to create a new temporary root
file, they will block until the garbage collector has finished /
yielded the GC lock. */
AutoCloseFD LocalStore::openGCLock(LockType lockType)
int LocalStore::openGCLock(LockType lockType)
{
Path fnGCLock = (format("%1%/%2%")
% stateDir % gcLockName).str();
@ -49,7 +49,7 @@ AutoCloseFD LocalStore::openGCLock(LockType lockType)
process that can open the file for reading can DoS the
collector. */
return fdGCLock;
return fdGCLock.release();
}
@ -221,21 +221,25 @@ void LocalStore::findTempRoots(FDs & fds, Roots & tempRoots, bool censor)
//FDPtr fd(new AutoCloseFD(openLockFile(path, false)));
//if (*fd == -1) continue;
/* Try to acquire a write lock without blocking. This can
only succeed if the owning process has died. In that case
we don't care about its temporary roots. */
if (lockFile(fd->get(), ltWrite, false)) {
printError(format("removing stale temporary roots file '%1%'") % path);
unlink(path.c_str());
writeFull(fd->get(), "d");
continue;
}
if (path != fnTempRoots) {
/* Acquire a read lock. This will prevent the owning process
from upgrading to a write lock, therefore it will block in
addTempRoot(). */
debug(format("waiting for read lock on '%1%'") % path);
lockFile(fd->get(), ltRead, true);
/* Try to acquire a write lock without blocking. This can
only succeed if the owning process has died. In that case
we don't care about its temporary roots. */
if (lockFile(fd->get(), ltWrite, false)) {
printError(format("removing stale temporary roots file '%1%'") % path);
unlink(path.c_str());
writeFull(fd->get(), "d");
continue;
}
/* Acquire a read lock. This will prevent the owning process
from upgrading to a write lock, therefore it will block in
addTempRoot(). */
debug(format("waiting for read lock on '%1%'") % path);
lockFile(fd->get(), ltRead, true);
}
/* Read the entire file. */
string contents = readFile(fd->get());
@ -690,8 +694,9 @@ void LocalStore::removeUnusedLinks(const GCState & state)
throw SysError(format("statting '%1%'") % path);
if (st.st_nlink != 1) {
actualSize += st.st_size;
unsharedSize += (st.st_nlink - 1) * st.st_size;
unsigned long long size = st.st_blocks * 512ULL;
actualSize += size;
unsharedSize += (st.st_nlink - 1) * size;
continue;
}
@ -700,7 +705,7 @@ void LocalStore::removeUnusedLinks(const GCState & state)
if (unlink(path.c_str()) == -1)
throw SysError(format("deleting '%1%'") % path);
state.results.bytesFreed += st.st_size;
state.results.bytesFreed += st.st_blocks * 512ULL;
}
struct stat st;
@ -866,12 +871,7 @@ void LocalStore::collectGarbage(const GCOptions & options, GCResults & results)
void LocalStore::autoGC(bool sync)
{
static auto fakeFreeSpaceFile = getEnv("_NIX_TEST_FREE_SPACE_FILE", "");
auto getAvail = [this]() -> uint64_t {
if (!fakeFreeSpaceFile.empty())
return std::stoll(readFile(fakeFreeSpaceFile));
auto getAvail = [this]() {
struct statvfs st;
if (statvfs(realStoreDir.c_str(), &st))
throw SysError("getting filesystem info about '%s'", realStoreDir);
@ -892,7 +892,7 @@ void LocalStore::autoGC(bool sync)
auto now = std::chrono::steady_clock::now();
if (now < state->lastGCCheck + std::chrono::seconds(settings.minFreeCheckInterval)) return;
if (now < state->lastGCCheck + std::chrono::seconds(5)) return;
auto avail = getAvail();
@ -919,11 +919,11 @@ void LocalStore::autoGC(bool sync)
promise.set_value();
});
printInfo("running auto-GC to free %d bytes", settings.maxFree - avail);
GCOptions options;
options.maxFreed = settings.maxFree - avail;
printInfo("running auto-GC to free %d bytes", options.maxFreed);
GCResults results;
collectGarbage(options, results);


@ -209,9 +209,6 @@ public:
"The paths to make available inside the build sandbox.",
{"build-chroot-dirs", "build-sandbox-paths"}};
Setting<bool> sandboxFallback{this, true, "sandbox-fallback",
"Whether to disable sandboxing when the kernel doesn't allow it."};
Setting<PathSet> extraSandboxPaths{this, {}, "extra-sandbox-paths",
"Additional paths to make available inside the build sandbox.",
{"build-extra-chroot-dirs", "build-extra-sandbox-paths"}};
@ -318,9 +315,6 @@ public:
"pre-build-hook",
"A program to run just before a build to set derivation-specific build settings."};
Setting<std::string> postBuildHook{this, "", "post-build-hook",
"A program to run just after each succesful build."};
Setting<std::string> netrcFile{this, fmt("%s/%s", nixConfDir, "netrc"), "netrc-file",
"Path to the netrc file used to obtain usernames/passwords for downloads."};
@ -348,9 +342,6 @@ public:
Setting<uint64_t> maxFree{this, std::numeric_limits<uint64_t>::max(), "max-free",
"Stop deleting garbage when free disk space is above the specified amount."};
Setting<uint64_t> minFreeCheckInterval{this, 5, "min-free-check-interval",
"Number of seconds between checking free disk space."};
Setting<Paths> pluginFiles{this, {}, "plugin-files",
"Plugins to dynamically load at nix initialization time."};
};

View File

@ -131,25 +131,23 @@ protected:
}
void getFile(const std::string & path,
Callback<std::shared_ptr<std::string>> callback) noexcept override
Callback<std::shared_ptr<std::string>> callback) override
{
checkEnabled();
auto request(makeRequest(path));
auto callbackPtr = std::make_shared<decltype(callback)>(std::move(callback));
getDownloader()->enqueueDownload(request,
{[callbackPtr, this](std::future<DownloadResult> result) {
{[callback, this](std::future<DownloadResult> result) {
try {
(*callbackPtr)(result.get().data);
callback(result.get().data);
} catch (DownloadError & e) {
if (e.error == Downloader::NotFound || e.error == Downloader::Forbidden)
return (*callbackPtr)(std::shared_ptr<std::string>());
return callback(std::shared_ptr<std::string>());
maybeDisable();
callbackPtr->rethrow();
callback.rethrow();
} catch (...) {
callbackPtr->rethrow();
callback.rethrow();
}
}});
}
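
The version being removed above wraps the callback in a std::shared_ptr so that the copyable lambda handed to the downloader can own a move-only, call-once object. A minimal sketch of that pattern with illustrative names (not Nix's API):

#include <functional>
#include <iostream>
#include <memory>
#include <string>

template<typename T>
struct OneShot
{
    std::function<void(T)> fun;
    explicit OneShot(std::function<void(T)> f) : fun(std::move(f)) { }
    OneShot(const OneShot &) = delete;   // move-only, like the Callback above
    OneShot(OneShot &&) = default;
    void operator()(T t) { fun(std::move(t)); }
};

// Stand-in for enqueueDownload(): stores and invokes a copyable std::function.
void enqueue(std::function<void(std::string)> job) { job("response body"); }

int main()
{
    OneShot<std::string> cb{[](std::string s) { std::cout << s << "\n"; }};
    // Wrap the move-only callback so the copyable lambda can capture it.
    auto cbPtr = std::make_shared<decltype(cb)>(std::move(cb));
    enqueue([cbPtr](std::string result) { (*cbPtr)(std::move(result)); });
}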

View File

@ -88,7 +88,7 @@ struct LegacySSHStore : public Store
}
void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept override
Callback<std::shared_ptr<ValidPathInfo>> callback) override
{
try {
auto conn(connections->get());

View File

@ -63,8 +63,6 @@ protected:
void LocalBinaryCacheStore::init()
{
createDirs(binaryCacheDir + "/nar");
if (writeDebugInfo)
createDirs(binaryCacheDir + "/debuginfo");
BinaryCacheStore::init();
}

View File

@ -629,7 +629,7 @@ uint64_t LocalStore::addValidPath(State & state,
void LocalStore::queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept
Callback<std::shared_ptr<ValidPathInfo>> callback)
{
try {
auto info = std::make_shared<ValidPathInfo>();
@ -879,8 +879,8 @@ void LocalStore::querySubstitutablePathInfos(const PathSet & paths,
info->references,
narInfo ? narInfo->fileSize : 0,
info->narSize};
} catch (InvalidPath &) {
} catch (SubstituterDisabled &) {
} catch (InvalidPath) {
} catch (SubstituterDisabled) {
} catch (Error & e) {
if (settings.tryFallback)
printError(e.what());
@ -1210,8 +1210,7 @@ bool LocalStore::verifyStore(bool checkContents, RepairFlag repair)
bool errors = false;
/* Acquire the global GC lock to get a consistent snapshot of
existing and valid paths. */
/* Acquire the global GC lock to prevent a garbage collection. */
AutoCloseFD fdGCLock = openGCLock(ltWrite);
PathSet store;
@ -1222,11 +1221,13 @@ bool LocalStore::verifyStore(bool checkContents, RepairFlag repair)
PathSet validPaths2 = queryAllValidPaths(), validPaths, done;
fdGCLock = -1;
for (auto & i : validPaths2)
verifyPath(i, store, done, validPaths, repair, errors);
/* Release the GC lock so that checking content hashes (which can
take ages) doesn't block the GC or builds. */
fdGCLock = -1;
/* Optionally, check the content hashes (slow). */
if (checkContents) {
printInfo("checking hashes...");

View File

@ -127,7 +127,7 @@ public:
PathSet queryAllValidPaths() override;
void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept override;
Callback<std::shared_ptr<ValidPathInfo>> callback) override;
void queryReferrers(const Path & path, PathSet & referrers) override;
@ -263,7 +263,7 @@ private:
bool isActiveTempFile(const GCState & state,
const Path & path, const string & suffix);
AutoCloseFD openGCLock(LockType lockType);
int openGCLock(LockType lockType);
void findRoots(const Path & path, unsigned char type, Roots & roots);

View File

@ -1,5 +1,4 @@
#include "derivations.hh"
#include "parsed-derivations.hh"
#include "globals.hh"
#include "local-store.hh"
#include "store-api.hh"
@ -190,7 +189,6 @@ void Store::queryMissing(const PathSet & targets,
}
Derivation drv = derivationFromPath(i2.first);
ParsedDerivation parsedDrv(i2.first, drv);
PathSet invalid;
for (auto & j : drv.outputs)
@ -199,7 +197,7 @@ void Store::queryMissing(const PathSet & targets,
invalid.insert(j.second.path);
if (invalid.empty()) return;
if (settings.useSubstitutes && parsedDrv.substitutesAllowed()) {
if (settings.useSubstitutes && drv.substitutesAllowed()) {
auto drvState = make_ref<Sync<DrvState>>(DrvState(invalid.size()));
for (auto & output : invalid)
pool.enqueue(std::bind(checkOutput, i2.first, make_ref<Derivation>(drv), output, drvState));

View File

@ -108,9 +108,4 @@ bool ParsedDerivation::willBuildLocally() const
return getBoolAttr("preferLocalBuild") && canBuildLocally();
}
bool ParsedDerivation::substitutesAllowed() const
{
return getBoolAttr("allowSubstitutes", true);
}
}

View File

@ -30,8 +30,6 @@ public:
bool canBuildLocally() const;
bool willBuildLocally() const;
bool substitutesAllowed() const;
};
}

View File

@ -5,10 +5,9 @@
#include <cerrno>
#include <cstdlib>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/file.h>
#include <fcntl.h>
namespace nix {
@ -41,14 +40,17 @@ void deleteLockFile(const Path & path, int fd)
bool lockFile(int fd, LockType lockType, bool wait)
{
int type;
if (lockType == ltRead) type = LOCK_SH;
else if (lockType == ltWrite) type = LOCK_EX;
else if (lockType == ltNone) type = LOCK_UN;
struct flock lock;
if (lockType == ltRead) lock.l_type = F_RDLCK;
else if (lockType == ltWrite) lock.l_type = F_WRLCK;
else if (lockType == ltNone) lock.l_type = F_UNLCK;
else abort();
lock.l_whence = SEEK_SET;
lock.l_start = 0;
lock.l_len = 0; /* entire file */
if (wait) {
while (flock(fd, type) != 0) {
while (fcntl(fd, F_SETLKW, &lock) != 0) {
checkInterrupt();
if (errno != EINTR)
throw SysError(format("acquiring/releasing lock"));
@ -56,9 +58,9 @@ bool lockFile(int fd, LockType lockType, bool wait)
return false;
}
} else {
while (flock(fd, type | LOCK_NB) != 0) {
while (fcntl(fd, F_SETLK, &lock) != 0) {
checkInterrupt();
if (errno == EWOULDBLOCK) return false;
if (errno == EACCES || errno == EAGAIN) return false;
if (errno != EINTR)
throw SysError(format("acquiring/releasing lock"));
}
@ -68,6 +70,14 @@ bool lockFile(int fd, LockType lockType, bool wait)
}
/* This enables us to check whether we are already holding a lock on
a file ourselves. POSIX locks (fcntl) suck in this respect: if we
close a descriptor, the previous lock will be closed as well. And
there is no way to query whether we already have a lock (F_GETLK
only works on locks held by other processes). */
static Sync<StringSet> lockedPaths_;
PathLocks::PathLocks()
: deletePaths(false)
{
@ -81,7 +91,7 @@ PathLocks::PathLocks(const PathSet & paths, const string & waitMsg)
}
bool PathLocks::lockPaths(const PathSet & paths,
bool PathLocks::lockPaths(const PathSet & _paths,
const string & waitMsg, bool wait)
{
assert(fds.empty());
@ -89,54 +99,75 @@ bool PathLocks::lockPaths(const PathSet & paths,
/* Note that `fds' is built incrementally so that the destructor
will only release those locks that we have already acquired. */
/* Acquire the lock for each path in sorted order. This ensures
that locks are always acquired in the same order, thus
preventing deadlocks. */
/* Sort the paths. This assures that locks are always acquired in
the same order, thus preventing deadlocks. */
Paths paths(_paths.begin(), _paths.end());
paths.sort();
/* Acquire the lock for each path. */
for (auto & path : paths) {
checkInterrupt();
Path lockPath = path + ".lock";
debug(format("locking path '%1%'") % path);
AutoCloseFD fd;
while (1) {
/* Open/create the lock file. */
fd = openLockFile(lockPath, true);
/* Acquire an exclusive lock. */
if (!lockFile(fd.get(), ltWrite, false)) {
if (wait) {
if (waitMsg != "") printError(waitMsg);
lockFile(fd.get(), ltWrite, true);
} else {
/* Failed to lock this path; release all other
locks. */
unlock();
return false;
}
{
auto lockedPaths(lockedPaths_.lock());
if (lockedPaths->count(lockPath)) {
if (!wait) return false;
throw AlreadyLocked("deadlock: trying to re-acquire self-held lock '%s'", lockPath);
}
debug(format("lock acquired on '%1%'") % lockPath);
/* Check that the lock file hasn't become stale (i.e.,
hasn't been unlinked). */
struct stat st;
if (fstat(fd.get(), &st) == -1)
throw SysError(format("statting lock file '%1%'") % lockPath);
if (st.st_size != 0)
/* This lock file has been unlinked, so we're holding
a lock on a deleted file. This means that other
processes may create and acquire a lock on
`lockPath', and proceed. So we must retry. */
debug(format("open lock file '%1%' has become stale") % lockPath);
else
break;
lockedPaths->insert(lockPath);
}
try {
AutoCloseFD fd;
while (1) {
/* Open/create the lock file. */
fd = openLockFile(lockPath, true);
/* Acquire an exclusive lock. */
if (!lockFile(fd.get(), ltWrite, false)) {
if (wait) {
if (waitMsg != "") printError(waitMsg);
lockFile(fd.get(), ltWrite, true);
} else {
/* Failed to lock this path; release all other
locks. */
unlock();
lockedPaths_.lock()->erase(lockPath);
return false;
}
}
debug(format("lock acquired on '%1%'") % lockPath);
/* Check that the lock file hasn't become stale (i.e.,
hasn't been unlinked). */
struct stat st;
if (fstat(fd.get(), &st) == -1)
throw SysError(format("statting lock file '%1%'") % lockPath);
if (st.st_size != 0)
/* This lock file has been unlinked, so we're holding
a lock on a deleted file. This means that other
processes may create and acquire a lock on
`lockPath', and proceed. So we must retry. */
debug(format("open lock file '%1%' has become stale") % lockPath);
else
break;
}
/* Use borrow so that the descriptor isn't closed. */
fds.push_back(FDPair(fd.release(), lockPath));
} catch (...) {
lockedPaths_.lock()->erase(lockPath);
throw;
}
/* Use borrow so that the descriptor isn't closed. */
fds.push_back(FDPair(fd.release(), lockPath));
}
return true;
@ -158,6 +189,8 @@ void PathLocks::unlock()
for (auto & i : fds) {
if (deletePaths) deleteLockFile(i.second, i.first);
lockedPaths_.lock()->erase(i.second);
if (close(i.first) == -1)
printError(
format("error (ignored): cannot close lock file on '%1%'") % i.second);
@ -175,4 +208,11 @@ void PathLocks::setDeletion(bool deletePaths)
}
bool pathIsLockedByMe(const Path & path)
{
Path lockPath = path + ".lock";
return lockedPaths_.lock()->count(lockPath);
}
}
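
The comment introduced above explains why an in-process lockedPaths_ registry is needed at all: with POSIX record locks, closing any descriptor for a file releases the process's lock on it, and F_GETLK cannot report locks the process itself holds. A minimal sketch of that pitfall (illustrative demo, not Nix code):

#include <fcntl.h>
#include <unistd.h>
#include <cassert>

static bool tryWriteLock(int fd)
{
    struct flock l{};
    l.l_type = F_WRLCK; l.l_whence = SEEK_SET; l.l_start = 0; l.l_len = 0;
    return fcntl(fd, F_SETLK, &l) == 0;
}

int main()
{
    int fd1 = open("/tmp/pathlocks-demo.lock", O_RDWR | O_CREAT, 0600);
    int fd2 = open("/tmp/pathlocks-demo.lock", O_RDWR);  // second fd, same file
    assert(tryWriteLock(fd1));   // this process now holds the record lock
    close(fd2);                  // releases the lock too, although fd1 stays open
    // From here on another process can acquire the lock, even though this
    // process still believes it holds it through fd1 -- hence the registry.
    close(fd1);
    return 0;
}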

View File

@ -16,6 +16,8 @@ enum LockType { ltRead, ltWrite, ltNone };
bool lockFile(int fd, LockType lockType, bool wait);
MakeError(AlreadyLocked, Error);
class PathLocks
{
private:
@ -35,4 +37,6 @@ public:
void setDeletion(bool deletePaths);
};
bool pathIsLockedByMe(const Path & path);
}

View File

@ -191,14 +191,6 @@ void RemoteStore::setOptions(Connection & conn)
if (GET_PROTOCOL_MINOR(conn.daemonVersion) >= 12) {
std::map<std::string, Config::SettingInfo> overrides;
globalConfig.getSettings(overrides, true);
overrides.erase(settings.keepFailed.name);
overrides.erase(settings.keepGoing.name);
overrides.erase(settings.tryFallback.name);
overrides.erase(settings.maxBuildJobs.name);
overrides.erase(settings.maxSilentTime.name);
overrides.erase(settings.buildCores.name);
overrides.erase(settings.useSubstitutes.name);
overrides.erase(settings.showTrace.name);
conn.to << overrides.size();
for (auto & i : overrides)
conn.to << i.first << i.second.value;
@ -229,7 +221,7 @@ struct ConnectionHandle
~ConnectionHandle()
{
if (!daemonException && std::uncaught_exceptions()) {
if (!daemonException && std::uncaught_exception()) {
handle.markBad();
debug("closing daemon connection because of an exception");
}
@ -350,7 +342,7 @@ void RemoteStore::querySubstitutablePathInfos(const PathSet & paths,
void RemoteStore::queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept
Callback<std::shared_ptr<ValidPathInfo>> callback)
{
try {
std::shared_ptr<ValidPathInfo> info;
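
The ConnectionHandle destructor above checks whether it is running because of stack unwinding and, if so, marks the pooled daemon connection as bad instead of returning it for reuse. A minimal sketch of that pattern using C++17's std::uncaught_exceptions(), which this hunk swaps for the older singular form; names are illustrative.

#include <exception>
#include <iostream>
#include <stdexcept>

struct ConnectionGuard
{
    ~ConnectionGuard()
    {
        if (std::uncaught_exceptions() > 0)
            std::cout << "closing daemon connection because of an exception\n";
        else
            std::cout << "returning connection to the pool\n";
    }
};

int main()
{
    try {
        ConnectionGuard guard;
        throw std::runtime_error("daemon error");   // guard's destructor runs during unwinding
    } catch (...) { }

    ConnectionGuard guard;                          // normal destruction this time
}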

View File

@ -41,7 +41,7 @@ public:
PathSet queryAllValidPaths() override;
void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept override;
Callback<std::shared_ptr<ValidPathInfo>> callback) override;
void queryReferrers(const Path & path, PathSet & referrers) override;

View File

@ -97,10 +97,6 @@ void checkStoreName(const string & name)
reasons (e.g., "." and ".."). */
if (string(name, 0, 1) == ".")
throw Error(baseError % "it is illegal to start the name with a period");
/* Disallow names longer than 211 characters. ext4's max is 256,
but we need extra space for the hash and .chroot extensions. */
if (name.length() > 211)
throw Error(baseError % "name must be less than 212 characters");
for (auto & i : name)
if (!((i >= 'A' && i <= 'Z') ||
(i >= 'a' && i <= 'z') ||
@ -329,14 +325,13 @@ ref<const ValidPathInfo> Store::queryPathInfo(const Path & storePath)
void Store::queryPathInfo(const Path & storePath,
Callback<ref<ValidPathInfo>> callback) noexcept
Callback<ref<ValidPathInfo>> callback)
{
std::string hashPart;
assertStorePath(storePath);
auto hashPart = storePathToHash(storePath);
try {
assertStorePath(storePath);
hashPart = storePathToHash(storePath);
{
auto res = state.lock()->pathInfoCache.get(hashPart);
@ -366,10 +361,8 @@ void Store::queryPathInfo(const Path & storePath,
} catch (...) { return callback.rethrow(); }
auto callbackPtr = std::make_shared<decltype(callback)>(std::move(callback));
queryPathInfoUncached(storePath,
{[this, storePath, hashPart, callbackPtr](std::future<std::shared_ptr<ValidPathInfo>> fut) {
{[this, storePath, hashPart, callback](std::future<std::shared_ptr<ValidPathInfo>> fut) {
try {
auto info = fut.get();
@ -389,8 +382,8 @@ void Store::queryPathInfo(const Path & storePath,
throw InvalidPath("path '%s' is not valid", storePath);
}
(*callbackPtr)(ref<ValidPathInfo>(info));
} catch (...) { callbackPtr->rethrow(); }
callback(ref<ValidPathInfo>(info));
} catch (...) { callback.rethrow(); }
}});
}
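
queryPathInfo above first consults an in-memory cache keyed by the hash part of the store path and only falls back to the uncached query on a miss. A deliberately synchronous sketch of that lookup order, with stand-in types rather than Nix's:

#include <functional>
#include <iostream>
#include <map>
#include <string>

using PathInfo = std::string;                    // stand-in for ValidPathInfo

std::map<std::string, PathInfo> pathInfoCache;   // keyed by the hash part

void queryPathInfo(const std::string & hashPart,
    std::function<void(const PathInfo &)> callback,
    std::function<PathInfo(const std::string &)> queryUncached)
{
    auto hit = pathInfoCache.find(hashPart);
    if (hit != pathInfoCache.end()) { callback(hit->second); return; }
    PathInfo info = queryUncached(hashPart);     // slow path: ask the store
    pathInfoCache.emplace(hashPart, info);       // populate the cache
    callback(info);
}

int main()
{
    auto slow  = [](const std::string & h) { return PathInfo("info for " + h); };
    auto print = [](const PathInfo & i) { std::cout << i << "\n"; };
    queryPathInfo("abc123", print, slow);        // miss: queries and caches
    queryPathInfo("abc123", print, slow);        // hit: served from the cache
}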

View File

@ -360,12 +360,12 @@ public:
/* Asynchronous version of queryPathInfo(). */
void queryPathInfo(const Path & path,
Callback<ref<ValidPathInfo>> callback) noexcept;
Callback<ref<ValidPathInfo>> callback);
protected:
virtual void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept = 0;
Callback<std::shared_ptr<ValidPathInfo>> callback) = 0;
public:

View File

@ -170,7 +170,7 @@ public:
~JSONPlaceholder()
{
assert(!first || std::uncaught_exceptions());
assert(!first || std::uncaught_exception());
}
template<typename T>

View File

@ -26,7 +26,6 @@ typedef enum {
actVerifyPaths = 107,
actSubstitute = 108,
actQueryPathInfo = 109,
actPostBuildHook = 110,
} ActivityType;
typedef enum {
@ -37,7 +36,6 @@ typedef enum {
resSetPhase = 104,
resProgress = 105,
resSetExpected = 106,
resPostBuildLogLine = 107,
} ResultType;
typedef uint64_t ActivityId;

View File

@ -179,36 +179,6 @@ struct TeeSource : Source
}
};
/* A reader that consumes the original Source until 'size'. */
struct SizedSource : Source
{
Source & orig;
size_t remain;
SizedSource(Source & orig, size_t size)
: orig(orig), remain(size) { }
size_t read(unsigned char * data, size_t len)
{
if (this->remain <= 0) {
throw EndOfFile("sized: unexpected end-of-file");
}
len = std::min(len, this->remain);
size_t n = this->orig.read(data, len);
this->remain -= n;
return n;
}
/* Consume the original source until no remaining data is left to consume. */
size_t drainAll()
{
std::vector<unsigned char> buf(8192);
size_t sum = 0;
while (this->remain > 0) {
size_t n = read(buf.data(), buf.size());
sum += n;
}
return sum;
}
};
/* Convert a function into a sink. */
struct LambdaSink : Sink
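
SizedSource, removed above, clamps reads to a fixed byte budget and lets the consumer drain whatever the sender transmitted but was never read, keeping a framed protocol in sync (this is how opServe uses it further down). A minimal sketch of the same idea over std::istream rather than Nix's Source type:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <istream>
#include <sstream>
#include <stdexcept>
#include <vector>

struct BoundedReader
{
    std::istream & in;
    size_t remain;
    BoundedReader(std::istream & in, size_t size) : in(in), remain(size) { }

    size_t read(char * data, size_t len)
    {
        if (remain == 0) throw std::runtime_error("sized: unexpected end-of-file");
        len = std::min(len, remain);
        in.read(data, len);
        size_t n = in.gcount();
        if (n == 0) throw std::runtime_error("sized: unexpected end-of-file");
        remain -= n;
        return n;
    }

    // Discard whatever the sender transmitted but nobody consumed.
    size_t drainAll()
    {
        std::vector<char> buf(8192);
        size_t sum = 0;
        while (remain > 0) sum += read(buf.data(), buf.size());
        return sum;
    }
};

int main()
{
    std::istringstream framed("hello world");   // pretend this is an 11-byte frame
    BoundedReader reader(framed, 11);
    char c;
    reader.read(&c, 1);                          // the consumer reads only one byte
    std::cout << reader.drainAll() << " bytes drained\n";
}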

View File

@ -84,15 +84,6 @@ void clearEnv()
unsetenv(name.first.c_str());
}
void replaceEnv(std::map<std::string, std::string> newEnv)
{
clearEnv();
for (auto newEnvVar : newEnv)
{
setenv(newEnvVar.first.c_str(), newEnvVar.second.c_str(), 1);
}
}
Path absPath(Path path, Path dir)
{
@ -397,7 +388,7 @@ static void _deletePath(const Path & path, unsigned long long & bytesFreed)
}
if (!S_ISDIR(st.st_mode) && st.st_nlink == 1)
bytesFreed += st.st_size;
bytesFreed += st.st_blocks * 512;
if (S_ISDIR(st.st_mode)) {
/* Make the directory accessible. */
@ -1028,22 +1019,10 @@ void runProgram2(const RunOptions & options)
if (options.standardOut) out.create();
if (source) in.create();
ProcessOptions processOptions;
// vfork implies that the environment of the main process and the fork will
// be shared (technically this is undefined, but in practice that's the
// case), so we can't use it if we alter the environment
if (options.environment)
processOptions.allowVfork = false;
/* Fork. */
Pid pid = startProcess([&]() {
if (options.environment)
replaceEnv(*options.environment);
if (options.standardOut && dup2(out.writeSide.get(), STDOUT_FILENO) == -1)
throw SysError("dupping stdout");
if (options.mergeStderrToStdout)
if (dup2(STDOUT_FILENO, STDERR_FILENO) == -1)
throw SysError("cannot dup stdout into stderr");
if (source && dup2(in.readSide.get(), STDIN_FILENO) == -1)
throw SysError("dupping stdin");
@ -1068,7 +1047,7 @@ void runProgram2(const RunOptions & options)
execv(options.program.c_str(), stringsToCharPtrs(args_).data());
throw SysError("executing '%1%'", options.program);
}, processOptions);
});
out.writeSide = -1;
@ -1169,7 +1148,7 @@ void _interrupted()
/* Block user interrupts while an exception is being handled.
Throwing an exception while another exception is being handled
kills the program! */
if (!interruptThrown && !std::uncaught_exceptions()) {
if (!interruptThrown && !std::uncaught_exception()) {
interruptThrown = true;
throw Interrupted("interrupted by the user");
}
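
The runProgram2 change above removes the option to replace the child's environment, along with the note that vfork() cannot be used when the child mutates the environment it would share with its parent. A minimal sketch of the fork-then-replace-environment pattern; clearenv() is assumed available (glibc), and the program executed is arbitrary.

#include <cstdlib>
#include <map>
#include <stdexcept>
#include <string>
#include <sys/wait.h>
#include <unistd.h>

static void replaceEnv(const std::map<std::string, std::string> & newEnv)
{
    clearenv();                                  // wipe the inherited environment
    for (auto & var : newEnv)
        setenv(var.first.c_str(), var.second.c_str(), 1);
}

int main()
{
    pid_t pid = fork();                          // plain fork, never vfork here
    if (pid == -1) throw std::runtime_error("fork failed");
    if (pid == 0) {                              // child: private copy of the environment
        replaceEnv({{"PATH", "/usr/bin:/bin"}, {"GREETING", "hello"}});
        execlp("env", "env", (char *) nullptr);  // prints the replaced environment
        _exit(127);                              // only reached if exec failed
    }
    int status;
    waitpid(pid, &status, 0);
}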

View File

@ -270,14 +270,12 @@ struct RunOptions
std::optional<uid_t> uid;
std::optional<uid_t> gid;
std::optional<Path> chdir;
std::optional<std::map<std::string, std::string>> environment;
Path program;
bool searchPath = true;
Strings args;
std::optional<std::string> input;
Source * standardIn = nullptr;
Sink * standardOut = nullptr;
bool mergeStderrToStdout = false;
bool _killStderr = false;
RunOptions(const Path & program, const Strings & args)
@ -445,34 +443,21 @@ string get(const T & map, const string & key, const string & def = "")
type T or an exception. (We abuse std::future<T> to pass the value or
exception.) */
template<typename T>
class Callback
struct Callback
{
std::function<void(std::future<T>)> fun;
std::atomic_flag done = ATOMIC_FLAG_INIT;
public:
Callback(std::function<void(std::future<T>)> fun) : fun(fun) { }
Callback(Callback && callback) : fun(std::move(callback.fun))
void operator()(T && t) const
{
auto prev = callback.done.test_and_set();
if (prev) done.test_and_set();
}
void operator()(T && t) noexcept
{
auto prev = done.test_and_set();
assert(!prev);
std::promise<T> promise;
promise.set_value(std::move(t));
fun(promise.get_future());
}
void rethrow(const std::exception_ptr & exc = std::current_exception()) noexcept
void rethrow(const std::exception_ptr & exc = std::current_exception()) const
{
auto prev = done.test_and_set();
assert(!prev);
std::promise<T> promise;
promise.set_exception(exc);
fun(promise.get_future());

View File

@ -280,7 +280,7 @@ static void _main(int argc, char * * argv)
auto absolute = i;
try {
absolute = canonPath(absPath(i), true);
} catch (Error & e) {};
} catch (Error e) {};
if (store->isStorePath(absolute) && std::regex_match(absolute, std::regex(".*\\.drv(!.*)?")))
drvs.push_back(DrvInfo(*state, store, absolute));
else

View File

@ -950,16 +950,8 @@ static void opServe(Strings opFlags, Strings opArgs)
info.sigs = readStrings<StringSet>(in);
in >> info.ca;
if (info.narSize == 0) {
throw Error("narInfo is too old and missing the narSize field");
}
SizedSource sizedSource(in, info.narSize);
store->addToStore(info, sizedSource, NoRepair, NoCheckSigs);
// consume all the data that has been sent before continuing.
sizedSource.drainAll();
// FIXME: race if addToStore doesn't read source?
store->addToStore(info, in, NoRepair, NoCheckSigs);
out << 1; // indicate success

View File

@ -55,7 +55,7 @@ struct CmdEdit : InstallableCommand
int lineno;
try {
lineno = std::stoi(std::string(pos, colon + 1));
} catch (std::invalid_argument & e) {
} catch (std::invalid_argument e) {
throw Error("cannot parse line number '%s'", pos);
}

View File

@ -170,14 +170,6 @@ public:
name, sub);
}
if (type == actPostBuildHook) {
auto name = storePathToName(getS(fields, 0));
if (hasSuffix(name, ".drv"))
name.resize(name.size() - 4);
i->s = fmt("post-build " ANSI_BOLD "%s" ANSI_NORMAL, name);
i->name = DrvName(name).name;
}
if (type == actQueryPathInfo) {
auto name = storePathToName(getS(fields, 0));
i->s = fmt("querying " ANSI_BOLD "%s" ANSI_NORMAL " on %s", name, getS(fields, 1));
@ -236,18 +228,14 @@ public:
update(*state);
}
else if (type == resBuildLogLine || type == resPostBuildLogLine) {
else if (type == resBuildLogLine) {
auto lastLine = trim(getS(fields, 0));
if (!lastLine.empty()) {
auto i = state->its.find(act);
assert(i != state->its.end());
ActInfo info = *i->second;
if (printBuildLogs) {
auto suffix = "> ";
if (type == resPostBuildLogLine) {
suffix = " (post)> ";
}
log(*state, lvlInfo, ANSI_FAINT + info.name.value_or("unnamed") + suffix + ANSI_NORMAL + lastLine);
log(*state, lvlInfo, ANSI_FAINT + info.name.value_or("unnamed") + "> " + ANSI_NORMAL + lastLine);
} else {
state->activities.erase(i->second);
info.lastLine = lastLine;

View File

@ -199,10 +199,7 @@ void chrootHelper(int argc, char * * argv)
uid_t gid = getgid();
if (unshare(CLONE_NEWUSER | CLONE_NEWNS) == -1)
/* Try with just CLONE_NEWNS in case user namespaces are
specifically disabled. */
if (unshare(CLONE_NEWNS) == -1)
throw SysError("setting up a private mount namespace");
throw SysError("setting up a private mount namespace");
/* Bind-mount realStoreDir on /nix/store. If the latter mount
point doesn't already exist, we have to create a chroot

View File

@ -17,7 +17,6 @@ let {
builder = ./dependencies.builder0.sh + "/FOOBAR/../.";
input1 = input1 + "/.";
input2 = "${input2}/.";
input1_drv = input1;
meta.description = "Random test package";
};

View File

@ -1,85 +0,0 @@
source common.sh
set +x
expect_trace() {
expr="$1"
expect="$2"
actual=$(
nix-instantiate \
--trace-function-calls \
--expr "$expr" 2>&1 \
| grep "function-trace" \
| sed -e 's/ [0-9]*$//'
);
echo -n "Tracing expression '$expr'"
set +e
msg=$(diff -swB \
<(echo "$expect") \
<(echo "$actual")
);
result=$?
set -e
if [ $result -eq 0 ]; then
echo " ok."
else
echo " failed. difference:"
echo "$msg"
return $result
fi
}
# failure inside a tryEval
expect_trace 'builtins.tryEval (throw "example")' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace entered (string):1:19 at
function-trace exited (string):1:19 at
function-trace exited (string):1:1 at
"
# Missing argument to a formal function
expect_trace '({ x }: x) { }' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
"
# Too many arguments to a formal function
expect_trace '({ x }: x) { x = "x"; y = "y"; }' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
"
# Not enough arguments to a lambda
expect_trace '(x: y: x + y) 1' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
"
# Too many arguments to a lambda
expect_trace '(x: x) 1 2' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
"
# Not a function
expect_trace '1 2' "
function-trace entered undefined position at
function-trace exited undefined position at
function-trace entered (string):1:1 at
function-trace exited (string):1:1 at
"
set -e

View File

@ -1,70 +0,0 @@
source common.sh
clearStore
garbage1=$(nix add-to-store --name garbage1 ./nar-access.sh)
garbage2=$(nix add-to-store --name garbage2 ./nar-access.sh)
garbage3=$(nix add-to-store --name garbage3 ./nar-access.sh)
ls -l $garbage3
POSIXLY_CORRECT=1 du $garbage3
fake_free=$TEST_ROOT/fake-free
export _NIX_TEST_FREE_SPACE_FILE=$fake_free
echo 1100 > $fake_free
expr=$(cat <<EOF
with import ./config.nix; mkDerivation {
name = "gc-A";
buildCommand = ''
set -x
[[ \$(ls \$NIX_STORE/*-garbage? | wc -l) = 3 ]]
mkdir \$out
echo foo > \$out/bar
echo 1...
sleep 2
echo 200 > ${fake_free}.tmp1
mv ${fake_free}.tmp1 $fake_free
echo 2...
sleep 2
echo 3...
sleep 2
echo 4...
[[ \$(ls \$NIX_STORE/*-garbage? | wc -l) = 1 ]]
'';
}
EOF
)
expr2=$(cat <<EOF
with import ./config.nix; mkDerivation {
name = "gc-B";
buildCommand = ''
set -x
mkdir \$out
echo foo > \$out/bar
echo 1...
sleep 2
echo 200 > ${fake_free}.tmp2
mv ${fake_free}.tmp2 $fake_free
echo 2...
sleep 2
echo 3...
sleep 2
echo 4...
'';
}
EOF
)
nix build -v -o $TEST_ROOT/result-A -L "($expr)" \
--min-free 1000 --max-free 2000 --min-free-check-interval 1 &
pid=$!
nix build -v -o $TEST_ROOT/result-B -L "($expr2)" \
--min-free 1000 --max-free 2000 --min-free-check-interval 1
wait "$pid"
[[ foo = $(cat $TEST_ROOT/result-A/bar) ]]
[[ foo = $(cat $TEST_ROOT/result-B/bar) ]]

View File

@ -3,9 +3,7 @@ check:
nix_tests = \
init.sh hash.sh lang.sh add.sh simple.sh dependencies.sh \
gc.sh \
gc-concurrent.sh \
gc-auto.sh \
gc.sh gc-concurrent.sh \
referrers.sh user-envs.sh logging.sh nix-build.sh misc.sh fixed.sh \
gc-runtime.sh check-refs.sh filter-source.sh \
remote-store.sh export.sh export-graph.sh \
@ -28,9 +26,7 @@ nix_tests = \
check.sh \
plugins.sh \
search.sh \
nix-copy-ssh.sh \
post-hook.sh \
function-trace.sh
nix-copy-ssh.sh
# parallel.sh
install-tests += $(foreach x, $(nix_tests), tests/$(x))

View File

@ -1,15 +0,0 @@
source common.sh
clearStore
export REMOTE_STORE=$TEST_ROOT/remote_store
# Build the dependencies and push them to the remote store
nix-build -o $TEST_ROOT/result dependencies.nix --post-build-hook $PWD/push-to-store.sh
clearStore
# Ensure that the remote store contains both the runtime and build-time
# closure of what we've just built
nix copy --from "$REMOTE_STORE" --no-require-sigs -f dependencies.nix
nix copy --from "$REMOTE_STORE" --no-require-sigs -f dependencies.nix input1_drv

View File

@ -1,4 +0,0 @@
#!/bin/sh
echo Pushing "$@" to "$REMOTE_STORE"
printf "%s" "$OUT_PATHS" | xargs -d: nix copy --to "$REMOTE_STORE" --no-require-sigs