Compare commits

...

344 Commits

Author SHA1 Message Date
Eelco Dolstra 2ba67da053
Merge pull request #3332 from Calvin-L/patch-1
Document that autoconf is a dependency
2020-02-19 13:02:35 +01:00
Eelco Dolstra 2a14c28669
Merge pull request #3357 from carlosdagos/pure-nix-shell-proxy-env
Pass through http proxy env vars in pure shell
2020-02-19 13:02:02 +01:00
Eelco Dolstra e3e8ee0471
Merge pull request #3328 from Rovanion/nix-daemon-already-running-when-installing-fix
installer: Handle edge case where the nix-daemon is already running on the system
2020-02-19 12:53:25 +01:00
Eelco Dolstra 906afedd23 Use Nixpkgs 20.03 2020-02-19 12:32:45 +01:00
Eelco Dolstra 16e9a75287 Typo 2020-02-19 12:32:33 +01:00
Eelco Dolstra 15ed4137e2
Merge pull request #3359 from bhipple/doc/pure-eval
doc: mention how to turn on pure evaluation mode in manual
2020-02-19 12:30:09 +01:00
Eelco Dolstra c4d3674de6
Merge pull request #3353 from tbsmoest/priv_tobias_pr_set_deathsig-1.4
Fix PR_SET_PDEATHSIG results in Broken pipe (#2395)
2020-02-19 12:29:12 +01:00
Eelco Dolstra 82de90961b Add dev output
Necessary since we're now propagating boehm-gc.
2020-02-19 12:26:59 +01:00
Eelco Dolstra 583d06385d
Build with large config Boehm GC 2020-02-18 17:57:10 +01:00
Eelco Dolstra f46bc0e8eb
Enable debug symbols 2020-02-18 17:52:47 +01:00
Eelco Dolstra 553e584f92
LocalStore::checkDerivationOutputs(): Improve error message 2020-02-18 17:51:48 +01:00
Eelco Dolstra d8fd31f50f
Disable the progress bar if $TERM == dumb or unset
Fixes #3363.
2020-02-18 17:51:18 +01:00
Benjamin Hipple 762febafe2 doc: mention how to turn on pure evaluation mode in manual
The flag is `--pure-eval`, which can be found by looking at the test suite; it
should be in the notes describing the feature as well, since otherwise users may
assume this is referencing something like `nix-shell --pure`.
2020-02-15 01:44:51 -05:00
Tobias Möst 3e347220c8 Fix PR_SET_PDEATHSIG results in Broken pipe (#2395)
The ssh client is lazily started by the first worker thread that
requires an ssh connection. To avoid the ssh client being killed when
the worker process is stopped, do not set PR_SET_PDEATHSIG.
2020-02-14 07:51:44 +01:00
Carlos D d78141a886 Pass through http proxy env vars in pure shell 2020-02-14 16:11:22 +11:00
Eelco Dolstra 9af10b753c Bindings::get(): std::optional<Attr *> -> Attr *
Returning a nullable type in an optional is silly.
2020-02-13 17:15:05 +01:00
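The point of that change, sketched as standalone C++ (illustrative types, not the actual libexpr `Bindings` class): a nullable pointer already encodes "not found", so wrapping it in `std::optional` only forces callers through a redundant second check.

```cpp
#include <map>
#include <optional>
#include <string>

struct Attr { std::string name; int value; };

struct Bindings {
    std::map<std::string, Attr> attrs;

    // Before: callers had to unwrap an optional *and* then check the pointer.
    std::optional<Attr *> getOld(const std::string & name) {
        auto i = attrs.find(name);
        if (i == attrs.end()) return {};
        return &i->second;
    }

    // After: nullptr already means "not found", so return the pointer directly.
    Attr * get(const std::string & name) {
        auto i = attrs.find(name);
        return i == attrs.end() ? nullptr : &i->second;
    }
};

int main() {
    Bindings b;
    b.attrs["x"] = Attr{"x", 1};
    if (auto a = b.get("x")) return a->value - 1;  // a single null check
    return 1;
}
```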
Eelco Dolstra d8972317fc Prevent uninitialized StorePath creation 2020-02-13 16:12:16 +01:00
Eelco Dolstra 94c9343702 Merge pull request #3350 from curiousleo/no-macro-use
Remove #[macro_use]
2020-02-10 10:11:35 +01:00
Leonhard Markert 1b56de8cd1 Remove macro_use
As of Rust 2018, macro_use is no longer required in most circumstances.
I think it is generally a good idea to remove these when not needed, to
stop them from polluting the crate's global namespace.
https://doc.rust-lang.org/edition-guide/rust-2018/macros/macro-changes.html#macro_rules-style-macros
2020-02-10 09:03:24 +01:00
Eelco Dolstra d82b78bf51
Fix segfault in gcc on i686-linux
src/libstore/ssh-store.cc: In constructor 'nix::SSHStore::SSHStore(const string&, const Params&)':
  src/libstore/ssh-store.cc:31:21: internal compiler error: Segmentation fault
               compress)
                       ^
  Please submit a full bug report,
  with preprocessed source if appropriate.

https://hydra.nixos.org/build/111545609
2020-02-07 13:01:48 +01:00
Eelco Dolstra db88cb401b
Merge pull request #3344 from LnL7/ssh-ng-remote-params
ssh-store: add remote-store and remote-program query params
2020-02-04 10:10:08 +01:00
Daiderd Jordan 8745c63d3c
ssh-store: add remote-store and remote-program query params
Brings the functionality of ssh-ng:// in sync with the legacy ssh://
implementation.  Specifying the remote store URI enables various useful
things, e.g.

    $ nix copy --to ssh-ng://cache?remote-store=file://mnt/cache --all
2020-02-03 23:22:28 +01:00
Eelco Dolstra c5319e5d0b
Show "warning:" in yellow instead of red 2020-02-01 12:37:22 +01:00
Eelco Dolstra 7be1a07a45
Merge pull request #3335 from domenkozar/retry-429
retry on HTTP status code 429
2020-01-29 16:22:46 +01:00
Domen Kožar 48ddb8e481
retry on HTTP status code 429 2020-01-29 11:47:39 +01:00
Calvin Loncaric 46992e71a1
Document that autoconf is a dependency 2020-01-26 17:22:47 -08:00
Eelco Dolstra 2242be83c6
Merge pull request #3329 from mayflower/attrs-chown
structured-attrs: chown .attrs.* files to builder
2020-01-23 18:24:44 +01:00
Robin Gloster f8dbde0813
structured-attrs: chown .attrs.* files to builder
Otherwise `chmod .`'ing the build directory, which nixpkgs does if
sourceRoot is set to '.', doesn't work anymore.
2020-01-23 17:38:07 +01:00
Rovanion Luckey a413594baf installer: Handle edge case where the nix-daemon is already running on the system
On a systemd-based Linux distribution: if the user has previously had multi-user Nix installed on the system, removed it, and then reinstalled multi-user Nix, the old nix-daemon.service will still be running when `scripts/install-systemd-multi-user.sh` tries to start it, which results in nothing being done and the old daemon continuing its run.

When a normal user then tries to use Nix through the daemon the nix binary will fail to connect to the nix-daemon as it does not belong to the currently installed Nix system. See below for steps to reproduce the issue that motivated this change.

$ sh <(curl https://nixos.org/nix/install) --daemon

$ sudo rm -rf /etc/nix /nix /root/.nix-profile /root/.nix-defexpr /root/.nix-channels /home/nix-installer/.nix-profile /home/nix-installer/.nix-defexpr /home/nix-installer/.nix-channels ~/.nix-channels ~/.nix-defexpr/ ~/.nix-profile /etc/profile.d/nix.sh.backup-before-nix /etc/profile.d/nix.sh; sed -i '/added by Nix installer$/d' ~/.bash_profile

$ unset NIX_REMOTE

$ sh <(curl https://nixos.org/nix/install) --daemon

└$ export NIX_REMOTE=daemon

└$ nix-env -iA nixpkgs.hello
installing 'hello-2.10'
error: cannot connect to daemon at '/nix/var/nix/daemon-socket/socket': No such file or directory
(use '--show-trace' to show detailed location information)

└$ sudo systemctl restart nix-daemon.service

└$ nix-env -iA nixpkgs.hello
installing 'hello-2.10'
these paths will be fetched (6.09 MiB download, 27.04 MiB unpacked):
  /nix/store/2g75chlbpxlrqn15zlby2dfh8hr9qwbk-hello-2.10
  /nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27
copying path '/nix/store/aag9d1y4wcddzzrpfmfp9lcmc7skd7jk-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/2g75chlbpxlrqn15zlby2dfh8hr9qwbk-hello-2.10' from 'https://cache.nixos.org'...
building '/nix/store/w9adagg6vlikr799nkkqc9la5hbbpgmi-user-environment.drv'...
created 2 symlinks in user environment
2020-01-23 14:48:53 +01:00
Eelco Dolstra d506bd587a Fix clang warning 2020-01-22 21:20:01 +01:00
Eelco Dolstra aef635da78 Fix derivation computation with __structuredAttrs and multiple outputs
Fixes

  error: derivation '/nix/store/klivma7r7h5lndb99f7xxmlh5whyayvg-zlib-1.2.11.drv' has incorrect output '/nix/store/fv98nnx5ykgbq8sqabilkgkbc4169q05-zlib-1.2.11-dev', should be '/nix/store/adm7pilzlj3z5k249s8b4wv3scprhzi1-zlib-1.2.11-dev'
2020-01-21 21:14:13 +01:00
Eelco Dolstra 8b09105db3
Merge pull request #3316 from LnL7/fix-secure-drv-outputs
build: remove warning when in sandboxing test mode
2020-01-14 08:43:37 +01:00
Eelco Dolstra e74b221a25
Merge pull request #3318 from bhipple/doc/relnotes-2.3
doc: touchup release notes for 2.3
2020-01-14 08:43:10 +01:00
Benjamin Hipple 5d24e18e29 doc: touchup release notes for 2.3
- At the top of the release notes, we announce sandboxing is now enabled by default,
then at the bottom it says it's now disabled when missing kernel support. These
can be merged into one point for clarity.

- The point about `max-jobs` defaulting to 1 appears unrelated to sandboxing.
2020-01-14 00:14:03 -05:00
Daiderd Jordan 8b3217f832
build: remove warning when in sandboxing test mode
Introduced in 66fccd5832, but somehow
breaks the secure-drv-outputs test.
2020-01-13 22:09:18 +01:00
Eelco Dolstra c3181e21e7 Tweak error message 2020-01-13 21:52:03 +01:00
Eelco Dolstra bfaa4db7bd Merge branch 'assert-show-expression' of https://github.com/LnL7/nix 2020-01-13 21:49:55 +01:00
John Ericson d64ab5131c unbreak build without pch 2020-01-13 21:45:33 +01:00
Eelco Dolstra c86c71c2b1 Test PRECOMPILE_HEADERS=0 2020-01-13 21:44:35 +01:00
Eelco Dolstra 835e541144 Fix build
https://hydra.nixos.org/eval/1564374
2020-01-13 21:34:54 +01:00
Eelco Dolstra 30c9ca3b05 Fix Nixpkgs dependency 2020-01-13 21:11:56 +01:00
Daiderd Jordan 307bcb9a8e
libexpr: show expression in assertion errors
Includes the expression of the condition in the assertion message if
the assertion fails, making assertions much easier to debug, e.g.

    error: assertion (withPython -> (python2Packages != null)) failed at pkgs/tools/security/nmap/default.nix:11:1
2020-01-11 15:45:41 +01:00
Eelco Dolstra 6f046fa39e
Merge pull request #3308 from trusktr/patch-1
Add a link to official channels in the docs.
2020-01-10 01:03:41 +01:00
Eelco Dolstra 72a50756bb
Merge pull request #3307 from yorickvP/yorickvp/nlohmann-fromJSON
builtins.fromJSON: use nlohmann/json parser instead of custom parser
2020-01-10 01:03:07 +01:00
Joe Pea 3895e78794
Add link to official channels in nix-channel command ref 2020-01-09 14:20:08 -08:00
Joe Pea 7ccfa7ca4f
Add a link to official channels in the Channels chapter. 2020-01-09 14:15:19 -08:00
Yorick van Pelt a350d0beb0
json-to-value: use unique_ptr instead of raw pointers 2020-01-09 22:46:41 +01:00
Yorick van Pelt f1fac0b5c3
builtins.fromJSON: use nlohmann/json parser instead of custom parser 2020-01-09 17:38:27 +01:00
Eelco Dolstra 04bbfa692f
Merge pull request #3305 from knl/interpret-u-escapes-in-JSON-strings
Add support for unicode escape sequences in fromJSON
2020-01-07 01:04:16 +01:00
Nikola Knezevic 52a8f9295b Add support for \u escape in fromJSON
As fromTOML supports \u and \U escapes, bring fromJSON on a par. As JSON defaults
to UTF-8 encoding (every JSON parser must support UTF-8), this change parses the
`\u hex hex hex hex` sequence (\u followed by 4 hexadecimal digits) into a
UTF-8 representation.

Add a test to verify correct parsing, using all escape sequences from json.org.
2020-01-07 00:09:58 +01:00
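For reference, the encoding step that message describes, as a standalone sketch (this is not the actual parser code, and surrogate-pair handling for code points above U+FFFF is omitted): a code point read from the four hex digits is emitted as UTF-8 bytes.

```cpp
#include <cstdint>
#include <string>

// Encode a Unicode code point (e.g. obtained from a \uXXXX escape) as UTF-8.
std::string encodeUTF8(uint32_t c) {
    std::string s;
    if (c < 0x80) s += (char) c;
    else if (c < 0x800) {
        s += (char) (0xC0 | (c >> 6));
        s += (char) (0x80 | (c & 0x3F));
    } else if (c < 0x10000) {
        s += (char) (0xE0 | (c >> 12));
        s += (char) (0x80 | ((c >> 6) & 0x3F));
        s += (char) (0x80 | (c & 0x3F));
    } else {
        s += (char) (0xF0 | (c >> 18));
        s += (char) (0x80 | ((c >> 12) & 0x3F));
        s += (char) (0x80 | ((c >> 6) & 0x3F));
        s += (char) (0x80 | (c & 0x3F));
    }
    return s;
}

int main() {
    return encodeUTF8(0x00E9) == "\xC3\xA9" ? 0 : 1;  // \u00e9 -> 'é'
}
```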
Nikola Knezevic cb2d348d48 Remove redundant check in parseJSONString 2020-01-07 00:09:58 +01:00
Eelco Dolstra bc22a7ee6a Fix use of uninitialized store path
Fixes 'building of '/nix/store/00000000000000000000000000000000-': ...'.
2020-01-06 22:20:10 +01:00
Eelco Dolstra e2988f48a1
Merge pull request #3303 from LnL7/darwin-sandbox
build: fix sandboxing on darwin
2020-01-06 20:56:35 +01:00
Daiderd Jordan 66fccd5832
build: fix sandboxing on darwin
Starting with ba87b08f85, getEnv now returns a
std::optional, which means these getEnv() != "" conditions no longer happen
if the variables are not defined.
2020-01-05 20:23:52 +01:00
Eelco Dolstra 0486e87791
Merge pull request #3302 from LnL7/darwin-repair-with-sandbox
build: fix path repairing with the darwin sandbox
2020-01-05 16:26:17 +01:00
Eelco Dolstra cb90e382b5 Hide FunctionCallTrace constructor/destructor
This prevents them from being inlined. On gcc 9, this reduces the
stack size needed for

  nix-instantiate '<nixpkgs>' -A texlive.combined.scheme-full --dry-run

from 12.9 MiB to 4.8 MiB.
2020-01-05 16:21:34 +01:00
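A hedged sketch of the pattern being applied here (the class name is reused for illustration and the bodies are invented): declaring the constructor and destructor in the header but defining them out of line keeps the compiler from inlining their bodies into every call site.

```cpp
#include <cstdio>

// Single-file sketch; in the real code the declarations live in a header and
// the definitions in a separate .cc, so call sites cannot inline the bodies.
struct FunctionCallTrace {
    FunctionCallTrace();   // declaration only: the body is not visible to callers
    ~FunctionCallTrace();
};

FunctionCallTrace::FunctionCallTrace() { std::puts("enter function"); }
FunctionCallTrace::~FunctionCallTrace() { std::puts("leave function"); }

int main() {
    FunctionCallTrace trace;  // construction/destruction happens out of line
}
```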
Daiderd Jordan 7d448bc966
build: fix path repairing when hash rewriting is required
Handle store path repairing on darwin when sandboxing is enabled. Unlike
on linux, sandboxing on darwin still requires hash rewriting.
2020-01-04 20:25:25 +01:00
Daiderd Jordan b33fefcb92
build: recover store path when replacing fails
This shouldn't happen in normal circumstances, but just in case
attempt to move the temporary path back if possible.
2020-01-04 20:24:27 +01:00
Eelco Dolstra 0de33cc81b
Merge pull request #3298 from edef1c/passasfile-noprefix
passAsFile: leave out the hash prefix
2020-01-03 12:43:06 +01:00
edef c65a6fa86a passAsFile: leave out the hash prefix
Having a colon in the path may cause issues, and having the hash
function indicated isn't actually necessary. We now verify the path 
format in the tests to prevent regressions.
2020-01-02 23:56:06 +00:00
Eelco Dolstra 3ad4a332eb
Merge pull request #3297 from edef1c/passasfile-hash
passAsFile: hash the attribute name instead of numbering sequentially
2020-01-03 00:08:23 +01:00
Puck Meerburg 515c0a263e passAsFile: hash the attribute name instead of numbering sequentially
This makes the paths consistent without relying on ordering.

Co-authored-by: edef <edef@edef.eu>
2020-01-02 22:56:03 +00:00
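A sketch of why hashing the attribute name (rather than using a running counter) makes the resulting file path order-independent; the hash helper and file-name format here are illustrative stand-ins, not the ones Nix actually uses.

```cpp
#include <functional>
#include <sstream>
#include <string>

// Hypothetical helper: derive the passAsFile temporary file name from a hash
// of the attribute name instead of a sequence number.
std::string passAsFileName(const std::string & attrName) {
    // std::hash stands in for the real cryptographic hash.
    std::ostringstream s;
    s << ".attr-" << std::hex << std::hash<std::string>{}(attrName);
    return s.str();
}

int main() {
    // Same attribute name -> same file name, regardless of how many other
    // attributes exist or in which order they are processed.
    return passAsFileName("buildScript") == passAsFileName("buildScript") ? 0 : 1;
}
```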
Eelco Dolstra 3469062e76
Merge pull request #3296 from grahamc/export-reference-graph
exportReferencesGraph: support working
2020-01-02 11:04:03 +01:00
Graham Christensen c502831a1d
exportReferencesGraph: support working
Before, we would get:

    [deploy@bastion:~]$ nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
    these derivations will be built:
      /nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
      /nix/store/ssxwmll7v21did1c8j027q0m8w6pg41i-unit-prometheus-alertmanager-irc-notifier.service.drv
      /nix/store/mvyvkj46ay7pp7b1znqbkck2mq98k0qd-unit-script-network-local-commands-start.drv
      /nix/store/vsl1y9mz38qfk6pyirjwnfzfggz5akg6-unit-network-local-commands.service.drv
      /nix/store/wi5ighfwwb83fdmav6z6n2fw6npm9ffl-unit-prometheus-hydra-exporter.service.drv
      /nix/store/x0qkv535n75pbl3xn6nn1w7qkrg9wwyg-unit-prometheus-packet-sd.service.drv
      /nix/store/lv491znsjxdf51xnfxh9ld7r1zg14d52-unit-script-packet-sd-env-key-pre-start.drv
      /nix/store/nw4nzlca49agsajvpibx7zg5b873gk9f-unit-script-packet-sd-env-key-start.drv
      /nix/store/x674wwabdwjrkhnykair4c8mpxa9532w-unit-packet-sd-env-key.service.drv
      /nix/store/ywivz64ilb1ywlv652pkixw3vxzfvgv8-unit-wireguard-wg0.service.drv
      /nix/store/v3b648293g3zl8pnn0m1345nvmyd8dwb-unit-script-acme-selfsigned-status.nixos.org-start.drv
      /nix/store/zci5d3zvr6fgdicz6k7jjka6lmx0v3g4-unit-acme-selfsigned-status.nixos.org.service.drv
      /nix/store/f6pwvnm63d0kw5df0v7sipd1rkhqxk5g-system-units.drv
      /nix/store/iax8071knxk9c7krpm9jqg0lcrawf4lc-etc.drv
      /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
    error: invalid file name 'closure-init-0' in 'exportReferencesGraph'

This was tough to debug; I didn't figure out which one was broken until I did:

    nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv 2>&1 | grep  nix/store | xargs -n1 nix-store -r

and then looking at the remaining build graph:

    $ nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
    these derivations will be built:
      /nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
      /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
    error: invalid file name 'closure-init-0' in 'exportReferencesGraph'

and, knowing the initrd is built before the system, then:

    $ nix show-derivation /nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
    {
      "/nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv": {
        [...]
        "exportReferencesGraph": "closure-init-0 /nix/store/...-stage-1-init.sh closure-mdadm.conf-1 /nix/store/...-mdadm.conf closure-ubuntu.conf-2 ...",
        [...]
      }
    }

I then searched the repo for "in 'exportReferencesGraph'", found this
recently updated regex, and realized it was missing a "-".
2020-01-01 20:50:40 -05:00
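The fix itself is a missing '-' in a file-name check. A hedged sketch of that kind of validation (the character class below is illustrative, not the exact regex in the source): without '-' in the class, a name like 'closure-init-0' is rejected.

```cpp
#include <regex>
#include <string>

// Illustrative validation only: names such as "closure-init-0" must be
// accepted, so '-' has to be part of the allowed character class.
bool validExportName(const std::string & name) {
    static const std::regex re("[A-Za-z_][A-Za-z0-9_.-]*");
    return std::regex_match(name, re);
}

int main() {
    return validExportName("closure-init-0") ? 0 : 1;
}
```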
Eelco Dolstra b0cadf547b
Merge pull request #3289 from michaelforney/tar-J
Pass -J to tar for xz decompression
2019-12-25 13:07:58 +01:00
Michael Forney 43eb7b6756 Pass -J to tar for xz decompression
Some tar implementations can't auto-detect compression formats, so
they must be specified explicitly.
2019-12-22 17:17:14 -08:00
Eelco Dolstra aaf57c983d
Merge pull request #3284 from puffnfresh/wsl
Disable use-sqlite-wal under WSL
2019-12-23 00:44:29 +01:00
Eelco Dolstra 7dcfa8042e
Merge pull request #3287 from michaelforney/cp-flag
Pass -P to cp to preserve symlinks
2019-12-23 00:43:38 +01:00
Michael Forney 10414d467b Pass -P to cp to preserve symlinks
This is commonly the default behavior with -R, but POSIX leaves the
default unspecified.
2019-12-21 21:30:38 -08:00
Brian McKenna d25923263e Disable use-sqlite-wal under WSL
Before:

    $ nix-channel --update
    unpacking channels...
    warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
    warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
    warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
    warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
    warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)

After:

    $ inst/bin/nix-channel --update
    unpacking channels...
    created 1 symlinks in user environment

I've seen complaints that "sandbox" caused problems under WSL but I'm
having no problems. I think recent changes could have fixed the issue.
2019-12-21 08:14:19 +11:00
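A sketch of the shape of such a default; the detection method below is an assumption for illustration, not necessarily what the commit does: detect WSL and fall back from WAL to SQLite's default journal mode there.

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Assumption for illustration only: treat a kernel release string containing
// "Microsoft"/"microsoft" as WSL.
bool runningUnderWSL() {
    std::ifstream f("/proc/sys/kernel/osrelease");
    std::string rel((std::istreambuf_iterator<char>(f)),
                    std::istreambuf_iterator<char>());
    return rel.find("Microsoft") != std::string::npos
        || rel.find("microsoft") != std::string::npos;
}

int main() {
    bool useSQLiteWAL = !runningUnderWSL();  // default on, off under WSL
    return useSQLiteWAL ? 0 : 1;
}
```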
Eelco Dolstra c84c843e33
tarfile.cc: Restore timestamps
This is needed to get the lastModified attribute of GitHub flakes.
2019-12-19 15:09:54 +01:00
Eelco Dolstra 2550c11373
tarfile.cc: Don't change the cwd
Nix is multithreaded so it's not safe to change the cwd.
2019-12-19 15:08:16 +01:00
Eelco Dolstra be32da0ed0
tarfile.cc: Style fixes 2019-12-19 15:01:58 +01:00
Eelco Dolstra ee235e764c
Merge branch 'libarchive' of https://github.com/yorickvP/nix 2019-12-19 14:47:18 +01:00
Eelco Dolstra 9f7b4d068c
Cleanup: Remove unused makeDeb/makeRPM functions 2019-12-19 13:17:48 +01:00
Eelco Dolstra 4511f09b49 nix make-content-addressable: Add --json flag
Fixes #3274.
2019-12-18 17:39:02 +01:00
Eelco Dolstra f8abbdd456 Add priority setting to stores
This allows overriding the priority of substituters, e.g.

  $ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
    --substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'

Fixes #3264.
2019-12-17 17:17:53 +01:00
Eelco Dolstra 54bf5ba422 nix-store -r: Handle symlinks to store paths
Fixes #3270.
2019-12-16 19:11:47 +01:00
Eelco Dolstra 14d82baba4 StorePath::new(): Check store directory 2019-12-16 17:41:56 +01:00
Eelco Dolstra 410acd29c0 Fix cargo test 2019-12-15 10:47:59 +01:00
Eelco Dolstra acb71aa5c6 Tweak error message 2019-12-15 10:44:53 +01:00
Eelco Dolstra 2b0365753a Merge branch 'limit_depth_resolveExprPath' of https://github.com/d-goldin/nix 2019-12-15 00:22:35 +01:00
Eelco Dolstra 8656a2de56
Merge pull request #3269 from xzfc/nix-shell
nix-shell: don't check for "nix-shell" in shebang script name
2019-12-14 23:24:32 +01:00
Eelco Dolstra ba6d2093c7 Fix progress bar 2019-12-14 23:19:04 +01:00
Albert Safin a70706b025 nix-shell: don't check for "nix-shell" in shebang script name 2019-12-14 15:37:20 +00:00
Eelco Dolstra ac9cc2ec08 Move some code 2019-12-13 19:10:39 +01:00
Eelco Dolstra b4edc3ca61 Don't leak exceptions 2019-12-13 19:05:26 +01:00
Eelco Dolstra e6bd88878e Improve gzip error message 2019-12-13 19:05:26 +01:00
Eelco Dolstra ca87707c90 Get rid of CBox 2019-12-13 19:05:26 +01:00
Eelco Dolstra 5a6d6da7ae Validate tarball components 2019-12-13 19:05:26 +01:00
Eelco Dolstra 4581159e3f Simplify tarball test 2019-12-13 17:26:58 +01:00
Dima d89d9958a7 bugfix: Adding depth limit to resolveExprPath
There is no termination condition for evaluation of cyclical
expression paths, which can lead to infinite loops. This addresses
one spot in the parser, in a similar fashion to what utils.cc/canonPath
does.

This issue can be reproduced by something like:

```
ln -s a b
ln -s b a

nix-instantiate -E 'import ./a'
```
2019-12-13 14:51:30 +01:00
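A minimal sketch of such a guard, assuming std::filesystem (not the actual parser code): follow symlinks with a bounded counter and bail out once the limit is hit.

```cpp
#include <filesystem>
#include <stdexcept>

namespace fs = std::filesystem;

// Follow symlinks one step at a time, but refuse to loop forever on cycles
// such as `ln -s a b; ln -s b a`. The limit and error text are illustrative.
fs::path resolveWithLimit(fs::path p, unsigned maxDepth = 1024) {
    for (unsigned depth = 0; fs::is_symlink(p); ++depth) {
        if (depth >= maxDepth)
            throw std::runtime_error("too many symbolic links: " + p.string());
        auto target = fs::read_symlink(p);
        p = target.is_absolute() ? target : p.parent_path() / target;
    }
    return p;
}

int main() {
    try { resolveWithLimit("a"); } catch (const std::exception &) { return 1; }
    return 0;
}
```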
Eelco Dolstra e8aa2290ed Only install *.sb files on macOS 2019-12-13 14:42:55 +01:00
Eelco Dolstra 3e787423c2 Remove FIXME 2019-12-13 12:55:52 +01:00
Eelco Dolstra d1b238ec3c Simplify 2019-12-13 12:53:20 +01:00
Eelco Dolstra 2da4c61049 Merge branch 'libstore-ssh-better-exec-error-message' of https://github.com/Profpatsch/nix 2019-12-13 12:51:36 +01:00
Tom Bereknyei c6295a3afd Initial gzip support
Closes #3256
2019-12-13 03:34:15 -05:00
Profpatsch 38b29fb72c libstore/ssh: Improve error message on failing `execvp`
If the `throw` is reached, this means that the execvp into `ssh` wasn't
successful. We can hint at a common problem, which is a missing `ssh`
executable.

Test with:

```
env PATH= ./result/bin/nix-copy-closure --builders '' unusedhost
```

and the bash version with

```
env PATH= ./result/bin/nix-copy-closure --builders '' localhost
```
2019-12-12 15:32:17 +01:00
Eelco Dolstra f800d450b7 Speed up StorePath::to_string()
1.81% -> 0.56%
2019-12-10 22:15:20 +01:00
Eelco Dolstra f64b58b45e Speed up base32::decode()
From 1.03% to 0.19% of the runtime of 'nix-instantiate "<nixpkgs>" -A
texlive.combined.scheme-full --dry-run'.
2019-12-10 22:06:05 +01:00
Eelco Dolstra bbe97dff8b Make the Store API more type-safe
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.

Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.

Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
2019-12-10 22:06:05 +01:00
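A hedged C++ sketch of the helper-type pattern described above: a wrapper whose destructor calls the Rust drop() function over FFI, with moved-from values set to bitwise zero so drop() runs at most once per live value. The names, the FFI function, and the 8-byte size are all illustrative.

```cpp
#include <cstring>
#include <utility>

// Stub standing in for the Rust type's drop() function exposed over FFI.
extern "C" void ffi_store_path_drop(void *) { /* free Rust-side resources */ }

struct StorePathHandle {
    unsigned char raw[8] = {1};  // stand-in for the Rust value's raw bytes

    StorePathHandle() = default;
    StorePathHandle(StorePathHandle && other) {
        std::memcpy(raw, other.raw, sizeof raw);
        std::memset(other.raw, 0, sizeof other.raw);  // moved-from := bitwise zero
    }
    ~StorePathHandle() {
        static const unsigned char zero[sizeof raw] = {};
        if (std::memcmp(raw, zero, sizeof raw) != 0)  // skip moved-from values
            ffi_store_path_drop(raw);
    }
};

int main() {
    StorePathHandle a;
    StorePathHandle b(std::move(a));  // only b is dropped; a was zeroed
}
```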
Eelco Dolstra ebd89999c2 Add StorePath tests 2019-12-10 22:04:40 +01:00
Eelco Dolstra bca0afb943 Shut up about deprecated functions 2019-12-10 13:44:49 +01:00
Eelco Dolstra 9e565781c6 Shut up warnings 2019-12-10 13:37:23 +01:00
Eelco Dolstra 14aa0c3259 Use hyper directly instead of reqwest 2019-12-10 13:37:23 +01:00
Eelco Dolstra a6f0bef0a7 Update to async/await-enabled tokio 2019-12-10 13:37:23 +01:00
Eelco Dolstra 7f08975050 Add NAR parser 2019-12-10 13:37:23 +01:00
Eelco Dolstra 6317f0f7a0 StorePath improvements 2019-12-10 13:37:23 +01:00
Eelco Dolstra cce218f950 Add base32 encoder/decoder 2019-12-10 13:37:23 +01:00
Eelco Dolstra a1ff43045b Move stuff around 2019-12-10 13:37:23 +01:00
Eelco Dolstra ce3c41aef0 Drop some dependencies 2019-12-10 13:37:23 +01:00
Eelco Dolstra d832a355ea Use rustls
In particular, this enables HTTP/2 support in reqwest, which is a lot
more efficient.
2019-12-10 13:37:23 +01:00
Eelco Dolstra dd5d76e2ed Basic BinaryCacheStore implementation using async Rust 2019-12-10 13:37:23 +01:00
Eelco Dolstra 98ef11677c EvalState::callFunction(): Make FunctionCallTrace use less stack space
The FunctionCallTrace object consumes a few hundred bytes of stack
space, even when tracing is disabled. This was causing stack overflows:

  $ nix-instantiate '<nixpkgs>' -A texlive.combined.scheme-full --dry-run
  error: stack overflow (possible infinite recursion)

This is with the default stack size of 8 MiB.

Putting the object on the heap reduces stack usage to < 5 MiB.
2019-12-10 13:32:30 +01:00
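A sketch of the fix's shape (names and sizes illustrative): keep only a pointer-sized object in the deeply recursive frame and allocate the large trace object on the heap, and only when tracing is actually enabled.

```cpp
#include <memory>

struct FunctionCallTrace {
    char state[512];  // stands in for the "few hundred bytes" of trace state
};

void callFunction(bool traceEnabled) {
    // Only a pointer-sized object lives in this (deeply recursive) stack frame.
    std::unique_ptr<FunctionCallTrace> trace;
    if (traceEnabled)
        trace = std::make_unique<FunctionCallTrace>();
    // ... evaluate the function call ...
}

int main() { callFunction(false); }
```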
Eelco Dolstra 61cc9f34d2 Remove UserLock self-lock check
This is no longer needed since we're not using POSIX locks anymore.
2019-12-09 23:57:33 +01:00
Yorick van Pelt f765e44123
downgrade required libarchive version (ubuntu 16.04) 2019-12-09 18:39:37 +07:00
Yorick van Pelt 3663a8a7e9
release.nix: add libarchive to rpm and deb dependencies 2019-12-09 17:31:05 +07:00
Yorick van Pelt b232eea40a
nix-rust: remove unused tar file code 2019-12-09 17:28:15 +07:00
Yorick van Pelt eba82b7c88
further clean up libarchive code 2019-12-09 17:21:46 +07:00
Puck Meerburg 28ee687adf Clean up libarchive support 2019-12-07 18:12:21 +00:00
Yorick van Pelt fe7ec70e6b
remove rust unpack_tarfile ffi 2019-12-07 23:28:31 +07:00
Yorick van Pelt 1355554d12
code 'cleanup' 2019-12-07 23:23:11 +07:00
Yorick van Pelt f54c168031
add wrapper function around libarchive that converts its errors to C++ exceptions 2019-12-07 23:10:27 +07:00
Yorick van Pelt 232b390766
fixup! libarchive proof of concept 2019-12-07 23:00:37 +07:00
Yorick van Pelt 9ff5f6492f
libarchive proof of concept 2019-12-07 22:35:14 +07:00
Eelco Dolstra 3b9c9d34e5 Shut up clang warning
(cherry picked from commit 3392f1b778)
2019-12-05 20:41:44 +01:00
Eelco Dolstra 80ab95315d nix doctor: Fix typo
(cherry picked from commit 96c6b08ed7)
2019-12-05 20:40:52 +01:00
Eelco Dolstra 47a937d512 Show hash mismatch warnings in SRI format
(cherry picked from commit 63c5c91cc0)
2019-12-05 20:32:25 +01:00
Eelco Dolstra 0678e4d56a Move #include
(cherry picked from commit 8beedd4486)
2019-12-05 20:30:29 +01:00
Eelco Dolstra 79142cbbe1 Bindings: Add convenience method for requiring an attribute
(cherry picked from commit fb692e5f7b)
2019-12-05 20:29:15 +01:00
Eelco Dolstra 0d118ef0c9 Bindings::get(): Add convenience method
This allows writing attribute lookups as

    if (auto name = value.attrs->get(state.sName))
      ...

(cherry picked from commit f216c76c56)
2019-12-05 20:29:00 +01:00
Eelco Dolstra 50d483a2c1 Fix precompiled-headers generation
It's now regenerated when util.hh changes, and is ordered after
config.h to fix a race.
2019-12-05 20:26:24 +01:00
Eelco Dolstra 5e449b43ed Initialize Command::_name
(cherry picked from commit d0a769cb06)
2019-12-05 20:21:22 +01:00
Eelco Dolstra ac67685606 Make subcommand construction in MultiCommand lazy
(cherry picked from commit a0de58f471)
2019-12-05 20:19:26 +01:00
Eelco Dolstra f964f428fe Move Command and MultiCommand to libutil
(cherry picked from commit f70434b1fb)
2019-12-05 20:13:47 +01:00
Eelco Dolstra f1b5c76c1a MultiCommand: Simplify construction
(cherry picked from commit 15a16e5c05)
2019-12-05 20:10:35 +01:00
Eelco Dolstra 092af3c826 Eliminate more pass-by-value in variadic calls 2019-12-05 19:58:52 +01:00
Eelco Dolstra 603b2f583c Revert "Make fmt() non-recursive"
This reverts commit 2b761d5f50.

Also *really* make fmt() take arguments by reference.
2019-12-05 19:58:49 +01:00
Eelco Dolstra 334b8f8af1 fmt(): Pass arguments by reference rather than by value 2019-12-05 17:40:46 +01:00
Eelco Dolstra f4b9495854
Merge pull request #3255 from Profpatsch/doc-manual-allowSubstitutes-add-note
doc/manual: add note to `allowSubstitutes` advanced attribute
2019-12-04 12:46:31 +01:00
Eelco Dolstra c1d18050b4 Disable recursive Nix test on macOS
https://hydra.nixos.org/build/107724274
2019-12-03 19:19:14 +01:00
Profpatsch 7923e22276 doc/manual: add ids to the advanced attribute definitions
This makes it possible to reference single attribute definitions,
for pointing people to their exact definition.
2019-12-03 18:22:27 +01:00
Profpatsch 7395e091c5 doc/manual: add note to `allowSubstitutes` advanced attribute 2019-12-03 18:01:45 +01:00
Eelco Dolstra e59e2b2951 Merge branch 'pkg-config-static' of https://github.com/matthewbauer/nix 2019-12-02 13:20:02 +01:00
Eelco Dolstra ac2bc721d8 Merge remote-tracking branch 'origin/recursive-nix' 2019-12-02 12:34:46 +01:00
Graham Christensen ec364582eb
Merge pull request #3252 from bwignall/typo
Fix typos
2019-11-30 19:05:43 -05:00
Brian Wignall 8737980e75 Fix typos 2019-11-30 19:04:14 -05:00
Eelco Dolstra f102d793f1
Merge pull request #2748 from edolstra/rust
Make nix/unpack-channel.nix a builtin builder
2019-11-29 19:33:31 +01:00
Eelco Dolstra 39954a9586 Make libnixrust a dynamic library
This is a hack to fix the build on macOS, which was failing because
libnixrust.a contains compiler builtins that clash with
libclang_rt.osx.a. There's probably a better solution...

https://hydra.nixos.org/build/107473280
2019-11-29 18:30:39 +01:00
Eelco Dolstra 895ed4cef0
Remove RPM spec file
Closes #3225.
Closes #3226.
2019-11-28 15:10:18 +01:00
Eelco Dolstra 2d6f1ddbb5
Remove builtins.valueSize
Fixes #3246.
2019-11-28 13:52:42 +01:00
Eelco Dolstra 895ce1bb6c make clean: Delete nix-rust/target 2019-11-27 17:33:59 +01:00
Eelco Dolstra f553a8bdea When OPTIMIZE=0, build rust code in debug mode 2019-11-27 14:18:57 +01:00
Eelco Dolstra 949dc84894 Fix segfault on i686-linux
https://hydra.nixos.org/build/107467517

Seems that on i686-linux, gcc and rustc disagree on how to return
1-word structs: gcc has the caller pass a pointer to the result, while
rustc has the callee return the result in a register. Work around this
by using a bare pointer.
2019-11-27 14:17:15 +01:00
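A hedged sketch of the workaround (all names are illustrative): instead of returning a one-word struct across the Rust/C++ boundary, where the two compilers disagreed on the calling convention, the FFI function returns a bare pointer, which both sides pass the same way.

```cpp
#include <cstdint>

struct StorePathBuf { uint8_t * data; };  // one-word struct: ABI-ambiguous on i686

// Before (problematic): extern "C" StorePathBuf ffi_store_path_new();
// After: return the single word directly as a pointer.
extern "C" uint8_t * ffi_store_path_new() { return nullptr; }  // stub for the Rust side

int main() {
    uint8_t * p = ffi_store_path_new();
    return p == nullptr ? 0 : 1;
}
```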
Eelco Dolstra dbc4f9d478 Fix macOS build
https://hydra.nixos.org/build/107466992
2019-11-27 00:17:39 +01:00
Eelco Dolstra e6c1d1b474 Update Cargo.lock 2019-11-26 22:46:36 +01:00
Eelco Dolstra 88f8063917 -Z offline -> --offline 2019-11-26 22:45:15 +01:00
Eelco Dolstra 8918bae098 Drop remaining uses of external "tar"
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).
2019-11-26 22:07:28 +01:00
Eelco Dolstra f2bd847092 Ignore tar header entries
In particular, these are emitted by 'git archive' (in fetchGit).
2019-11-26 22:07:28 +01:00
Eelco Dolstra d33dd6e6c0 Move code around 2019-11-26 22:07:28 +01:00
Eelco Dolstra d14b1c261c Shut up some rust warnings 2019-11-26 22:07:28 +01:00
Eelco Dolstra b7fba16613 Move code around 2019-11-26 22:07:28 +01:00
Eelco Dolstra f738cd4d97 More Rust FFI adventures
We can now convert Rust Errors to C++ exceptions. At the Rust->C++ FFI
boundary, Result<T, Error> will cause Error to be converted to and
thrown as a C++ exception.
2019-11-26 22:07:28 +01:00
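A hedged C++-side sketch of that boundary (the struct layout and names are invented for illustration): the Rust side hands back a Result-like value, and the C++ side turns an error into a thrown exception.

```cpp
#include <stdexcept>

// Illustrative layout of a Result-like value crossing the FFI boundary.
struct FFIResult {
    const char * errorMsg;  // nullptr on success
};

// Turn an FFI error into a C++ exception at the boundary.
void checkFFI(const FFIResult & r) {
    if (r.errorMsg) throw std::runtime_error(r.errorMsg);
}

int main() {
    try { checkFFI(FFIResult{"something went wrong on the Rust side"}); }
    catch (const std::runtime_error &) { return 0; }
    return 1;
}
```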
Eelco Dolstra 8110b4ebb2 Rust cleanup 2019-11-26 22:07:28 +01:00
Eelco Dolstra 343ebcc048 Only pass '-Z offline' to cargo if we have a vendor directory 2019-11-26 22:07:28 +01:00
Eelco Dolstra afb021893b Reduce the size of the vendor directory by removing some winapi cruft 2019-11-26 22:07:28 +01:00
Eelco Dolstra d722e2175e Include cargo dependencies in the Nix tarball 2019-11-26 22:07:28 +01:00
Eelco Dolstra 0dbb249b36 Update Rust dependencies 2019-11-26 22:07:28 +01:00
Eelco Dolstra 87b7b25e13 Clean up the configure script 2019-11-26 22:07:28 +01:00
Eelco Dolstra 6a9c815734 Remove most of <nix/config.nix>
This is no longer needed.
2019-11-26 22:07:28 +01:00
Eelco Dolstra 045708db43 Make <nix/unpack-channel.nix> a builtin builder
This was the last function using a shell script, so this allows us to
get rid of tar, coreutils, bash etc.
2019-11-26 22:07:28 +01:00
Eelco Dolstra e60f6bd4ce Enable Rust code to call C++ Source objects 2019-11-26 22:07:28 +01:00
Eelco Dolstra 11da5b2816 Add some Rust code 2019-11-26 22:07:28 +01:00
Eelco Dolstra abb8ef619b
Fix macOS build
https://hydra.nixos.org/build/107457009
2019-11-26 21:08:56 +01:00
Eelco Dolstra 313106d549
Fix clang warnings 2019-11-26 21:07:44 +01:00
Eelco Dolstra 425991883a
Merge pull request #3141 from xbreak/nocafile
Downloader: Log configured CA file
2019-11-26 20:52:25 +01:00
Eelco Dolstra 7c8d7c17f8
Merge pull request #3144 from matthewbauer/fix-sandbox-fallback
Fix sandbox fallback settings
2019-11-26 20:51:52 +01:00
Eelco Dolstra 0be8d7784f
Typo 2019-11-26 20:33:46 +01:00
Eelco Dolstra 73efc1e8e7
Merge branch 'document-dry-run-option' of https://github.com/Ma27/nix 2019-11-26 20:32:50 +01:00
Eelco Dolstra ec5e7b44ff
Simplify 2019-11-26 20:26:22 +01:00
Eelco Dolstra 96e1c39bb7
Merge branch 'repair-bad-links' of https://github.com/chkno/nix 2019-11-26 20:21:48 +01:00
Eelco Dolstra 872740cf60
Merge pull request #3238 from puckipedia/attrset-overrides-dynamic
Ensure enough space in attrset bindings
2019-11-26 20:14:55 +01:00
Eelco Dolstra c13193017f
Disallow empty store path names
Fixes #3239.
2019-11-26 20:12:15 +01:00
Eelco Dolstra 89db9353d7
Doh 2019-11-26 20:08:25 +01:00
Eelco Dolstra 1ec6e6e11e
Add feature to disable URL literals
E.g.

  $ nix-build '<nixpkgs>' -A hello --experimental-features no-url-literals
  error: URL literals are disabled, at /nix/store/vsjamkzh15r3c779q2711az826hqgvzr-nixpkgs-20.03pre194957.bef773ed53f/nixpkgs/pkgs/top-level/all-packages.nix:1236:11

Helps with implementing https://github.com/NixOS/rfcs/pull/45.
2019-11-26 19:48:34 +01:00
Eelco Dolstra fc62caa4a5
Merge pull request #3242 from raboof/documentBuiltinsPlaceholder
Document builtins.placeholder
2019-11-25 22:05:52 +01:00
Arnout Engelen 4e70652ee3 Document builtins.placeholder 2019-11-25 18:00:05 +01:00
Puck Meerburg cdadbf7708 Add testcase for attrset using __overrides and dynamic attrs 2019-11-25 13:03:54 +00:00
Puck Meerburg cd55f91ad2 Ensure enough space in attrset bindings when using both __overrides and dynamic attributes 2019-11-25 12:37:14 +00:00
Eelco Dolstra d12d69ea1a
Turn NIX_PATH into a config setting
This allows it to be set in nix.conf.
2019-11-22 23:07:35 +01:00
Eelco Dolstra ec9dd9a5ae
Provide a default value for NIX_PATH 2019-11-22 22:08:51 +01:00
Eelco Dolstra 1c3ccba0f5
Remove $NIX_USER_PROFILE_DIR
This is not used anywhere.
2019-11-22 16:27:48 +01:00
Eelco Dolstra ba87b08f85
getEnv(): Return std::optional
This allows distinguishing between an empty value and no value.
2019-11-22 16:18:13 +01:00
Chuck 3e2c77d001 Check for and repair bad .links entries
A corrupt entry in .links prevents adding a fixed version of that file
to the store in any path.  The user experience is that corruption
present in the store 'spreads' to new paths added to the store:

(With store optimisation enabled)

1. A file in the store gets corrupted somehow (eg: filesystem bug).
2. The user tries to add a thing to the store which contains a good copy
   of the corrupted file.
3. The file being added to the store is hashed, found to match the bad
   .links entry, and is replaced by a link to the bad .links entry.
   (The .links entry's hash is not verified during add -- this would
   impose a substantial performance burden.)
4. The user observes that the thing in the store that is supposed to be
   a copy of what they were trying to add is not a correct copy -- some
   files have different contents!  Running "nix-store --verify
   --check-contents --repair" does not fix the problem.

This change makes "nix-store --verify --check-contents --repair" fix
this problem.  Bad .links entries are simply removed, allowing future
attempts to insert a good copy of the file to succeed.
2019-11-15 11:55:36 -08:00
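A hedged sketch of the repair step described above: re-hash each entry under `.links` and delete the ones whose content no longer matches the name they are stored under, so a good copy can be re-inserted later. The directory layout assumption and the hash helper are illustrative only.

```cpp
#include <filesystem>
#include <fstream>
#include <functional>
#include <iostream>
#include <sstream>
#include <string>

namespace fs = std::filesystem;

// Stand-in for the real content hash used to name .links entries.
std::string hashFile(const fs::path & p) {
    std::ifstream f(p, std::ios::binary);
    std::stringstream contents; contents << f.rdbuf();
    std::ostringstream out;
    out << std::hex << std::hash<std::string>{}(contents.str());
    return out.str();
}

// Remove .links entries whose contents no longer match their name.
void repairLinks(const fs::path & linksDir) {
    for (auto & entry : fs::directory_iterator(linksDir)) {
        if (hashFile(entry.path()) != entry.path().filename().string()) {
            std::cerr << "removing corrupt link " << entry.path() << "\n";
            fs::remove(entry.path());
        }
    }
}

int main() {}
```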
Eelco Dolstra fd900c45b5
Merge pull request #3220 from nh2/manual-nix-shell-p-expr
manual: nix-shell: Elaborate on using `-p` with expressions
2019-11-14 11:25:47 +01:00
Eelco Dolstra 0352c1a4f8
Typo 2019-11-13 17:18:17 +01:00
Eelco Dolstra 804910fb0e
Merge pull request #3213 from singron/fetchurl_test
Replace $TMPDIR with $TEST_ROOT in tests/fetchurl.sh
2019-11-11 12:15:59 +01:00
Eelco Dolstra 5ee23c35b9
Merge pull request #3219 from Ericson2314/semicolons
Fix extra semicolons warnings
2019-11-11 12:13:51 +01:00
John Ericson 8669db1dcc Clean up semicolon and comma
Thanks @bhipple for catching!
2019-11-10 16:21:59 -05:00
Niklas Hambüchen 07294e988c manual: nix-shell: Elaborate on using `-p` with expressions.
This documents the outcome of the change in
https://github.com/NixOS/nix/issues/454:

> We can also automatically add parentheses in the generated
> `buildInputs`, so you can type `nix-shell -p "expr"`
> instead of `"(expr)"`.
2019-11-10 17:29:13 +01:00
John Ericson 4c34054673 Remove unneeded semicolons 2019-11-10 11:24:47 -05:00
John Ericson 96e6e680c1 Fix extra ; warnings involving MakeError 2019-11-10 11:24:47 -05:00
Domen Kožar 1f174226d1
Merge pull request #3218 from kolloch/patch-1
De-duplicate struct PrimOp forward declaration
2019-11-10 15:28:18 +01:00
Peter Kolloch 2ba9f22715
De-duplicate struct PrimOp forward declaration 2019-11-10 10:02:22 +01:00
Eric Culp 6c041e8413 Replace $TMPDIR with $TEST_ROOT in tests/fetchurl.sh
$TMPDIR isn't necessarily set, which would cause this test to fail.
2019-11-08 12:08:10 -08:00
Eelco Dolstra d1db7fa952
Merge pull request #3211 from zimbatm/gitignore-precompiled-headers
gitignore /precompiled-headers.h.gch
2019-11-08 16:23:57 +01:00
zimbatm a08f353922
gitignore /precompiled-headers.h.?ch 2019-11-08 14:48:52 +00:00
Eelco Dolstra 0d6774468c
Move editorFor from libutil to nix
libutil should not depend on libexpr.
2019-11-08 15:13:32 +01:00
Eelco Dolstra 48f0a76372
Fix installerScript job
https://hydra.nixos.org/build/105961653
2019-11-07 18:31:16 +01:00
Eelco Dolstra 4145cd2da0
Use upstream nlohmann_json 2019-11-07 18:23:17 +01:00
Eelco Dolstra e5bf81256c
Fix Perl bindings 2019-11-07 12:18:37 +01:00
Eelco Dolstra 6d2605500f
Fix macOS build 2019-11-07 11:53:28 +01:00
Eelco Dolstra 99af822004
Disable the evalNixOS test
It also OOMs.

https://hydra.nixos.org/build/105942679
2019-11-07 10:14:00 +01:00
Eelco Dolstra 04bf9acd22
Remove #include 2019-11-07 10:12:35 +01:00
Eelco Dolstra f5b7991e59
Revert "autoconf: Allow overriding CFLAGS/CXXFLAGS from outside."
This reverts commit 717e821b99. It's
much more convenient to do 'make OPTIMIZE=0'.
2019-11-07 10:12:35 +01:00
Eelco Dolstra 5ff4d77f55
Precompile headers
This cuts 'make install -j6' on my laptop from 170s to 134s.
2019-11-07 10:12:35 +01:00
Maximilian Bosch 52ffe2797a
doc: Document `--dry-run` option for `nix-build` 2019-11-07 00:11:57 +01:00
Eelco Dolstra 39a2e166dd
Cleanup 2019-11-06 16:53:02 +01:00
Eelco Dolstra 35732a95bc
Disable the evalNixpkgs test
It constantly OOMs.

https://hydra.nixos.org/build/105784912
2019-11-06 10:36:06 +01:00
Eelco Dolstra 7614a127a0
Fix binaryTarball test 2019-11-06 10:35:50 +01:00
Eelco Dolstra 69326f3637
Recursive Nix: Handle concurrent client connections 2019-11-06 00:55:03 +01:00
Eelco Dolstra c119ab9db0
Enable recursive Nix using a feature
Derivations that want to use recursion should now set

  requiredSystemFeatures = [ "recursive-nix" ];

to make the daemon socket appear.

Also, Nix should be configured with "experimental-features =
recursive-nix".
2019-11-06 00:55:03 +01:00
Eelco Dolstra 2af9561316
Add a test for recursive Nix 2019-11-06 00:55:03 +01:00
Eelco Dolstra c921074c19
RestrictedStore: Implement addToStore() 2019-11-06 00:55:03 +01:00
Eelco Dolstra c4d7c76b64
Recursive Nix support
This allows Nix builders to call Nix to build derivations, with some
limitations.

Example:

  let nixpkgs = fetchTarball channel:nixos-18.03; in

  with import <nixpkgs> {};

  runCommand "foo"
    {
      buildInputs = [ nix jq ];
      NIX_PATH = "nixpkgs=${nixpkgs}";
    }
    ''
      hello=$(nix-build -E '(import <nixpkgs> {}).hello.overrideDerivation (args: { name = "hello-3.5"; })')

      $hello/bin/hello

      mkdir -p $out/bin
      ln -s $hello/bin/hello $out/bin/hello

      nix path-info -r --json $hello | jq .
    ''

This derivation makes a recursive Nix call to build GNU Hello and
symlinks it from its $out, i.e.

  # ll ./result/bin/
  lrwxrwxrwx 1 root root 63 Jan  1  1970 hello -> /nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5/bin/hello

  # nix-store -qR ./result
  /nix/store/hwwqshlmazzjzj7yhrkyjydxamvvkfd3-glibc-2.26-131
  /nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5
  /nix/store/sgmvvyw8vhfqdqb619bxkcpfn9lvd8ss-foo

This is implemented as follows:

* Before running the outer builder, Nix creates a Unix domain socket
  '.nix-socket' in the builder's temporary directory and sets
  $NIX_REMOTE to point to it. It starts a thread to process
  connections to this socket. (Thus you don't need to have nix-daemon
  running.)

* The daemon thread uses a wrapper store (RestrictedStore) to keep
  track of paths added through recursive Nix calls, to implement some
  restrictions (see below), and to do some censorship (e.g. for
  purity, queryPathInfo() won't return impure information such as
  signatures and timestamps).

* After the build finishes, the output paths are scanned for
  references to the paths added through recursive Nix calls (in
  addition to the inputs closure). Thus, in the example above, $out
  has a reference to $hello.

The main restriction on recursive Nix calls is that they cannot do
arbitrary substitutions. For example, doing

  nix-store -r /nix/store/kmwd1hq55akdb9sc7l3finr175dajlby-hello-2.10

is forbidden unless /nix/store/kmwd... is in the inputs closure or
previously built by a recursive Nix call. This is to prevent
irreproducible derivations that have hidden dependencies on
substituters or the current store contents. Building a derivation is
fine, however, and Nix will use substitutes if available. In other
words, the builder has to present proof that it knows how to build a
desired store path from scratch by constructing a derivation graph for
that path.

Probably we should also disallow instantiating/building fixed-output
derivations (specifically, those that access the network, but
currently we have no way to mark fixed-output derivations that don't
access the network). Otherwise sandboxed derivations can bypass
sandbox restrictions and access the network.

When sandboxing is enabled, we make paths appear in the sandbox of the
builder by entering the mount namespace of the builder and
bind-mounting each path. This is tricky because we do a pivot_root()
in the builder to change the root directory of its mount namespace,
and thus the host /nix/store is not visible in the mount namespace of
the builder. To get around this, just before doing pivot_root(), we
branch a second mount namespace that shares its /nix/store mountpoint
with the parent.

Recursive Nix currently doesn't work on macOS in sandboxed mode
(because we can't change the sandbox policy of a running build) and in
non-root mode (because setns() barfs).
2019-11-06 00:52:38 +01:00
Eelco Dolstra b874272f7a
Make --enable-gc the default 2019-11-06 00:46:37 +01:00
Eelco Dolstra d823381c0a
Merge branch 'fix/nix-doctor-output' of https://github.com/bhipple/nix 2019-11-05 16:04:40 +01:00
Eelco Dolstra b4e260d887
Disable shellcheck
It's broken at the moment: https://hydra.nixos.org/build/105746055

Also it pulls in GHC which is a pretty big dependency.
2019-11-05 16:00:30 +01:00
Eelco Dolstra 81a9b93689
Fix manual build 2019-11-05 11:21:32 +01:00
Eelco Dolstra 852554bb16
Merge branch 'nix-repl-e' of https://github.com/zimbatm/nix 2019-11-05 11:20:53 +01:00
Eelco Dolstra 7876027071
Merge pull request #3193 from matthewbauer/patch-11
Update man to show that nix-shell allows --arg
2019-11-05 11:18:24 +01:00
Eelco Dolstra 78b8203e50
Merge pull request #3180 from kevinastock/patch-1
docs: fix upper bound on number of consumed cores
2019-11-05 11:17:02 +01:00
Eelco Dolstra 376802c9b8
Merge pull request #3199 from kevinastock/patch-2
docs: xref doesn't render in title
2019-11-05 11:16:15 +01:00
Eelco Dolstra e1725ba946
Fix VM tests 2019-11-05 11:12:25 +01:00
Eelco Dolstra 6b708711f5
Merge branch 'switch-to-19.09' of https://github.com/Ericson2314/nix 2019-11-05 10:32:11 +01:00
Eelco Dolstra 1b600ecd14
Don't use SOCK_CLOEXEC on macOS
https://hydra.nixos.org/build/105428308
2019-11-05 10:25:09 +01:00
Eelco Dolstra 3770f5c944
Merge pull request #3206 from kevinastock/patch-3
docs: correct default location of log directory
2019-11-04 22:30:07 +01:00
Kevin Stock cea05e5ee7
docs: correct default location of log directory 2019-11-04 16:23:03 -05:00
Eelco Dolstra f5a46ef0b1
Merge pull request #3202 from kraem/master
Update nix eval --help msg to not include deprecated command
2019-11-04 09:34:30 +01:00
Eelco Dolstra 8ec1b1e7b8
Merge pull request #3203 from hvdijk/prefetch-progress
Fix progress bar when nix-prefetch-url is piped.
2019-11-04 09:28:17 +01:00
Harald van Dijk c935ad3f02
Fix progress bar when nix-prefetch-url is piped.
The intent of the code was that if the window size cannot be determined,
it would be treated as having the maximum possible size. Because of a
missing assignment, it was actually treated as having a width of 0.

The reason the width could not be determined was that it was obtained
from stdout, not stderr, even though the printing was done to stderr.

This commit addresses both issues.
2019-11-03 21:46:59 +00:00
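A sketch of the two fixes combined (illustrative, not the actual progress-bar code): query the window size from stderr, the stream the bar is written to, and treat failure as "maximum width" rather than zero.

```cpp
#include <limits>
#include <sys/ioctl.h>
#include <unistd.h>

// Width of the terminal the progress bar is drawn on.
unsigned getWindowWidth() {
    struct winsize ws;
    // Ask stderr, not stdout: stdout may be a pipe while stderr is the tty.
    if (ioctl(STDERR_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0)
        return ws.ws_col;
    // Unknown size: behave as if the window were maximally wide, not width 0.
    return std::numeric_limits<unsigned>::max();
}

int main() { return getWindowWidth() > 0 ? 0 : 1; }
```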
kraem dcd7a26063
Update nix eval --help msg to not include deprecated command 2019-11-03 18:47:28 +01:00
Kevin Stock 808cb6444e
docs: xref doesn't render in title
The `post-build-hook` text currently appears in the index, but not on the actual title line of the section; this follows the pattern used in a previous section to get a reference into a title.
2019-11-02 17:55:53 -04:00
Matthew Bauer 3e85c57a6c Pass --static flag to pkg-config when necessary 2019-11-01 13:27:40 -04:00
Matthew Bauer f1d4ba2afd
Update man to show that nix-shell allows --arg 2019-11-01 13:25:15 -04:00
Eelco Dolstra 06f9364e5f
Merge pull request #3192 from ng-0/ng0/issue3186
include netinet/in.h in src/nix/main.cc
2019-11-01 16:17:30 +01:00
ng0 b811bd2172 include netinet/in.h in src/nix/main.cc
Fixes #3186
2019-11-01 14:09:42 +00:00
Eelco Dolstra 6c8d0133ef
Merge pull request #3187 from Mic92/travis
travis: enable linux builds
2019-10-31 17:40:57 +01:00
Jörg Thalheim f1782642d3
travis: enable linux builds
Also disable email to not notify the whole NixOS community about build failures
2019-10-31 16:37:33 +00:00
Eelco Dolstra 6bff1aa46d
Merge pull request #3182 from bhipple/fixup/comments
Minor updates to inline comments
2019-10-31 14:14:35 +01:00
Eelco Dolstra 4e840fc541
Merge pull request #3179 from dtzWill/fix/struct-class-mismatch-minor
minor: fix mismatch of struct/class forward decl of 'Source'
2019-10-31 14:03:04 +01:00
Benjamin Hipple 80d5ec6ff4 Minor updates to inline comments
Add missing docstring on InstallableCommand. Also, some of these were wrapped
when they're right next to a line longer than the unwrapped line, so we can just
unwrap them to save vertical space.
2019-10-31 05:56:37 -04:00
Kevin Stock 99aac72a16
docs: fix upper bound on number of consumed cores 2019-10-30 16:53:04 -04:00
Will Dietz 0e9b72e097 minor: fix mismatch of struct/class forward decl of 'Source'
Fixes the following warning and the indicated potential issue:

src/libstore/worker-protocol.hh:66:1: warning: class 'Source' was previously declared as a struct; this is valid, but may result in linker errors
under the Microsoft C++ ABI [-Wmismatched-tags]

(cherry picked from commit 6e1bb04870b1b723282d32182af286646f13bf3c)
2019-10-30 14:39:01 -05:00
Eelco Dolstra e5319a87ce
queryPathInfoUncached(): Return const ValidPathInfo 2019-10-29 13:53:04 +01:00
Eelco Dolstra 992a2ad475
Move addToStoreFromDump to Store 2019-10-29 13:38:24 +01:00
Eelco Dolstra 05819d013f
Don't create a Store in processConnection() 2019-10-29 13:36:19 +01:00
Eelco Dolstra 63b99af85a
Move Unix domain socket creation to libutil
Also drop multithread-unfriendly hacks like doing a temporary
chmod/umask.
2019-10-29 13:30:51 +01:00
Eelco Dolstra 2d37e88319
Move most of the daemon implementation to libstore 2019-10-29 13:25:33 +01:00
Eelco Dolstra 95c727caef
Remove the check against concurrent builds in the same process 2019-10-29 12:43:20 +01:00
zimbatm 9a25059656
findDerivationFilename: add FIXME 2019-10-28 21:40:02 +01:00
zimbatm d407f4d15f
nix repl: also handle lambda edit 2019-10-28 21:37:22 +01:00
zimbatm 3774fe55fd
editorFor: take a pos object instead 2019-10-28 21:36:34 +01:00
zimbatm ec448f8bb6
libexpr: findDerivationFilename return Pos instead of tuple 2019-10-28 21:29:54 +01:00
Eelco Dolstra f7ce80f90a
Factor out linkOrCopy() 2019-10-27 18:19:13 +01:00
Eelco Dolstra f1c0b2c0e1
Add O(1)-memory copyPath() function 2019-10-27 18:18:58 +01:00
Eelco Dolstra 3913afdd69
Simplification 2019-10-27 18:00:09 +01:00
Eelco Dolstra 0e459d79a6
Merge branch 'issue-3147-inNixShell-arg' of https://github.com/hercules-ci/nix 2019-10-27 17:10:19 +01:00
Robert Hensing 9d612c393a Add inNixShell = true to nix-shell auto-call
This is an alternative to the IN_NIX_SHELL environment variable,
allowing the expression to adapt itself to nix-shell without
triggering those adaptations when used as a dependency of another
shell.

Closes #3147
2019-10-27 13:16:02 +01:00
Eelco Dolstra e012384fe9
Merge branch 'tojson-tostring-fix' of https://github.com/mayflower/nix 2019-10-27 12:18:35 +01:00
Robin Gloster e583df5280
builtins.toJSON: fix __toString usage 2019-10-27 10:15:51 +01:00
John Ericson 70cab0587d Switch to nixpkgs 19.09 2019-10-25 07:23:05 -04:00
Eelco Dolstra 2f96a89646 install-multi-user.sh: Remove unused variables
https://hydra.nixos.org/build/104119659
2019-10-23 21:24:21 +02:00
zimbatm 59c7249769
libexpr: add findDerivationFilename
extract the derivation to filename:lineno heuristic
2019-10-23 17:21:16 +02:00
zimbatm 207a537343
libutil: add editorFor heuristic 2019-10-23 16:48:28 +02:00
Eelco Dolstra b421895c1e
Merge pull request #3161 from schlarpc/patch-1
Remove superfluous IAM action for S3 cache
2019-10-23 16:34:54 +02:00
Eelco Dolstra dfe1fdf9e8
Merge pull request #3159 from earksiinni/docs-import-brackets
Document import <path> syntax
2019-10-23 16:33:58 +02:00
zimbatm 73ff84f6a8
nix repl: add :edit command
This allows a repl-centric workflow for working on nixpkgs.

Usage:

    :edit <package> - heuristic that finds the package file path

    :edit <path> - just open the editor on the file path

Once invoked, `nix repl` will open $EDITOR on that file path. Once the
editor exits, `nix repl` will automatically reload itself.
2019-10-23 16:09:42 +02:00
Chaz Schlarp c92ea927e5
Remove superfluous IAM action for S3 cache
`s3:ListObjects` isn't a real IAM action, but _is_ the name of an S3 API method. `s3:ListBucket` is the relevant action for that method.

https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazons3.html
2019-10-22 16:04:49 -07:00
Ersin Akinci f107a27002 Tweak path hint 2019-10-21 14:16:55 -07:00
Ersin Akinci b7a936224e Add hint about path in builtins.import 2019-10-21 14:11:26 -07:00
Ersin Akinci 9be7787ec0 Revert "Document import <path> syntax"
This reverts commit d8730fb86f.
2019-10-21 13:12:41 -07:00
Eelco Dolstra 629b9b0049 Mark content-addressable paths with references as experimental 2019-10-21 18:05:31 +02:00
Eelco Dolstra e68736936a nix make-content-addressable: Add examples 2019-10-21 17:58:17 +02:00
Eelco Dolstra d77970fde7 Fix build 2019-10-21 17:49:16 +02:00
Eelco Dolstra 0abb3ad537 Allow content-addressable paths to have references
This adds a command 'nix make-content-addressable' that rewrites the
specified store paths into content-addressable paths. The advantage of
such paths is that 1) they can be imported without signatures; 2) they
can enable deduplication in cases where derivation changes do not
cause output changes (apart from store path hashes).

For example,

  $ nix make-content-addressable -r nixpkgs.cowsay
  rewrote '/nix/store/g1g31ah55xdia1jdqabv1imf6mcw0nb1-glibc-2.25-49' to '/nix/store/48jfj7bg78a8n4f2nhg269rgw1936vj4-glibc-2.25-49'
  ...
  rewrote '/nix/store/qbi6rzpk0bxjw8lw6azn2mc7ynnn455q-cowsay-3.03+dfsg1-16' to '/nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16'

We can then copy the resulting closure to another store without
signatures:

  $ nix copy --trusted-public-keys '' --to ~/my-nix /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16

In order to support self-references in content-addressable paths,
these paths are hashed "modulo" self-references, meaning that
self-references are zeroed out during hashing. Somewhat annoyingly,
this means that the NAR hash stored in the Nix database is no longer
necessarily equal to the output of "nix hash-path"; for
content-addressable paths, you need to pass the --modulo flag:

  $ nix path-info --json /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16  | jq -r .[].narHash
  sha256:0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw

  $ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
  1ggznh07khq0hz6id09pqws3a8q9pn03ya3c03nwck1kwq8rclzs

  $ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 --modulo iq6g2x4q62xp7y7493bibx0qn5w7xz67
  0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
2019-10-21 17:47:24 +02:00
Eelco Dolstra aabf5c86c9
Add experimental-features setting
Experimental features are now opt-in. There is currently one
experimental feature: "nix-command" (which enables the "nix"
command). This will allow us to merge experimental features more
quickly, without committing to supporting them indefinitely.

Typical usage:

$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello

(cherry picked from commit 8e478c2341,
without the "flakes" feature)
2019-10-21 13:34:44 +02:00
Eelco Dolstra 389a2cebed
SourceExprCommand::getSourceExpr(): Allocate more space
Fixes #3140.
2019-10-21 13:14:39 +02:00
Eelco Dolstra 37e45dac8c
Merge pull request #3158 from steshaw/master
Fix unset variable in installer
2019-10-21 12:33:01 +02:00
Ersin Akinci d8730fb86f Document import <path> syntax 2019-10-20 19:08:05 -07:00
Steven Shaw f0ec4b4ce4
Fix unset variable in installer 2019-10-19 13:26:06 +10:00
xbreak 7c568d4c6e Downloader: Warn if no trusted CA file has been configured 2019-10-18 19:08:33 +00:00
Eelco Dolstra ab4dd1d783
Merge pull request #2291 from Taneb/master
nix-channel documentation: don't suggest deprecated function
2019-10-17 12:53:42 +02:00
Matthew Bauer 96c84937c4 Move tmpDirInSandbox to initTmpDir 2019-10-13 16:41:49 -04:00
Matthew Bauer 499b038875 Fix sandbox fallback settings
The tmpDirInSandbox is different in sandboxed vs. non-sandboxed builds.
Since we don’t know ahead of time here whether sandboxing is enabled,
we need to reset all of the env vars we’ve set previously. This fixes
the issue encountered in https://github.com/NixOS/nixpkgs/issues/70856.
2019-10-12 19:22:13 -04:00
Eelco Dolstra 906d56a96b
ssh-ng: Don't set CPU affinity on the remote
Fixes #3138.
2019-10-11 18:49:46 +02:00
Eelco Dolstra 7d8c99eb43
Merge pull request #3114 from matthewbauer/add-libatomic
Add libatomic for 32-bit ARM
2019-10-11 11:01:37 +02:00
Eelco Dolstra 95cf23ee7c
nix verify: Fix uninitialized variable 2019-10-10 15:03:01 +02:00
Eelco Dolstra c3aaf3b8da
nix-env: Ignore failures creating ~/.nix-profile and ~/.nix-defexpr
https://hydra.nixos.org/build/102803093
2019-10-10 09:14:05 +02:00
Eelco Dolstra bda64a2b0f
Doh
https://hydra.nixos.org/build/102803044
2019-10-10 00:12:30 +02:00
Eelco Dolstra 94dfb6b1fe
Merge pull request #3136 from NixOS/no-world-writable
Remove world-writability from {profiles,gcroots}/per-user
2019-10-09 23:35:45 +02:00
Eelco Dolstra 20eec802ff
Force per-user group to a known value 2019-10-09 23:35:02 +02:00
Eelco Dolstra 9277e72cb0
Typo 2019-10-09 23:35:02 +02:00
Eelco Dolstra d7bae5680f
Go back to 755 permission on per-user directories
700 is pointless since the store is world-readable anyway. And
per-user/root/channels must be world-readable.
2019-10-09 23:35:02 +02:00
Eelco Dolstra c9159f86cc
nix-env: Create ~/.nix-defexpr automatically 2019-10-09 23:35:02 +02:00
Eelco Dolstra 61a6176aca
nix-profile.sh: Remove coreutils dependency 2019-10-09 23:35:02 +02:00
Eelco Dolstra 9348f9291e
nix-env: Create ~/.nix-profile automatically 2019-10-09 23:35:01 +02:00
Eelco Dolstra 26762ceb86
nix-profile.sh: Don't create .nix-channels
This is already done by the installer, so no need to do it again.
2019-10-09 23:35:01 +02:00
Eelco Dolstra c43d9f6131
Remove some redundant initialization 2019-10-09 23:35:01 +02:00
Eelco Dolstra 5a303093dc
Remove world-writability from per-user directories
'nix-daemon' now creates subdirectories for users when they first
connect.

Fixes #509 (CVE-2019-17365).
Should also fix #3127.
2019-10-09 23:34:48 +02:00
Eelco Dolstra 4331eeb13d
Filter ANSI escape sequences in -L output
Otherwise, builds like NixOS VM tests may leave the terminal in a
weird state and do resets.
2019-10-09 23:25:06 +02:00
Eelco Dolstra 55bba8e4f5
Make std::uncaught_exception warning less noisy 2019-10-09 23:04:11 +02:00
Eelco Dolstra 926d3e5bb0
Fix Bison 2.4 warning 2019-10-09 22:57:37 +02:00
Eelco Dolstra 99b73fb507
OCD performance fix: {find,count}+insert => insert 2019-10-09 16:06:29 +02:00
Eelco Dolstra e6e61f0a54
getSourceExpr(): Handle channels
Fixes #1892.
Fixes #1865.
Fixes #3119.
2019-10-09 15:36:51 +02:00
Eelco Dolstra 08ad9714e1
Merge pull request #3132 from matthewbauer/handle-sandbox-shell
Handle empty sandbox_shell
2019-10-09 14:52:51 +02:00
Eelco Dolstra 7c74f075f4
nix search: Don't quietly ignore errors 2019-10-09 14:46:58 +02:00
Eelco Dolstra 64d8872900
nix-build: Fix compilation 2019-10-09 14:46:44 +02:00
Eelco Dolstra 335504a58e
Merge pull request #3133 from callahad/launchd
Make nix-daemon.plist less fragile on macOS
2019-10-09 14:32:18 +02:00
Dan Callahan 8c4a5e7ba1
Make nix-daemon.plist less fragile on macOS
We're calling `wait4path` on the full, resolved `@bindir@/nix-daemon` path.

That means we're hardcoding something like:

    /bin/wait4path /nix/store/zs9c5xhp3zv9p23qnjxp87nl5injsi1i-nix-2.3/bin/nix-daemon && /nix/var/nix/profiles/default/bin/nix-daemon

That seems unnecessarily fragile.

It might be better to wait4path on the path we intend to call.
2019-10-09 12:52:01 +01:00
Eelco Dolstra 7bb5ddbe15
Merge pull request #3128 from matthewbauer/dont-symlink-launchagent
Don't symlink org.nixos.nix-daemon.plist in installer
2019-10-09 09:37:11 +02:00
Eelco Dolstra 6b3a6fe5a2
Merge pull request #3131 from matthewbauer/dont-source-bashrc-in-pure-mode
Don’t source bashrc in pure mode
2019-10-09 09:25:44 +02:00
Matthew Bauer 199e888785 Handle empty sandbox_shell
Previously, SANDBOX_SHELL was set to empty when unavailable. This
caused issues when actually generating the sandbox. Instead, just set
SANDBOX_SHELL when --with-sandbox-shell= is non-empty. Alternative
implementation to https://github.com/NixOS/nix/pull/3038.
2019-10-08 23:12:54 -04:00
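
Conceptually, a hedged sketch (the actual change is spread over configure.ac and the build code): define SANDBOX_SHELL only when --with-sandbox-shell= was given a non-empty value, and have the C++ side test for the macro rather than compare against an empty string.

    #include <optional>
    #include <string>

    // SANDBOX_SHELL is only #defined when a usable shell was configured.
    std::optional<std::string> sandboxShell()
    {
    #ifdef SANDBOX_SHELL
        return std::string(SANDBOX_SHELL);
    #else
        return std::nullopt;   // no shell available: skip mounting one into the sandbox
    #endif
    }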
Matthew Bauer 65f6d5db6f Don’t source bashrc in pure mode
Pure mode should not try to source the user’s bashrc file. It may
contain many impurities that the user does not expect to end up in their
shell.

Fixes #3090
2019-10-08 22:41:59 -04:00
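
A small sketch of the intent (names are illustrative; nix-shell assembles its rc script as a string): the line that sources ~/.bashrc is only emitted when the shell is not pure.

    #include <string>

    std::string makeRcScript(bool pure)
    {
        std::string rc;
        if (!pure)
            rc += "[ -e ~/.bashrc ] && source ~/.bashrc;\n";   // impure: keep the user's own setup
        rc += "# ... rest of the environment setup ...\n";
        return rc;
    }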
Matthew Bauer d4e51aac08 Make preexisting Nix install a warning, not a failure
In the multi-user install script, we originally made sure no previous
references to Nix existed. This prevented any previous installs from
contaminating the new install. However, some users need the ability to
repair their existing Nix installation without uninstalling all
references to Nix. This change allows users with existing Nix
installations to use the installer, while still outputting a warning
message about the dangers of doing so. As a result, the multi-user install
script now works much more like the single-user install script has worked
in the past.

This is a requirement for macOS Catalina users now that
/Library/LaunchDaemons/org.nixos.nix-daemon.plist is no longer managed by
the Nix store. If the .plist ever changes, all users will need to rerun
this install script to pick up the new version; otherwise, changes to the
launch daemon will require manual intervention.
2019-10-08 21:53:06 -04:00
Matthew Bauer 0847f2f1b3 Copy instead of linking launch agent
On Catalina, the /nix filesystem might not be mounted at start time.
To avoid the service failing to start, we need to keep the launch agent
outside of the Nix store. A wait4path call holds until our /nix dir is
mounted.

Fixes #3125.
2019-10-08 21:52:17 -04:00
Eelco Dolstra a7e9286359
Merge pull request #3126 from PyroLagus/fix-typos
Fix typos in the Nix Manual.
2019-10-08 20:40:24 +02:00
Danny Bautista 00a567588e Fix typos in the Nix Manual. 2019-10-08 14:02:40 -04:00
Eelco Dolstra 8ccae55dab
Merge pull request #3120 from samdoshi/remove-search-verbose
nix search: remove verbose example
2019-10-07 14:36:44 +02:00
Sam Doshi 6f6cb5e388 nix search: remove verbose example 2019-10-07 11:40:42 +01:00
Benjamin Hipple c5bd564c69 nix doctor: add more logging output to checks
When running nix doctor on a healthy system, it just prints the store URI and
nothing else. This makes it unclear whether the system is in a good state and
what check(s) it actually ran, since some of the checks are optional depending
on the store type.

This commit updates nix doctor to print a colored log message for every check
that it runs, explicitly stating whether that check was a PASS or FAIL, to make
it clear to the user whether the system passed its checkup with the doctor.

Fixes #3084
2019-10-06 16:57:57 -04:00
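
Something along these lines (a sketch, not the actual nix doctor code): every check reports PASS or FAIL, with a bit of color, so a healthy run looks visibly different from a silent no-op.

    #include <iostream>
    #include <string>

    void reportCheck(const std::string & name, bool ok)
    {
        const char * colour = ok ? "\033[32m" : "\033[31m";   // green / red
        std::cout << "[" << colour << (ok ? "PASS" : "FAIL") << "\033[0m] " << name << "\n";
    }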
Eelco Dolstra 93b1ce1ac5
Revert "std::uncaught_exception() -> std::uncaught_exceptions()"
This reverts commit 6b83174fff because
it doesn't work on macOS yet.

https://hydra.nixos.org/build/102617587
2019-10-04 16:34:59 +02:00
Eelco Dolstra 15e70c662e
Fix indentation 2019-10-02 16:26:15 +02:00
Matthew Bauer b1c34152fe Use more robust test for libatomics
Taken from Mesa configure script:

https://github.com/mesa3d/mesa/blob/17.2/configure.ac#L405-L427
2019-10-01 21:22:18 -04:00
Matthew Bauer 74b4737d8f Add libatomic for 32-bit ARM
Fixes #3113
2019-10-01 21:07:32 -04:00
Eelco Dolstra 4e60c5ec65
Merge pull request #3112 from zimbatm/fetchTarball-with-chroot
Fix fetchTarball with chroot stores
2019-10-01 11:33:57 +02:00
Eelco Dolstra 168a887916
Fix fetchTarball with chroot stores
Fixes #2405.
2019-10-01 07:51:06 +00:00
Domen Kožar 2d2769f68c
Merge pull request #2338 from bobvanderlinden/pr-cannot-delete-alive-why
mention `nix-store --query --roots` when a path cannot be deleted
2019-09-30 14:06:52 +02:00
Domen Kožar 043365c2fb
Merge pull request #3080 from Infinisil/tryEval-docs
docs: Note that tryEval doesn't do deep evaluation
2019-09-30 14:03:16 +02:00
Domen Kožar a3bb929798
Merge pull request #3106 from JosephLucas/patch-1
Update garbage-collection.xml readability
2019-09-30 14:02:26 +02:00
Joseph Lucas 10bfc5c0d0
Update garbage-collection.xml readability
1. Remove a stray space (typo)
2. Simplify negative phrasing by using affirmative style
2019-09-23 13:18:59 +00:00
Silvan Mosberger e4ea3e0306
docs: Note that tryEval doesn't do deep evaluation 2019-09-03 07:32:44 +02:00
Bob van der Linden 58a85fa462 mention nix-store --query --roots when a path cannot be deleted 2018-08-08 21:21:21 +02:00
Nathan van Doorn 41f38fbb4b
nix-channel documentation: don't suggest deprecated function
Running `nix-instantiate --eval -E '(import <nixpkgs> {}).lib.nixpkgsVersion'` emits a warning
```
trace: `lib.nixpkgsVersion` is deprecated, use `lib.version` instead!
```
2018-07-16 10:00:42 +01:00
230 changed files with 7419 additions and 25006 deletions

6
.gitignore vendored
View File

@ -4,9 +4,10 @@ perl/Makefile.config
# /
/aclocal.m4
/autom4te.cache
/precompiled-headers.h.gch
/precompiled-headers.h.pch
/config.*
/configure
/nix.spec
/stamp-h1
/svn-revision
/libtool
@ -84,6 +85,7 @@ perl/Makefile.config
/tests/restricted-innocent
/tests/shell
/tests/shell.drv
/tests/config.nix
# /tests/lang/
/tests/lang/*.out
@ -117,3 +119,5 @@ GPATH
GRTAGS
GSYMS
GTAGS
nix-rust/target

View File

@ -1,2 +1,8 @@
os: osx
script: ./tests/install-darwin.sh
matrix:
include:
- language: osx
script: ./tests/install-darwin.sh
- language: nix
script: nix-build release.nix -A build.x86_64-linux
notifications:
email: false

View File

@ -1,5 +1,7 @@
makefiles = \
mk/precompiled-headers.mk \
local.mk \
nix-rust/local.mk \
src/libutil/local.mk \
src/libstore/local.mk \
src/libmain/local.mk \
@ -15,8 +17,16 @@ makefiles = \
tests/local.mk \
tests/plugins/local.mk
GLOBAL_CXXFLAGS += -g -Wall -include config.h
-include Makefile.config
OPTIMIZE = 1
ifeq ($(OPTIMIZE), 1)
GLOBAL_CXXFLAGS += -O3
else
GLOBAL_CXXFLAGS += -O0
endif
include mk/lib.mk
GLOBAL_CXXFLAGS += -g -Wall -include config.h -std=c++17

View File

@ -18,6 +18,7 @@ SODIUM_LIBS = @SODIUM_LIBS@
LIBLZMA_LIBS = @LIBLZMA_LIBS@
SQLITE3_LIBS = @SQLITE3_LIBS@
LIBBROTLI_LIBS = @LIBBROTLI_LIBS@
LIBARCHIVE_LIBS = @LIBARCHIVE_LIBS@
EDITLINE_LIBS = @EDITLINE_LIBS@
bash = @bash@
bindir = @bindir@

View File

@ -50,14 +50,11 @@ AC_DEFINE_UNQUOTED(SYSTEM, ["$system"], [platform identifier ('cpu-os')])
test "$localstatedir" = '${prefix}/var' && localstatedir=/nix/var
# Set default flags for nix (as per AC_PROG_CC/CXX docs),
# while still allowing the user to override them from the command line.
: ${CFLAGS="-O3"}
: ${CXXFLAGS="-O3"}
CFLAGS=
CXXFLAGS=
AC_PROG_CC
AC_PROG_CXX
AC_PROG_CPP
AX_CXX_COMPILE_STDCXX_17([noext], [mandatory])
AC_CHECK_TOOL([AR], [ar])
@ -120,26 +117,15 @@ fi
])
NEED_PROG(bash, bash)
NEED_PROG(patch, patch)
AC_PATH_PROG(xmllint, xmllint, false)
AC_PATH_PROG(xsltproc, xsltproc, false)
AC_PATH_PROG(flex, flex, false)
AC_PATH_PROG(bison, bison, false)
NEED_PROG(sed, sed)
NEED_PROG(tar, tar)
NEED_PROG(bzip2, bzip2)
NEED_PROG(gzip, gzip)
NEED_PROG(xz, xz)
AC_PATH_PROG(dot, dot)
AC_PATH_PROG(lsof, lsof, lsof)
NEED_PROG(cat, cat)
NEED_PROG(tr, tr)
AC_ARG_WITH(coreutils-bin, AC_HELP_STRING([--with-coreutils-bin=PATH],
[path of cat, mkdir, etc.]),
coreutils=$withval, coreutils=$(dirname $cat))
AC_SUBST(coreutils)
AC_SUBST(coreutils, [$(dirname $(type -p cat))])
AC_ARG_WITH(store-dir, AC_HELP_STRING([--with-store-dir=PATH],
@ -157,8 +143,33 @@ AX_BOOST_BASE([1.66], [CXXFLAGS="$BOOST_CPPFLAGS $CXXFLAGS"], [AC_MSG_ERROR([Nix
# ends up with LDFLAGS being empty, so we set it afterwards.
LDFLAGS="$BOOST_LDFLAGS $LDFLAGS"
# On some platforms, new-style atomics need a helper library
AC_MSG_CHECKING(whether -latomic is needed)
AC_LINK_IFELSE([AC_LANG_SOURCE([[
#include <stdint.h>
uint64_t v;
int main() {
return (int)__atomic_load_n(&v, __ATOMIC_ACQUIRE);
}]])], GCC_ATOMIC_BUILTINS_NEED_LIBATOMIC=no, GCC_ATOMIC_BUILTINS_NEED_LIBATOMIC=yes)
AC_MSG_RESULT($GCC_ATOMIC_BUILTINS_NEED_LIBATOMIC)
if test "x$GCC_ATOMIC_BUILTINS_NEED_LIBATOMIC" = xyes; then
LIBS="-latomic $LIBS"
fi
# Look for OpenSSL, a required dependency.
PKG_PROG_PKG_CONFIG
AC_ARG_ENABLE(shared, AC_HELP_STRING([--enable-shared],
[Build shared libraries for Nix [default=yes]]),
shared=$enableval, shared=yes)
if test "$shared" = yes; then
AC_SUBST(BUILD_SHARED_LIBS, 1, [Whether to build shared libraries.])
else
AC_SUBST(BUILD_SHARED_LIBS, 0, [Whether to build shared libraries.])
PKG_CONFIG="$PKG_CONFIG --static"
fi
# Look for OpenSSL, a required dependency. FIXME: this is only (maybe)
# used by S3BinaryCacheStore.
PKG_CHECK_MODULES([OPENSSL], [libcrypto], [CXXFLAGS="$OPENSSL_CFLAGS $CXXFLAGS"])
@ -167,12 +178,12 @@ AC_CHECK_LIB([bz2], [BZ2_bzWriteOpen], [true],
[AC_MSG_ERROR([Nix requires libbz2, which is part of bzip2. See https://web.archive.org/web/20180624184756/http://www.bzip.org/.])])
AC_CHECK_HEADERS([bzlib.h], [true],
[AC_MSG_ERROR([Nix requires libbz2, which is part of bzip2. See https://web.archive.org/web/20180624184756/http://www.bzip.org/.])])
# Checks for libarchive
PKG_CHECK_MODULES([LIBARCHIVE], [libarchive >= 3.1.2], [CXXFLAGS="$LIBARCHIVE_CFLAGS $CXXFLAGS"])
# Look for SQLite, a required dependency.
PKG_CHECK_MODULES([SQLITE3], [sqlite3 >= 3.6.19], [CXXFLAGS="$SQLITE3_CFLAGS $CXXFLAGS"])
# Look for libcurl, a required dependency.
PKG_CHECK_MODULES([LIBCURL], [libcurl], [CXXFLAGS="$LIBCURL_CFLAGS $CXXFLAGS"])
@ -195,12 +206,15 @@ PKG_CHECK_MODULES([SODIUM], [libsodium],
have_sodium=1], [have_sodium=])
AC_SUBST(HAVE_SODIUM, [$have_sodium])
# Look for liblzma, a required dependency.
PKG_CHECK_MODULES([LIBLZMA], [liblzma], [CXXFLAGS="$LIBLZMA_CFLAGS $CXXFLAGS"])
AC_CHECK_LIB([lzma], [lzma_stream_encoder_mt],
[AC_DEFINE([HAVE_LZMA_MT], [1], [xz multithreaded compression support])])
# Look for zlib, a required dependency.
PKG_CHECK_MODULES([ZLIB], [zlib], [CXXFLAGS="$ZLIB_CFLAGS $CXXFLAGS"])
AC_CHECK_HEADER([zlib.h],[:],[AC_MSG_ERROR([could not find the zlib.h header])])
LDFLAGS="-lz $LDFLAGS"
# Look for libbrotli{enc,dec}.
PKG_CHECK_MODULES([LIBBROTLI], [libbrotlienc libbrotlidec], [CXXFLAGS="$LIBBROTLI_CFLAGS $CXXFLAGS"])
@ -243,8 +257,8 @@ fi
# Whether to use the Boehm garbage collector.
AC_ARG_ENABLE(gc, AC_HELP_STRING([--enable-gc],
[enable garbage collection in the Nix expression evaluator (requires Boehm GC) [default=no]]),
gc=$enableval, gc=no)
[enable garbage collection in the Nix expression evaluator (requires Boehm GC) [default=yes]]),
gc=$enableval, gc=yes)
if test "$gc" = yes; then
PKG_CHECK_MODULES([BDW_GC], [bdw-gc])
CXXFLAGS="$BDW_GC_CFLAGS $CXXFLAGS"
@ -290,16 +304,6 @@ AC_ARG_WITH(sandbox-shell, AC_HELP_STRING([--with-sandbox-shell=PATH],
sandbox_shell=$withval)
AC_SUBST(sandbox_shell)
AC_ARG_ENABLE(shared, AC_HELP_STRING([--enable-shared],
[Build shared libraries for Nix [default=yes]]),
shared=$enableval, shared=yes)
if test "$shared" = yes; then
AC_SUBST(BUILD_SHARED_LIBS, 1, [Whether to build shared libraries.])
else
AC_SUBST(BUILD_SHARED_LIBS, 0, [Whether to build shared libraries.])
fi
# Expand all variables in config.status.
test "$prefix" = NONE && prefix=$ac_default_prefix
test "$exec_prefix" = NONE && exec_prefix='${prefix}'

View File

@ -1,29 +1,13 @@
# FIXME: remove this file?
let
fromEnv = var: def:
let val = builtins.getEnv var; in
if val != "" then val else def;
in rec {
shell = "@bash@";
coreutils = "@coreutils@";
bzip2 = "@bzip2@";
gzip = "@gzip@";
xz = "@xz@";
tar = "@tar@";
tarFlags = "@tarFlags@";
tr = "@tr@";
nixBinDir = fromEnv "NIX_BIN_DIR" "@bindir@";
nixPrefix = "@prefix@";
nixLibexecDir = fromEnv "NIX_LIBEXEC_DIR" "@libexecdir@";
nixLocalstateDir = "@localstatedir@";
nixSysconfDir = "@sysconfdir@";
nixStoreDir = fromEnv "NIX_STORE_DIR" "@storedir@";
# If Nix is installed in the Nix store, then automatically add it as
# a dependency to the core packages. This ensures that they work
# properly in a chroot.
chrootDeps =
if dirOf nixPrefix == builtins.storeDir then
[ (builtins.storePath nixPrefix) ]
else
[ ];
}

View File

@ -1,39 +1,12 @@
with import <nix/config.nix>;
let
builder = builtins.toFile "unpack-channel.sh"
''
mkdir $out
cd $out
xzpat="\.xz\$"
gzpat="\.gz\$"
if [[ "$src" =~ $xzpat ]]; then
${xz} -d < $src | ${tar} xf - ${tarFlags}
elif [[ "$src" =~ $gzpat ]]; then
${gzip} -d < $src | ${tar} xf - ${tarFlags}
else
${bzip2} -d < $src | ${tar} xf - ${tarFlags}
fi
if [ * != $channelName ]; then
mv * $out/$channelName
fi
'';
in
{ name, channelName, src }:
derivation {
system = builtins.currentSystem;
builder = shell;
args = [ "-e" builder ];
inherit name channelName src;
builder = "builtin:unpack-channel";
PATH = "${nixBinDir}:${coreutils}";
system = "builtin";
inherit name channelName src;
# No point in doing this remotely.
preferLocalBuild = true;
inherit chrootDeps;
}

View File

@ -36,8 +36,8 @@ to <xref linkend="conf-cores" />, unless <xref linkend="conf-cores" />
equals <literal>0</literal>, in which case <envar>NIX_BUILD_CORES</envar>
will be the total number of cores in the system.</para>
<para>The total number of consumed cores is a simple multiplication,
<xref linkend="conf-cores" /> * <envar>NIX_BUILD_CORES</envar>.</para>
<para>The maximum number of consumed cores is a simple multiplication,
<xref linkend="conf-max-jobs" /> * <envar>NIX_BUILD_CORES</envar>.</para>
<para>The balance on how to set these two independent variables depends
upon each builder's workload and hardware. Here are a few example

View File

@ -5,7 +5,7 @@
version="5.0"
>
<title>Using the <xref linkend="conf-post-build-hook" /></title>
<title>Using the <option linkend="conf-post-build-hook">post-build-hook</option></title>
<subtitle>Uploading to an S3-compatible binary cache after each build</subtitle>

View File

@ -433,7 +433,7 @@ builtins.fetchurl {
<varlistentry xml:id="conf-keep-env-derivations"><term><literal>keep-env-derivations</literal></term>
<listitem><para>If <literal>false</literal> (default), derivations
are not stored in Nix user environments. That is, the derivation
are not stored in Nix user environments. That is, the derivations of
any build-time-only dependencies may be garbage-collected.</para>
<para>If <literal>true</literal>, when you add a Nix derivation to

View File

@ -122,7 +122,7 @@ $ mount -o bind /mnt/otherdisk/nix /nix</screen>
<varlistentry><term><envar>NIX_LOG_DIR</envar></term>
<listitem><para>Overrides the location of the Nix log directory
(default <filename><replaceable>prefix</replaceable>/log/nix</filename>).</para></listitem>
(default <filename><replaceable>prefix</replaceable>/var/log/nix</filename>).</para></listitem>
</varlistentry>

View File

@ -30,6 +30,7 @@
<replaceable>attrPath</replaceable>
</arg>
<arg><option>--no-out-link</option></arg>
<arg><option>--dry-run</option></arg>
<arg>
<group choice='req'>
<arg choice='plain'><option>--out-link</option></arg>
@ -98,6 +99,10 @@ also <xref linkend="sec-common-options" />.</phrase></para>
</varlistentry>
<varlistentry><term><option>--dry-run</option></term>
<listitem><para>Show what store paths would be built or downloaded.</para></listitem>
</varlistentry>
<varlistentry xml:id='opt-out-link'><term><option>--out-link</option> /
<option>-o</option> <replaceable>outlink</replaceable></term>

View File

@ -36,6 +36,9 @@ stay up-to-date with a set of pre-built Nix expressions. A Nix
channel is just a URL that points to a place containing a set of Nix
expressions. <phrase condition="manual">See also <xref
linkend="sec-channels" />.</phrase></para>
<para>To see the list of official NixOS channels, visit <link
xlink:href="https://nixos.org/channels" />.</para>
<para>This command has the following operations:
@ -111,13 +114,13 @@ $ nix-env -iA nixpkgs.hello</screen>
<para>You can revert channel updates using <option>--rollback</option>:</para>
<screen>
$ nix-instantiate --eval -E '(import &lt;nixpkgs> {}).lib.nixpkgsVersion'
$ nix-instantiate --eval -E '(import &lt;nixpkgs> {}).lib.version'
"14.04.527.0e935f1"
$ nix-channel --rollback
switching from generation 483 to 482
$ nix-instantiate --eval -E '(import &lt;nixpkgs> {}).lib.nixpkgsVersion'
$ nix-instantiate --eval -E '(import &lt;nixpkgs> {}).lib.version'
"14.04.526.dbadfad"
</screen>

View File

@ -659,7 +659,7 @@ upgrading `mozilla-1.2' to `mozilla-1.4'</screen>
<literal>gcc-3.3.1</literal> are split into two parts: the package
name (<literal>gcc</literal>), and the version
(<literal>3.3.1</literal>). The version part starts after the first
dash not following by a letter. <varname>x</varname> is considered an
dash not followed by a letter. <varname>x</varname> is considered an
upgrade of <varname>y</varname> if their package names match, and the
version of <varname>y</varname> is higher that that of
<varname>x</varname>.</para>

View File

@ -53,7 +53,7 @@ avoided.</para>
<para>If <replaceable>hash</replaceable> is specified, then a download
is not performed if the Nix store already contains a file with the
same hash and base name. Otherwise, the file is downloaded, and an
error if signaled if the actual hash of the file does not match the
error is signaled if the actual hash of the file does not match the
specified hash.</para>
<para>This command prints the hash on standard output. Additionally,

View File

@ -39,7 +39,12 @@
<arg choice='plain'><option>--packages</option></arg>
<arg choice='plain'><option>-p</option></arg>
</group>
<arg choice='plain' rep='repeat'><replaceable>packages</replaceable></arg>
<arg choice='plain' rep='repeat'>
<group choice='req'>
<arg choice="plain"><replaceable>packages</replaceable></arg>
<arg choice="plain"><replaceable>expressions</replaceable></arg>
</group>
</arg>
</arg>
<arg><replaceable>path</replaceable></arg>
</group>
@ -189,8 +194,8 @@ also <xref linkend="sec-common-options" />.</phrase></para>
<variablelist>
<varlistentry><term><envar>NIX_BUILD_SHELL</envar></term>
<listitem><para>Shell used to start the interactive environment.
<listitem><para>Shell used to start the interactive environment.
Defaults to the <command>bash</command> found in <envar>PATH</envar>.</para></listitem>
</varlistentry>
@ -222,8 +227,9 @@ $ nix-shell '&lt;nixpkgs>' -A pan --pure \
--command 'export NIX_DEBUG=1; export NIX_CORES=8; return'
</screen>
Nix expressions can also be given on the command line. For instance,
the following starts a shell containing the packages
Nix expressions can also be given on the command line using the
<command>-E</command> and <command>-p</command> flags.
For instance, the following starts a shell containing the packages
<literal>sqlite</literal> and <literal>libX11</literal>:
<screen>
@ -238,6 +244,14 @@ $ nix-shell -p sqlite xorg.libX11
… -L/nix/store/j1zg5v…-sqlite-3.8.0.2/lib -L/nix/store/0gmcz9…-libX11-1.6.1/lib …
</screen>
Note that <command>-p</command> accepts multiple full nix expressions that
are valid in the <literal>buildInputs = [ ... ]</literal> shown above,
not only package names. So the following is also legal:
<screen>
$ nix-shell -p sqlite 'git.override { withManual = false; }'
</screen>
The <command>-p</command> flag looks up Nixpkgs in the Nix search
path. You can override it by passing <option>-I</option> or setting
<envar>NIX_PATH</envar>. For example, the following gives you a shell

View File

@ -243,9 +243,10 @@
<varlistentry><term><option>--arg</option> <replaceable>name</replaceable> <replaceable>value</replaceable></term>
<listitem><para>This option is accepted by
<command>nix-env</command>, <command>nix-instantiate</command> and
<command>nix-build</command>. When evaluating Nix expressions, the
expression evaluator will automatically try to call functions that
<command>nix-env</command>, <command>nix-instantiate</command>,
<command>nix-shell</command> and <command>nix-build</command>.
When evaluating Nix expressions, the expression evaluator will
automatically try to call functions that
it encounters. It can automatically call functions for which every
argument has a <link linkend='ss-functions'>default value</link>
(e.g., <literal>{ <replaceable>argName</replaceable> ?
@ -322,7 +323,14 @@
Nix expressions to be parsed and evaluated, rather than as a list
of file names of Nix expressions.
(<command>nix-instantiate</command>, <command>nix-build</command>
and <command>nix-shell</command> only.)</para></listitem>
and <command>nix-shell</command> only.)</para>
<para>For <command>nix-shell</command>, this option is commonly used
to give you a shell in which you can build the packages returned
by the expression. If you want to get a shell which contain the
<emphasis>built</emphasis> packages ready for use, give your
expression to the <command>nix-shell -p</command> convenience flag
instead.</para></listitem>
</varlistentry>

View File

@ -11,7 +11,7 @@ attributes.</para>
<variablelist>
<varlistentry><term><varname>allowedReferences</varname></term>
<varlistentry xml:id="adv-attr-allowedReferences"><term><varname>allowedReferences</varname></term>
<listitem><para>The optional attribute
<varname>allowedReferences</varname> specifies a list of legal
@ -32,7 +32,7 @@ allowedReferences = [];
</varlistentry>
<varlistentry><term><varname>allowedRequisites</varname></term>
<varlistentry xml:id="adv-attr-allowedRequisites"><term><varname>allowedRequisites</varname></term>
<listitem><para>This attribute is similar to
<varname>allowedReferences</varname>, but it specifies the legal
@ -50,7 +50,7 @@ allowedRequisites = [ foobar ];
</varlistentry>
<varlistentry><term><varname>disallowedReferences</varname></term>
<varlistentry xml:id="adv-attr-disallowedReferences"><term><varname>disallowedReferences</varname></term>
<listitem><para>The optional attribute
<varname>disallowedReferences</varname> specifies a list of illegal
@ -67,7 +67,7 @@ disallowedReferences = [ foo ];
</varlistentry>
<varlistentry><term><varname>disallowedRequisites</varname></term>
<varlistentry xml:id="adv-attr-disallowedRequisites"><term><varname>disallowedRequisites</varname></term>
<listitem><para>This attribute is similar to
<varname>disallowedReferences</varname>, but it specifies illegal
@ -85,7 +85,7 @@ disallowedRequisites = [ foobar ];
</varlistentry>
<varlistentry><term><varname>exportReferencesGraph</varname></term>
<varlistentry xml:id="adv-attr-exportReferencesGraph"><term><varname>exportReferencesGraph</varname></term>
<listitem><para>This attribute allows builders access to the
references graph of their inputs. The attribute is a list of
@ -124,7 +124,7 @@ derivation {
</varlistentry>
<varlistentry><term><varname>impureEnvVars</varname></term>
<varlistentry xml:id="adv-attr-impureEnvVars"><term><varname>impureEnvVars</varname></term>
<listitem><para>This attribute allows you to specify a list of
environment variables that should be passed from the environment
@ -158,9 +158,9 @@ impureEnvVars = [ "http_proxy" "https_proxy" <replaceable>...</replaceable> ];
<varlistentry xml:id="fixed-output-drvs">
<term><varname>outputHash</varname></term>
<term><varname>outputHashAlgo</varname></term>
<term><varname>outputHashMode</varname></term>
<term xml:id="adv-attr-outputHash"><varname>outputHash</varname></term>
<term xml:id="adv-attr-outputHashAlgo"><varname>outputHashAlgo</varname></term>
<term xml:id="adv-attr-outputHashMode"><varname>outputHashMode</varname></term>
<listitem><para>These attributes declare that the derivation is a
so-called <emphasis>fixed-output derivation</emphasis>, which
@ -282,7 +282,7 @@ stdenv.mkDerivation {
</varlistentry>
<varlistentry><term><varname>passAsFile</varname></term>
<varlistentry xml:id="adv-attr-passAsFile"><term><varname>passAsFile</varname></term>
<listitem><para>A list of names of attributes that should be
passed via files rather than environment variables. For example,
@ -309,7 +309,7 @@ big = "a very long string";
</varlistentry>
<varlistentry><term><varname>preferLocalBuild</varname></term>
<varlistentry xml:id="adv-attr-preferLocalBuild"><term><varname>preferLocalBuild</varname></term>
<listitem><para>If this attribute is set to
<literal>true</literal> and <link
@ -323,14 +323,25 @@ big = "a very long string";
</varlistentry>
<varlistentry><term><varname>allowSubstitutes</varname></term>
<varlistentry xml:id="adv-attr-allowSubstitutes"><term><varname>allowSubstitutes</varname></term>
<listitem><para>If this attribute is set to
<listitem>
<para>If this attribute is set to
<literal>false</literal>, then Nix will always build this
derivation; it will not try to substitute its outputs. This is
useful for very trivial derivations (such as
<function>writeText</function> in Nixpkgs) that are cheaper to
build than to substitute from a binary cache.</para></listitem>
build than to substitute from a binary cache.</para>
<note><para>You need to have a builder configured which satisfies
the derivations <literal>system</literal> attribute, since the
derivation cannot be substituted. Thus it is usually a good idea
to align <literal>system</literal> with
<literal>builtins.currentSystem</literal> when setting
<literal>allowSubstitutes</literal> to <literal>false</literal>.
For most trivial derivations this should be the case.
</para></note>
</listitem>
</varlistentry>

View File

@ -289,7 +289,7 @@ if builtins ? getEnv then builtins.getEnv "PATH" else ""</programlisting>
<listitem><para>Return element <replaceable>n</replaceable> from
the list <replaceable>xs</replaceable>. Elements are counted
starting from 0. A fatal error occurs in the index is out of
starting from 0. A fatal error occurs if the index is out of
bounds.</para></listitem>
</varlistentry>
@ -746,6 +746,11 @@ builtins.genList (x: x * x) 5
separate file, and use it from Nix expressions in other
files.</para>
<note><para>Unlike some languages, <function>import</function> is a regular
function in Nix. Paths using the angle bracket syntax (e.g., <function>
import</function> <replaceable>&lt;foo&gt;</replaceable>) are normal path
values (see <xref linkend='ssec-values' />).</para></note>
<para>A Nix expression loaded by <function>import</function> must
not contain any <emphasis>free variables</emphasis> (identifiers
that are not defined in the Nix expression itself and are not
@ -1115,6 +1120,16 @@ Evaluates to <literal>[ "foo" ]</literal>.
</varlistentry>
<varlistentry xml:id='builtin-placeholder'>
<term><function>builtins.placeholder</function>
<replaceable>output</replaceable></term>
<listitem><para>Return a placeholder string for the specified
<replaceable>output</replaceable> that will be substituted by the
corresponding output path at build time. Typical outputs would be
<literal>"out"</literal>, <literal>"bin"</literal> or
<literal>"dev"</literal>.</para></listitem>
</varlistentry>
<varlistentry xml:id='builtin-readDir'>
<term><function>builtins.readDir</function>
@ -1466,7 +1481,7 @@ in foo</programlisting>
<listitem><para>A set containing <literal>{ __toString = self: ...; }</literal>.</para></listitem>
<listitem><para>An integer.</para></listitem>
<listitem><para>A list, in which case the string representations of its elements are joined with spaces.</para></listitem>
<listitem><para>A Boolean (<literal>false</literal> yields <literal>""</literal>, <literal>true</literal> yields <literal>"1"</literal>.</para></listitem>
<listitem><para>A Boolean (<literal>false</literal> yields <literal>""</literal>, <literal>true</literal> yields <literal>"1"</literal>).</para></listitem>
<listitem><para><literal>null</literal>, which yields the empty string.</para></listitem>
</itemizedlist>
</listitem>
@ -1605,12 +1620,18 @@ stdenv.mkDerivation (rec {
<term><function>builtins.tryEval</function>
<replaceable>e</replaceable></term>
<listitem><para>Try to evaluate <replaceable>e</replaceable>.
<listitem><para>Try to shallowly evaluate <replaceable>e</replaceable>.
Return a set containing the attributes <literal>success</literal>
(<literal>true</literal> if <replaceable>e</replaceable> evaluated
successfully, <literal>false</literal> if an error was thrown) and
<literal>value</literal>, equalling <replaceable>e</replaceable>
if successful and <literal>false</literal> otherwise.
if successful and <literal>false</literal> otherwise. Note that this
doesn't evaluate <replaceable>e</replaceable> deeply, so
<literal>let e = { x = throw ""; }; in (builtins.tryEval e).success
</literal> will be <literal>true</literal>. Using <literal>builtins.deepSeq
</literal> one can get the expected result: <literal>let e = { x = throw "";
}; in (builtins.tryEval (builtins.deepSeq e e)).success</literal> will be
<literal>false</literal>.
</para></listitem>
</varlistentry>

View File

@ -43,7 +43,7 @@ use <command>nix-build</command>’s <option
linkend='opt-out-link'>-o</option> switch to give the symlink another
name.</para>
<para>Nix has a transactional semantics. Once a build finishes
<para>Nix has transactional semantics. Once a build finishes
successfully, Nix makes a note of this in its database: it registers
that the path denoted by <envar>out</envar> is now
<quote>valid</quote>. If you try to build the derivation again, Nix

View File

@ -8,6 +8,14 @@
<itemizedlist>
<listitem><para>GNU Autoconf
(<link xlink:href="https://www.gnu.org/software/autoconf/"/>)
and the autoconf-archive macro collection
(<link xlink:href="https://www.gnu.org/software/autoconf-archive/"/>).
These are only needed to run the bootstrap script, and are not necessary
if your source distribution came with a pre-built
<literal>./configure</literal> script.</para></listitem>
<listitem><para>GNU Make.</para></listitem>
<listitem><para>Bash Shell. The <literal>./configure</literal> script

View File

@ -17,6 +17,9 @@ a set of Nix expressions and a manifest. Using the command <link
linkend="sec-nix-channel"><command>nix-channel</command></link> you
can automatically stay up to date with whatever is available at that
URL.</para>
<para>To see the list of official NixOS channels, visit <link
xlink:href="https://nixos.org/channels" />.</para>
<para>You can “subscribe” to a channel using
<command>nix-channel --add</command>, e.g.,

View File

@ -52,12 +52,13 @@ garbage collector as follows:
<screen>
$ nix-store --gc</screen>
The behaviour of the gargage collector is affected by the <literal>keep-
derivations</literal> (default: true) and <literal>keep-outputs</literal>
The behaviour of the gargage collector is affected by the
<literal>keep-derivations</literal> (default: true) and <literal>keep-outputs</literal>
(default: false) options in the Nix configuration file. The defaults will ensure
that all derivations that are not build-time dependencies of garbage collector roots
will be collected but that all output paths that are not runtime dependencies
will be collected. (This is usually what you want, but while you are developing
that all derivations that are build-time dependencies of garbage collector roots
will be kept and that all output paths that are runtime dependencies
will be kept as well. All other derivations or paths will be collected.
(This is usually what you want, but while you are developing
it may make sense to keep outputs to ensure that rebuild times are quick.)
If you are feeling uncertain, you can also first view what files would

View File

@ -159,7 +159,6 @@ the S3 URL:</para>
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:ListObjects",
"s3:PutObject"
],
"Resource": [

View File

@ -503,14 +503,14 @@
</listitem>
<listitem>
<para><emphasis>Pure evaluation mode</emphasis>. This is a variant
of the existing restricted evaluation mode. In pure mode, the Nix
evaluator forbids access to anything that could cause different
evaluations of the same command line arguments to produce a
<para><emphasis>Pure evaluation mode</emphasis>. With the
<literal>--pure-eval</literal> flag, Nix enables a variant of the existing
restricted evaluation mode that forbids access to anything that could cause
different evaluations of the same command line arguments to produce a
different result. This includes builtin functions such as
<function>builtins.getEnv</function>, but more importantly,
<emphasis>all</emphasis> filesystem or network access unless a
content hash or commit hash is specified. For example, calls to
<emphasis>all</emphasis> filesystem or network access unless a content hash
or commit hash is specified. For example, calls to
<function>builtins.fetchGit</function> are only allowed if a
<varname>rev</varname> attribute is specified.</para>

View File

@ -33,9 +33,13 @@ incompatible changes:</para>
</listitem>
<listitem>
<para>The installer now enables sandboxing by default on
Linux. The <literal>max-jobs</literal> setting now defaults to
1.</para>
<para>The installer now enables sandboxing by default on Linux when the
system has the necessary kernel support.
</para>
</listitem>
<listitem>
<para>The <literal>max-jobs</literal> setting now defaults to 1.</para>
</listitem>
<listitem>
@ -82,11 +86,6 @@ incompatible changes:</para>
the duration of Nix function calls to stderr.</para>
</listitem>
<listitem>
<para>On Linux, sandboxing is now disabled by default on systems
that don’t have the necessary kernel support.</para>
</listitem>
</itemizedlist>
</section>

View File

@ -2,11 +2,13 @@ ifeq ($(MAKECMDGOALS), dist)
dist-files += $(shell cat .dist-files)
endif
dist-files += configure config.h.in nix.spec perl/configure
dist-files += configure config.h.in perl/configure
clean-files += Makefile.config
GLOBAL_CXXFLAGS += -I . -I src -I src/libutil -I src/libstore -I src/libmain -I src/libexpr -I src/nix
GLOBAL_CXXFLAGS += -I . -I src -I src/libutil -I src/libstore -I src/libmain -I src/libexpr -I src/nix -Wno-deprecated-declarations
$(foreach i, config.h $(call rwildcard, src/lib*, *.hh), \
$(eval $(call install-file-in, $(i), $(includedir)/nix, 0644)))
$(GCH) $(PCH): src/libutil/util.hh config.h

View File

@ -17,7 +17,7 @@
<array>
<string>/bin/sh</string>
<string>-c</string>
<string>/bin/wait4path @bindir@/nix-daemon &amp;&amp; @bindir@/nix-daemon</string>
<string>/bin/wait4path /nix/var/nix/profiles/default/bin/nix-daemon &amp;&amp; /nix/var/nix/profiles/default/bin/nix-daemon</string>
</array>
<key>StandardErrorPath</key>
<string>/var/log/nix-daemon.log</string>

View File

@ -1,10 +1,10 @@
$(buildprefix)%.o: %.cc
@mkdir -p "$(dir $@)"
$(trace-cxx) $(CXX) -o $@ -c $< $(GLOBAL_CXXFLAGS) $(GLOBAL_CXXFLAGS_PCH) $(CXXFLAGS) $($@_CXXFLAGS) -MMD -MF $(call filename-to-dep, $@) -MP
$(trace-cxx) $(CXX) -o $@ -c $< $(GLOBAL_CXXFLAGS_PCH) $(GLOBAL_CXXFLAGS) $(CXXFLAGS) $($@_CXXFLAGS) -MMD -MF $(call filename-to-dep, $@) -MP
$(buildprefix)%.o: %.cpp
@mkdir -p "$(dir $@)"
$(trace-cxx) $(CXX) -o $@ -c $< $(GLOBAL_CXXFLAGS) $(GLOBAL_CXXFLAGS_PCH) $(CXXFLAGS) $($@_CXXFLAGS) -MMD -MF $(call filename-to-dep, $@) -MP
$(trace-cxx) $(CXX) -o $@ -c $< $(GLOBAL_CXXFLAGS_PCH) $(GLOBAL_CXXFLAGS) $(CXXFLAGS) $($@_CXXFLAGS) -MMD -MF $(call filename-to-dep, $@) -MP
$(buildprefix)%.o: %.c
@mkdir -p "$(dir $@)"

42
mk/precompiled-headers.mk Normal file
View File

@ -0,0 +1,42 @@
PRECOMPILE_HEADERS ?= 1
print-var-help += \
echo " PRECOMPILE_HEADERS ($(PRECOMPILE_HEADERS)): Whether to use precompiled headers to speed up the build";
GCH = $(buildprefix)precompiled-headers.h.gch
$(GCH): precompiled-headers.h
@rm -f $@
@mkdir -p "$(dir $@)"
$(trace-gen) $(CXX) -x c++-header -o $@ $< $(GLOBAL_CXXFLAGS)
PCH = $(buildprefix)precompiled-headers.h.pch
$(PCH): precompiled-headers.h
@rm -f $@
@mkdir -p "$(dir $@)"
$(trace-gen) $(CXX) -x c++-header -o $@ $< $(GLOBAL_CXXFLAGS)
clean-files += $(GCH) $(PCH)
ifeq ($(PRECOMPILE_HEADERS), 1)
ifeq ($(CXX), g++)
GLOBAL_CXXFLAGS_PCH += -include $(buildprefix)precompiled-headers.h -Winvalid-pch
GLOBAL_ORDER_AFTER += $(GCH)
else ifeq ($(CXX), clang++)
GLOBAL_CXXFLAGS_PCH += -include-pch $(PCH) -Winvalid-pch
GLOBAL_ORDER_AFTER += $(PCH)
else
$(error Don't know how to precompile headers on $(CXX))
endif
endif

399
nix-rust/Cargo.lock generated Normal file
View File

@ -0,0 +1,399 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]]
name = "assert_matches"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "autocfg"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bit-set"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bit-vec 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "bit-vec"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "bitflags"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "byteorder"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "c2-chacha"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"ppv-lite86 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "cfg-if"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "cloudabi"
version = "0.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bitflags 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "fnv"
version = "1.0.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "fuchsia-cprng"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "getrandom"
version = "0.1.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"wasi 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "hex"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "libc"
version = "0.2.66"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "nix-rust"
version = "0.1.0"
dependencies = [
"assert_matches 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
"hex 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"proptest 0.9.4 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "num-traits"
version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "ppv-lite86"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "proptest"
version = "0.9.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"bit-set 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"bitflags 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"byteorder 1.3.2 (registry+https://github.com/rust-lang/crates.io-index)",
"lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"num-traits 0.2.10 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_chacha 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_xorshift 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"regex-syntax 0.6.12 (registry+https://github.com/rust-lang/crates.io-index)",
"rusty-fork 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"tempfile 3.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "quick-error"
version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rand"
version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_chacha 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_hc 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_isaac 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_jitter 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_os 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_pcg 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_xorshift 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"getrandom 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_chacha 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_hc 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_chacha"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_chacha"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"c2-chacha 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_core"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_core"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "rand_core"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"getrandom 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_hc"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_hc"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_isaac"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_jitter"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_os"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cloudabi 0.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
"fuchsia-cprng 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
"rdrand 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_pcg"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)",
"rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rand_xorshift"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rdrand"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "redox_syscall"
version = "0.1.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "regex-syntax"
version = "0.6.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "remove_dir_all"
version = "0.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "rusty-fork"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)",
"quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
"tempfile 3.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
"wait-timeout 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "tempfile"
version = "3.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)",
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
"rand 0.7.2 (registry+https://github.com/rust-lang/crates.io-index)",
"redox_syscall 0.1.56 (registry+https://github.com/rust-lang/crates.io-index)",
"remove_dir_all 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wait-timeout"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "wasi"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi"
version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
dependencies = [
"winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
"winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "winapi-i686-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
[metadata]
"checksum assert_matches 1.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7deb0a829ca7bcfaf5da70b073a8d128619259a7be8216a355e23f00763059e5"
"checksum autocfg 0.1.7 (registry+https://github.com/rust-lang/crates.io-index)" = "1d49d90015b3c36167a20fe2810c5cd875ad504b39cff3d4eae7977e6b7c1cb2"
"checksum bit-set 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "e84c238982c4b1e1ee668d136c510c67a13465279c0cb367ea6baf6310620a80"
"checksum bit-vec 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "f59bbe95d4e52a6398ec21238d31577f2b28a9d86807f06ca59d191d8440d0bb"
"checksum bitflags 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "cf1de2fe8c75bc145a2f577add951f8134889b4795d47466a54a5c846d691693"
"checksum byteorder 1.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "a7c3dd8985a7111efc5c80b44e23ecdd8c007de8ade3b96595387e812b957cf5"
"checksum c2-chacha 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "214238caa1bf3a496ec3392968969cab8549f96ff30652c9e56885329315f6bb"
"checksum cfg-if 0.1.10 (registry+https://github.com/rust-lang/crates.io-index)" = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822"
"checksum cloudabi 0.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "ddfc5b9aa5d4507acaf872de71051dfd0e309860e88966e1051e462a077aac4f"
"checksum fnv 1.0.6 (registry+https://github.com/rust-lang/crates.io-index)" = "2fad85553e09a6f881f739c29f0b00b0f01357c743266d478b68951ce23285f3"
"checksum fuchsia-cprng 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "a06f77d526c1a601b7c4cdd98f54b5eaabffc14d5f2f0296febdc7f357c6d3ba"
"checksum getrandom 0.1.13 (registry+https://github.com/rust-lang/crates.io-index)" = "e7db7ca94ed4cd01190ceee0d8a8052f08a247aa1b469a7f68c6a3b71afcf407"
"checksum hex 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)" = "805026a5d0141ffc30abb3be3173848ad46a1b1664fe632428479619a3644d77"
"checksum lazy_static 1.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
"checksum libc 0.2.66 (registry+https://github.com/rust-lang/crates.io-index)" = "d515b1f41455adea1313a4a2ac8a8a477634fbae63cc6100e3aebb207ce61558"
"checksum num-traits 0.2.10 (registry+https://github.com/rust-lang/crates.io-index)" = "d4c81ffc11c212fa327657cb19dd85eb7419e163b5b076bede2bdb5c974c07e4"
"checksum ppv-lite86 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "74490b50b9fbe561ac330df47c08f3f33073d2d00c150f719147d7c54522fa1b"
"checksum proptest 0.9.4 (registry+https://github.com/rust-lang/crates.io-index)" = "cf147e022eacf0c8a054ab864914a7602618adba841d800a9a9868a5237a529f"
"checksum quick-error 1.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9274b940887ce9addde99c4eee6b5c44cc494b182b97e73dc8ffdcb3397fd3f0"
"checksum rand 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)" = "6d71dacdc3c88c1fde3885a3be3fbab9f35724e6ce99467f7d9c5026132184ca"
"checksum rand 0.7.2 (registry+https://github.com/rust-lang/crates.io-index)" = "3ae1b169243eaf61759b8475a998f0a385e42042370f3a7dbaf35246eacc8412"
"checksum rand_chacha 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "556d3a1ca6600bfcbab7c7c91ccb085ac7fbbcd70e008a98742e7847f4f7bcef"
"checksum rand_chacha 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "03a2a90da8c7523f554344f921aa97283eadf6ac484a6d2a7d0212fa7f8d6853"
"checksum rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7a6fdeb83b075e8266dcc8762c22776f6877a63111121f5f8c7411e5be7eed4b"
"checksum rand_core 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "9c33a3c44ca05fa6f1807d8e6743f3824e8509beca625669633be0acbdf509dc"
"checksum rand_core 0.5.1 (registry+https://github.com/rust-lang/crates.io-index)" = "90bde5296fc891b0cef12a6d03ddccc162ce7b2aff54160af9338f8d40df6d19"
"checksum rand_hc 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7b40677c7be09ae76218dc623efbf7b18e34bced3f38883af07bb75630a21bc4"
"checksum rand_hc 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ca3129af7b92a17112d59ad498c6f81eaf463253766b90396d39ea7a39d6613c"
"checksum rand_isaac 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ded997c9d5f13925be2a6fd7e66bf1872597f759fd9dd93513dd7e92e5a5ee08"
"checksum rand_jitter 0.1.4 (registry+https://github.com/rust-lang/crates.io-index)" = "1166d5c91dc97b88d1decc3285bb0a99ed84b05cfd0bc2341bdf2d43fc41e39b"
"checksum rand_os 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "7b75f676a1e053fc562eafbb47838d67c84801e38fc1ba459e8f180deabd5071"
"checksum rand_pcg 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "abf9b09b01790cfe0364f52bf32995ea3c39f4d2dd011eac241d2914146d0b44"
"checksum rand_xorshift 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "cbf7e9e623549b0e21f6e97cf8ecf247c1a8fd2e8a992ae265314300b2455d5c"
"checksum rdrand 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "678054eb77286b51581ba43620cc911abf02758c91f93f479767aed0f90458b2"
"checksum redox_syscall 0.1.56 (registry+https://github.com/rust-lang/crates.io-index)" = "2439c63f3f6139d1b57529d16bc3b8bb855230c8efcc5d3a896c8bea7c3b1e84"
"checksum regex-syntax 0.6.12 (registry+https://github.com/rust-lang/crates.io-index)" = "11a7e20d1cce64ef2fed88b66d347f88bd9babb82845b2b858f3edbf59a4f716"
"checksum remove_dir_all 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)" = "4a83fa3702a688b9359eccba92d153ac33fd2e8462f9e0e3fdf155239ea7792e"
"checksum rusty-fork 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "3dd93264e10c577503e926bd1430193eeb5d21b059148910082245309b424fae"
"checksum tempfile 3.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7a6e24d9338a0a5be79593e2fa15a648add6138caa803e2d5bc782c371732ca9"
"checksum wait-timeout 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "9f200f5b12eb75f8c1ed65abd4b2db8a6e1b138a20de009dacee265a2498f3f6"
"checksum wasi 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "b89c3ce4ce14bdc6fb6beaf9ec7928ca331de5df7e5ea278375642a2f478570d"
"checksum winapi 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)" = "8093091eeb260906a183e6ae1abdba2ef5ef2257a21801128899c3fc699229c6"
"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"

23
nix-rust/Cargo.toml Normal file
View File

@ -0,0 +1,23 @@
[package]
name = "nix-rust"
version = "0.1.0"
authors = ["Eelco Dolstra <edolstra@gmail.com>"]
edition = "2018"
[lib]
name = "nixrust"
crate-type = ["cdylib"]
[dependencies]
libc = "0.2"
#futures-preview = { version = "=0.3.0-alpha.19" }
#hyper = "0.13.0-alpha.4"
#http = "0.1"
#tokio = { version = "0.2.0-alpha.6", default-features = false, features = ["rt-full"] }
lazy_static = "1.4"
#byteorder = "1.3"
[dev-dependencies]
hex = "0.3"
assert_matches = "1.3"
proptest = "0.9"

45
nix-rust/local.mk Normal file
View File

@ -0,0 +1,45 @@
ifeq ($(OPTIMIZE), 1)
RUST_MODE = --release
RUST_DIR = release
else
RUST_MODE =
RUST_DIR = debug
endif
libnixrust_PATH := $(d)/target/$(RUST_DIR)/libnixrust.$(SO_EXT)
libnixrust_INSTALL_PATH := $(libdir)/libnixrust.$(SO_EXT)
libnixrust_LDFLAGS_USE := -L$(d)/target/$(RUST_DIR) -lnixrust -ldl
libnixrust_LDFLAGS_USE_INSTALLED := -L$(libdir) -lnixrust -ldl
ifeq ($(OS), Darwin)
libnixrust_BUILD_FLAGS = NIX_LDFLAGS="-undefined dynamic_lookup"
else
libnixrust_LDFLAGS_USE += -Wl,-rpath,$(abspath $(d)/target/$(RUST_DIR))
libnixrust_LDFLAGS_USE_INSTALLED += -Wl,-rpath,$(libdir)
endif
$(libnixrust_PATH): $(call rwildcard, $(d)/src, *.rs) $(d)/Cargo.toml
$(trace-gen) cd nix-rust && CARGO_HOME=$$(if [[ -d vendor ]]; then echo vendor; fi) \
$(libnixrust_BUILD_FLAGS) \
cargo build $(RUST_MODE) $$(if [[ -d vendor ]]; then echo --offline; fi) \
&& touch target/$(RUST_DIR)/libnixrust.$(SO_EXT)
$(libnixrust_INSTALL_PATH): $(libnixrust_PATH)
$(target-gen) cp $^ $@
ifeq ($(OS), Darwin)
install_name_tool -id $@ $@
endif
dist-files += $(d)/vendor
clean: clean-rust
clean-rust:
$(suppress) rm -rfv nix-rust/target
ifneq ($(OS), Darwin)
check: rust-tests
rust-tests:
cd nix-rust && CARGO_HOME=$$(if [[ -d vendor ]]; then echo vendor; fi) cargo test --release $$(if [[ -d vendor ]]; then echo --offline; fi)
endif

nix-rust/src/c.rs Normal file (77 lines)

@ -0,0 +1,77 @@
use super::{error, store::path, store::StorePath, util};
#[no_mangle]
pub unsafe extern "C" fn ffi_String_new(s: &str, out: *mut String) {
// FIXME: check whether 's' is valid UTF-8?
out.write(s.to_string())
}
#[no_mangle]
pub unsafe extern "C" fn ffi_String_drop(self_: *mut String) {
std::ptr::drop_in_place(self_);
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_new(
path: &str,
store_dir: &str,
) -> Result<StorePath, error::CppException> {
StorePath::new(std::path::Path::new(path), std::path::Path::new(store_dir))
.map_err(|err| err.into())
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_new2(
hash: &[u8; crate::store::path::STORE_PATH_HASH_BYTES],
name: &str,
) -> Result<StorePath, error::CppException> {
StorePath::from_parts(*hash, name).map_err(|err| err.into())
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_fromBaseName(
base_name: &str,
) -> Result<StorePath, error::CppException> {
StorePath::new_from_base_name(base_name).map_err(|err| err.into())
}
#[no_mangle]
pub unsafe extern "C" fn ffi_StorePath_drop(self_: *mut StorePath) {
std::ptr::drop_in_place(self_);
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_to_string(self_: &StorePath) -> Vec<u8> {
let mut buf = vec![0; path::STORE_PATH_HASH_CHARS + 1 + self_.name.name().len()];
util::base32::encode_into(self_.hash.hash(), &mut buf[0..path::STORE_PATH_HASH_CHARS]);
buf[path::STORE_PATH_HASH_CHARS] = b'-';
buf[path::STORE_PATH_HASH_CHARS + 1..].clone_from_slice(self_.name.name().as_bytes());
buf
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_less_than(a: &StorePath, b: &StorePath) -> bool {
a < b
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_eq(a: &StorePath, b: &StorePath) -> bool {
a == b
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_clone(self_: &StorePath) -> StorePath {
self_.clone()
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_name(self_: &StorePath) -> &str {
self_.name.name()
}
#[no_mangle]
pub extern "C" fn ffi_StorePath_hash_data(
self_: &StorePath,
) -> &[u8; crate::store::path::STORE_PATH_HASH_BYTES] {
self_.hash.hash()
}
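
The functions above are the C ABI surface of libnixrust that the C++ side binds against. As a rough, hedged sketch (not part of the diff), ffi_StorePath_to_string returns the store path's base name as raw bytes, i.e. the 32-character base32 hash, a '-', and the name; it assumes the StorePath type defined below in nix-rust/src/store/path.rs:

fn to_string_sketch() {
    let p = StorePath::new_from_base_name("7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3").unwrap();
    // Re-encodes the 20-byte hash to 32 base32 characters, then appends '-' and the name.
    let bytes = ffi_StorePath_to_string(&p);
    assert_eq!(
        std::str::from_utf8(&bytes).unwrap(),
        "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3"
    );
}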

nix-rust/src/error.rs Normal file (118 lines)

@ -0,0 +1,118 @@
use std::fmt;
#[derive(Debug)]
pub enum Error {
InvalidPath(crate::store::StorePath),
BadStorePath(std::path::PathBuf),
NotInStore(std::path::PathBuf),
BadNarInfo,
BadBase32,
StorePathNameEmpty,
StorePathNameTooLong,
BadStorePathName,
NarSizeFieldTooBig,
BadNarString,
BadNarPadding,
BadNarVersionMagic,
MissingNarOpenTag,
MissingNarCloseTag,
MissingNarField,
BadNarField(String),
BadExecutableField,
IOError(std::io::Error),
#[cfg(unused)]
HttpError(hyper::error::Error),
Misc(String),
#[cfg(not(test))]
Foreign(CppException),
BadTarFileMemberName(String),
}
impl From<std::io::Error> for Error {
fn from(err: std::io::Error) -> Self {
Error::IOError(err)
}
}
#[cfg(unused)]
impl From<hyper::error::Error> for Error {
fn from(err: hyper::error::Error) -> Self {
Error::HttpError(err)
}
}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Error::InvalidPath(_) => write!(f, "invalid path"),
Error::BadNarInfo => write!(f, ".narinfo file is corrupt"),
Error::BadStorePath(path) => write!(f, "path '{}' is not a store path", path.display()),
Error::NotInStore(path) => {
write!(f, "path '{}' is not in the Nix store", path.display())
}
Error::BadBase32 => write!(f, "invalid base32 string"),
Error::StorePathNameEmpty => write!(f, "store path name is empty"),
Error::StorePathNameTooLong => {
write!(f, "store path name is longer than 211 characters")
}
Error::BadStorePathName => write!(f, "store path name contains forbidden character"),
Error::NarSizeFieldTooBig => write!(f, "size field in NAR is too big"),
Error::BadNarString => write!(f, "NAR string is not valid UTF-8"),
Error::BadNarPadding => write!(f, "NAR padding is not zero"),
Error::BadNarVersionMagic => write!(f, "unsupported NAR version"),
Error::MissingNarOpenTag => write!(f, "NAR open tag is missing"),
Error::MissingNarCloseTag => write!(f, "NAR close tag is missing"),
Error::MissingNarField => write!(f, "expected NAR field is missing"),
Error::BadNarField(s) => write!(f, "unrecognized NAR field '{}'", s),
Error::BadExecutableField => write!(f, "bad 'executable' field in NAR"),
Error::IOError(err) => write!(f, "I/O error: {}", err),
#[cfg(unused)]
Error::HttpError(err) => write!(f, "HTTP error: {}", err),
#[cfg(not(test))]
Error::Foreign(_) => write!(f, "<C++ exception>"), // FIXME
Error::Misc(s) => write!(f, "{}", s),
Error::BadTarFileMemberName(s) => {
write!(f, "tar archive contains illegal file name '{}'", s)
}
}
}
}
#[cfg(not(test))]
impl From<Error> for CppException {
fn from(err: Error) -> Self {
match err {
Error::Foreign(ex) => ex,
_ => CppException::new(&err.to_string()),
}
}
}
#[cfg(not(test))]
#[repr(C)]
#[derive(Debug)]
pub struct CppException(*const libc::c_void); // == std::exception_ptr*
#[cfg(not(test))]
impl CppException {
fn new(s: &str) -> Self {
Self(unsafe { make_error(s) })
}
}
#[cfg(not(test))]
impl Drop for CppException {
fn drop(&mut self) {
unsafe {
destroy_error(self.0);
}
}
}
#[cfg(not(test))]
extern "C" {
#[allow(improper_ctypes)] // YOLO
fn make_error(s: &str) -> *const libc::c_void;
fn destroy_error(exc: *const libc::c_void);
}
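
For orientation (a hedged sketch, not part of the diff): the From<std::io::Error> impl lets `?` wrap I/O failures as Error::IOError, and in non-test builds From<Error> for CppException turns any Error into a C++ std::exception_ptr via make_error() so it can be rethrown on the other side of the FFI boundary. The helper below is hypothetical:

fn file_len(path: &std::path::Path) -> Result<u64, Error> {
    // std::io::Error is converted into Error::IOError by the From impl above.
    let metadata = std::fs::metadata(path)?;
    Ok(metadata.len())
}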

nix-rust/src/lib.rs Normal file (9 lines)

@ -0,0 +1,9 @@
#[cfg(not(test))]
mod c;
mod error;
#[cfg(unused)]
mod nar;
mod store;
mod util;
pub use error::Error;

nix-rust/src/nar.rs Normal file (126 lines)

@ -0,0 +1,126 @@
use crate::Error;
use byteorder::{LittleEndian, ReadBytesExt};
use std::convert::TryFrom;
use std::io::Read;
pub fn parse<R: Read>(input: &mut R) -> Result<(), Error> {
if String::read(input)? != NAR_VERSION_MAGIC {
return Err(Error::BadNarVersionMagic);
}
parse_file(input)
}
const NAR_VERSION_MAGIC: &str = "nix-archive-1";
fn parse_file<R: Read>(input: &mut R) -> Result<(), Error> {
if String::read(input)? != "(" {
return Err(Error::MissingNarOpenTag);
}
if String::read(input)? != "type" {
return Err(Error::MissingNarField);
}
match String::read(input)?.as_ref() {
"regular" => {
let mut _executable = false;
let mut tag = String::read(input)?;
if tag == "executable" {
_executable = true;
if String::read(input)? != "" {
return Err(Error::BadExecutableField);
}
tag = String::read(input)?;
}
if tag != "contents" {
return Err(Error::MissingNarField);
}
let _contents = Vec::<u8>::read(input)?;
if String::read(input)? != ")" {
return Err(Error::MissingNarCloseTag);
}
}
"directory" => loop {
match String::read(input)?.as_ref() {
"entry" => {
if String::read(input)? != "(" {
return Err(Error::MissingNarOpenTag);
}
if String::read(input)? != "name" {
return Err(Error::MissingNarField);
}
let _name = String::read(input)?;
if String::read(input)? != "node" {
return Err(Error::MissingNarField);
}
parse_file(input)?;
let tag = String::read(input)?;
if tag != ")" {
return Err(Error::MissingNarCloseTag);
}
}
")" => break,
s => return Err(Error::BadNarField(s.into())),
}
},
"symlink" => {
if String::read(input)? != "target" {
return Err(Error::MissingNarField);
}
let _target = String::read(input)?;
if String::read(input)? != ")" {
return Err(Error::MissingNarCloseTag);
}
}
s => return Err(Error::BadNarField(s.into())),
}
Ok(())
}
trait Deserialize: Sized {
fn read<R: Read>(input: &mut R) -> Result<Self, Error>;
}
impl Deserialize for String {
fn read<R: Read>(input: &mut R) -> Result<Self, Error> {
let buf = Deserialize::read(input)?;
Ok(String::from_utf8(buf).map_err(|_| Error::BadNarString)?)
}
}
impl Deserialize for Vec<u8> {
fn read<R: Read>(input: &mut R) -> Result<Self, Error> {
let n: usize = Deserialize::read(input)?;
let mut buf = vec![0; n];
input.read_exact(&mut buf)?;
skip_padding(input, n)?;
Ok(buf)
}
}
fn skip_padding<R: Read>(input: &mut R, len: usize) -> Result<(), Error> {
if len % 8 != 0 {
let mut buf = [0; 8];
let buf = &mut buf[0..8 - (len % 8)];
input.read_exact(buf)?;
if !buf.iter().all(|b| *b == 0) {
return Err(Error::BadNarPadding);
}
}
Ok(())
}
impl Deserialize for u64 {
fn read<R: Read>(input: &mut R) -> Result<Self, Error> {
Ok(input.read_u64::<LittleEndian>()?)
}
}
impl Deserialize for usize {
fn read<R: Read>(input: &mut R) -> Result<Self, Error> {
let n: u64 = Deserialize::read(input)?;
Ok(usize::try_from(n).map_err(|_| Error::NarSizeFieldTooBig)?)
}
}
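
A hedged usage sketch (not part of the diff): the parser expects the standard NAR framing in which every string is a little-endian u64 length, the bytes themselves, and zero padding up to an 8-byte boundary. The helper and the single-symlink archive below are hypothetical, but they exercise parse() end to end:

fn write_nar_string(out: &mut Vec<u8>, s: &str) {
    out.extend_from_slice(&(s.len() as u64).to_le_bytes());
    out.extend_from_slice(s.as_bytes());
    while out.len() % 8 != 0 {
        out.push(0); // pad the payload to a multiple of 8 bytes
    }
}

fn parse_symlink_nar() {
    let mut nar = Vec::new();
    for s in &["nix-archive-1", "(", "type", "symlink", "target", "/foo", ")"] {
        write_nar_string(&mut nar, s);
    }
    // parse() only validates the structure; the decoded fields are discarded.
    assert!(parse(&mut std::io::Cursor::new(nar)).is_ok());
}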


@ -0,0 +1,48 @@
use super::{PathInfo, Store, StorePath};
use crate::Error;
use hyper::client::Client;
pub struct BinaryCacheStore {
base_uri: String,
client: Client<hyper::client::HttpConnector, hyper::Body>,
}
impl BinaryCacheStore {
pub fn new(base_uri: String) -> Self {
Self {
base_uri,
client: Client::new(),
}
}
}
impl Store for BinaryCacheStore {
fn query_path_info(
&self,
path: &StorePath,
) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PathInfo, Error>> + Send>> {
let uri = format!("{}/{}.narinfo", self.base_uri.clone(), path.hash);
let path = path.clone();
let client = self.client.clone();
let store_dir = self.store_dir().to_string();
Box::pin(async move {
let response = client.get(uri.parse::<hyper::Uri>().unwrap()).await?;
if response.status() == hyper::StatusCode::NOT_FOUND
|| response.status() == hyper::StatusCode::FORBIDDEN
{
return Err(Error::InvalidPath(path));
}
let mut body = response.into_body();
let mut bytes = Vec::new();
while let Some(next) = body.next().await {
bytes.extend(next?);
}
PathInfo::parse_nar_info(std::str::from_utf8(&bytes).unwrap(), &store_dir)
})
}
}

nix-rust/src/store/mod.rs Normal file (17 lines)

@ -0,0 +1,17 @@
pub mod path;
#[cfg(unused)]
mod binary_cache_store;
#[cfg(unused)]
mod path_info;
#[cfg(unused)]
mod store;
pub use path::{StorePath, StorePathHash, StorePathName};
#[cfg(unused)]
pub use binary_cache_store::BinaryCacheStore;
#[cfg(unused)]
pub use path_info::PathInfo;
#[cfg(unused)]
pub use store::Store;

nix-rust/src/store/path.rs Normal file (225 lines)

@ -0,0 +1,225 @@
use crate::error::Error;
use crate::util::base32;
use std::fmt;
use std::path::Path;
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct StorePath {
pub hash: StorePathHash,
pub name: StorePathName,
}
pub const STORE_PATH_HASH_BYTES: usize = 20;
pub const STORE_PATH_HASH_CHARS: usize = 32;
impl StorePath {
pub fn new(path: &Path, store_dir: &Path) -> Result<Self, Error> {
if path.parent() != Some(store_dir) {
return Err(Error::NotInStore(path.into()));
}
Self::new_from_base_name(
path.file_name()
.ok_or(Error::BadStorePath(path.into()))?
.to_str()
.ok_or(Error::BadStorePath(path.into()))?,
)
}
pub fn from_parts(hash: [u8; STORE_PATH_HASH_BYTES], name: &str) -> Result<Self, Error> {
Ok(StorePath {
hash: StorePathHash(hash),
name: StorePathName::new(name)?,
})
}
pub fn new_from_base_name(base_name: &str) -> Result<Self, Error> {
if base_name.len() < STORE_PATH_HASH_CHARS + 1
|| base_name.as_bytes()[STORE_PATH_HASH_CHARS] != '-' as u8
{
return Err(Error::BadStorePath(base_name.into()));
}
Ok(StorePath {
hash: StorePathHash::new(&base_name[0..STORE_PATH_HASH_CHARS])?,
name: StorePathName::new(&base_name[STORE_PATH_HASH_CHARS + 1..])?,
})
}
}
impl fmt::Display for StorePath {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}-{}", self.hash, self.name)
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct StorePathHash([u8; STORE_PATH_HASH_BYTES]);
impl StorePathHash {
pub fn new(s: &str) -> Result<Self, Error> {
assert_eq!(s.len(), STORE_PATH_HASH_CHARS);
let v = base32::decode(s)?;
assert_eq!(v.len(), STORE_PATH_HASH_BYTES);
let mut bytes: [u8; 20] = Default::default();
bytes.copy_from_slice(&v[0..STORE_PATH_HASH_BYTES]);
Ok(Self(bytes))
}
pub fn hash<'a>(&'a self) -> &'a [u8; STORE_PATH_HASH_BYTES] {
&self.0
}
}
impl fmt::Display for StorePathHash {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let mut buf = vec![0; STORE_PATH_HASH_CHARS];
base32::encode_into(&self.0, &mut buf);
f.write_str(std::str::from_utf8(&buf).unwrap())
}
}
impl Ord for StorePathHash {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
// Historically we've sorted store paths by their base32
// serialization, but our base32 encodes bytes in reverse
// order. So compare them in reverse order as well.
self.0.iter().rev().cmp(other.0.iter().rev())
}
}
impl PartialOrd for StorePathHash {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct StorePathName(String);
impl StorePathName {
pub fn new(s: &str) -> Result<Self, Error> {
if s.len() == 0 {
return Err(Error::StorePathNameEmpty);
}
if s.len() > 211 {
return Err(Error::StorePathNameTooLong);
}
if s.starts_with('.')
|| !s.chars().all(|c| {
c.is_ascii_alphabetic()
|| c.is_ascii_digit()
|| c == '+'
|| c == '-'
|| c == '.'
|| c == '_'
|| c == '?'
|| c == '='
})
{
return Err(Error::BadStorePathName);
}
Ok(Self(s.to_string()))
}
pub fn name<'a>(&'a self) -> &'a str {
&self.0
}
}
impl fmt::Display for StorePathName {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.write_str(&self.0)
}
}
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
#[test]
fn test_parse() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3";
let p = StorePath::new_from_base_name(&s).unwrap();
assert_eq!(p.name.0, "konsole-18.12.3");
assert_eq!(
p.hash.0,
[
0x9f, 0x76, 0x49, 0x20, 0xf6, 0x5d, 0xe9, 0x71, 0xc4, 0xca, 0x46, 0x21, 0xab, 0xff,
0x9b, 0x44, 0xef, 0x87, 0x0f, 0x3c
]
);
}
#[test]
fn test_no_name() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::StorePathNameEmpty)
);
}
#[test]
fn test_no_dash() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::BadStorePath(_))
);
}
#[test]
fn test_short_hash() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxl-konsole-18.12.3";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::BadStorePath(_))
);
}
#[test]
fn test_invalid_hash() {
let s = "7h7qgvs4kgzsn8e6rb273saxyqh4jxlz-konsole-18.12.3";
assert_matches!(StorePath::new_from_base_name(&s), Err(Error::BadBase32));
}
#[test]
fn test_long_name() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
assert_matches!(StorePath::new_from_base_name(&s), Ok(_));
}
#[test]
fn test_too_long_name() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::StorePathNameTooLong)
);
}
#[test]
fn test_bad_name() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-foo bar";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::BadStorePathName)
);
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-kónsole";
assert_matches!(
StorePath::new_from_base_name(&s),
Err(Error::BadStorePathName)
);
}
#[test]
fn test_roundtrip() {
let s = "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3";
assert_eq!(StorePath::new_from_base_name(&s).unwrap().to_string(), s);
}
}
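
Complementing the tests above, a hedged sketch (not part of the diff) of the full-path constructor: StorePath::new only accepts a direct child of the given store directory and otherwise returns Error::NotInStore, before delegating to new_from_base_name for hash and name validation:

fn parse_full_path() {
    let p = StorePath::new(
        Path::new("/nix/store/7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3"),
        Path::new("/nix/store"),
    )
    .unwrap();
    assert_eq!(p.to_string(), "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3");

    // Nested one level too deep: rejected with Error::NotInStore.
    assert!(StorePath::new(Path::new("/nix/store/abc/def"), Path::new("/nix/store")).is_err());
}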


@ -0,0 +1,70 @@
use crate::store::StorePath;
use crate::Error;
use std::collections::BTreeSet;
#[derive(Clone, Debug)]
pub struct PathInfo {
pub path: StorePath,
pub references: BTreeSet<StorePath>,
pub nar_size: u64,
pub deriver: Option<StorePath>,
// Additional binary cache info.
pub url: Option<String>,
pub compression: Option<String>,
pub file_size: Option<u64>,
}
impl PathInfo {
pub fn parse_nar_info(nar_info: &str, store_dir: &str) -> Result<Self, Error> {
let mut path = None;
let mut references = BTreeSet::new();
let mut nar_size = None;
let mut deriver = None;
let mut url = None;
let mut compression = None;
let mut file_size = None;
for line in nar_info.lines() {
let colon = line.find(':').ok_or(Error::BadNarInfo)?;
let (name, value) = line.split_at(colon);
if !value.starts_with(": ") {
return Err(Error::BadNarInfo);
}
let value = &value[2..];
if name == "StorePath" {
path = Some(StorePath::new(std::path::Path::new(value), store_dir)?);
} else if name == "NarSize" {
nar_size = Some(u64::from_str_radix(value, 10).map_err(|_| Error::BadNarInfo)?);
} else if name == "References" {
if !value.is_empty() {
for r in value.split(' ') {
references.insert(StorePath::new_from_base_name(r)?);
}
}
} else if name == "Deriver" {
deriver = Some(StorePath::new_from_base_name(value)?);
} else if name == "URL" {
url = Some(value.into());
} else if name == "Compression" {
compression = Some(value.into());
} else if name == "FileSize" {
file_size = Some(u64::from_str_radix(value, 10).map_err(|_| Error::BadNarInfo)?);
}
}
Ok(PathInfo {
path: path.ok_or(Error::BadNarInfo)?,
references,
nar_size: nar_size.ok_or(Error::BadNarInfo)?,
deriver,
url: Some(url.ok_or(Error::BadNarInfo)?),
compression,
file_size,
})
}
}
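
For orientation, a hedged example (not part of the diff) of the smallest .narinfo this parser accepts: StorePath, URL and NarSize are required, the remaining fields are optional, and unknown keys are silently skipped. The concrete values are made up:

fn parse_minimal_narinfo() -> Result<(), Error> {
    let narinfo = "StorePath: /nix/store/7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3\n\
                   URL: nar/example.nar.xz\n\
                   Compression: xz\n\
                   NarSize: 12345\n";
    let info = PathInfo::parse_nar_info(narinfo, "/nix/store")?;
    assert_eq!(info.nar_size, 12345);
    assert_eq!(info.url.as_deref(), Some("nar/example.nar.xz"));
    assert_eq!(info.compression.as_deref(), Some("xz"));
    Ok(())
}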


@ -0,0 +1,53 @@
use super::{PathInfo, StorePath};
use crate::Error;
use std::collections::{BTreeMap, BTreeSet};
use std::path::Path;
pub trait Store: Send + Sync {
fn store_dir(&self) -> &str {
"/nix/store"
}
fn query_path_info(
&self,
store_path: &StorePath,
) -> std::pin::Pin<Box<dyn std::future::Future<Output = Result<PathInfo, Error>> + Send>>;
}
impl dyn Store {
pub fn parse_store_path(&self, path: &Path) -> Result<StorePath, Error> {
StorePath::new(path, self.store_dir())
}
pub async fn compute_path_closure(
&self,
roots: BTreeSet<StorePath>,
) -> Result<BTreeMap<StorePath, PathInfo>, Error> {
let mut done = BTreeSet::new();
let mut result = BTreeMap::new();
let mut pending = vec![];
for root in roots {
pending.push(self.query_path_info(&root));
done.insert(root);
}
while !pending.is_empty() {
let (info, _, remaining) = futures::future::select_all(pending).await;
pending = remaining;
let info = info?;
for path in &info.references {
if !done.contains(path) {
pending.push(self.query_path_info(&path));
done.insert(path.clone());
}
}
result.insert(info.path.clone(), info);
}
Ok(result)
}
}
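
A hedged sketch (not part of the diff) of how the closure helper above is meant to be driven, assuming some Store implementation such as BinaryCacheStore and an async executor to poll the returned future:

async fn print_closure(store: &dyn Store) -> Result<(), Error> {
    let mut roots = BTreeSet::new();
    roots.insert(StorePath::new_from_base_name(
        "7h7qgvs4kgzsn8a6rb273saxyqh4jxlz-konsole-18.12.3",
    )?);
    // Transitively follows the References of every PathInfo returned by query_path_info.
    for (path, info) in store.compute_path_closure(roots).await? {
        println!("{}\t{} bytes", path, info.nar_size);
    }
    Ok(())
}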

nix-rust/src/util/base32.rs Normal file (160 lines)

@ -0,0 +1,160 @@
use crate::error::Error;
use lazy_static::lazy_static;
pub fn encoded_len(input_len: usize) -> usize {
if input_len == 0 {
0
} else {
(input_len * 8 - 1) / 5 + 1
}
}
pub fn decoded_len(input_len: usize) -> usize {
input_len * 5 / 8
}
static BASE32_CHARS: &'static [u8; 32] = &b"0123456789abcdfghijklmnpqrsvwxyz";
lazy_static! {
static ref BASE32_CHARS_REVERSE: Box<[u8; 256]> = {
let mut xs = [0xffu8; 256];
for (n, c) in BASE32_CHARS.iter().enumerate() {
xs[*c as usize] = n as u8;
}
Box::new(xs)
};
}
pub fn encode(input: &[u8]) -> String {
let mut buf = vec![0; encoded_len(input.len())];
encode_into(input, &mut buf);
std::str::from_utf8(&buf).unwrap().to_string()
}
pub fn encode_into(input: &[u8], output: &mut [u8]) {
let len = encoded_len(input.len());
assert_eq!(len, output.len());
let mut nr_bits_left: usize = 0;
let mut bits_left: u16 = 0;
let mut pos = len;
for b in input {
bits_left |= (*b as u16) << nr_bits_left;
nr_bits_left += 8;
while nr_bits_left > 5 {
output[pos - 1] = BASE32_CHARS[(bits_left & 0x1f) as usize];
pos -= 1;
bits_left >>= 5;
nr_bits_left -= 5;
}
}
if nr_bits_left > 0 {
output[pos - 1] = BASE32_CHARS[(bits_left & 0x1f) as usize];
pos -= 1;
}
assert_eq!(pos, 0);
}
pub fn decode(input: &str) -> Result<Vec<u8>, crate::Error> {
let mut res = Vec::with_capacity(decoded_len(input.len()));
let mut nr_bits_left: usize = 0;
let mut bits_left: u16 = 0;
for c in input.chars().rev() {
let b = BASE32_CHARS_REVERSE[c as usize];
if b == 0xff {
return Err(Error::BadBase32);
}
bits_left |= (b as u16) << nr_bits_left;
nr_bits_left += 5;
if nr_bits_left >= 8 {
res.push((bits_left & 0xff) as u8);
bits_left >>= 8;
nr_bits_left -= 8;
}
}
if nr_bits_left > 0 && bits_left != 0 {
return Err(Error::BadBase32);
}
Ok(res)
}
#[cfg(test)]
mod tests {
use super::*;
use assert_matches::assert_matches;
use hex;
use proptest::proptest;
#[test]
fn test_encode() {
assert_eq!(encode(&[]), "");
assert_eq!(
encode(&hex::decode("0839703786356bca59b0f4a32987eb2e6de43ae8").unwrap()),
"x0xf8v9fxf3jk8zln1cwlsrmhqvp0f88"
);
assert_eq!(
encode(
&hex::decode("ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")
.unwrap()
),
"1b8m03r63zqhnjf7l5wnldhh7c134ap5vpj0850ymkq1iyzicy5s"
);
assert_eq!(
encode(
&hex::decode("ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f")
.unwrap()
),
"2gs8k559z4rlahfx0y688s49m2vvszylcikrfinm30ly9rak69236nkam5ydvly1ai7xac99vxfc4ii84hawjbk876blyk1jfhkbbyx"
);
}
#[test]
fn test_decode() {
assert_eq!(hex::encode(decode("").unwrap()), "");
assert_eq!(
hex::encode(decode("x0xf8v9fxf3jk8zln1cwlsrmhqvp0f88").unwrap()),
"0839703786356bca59b0f4a32987eb2e6de43ae8"
);
assert_eq!(
hex::encode(decode("1b8m03r63zqhnjf7l5wnldhh7c134ap5vpj0850ymkq1iyzicy5s").unwrap()),
"ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
);
assert_eq!(
hex::encode(decode("2gs8k559z4rlahfx0y688s49m2vvszylcikrfinm30ly9rak69236nkam5ydvly1ai7xac99vxfc4ii84hawjbk876blyk1jfhkbbyx").unwrap()),
"ddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f"
);
assert_matches!(
decode("xoxf8v9fxf3jk8zln1cwlsrmhqvp0f88"),
Err(Error::BadBase32)
);
assert_matches!(
decode("2b8m03r63zqhnjf7l5wnldhh7c134ap5vpj0850ymkq1iyzicy5s"),
Err(Error::BadBase32)
);
assert_matches!(decode("2"), Err(Error::BadBase32));
assert_matches!(decode("2gs"), Err(Error::BadBase32));
assert_matches!(decode("2gs8"), Err(Error::BadBase32));
}
proptest! {
#[test]
fn roundtrip(s: Vec<u8>) {
assert_eq!(s, decode(&encode(&s)).unwrap());
}
}
}
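
Two properties worth noting in a sketch (not part of the diff): encoded_len makes a 20-byte store path hash render as exactly 32 characters, and because the encoder walks the input back to front, the top bits of the last input byte become the first output character:

fn base32_properties() {
    assert_eq!(encoded_len(20), 32); // (20 * 8 - 1) / 5 + 1

    let mut hash = [0u8; 20];
    hash[19] = 0xf8; // top five bits are 0b11111 = 31, which encodes as 'z'
    assert!(encode(&hash).starts_with('z'));
}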

nix-rust/src/util/mod.rs Normal file (1 line)

@ -0,0 +1 @@
pub mod base32;


@ -1,173 +0,0 @@
%undefine _hardened_build
%global nixbld_user "nix-builder-"
%global nixbld_group "nixbld"
# NOTE: BUILD on EL7 requires
# - Centos / RHEL7 software collection repository
# yum install centos-release-scl
#
# - Recent boost backport
# curl https://copr.fedorainfracloud.org/coprs/whosthere/boost/repo/epel-7/whosthere-boost-epel-7.repo -o /etc/yum.repos.d/whosthere-boost-epel-7.repo
#
# Disable documentation generation
# necessary on some platforms
%bcond_without docgen
Summary: The Nix software deployment system
Name: nix
Version: @PACKAGE_VERSION@
Release: 2%{?dist}
License: LGPLv2+
Group: Applications/System
URL: http://nixos.org/
Source0: %{name}-%{version}.tar.bz2
Requires: curl
Requires: bzip2
Requires: gzip
Requires: xz
BuildRequires: bison
BuildRequires: boost-devel >= 1.60
BuildRequires: bzip2-devel
# for RHEL <= 7, we need software collections for a C++14 compatible compiler
%if 0%{?rhel}
BuildRequires: devtoolset-7-gcc
BuildRequires: devtoolset-7-gcc-c++
%endif
BuildRequires: flex
BuildRequires: libcurl-devel
BuildRequires: libseccomp-devel
BuildRequires: openssl-devel
BuildRequires: sqlite-devel
BuildRequires: xz-devel
%description
Nix is a purely functional package manager. It allows multiple
versions of a package to be installed side-by-side, ensures that
dependency specifications are complete, supports atomic upgrades and
rollbacks, allows non-root users to install software, and has many
other features. It is the basis of the NixOS Linux distribution, but
it can be used equally well under other Unix systems.
%package devel
Summary: Development files for %{name}
Requires: %{name}%{?_isa} = %{version}-%{release}
%description devel
The %{name}-devel package contains libraries and header files for
developing applications that use %{name}.
%package doc
Summary: Documentation files for %{name}
BuildArch: noarch
Requires: %{name} = %{version}-%{release}
%description doc
The %{name}-doc package contains documentation files for %{name}.
%prep
%setup -q
%build
%if 0%{?rhel}
source /opt/rh/devtoolset-7/enable
%endif
extraFlags=
# - override docdir so large documentation files are owned by the
# -doc subpackage
# - set localstatedir by hand to the preferred nix value
%configure --localstatedir=/nix/var \
%{!?without_docgen:--disable-doc-gen} \
--docdir=%{_defaultdocdir}/%{name}-doc-%{version} \
$extraFlags
make V=1 %{?_smp_mflags}
%install
%if 0%{?rhel}
source /opt/rh/devtoolset-7/enable
%endif
make DESTDIR=$RPM_BUILD_ROOT install
find $RPM_BUILD_ROOT -name '*.la' -exec rm -f {} ';'
# make the store
mkdir -p $RPM_BUILD_ROOT/nix/store
chmod 1775 $RPM_BUILD_ROOT/nix/store
# make per-user directories
for d in profiles gcroots;
do
mkdir -p $RPM_BUILD_ROOT/nix/var/nix/$d/per-user
chmod 1777 $RPM_BUILD_ROOT/nix/var/nix/$d/per-user
done
# fix permission of nix profile
# (until this is fixed in the relevant Makefile)
chmod -x $RPM_BUILD_ROOT%{_sysconfdir}/profile.d/nix.sh
# we ship this file in the base package
rm -f $RPM_BUILD_ROOT%{_defaultdocdir}/%{name}-doc-%{version}/README
# Get rid of Upstart job.
rm -rf $RPM_BUILD_ROOT%{_sysconfdir}/init
%clean
rm -rf $RPM_BUILD_ROOT
%pre
getent group %{nixbld_group} >/dev/null || groupadd -r %{nixbld_group}
for i in $(seq 10);
do
getent passwd %{nixbld_user}$i >/dev/null || \
useradd -r -g %{nixbld_group} -G %{nixbld_group} -d /var/empty \
-s %{_sbindir}/nologin \
-c "Nix build user $i" %{nixbld_user}$i
done
%post
chgrp %{nixbld_group} /nix/store
%if ! 0%{?rhel} || 0%{?rhel} >= 7
# Enable and start Nix worker
systemctl enable nix-daemon.socket nix-daemon.service
systemctl start nix-daemon.socket
%endif
%files
%license COPYING
%{_bindir}/nix*
%{_libdir}/*.so
%{_prefix}/libexec/*
%if ! 0%{?rhel} || 0%{?rhel} >= 7
%{_prefix}/lib/systemd/system/nix-daemon.socket
%{_prefix}/lib/systemd/system/nix-daemon.service
%endif
%{_datadir}/nix
#%if ! %{without docgen}
#%{_mandir}/man1/*.1*
#%{_mandir}/man5/*.5*
#%{_mandir}/man8/*.8*
#%endif
%config(noreplace) %{_sysconfdir}/profile.d/nix.sh
%config(noreplace) %{_sysconfdir}/profile.d/nix-daemon.sh
/nix
%files devel
%{_includedir}/nix
%{_prefix}/lib/pkgconfig/*.pc
#%if ! %{without docgen}
#%files doc
#%docdir %{_defaultdocdir}/%{name}-doc-%{version}
#%{_defaultdocdir}/%{name}-doc-%{version}
#%endif


@ -4,4 +4,12 @@ GLOBAL_CXXFLAGS += -g -Wall
-include Makefile.config
OPTIMIZE = 1
ifeq ($(OPTIMIZE), 1)
GLOBAL_CXXFLAGS += -O3
else
GLOBAL_CXXFLAGS += -O0
endif
include mk/lib.mk


@ -2,13 +2,10 @@ AC_INIT(nix-perl, m4_esyscmd([bash -c "echo -n $(cat ../.version)$VERSION_SUFFIX
AC_CONFIG_SRCDIR(MANIFEST)
AC_CONFIG_AUX_DIR(../config)
# Set default flags for nix (as per AC_PROG_CC/CXX docs),
# while still allowing the user to override them from the command line.
: ${CFLAGS="-O3"}
: ${CXXFLAGS="-O3"}
CFLAGS=
CXXFLAGS=
AC_PROG_CC
AC_PROG_CXX
AX_CXX_COMPILE_STDCXX_11
# Use 64-bit file system calls so that we can support files > 2 GiB.
AC_SYS_LARGEFILE
@ -71,14 +68,15 @@ AC_SUBST(perlFlags)
PKG_CHECK_MODULES([NIX], [nix-store])
NEED_PROG([NIX_INSTANTIATE_PROGRAM], [nix-instantiate])
NEED_PROG([NIX], [nix])
# Get nix configure values
nixbindir=$("$NIX_INSTANTIATE_PROGRAM" --eval '<nix/config.nix>' -A nixBinDir | tr -d \")
nixlibexecdir=$("$NIX_INSTANTIATE_PROGRAM" --eval '<nix/config.nix>' -A nixLibexecDir | tr -d \")
nixlocalstatedir=$("$NIX_INSTANTIATE_PROGRAM" --eval '<nix/config.nix>' -A nixLocalstateDir | tr -d \")
nixsysconfdir=$("$NIX_INSTANTIATE_PROGRAM" --eval '<nix/config.nix>' -A nixSysconfDir | tr -d \")
nixstoredir=$("$NIX_INSTANTIATE_PROGRAM" --eval '<nix/config.nix>' -A nixStoreDir | tr -d \")
export NIX_REMOTE=daemon
nixbindir=$("$NIX" --experimental-features nix-command eval --raw -f '<nix/config.nix>' nixBinDir)
nixlibexecdir=$("$NIX" --experimental-features nix-command eval --raw -f '<nix/config.nix>' nixLibexecDir)
nixlocalstatedir=$("$NIX" --experimental-features nix-command eval --raw -f '<nix/config.nix>' nixLocalstateDir)
nixsysconfdir=$("$NIX" --experimental-features nix-command eval --raw -f '<nix/config.nix>' nixSysconfDir)
nixstoredir=$("$NIX" --experimental-features nix-command eval --raw -f '<nix/config.nix>' nixStoreDir)
AC_SUBST(nixbindir)
AC_SUBST(nixlibexecdir)
AC_SUBST(nixlocalstatedir)


@ -11,10 +11,6 @@ $logDir = $ENV{"NIX_LOG_DIR"} || "@nixlocalstatedir@/log/nix";
$confDir = $ENV{"NIX_CONF_DIR"} || "@nixsysconfdir@/nix";
$storeDir = $ENV{"NIX_STORE_DIR"} || "@nixstoredir@";
$bzip2 = "@bzip2@";
$xz = "@xz@";
$curl = "@curl@";
$useBindings = 1;
%config = ();


@ -59,7 +59,7 @@ void setVerbosity(int level)
int isValidPath(char * path)
CODE:
try {
RETVAL = store()->isValidPath(path);
RETVAL = store()->isValidPath(store()->parseStorePath(path));
} catch (Error & e) {
croak("%s", e.what());
}
@ -70,9 +70,8 @@ int isValidPath(char * path)
SV * queryReferences(char * path)
PPCODE:
try {
PathSet paths = store()->queryPathInfo(path)->references;
for (PathSet::iterator i = paths.begin(); i != paths.end(); ++i)
XPUSHs(sv_2mortal(newSVpv(i->c_str(), 0)));
for (auto & i : store()->queryPathInfo(store()->parseStorePath(path))->references)
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(i).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -81,7 +80,7 @@ SV * queryReferences(char * path)
SV * queryPathHash(char * path)
PPCODE:
try {
auto s = store()->queryPathInfo(path)->narHash.to_string();
auto s = store()->queryPathInfo(store()->parseStorePath(path))->narHash.to_string();
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
@ -91,9 +90,9 @@ SV * queryPathHash(char * path)
SV * queryDeriver(char * path)
PPCODE:
try {
auto deriver = store()->queryPathInfo(path)->deriver;
if (deriver == "") XSRETURN_UNDEF;
XPUSHs(sv_2mortal(newSVpv(deriver.c_str(), 0)));
auto info = store()->queryPathInfo(store()->parseStorePath(path));
if (!info->deriver) XSRETURN_UNDEF;
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(*info->deriver).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -102,18 +101,18 @@ SV * queryDeriver(char * path)
SV * queryPathInfo(char * path, int base32)
PPCODE:
try {
auto info = store()->queryPathInfo(path);
if (info->deriver == "")
auto info = store()->queryPathInfo(store()->parseStorePath(path));
if (info->deriver)
XPUSHs(&PL_sv_undef);
else
XPUSHs(sv_2mortal(newSVpv(info->deriver.c_str(), 0)));
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(*info->deriver).c_str(), 0)));
auto s = info->narHash.to_string(base32 ? Base32 : Base16);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
mXPUSHi(info->registrationTime);
mXPUSHi(info->narSize);
AV * arr = newAV();
for (PathSet::iterator i = info->references.begin(); i != info->references.end(); ++i)
av_push(arr, newSVpv(i->c_str(), 0));
for (auto & i : info->references)
av_push(arr, newSVpv(store()->printStorePath(i).c_str(), 0));
XPUSHs(sv_2mortal(newRV((SV *) arr)));
} catch (Error & e) {
croak("%s", e.what());
@ -123,8 +122,8 @@ SV * queryPathInfo(char * path, int base32)
SV * queryPathFromHashPart(char * hashPart)
PPCODE:
try {
Path path = store()->queryPathFromHashPart(hashPart);
XPUSHs(sv_2mortal(newSVpv(path.c_str(), 0)));
auto path = store()->queryPathFromHashPart(hashPart);
XPUSHs(sv_2mortal(newSVpv(path ? store()->printStorePath(*path).c_str() : "", 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -133,11 +132,11 @@ SV * queryPathFromHashPart(char * hashPart)
SV * computeFSClosure(int flipDirection, int includeOutputs, ...)
PPCODE:
try {
PathSet paths;
StorePathSet paths;
for (int n = 2; n < items; ++n)
store()->computeFSClosure(SvPV_nolen(ST(n)), paths, flipDirection, includeOutputs);
for (PathSet::iterator i = paths.begin(); i != paths.end(); ++i)
XPUSHs(sv_2mortal(newSVpv(i->c_str(), 0)));
store()->computeFSClosure(store()->parseStorePath(SvPV_nolen(ST(n))), paths, flipDirection, includeOutputs);
for (auto & i : paths)
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(i).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -146,11 +145,11 @@ SV * computeFSClosure(int flipDirection, int includeOutputs, ...)
SV * topoSortPaths(...)
PPCODE:
try {
PathSet paths;
for (int n = 0; n < items; ++n) paths.insert(SvPV_nolen(ST(n)));
Paths sorted = store()->topoSortPaths(paths);
for (Paths::iterator i = sorted.begin(); i != sorted.end(); ++i)
XPUSHs(sv_2mortal(newSVpv(i->c_str(), 0)));
StorePathSet paths;
for (int n = 0; n < items; ++n) paths.insert(store()->parseStorePath(SvPV_nolen(ST(n))));
auto sorted = store()->topoSortPaths(paths);
for (auto & i : sorted)
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(i).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -159,7 +158,7 @@ SV * topoSortPaths(...)
SV * followLinksToStorePath(char * path)
CODE:
try {
RETVAL = newSVpv(store()->followLinksToStorePath(path).c_str(), 0);
RETVAL = newSVpv(store()->printStorePath(store()->followLinksToStorePath(path)).c_str(), 0);
} catch (Error & e) {
croak("%s", e.what());
}
@ -170,8 +169,8 @@ SV * followLinksToStorePath(char * path)
void exportPaths(int fd, ...)
PPCODE:
try {
Paths paths;
for (int n = 1; n < items; ++n) paths.push_back(SvPV_nolen(ST(n)));
StorePathSet paths;
for (int n = 1; n < items; ++n) paths.insert(store()->parseStorePath(SvPV_nolen(ST(n))));
FdSink sink(fd);
store()->exportPaths(paths, sink);
} catch (Error & e) {
@ -275,8 +274,8 @@ int checkSignature(SV * publicKey_, SV * sig_, char * msg)
SV * addToStore(char * srcPath, int recursive, char * algo)
PPCODE:
try {
Path path = store()->addToStore(baseNameOf(srcPath), srcPath, recursive, parseHashType(algo));
XPUSHs(sv_2mortal(newSVpv(path.c_str(), 0)));
auto path = store()->addToStore(std::string(baseNameOf(srcPath)), srcPath, recursive, parseHashType(algo));
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(path).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -286,8 +285,8 @@ SV * makeFixedOutputPath(int recursive, char * algo, char * hash, char * name)
PPCODE:
try {
Hash h(hash, parseHashType(algo));
Path path = store()->makeFixedOutputPath(recursive, h, name);
XPUSHs(sv_2mortal(newSVpv(path.c_str(), 0)));
auto path = store()->makeFixedOutputPath(recursive, h, name);
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(path).c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
}
@ -298,35 +297,35 @@ SV * derivationFromPath(char * drvPath)
HV *hash;
CODE:
try {
Derivation drv = store()->derivationFromPath(drvPath);
Derivation drv = store()->derivationFromPath(store()->parseStorePath(drvPath));
hash = newHV();
HV * outputs = newHV();
for (DerivationOutputs::iterator i = drv.outputs.begin(); i != drv.outputs.end(); ++i)
hv_store(outputs, i->first.c_str(), i->first.size(), newSVpv(i->second.path.c_str(), 0), 0);
for (auto & i : drv.outputs)
hv_store(outputs, i.first.c_str(), i.first.size(), newSVpv(store()->printStorePath(i.second.path).c_str(), 0), 0);
hv_stores(hash, "outputs", newRV((SV *) outputs));
AV * inputDrvs = newAV();
for (DerivationInputs::iterator i = drv.inputDrvs.begin(); i != drv.inputDrvs.end(); ++i)
av_push(inputDrvs, newSVpv(i->first.c_str(), 0)); // !!! ignores i->second
for (auto & i : drv.inputDrvs)
av_push(inputDrvs, newSVpv(store()->printStorePath(i.first).c_str(), 0)); // !!! ignores i->second
hv_stores(hash, "inputDrvs", newRV((SV *) inputDrvs));
AV * inputSrcs = newAV();
for (PathSet::iterator i = drv.inputSrcs.begin(); i != drv.inputSrcs.end(); ++i)
av_push(inputSrcs, newSVpv(i->c_str(), 0));
for (auto & i : drv.inputSrcs)
av_push(inputSrcs, newSVpv(store()->printStorePath(i).c_str(), 0));
hv_stores(hash, "inputSrcs", newRV((SV *) inputSrcs));
hv_stores(hash, "platform", newSVpv(drv.platform.c_str(), 0));
hv_stores(hash, "builder", newSVpv(drv.builder.c_str(), 0));
AV * args = newAV();
for (Strings::iterator i = drv.args.begin(); i != drv.args.end(); ++i)
av_push(args, newSVpv(i->c_str(), 0));
for (auto & i : drv.args)
av_push(args, newSVpv(i.c_str(), 0));
hv_stores(hash, "args", newRV((SV *) args));
HV * env = newHV();
for (StringPairs::iterator i = drv.env.begin(); i != drv.env.end(); ++i)
hv_store(env, i->first.c_str(), i->first.size(), newSVpv(i->second.c_str(), 0), 0);
for (auto & i : drv.env)
hv_store(env, i.first.c_str(), i.first.size(), newSVpv(i.second.c_str(), 0), 0);
hv_stores(hash, "env", newRV((SV *) env));
RETVAL = newRV_noinc((SV *)hash);
@ -340,7 +339,7 @@ SV * derivationFromPath(char * drvPath)
void addTempRoot(char * storePath)
PPCODE:
try {
store()->addTempRoot(storePath);
store()->addTempRoot(store()->parseStorePath(storePath));
} catch (Error & e) {
croak("%s", e.what());
}

precompiled-headers.h Normal file (61 lines)

@ -0,0 +1,61 @@
#include <algorithm>
#include <array>
#include <atomic>
#include <cassert>
#include <cctype>
#include <chrono>
#include <climits>
#include <cmath>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <exception>
#include <functional>
#include <future>
#include <iostream>
#include <limits>
#include <list>
#include <locale>
#include <map>
#include <memory>
#include <mutex>
#include <numeric>
#include <optional>
#include <queue>
#include <random>
#include <regex>
#include <set>
#include <sstream>
#include <stack>
#include <stdexcept>
#include <string>
#include <thread>
#include <unordered_map>
#include <unordered_set>
#include <vector>
#include <boost/format.hpp>
#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <grp.h>
#include <netdb.h>
#include <pwd.h>
#include <signal.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/utsname.h>
#include <sys/wait.h>
#include <termios.h>
#include <unistd.h>
#include "util.hh"
#include "args.hh"


@ -30,9 +30,7 @@ rec {
});
configureFlags =
[
"--enable-gc"
] ++ lib.optionals stdenv.isLinux [
lib.optionals stdenv.isLinux [
"--with-sandbox-shell=${sh}/bin/busybox"
];
@ -49,9 +47,12 @@ rec {
buildDeps =
[ curl
bzip2 xz brotli editline
openssl pkgconfig sqlite boehmgc
bzip2 xz brotli zlib editline
openssl pkgconfig sqlite
libarchive
boost
nlohmann_json
rustc cargo
# Tests
git
@ -72,6 +73,10 @@ rec {
*/
}));
propagatedDeps =
[ (boehmgc.override { enableLargeConfig = true; })
];
perlDeps =
[ perl
perlPackages.DBDSQLite


@ -1,5 +1,5 @@
{ nix ? builtins.fetchGit ./.
, nixpkgs ? builtins.fetchTarball https://github.com/NixOS/nixpkgs-channels/archive/nixos-19.03.tar.gz
, nixpkgs ? builtins.fetchTarball https://github.com/NixOS/nixpkgs/archive/nixos-20.03-small.tar.gz
, officialRelease ? false
, systems ? [ "x86_64-linux" "i686-linux" "x86_64-darwin" "aarch64-linux" ]
}:
@ -10,6 +10,50 @@ let
jobs = rec {
# Create a "vendor" directory that contains the crates listed in
# Cargo.lock, and include it in the Nix tarball. This allows Nix
# to be built without network access.
vendoredCrates =
let
lockFile = builtins.fromTOML (builtins.readFile nix-rust/Cargo.lock);
files = map (pkg: import <nix/fetchurl.nix> {
url = "https://crates.io/api/v1/crates/${pkg.name}/${pkg.version}/download";
sha256 = lockFile.metadata."checksum ${pkg.name} ${pkg.version} (registry+https://github.com/rust-lang/crates.io-index)";
}) (builtins.filter (pkg: pkg.source or "" == "registry+https://github.com/rust-lang/crates.io-index") lockFile.package);
in pkgs.runCommand "cargo-vendor-dir" {}
''
mkdir -p $out/vendor
cat > $out/vendor/config <<EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
EOF
${toString (builtins.map (file: ''
mkdir $out/vendor/tmp
tar xvf ${file} -C $out/vendor/tmp
dir=$(echo $out/vendor/tmp/*)
# Add just enough metadata to keep Cargo happy.
printf '{"files":{},"package":"${file.outputHash}"}' > "$dir/.cargo-checksum.json"
# Clean up some cruft from the winapi crates. FIXME: find
# a way to remove winapi* from our dependencies.
if [[ $dir =~ /winapi ]]; then
find $dir -name "*.a" -print0 | xargs -0 rm -f --
fi
mv "$dir" $out/vendor/
rm -rf $out/vendor/tmp
'') files)}
'';
tarball =
with pkgs;
@ -23,9 +67,7 @@ let
src = nix;
inherit officialRelease;
buildInputs = tarballDeps ++ buildDeps;
configureFlags = "--enable-gc";
buildInputs = tarballDeps ++ buildDeps ++ propagatedDeps;
postUnpack = ''
(cd $sourceRoot && find . -type f) | cut -c3- > $sourceRoot/.dist-files
@ -40,6 +82,8 @@ let
distPhase =
''
cp -prd ${vendoredCrates}/vendor/ nix-rust/vendor/
runHook preDist
make dist
mkdir -p $out/tarballs
@ -65,8 +109,12 @@ let
name = "nix";
src = tarball;
outputs = [ "out" "dev" ];
buildInputs = buildDeps;
propagatedBuildInputs = propagatedDeps;
preConfigure =
# Copy libboost_context so we don't get all of Boost in our closure.
# https://github.com/NixOS/nixpkgs/issues/45462
@ -91,6 +139,8 @@ let
doInstallCheck = true;
installCheckFlags = "sysconfdir=$(out)/etc";
separateDebugInfo = true;
});
@ -128,7 +178,7 @@ let
in
runCommand "nix-binary-tarball-${version}"
{ nativeBuildInputs = lib.optional (system != "aarch64-linux") shellcheck;
{ #nativeBuildInputs = lib.optional (system != "aarch64-linux") shellcheck;
meta.description = "Distribution-independent Nix bootstrap binaries for ${system}";
}
''
@ -196,31 +246,26 @@ let
name = "nix-build";
src = tarball;
buildInputs = buildDeps;
enableParallelBuilding = true;
buildInputs = buildDeps ++ propagatedDeps;
dontInstall = false;
doInstallCheck = true;
lcovFilter = [ "*/boost/*" "*-tab.*" "*/nlohmann/*" "*/linenoise/*" ];
lcovFilter = [ "*/boost/*" "*-tab.*" ];
# We call `dot', and even though we just use it to
# syntax-check generated dot files, it still requires some
# fonts. So provide those.
FONTCONFIG_FILE = texFunctions.fontsConf;
# To test building without precompiled headers.
makeFlagsArray = [ "PRECOMPILE_HEADERS=0" ];
};
#rpm_fedora27x86_64 = makeRPM_x86_64 (diskImageFunsFun: diskImageFunsFun.fedora27x86_64) [ ];
#deb_debian8i386 = makeDeb_i686 (diskImageFuns: diskImageFuns.debian8i386) [ "libsodium-dev" ] [ "libsodium13" ];
#deb_debian8x86_64 = makeDeb_x86_64 (diskImageFunsFun: diskImageFunsFun.debian8x86_64) [ "libsodium-dev" ] [ "libsodium13" ];
#deb_ubuntu1710i386 = makeDeb_i686 (diskImageFuns: diskImageFuns.ubuntu1710i386) [ ] [ "libsodium18" ];
#deb_ubuntu1710x86_64 = makeDeb_x86_64 (diskImageFuns: diskImageFuns.ubuntu1710x86_64) [ ] [ "libsodium18" "libboost-context1.62.0" ];
# System tests.
tests.remoteBuilds = (import ./tests/remote-builds.nix rec {
inherit nixpkgs;
@ -262,7 +307,7 @@ let
x86_64-linux = "${build.x86_64-linux}";
}
EOF
su - alice -c 'nix upgrade-nix -vvv --nix-store-paths-url file:///tmp/paths.nix'
su - alice -c 'nix --experimental-features nix-command upgrade-nix -vvv --nix-store-paths-url file:///tmp/paths.nix'
(! [ -L /home/alice/.profile-1-link ])
su - alice -c 'PAGER= nix-store -qR ${build.x86_64-linux}'
@ -271,6 +316,7 @@ let
umount /nix
''); # */
/*
tests.evalNixpkgs =
import (nixpkgs + "/pkgs/top-level/make-tarball.nix") {
inherit nixpkgs;
@ -289,6 +335,7 @@ let
touch $out
'';
*/
installerScript =
@ -300,7 +347,7 @@ let
substitute ${./scripts/install.in} $out/install \
${pkgs.lib.concatMapStrings
(system: "--replace '@binaryTarball_${system}@' $(nix hash-file --base16 --type sha256 ${binaryTarball.${system}}/*.tar.xz) ")
(system: "--replace '@binaryTarball_${system}@' $(nix --experimental-features nix-command hash-file --base16 --type sha256 ${binaryTarball.${system}}/*.tar.xz) ")
[ "x86_64-linux" "i686-linux" "x86_64-darwin" "aarch64-linux" ]
} \
--replace '@nixVersion@' ${build.x86_64-linux.src.version}
@ -326,8 +373,8 @@ let
tests.remoteBuilds
tests.nix-copy-closure
tests.binaryTarball
tests.evalNixpkgs
tests.evalNixOS
#tests.evalNixpkgs
#tests.evalNixOS
installerScript
];
};
@ -335,55 +382,4 @@ let
};
makeRPM_i686 = makeRPM "i686-linux";
makeRPM_x86_64 = makeRPM "x86_64-linux";
makeRPM =
system: diskImageFun: extraPackages:
with import nixpkgs { inherit system; };
releaseTools.rpmBuild rec {
name = "nix-rpm";
src = jobs.tarball;
diskImage = (diskImageFun vmTools.diskImageFuns)
{ extraPackages =
[ "sqlite" "sqlite-devel" "bzip2-devel" "libcurl-devel" "openssl-devel" "xz-devel" "libseccomp-devel" "libsodium-devel" "boost-devel" "bison" "flex" ]
++ extraPackages; };
# At most 2047MB can be simulated in qemu-system-i386
memSize = 2047;
meta.schedulingPriority = 50;
postRPMInstall = "cd /tmp/rpmout/BUILD/nix-* && make installcheck";
#enableParallelBuilding = true;
};
makeDeb_i686 = makeDeb "i686-linux";
makeDeb_x86_64 = makeDeb "x86_64-linux";
makeDeb =
system: diskImageFun: extraPackages: extraDebPackages:
with import nixpkgs { inherit system; };
releaseTools.debBuild {
name = "nix-deb";
src = jobs.tarball;
diskImage = (diskImageFun vmTools.diskImageFuns)
{ extraPackages =
[ "libsqlite3-dev" "libbz2-dev" "libcurl-dev" "libcurl3-nss" "libssl-dev" "liblzma-dev" "libseccomp-dev" "libsodium-dev" "libboost-all-dev" ]
++ extraPackages; };
memSize = 2047;
meta.schedulingPriority = 50;
postInstall = "make installcheck";
configureFlags = "--sysconfdir=/etc";
debRequires =
[ "curl" "libsqlite3-0" "libbz2-1.0" "bzip2" "xz-utils" "libssl1.0.0" "liblzma5" "libseccomp2" ]
++ extraDebPackages;
debMaintainer = "Eelco Dolstra <eelco.dolstra@logicblox.com>";
doInstallCheck = true;
#enableParallelBuilding = true;
};
in jobs


@ -39,7 +39,7 @@ EOF
poly_configure_nix_daemon_service() {
_sudo "to set up the nix-daemon as a LaunchDaemon" \
ln -sfn "/nix/var/nix/profiles/default$PLIST_DEST" "$PLIST_DEST"
cp -f "/nix/var/nix/profiles/default$PLIST_DEST" "$PLIST_DEST"
_sudo "to load the LaunchDaemon plist for nix-daemon" \
launchctl load /Library/LaunchDaemons/org.nixos.nix-daemon.plist


@ -19,9 +19,6 @@ readonly BLUE_UL='\033[38;4;34m'
readonly GREEN='\033[38;32m'
readonly GREEN_UL='\033[38;4;32m'
readonly RED='\033[38;31m'
readonly RED_UL='\033[38;4;31m'
readonly YELLOW='\033[38;33m'
readonly YELLOW_UL='\033[38;4;33m'
readonly NIX_USER_COUNT="32"
readonly NIX_BUILD_GROUP_ID="30000"
@ -278,73 +275,9 @@ EOF
fi
if type nix-env 2> /dev/null >&2; then
failure <<EOF
Nix already appears to be installed, and this tool assumes it is
_not_ yet installed.
$(uninstall_directions)
EOF
fi
if [ "${NIX_REMOTE:-}" != "" ]; then
failure <<EOF
For some reason, \$NIX_REMOTE is set. It really should not be set
before this installer runs, and it hints that Nix is currently
installed. Please delete the old Nix installation and start again.
Note: You might need to close your shell window and open a new shell
to clear the variable.
EOF
fi
if echo "${SSL_CERT_FILE:-}" | grep -qE "(nix/var/nix|nix-profile)"; then
failure <<EOF
It looks like \$SSL_CERT_FILE is set to a path that used to be part of
the old Nix installation. Please unset that variable and try again:
$ unset SSL_CERT_FILE
EOF
fi
for file in ~/.bash_profile ~/.bash_login ~/.profile ~/.zshenv ~/.zprofile ~/.zshrc ~/.zlogin; do
if [ -f "$file" ]; then
if grep -l "^[^#].*.nix-profile" "$file"; then
failure <<EOF
I found a reference to a ".nix-profile" in $file.
This has a high chance of breaking a new nix installation. It was most
likely put there by a previous Nix installer.
Please remove this reference and try running this again. You should
also look for similar references in:
- ~/.bash_profile
- ~/.bash_login
- ~/.profile
or other shell init files that you may have.
$(uninstall_directions)
EOF
fi
fi
done
if [ -d /nix/store ] || [ -d /nix/var ]; then
failure <<EOF
There are some relics of a previous installation of Nix at /nix, and
this script assumes Nix is _not_ yet installed. Please delete the old
Nix installation and start again.
$(uninstall_directions)
EOF
fi
if [ -d /etc/nix ]; then
failure <<EOF
There are some relics of a previous installation of Nix at /etc/nix, and
this script assumes Nix is _not_ yet installed. Please delete the old
Nix installation and start again.
warning <<EOF
Nix already appears to be installed. This installer may run into issues.
If an error occurs, try manually uninstalling, then rerunning this script.
$(uninstall_directions)
EOF
@ -352,7 +285,7 @@ EOF
for profile_target in "${PROFILE_TARGETS[@]}"; do
if [ -e "$profile_target$PROFILE_BACKUP_SUFFIX" ]; then
failure <<EOF
failure <<EOF
When this script runs, it backs up the current $profile_target to
$profile_target$PROFILE_BACKUP_SUFFIX. This backup file already exists, though.
@ -364,38 +297,10 @@ in case.
2. Take care to make sure that $profile_target$PROFILE_BACKUP_SUFFIX doesn't look like
it has anything nix-related in it. If it does, something is probably
quite wrong. Please open an issue or get in touch immediately.
3. Take care to make sure that $profile_target doesn't look like it has
anything nix-related in it. If it does, and $profile_target _did not_,
run:
$ /usr/bin/sudo /bin/mv $profile_target$PROFILE_BACKUP_SUFFIX $profile_target
and try again.
EOF
fi
if [ -e "$profile_target" ] && grep -qi "nix" "$profile_target"; then
failure <<EOF
It looks like $profile_target already has some Nix configuration in
there. There should be no reason to run this again. If you're having
trouble, please open an issue.
EOF
fi
done
danger_paths=("$ROOT_HOME/.nix-defexpr" "$ROOT_HOME/.nix-channels" "$ROOT_HOME/.nix-profile")
for danger_path in "${danger_paths[@]}"; do
if _sudo "making sure that $danger_path doesn't exist" \
test -e "$danger_path"; then
failure <<EOF
I found a file at $danger_path, which is a relic of a previous
installation. You must first delete this file before continuing.
$(uninstall_directions)
EOF
fi
done
}
setup_report() {
@ -529,24 +434,17 @@ create_build_users() {
}
create_directories() {
# FIXME: remove all of this because it duplicates LocalStore::LocalStore().
_sudo "to make the basic directory structure of Nix (part 1)" \
mkdir -pv -m 0755 /nix /nix/var /nix/var/log /nix/var/log/nix /nix/var/log/nix/drvs /nix/var/nix{,/db,/gcroots,/profiles,/temproots,/userpool}
mkdir -pv -m 0755 /nix /nix/var /nix/var/log /nix/var/log/nix /nix/var/log/nix/drvs /nix/var/nix{,/db,/gcroots,/profiles,/temproots,/userpool} /nix/var/nix/{gcroots,profiles}/per-user
_sudo "to make the basic directory structure of Nix (part 2)" \
mkdir -pv -m 1777 /nix/var/nix/{gcroots,profiles}/per-user
_sudo "to make the basic directory structure of Nix (part 3)" \
mkdir -pv -m 1775 /nix/store
_sudo "to make the basic directory structure of Nix (part 4)" \
_sudo "to make the basic directory structure of Nix (part 3)" \
chgrp "$NIX_BUILD_GROUP_NAME" /nix/store
_sudo "to set up the root user's profile (part 1)" \
mkdir -pv -m 0755 /nix/var/nix/profiles/per-user/root
_sudo "to set up the root user's profile (part 2)" \
mkdir -pv -m 0700 "$ROOT_HOME/.nix-defexpr"
_sudo "to place the default nix daemon configuration (part 1)" \
mkdir -pv -m 0555 /etc/nix
}
@ -589,7 +487,7 @@ EOF
We will:
- make sure your computer doesn't already have Nix files
(if it does, I will tell you how to clean them up.)
(if it does, I will tell you how to clean them up.)
- create local users (see the list above for the users we'll make)
- create a local group ($NIX_BUILD_GROUP_NAME)
- install Nix in to $NIX_ROOT
@ -772,9 +670,7 @@ main() {
welcome_to_nix
chat_about_sudo
if [ "${ALLOW_PREEXISTING_INSTALLATION:-}" = "" ]; then
validate_starting_assumptions
fi
validate_starting_assumptions
setup_report


@ -102,7 +102,7 @@ for i in $(cd "$self/store" >/dev/null && echo ./*); do
rm -rf "$i_tmp"
fi
if ! [ -e "$dest/store/$i" ]; then
cp -Rp "$self/store/$i" "$i_tmp"
cp -RPp "$self/store/$i" "$i_tmp"
chmod -R a-w "$i_tmp"
chmod +w "$i_tmp"
mv "$i_tmp" "$dest/store/$i"
@ -141,11 +141,9 @@ if [ -z "$_NIX_INSTALLER_TEST" ]; then
fi
added=
p=$HOME/.nix-profile/etc/profile.d/nix.sh
if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then
# Make the shell source nix.sh during login.
p=$HOME/.nix-profile/etc/profile.d/nix.sh
for i in .bash_profile .bash_login .profile; do
fn="$HOME/$i"
if [ -w "$fn" ]; then
@ -157,7 +155,6 @@ if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then
break
fi
done
fi
if [ -z "$added" ]; then


@ -88,7 +88,7 @@ poly_configure_nix_daemon_service() {
systemctl start nix-daemon.socket
_sudo "to start the nix-daemon.service" \
systemctl start nix-daemon.service
systemctl restart nix-daemon.service
}


@ -56,7 +56,7 @@ fi
unpack=$tmpDir/unpack
mkdir -p "$unpack"
tar -xf "$tarball" -C "$unpack" || oops "failed to unpack '$url'"
tar -xJf "$tarball" -C "$unpack" || oops "failed to unpack '$url'"
script=$(echo "$unpack"/*/install)


@ -2,48 +2,8 @@
if [ -n "${__ETC_PROFILE_NIX_SOURCED:-}" ]; then return; fi
__ETC_PROFILE_NIX_SOURCED=1
export NIX_USER_PROFILE_DIR="@localstatedir@/nix/profiles/per-user/$USER"
export NIX_PROFILES="@localstatedir@/nix/profiles/default $HOME/.nix-profile"
# Set up the per-user profile.
mkdir -m 0755 -p $NIX_USER_PROFILE_DIR
if ! test -O "$NIX_USER_PROFILE_DIR"; then
echo "WARNING: bad ownership on $NIX_USER_PROFILE_DIR" >&2
fi
if test -w $HOME; then
if ! test -L $HOME/.nix-profile; then
if test "$USER" != root; then
ln -s $NIX_USER_PROFILE_DIR/profile $HOME/.nix-profile
else
# Root installs in the system-wide profile by default.
ln -s @localstatedir@/nix/profiles/default $HOME/.nix-profile
fi
fi
# Subscribe the root user to the NixOS channel by default.
if [ "$USER" = root -a ! -e $HOME/.nix-channels ]; then
echo "https://nixos.org/channels/nixpkgs-unstable nixpkgs" > $HOME/.nix-channels
fi
# Create the per-user garbage collector roots directory.
NIX_USER_GCROOTS_DIR=@localstatedir@/nix/gcroots/per-user/$USER
mkdir -m 0755 -p $NIX_USER_GCROOTS_DIR
if ! test -O "$NIX_USER_GCROOTS_DIR"; then
echo "WARNING: bad ownership on $NIX_USER_GCROOTS_DIR" >&2
fi
# Set up a default Nix expression from which to install stuff.
if [ ! -e $HOME/.nix-defexpr -o -L $HOME/.nix-defexpr ]; then
rm -f $HOME/.nix-defexpr
mkdir -p $HOME/.nix-defexpr
if [ "$USER" != root ]; then
ln -s @localstatedir@/nix/profiles/per-user/root/channels $HOME/.nix-defexpr/channels_root
fi
fi
fi
# Set $NIX_SSL_CERT_FILE so that Nixpkgs applications like curl work.
if [ ! -z "${NIX_SSL_CERT_FILE:-}" ]; then
: # Allow users to override the NIX_SSL_CERT_FILE
@ -64,5 +24,4 @@ else
done
fi
export NIX_PATH="nixpkgs=@localstatedir@/nix/profiles/per-user/root/channels/nixpkgs:@localstatedir@/nix/profiles/per-user/root/channels"
export PATH="$HOME/.nix-profile/bin:@localstatedir@/nix/profiles/default/bin:$PATH"


@ -1,60 +1,10 @@
if [ -n "$HOME" ] && [ -n "$USER" ]; then
__savedpath="$PATH"
export PATH=@coreutils@
# Set up the per-user profile.
# This part should be kept in sync with nixpkgs:nixos/modules/programs/shell.nix
NIX_LINK=$HOME/.nix-profile
NIX_USER_PROFILE_DIR=@localstatedir@/nix/profiles/per-user/$USER
mkdir -m 0755 -p "$NIX_USER_PROFILE_DIR"
if [ "$(stat --printf '%u' "$NIX_USER_PROFILE_DIR")" != "$(id -u)" ]; then
echo "Nix: WARNING: bad ownership on "$NIX_USER_PROFILE_DIR", should be $(id -u)" >&2
fi
if [ -w "$HOME" ]; then
if ! [ -L "$NIX_LINK" ]; then
echo "Nix: creating $NIX_LINK" >&2
if [ "$USER" != root ]; then
if ! ln -s "$NIX_USER_PROFILE_DIR"/profile "$NIX_LINK"; then
echo "Nix: WARNING: could not create $NIX_LINK -> $NIX_USER_PROFILE_DIR/profile" >&2
fi
else
# Root installs in the system-wide profile by default.
ln -s @localstatedir@/nix/profiles/default "$NIX_LINK"
fi
fi
# Subscribe the user to the unstable Nixpkgs channel by default.
if [ ! -e "$HOME/.nix-channels" ]; then
echo "https://nixos.org/channels/nixpkgs-unstable nixpkgs" > "$HOME/.nix-channels"
fi
# Create the per-user garbage collector roots directory.
__user_gcroots=@localstatedir@/nix/gcroots/per-user/"$USER"
mkdir -m 0755 -p "$__user_gcroots"
if [ "$(stat --printf '%u' "$__user_gcroots")" != "$(id -u)" ]; then
echo "Nix: WARNING: bad ownership on $__user_gcroots, should be $(id -u)" >&2
fi
unset __user_gcroots
# Set up a default Nix expression from which to install stuff.
__nix_defexpr="$HOME"/.nix-defexpr
[ -L "$__nix_defexpr" ] && rm -f "$__nix_defexpr"
mkdir -m 0755 -p "$__nix_defexpr"
if [ "$USER" != root ] && [ ! -L "$__nix_defexpr"/channels_root ]; then
ln -s @localstatedir@/nix/profiles/per-user/root/channels "$__nix_defexpr"/channels_root
fi
unset __nix_defexpr
fi
# Append ~/.nix-defexpr/channels to $NIX_PATH so that <nixpkgs>
# paths work when the user has fetched the Nixpkgs channel.
export NIX_PATH=${NIX_PATH:+$NIX_PATH:}$HOME/.nix-defexpr/channels
# Set up environment.
# This part should be kept in sync with nixpkgs:nixos/modules/programs/environment.nix
export NIX_PROFILES="@localstatedir@/nix/profiles/default $HOME/.nix-profile"
@ -78,6 +28,6 @@ if [ -n "$HOME" ] && [ -n "$USER" ]; then
export MANPATH="$NIX_LINK/share/man:$MANPATH"
fi
export PATH="$NIX_LINK/bin:$__savedpath"
unset __savedpath NIX_LINK NIX_USER_PROFILE_DIR
export PATH="$NIX_LINK/bin:$PATH"
unset NIX_LINK
fi


@ -1,13 +1,13 @@
{ useClang ? false }:
with import (builtins.fetchTarball https://github.com/NixOS/nixpkgs-channels/archive/nixos-19.03.tar.gz) {};
with import (builtins.fetchTarball https://github.com/NixOS/nixpkgs/archive/nixos-20.03-small.tar.gz) {};
with import ./release-common.nix { inherit pkgs; };
(if useClang then clangStdenv else stdenv).mkDerivation {
name = "nix";
buildInputs = buildDeps ++ tarballDeps ++ perlDeps;
buildInputs = buildDeps ++ propagatedDeps ++ tarballDeps ++ perlDeps ++ [ pkgs.rustfmt ];
inherit configureFlags;

View File

@ -88,7 +88,7 @@ static int _main(int argc, char * * argv)
return 0;
}
string drvPath;
std::optional<StorePath> drvPath;
string storeUri;
while (true) {
@ -100,7 +100,7 @@ static int _main(int argc, char * * argv)
auto amWilling = readInt(source);
auto neededSystem = readString(source);
source >> drvPath;
drvPath = store->parseStorePath(readString(source));
auto requiredFeatures = readStrings<std::set<std::string>>(source);
auto canBuildLocally = amWilling
@ -188,7 +188,7 @@ static int _main(int argc, char * * argv)
Store::Params storeParams;
if (hasPrefix(bestMachine->storeUri, "ssh://")) {
storeParams["max-connections"] ="1";
storeParams["max-connections"] = "1";
storeParams["log-fd"] = "4";
if (bestMachine->sshKey != "")
storeParams["ssh-key"] = bestMachine->sshKey;
@ -236,26 +236,27 @@ connected:
{
Activity act(*logger, lvlTalkative, actUnknown, fmt("copying dependencies to '%s'", storeUri));
copyPaths(store, ref<Store>(sshStore), inputs, NoRepair, NoCheckSigs, substitute);
copyPaths(store, ref<Store>(sshStore), store->parseStorePathSet(inputs), NoRepair, NoCheckSigs, substitute);
}
uploadLock = -1;
BasicDerivation drv(readDerivation(store->realStoreDir + "/" + baseNameOf(drvPath)));
drv.inputSrcs = inputs;
BasicDerivation drv(readDerivation(*store, store->realStoreDir + "/" + std::string(drvPath->to_string())));
drv.inputSrcs = store->parseStorePathSet(inputs);
auto result = sshStore->buildDerivation(drvPath, drv);
auto result = sshStore->buildDerivation(*drvPath, drv);
if (!result.success())
throw Error("build of '%s' on '%s' failed: %s", drvPath, storeUri, result.errorMsg);
throw Error("build of '%s' on '%s' failed: %s", store->printStorePath(*drvPath), storeUri, result.errorMsg);
PathSet missing;
StorePathSet missing;
for (auto & path : outputs)
if (!store->isValidPath(path)) missing.insert(path);
if (!store->isValidPath(store->parseStorePath(path))) missing.insert(store->parseStorePath(path));
if (!missing.empty()) {
Activity act(*logger, lvlTalkative, actUnknown, fmt("copying outputs from '%s'", storeUri));
store->locksHeld.insert(missing.begin(), missing.end()); /* FIXME: ugly */
for (auto & i : missing)
store->locksHeld.insert(store->printStorePath(i)); /* FIXME: ugly */
copyPaths(ref<Store>(sshStore), store, missing, NoRepair, NoCheckSigs, NoSubstitute);
}
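
A note on the pattern above: this changeset moves call sites from raw string paths to the typed StorePath class, converting at the boundary with parseStorePath and printStorePath. A minimal sketch of that round trip, assuming a nix::Store instance is available; the helper name roundTrip and the concrete hash are made up for illustration:

    #include <string>

    #include "store-api.hh"

    // A minimal sketch, assuming a nix::Store is available; the store path used
    // here is made up and only stands in for a path that actually exists.
    void roundTrip(nix::ref<nix::Store> store)
    {
        // String -> typed StorePath; throws if the string is not a store path.
        auto p = store->parseStorePath(
            "/nix/store/0f5yxwycn8xgfyp8zyashw42pn2m4dgk-hello-2.10");

        // The typed form exposes just the name part, without hash or store prefix.
        std::string name(p.name());                  // "hello-2.10"

        // Typed StorePath -> full path string, for display or older interfaces.
        std::string full = store->printStorePath(p);
    }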

View File

@ -93,4 +93,36 @@ Value * findAlongAttrPath(EvalState & state, const string & attrPath,
}
Pos findDerivationFilename(EvalState & state, Value & v, std::string what)
{
Value * v2;
try {
auto dummyArgs = state.allocBindings(0);
v2 = findAlongAttrPath(state, "meta.position", *dummyArgs, v);
} catch (Error &) {
throw Error("package '%s' has no source location information", what);
}
// FIXME: is it possible to extract the Pos object instead of doing this
// toString + parsing?
auto pos = state.forceString(*v2);
auto colon = pos.rfind(':');
if (colon == std::string::npos)
throw Error("cannot parse meta.position attribute '%s'", pos);
std::string filename(pos, 0, colon);
unsigned int lineno;
try {
lineno = std::stoi(std::string(pos, colon + 1));
} catch (std::invalid_argument & e) {
throw Error("cannot parse line number '%s'", pos);
}
Symbol file = state.symbols.create(filename);
return { file, lineno, 0 };
}
}
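
The new findDerivationFilename above recovers a source position by splitting the string form of meta.position at the last ':'. A hypothetical standalone helper showing just that split, outside the evaluator:

    #include <stdexcept>
    #include <string>
    #include <utility>

    // Hypothetical helper mirroring the split above: break
    // "path/to/file.nix:123" at the last ':' into file name and line number.
    std::pair<std::string, unsigned int> splitPosition(const std::string & pos)
    {
        auto colon = pos.rfind(':');
        if (colon == std::string::npos)
            throw std::runtime_error("cannot parse position '" + pos + "'");
        std::string filename = pos.substr(0, colon);
        // std::stoi throws std::invalid_argument on non-numeric input,
        // which the code above catches and turns into an Error.
        unsigned int lineno = std::stoi(pos.substr(colon + 1));
        return {filename, lineno};
    }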

View File

@ -10,4 +10,7 @@ namespace nix {
Value * findAlongAttrPath(EvalState & state, const string & attrPath,
Bindings & autoArgs, Value & vIn);
/* Heuristic to find the filename and lineno or a nix value. */
Pos findDerivationFilename(EvalState & state, Value & v, std::string what);
}

View File

@ -4,6 +4,7 @@
#include "symbol-table.hh"
#include <algorithm>
#include <optional>
namespace nix {
@ -63,6 +64,22 @@ public:
return end();
}
Attr * get(const Symbol & name)
{
Attr key(name, 0);
iterator i = std::lower_bound(begin(), end(), key);
if (i != end() && i->name == name) return &*i;
return nullptr;
}
Attr & need(const Symbol & name, const Pos & pos = noPos)
{
auto a = get(name);
if (!a)
throw Error("attribute '%s' missing, at %s", name, pos);
return *a;
}
iterator begin() { return &attrs[0]; }
iterator end() { return &attrs[size_]; }
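
Bindings keeps its attributes in a sorted array, so the new get() can binary-search with std::lower_bound and need() can wrap it with an error when the attribute is absent. A sketch of the same lookup pattern over a plain sorted vector; MyAttr and the free functions are illustrative stand-ins, not nix types:

    #include <algorithm>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Illustrative stand-in: a sorted vector of (name, value) pairs plays the
    // role of the sorted Attr array inside Bindings.
    struct MyAttr { std::string name; int value; };

    MyAttr * get(std::vector<MyAttr> & attrs, const std::string & name)
    {
        auto i = std::lower_bound(attrs.begin(), attrs.end(), name,
            [](const MyAttr & a, const std::string & n) { return a.name < n; });
        if (i != attrs.end() && i->name == name) return &*i;
        return nullptr;                       // absent: caller decides what to do
    }

    MyAttr & need(std::vector<MyAttr> & attrs, const std::string & name)
    {
        if (auto a = get(attrs, name)) return *a;
        throw std::runtime_error("attribute '" + name + "' missing");
    }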

View File

@ -7,6 +7,7 @@
#include "eval-inline.hh"
#include "download.hh"
#include "json.hh"
#include "function-trace.hh"
#include <algorithm>
#include <chrono>
@ -42,15 +43,27 @@ static char * dupString(const char * s)
}
static char * dupStringWithLen(const char * s, size_t size)
{
char * t;
#if HAVE_BOEHMGC
t = GC_STRNDUP(s, size);
#else
t = strndup(s, size);
#endif
if (!t) throw std::bad_alloc();
return t;
}
static void printValue(std::ostream & str, std::set<const Value *> & active, const Value & v)
{
checkInterrupt();
if (active.find(&v) != active.end()) {
if (!active.insert(&v).second) {
str << "<CYCLE>";
return;
}
active.insert(&v);
switch (v.type) {
case tInt:
@ -218,7 +231,7 @@ void initGC()
that GC_expand_hp() causes a lot of virtual, but not physical
(resident) memory to be allocated. This might be a problem on
systems that don't overcommit. */
if (!getenv("GC_INITIAL_HEAP_SIZE")) {
if (!getEnv("GC_INITIAL_HEAP_SIZE")) {
size_t size = 32 * 1024 * 1024;
#if HAVE_SYSCONF && defined(_SC_PAGESIZE) && defined(_SC_PHYS_PAGES)
size_t maxSize = 384 * 1024 * 1024;
@ -307,7 +320,7 @@ EvalState::EvalState(const Strings & _searchPath, ref<Store> store)
, baseEnv(allocEnv(128))
, staticBaseEnv(false, 0)
{
countCalls = getEnv("NIX_COUNT_CALLS", "0") != "0";
countCalls = getEnv("NIX_COUNT_CALLS").value_or("0") != "0";
assert(gcInitialised);
@ -315,9 +328,8 @@ EvalState::EvalState(const Strings & _searchPath, ref<Store> store)
/* Initialise the Nix expression search path. */
if (!evalSettings.pureEval) {
Strings paths = parseNixPath(getEnv("NIX_PATH", ""));
for (auto & i : _searchPath) addToSearchPath(i);
for (auto & i : paths) addToSearchPath(i);
for (auto & i : evalSettings.nixPath.get()) addToSearchPath(i);
}
addToSearchPath("nix=" + canonPath(settings.nixDataDir + "/nix/corepkgs", true));
@ -331,10 +343,10 @@ EvalState::EvalState(const Strings & _searchPath, ref<Store> store)
auto path = r.second;
if (store->isInStore(r.second)) {
PathSet closure;
store->computeFSClosure(store->toStorePath(r.second), closure);
StorePathSet closure;
store->computeFSClosure(store->parseStorePath(store->toStorePath(r.second)), closure);
for (auto & path : closure)
allowedPaths->insert(path);
allowedPaths->insert(store->printStorePath(path));
} else
allowedPaths->insert(r.second);
}
@ -433,7 +445,7 @@ Path EvalState::toRealPath(const Path & path, const PathSet & context)
!context.empty() && store->isInStore(path)
? store->toRealPath(path)
: path;
};
}
Value * EvalState::addConstant(const string & name, Value & v)
@ -519,9 +531,9 @@ LocalNoInlineNoReturn(void throwTypeError(const char * s, const ExprLambda & fun
throw TypeError(format(s) % fun.showNamePos() % s2 % pos);
}
LocalNoInlineNoReturn(void throwAssertionError(const char * s, const Pos & pos))
LocalNoInlineNoReturn(void throwAssertionError(const char * s, const string & s1, const Pos & pos))
{
throw AssertionError(format(s) % pos);
throw AssertionError(format(s) % s1 % pos);
}
LocalNoInlineNoReturn(void throwUndefinedVarError(const char * s, const string & s1, const Pos & pos))
@ -551,9 +563,11 @@ void mkString(Value & v, const char * s)
}
Value & mkString(Value & v, const string & s, const PathSet & context)
Value & mkString(Value & v, std::string_view s, const PathSet & context)
{
mkString(v, s.c_str());
v.type = tString;
v.string.s = dupStringWithLen(s.data(), s.size());
v.string.context = 0;
if (!context.empty()) {
size_t n = 0;
v.string.context = (const char * *)
@ -616,13 +630,9 @@ Value * EvalState::allocValue()
Env & EvalState::allocEnv(size_t size)
{
if (size > std::numeric_limits<decltype(Env::size)>::max())
throw Error("environment size %d is too big", size);
nrEnvs++;
nrValuesInEnvs += size;
Env * env = (Env *) allocBytes(sizeof(Env) + size * sizeof(Value *));
env->size = (decltype(Env::size)) size;
env->type = Env::Plain;
/* We assume that env->values has been cleared by the allocator; maybeThunk() and lookupVar fromWith expect this. */
@ -877,7 +887,7 @@ void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
if (hasOverrides) {
Value * vOverrides = (*v.attrs)[overrides->second.displ].value;
state.forceAttrs(*vOverrides);
Bindings * newBnds = state.allocBindings(v.attrs->size() + vOverrides->attrs->size());
Bindings * newBnds = state.allocBindings(v.attrs->capacity() + vOverrides->attrs->size());
for (auto & i : *v.attrs)
newBnds->push_back(i);
for (auto & i : *vOverrides->attrs) {
@ -1096,10 +1106,7 @@ void EvalState::callPrimOp(Value & fun, Value & arg, Value & v, const Pos & pos)
void EvalState::callFunction(Value & fun, Value & arg, Value & v, const Pos & pos)
{
std::optional<FunctionCallTrace> trace;
if (evalSettings.traceFunctionCalls) {
trace.emplace(pos);
}
auto trace = evalSettings.traceFunctionCalls ? std::make_unique<FunctionCallTrace>(pos) : nullptr;
forceValue(fun, pos);
@ -1255,8 +1262,11 @@ void ExprIf::eval(EvalState & state, Env & env, Value & v)
void ExprAssert::eval(EvalState & state, Env & env, Value & v)
{
if (!state.evalBool(env, cond, pos))
throwAssertionError("assertion failed at %1%", pos);
if (!state.evalBool(env, cond, pos)) {
std::ostringstream out;
cond->show(out);
throwAssertionError("assertion '%1%' failed at %2%", out.str(), pos);
}
body->eval(state, env, v);
}
@ -1446,8 +1456,7 @@ void EvalState::forceValueDeep(Value & v)
std::function<void(Value & v)> recurse;
recurse = [&](Value & v) {
if (seen.find(&v) != seen.end()) return;
seen.insert(&v);
if (!seen.insert(&v).second) return;
forceValue(v);
@ -1569,6 +1578,19 @@ bool EvalState::isDerivation(Value & v)
}
std::optional<string> EvalState::tryAttrsToString(const Pos & pos, Value & v,
PathSet & context, bool coerceMore, bool copyToStore)
{
auto i = v.attrs->find(sToString);
if (i != v.attrs->end()) {
Value v1;
callFunction(*i->value, v, v1, pos);
return coerceToString(pos, v1, context, coerceMore, copyToStore);
}
return {};
}
string EvalState::coerceToString(const Pos & pos, Value & v, PathSet & context,
bool coerceMore, bool copyToStore)
{
@ -1587,13 +1609,11 @@ string EvalState::coerceToString(const Pos & pos, Value & v, PathSet & context,
}
if (v.type == tAttrs) {
auto i = v.attrs->find(sToString);
if (i != v.attrs->end()) {
Value v1;
callFunction(*i->value, v, v1, pos);
return coerceToString(pos, v1, context, coerceMore, copyToStore);
auto maybeString = tryAttrsToString(pos, v, context, coerceMore, copyToStore);
if (maybeString) {
return *maybeString;
}
i = v.attrs->find(sOutPath);
auto i = v.attrs->find(sOutPath);
if (i == v.attrs->end()) throwTypeError("cannot coerce a set to a string, at %1%", pos);
return coerceToString(pos, *i->value, context, coerceMore, copyToStore);
}
@ -1635,15 +1655,16 @@ string EvalState::copyPathToStore(PathSet & context, const Path & path)
throwEvalError("file names are not allowed to end in '%1%'", drvExtension);
Path dstPath;
if (srcToStore[path] != "")
dstPath = srcToStore[path];
auto i = srcToStore.find(path);
if (i != srcToStore.end())
dstPath = store->printStorePath(i->second);
else {
dstPath = settings.readOnlyMode
? store->computeStorePathForPath(baseNameOf(path), checkSourcePath(path)).first
: store->addToStore(baseNameOf(path), checkSourcePath(path), true, htSHA256, defaultPathFilter, repair);
srcToStore[path] = dstPath;
printMsg(lvlChatty, format("copied source '%1%' -> '%2%'")
% path % dstPath);
auto p = settings.readOnlyMode
? store->computeStorePathForPath(std::string(baseNameOf(path)), checkSourcePath(path)).first
: store->addToStore(std::string(baseNameOf(path)), checkSourcePath(path), true, htSHA256, defaultPathFilter, repair);
dstPath = store->printStorePath(p);
srcToStore.insert_or_assign(path, std::move(p));
printMsg(lvlChatty, "copied source '%1%' -> '%2%'", path, dstPath);
}
context.insert(dstPath);
@ -1744,7 +1765,7 @@ bool EvalState::eqValues(Value & v1, Value & v2)
void EvalState::printStats()
{
bool showStats = getEnv("NIX_SHOW_STATS", "0") != "0";
bool showStats = getEnv("NIX_SHOW_STATS").value_or("0") != "0";
struct rusage buf;
getrusage(RUSAGE_SELF, &buf);
@ -1760,7 +1781,7 @@ void EvalState::printStats()
GC_get_heap_usage_safe(&heapSize, 0, 0, 0, &totalBytes);
#endif
if (showStats) {
auto outPath = getEnv("NIX_SHOW_STATS_PATH","-");
auto outPath = getEnv("NIX_SHOW_STATS_PATH").value_or("-");
std::fstream fs;
if (outPath != "-")
fs.open(outPath, std::fstream::out);
@ -1852,7 +1873,7 @@ void EvalState::printStats()
}
}
if (getEnv("NIX_SHOW_SYMBOLS", "0") != "0") {
if (getEnv("NIX_SHOW_SYMBOLS").value_or("0") != "0") {
auto list = topObj.list("symbols");
symbols.dump([&](const std::string & s) { list.elem(s); });
}
@ -1860,99 +1881,6 @@ void EvalState::printStats()
}
size_t valueSize(Value & v)
{
std::set<const void *> seen;
auto doString = [&](const char * s) -> size_t {
if (seen.find(s) != seen.end()) return 0;
seen.insert(s);
return strlen(s) + 1;
};
std::function<size_t(Value & v)> doValue;
std::function<size_t(Env & v)> doEnv;
doValue = [&](Value & v) -> size_t {
if (seen.find(&v) != seen.end()) return 0;
seen.insert(&v);
size_t sz = sizeof(Value);
switch (v.type) {
case tString:
sz += doString(v.string.s);
if (v.string.context)
for (const char * * p = v.string.context; *p; ++p)
sz += doString(*p);
break;
case tPath:
sz += doString(v.path);
break;
case tAttrs:
if (seen.find(v.attrs) == seen.end()) {
seen.insert(v.attrs);
sz += sizeof(Bindings) + sizeof(Attr) * v.attrs->capacity();
for (auto & i : *v.attrs)
sz += doValue(*i.value);
}
break;
case tList1:
case tList2:
case tListN:
if (seen.find(v.listElems()) == seen.end()) {
seen.insert(v.listElems());
sz += v.listSize() * sizeof(Value *);
for (size_t n = 0; n < v.listSize(); ++n)
sz += doValue(*v.listElems()[n]);
}
break;
case tThunk:
sz += doEnv(*v.thunk.env);
break;
case tApp:
sz += doValue(*v.app.left);
sz += doValue(*v.app.right);
break;
case tLambda:
sz += doEnv(*v.lambda.env);
break;
case tPrimOpApp:
sz += doValue(*v.primOpApp.left);
sz += doValue(*v.primOpApp.right);
break;
case tExternal:
if (seen.find(v.external) != seen.end()) break;
seen.insert(v.external);
sz += v.external->valueSize(seen);
break;
default:
;
}
return sz;
};
doEnv = [&](Env & env) -> size_t {
if (seen.find(&env) != seen.end()) return 0;
seen.insert(&env);
size_t sz = sizeof(Env) + sizeof(Value *) * env.size;
if (env.type != Env::HasWithExpr)
for (size_t i = 0; i < env.size; ++i)
if (env.values[i])
sz += doValue(*env.values[i]);
if (env.up) sz += doEnv(*env.up);
return sz;
};
return doValue(v);
}
string ExternalValueBase::coerceToString(const Pos & pos, PathSet & context, bool copyMore, bool copyToStore) const
{
throw TypeError(format("cannot coerce %1% to a string, at %2%") %
@ -1971,6 +1899,22 @@ std::ostream & operator << (std::ostream & str, const ExternalValueBase & v) {
}
EvalSettings::EvalSettings()
{
auto var = getEnv("NIX_PATH");
if (var) nixPath = parseNixPath(*var);
}
Strings EvalSettings::getDefaultNixPath()
{
Strings res;
auto add = [&](const Path & p) { if (pathExists(p)) { res.push_back(p); } };
add(getHome() + "/.nix-defexpr/channels");
add("nixpkgs=" + settings.nixStateDir + "/nix/profiles/per-user/root/channels/nixpkgs");
add(settings.nixStateDir + "/nix/profiles/per-user/root/channels");
return res;
}
EvalSettings evalSettings;
static GlobalConfig::Register r1(&evalSettings);
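
Several hunks above switch from a getEnv overload taking a default value to an optional-returning getEnv combined with value_or. A minimal sketch of that pattern, using std::getenv as a stand-in for nix's own getEnv:

    #include <cstdlib>
    #include <optional>
    #include <string>

    // Stand-in for the optional-returning getEnv used above, built on std::getenv.
    std::optional<std::string> getEnvOpt(const std::string & name)
    {
        const char * v = std::getenv(name.c_str());
        if (!v) return std::nullopt;
        return std::string(v);
    }

    int main()
    {
        // The call-site pattern: supply the default with value_or() instead of
        // passing a second argument to getEnv().
        bool showStats = getEnvOpt("NIX_SHOW_STATS").value_or("0") != "0";
        return showStats ? 0 : 1;
    }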

View File

@ -6,9 +6,9 @@
#include "symbol-table.hh"
#include "hash.hh"
#include "config.hh"
#include "function-trace.hh"
#include <map>
#include <optional>
#include <unordered_map>
@ -17,6 +17,7 @@ namespace nix {
class Store;
class EvalState;
struct StorePath;
enum RepairFlag : bool;
@ -36,21 +37,20 @@ struct PrimOp
struct Env
{
Env * up;
unsigned short size; // used by valueSize
unsigned short prevWith:14; // nr of levels up to next `with' environment
enum { Plain = 0, HasWithExpr, HasWithAttrs } type:2;
Value * values[0];
};
Value & mkString(Value & v, const string & s, const PathSet & context = PathSet());
Value & mkString(Value & v, std::string_view s, const PathSet & context = PathSet());
void copyContext(const Value & v, PathSet & context);
/* Cache for calls to addToStore(); maps source paths to the store
paths. */
typedef std::map<Path, Path> SrcToStore;
typedef std::map<Path, StorePath> SrcToStore;
std::ostream & operator << (std::ostream & str, const Value & v);
@ -196,6 +196,9 @@ public:
set with attribute `type = "derivation"'). */
bool isDerivation(Value & v);
std::optional<string> tryAttrsToString(const Pos & pos, Value & v,
PathSet & context, bool coerceMore = false, bool copyToStore = true);
/* String coercion. Converts strings, paths and derivations to a
string. If `coerceMore' is set, also converts nulls, integers,
booleans and lists to a string. If `copyToStore' is set,
@ -335,9 +338,16 @@ struct InvalidPathError : EvalError
struct EvalSettings : Config
{
EvalSettings();
static Strings getDefaultNixPath();
Setting<bool> enableNativeCode{this, false, "allow-unsafe-native-code-during-evaluation",
"Whether builtin functions that allow executing native code should be enabled."};
Setting<Strings> nixPath{this, getDefaultNixPath(), "nix-path",
"List of directories to be searched for <...> file references."};
Setting<bool> restrictEval{this, false, "restrict-eval",
"Whether to restrict file system access to paths in $NIX_PATH, "
"and network access to the URI prefixes listed in 'allowed-uris'."};

View File

@ -0,0 +1,17 @@
#include "function-trace.hh"
namespace nix {
FunctionCallTrace::FunctionCallTrace(const Pos & pos) : pos(pos) {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace entered %1% at %2%", pos, ns.count());
}
FunctionCallTrace::~FunctionCallTrace() {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace exited %1% at %2%", pos, ns.count());
}
}

View File

@ -1,24 +1,15 @@
#pragma once
#include "eval.hh"
#include <sys/time.h>
#include <chrono>
namespace nix {
struct FunctionCallTrace
{
const Pos & pos;
FunctionCallTrace(const Pos & pos) : pos(pos) {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace entered %1% at %2%", pos, ns.count());
}
~FunctionCallTrace() {
auto duration = std::chrono::high_resolution_clock::now().time_since_epoch();
auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(duration);
printMsg(lvlInfo, "function-trace exited %1% at %2%", pos, ns.count());
}
FunctionCallTrace(const Pos & pos);
~FunctionCallTrace();
};
}
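
FunctionCallTrace is an RAII tracer: the constructor logs function entry and the destructor logs exit, so a single local variable brackets a call. A self-contained sketch of the same idea; ScopeTrace and someFunction are illustrative names, and plain printf stands in for nix's logger:

    #include <chrono>
    #include <cstdio>
    #include <string>
    #include <utility>

    // Illustrative RAII tracer: log on entry in the constructor, log on exit in
    // the destructor, with nanosecond timestamps as in the hunk above.
    struct ScopeTrace
    {
        std::string what;

        explicit ScopeTrace(std::string w) : what(std::move(w))
        {
            auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::high_resolution_clock::now().time_since_epoch());
            std::printf("trace entered %s at %lld\n", what.c_str(), (long long) ns.count());
        }

        ~ScopeTrace()
        {
            auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                std::chrono::high_resolution_clock::now().time_since_epoch());
            std::printf("trace exited %s at %lld\n", what.c_str(), (long long) ns.count());
        }
    };

    void someFunction()
    {
        ScopeTrace t("someFunction");   // entry logged here, exit logged when t is destroyed
        // ... work ...
    }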

View File

@ -19,27 +19,27 @@ DrvInfo::DrvInfo(EvalState & state, const string & attrPath, Bindings * attrs)
DrvInfo::DrvInfo(EvalState & state, ref<Store> store, const std::string & drvPathWithOutputs)
: state(&state), attrs(nullptr), attrPath("")
{
auto spec = parseDrvPathWithOutputs(drvPathWithOutputs);
auto [drvPath, selectedOutputs] = store->parsePathWithOutputs(drvPathWithOutputs);
drvPath = spec.first;
this->drvPath = store->printStorePath(drvPath);
auto drv = store->derivationFromPath(drvPath);
name = storePathToName(drvPath);
name = drvPath.name();
if (spec.second.size() > 1)
if (selectedOutputs.size() > 1)
throw Error("building more than one derivation output is not supported, in '%s'", drvPathWithOutputs);
outputName =
spec.second.empty()
? get(drv.env, "outputName", "out")
: *spec.second.begin();
selectedOutputs.empty()
? get(drv.env, "outputName").value_or("out")
: *selectedOutputs.begin();
auto i = drv.outputs.find(outputName);
if (i == drv.outputs.end())
throw Error("derivation '%s' does not have output '%s'", drvPath, outputName);
throw Error("derivation '%s' does not have output '%s'", store->printStorePath(drvPath), outputName);
outPath = i->second.path;
outPath = store->printStorePath(i->second.path);
}
@ -277,8 +277,7 @@ static bool getDerivation(EvalState & state, Value & v,
/* Remove spurious duplicates (e.g., a set like `rec { x =
derivation {...}; y = x;}'. */
if (done.find(v.attrs) != done.end()) return false;
done.insert(v.attrs);
if (!done.insert(v.attrs).second) return false;
DrvInfo drv(state, attrPath, v.attrs);
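
The one-line change above is an instance of an idiom applied throughout this changeset: replace a find()-then-insert() pair on a "seen" set with a single insert() and a check of the returned bool. In isolation:

    #include <set>

    // The recurring simplification in these hunks: instead of
    //     if (seen.find(x) != seen.end()) return false;
    //     seen.insert(x);
    // rely on the bool in the pair returned by insert().
    bool firstVisit(std::set<int> & seen, int x)
    {
        // insert() returns {iterator, inserted}; inserted is true only for new elements.
        return seen.insert(x).second;
    }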

View File

@ -1,149 +1,161 @@
#include "json-to-value.hh"
#include <cstring>
#include <variant>
#include <nlohmann/json.hpp>
using json = nlohmann::json;
using std::unique_ptr;
namespace nix {
// for more information, refer to
// https://github.com/nlohmann/json/blob/master/include/nlohmann/detail/input/json_sax.hpp
class JSONSax : nlohmann::json_sax<json> {
class JSONState {
protected:
unique_ptr<JSONState> parent;
Value * v;
public:
virtual unique_ptr<JSONState> resolve(EvalState &)
{
throw std::logic_error("tried to close toplevel json parser state");
};
explicit JSONState(unique_ptr<JSONState>&& p) : parent(std::move(p)), v(nullptr) {};
explicit JSONState(Value* v) : v(v) {};
JSONState(JSONState& p) = delete;
Value& value(EvalState & state)
{
if (v == nullptr)
v = state.allocValue();
return *v;
};
virtual ~JSONState() {};
virtual void add() {};
};
static void skipWhitespace(const char * & s)
{
while (*s == ' ' || *s == '\t' || *s == '\n' || *s == '\r') s++;
}
static string parseJSONString(const char * & s)
{
string res;
if (*s++ != '"') throw JSONParseError("expected JSON string");
while (*s != '"') {
if (!*s) throw JSONParseError("got end-of-string in JSON string");
if (*s == '\\') {
s++;
if (*s == '"') res += '"';
else if (*s == '\\') res += '\\';
else if (*s == '/') res += '/';
else if (*s == '/') res += '/';
else if (*s == 'b') res += '\b';
else if (*s == 'f') res += '\f';
else if (*s == 'n') res += '\n';
else if (*s == 'r') res += '\r';
else if (*s == 't') res += '\t';
else if (*s == 'u') throw JSONParseError("\\u characters in JSON strings are currently not supported");
else throw JSONParseError("invalid escaped character in JSON string");
s++;
} else
res += *s++;
}
s++;
return res;
}
static void parseJSON(EvalState & state, const char * & s, Value & v)
{
skipWhitespace(s);
if (!*s) throw JSONParseError("expected JSON value");
if (*s == '[') {
s++;
ValueVector values;
values.reserve(128);
skipWhitespace(s);
while (1) {
if (values.empty() && *s == ']') break;
Value * v2 = state.allocValue();
parseJSON(state, s, *v2);
values.push_back(v2);
skipWhitespace(s);
if (*s == ']') break;
if (*s != ',') throw JSONParseError("expected ',' or ']' after JSON array element");
s++;
class JSONObjectState : public JSONState {
using JSONState::JSONState;
ValueMap attrs = ValueMap();
virtual unique_ptr<JSONState> resolve(EvalState & state) override
{
Value& v = parent->value(state);
state.mkAttrs(v, attrs.size());
for (auto & i : attrs)
v.attrs->push_back(Attr(i.first, i.second));
return std::move(parent);
}
s++;
state.mkList(v, values.size());
for (size_t n = 0; n < values.size(); ++n)
v.listElems()[n] = values[n];
}
else if (*s == '{') {
s++;
ValueMap attrs;
while (1) {
skipWhitespace(s);
if (attrs.empty() && *s == '}') break;
string name = parseJSONString(s);
skipWhitespace(s);
if (*s != ':') throw JSONParseError("expected ':' in JSON object");
s++;
Value * v2 = state.allocValue();
parseJSON(state, s, *v2);
attrs[state.symbols.create(name)] = v2;
skipWhitespace(s);
if (*s == '}') break;
if (*s != ',') throw JSONParseError("expected ',' or '}' after JSON member");
s++;
virtual void add() override { v = nullptr; };
public:
void key(string_t& name, EvalState & state)
{
attrs[state.symbols.create(name)] = &value(state);
}
state.mkAttrs(v, attrs.size());
for (auto & i : attrs)
v.attrs->push_back(Attr(i.first, i.second));
v.attrs->sort();
s++;
}
};
else if (*s == '"') {
mkString(v, parseJSONString(s));
}
else if (isdigit(*s) || *s == '-' || *s == '.' ) {
// Buffer into a string first, then use built-in C++ conversions
std::string tmp_number;
ValueType number_type = tInt;
while (isdigit(*s) || *s == '-' || *s == '.' || *s == 'e' || *s == 'E') {
if (*s == '.' || *s == 'e' || *s == 'E')
number_type = tFloat;
tmp_number += *s++;
class JSONListState : public JSONState {
ValueVector values = ValueVector();
virtual unique_ptr<JSONState> resolve(EvalState & state) override
{
Value& v = parent->value(state);
state.mkList(v, values.size());
for (size_t n = 0; n < values.size(); ++n) {
v.listElems()[n] = values[n];
}
return std::move(parent);
}
try {
if (number_type == tFloat)
mkFloat(v, stod(tmp_number));
else
mkInt(v, stol(tmp_number));
} catch (std::invalid_argument & e) {
throw JSONParseError("invalid JSON number");
} catch (std::out_of_range & e) {
throw JSONParseError("out-of-range JSON number");
virtual void add() override {
values.push_back(v);
v = nullptr;
};
public:
JSONListState(unique_ptr<JSONState>&& p, std::size_t reserve) : JSONState(std::move(p))
{
values.reserve(reserve);
}
};
EvalState & state;
unique_ptr<JSONState> rs;
template<typename T, typename... Args> inline bool handle_value(T f, Args... args)
{
f(rs->value(state), args...);
rs->add();
return true;
}
else if (strncmp(s, "true", 4) == 0) {
s += 4;
mkBool(v, true);
public:
JSONSax(EvalState & state, Value & v) : state(state), rs(new JSONState(&v)) {};
bool null()
{
return handle_value(mkNull);
}
else if (strncmp(s, "false", 5) == 0) {
s += 5;
mkBool(v, false);
bool boolean(bool val)
{
return handle_value(mkBool, val);
}
else if (strncmp(s, "null", 4) == 0) {
s += 4;
mkNull(v);
bool number_integer(number_integer_t val)
{
return handle_value(mkInt, val);
}
else throw JSONParseError("unrecognised JSON value");
}
bool number_unsigned(number_unsigned_t val)
{
return handle_value(mkInt, val);
}
bool number_float(number_float_t val, const string_t& s)
{
return handle_value(mkFloat, val);
}
bool string(string_t& val)
{
return handle_value<void(Value&, const char*)>(mkString, val.c_str());
}
bool start_object(std::size_t len)
{
rs = std::make_unique<JSONObjectState>(std::move(rs));
return true;
}
bool key(string_t& name)
{
dynamic_cast<JSONObjectState*>(rs.get())->key(name, state);
return true;
}
bool end_object() {
rs = rs->resolve(state);
rs->add();
return true;
}
bool end_array() {
return end_object();
}
bool start_array(size_t len) {
rs = std::make_unique<JSONListState>(std::move(rs),
len != std::numeric_limits<size_t>::max() ? len : 128);
return true;
}
bool parse_error(std::size_t, const std::string&, const nlohmann::detail::exception& ex) {
throw JSONParseError(ex.what());
}
};
void parseJSON(EvalState & state, const string & s_, Value & v)
{
const char * s = s_.c_str();
parseJSON(state, s, v);
skipWhitespace(s);
if (*s) throw JSONParseError(format("expected end-of-string while parsing JSON value: %1%") % s);
JSONSax parser(state, v);
bool res = json::sax_parse(s_, &parser);
if (!res)
throw JSONParseError("Invalid JSON Value");
}
}
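
The rewrite above drops the hand-rolled recursive-descent parser in favour of nlohmann::json's SAX interface: json::sax_parse() feeds events to a handler object, and a small stack of JSONState objects assembles the nix values. A reduced sketch of driving the same interface, with the handler doing nothing but counting events; exact handler requirements vary a little between nlohmann/json releases (newer ones also probe for a binary() handler, written generically here), so treat this as an approximation:

    #include <cstddef>
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <nlohmann/json.hpp>

    // Event counter implementing the duck-typed SAX interface expected by
    // nlohmann::json::sax_parse(); every handler returns true to continue.
    struct CountingSax
    {
        using json = nlohmann::json;
        std::size_t events = 0;

        bool null() { ++events; return true; }
        bool boolean(bool) { ++events; return true; }
        bool number_integer(json::number_integer_t) { ++events; return true; }
        bool number_unsigned(json::number_unsigned_t) { ++events; return true; }
        bool number_float(json::number_float_t, const json::string_t &) { ++events; return true; }
        bool string(json::string_t &) { ++events; return true; }
        bool start_object(std::size_t) { ++events; return true; }
        bool key(json::string_t &) { ++events; return true; }
        bool end_object() { ++events; return true; }
        bool start_array(std::size_t) { ++events; return true; }
        bool end_array() { ++events; return true; }
        template<typename T> bool binary(T &) { ++events; return true; }  // newer releases only
        bool parse_error(std::size_t, const std::string &, const nlohmann::detail::exception & ex)
        { throw std::runtime_error(ex.what()); }
    };

    int main()
    {
        CountingSax sax;
        bool ok = nlohmann::json::sax_parse(R"({"a": [1, 2, 3]})", &sax);
        std::cout << ok << " " << sax.events << "\n";
    }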

View File

@ -6,7 +6,7 @@
namespace nix {
MakeError(JSONParseError, EvalError)
MakeError(JSONParseError, EvalError);
void parseJSON(EvalState & state, const string & s, Value & v);

View File

@ -6,7 +6,7 @@ libexpr_DIR := $(d)
libexpr_SOURCES := $(wildcard $(d)/*.cc) $(wildcard $(d)/primops/*.cc) $(d)/lexer-tab.cc $(d)/parser-tab.cc
libexpr_LIBS = libutil libstore
libexpr_LIBS = libutil libstore libnixrust
libexpr_LDFLAGS =
ifneq ($(OS), FreeBSD)

View File

@ -16,14 +16,14 @@ DrvName::DrvName()
a letter. The `version' part is the rest (excluding the separating
dash). E.g., `apache-httpd-2.0.48' is parsed to (`apache-httpd',
'2.0.48'). */
DrvName::DrvName(const string & s) : hits(0)
DrvName::DrvName(std::string_view s) : hits(0)
{
name = fullName = s;
name = fullName = std::string(s);
for (unsigned int i = 0; i < s.size(); ++i) {
/* !!! isalpha/isdigit are affected by the locale. */
if (s[i] == '-' && i + 1 < s.size() && !isalpha(s[i + 1])) {
name = string(s, 0, i);
version = string(s, i + 1);
name = s.substr(0, i);
version = s.substr(i + 1);
break;
}
}
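
The comment above describes the split rule: the version starts at the first '-' that is not followed by a letter. A small restatement of that rule as a standalone function; splitDrvName is an illustrative name, not part of the patch:

    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <string_view>
    #include <utility>

    // Restatement of the rule in the comment above: the version begins at the
    // first '-' whose next character is not a letter.
    std::pair<std::string, std::string> splitDrvName(std::string_view s)
    {
        for (std::size_t i = 0; i + 1 < s.size(); ++i)
            if (s[i] == '-' && !std::isalpha(static_cast<unsigned char>(s[i + 1])))
                return {std::string(s.substr(0, i)), std::string(s.substr(i + 1))};
        return {std::string(s), ""};          // no version part found
    }
    // splitDrvName("apache-httpd-2.0.48") == {"apache-httpd", "2.0.48"}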

View File

@ -15,7 +15,7 @@ struct DrvName
unsigned int hits;
DrvName();
DrvName(const string & s);
DrvName(std::string_view s);
bool matches(DrvName & n);
private:

View File

@ -9,14 +9,14 @@
namespace nix {
MakeError(EvalError, Error)
MakeError(ParseError, Error)
MakeError(AssertionError, EvalError)
MakeError(ThrownError, AssertionError)
MakeError(Abort, EvalError)
MakeError(TypeError, EvalError)
MakeError(UndefinedVarError, Error)
MakeError(RestrictedPathError, Error)
MakeError(EvalError, Error);
MakeError(ParseError, Error);
MakeError(AssertionError, EvalError);
MakeError(ThrownError, AssertionError);
MakeError(Abort, EvalError);
MakeError(TypeError, EvalError);
MakeError(UndefinedVarError, Error);
MakeError(RestrictedPathError, Error);
/* Position objects. */

View File

@ -1,5 +1,5 @@
%glr-parser
%pure-parser
%define api.pure
%locations
%define parse.error verbose
%defines
@ -20,6 +20,7 @@
#include "nixexpr.hh"
#include "eval.hh"
#include "globals.hh"
namespace nix {
@ -138,11 +139,10 @@ static void addAttr(ExprAttrs * attrs, AttrPath & attrPath,
static void addFormal(const Pos & pos, Formals * formals, const Formal & formal)
{
if (formals->argNames.find(formal.name) != formals->argNames.end())
if (!formals->argNames.insert(formal.name).second)
throw ParseError(format("duplicate formal function argument '%1%' at %2%")
% formal.name % pos);
formals->formals.push_front(formal);
formals->argNames.insert(formal.name);
}
@ -402,7 +402,12 @@ expr_simple
new ExprVar(data->symbols.create("__nixPath"))),
new ExprString(data->symbols.create(path)));
}
| URI { $$ = new ExprString(data->symbols.create($1)); }
| URI {
static bool noURLLiterals = settings.isExperimentalFeatureEnabled("no-url-literals");
if (noURLLiterals)
throw ParseError("URL literals are disabled, at %s", CUR_POS);
$$ = new ExprString(data->symbols.create($1));
}
| '(' expr ')' { $$ = $2; }
/* Let expressions `let {..., body = ...}' are just desugared
into `(rec {..., body = ...}).body'. */
@ -571,12 +576,17 @@ Path resolveExprPath(Path path)
{
assert(path[0] == '/');
unsigned int followCount = 0, maxFollow = 1024;
/* If `path' is a symlink, follow it. This is so that relative
path references work. */
struct stat st;
while (true) {
// Basic cycle/depth limit to avoid infinite loops.
if (++followCount >= maxFollow)
throw Error("too many symbolic links encountered while traversing the path '%s'", path);
if (lstat(path.c_str(), &st))
throw SysError(format("getting status of '%1%'") % path);
throw SysError("getting status of '%s'", path);
if (!S_ISLNK(st.st_mode)) break;
path = absPath(readLink(path), dirOf(path));
}
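
resolveExprPath above chases leaf symlinks so that relative path references keep working, bounding the number of hops to guard against cycles. A sketch of the same loop using std::filesystem in place of nix's lstat/readLink helpers; resolveLeafSymlinks is an illustrative name:

    #include <filesystem>
    #include <stdexcept>

    namespace fs = std::filesystem;

    // Sketch of the loop above with std::filesystem; the hop limit guards
    // against symlink cycles.
    fs::path resolveLeafSymlinks(fs::path path)
    {
        unsigned int followCount = 0, maxFollow = 1024;
        while (fs::is_symlink(path)) {
            if (++followCount >= maxFollow)
                throw std::runtime_error(
                    "too many symbolic links encountered while traversing '" + path.string() + "'");
            fs::path target = fs::read_symlink(path);
            // A relative target is interpreted relative to the symlink's directory.
            path = target.is_absolute() ? target : path.parent_path() / target;
        }
        return path;
    }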

View File

@ -44,29 +44,28 @@ std::pair<string, string> decodeContext(const string & s)
InvalidPathError::InvalidPathError(const Path & path) :
EvalError(format("path '%1%' is not valid") % path), path(path) {}
EvalError("path '%s' is not valid", path), path(path) {}
void EvalState::realiseContext(const PathSet & context)
{
PathSet drvs;
std::vector<StorePathWithOutputs> drvs;
for (auto & i : context) {
std::pair<string, string> decoded = decodeContext(i);
Path ctx = decoded.first;
assert(store->isStorePath(ctx));
auto ctx = store->parseStorePath(decoded.first);
if (!store->isValidPath(ctx))
throw InvalidPathError(ctx);
if (!decoded.second.empty() && nix::isDerivation(ctx)) {
drvs.insert(decoded.first + "!" + decoded.second);
throw InvalidPathError(store->printStorePath(ctx));
if (!decoded.second.empty() && ctx.isDerivation()) {
drvs.push_back(StorePathWithOutputs{ctx.clone(), {decoded.second}});
/* Add the output of this derivation to the allowed
paths. */
if (allowedPaths) {
auto drv = store->derivationFromPath(decoded.first);
auto drv = store->derivationFromPath(store->parseStorePath(decoded.first));
DerivationOutputs::iterator i = drv.outputs.find(decoded.second);
if (i == drv.outputs.end())
throw Error("derivation '%s' does not have an output named '%s'", decoded.first, decoded.second);
allowedPaths->insert(i->second.path);
allowedPaths->insert(store->printStorePath(i->second.path));
}
}
}
@ -74,10 +73,11 @@ void EvalState::realiseContext(const PathSet & context)
if (drvs.empty()) return;
if (!evalSettings.enableImportFromDerivation)
throw EvalError(format("attempted to realize '%1%' during evaluation but 'allow-import-from-derivation' is false") % *(drvs.begin()));
throw EvalError("attempted to realize '%1%' during evaluation but 'allow-import-from-derivation' is false",
store->printStorePath(drvs.begin()->path));
/* For performance, prefetch all substitute info. */
PathSet willBuild, willSubstitute, unknown;
StorePathSet willBuild, willSubstitute, unknown;
unsigned long long downloadSize, narSize;
store->queryMissing(drvs, willBuild, willSubstitute, unknown, downloadSize, narSize);
store->buildPaths(drvs);
@ -100,8 +100,9 @@ static void prim_scopedImport(EvalState & state, const Pos & pos, Value * * args
Path realPath = state.checkSourcePath(state.toRealPath(path, context));
if (state.store->isStorePath(path) && state.store->isValidPath(path) && isDerivation(path)) {
Derivation drv = readDerivation(realPath);
// FIXME
if (state.store->isStorePath(path) && state.store->isValidPath(state.store->parseStorePath(path)) && isDerivation(path)) {
Derivation drv = readDerivation(*state.store, realPath);
Value & w = *state.allocValue();
state.mkAttrs(w, 3 + drv.outputs.size());
Value * v2 = state.allocAttr(w, state.sDrvPath);
@ -115,7 +116,7 @@ static void prim_scopedImport(EvalState & state, const Pos & pos, Value * * args
for (const auto & o : drv.outputs) {
v2 = state.allocAttr(w, state.symbols.create(o.first));
mkString(*v2, o.second.path, {"!" + o.first + "!" + path});
mkString(*v2, state.store->printStorePath(o.second.path), {"!" + o.first + "!" + path});
outputsVal->listElems()[outputs_index] = state.allocValue();
mkString(*(outputsVal->listElems()[outputs_index++]), o.first);
}
@ -396,8 +397,7 @@ static void prim_genericClosure(EvalState & state, const Pos & pos, Value * * ar
throw EvalError(format("attribute 'key' required, at %1%") % pos);
state.forceValue(*key->value);
if (doneKeys.find(key->value) != doneKeys.end()) continue;
doneKeys.insert(key->value);
if (!doneKeys.insert(key->value).second) continue;
res.push_back(e);
/* Call the `operator' function with `e' as argument. */
@ -470,7 +470,7 @@ static void prim_tryEval(EvalState & state, const Pos & pos, Value * * args, Val
static void prim_getEnv(EvalState & state, const Pos & pos, Value * * args, Value & v)
{
string name = state.forceStringNoCtx(*args[0], pos);
mkString(v, evalSettings.restrictEval || evalSettings.pureEval ? "" : getEnv(name));
mkString(v, evalSettings.restrictEval || evalSettings.pureEval ? "" : getEnv(name).value_or(""));
}
@ -507,13 +507,6 @@ static void prim_trace(EvalState & state, const Pos & pos, Value * * args, Value
}
void prim_valueSize(EvalState & state, const Pos & pos, Value * * args, Value & v)
{
/* We're not forcing the argument on purpose. */
mkInt(v, valueSize(*args[0]));
}
/*************************************************************
* Derivations
*************************************************************/
@ -684,24 +677,24 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * *
runs. */
if (path.at(0) == '=') {
/* !!! This doesn't work if readOnlyMode is set. */
PathSet refs;
state.store->computeFSClosure(string(path, 1), refs);
StorePathSet refs;
state.store->computeFSClosure(state.store->parseStorePath(std::string_view(path).substr(1)), refs);
for (auto & j : refs) {
drv.inputSrcs.insert(j);
if (isDerivation(j))
drv.inputDrvs[j] = state.store->queryDerivationOutputNames(j);
drv.inputSrcs.insert(j.clone());
if (j.isDerivation())
drv.inputDrvs[j.clone()] = state.store->queryDerivationOutputNames(j);
}
}
/* Handle derivation outputs of the form !<name>!<path>. */
else if (path.at(0) == '!') {
std::pair<string, string> ctx = decodeContext(path);
drv.inputDrvs[ctx.first].insert(ctx.second);
drv.inputDrvs[state.store->parseStorePath(ctx.first)].insert(ctx.second);
}
/* Otherwise it's a source file. */
else
drv.inputSrcs.insert(path);
drv.inputSrcs.insert(state.store->parseStorePath(path));
}
/* Do we have all required attributes? */
@ -711,10 +704,8 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * *
throw EvalError(format("required attribute 'system' missing, at %1%") % posDrvName);
/* Check whether the derivation name is valid. */
checkStoreName(drvName);
if (isDerivation(drvName))
throw EvalError(format("derivation names are not allowed to end in '%1%', at %2%")
% drvExtension % posDrvName);
throw EvalError("derivation names are not allowed to end in '%s', at %s", drvExtension, posDrvName);
if (outputHash) {
/* Handle fixed-output derivations. */
@ -724,52 +715,53 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * *
HashType ht = outputHashAlgo.empty() ? htUnknown : parseHashType(outputHashAlgo);
Hash h(*outputHash, ht);
Path outPath = state.store->makeFixedOutputPath(outputHashRecursive, h, drvName);
if (!jsonObject) drv.env["out"] = outPath;
drv.outputs["out"] = DerivationOutput(outPath,
(outputHashRecursive ? "r:" : "") + printHashType(h.type),
h.to_string(Base16, false));
auto outPath = state.store->makeFixedOutputPath(outputHashRecursive, h, drvName);
if (!jsonObject) drv.env["out"] = state.store->printStorePath(outPath);
drv.outputs.insert_or_assign("out", DerivationOutput(std::move(outPath),
(outputHashRecursive ? "r:" : "") + printHashType(h.type),
h.to_string(Base16, false)));
}
else {
/* Construct the "masked" store derivation, which is the final
one except that in the list of outputs, the output paths
are empty, and the corresponding environment variables have
an empty value. This ensures that changes in the set of
output names do get reflected in the hash. */
/* Compute a hash over the "masked" store derivation, which is
the final one except that in the list of outputs, the
output paths are empty strings, and the corresponding
environment variables have an empty value. This ensures
that changes in the set of output names do get reflected in
the hash. */
for (auto & i : outputs) {
if (!jsonObject) drv.env[i] = "";
drv.outputs[i] = DerivationOutput("", "", "");
drv.outputs.insert_or_assign(i,
DerivationOutput(StorePath::dummy.clone(), "", ""));
}
/* Use the masked derivation expression to compute the output
path. */
Hash h = hashDerivationModulo(*state.store, drv);
Hash h = hashDerivationModulo(*state.store, Derivation(drv), true);
for (auto & i : drv.outputs)
if (i.second.path == "") {
Path outPath = state.store->makeOutputPath(i.first, h, drvName);
if (!jsonObject) drv.env[i.first] = outPath;
i.second.path = outPath;
}
for (auto & i : outputs) {
auto outPath = state.store->makeOutputPath(i, h, drvName);
if (!jsonObject) drv.env[i] = state.store->printStorePath(outPath);
drv.outputs.insert_or_assign(i,
DerivationOutput(std::move(outPath), "", ""));
}
}
/* Write the resulting term into the Nix store directory. */
Path drvPath = writeDerivation(state.store, drv, drvName, state.repair);
auto drvPath = writeDerivation(state.store, drv, drvName, state.repair);
auto drvPathS = state.store->printStorePath(drvPath);
printMsg(lvlChatty, format("instantiated '%1%' -> '%2%'")
% drvName % drvPath);
printMsg(lvlChatty, "instantiated '%1%' -> '%2%'", drvName, drvPathS);
/* Optimisation, but required in read-only mode! because in that
case we don't actually write store derivations, so we can't
read them later. */
drvHashes[drvPath] = hashDerivationModulo(*state.store, drv);
drvHashes.insert_or_assign(drvPath.clone(),
hashDerivationModulo(*state.store, Derivation(drv), false));
state.mkAttrs(v, 1 + drv.outputs.size());
mkString(*state.allocAttr(v, state.sDrvPath), drvPath, {"=" + drvPath});
mkString(*state.allocAttr(v, state.sDrvPath), drvPathS, {"=" + drvPathS});
for (auto & i : drv.outputs) {
mkString(*state.allocAttr(v, state.symbols.create(i.first)),
i.second.path, {"!" + i.first + "!" + drvPath});
state.store->printStorePath(i.second.path), {"!" + i.first + "!" + drvPathS});
}
v.attrs->sort();
}
@ -822,7 +814,7 @@ static void prim_storePath(EvalState & state, const Pos & pos, Value * * args, V
throw EvalError(format("path '%1%' is not in the Nix store, at %2%") % path % pos);
Path path2 = state.store->toStorePath(path);
if (!settings.readOnlyMode)
state.store->ensurePath(path2);
state.store->ensurePath(state.store->parseStorePath(path2));
context.insert(path2);
mkString(v, path, context);
}
@ -1018,17 +1010,17 @@ static void prim_toFile(EvalState & state, const Pos & pos, Value * * args, Valu
string name = state.forceStringNoCtx(*args[0], pos);
string contents = state.forceString(*args[1], context, pos);
PathSet refs;
StorePathSet refs;
for (auto path : context) {
if (path.at(0) != '/')
throw EvalError(format("in 'toFile': the file '%1%' cannot refer to derivation outputs, at %2%") % name % pos);
refs.insert(path);
refs.insert(state.store->parseStorePath(path));
}
Path storePath = settings.readOnlyMode
auto storePath = state.store->printStorePath(settings.readOnlyMode
? state.store->computeStorePathForText(name, contents, refs)
: state.store->addTextToStore(name, contents, refs, state.repair);
: state.store->addTextToStore(name, contents, refs, state.repair));
/* Note: we don't need to add `context' to the context of the
result, since `storePath' itself has references to the paths
@ -1068,21 +1060,18 @@ static void addPath(EvalState & state, const Pos & pos, const string & name, con
return state.forceBool(res, pos);
}) : defaultPathFilter;
Path expectedStorePath;
if (expectedHash) {
expectedStorePath =
state.store->makeFixedOutputPath(recursive, expectedHash, name);
}
std::optional<StorePath> expectedStorePath;
if (expectedHash)
expectedStorePath = state.store->makeFixedOutputPath(recursive, expectedHash, name);
Path dstPath;
if (!expectedHash || !state.store->isValidPath(expectedStorePath)) {
dstPath = settings.readOnlyMode
if (!expectedHash || !state.store->isValidPath(*expectedStorePath)) {
dstPath = state.store->printStorePath(settings.readOnlyMode
? state.store->computeStorePathForPath(name, path, recursive, htSHA256, filter).first
: state.store->addToStore(name, path, recursive, htSHA256, filter, state.repair);
if (expectedHash && expectedStorePath != dstPath) {
throw Error(format("store path mismatch in (possibly filtered) path added from '%1%'") % path);
}
: state.store->addToStore(name, path, recursive, htSHA256, filter, state.repair));
if (expectedHash && expectedStorePath != state.store->parseStorePath(dstPath))
throw Error("store path mismatch in (possibly filtered) path added from '%s'", path);
} else
dstPath = expectedStorePath;
dstPath = state.store->printStorePath(*expectedStorePath);
mkString(v, dstPath, {dstPath});
}
@ -1099,7 +1088,7 @@ static void prim_filterSource(EvalState & state, const Pos & pos, Value * * args
if (args[0]->type != tLambda)
throw TypeError(format("first argument in call to 'filterSource' is not a function but %1%, at %2%") % showType(*args[0]) % pos);
addPath(state, pos, baseNameOf(path), path, args[0], true, Hash(), v);
addPath(state, pos, std::string(baseNameOf(path)), path, args[0], true, Hash(), v);
}
static void prim_path(EvalState & state, const Pos & pos, Value * * args, Value & v)
@ -1273,13 +1262,12 @@ static void prim_listToAttrs(EvalState & state, const Pos & pos, Value * * args,
string name = state.forceStringNoCtx(*j->value, pos);
Symbol sym = state.symbols.create(name);
if (seen.find(sym) == seen.end()) {
if (seen.insert(sym).second) {
Bindings::iterator j2 = v2.attrs->find(state.symbols.create(state.sValue));
if (j2 == v2.attrs->end())
throw TypeError(format("'value' attribute missing in a call to 'listToAttrs', at %1%") % pos);
v.attrs->push_back(Attr(sym, j2->value, j2->pos));
seen.insert(sym);
}
}
@ -2089,12 +2077,12 @@ void fetch(EvalState & state, const Pos & pos, Value * * args, Value & v,
if (evalSettings.pureEval && !request.expectedHash)
throw Error("in pure evaluation mode, '%s' requires a 'sha256' argument", who);
Path res = getDownloader()->downloadCached(state.store, request).path;
auto res = getDownloader()->downloadCached(state.store, request);
if (state.allowedPaths)
state.allowedPaths->insert(res);
state.allowedPaths->insert(res.path);
mkString(v, res, PathSet({res}));
mkString(v, res.storePath, PathSet({res.storePath}));
}
@ -2160,7 +2148,7 @@ void EvalState::createBaseEnv()
}
if (!evalSettings.pureEval) {
mkString(v, settings.thisSystem);
mkString(v, settings.thisSystem.get());
addConstant("__currentSystem", v);
}
@ -2208,7 +2196,6 @@ void EvalState::createBaseEnv()
// Debugging
addPrimOp("__trace", 2, prim_trace);
addPrimOp("__valueSize", 1, prim_valueSize);
// Paths
addPrimOp("__toPath", 1, prim_toPath);

View File

@ -148,7 +148,7 @@ static void prim_appendContext(EvalState & state, const Pos & pos, Value * * arg
if (!state.store->isStorePath(i.name))
throw EvalError("Context key '%s' is not a store path, at %s", i.name, i.pos);
if (!settings.readOnlyMode)
state.store->ensurePath(i.name);
state.store->ensurePath(state.store->parseStorePath(i.name));
state.forceAttrs(*i.value, *i.pos);
auto iter = i.value->attrs->find(sPath);
if (iter != i.value->attrs->end()) {

View File

@ -4,6 +4,7 @@
#include "store-api.hh"
#include "pathlocks.hh"
#include "hash.hh"
#include "tarfile.hh"
#include <sys/time.h>
@ -45,9 +46,7 @@ GitInfo exportGit(ref<Store> store, const std::string & uri,
if (!clean) {
/* This is an unclean working tree. So copy all tracked
files. */
/* This is an unclean working tree. So copy all tracked files. */
GitInfo gitInfo;
gitInfo.rev = "0000000000000000000000000000000000000000";
gitInfo.shortRev = std::string(gitInfo.rev, 0, 7);
@ -70,7 +69,7 @@ GitInfo exportGit(ref<Store> store, const std::string & uri,
return files.count(file);
};
gitInfo.storePath = store->addToStore("source", uri, true, htSHA256, filter);
gitInfo.storePath = store->printStorePath(store->addToStore("source", uri, true, htSHA256, filter));
return gitInfo;
}
@ -157,7 +156,7 @@ GitInfo exportGit(ref<Store> store, const std::string & uri,
gitInfo.storePath = json["storePath"];
if (store->isValidPath(gitInfo.storePath)) {
if (store->isValidPath(store->parseStorePath(gitInfo.storePath))) {
gitInfo.revCount = json["revCount"];
return gitInfo;
}
@ -166,16 +165,18 @@ GitInfo exportGit(ref<Store> store, const std::string & uri,
if (e.errNo != ENOENT) throw;
}
// FIXME: should pipe this, or find some better way to extract a
// revision.
auto tar = runProgram("git", true, { "-C", cacheDir, "archive", gitInfo.rev });
auto source = sinkToSource([&](Sink & sink) {
RunOptions gitOptions("git", { "-C", cacheDir, "archive", gitInfo.rev });
gitOptions.standardOut = &sink;
runProgram2(gitOptions);
});
Path tmpDir = createTempDir();
AutoDelete delTmpDir(tmpDir, true);
runProgram("tar", true, { "x", "-C", tmpDir }, tar);
unpackTarfile(*source, tmpDir);
gitInfo.storePath = store->addToStore(name, tmpDir);
gitInfo.storePath = store->printStorePath(store->addToStore(name, tmpDir));
gitInfo.revCount = std::stoull(runProgram("git", true, { "-C", cacheDir, "rev-list", "--count", gitInfo.rev }));

View File

@ -63,7 +63,7 @@ HgInfo exportMercurial(ref<Store> store, const std::string & uri,
return files.count(file);
};
hgInfo.storePath = store->addToStore("source", uri, true, htSHA256, filter);
hgInfo.storePath = store->printStorePath(store->addToStore("source", uri, true, htSHA256, filter));
return hgInfo;
}
@ -134,7 +134,7 @@ HgInfo exportMercurial(ref<Store> store, const std::string & uri,
hgInfo.storePath = json["storePath"];
if (store->isValidPath(hgInfo.storePath)) {
if (store->isValidPath(store->parseStorePath(hgInfo.storePath))) {
printTalkative("using cached Mercurial store path '%s'", hgInfo.storePath);
return hgInfo;
}
@ -150,7 +150,7 @@ HgInfo exportMercurial(ref<Store> store, const std::string & uri,
deletePath(tmpDir + "/.hg_archival.txt");
hgInfo.storePath = store->addToStore(name, tmpDir);
hgInfo.storePath = store->printStorePath(store->addToStore(name, tmpDir));
nlohmann::json json;
json["storePath"] = hgInfo.storePath;

View File

@ -38,7 +38,12 @@ public:
return s < s2.s;
}
operator const string & () const
operator const std::string & () const
{
return *s;
}
operator const std::string_view () const
{
return *s;
}

View File

@ -40,7 +40,12 @@ void printValueAsJSON(EvalState & state, bool strict,
break;
case tAttrs: {
Bindings::iterator i = v.attrs->find(state.sOutPath);
auto maybeString = state.tryAttrsToString(noPos, v, context, false, false);
if (maybeString) {
out.write(*maybeString);
break;
}
auto i = v.attrs->find(state.sOutPath);
if (i == v.attrs->end()) {
auto obj(out.object());
StringSet names;

View File

@ -105,10 +105,9 @@ static void printValueAsXML(EvalState & state, bool strict, bool location,
XMLOpenElement _(doc, "derivation", xmlAttrs);
if (drvPath != "" && drvsSeen.find(drvPath) == drvsSeen.end()) {
drvsSeen.insert(drvPath);
if (drvPath != "" && drvsSeen.insert(drvPath).second)
showAttrs(state, strict, location, *v.attrs, doc, context, drvsSeen);
} else
else
doc.writeEmptyElement("repeated");
}

View File

@ -35,7 +35,6 @@ struct Env;
struct Expr;
struct ExprLambda;
struct PrimOp;
struct PrimOp;
class Symbol;
struct Pos;
class EvalState;
@ -63,9 +62,6 @@ class ExternalValueBase
/* Return a string to be used in builtins.typeOf */
virtual string typeOf() const = 0;
/* How much space does this value take up */
virtual size_t valueSize(std::set<const void *> & seen) const = 0;
/* Coerce the value to a string. Defaults to uncoercable, i.e. throws an
* error
*/
@ -256,12 +252,6 @@ static inline void mkPathNoCopy(Value & v, const char * s)
void mkPath(Value & v, const char * s);
/* Compute the size in bytes of the given value, including all values
and environments reachable from it. Static expressions (Exprs) are
not included. */
size_t valueSize(Value & v);
#if HAVE_BOEHMGC
typedef std::vector<Value *, gc_allocator<Value *> > ValueVector;
typedef std::map<Symbol, Value *, std::less<Symbol>, gc_allocator<std::pair<const Symbol, Value *> > > ValueMap;

View File

@ -33,25 +33,25 @@ void printGCWarning()
}
void printMissing(ref<Store> store, const PathSet & paths, Verbosity lvl)
void printMissing(ref<Store> store, const std::vector<StorePathWithOutputs> & paths, Verbosity lvl)
{
unsigned long long downloadSize, narSize;
PathSet willBuild, willSubstitute, unknown;
StorePathSet willBuild, willSubstitute, unknown;
store->queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize);
printMissing(store, willBuild, willSubstitute, unknown, downloadSize, narSize, lvl);
}
void printMissing(ref<Store> store, const PathSet & willBuild,
const PathSet & willSubstitute, const PathSet & unknown,
void printMissing(ref<Store> store, const StorePathSet & willBuild,
const StorePathSet & willSubstitute, const StorePathSet & unknown,
unsigned long long downloadSize, unsigned long long narSize, Verbosity lvl)
{
if (!willBuild.empty()) {
printMsg(lvl, "these derivations will be built:");
Paths sorted = store->topoSortPaths(willBuild);
auto sorted = store->topoSortPaths(willBuild);
reverse(sorted.begin(), sorted.end());
for (auto & i : sorted)
printMsg(lvl, fmt(" %s", i));
printMsg(lvl, fmt(" %s", store->printStorePath(i)));
}
if (!willSubstitute.empty()) {
@ -59,14 +59,14 @@ void printMissing(ref<Store> store, const PathSet & willBuild,
downloadSize / (1024.0 * 1024.0),
narSize / (1024.0 * 1024.0)));
for (auto & i : willSubstitute)
printMsg(lvl, fmt(" %s", i));
printMsg(lvl, fmt(" %s", store->printStorePath(i)));
}
if (!unknown.empty()) {
printMsg(lvl, fmt("don't know how to build these paths%s:",
(settings.readOnlyMode ? " (may be caused by read-only store access)" : "")));
for (auto & i : unknown)
printMsg(lvl, fmt(" %s", i));
printMsg(lvl, fmt(" %s", store->printStorePath(i)));
}
}
@ -155,7 +155,7 @@ void initNix()
sshd). This breaks build users because they don't have access
to the TMPDIR, in particular in nix-store --serve. */
#if __APPLE__
if (getuid() == 0 && hasPrefix(getEnv("TMPDIR"), "/var/folders/"))
if (getuid() == 0 && hasPrefix(getEnv("TMPDIR").value_or("/tmp"), "/var/folders/"))
unsetenv("TMPDIR");
#endif
}
@ -237,7 +237,7 @@ bool LegacyArgs::processArgs(const Strings & args, bool finish)
void parseCmdLine(int argc, char * * argv,
std::function<bool(Strings::iterator & arg, const Strings::iterator & end)> parseArg)
{
parseCmdLine(baseNameOf(argv[0]), argvToStrings(argc, argv), parseArg);
parseCmdLine(std::string(baseNameOf(argv[0])), argvToStrings(argc, argv), parseArg);
}

View File

@ -3,6 +3,7 @@
#include "util.hh"
#include "args.hh"
#include "common-args.hh"
#include "path.hh"
#include <signal.h>
@ -37,11 +38,15 @@ void printVersion(const string & programName);
void printGCWarning();
class Store;
struct StorePathWithOutputs;
void printMissing(ref<Store> store, const PathSet & paths, Verbosity lvl = lvlInfo);
void printMissing(
ref<Store> store,
const std::vector<StorePathWithOutputs> & paths,
Verbosity lvl = lvlInfo);
void printMissing(ref<Store> store, const PathSet & willBuild,
const PathSet & willSubstitute, const PathSet & unknown,
void printMissing(ref<Store> store, const StorePathSet & willBuild,
const StorePathSet & willSubstitute, const StorePathSet & unknown,
unsigned long long downloadSize, unsigned long long narSize, Verbosity lvl = lvlInfo);
string getArg(const string & opt,

View File

@ -49,9 +49,9 @@ void BinaryCacheStore::init()
throw Error(format("binary cache '%s' is for Nix stores with prefix '%s', not '%s'")
% getUri() % value % storeDir);
} else if (name == "WantMassQuery") {
wantMassQuery_ = value == "1";
wantMassQuery.setDefault(value == "1" ? "true" : "false");
} else if (name == "Priority") {
string2Int(value, priority);
priority.setDefault(fmt("%d", std::stoi(value)));
}
}
}
@ -91,19 +91,18 @@ std::shared_ptr<std::string> BinaryCacheStore::getFile(const std::string & path)
return sink.s;
}
Path BinaryCacheStore::narInfoFileFor(const Path & storePath)
std::string BinaryCacheStore::narInfoFileFor(const StorePath & storePath)
{
assertStorePath(storePath);
return storePathToHash(storePath) + ".narinfo";
return storePathToHash(printStorePath(storePath)) + ".narinfo";
}
void BinaryCacheStore::writeNarInfo(ref<NarInfo> narInfo)
{
auto narInfoFile = narInfoFileFor(narInfo->path);
upsertFile(narInfoFile, narInfo->to_string(), "text/x-nix-narinfo");
upsertFile(narInfoFile, narInfo->to_string(*this), "text/x-nix-narinfo");
auto hashPart = storePathToHash(narInfo->path);
auto hashPart = storePathToHash(printStorePath(narInfo->path));
{
auto state_(state.lock());
@ -126,8 +125,8 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
if (ref != info.path)
queryPathInfo(ref);
} catch (InvalidPath &) {
throw Error(format("cannot add '%s' to the binary cache because the reference '%s' is not valid")
% info.path % ref);
throw Error("cannot add '%s' to the binary cache because the reference '%s' is not valid",
printStorePath(info.path), printStorePath(ref));
}
assert(nar->compare(0, narMagic.size(), narMagic) == 0);
@ -138,14 +137,14 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
narInfo->narHash = hashString(htSHA256, *nar);
if (info.narHash && info.narHash != narInfo->narHash)
throw Error(format("refusing to copy corrupted path '%1%' to binary cache") % info.path);
throw Error("refusing to copy corrupted path '%1%' to binary cache", printStorePath(info.path));
auto accessor_ = std::dynamic_pointer_cast<RemoteFSAccessor>(accessor);
auto narAccessor = makeNarAccessor(nar);
if (accessor_)
accessor_->addToCache(info.path, *nar, narAccessor);
accessor_->addToCache(printStorePath(info.path), *nar, narAccessor);
/* Optionally write a JSON file containing a listing of the
contents of the NAR. */
@ -162,7 +161,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
}
}
upsertFile(storePathToHash(info.path) + ".ls", jsonOut.str(), "application/json");
upsertFile(storePathToHash(printStorePath(info.path)) + ".ls", jsonOut.str(), "application/json");
}
/* Compress the NAR. */
@ -174,10 +173,10 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
narInfo->fileSize = narCompressed->size();
auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
printMsg(lvlTalkative, format("copying path '%1%' (%2% bytes, compressed %3$.1f%% in %4% ms) to binary cache")
% narInfo->path % narInfo->narSize
% ((1.0 - (double) narCompressed->size() / nar->size()) * 100.0)
% duration);
printMsg(lvlTalkative, "copying path '%1%' (%2% bytes, compressed %3$.1f%% in %4% ms) to binary cache",
printStorePath(narInfo->path), narInfo->narSize,
((1.0 - (double) narCompressed->size() / nar->size()) * 100.0),
duration);
narInfo->url = "nar/" + narInfo->fileHash.to_string(Base32, false) + ".nar"
+ (compression == "xz" ? ".xz" :
@ -254,14 +253,14 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, const ref<std::str
stats.narWriteCompressionTimeMs += duration;
/* Atomically write the NAR info file.*/
if (secretKey) narInfo->sign(*secretKey);
if (secretKey) narInfo->sign(*this, *secretKey);
writeNarInfo(narInfo);
stats.narInfoWrite++;
}
bool BinaryCacheStore::isValidPathUncached(const Path & storePath)
bool BinaryCacheStore::isValidPathUncached(const StorePath & storePath)
{
// FIXME: this only checks whether a .narinfo with a matching hash
// part exists. So f4kb...-foo matches f4kb...-bar, even
@ -269,7 +268,7 @@ bool BinaryCacheStore::isValidPathUncached(const Path & storePath)
return fileExists(narInfoFileFor(storePath));
}
void BinaryCacheStore::narFromPath(const Path & storePath, Sink & sink)
void BinaryCacheStore::narFromPath(const StorePath & storePath, Sink & sink)
{
auto info = queryPathInfo(storePath).cast<const NarInfo>();
@ -295,12 +294,13 @@ void BinaryCacheStore::narFromPath(const Path & storePath, Sink & sink)
stats.narReadBytes += narSize;
}
void BinaryCacheStore::queryPathInfoUncached(const Path & storePath,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept
void BinaryCacheStore::queryPathInfoUncached(const StorePath & storePath,
Callback<std::shared_ptr<const ValidPathInfo>> callback) noexcept
{
auto uri = getUri();
auto storePathS = printStorePath(storePath);
auto act = std::make_shared<Activity>(*logger, lvlTalkative, actQueryPathInfo,
fmt("querying info about '%s' on '%s'", storePath, uri), Logger::Fields{storePath, uri});
fmt("querying info about '%s' on '%s'", storePathS, uri), Logger::Fields{storePathS, uri});
PushActivity pact(act->id);
auto narInfoFile = narInfoFileFor(storePath);
@ -326,7 +326,7 @@ void BinaryCacheStore::queryPathInfoUncached(const Path & storePath,
}});
}
Path BinaryCacheStore::addToStore(const string & name, const Path & srcPath,
StorePath BinaryCacheStore::addToStore(const string & name, const Path & srcPath,
bool recursive, HashType hashAlgo, PathFilter & filter, RepairFlag repair)
{
// FIXME: some cut&paste from LocalStore::addToStore().
@ -345,20 +345,18 @@ Path BinaryCacheStore::addToStore(const string & name, const Path & srcPath,
h = hashString(hashAlgo, s);
}
ValidPathInfo info;
info.path = makeFixedOutputPath(recursive, h, name);
ValidPathInfo info(makeFixedOutputPath(recursive, h, name));
addToStore(info, sink.s, repair, CheckSigs, nullptr);
return info.path;
return std::move(info.path);
}
Path BinaryCacheStore::addTextToStore(const string & name, const string & s,
const PathSet & references, RepairFlag repair)
StorePath BinaryCacheStore::addTextToStore(const string & name, const string & s,
const StorePathSet & references, RepairFlag repair)
{
ValidPathInfo info;
info.path = computeStorePathForText(name, s, references);
info.references = references;
ValidPathInfo info(computeStorePathForText(name, s, references));
info.references = cloneStorePathSet(references);
if (repair || !isValidPath(info.path)) {
StringSink sink;
@ -366,7 +364,7 @@ Path BinaryCacheStore::addTextToStore(const string & name, const string & s,
addToStore(info, sink.s, repair, CheckSigs, nullptr);
}
return info.path;
return std::move(info.path);
}
ref<FSAccessor> BinaryCacheStore::getFSAccessor()
@ -374,7 +372,7 @@ ref<FSAccessor> BinaryCacheStore::getFSAccessor()
return make_ref<RemoteFSAccessor>(ref<Store>(shared_from_this()), localNarCache);
}
void BinaryCacheStore::addSignatures(const Path & storePath, const StringSet & sigs)
void BinaryCacheStore::addSignatures(const StorePath & storePath, const StringSet & sigs)
{
/* Note: this is inherently racy since there is no locking on
binary caches. In particular, with S3 this is unreliable, even
@ -390,24 +388,22 @@ void BinaryCacheStore::addSignatures(const Path & storePath, const StringSet & s
writeNarInfo(narInfo);
}
std::shared_ptr<std::string> BinaryCacheStore::getBuildLog(const Path & path)
std::shared_ptr<std::string> BinaryCacheStore::getBuildLog(const StorePath & path)
{
Path drvPath;
auto drvPath = path.clone();
if (isDerivation(path))
drvPath = path;
else {
if (!path.isDerivation()) {
try {
auto info = queryPathInfo(path);
// FIXME: add a "Log" field to .narinfo
if (info->deriver == "") return nullptr;
drvPath = info->deriver;
if (!info->deriver) return nullptr;
drvPath = info->deriver->clone();
} catch (InvalidPath &) {
return nullptr;
}
}
auto logPath = "log/" + baseNameOf(drvPath);
auto logPath = "log/" + std::string(baseNameOf(printStorePath(drvPath)));
debug("fetching build log from binary cache '%s/%s'", getUri(), logPath);
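Taken together, the BinaryCacheStore changes above show the calling convention used throughout this comparison: the string-typed Path gives way to the StorePath value type, strings are parsed once at the boundary and printed back only for output, and copies are made explicitly with clone(). A minimal sketch, assuming the Nix libstore headers; the function and its arguments are illustrative, not code from this diff.

    #include "store-api.hh"

    using namespace nix;

    void example(ref<Store> store, const std::string & s)
    {
        StorePath path = store->parseStorePath(s);            // parse/validate once at the edge
        StorePathSet paths;
        paths.insert(path.clone());                           // copies are explicit
        std::string printable = store->printStorePath(path);  // full /nix/store/... string for output
    }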

src/libstore/binary-cache-store.hh

@ -52,11 +52,6 @@ public:
std::shared_ptr<std::string> getFile(const std::string & path);
protected:
bool wantMassQuery_ = false;
int priority = 50;
public:
virtual void init();
@ -65,49 +60,45 @@ private:
std::string narMagic;
std::string narInfoFileFor(const Path & storePath);
std::string narInfoFileFor(const StorePath & storePath);
void writeNarInfo(ref<NarInfo> narInfo);
public:
bool isValidPathUncached(const Path & path) override;
bool isValidPathUncached(const StorePath & path) override;
void queryPathInfoUncached(const Path & path,
Callback<std::shared_ptr<ValidPathInfo>> callback) noexcept override;
void queryPathInfoUncached(const StorePath & path,
Callback<std::shared_ptr<const ValidPathInfo>> callback) noexcept override;
Path queryPathFromHashPart(const string & hashPart) override
std::optional<StorePath> queryPathFromHashPart(const std::string & hashPart) override
{ unsupported("queryPathFromHashPart"); }
bool wantMassQuery() override { return wantMassQuery_; }
void addToStore(const ValidPathInfo & info, const ref<std::string> & nar,
RepairFlag repair, CheckSigsFlag checkSigs,
std::shared_ptr<FSAccessor> accessor) override;
Path addToStore(const string & name, const Path & srcPath,
StorePath addToStore(const string & name, const Path & srcPath,
bool recursive, HashType hashAlgo,
PathFilter & filter, RepairFlag repair) override;
Path addTextToStore(const string & name, const string & s,
const PathSet & references, RepairFlag repair) override;
StorePath addTextToStore(const string & name, const string & s,
const StorePathSet & references, RepairFlag repair) override;
void narFromPath(const Path & path, Sink & sink) override;
void narFromPath(const StorePath & path, Sink & sink) override;
BuildResult buildDerivation(const Path & drvPath, const BasicDerivation & drv,
BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv,
BuildMode buildMode) override
{ unsupported("buildDerivation"); }
void ensurePath(const Path & path) override
void ensurePath(const StorePath & path) override
{ unsupported("ensurePath"); }
ref<FSAccessor> getFSAccessor() override;
void addSignatures(const Path & storePath, const StringSet & sigs) override;
void addSignatures(const StorePath & storePath, const StringSet & sigs) override;
std::shared_ptr<std::string> getBuildLog(const Path & path) override;
int getPriority() override { return priority; }
std::shared_ptr<std::string> getBuildLog(const StorePath & path) override;
};

(File diff suppressed because it is too large.)

src/libstore/builtins.hh

@ -7,5 +7,6 @@ namespace nix {
// TODO: make pluggable.
void builtinFetchurl(const BasicDerivation & drv, const std::string & netrcData);
void builtinBuildenv(const BasicDerivation & drv);
void builtinUnpackChannel(const BasicDerivation & drv);
}

src/libstore/builtins/buildenv.cc

@ -123,8 +123,7 @@ static Path out;
static void addPkg(const Path & pkgDir, int priority)
{
if (done.count(pkgDir)) return;
done.insert(pkgDir);
if (!done.insert(pkgDir).second) return;
createLinks(pkgDir, out, priority);
try {
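The addPkg change above folds the separate done.count()/done.insert() pair into a single insert(): std::set::insert returns an {iterator, bool} pair whose bool is false when the element was already present, so one call both tests and records membership. A standalone sketch of the idiom:

    #include <iostream>
    #include <set>
    #include <string>

    static std::set<std::string> done;

    void visitOnce(const std::string & pkgDir)
    {
        // insert() reports via .second whether the element was newly inserted
        if (!done.insert(pkgDir).second) return;   // already processed
        std::cout << "processing " << pkgDir << "\n";
    }

    int main()
    {
        visitOnce("/nix/store/example-foo");
        visitOnce("/nix/store/example-foo");       // second call is a no-op
    }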

src/libstore/builtins/fetchurl.cc

@ -24,7 +24,7 @@ void builtinFetchurl(const BasicDerivation & drv, const std::string & netrcData)
Path storePath = getAttr("out");
auto mainUrl = getAttr("url");
bool unpack = get(drv.env, "unpack", "") == "1";
bool unpack = get(drv.env, "unpack").value_or("") == "1";
/* Note: have to use a fresh downloader here because we're in
a forked process. */
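The fetchurl hunk above switches get(drv.env, "unpack") from a default-argument form to an std::optional return combined with value_or(). A standalone equivalent of that lookup helper (the real helper lives in the Nix utility headers; this version is only an illustration):

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // Optional-returning map lookup, mirroring the call in the hunk above.
    template<typename K, typename V>
    std::optional<V> get(const std::map<K, V> & m, const K & key)
    {
        auto i = m.find(key);
        if (i == m.end()) return std::nullopt;
        return i->second;
    }

    int main()
    {
        std::map<std::string, std::string> env{{"unpack", "1"}};
        bool unpack = get(env, std::string("unpack")).value_or("") == "1";
        std::cout << unpack << "\n";
    }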

src/libstore/builtins/unpack-channel.cc (new file)

@ -0,0 +1,29 @@
#include "builtins.hh"
#include "tarfile.hh"
namespace nix {
void builtinUnpackChannel(const BasicDerivation & drv)
{
auto getAttr = [&](const string & name) {
auto i = drv.env.find(name);
if (i == drv.env.end()) throw Error("attribute '%s' missing", name);
return i->second;
};
Path out = getAttr("out");
auto channelName = getAttr("channelName");
auto src = getAttr("src");
createDirs(out);
unpackTarfile(src, out);
auto entries = readDirectory(out);
if (entries.size() != 1)
throw Error("channel tarball '%s' contains more than one file", src);
if (rename((out + "/" + entries[0].name).c_str(), (out + "/" + channelName).c_str()) == -1)
throw SysError("renaming channel directory");
}
}
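The new builtinUnpackChannel builder reads three attributes from the derivation environment (out, channelName, src), unpacks the tarball into $out and renames its single top-level entry to the channel name. A hedged sketch of the inputs it expects, assuming the Nix libstore headers; the concrete paths and names are placeholders:

    #include "builtins.hh"
    #include "derivations.hh"

    using namespace nix;

    void exampleUnpack()
    {
        BasicDerivation drv;
        drv.env["out"] = "/tmp/example-out";         // destination directory, created by the builtin
        drv.env["channelName"] = "nixpkgs";          // final name of the unpacked top-level directory
        drv.env["src"] = "/tmp/nixexprs.tar.xz";     // tarball with exactly one top-level entry
        builtinUnpackChannel(drv);
    }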

src/libstore/daemon.cc (new file, 844 lines)

@ -0,0 +1,844 @@
#include "daemon.hh"
#include "monitor-fd.hh"
#include "worker-protocol.hh"
#include "store-api.hh"
#include "local-store.hh"
#include "finally.hh"
#include "affinity.hh"
#include "archive.hh"
#include "derivations.hh"
#include "args.hh"
namespace nix::daemon {
Sink & operator << (Sink & sink, const Logger::Fields & fields)
{
sink << fields.size();
for (auto & f : fields) {
sink << f.type;
if (f.type == Logger::Field::tInt)
sink << f.i;
else if (f.type == Logger::Field::tString)
sink << f.s;
else abort();
}
return sink;
}
/* Logger that forwards log messages to the client, *if* we're in a
state where the protocol allows it (i.e., when canSendStderr is
true). */
struct TunnelLogger : public Logger
{
FdSink & to;
struct State
{
bool canSendStderr = false;
std::vector<std::string> pendingMsgs;
};
Sync<State> state_;
unsigned int clientVersion;
TunnelLogger(FdSink & to, unsigned int clientVersion)
: to(to), clientVersion(clientVersion) { }
void enqueueMsg(const std::string & s)
{
auto state(state_.lock());
if (state->canSendStderr) {
assert(state->pendingMsgs.empty());
try {
to(s);
to.flush();
} catch (...) {
/* Write failed; that means that the other side is
gone. */
state->canSendStderr = false;
throw;
}
} else
state->pendingMsgs.push_back(s);
}
void log(Verbosity lvl, const FormatOrString & fs) override
{
if (lvl > verbosity) return;
StringSink buf;
buf << STDERR_NEXT << (fs.s + "\n");
enqueueMsg(*buf.s);
}
/* startWork() means that we're starting an operation for which we
want to send out stderr to the client. */
void startWork()
{
auto state(state_.lock());
state->canSendStderr = true;
for (auto & msg : state->pendingMsgs)
to(msg);
state->pendingMsgs.clear();
to.flush();
}
/* stopWork() means that we're done; stop sending stderr to the
client. */
void stopWork(bool success = true, const string & msg = "", unsigned int status = 0)
{
auto state(state_.lock());
state->canSendStderr = false;
if (success)
to << STDERR_LAST;
else {
to << STDERR_ERROR << msg;
if (status != 0) to << status;
}
}
void startActivity(ActivityId act, Verbosity lvl, ActivityType type,
const std::string & s, const Fields & fields, ActivityId parent) override
{
if (GET_PROTOCOL_MINOR(clientVersion) < 20) {
if (!s.empty())
log(lvl, s + "...");
return;
}
StringSink buf;
buf << STDERR_START_ACTIVITY << act << lvl << type << s << fields << parent;
enqueueMsg(*buf.s);
}
void stopActivity(ActivityId act) override
{
if (GET_PROTOCOL_MINOR(clientVersion) < 20) return;
StringSink buf;
buf << STDERR_STOP_ACTIVITY << act;
enqueueMsg(*buf.s);
}
void result(ActivityId act, ResultType type, const Fields & fields) override
{
if (GET_PROTOCOL_MINOR(clientVersion) < 20) return;
StringSink buf;
buf << STDERR_RESULT << act << type << fields;
enqueueMsg(*buf.s);
}
};
struct TunnelSink : Sink
{
Sink & to;
TunnelSink(Sink & to) : to(to) { }
virtual void operator () (const unsigned char * data, size_t len)
{
to << STDERR_WRITE;
writeString(data, len, to);
}
};
struct TunnelSource : BufferedSource
{
Source & from;
BufferedSink & to;
TunnelSource(Source & from, BufferedSink & to) : from(from), to(to) { }
size_t readUnbuffered(unsigned char * data, size_t len) override
{
to << STDERR_READ << len;
to.flush();
size_t n = readString(data, len, from);
if (n == 0) throw EndOfFile("unexpected end-of-file");
return n;
}
};
/* If the NAR archive contains a single file at top-level, then save
the contents of the file to `s'. Otherwise barf. */
struct RetrieveRegularNARSink : ParseSink
{
bool regular;
string s;
RetrieveRegularNARSink() : regular(true) { }
void createDirectory(const Path & path)
{
regular = false;
}
void receiveContents(unsigned char * data, unsigned int len)
{
s.append((const char *) data, len);
}
void createSymlink(const Path & path, const string & target)
{
regular = false;
}
};
struct ClientSettings
{
bool keepFailed;
bool keepGoing;
bool tryFallback;
Verbosity verbosity;
unsigned int maxBuildJobs;
time_t maxSilentTime;
bool verboseBuild;
unsigned int buildCores;
bool useSubstitutes;
StringMap overrides;
void apply(TrustedFlag trusted)
{
settings.keepFailed = keepFailed;
settings.keepGoing = keepGoing;
settings.tryFallback = tryFallback;
nix::verbosity = verbosity;
settings.maxBuildJobs.assign(maxBuildJobs);
settings.maxSilentTime = maxSilentTime;
settings.verboseBuild = verboseBuild;
settings.buildCores = buildCores;
settings.useSubstitutes = useSubstitutes;
for (auto & i : overrides) {
auto & name(i.first);
auto & value(i.second);
auto setSubstituters = [&](Setting<Strings> & res) {
if (name != res.name && res.aliases.count(name) == 0)
return false;
StringSet trusted = settings.trustedSubstituters;
for (auto & s : settings.substituters.get())
trusted.insert(s);
Strings subs;
auto ss = tokenizeString<Strings>(value);
for (auto & s : ss)
if (trusted.count(s))
subs.push_back(s);
else
warn("ignoring untrusted substituter '%s'", s);
res = subs;
return true;
};
try {
if (name == "ssh-auth-sock") // obsolete
;
else if (trusted
|| name == settings.buildTimeout.name
|| name == "connect-timeout"
|| (name == "builders" && value == ""))
settings.set(name, value);
else if (setSubstituters(settings.substituters))
;
else if (setSubstituters(settings.extraSubstituters))
;
else
warn("ignoring the user-specified setting '%s', because it is a restricted setting and you are not a trusted user", name);
} catch (UsageError & e) {
warn(e.what());
}
}
}
};
static void performOp(TunnelLogger * logger, ref<Store> store,
TrustedFlag trusted, RecursiveFlag recursive, unsigned int clientVersion,
Source & from, BufferedSink & to, unsigned int op)
{
switch (op) {
case wopIsValidPath: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
bool result = store->isValidPath(path);
logger->stopWork();
to << result;
break;
}
case wopQueryValidPaths: {
auto paths = readStorePaths<StorePathSet>(*store, from);
logger->startWork();
auto res = store->queryValidPaths(paths);
logger->stopWork();
writeStorePaths(*store, to, res);
break;
}
case wopHasSubstitutes: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
StorePathSet paths; // FIXME
paths.insert(path.clone());
auto res = store->querySubstitutablePaths(paths);
logger->stopWork();
to << (res.count(path) != 0);
break;
}
case wopQuerySubstitutablePaths: {
auto paths = readStorePaths<StorePathSet>(*store, from);
logger->startWork();
auto res = store->querySubstitutablePaths(paths);
logger->stopWork();
writeStorePaths(*store, to, res);
break;
}
case wopQueryPathHash: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
auto hash = store->queryPathInfo(path)->narHash;
logger->stopWork();
to << hash.to_string(Base16, false);
break;
}
case wopQueryReferences:
case wopQueryReferrers:
case wopQueryValidDerivers:
case wopQueryDerivationOutputs: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
StorePathSet paths;
if (op == wopQueryReferences)
for (auto & i : store->queryPathInfo(path)->references)
paths.insert(i.clone());
else if (op == wopQueryReferrers)
store->queryReferrers(path, paths);
else if (op == wopQueryValidDerivers)
paths = store->queryValidDerivers(path);
else paths = store->queryDerivationOutputs(path);
logger->stopWork();
writeStorePaths(*store, to, paths);
break;
}
case wopQueryDerivationOutputNames: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
StringSet names;
names = store->queryDerivationOutputNames(path);
logger->stopWork();
to << names;
break;
}
case wopQueryDeriver: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
auto info = store->queryPathInfo(path);
logger->stopWork();
to << (info->deriver ? store->printStorePath(*info->deriver) : "");
break;
}
case wopQueryPathFromHashPart: {
auto hashPart = readString(from);
logger->startWork();
auto path = store->queryPathFromHashPart(hashPart);
logger->stopWork();
to << (path ? store->printStorePath(*path) : "");
break;
}
case wopAddToStore: {
bool fixed, recursive;
std::string s, baseName;
from >> baseName >> fixed /* obsolete */ >> recursive >> s;
/* Compatibility hack. */
if (!fixed) {
s = "sha256";
recursive = true;
}
HashType hashAlgo = parseHashType(s);
TeeSource savedNAR(from);
RetrieveRegularNARSink savedRegular;
if (recursive) {
/* Get the entire NAR dump from the client and save it to
a string so that we can pass it to
addToStoreFromDump(). */
ParseSink sink; /* null sink; just parse the NAR */
parseDump(sink, savedNAR);
} else
parseDump(savedRegular, from);
logger->startWork();
if (!savedRegular.regular) throw Error("regular file expected");
auto path = store->addToStoreFromDump(recursive ? *savedNAR.data : savedRegular.s, baseName, recursive, hashAlgo);
logger->stopWork();
to << store->printStorePath(path);
break;
}
case wopAddTextToStore: {
string suffix = readString(from);
string s = readString(from);
auto refs = readStorePaths<StorePathSet>(*store, from);
logger->startWork();
auto path = store->addTextToStore(suffix, s, refs, NoRepair);
logger->stopWork();
to << store->printStorePath(path);
break;
}
case wopExportPath: {
auto path = store->parseStorePath(readString(from));
readInt(from); // obsolete
logger->startWork();
TunnelSink sink(to);
store->exportPath(path, sink);
logger->stopWork();
to << 1;
break;
}
case wopImportPaths: {
logger->startWork();
TunnelSource source(from, to);
auto paths = store->importPaths(source, nullptr,
trusted ? NoCheckSigs : CheckSigs);
logger->stopWork();
Strings paths2;
for (auto & i : paths) paths2.push_back(store->printStorePath(i));
to << paths2;
break;
}
case wopBuildPaths: {
std::vector<StorePathWithOutputs> drvs;
for (auto & s : readStrings<Strings>(from))
drvs.push_back(store->parsePathWithOutputs(s));
BuildMode mode = bmNormal;
if (GET_PROTOCOL_MINOR(clientVersion) >= 15) {
mode = (BuildMode) readInt(from);
/* Repairing is not atomic, so disallowed for "untrusted"
clients. */
if (mode == bmRepair && !trusted)
throw Error("repairing is not allowed because you are not in 'trusted-users'");
}
logger->startWork();
store->buildPaths(drvs, mode);
logger->stopWork();
to << 1;
break;
}
case wopBuildDerivation: {
auto drvPath = store->parseStorePath(readString(from));
BasicDerivation drv;
readDerivation(from, *store, drv);
BuildMode buildMode = (BuildMode) readInt(from);
logger->startWork();
if (!trusted)
throw Error("you are not privileged to build derivations");
auto res = store->buildDerivation(drvPath, drv, buildMode);
logger->stopWork();
to << res.status << res.errorMsg;
break;
}
case wopEnsurePath: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
store->ensurePath(path);
logger->stopWork();
to << 1;
break;
}
case wopAddTempRoot: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
store->addTempRoot(path);
logger->stopWork();
to << 1;
break;
}
case wopAddIndirectRoot: {
Path path = absPath(readString(from));
logger->startWork();
store->addIndirectRoot(path);
logger->stopWork();
to << 1;
break;
}
case wopSyncWithGC: {
logger->startWork();
store->syncWithGC();
logger->stopWork();
to << 1;
break;
}
case wopFindRoots: {
logger->startWork();
Roots roots = store->findRoots(!trusted);
logger->stopWork();
size_t size = 0;
for (auto & i : roots)
size += i.second.size();
to << size;
for (auto & [target, links] : roots)
for (auto & link : links)
to << link << store->printStorePath(target);
break;
}
case wopCollectGarbage: {
GCOptions options;
options.action = (GCOptions::GCAction) readInt(from);
options.pathsToDelete = readStorePaths<StorePathSet>(*store, from);
from >> options.ignoreLiveness >> options.maxFreed;
// obsolete fields
readInt(from);
readInt(from);
readInt(from);
GCResults results;
logger->startWork();
if (options.ignoreLiveness)
throw Error("you are not allowed to ignore liveness");
store->collectGarbage(options, results);
logger->stopWork();
to << results.paths << results.bytesFreed << 0 /* obsolete */;
break;
}
case wopSetOptions: {
ClientSettings clientSettings;
clientSettings.keepFailed = readInt(from);
clientSettings.keepGoing = readInt(from);
clientSettings.tryFallback = readInt(from);
clientSettings.verbosity = (Verbosity) readInt(from);
clientSettings.maxBuildJobs = readInt(from);
clientSettings.maxSilentTime = readInt(from);
readInt(from); // obsolete useBuildHook
clientSettings.verboseBuild = lvlError == (Verbosity) readInt(from);
readInt(from); // obsolete logType
readInt(from); // obsolete printBuildTrace
clientSettings.buildCores = readInt(from);
clientSettings.useSubstitutes = readInt(from);
if (GET_PROTOCOL_MINOR(clientVersion) >= 12) {
unsigned int n = readInt(from);
for (unsigned int i = 0; i < n; i++) {
string name = readString(from);
string value = readString(from);
clientSettings.overrides.emplace(name, value);
}
}
logger->startWork();
// FIXME: use some setting in recursive mode. Will need to use
// non-global variables.
if (!recursive)
clientSettings.apply(trusted);
logger->stopWork();
break;
}
case wopQuerySubstitutablePathInfo: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
SubstitutablePathInfos infos;
StorePathSet paths;
paths.insert(path.clone()); // FIXME
store->querySubstitutablePathInfos(paths, infos);
logger->stopWork();
auto i = infos.find(path);
if (i == infos.end())
to << 0;
else {
to << 1
<< (i->second.deriver ? store->printStorePath(*i->second.deriver) : "");
writeStorePaths(*store, to, i->second.references);
to << i->second.downloadSize
<< i->second.narSize;
}
break;
}
case wopQuerySubstitutablePathInfos: {
auto paths = readStorePaths<StorePathSet>(*store, from);
logger->startWork();
SubstitutablePathInfos infos;
store->querySubstitutablePathInfos(paths, infos);
logger->stopWork();
to << infos.size();
for (auto & i : infos) {
to << store->printStorePath(i.first)
<< (i.second.deriver ? store->printStorePath(*i.second.deriver) : "");
writeStorePaths(*store, to, i.second.references);
to << i.second.downloadSize << i.second.narSize;
}
break;
}
case wopQueryAllValidPaths: {
logger->startWork();
auto paths = store->queryAllValidPaths();
logger->stopWork();
writeStorePaths(*store, to, paths);
break;
}
case wopQueryPathInfo: {
auto path = store->parseStorePath(readString(from));
std::shared_ptr<const ValidPathInfo> info;
logger->startWork();
try {
info = store->queryPathInfo(path);
} catch (InvalidPath &) {
if (GET_PROTOCOL_MINOR(clientVersion) < 17) throw;
}
logger->stopWork();
if (info) {
if (GET_PROTOCOL_MINOR(clientVersion) >= 17)
to << 1;
to << (info->deriver ? store->printStorePath(*info->deriver) : "")
<< info->narHash.to_string(Base16, false);
writeStorePaths(*store, to, info->references);
to << info->registrationTime << info->narSize;
if (GET_PROTOCOL_MINOR(clientVersion) >= 16) {
to << info->ultimate
<< info->sigs
<< info->ca;
}
} else {
assert(GET_PROTOCOL_MINOR(clientVersion) >= 17);
to << 0;
}
break;
}
case wopOptimiseStore:
logger->startWork();
store->optimiseStore();
logger->stopWork();
to << 1;
break;
case wopVerifyStore: {
bool checkContents, repair;
from >> checkContents >> repair;
logger->startWork();
if (repair && !trusted)
throw Error("you are not privileged to repair paths");
bool errors = store->verifyStore(checkContents, (RepairFlag) repair);
logger->stopWork();
to << errors;
break;
}
case wopAddSignatures: {
auto path = store->parseStorePath(readString(from));
StringSet sigs = readStrings<StringSet>(from);
logger->startWork();
if (!trusted)
throw Error("you are not privileged to add signatures");
store->addSignatures(path, sigs);
logger->stopWork();
to << 1;
break;
}
case wopNarFromPath: {
auto path = store->parseStorePath(readString(from));
logger->startWork();
logger->stopWork();
dumpPath(store->printStorePath(path), to);
break;
}
case wopAddToStoreNar: {
bool repair, dontCheckSigs;
ValidPathInfo info(store->parseStorePath(readString(from)));
auto deriver = readString(from);
if (deriver != "")
info.deriver = store->parseStorePath(deriver);
info.narHash = Hash(readString(from), htSHA256);
info.references = readStorePaths<StorePathSet>(*store, from);
from >> info.registrationTime >> info.narSize >> info.ultimate;
info.sigs = readStrings<StringSet>(from);
from >> info.ca >> repair >> dontCheckSigs;
if (!trusted && dontCheckSigs)
dontCheckSigs = false;
if (!trusted)
info.ultimate = false;
std::string saved;
std::unique_ptr<Source> source;
if (GET_PROTOCOL_MINOR(clientVersion) >= 21)
source = std::make_unique<TunnelSource>(from, to);
else {
TeeSink tee(from);
parseDump(tee, tee.source);
saved = std::move(*tee.source.data);
source = std::make_unique<StringSource>(saved);
}
logger->startWork();
// FIXME: race if addToStore doesn't read source?
store->addToStore(info, *source, (RepairFlag) repair,
dontCheckSigs ? NoCheckSigs : CheckSigs, nullptr);
logger->stopWork();
break;
}
case wopQueryMissing: {
std::vector<StorePathWithOutputs> targets;
for (auto & s : readStrings<Strings>(from))
targets.push_back(store->parsePathWithOutputs(s));
logger->startWork();
StorePathSet willBuild, willSubstitute, unknown;
unsigned long long downloadSize, narSize;
store->queryMissing(targets, willBuild, willSubstitute, unknown, downloadSize, narSize);
logger->stopWork();
writeStorePaths(*store, to, willBuild);
writeStorePaths(*store, to, willSubstitute);
writeStorePaths(*store, to, unknown);
to << downloadSize << narSize;
break;
}
default:
throw Error(format("invalid operation %1%") % op);
}
}
void processConnection(
ref<Store> store,
FdSource & from,
FdSink & to,
TrustedFlag trusted,
RecursiveFlag recursive,
const std::string & userName,
uid_t userId)
{
auto monitor = !recursive ? std::make_unique<MonitorFdHup>(from.fd) : nullptr;
/* Exchange the greeting. */
unsigned int magic = readInt(from);
if (magic != WORKER_MAGIC_1) throw Error("protocol mismatch");
to << WORKER_MAGIC_2 << PROTOCOL_VERSION;
to.flush();
unsigned int clientVersion = readInt(from);
if (clientVersion < 0x10a)
throw Error("the Nix client version is too old");
auto tunnelLogger = new TunnelLogger(to, clientVersion);
auto prevLogger = nix::logger;
// FIXME
if (!recursive)
logger = tunnelLogger;
unsigned int opCount = 0;
Finally finally([&]() {
_isInterrupted = false;
prevLogger->log(lvlDebug, fmt("%d operations", opCount));
});
if (GET_PROTOCOL_MINOR(clientVersion) >= 14 && readInt(from)) {
auto affinity = readInt(from);
setAffinityTo(affinity);
}
readInt(from); // obsolete reserveSpace
/* Send startup error messages to the client. */
tunnelLogger->startWork();
try {
/* If we can't accept clientVersion, then throw an error
*here* (not above). */
#if 0
/* Prevent users from doing something very dangerous. */
if (geteuid() == 0 &&
querySetting("build-users-group", "") == "")
throw Error("if you run 'nix-daemon' as root, then you MUST set 'build-users-group'!");
#endif
store->createUser(userName, userId);
tunnelLogger->stopWork();
to.flush();
/* Process client requests. */
while (true) {
WorkerOp op;
try {
op = (WorkerOp) readInt(from);
} catch (Interrupted & e) {
break;
} catch (EndOfFile & e) {
break;
}
opCount++;
try {
performOp(tunnelLogger, store, trusted, recursive, clientVersion, from, to, op);
} catch (Error & e) {
/* If we're not in a state where we can send replies, then
something went wrong processing the input of the
client. This can happen especially if I/O errors occur
during addTextToStore() / importPath(). If that
happens, just send the error message and exit. */
bool errorAllowed = tunnelLogger->state_.lock()->canSendStderr;
tunnelLogger->stopWork(false, e.msg(), e.status);
if (!errorAllowed) throw;
} catch (std::bad_alloc & e) {
tunnelLogger->stopWork(false, "Nix daemon out of memory", 1);
throw;
}
to.flush();
assert(!tunnelLogger->state_.lock()->canSendStderr);
};
} catch (std::exception & e) {
tunnelLogger->stopWork(false, e.what(), 1);
to.flush();
return;
}
}
}
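Two patterns recur through the new daemon.cc above. Every performOp() handler reads its arguments from `from`, brackets the store call with startWork()/stopWork() so that log output can be tunnelled to the client, and then writes its reply to `to`. Before any of that, processConnection() exchanges a greeting with the client. As a hedged sketch, here is what the client side of that greeting looks like when it simply mirrors the reads and writes above; it is an illustration, not code from this PR, and error handling is omitted.

    #include "serialise.hh"
    #include "worker-protocol.hh"

    using namespace nix;

    unsigned int greetDaemon(FdSource & from, FdSink & to)
    {
        to << WORKER_MAGIC_1;                        // client greeting
        to.flush();
        unsigned int magic = readInt(from);          // daemon answers with its magic...
        if (magic != WORKER_MAGIC_2) throw Error("protocol mismatch");
        unsigned int daemonVersion = readInt(from);  // ...and its protocol version
        to << PROTOCOL_VERSION;                      // client announces its own version
        to << 0;                                     // no CPU affinity requested (protocol >= 14)
        to << 0;                                     // obsolete reserveSpace field
        to.flush();
        return daemonVersion;
    }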

src/libstore/daemon.hh (new file, 18 lines)

@ -0,0 +1,18 @@
#include "serialise.hh"
#include "store-api.hh"
namespace nix::daemon {
enum TrustedFlag : bool { NotTrusted = false, Trusted = true };
enum RecursiveFlag : bool { NotRecursive = false, Recursive = true };
void processConnection(
ref<Store> store,
FdSource & from,
FdSink & to,
TrustedFlag trusted,
RecursiveFlag recursive,
const std::string & userName,
uid_t userId);
}
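The new daemon.hh above exposes processConnection() together with two boolean-backed flag enums, so any pair of file descriptors can be served with the worker protocol. A hedged usage sketch; openStore(), the file descriptors and the user identity are placeholders and assumptions, not taken from this diff:

    #include "daemon.hh"
    #include "store-api.hh"

    #include <unistd.h>

    using namespace nix;
    using namespace nix::daemon;

    void serveStdio()
    {
        auto store = openStore();           // assumed store factory from store-api.hh
        FdSource from(STDIN_FILENO);
        FdSink to(STDOUT_FILENO);
        processConnection(store, from, to,
            NotTrusted, NotRecursive,
            "nobody", 65534);               // userName / userId are illustrative only
    }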

src/libstore/derivations.cc

@ -21,17 +21,39 @@ void DerivationOutput::parseHashInfo(bool & recursive, Hash & hash) const
HashType hashType = parseHashType(algo);
if (hashType == htUnknown)
throw Error(format("unknown hash algorithm '%1%'") % algo);
throw Error("unknown hash algorithm '%s'", algo);
hash = Hash(this->hash, hashType);
}
Path BasicDerivation::findOutput(const string & id) const
BasicDerivation::BasicDerivation(const BasicDerivation & other)
: platform(other.platform)
, builder(other.builder)
, args(other.args)
, env(other.env)
{
for (auto & i : other.outputs)
outputs.insert_or_assign(i.first,
DerivationOutput(i.second.path.clone(), std::string(i.second.hashAlgo), std::string(i.second.hash)));
for (auto & i : other.inputSrcs)
inputSrcs.insert(i.clone());
}
Derivation::Derivation(const Derivation & other)
: BasicDerivation(other)
{
for (auto & i : other.inputDrvs)
inputDrvs.insert_or_assign(i.first.clone(), i.second);
}
const StorePath & BasicDerivation::findOutput(const string & id) const
{
auto i = outputs.find(id);
if (i == outputs.end())
throw Error(format("derivation has no output '%1%'") % id);
throw Error("derivation has no output '%s'", id);
return i->second.path;
}
@ -42,18 +64,17 @@ bool BasicDerivation::isBuiltin() const
}
Path writeDerivation(ref<Store> store,
StorePath writeDerivation(ref<Store> store,
const Derivation & drv, const string & name, RepairFlag repair)
{
PathSet references;
references.insert(drv.inputSrcs.begin(), drv.inputSrcs.end());
auto references = cloneStorePathSet(drv.inputSrcs);
for (auto & i : drv.inputDrvs)
references.insert(i.first);
references.insert(i.first.clone());
/* Note that the outputs of a derivation are *not* references
(that can be missing (of course) and should not necessarily be
held during a garbage collection). */
string suffix = name + drvExtension;
string contents = drv.unparse();
string contents = drv.unparse(*store, false);
return settings.readOnlyMode
? store->computeStorePathForText(suffix, contents, references)
: store->addTextToStore(suffix, contents, references, repair);
@ -121,7 +142,7 @@ static StringSet parseStrings(std::istream & str, bool arePaths)
}
static Derivation parseDerivation(const string & s)
static Derivation parseDerivation(const Store & store, const string & s)
{
Derivation drv;
istringstream_nocopy str(s);
@ -129,13 +150,12 @@ static Derivation parseDerivation(const string & s)
/* Parse the list of outputs. */
while (!endOfList(str)) {
DerivationOutput out;
expect(str, "("); string id = parseString(str);
expect(str, ","); out.path = parsePath(str);
expect(str, ","); out.hashAlgo = parseString(str);
expect(str, ","); out.hash = parseString(str);
expect(str, "("); std::string id = parseString(str);
expect(str, ","); auto path = store.parseStorePath(parsePath(str));
expect(str, ","); auto hashAlgo = parseString(str);
expect(str, ","); auto hash = parseString(str);
expect(str, ")");
drv.outputs[id] = out;
drv.outputs.emplace(id, DerivationOutput(std::move(path), std::move(hashAlgo), std::move(hash)));
}
/* Parse the list of input derivations. */
@ -144,11 +164,11 @@ static Derivation parseDerivation(const string & s)
expect(str, "(");
Path drvPath = parsePath(str);
expect(str, ",[");
drv.inputDrvs[drvPath] = parseStrings(str, false);
drv.inputDrvs.insert_or_assign(store.parseStorePath(drvPath), parseStrings(str, false));
expect(str, ")");
}
expect(str, ",["); drv.inputSrcs = parseStrings(str, true);
expect(str, ",["); drv.inputSrcs = store.parseStorePathSet(parseStrings(str, true));
expect(str, ","); drv.platform = parseString(str);
expect(str, ","); drv.builder = parseString(str);
@ -171,25 +191,24 @@ static Derivation parseDerivation(const string & s)
}
Derivation readDerivation(const Path & drvPath)
Derivation readDerivation(const Store & store, const Path & drvPath)
{
try {
return parseDerivation(readFile(drvPath));
return parseDerivation(store, readFile(drvPath));
} catch (FormatError & e) {
throw Error(format("error parsing derivation '%1%': %2%") % drvPath % e.msg());
}
}
Derivation Store::derivationFromPath(const Path & drvPath)
Derivation Store::derivationFromPath(const StorePath & drvPath)
{
assertStorePath(drvPath);
ensurePath(drvPath);
auto accessor = getFSAccessor();
try {
return parseDerivation(accessor->readFile(drvPath));
return parseDerivation(*this, accessor->readFile(printStorePath(drvPath)));
} catch (FormatError & e) {
throw Error(format("error parsing derivation '%1%': %2%") % drvPath % e.msg());
throw Error("error parsing derivation '%s': %s", printStorePath(drvPath), e.msg());
}
}
@ -220,7 +239,8 @@ static void printStrings(string & res, ForwardIterator i, ForwardIterator j)
}
string Derivation::unparse() const
string Derivation::unparse(const Store & store, bool maskOutputs,
std::map<std::string, StringSet> * actualInputs) const
{
string s;
s.reserve(65536);
@ -230,7 +250,7 @@ string Derivation::unparse() const
for (auto & i : outputs) {
if (first) first = false; else s += ',';
s += '('; printString(s, i.first);
s += ','; printString(s, i.second.path);
s += ','; printString(s, maskOutputs ? "" : store.printStorePath(i.second.path));
s += ','; printString(s, i.second.hashAlgo);
s += ','; printString(s, i.second.hash);
s += ')';
@ -238,15 +258,25 @@ string Derivation::unparse() const
s += "],[";
first = true;
for (auto & i : inputDrvs) {
if (first) first = false; else s += ',';
s += '('; printString(s, i.first);
s += ','; printStrings(s, i.second.begin(), i.second.end());
s += ')';
if (actualInputs) {
for (auto & i : *actualInputs) {
if (first) first = false; else s += ',';
s += '('; printString(s, i.first);
s += ','; printStrings(s, i.second.begin(), i.second.end());
s += ')';
}
} else {
for (auto & i : inputDrvs) {
if (first) first = false; else s += ',';
s += '('; printString(s, store.printStorePath(i.first));
s += ','; printStrings(s, i.second.begin(), i.second.end());
s += ')';
}
}
s += "],";
printStrings(s, inputSrcs.begin(), inputSrcs.end());
auto paths = store.printStorePathSet(inputSrcs); // FIXME: slow
printStrings(s, paths.begin(), paths.end());
s += ','; printString(s, platform);
s += ','; printString(s, builder);
@ -257,7 +287,7 @@ string Derivation::unparse() const
for (auto & i : env) {
if (first) first = false; else s += ',';
s += '('; printString(s, i.first);
s += ','; printString(s, i.second);
s += ','; printString(s, maskOutputs && outputs.count(i.first) ? "" : i.second);
s += ')';
}
@ -267,6 +297,7 @@ string Derivation::unparse() const
}
// FIXME: remove
bool isDerivation(const string & fileName)
{
return hasSuffix(fileName, drvExtension);
@ -304,7 +335,7 @@ DrvHashes drvHashes;
paths have been replaced by the result of a recursive call to this
function, and that for fixed-output derivations we return a hash of
its output path. */
Hash hashDerivationModulo(Store & store, Derivation drv)
Hash hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs)
{
/* Return a fixed hash for fixed-output derivations. */
if (drv.isFixedOutput()) {
@ -312,42 +343,31 @@ Hash hashDerivationModulo(Store & store, Derivation drv)
return hashString(htSHA256, "fixed:out:"
+ i->second.hashAlgo + ":"
+ i->second.hash + ":"
+ i->second.path);
+ store.printStorePath(i->second.path));
}
/* For other derivations, replace the inputs paths with recursive
calls to this function.*/
DerivationInputs inputs2;
std::map<std::string, StringSet> inputs2;
for (auto & i : drv.inputDrvs) {
Hash h = drvHashes[i.first];
if (!h) {
auto h = drvHashes.find(i.first);
if (h == drvHashes.end()) {
assert(store.isValidPath(i.first));
Derivation drv2 = readDerivation(store.toRealPath(i.first));
h = hashDerivationModulo(store, drv2);
drvHashes[i.first] = h;
h = drvHashes.insert_or_assign(i.first.clone(), hashDerivationModulo(store,
readDerivation(store, store.toRealPath(store.printStorePath(i.first))), false)).first;
}
inputs2[h.to_string(Base16, false)] = i.second;
inputs2.insert_or_assign(h->second.to_string(Base16, false), i.second);
}
drv.inputDrvs = inputs2;
return hashString(htSHA256, drv.unparse());
return hashString(htSHA256, drv.unparse(store, maskOutputs, &inputs2));
}
DrvPathWithOutputs parseDrvPathWithOutputs(const string & s)
{
size_t n = s.find("!");
return n == s.npos
? DrvPathWithOutputs(s, std::set<string>())
: DrvPathWithOutputs(string(s, 0, n), tokenizeString<std::set<string> >(string(s, n + 1), ","));
}
Path makeDrvPathWithOutputs(const Path & drvPath, const std::set<string> & outputs)
std::string StorePathWithOutputs::to_string(const Store & store) const
{
return outputs.empty()
? drvPath
: drvPath + "!" + concatStringsSep(",", outputs);
? store.printStorePath(path)
: store.printStorePath(path) + "!" + concatStringsSep(",", outputs);
}
@ -357,28 +377,28 @@ bool wantOutput(const string & output, const std::set<string> & wanted)
}
PathSet BasicDerivation::outputPaths() const
StorePathSet BasicDerivation::outputPaths() const
{
PathSet paths;
StorePathSet paths;
for (auto & i : outputs)
paths.insert(i.second.path);
paths.insert(i.second.path.clone());
return paths;
}
Source & readDerivation(Source & in, Store & store, BasicDerivation & drv)
Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv)
{
drv.outputs.clear();
auto nr = readNum<size_t>(in);
for (size_t n = 0; n < nr; n++) {
auto name = readString(in);
DerivationOutput o;
in >> o.path >> o.hashAlgo >> o.hash;
store.assertStorePath(o.path);
drv.outputs[name] = o;
auto path = store.parseStorePath(readString(in));
auto hashAlgo = readString(in);
auto hash = readString(in);
drv.outputs.emplace(name, DerivationOutput(std::move(path), std::move(hashAlgo), std::move(hash)));
}
drv.inputSrcs = readStorePaths<PathSet>(store, in);
drv.inputSrcs = readStorePaths<StorePathSet>(store, in);
in >> drv.platform >> drv.builder;
drv.args = readStrings<Strings>(in);
@ -393,16 +413,16 @@ Source & readDerivation(Source & in, Store & store, BasicDerivation & drv)
}
Sink & operator << (Sink & out, const BasicDerivation & drv)
void writeDerivation(Sink & out, const Store & store, const BasicDerivation & drv)
{
out << drv.outputs.size();
for (auto & i : drv.outputs)
out << i.first << i.second.path << i.second.hashAlgo << i.second.hash;
out << drv.inputSrcs << drv.platform << drv.builder << drv.args;
out << i.first << store.printStorePath(i.second.path) << i.second.hashAlgo << i.second.hash;
writeStorePaths(store, out, drv.inputSrcs);
out << drv.platform << drv.builder << drv.args;
out << drv.env.size();
for (auto & i : drv.env)
out << i.first << i.second;
return out;
}
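Two signature changes in the derivations code above are worth spelling out: readDerivation() and Derivation::unparse() now take the Store so that StorePaths can be parsed and printed, and unparse() grew a maskOutputs flag (plus an optional replacement input map) that hashDerivationModulo() uses to hash the derivation with its output paths blanked out. A hedged round-trip sketch, assuming the Nix libstore headers; the .drv path is a placeholder:

    #include "derivations.hh"
    #include "store-api.hh"

    using namespace nix;

    void roundTrip(const Store & store)
    {
        Derivation drv = readDerivation(store, "/nix/store/placeholder-example.drv");
        // Plain serialisation, as written to the store:
        std::string text = drv.unparse(store, false);
        // With maskOutputs = true the output paths (and matching env entries)
        // are printed as empty strings -- the form hashed by hashDerivationModulo().
        std::string masked = drv.unparse(store, true);
    }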

src/libstore/derivations.hh

@ -10,26 +10,18 @@
namespace nix {
/* Extension of derivations in the Nix store. */
const string drvExtension = ".drv";
/* Abstract syntax of derivations. */
struct DerivationOutput
{
Path path;
string hashAlgo; /* hash used for expected hash computation */
string hash; /* expected hash, may be null */
DerivationOutput()
{
}
DerivationOutput(Path path, string hashAlgo, string hash)
{
this->path = path;
this->hashAlgo = hashAlgo;
this->hash = hash;
}
StorePath path;
std::string hashAlgo; /* hash used for expected hash computation */
std::string hash; /* expected hash, may be null */
DerivationOutput(StorePath && path, std::string && hashAlgo, std::string && hash)
: path(std::move(path))
, hashAlgo(std::move(hashAlgo))
, hash(std::move(hash))
{ }
void parseHashInfo(bool & recursive, Hash & hash) const;
};
@ -37,24 +29,26 @@ typedef std::map<string, DerivationOutput> DerivationOutputs;
/* For inputs that are sub-derivations, we specify exactly which
output IDs we are interested in. */
typedef std::map<Path, StringSet> DerivationInputs;
typedef std::map<StorePath, StringSet> DerivationInputs;
typedef std::map<string, string> StringPairs;
struct BasicDerivation
{
DerivationOutputs outputs; /* keyed on symbolic IDs */
PathSet inputSrcs; /* inputs that are sources */
StorePathSet inputSrcs; /* inputs that are sources */
string platform;
Path builder;
Strings args;
StringPairs env;
BasicDerivation() { }
explicit BasicDerivation(const BasicDerivation & other);
virtual ~BasicDerivation() { };
/* Return the path corresponding to the output identifier `id' in
the given derivation. */
Path findOutput(const string & id) const;
const StorePath & findOutput(const std::string & id) const;
bool isBuiltin() const;
@ -62,7 +56,7 @@ struct BasicDerivation
bool isFixedOutput() const;
/* Return the output paths of a derivation. */
PathSet outputPaths() const;
StorePathSet outputPaths() const;
};
@ -71,7 +65,12 @@ struct Derivation : BasicDerivation
DerivationInputs inputDrvs; /* inputs that are sub-derivations */
/* Print a derivation. */
std::string unparse() const;
std::string unparse(const Store & store, bool maskOutputs,
std::map<std::string, StringSet> * actualInputs = nullptr) const;
Derivation() { }
Derivation(Derivation && other) = default;
explicit Derivation(const Derivation & other);
};
@ -79,38 +78,29 @@ class Store;
/* Write a derivation to the Nix store, and return its path. */
Path writeDerivation(ref<Store> store,
StorePath writeDerivation(ref<Store> store,
const Derivation & drv, const string & name, RepairFlag repair = NoRepair);
/* Read a derivation from a file. */
Derivation readDerivation(const Path & drvPath);
Derivation readDerivation(const Store & store, const Path & drvPath);
/* Check whether a file name ends with the extension for
derivations. */
// FIXME: remove
bool isDerivation(const string & fileName);
Hash hashDerivationModulo(Store & store, Derivation drv);
Hash hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs);
/* Memoisation of hashDerivationModulo(). */
typedef std::map<Path, Hash> DrvHashes;
typedef std::map<StorePath, Hash> DrvHashes;
extern DrvHashes drvHashes; // FIXME: global, not thread-safe
/* Split a string specifying a derivation and a set of outputs
(/nix/store/hash-foo!out1,out2,...) into the derivation path and
the outputs. */
typedef std::pair<string, std::set<string> > DrvPathWithOutputs;
DrvPathWithOutputs parseDrvPathWithOutputs(const string & s);
Path makeDrvPathWithOutputs(const Path & drvPath, const std::set<string> & outputs);
bool wantOutput(const string & output, const std::set<string> & wanted);
struct Source;
struct Sink;
Source & readDerivation(Source & in, Store & store, BasicDerivation & drv);
Sink & operator << (Sink & out, const BasicDerivation & drv);
Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv);
void writeDerivation(Sink & out, const Store & store, const BasicDerivation & drv);
std::string hashPlaceholder(const std::string & outputName);
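With the header above, DerivationOutput carries a StorePath and only has a move constructor, so outputs are now built in place with emplace(), exactly as parseDerivation() and readDerivation() do. A hedged construction sketch; the store path literal is a placeholder chosen only to have a syntactically valid hash part:

    #include "derivations.hh"
    #include "store-api.hh"

    using namespace nix;

    void exampleOutputs(const Store & store)
    {
        BasicDerivation drv;
        drv.platform = "x86_64-linux";
        auto outPath = store.parseStorePath(
            "/nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-example");
        drv.env["out"] = store.printStorePath(outPath);
        // DerivationOutput no longer has a default constructor, so it is
        // constructed in place; hashAlgo and hash stay empty for non-fixed outputs.
        drv.outputs.emplace("out", DerivationOutput(outPath.clone(), "", ""));
    }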

src/libstore/download.cc

@ -8,6 +8,7 @@
#include "compression.hh"
#include "pathlocks.hh"
#include "finally.hh"
#include "tarfile.hh"
#ifdef ENABLE_S3
#include <aws/core/client/ClientConfiguration.h>
@ -34,6 +35,10 @@ DownloadSettings downloadSettings;
static GlobalConfig::Register r1(&downloadSettings);
CachedDownloadRequest::CachedDownloadRequest(const std::string & uri)
: uri(uri), ttl(settings.tarballTtl)
{ }
std::string resolveUri(const std::string & uri)
{
if (uri.compare(0, 8, "channel:") == 0)
@ -285,6 +290,7 @@ struct CurlDownloader : public Downloader
}
if (request.verifyTLS) {
debug("verify TLS: Nix CA file = '%s'", settings.caFile);
if (settings.caFile != "")
curl_easy_setopt(req, CURLOPT_CAINFO, settings.caFile.c_str());
} else {
@ -357,9 +363,10 @@ struct CurlDownloader : public Downloader
} else if (httpStatus == 401 || httpStatus == 403 || httpStatus == 407) {
// Don't retry on authentication/authorization failures
err = Forbidden;
} else if (httpStatus >= 400 && httpStatus < 500 && httpStatus != 408) {
} else if (httpStatus >= 400 && httpStatus < 500 && httpStatus != 408 && httpStatus != 429) {
// Most 4xx errors are client errors and are probably not worth retrying:
// * 408 means the server timed out waiting for us, so we try again
// * 429 means too many requests, so we retry (with a delay)
err = Misc;
} else if (httpStatus == 501 || httpStatus == 505 || httpStatus == 511) {
// Let's treat most 5xx (server) errors as transient, except for a handful:
@ -644,10 +651,10 @@ struct CurlDownloader : public Downloader
#ifdef ENABLE_S3
auto [bucketName, key, params] = parseS3Uri(request.uri);
std::string profile = get(params, "profile", "");
std::string region = get(params, "region", Aws::Region::US_EAST_1);
std::string scheme = get(params, "scheme", "");
std::string endpoint = get(params, "endpoint", "");
std::string profile = get(params, "profile").value_or("");
std::string region = get(params, "region").value_or(Aws::Region::US_EAST_1);
std::string scheme = get(params, "scheme").value_or("");
std::string endpoint = get(params, "endpoint").value_or("");
S3Helper s3Helper(profile, region, scheme, endpoint);
@ -805,13 +812,13 @@ CachedDownloadResult Downloader::downloadCached(
if (p != string::npos) name = string(url, p + 1);
}
Path expectedStorePath;
std::optional<StorePath> expectedStorePath;
if (request.expectedHash) {
expectedStorePath = store->makeFixedOutputPath(request.unpack, request.expectedHash, name);
if (store->isValidPath(expectedStorePath)) {
if (store->isValidPath(*expectedStorePath)) {
CachedDownloadResult result;
result.storePath = expectedStorePath;
result.path = store->toRealPath(expectedStorePath);
result.storePath = store->printStorePath(*expectedStorePath);
result.path = store->toRealPath(result.storePath);
return result;
}
}
@ -826,7 +833,7 @@ CachedDownloadResult Downloader::downloadCached(
PathLocks lock({fileLink}, fmt("waiting for lock on '%1%'...", fileLink));
Path storePath;
std::optional<StorePath> storePath;
string expectedETag;
@ -835,9 +842,10 @@ CachedDownloadResult Downloader::downloadCached(
CachedDownloadResult result;
if (pathExists(fileLink) && pathExists(dataFile)) {
storePath = readLink(fileLink);
store->addTempRoot(storePath);
if (store->isValidPath(storePath)) {
storePath = store->parseStorePath(readLink(fileLink));
// FIXME
store->addTempRoot(*storePath);
if (store->isValidPath(*storePath)) {
auto ss = tokenizeString<vector<string>>(readFile(dataFile), "\n");
if (ss.size() >= 3 && ss[0] == url) {
time_t lastChecked;
@ -851,7 +859,7 @@ CachedDownloadResult Downloader::downloadCached(
}
}
} else
storePath = "";
storePath.reset();
}
if (!skip) {
@ -864,62 +872,65 @@ CachedDownloadResult Downloader::downloadCached(
result.etag = res.etag;
if (!res.cached) {
ValidPathInfo info;
StringSink sink;
dumpString(*res.data, sink);
Hash hash = hashString(request.expectedHash ? request.expectedHash.type : htSHA256, *res.data);
info.path = store->makeFixedOutputPath(false, hash, name);
ValidPathInfo info(store->makeFixedOutputPath(false, hash, name));
info.narHash = hashString(htSHA256, *sink.s);
info.narSize = sink.s->size();
info.ca = makeFixedOutputCA(false, hash);
store->addToStore(info, sink.s, NoRepair, NoCheckSigs);
storePath = info.path;
storePath = info.path.clone();
}
assert(!storePath.empty());
replaceSymlink(storePath, fileLink);
assert(storePath);
replaceSymlink(store->printStorePath(*storePath), fileLink);
writeFile(dataFile, url + "\n" + res.etag + "\n" + std::to_string(time(0)) + "\n");
} catch (DownloadError & e) {
if (storePath.empty()) throw;
if (!storePath) throw;
warn("warning: %s; using cached result", e.msg());
result.etag = expectedETag;
}
}
if (request.unpack) {
Path unpackedLink = cacheDir + "/" + baseNameOf(storePath) + "-unpacked";
Path unpackedLink = cacheDir + "/" + ((std::string) storePath->to_string()) + "-unpacked";
PathLocks lock2({unpackedLink}, fmt("waiting for lock on '%1%'...", unpackedLink));
Path unpackedStorePath;
std::optional<StorePath> unpackedStorePath;
if (pathExists(unpackedLink)) {
unpackedStorePath = readLink(unpackedLink);
store->addTempRoot(unpackedStorePath);
if (!store->isValidPath(unpackedStorePath))
unpackedStorePath = "";
unpackedStorePath = store->parseStorePath(readLink(unpackedLink));
// FIXME
store->addTempRoot(*unpackedStorePath);
if (!store->isValidPath(*unpackedStorePath))
unpackedStorePath.reset();
}
if (unpackedStorePath.empty()) {
printInfo(format("unpacking '%1%'...") % url);
if (!unpackedStorePath) {
printInfo("unpacking '%s'...", url);
Path tmpDir = createTempDir();
AutoDelete autoDelete(tmpDir, true);
// FIXME: this requires GNU tar for decompression.
runProgram("tar", true, {"xf", store->toRealPath(storePath), "-C", tmpDir, "--strip-components", "1"});
unpackedStorePath = store->addToStore(name, tmpDir, true, htSHA256, defaultPathFilter, NoRepair);
unpackTarfile(store->toRealPath(store->printStorePath(*storePath)), tmpDir);
auto members = readDirectory(tmpDir);
if (members.size() != 1)
throw nix::Error("tarball '%s' contains an unexpected number of top-level files", url);
auto topDir = tmpDir + "/" + members.begin()->name;
unpackedStorePath = store->addToStore(name, topDir, true, htSHA256, defaultPathFilter, NoRepair);
}
replaceSymlink(unpackedStorePath, unpackedLink);
storePath = unpackedStorePath;
replaceSymlink(store->printStorePath(*unpackedStorePath), unpackedLink);
storePath = std::move(*unpackedStorePath);
}
if (expectedStorePath != "" && storePath != expectedStorePath) {
if (expectedStorePath && *storePath != *expectedStorePath) {
unsigned int statusCode = 102;
Hash gotHash = request.unpack
? hashPath(request.expectedHash.type, store->toRealPath(storePath)).first
: hashFile(request.expectedHash.type, store->toRealPath(storePath));
? hashPath(request.expectedHash.type, store->toRealPath(store->printStorePath(*storePath))).first
: hashFile(request.expectedHash.type, store->toRealPath(store->printStorePath(*storePath)));
throw nix::Error(statusCode, "hash mismatch in file downloaded from '%s':\n wanted: %s\n got: %s",
url, request.expectedHash.to_string(), gotHash.to_string());
}
result.storePath = storePath;
result.path = store->toRealPath(storePath);
result.storePath = store->printStorePath(*storePath);
result.path = store->toRealPath(result.storePath);
return result;
}
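downloadCached() above keeps, next to the symlink that pins the cached store path, a small metadata file holding the URL, the ETag and the time of the last check; revalidation only happens once the request's TTL has expired. A standalone sketch of that three-line format, mirroring the writeFile()/tokenizeString() calls above (file name and values are placeholders):

    #include <ctime>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        const std::string dataFile = "/tmp/example.info";
        const std::string url = "https://example.org/foo.tar.gz";

        // write: url + "\n" + etag + "\n" + last-checked + "\n", as above
        std::ofstream(dataFile) << url << "\n"
                                << "\"some-etag\"" << "\n"
                                << std::time(nullptr) << "\n";

        // read back and validate, mirroring the ss.size() >= 3 && ss[0] == url check
        std::ifstream in(dataFile);
        std::vector<std::string> ss;
        for (std::string line; std::getline(in, line); ) ss.push_back(line);
        if (ss.size() >= 3 && ss[0] == url)
            std::cout << "etag " << ss[1] << ", last checked " << std::stol(ss[2]) << "\n";
    }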

src/libstore/download.hh

@ -2,7 +2,7 @@
#include "types.hh"
#include "hash.hh"
#include "globals.hh"
#include "config.hh"
#include <string>
#include <future>
@ -71,10 +71,10 @@ struct CachedDownloadRequest
bool unpack = false;
std::string name;
Hash expectedHash;
unsigned int ttl = settings.tarballTtl;
unsigned int ttl;
CachedDownloadRequest(const std::string & uri)
: uri(uri) { }
CachedDownloadRequest(const std::string & uri);
CachedDownloadRequest() = delete;
};
struct CachedDownloadResult
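CachedDownloadRequest's TTL default now comes from settings.tarballTtl inside the new out-of-line constructor, which is why the ttl initializer left this header (and why globals.hh is no longer included). A hedged usage sketch; getDownloader() and openStore() are assumptions about the surrounding API, not part of this diff, and the URL is a placeholder:

    #include "download.hh"
    #include "store-api.hh"

    using namespace nix;

    void fetchExample()
    {
        CachedDownloadRequest request("https://example.org/foo.tar.gz");
        request.unpack = true;     // also cache the unpacked tree
        request.name = "foo";      // name for the resulting store path
        // request.ttl was already set from settings.tarballTtl by the constructor
        CachedDownloadResult result = getDownloader()->downloadCached(openStore(), request);
    }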

src/libstore/export-import.cc

@ -24,9 +24,9 @@ struct HashAndWriteSink : Sink
}
};
void Store::exportPaths(const Paths & paths, Sink & sink)
void Store::exportPaths(const StorePathSet & paths, Sink & sink)
{
Paths sorted = topoSortPaths(PathSet(paths.begin(), paths.end()));
auto sorted = topoSortPaths(paths);
std::reverse(sorted.begin(), sorted.end());
std::string doneLabel("paths exported");
@ -42,7 +42,7 @@ void Store::exportPaths(const Paths & paths, Sink & sink)
sink << 0;
}
void Store::exportPath(const Path & path, Sink & sink)
void Store::exportPath(const StorePath & path, Sink & sink)
{
auto info = queryPathInfo(path);
@ -55,15 +55,21 @@ void Store::exportPath(const Path & path, Sink & sink)
Don't complain if the stored hash is zero (unknown). */
Hash hash = hashAndWriteSink.currentHash();
if (hash != info->narHash && info->narHash != Hash(info->narHash.type))
throw Error(format("hash of path '%1%' has changed from '%2%' to '%3%'!") % path
% info->narHash.to_string() % hash.to_string());
throw Error("hash of path '%s' has changed from '%s' to '%s'!",
printStorePath(path), info->narHash.to_string(), hash.to_string());
hashAndWriteSink << exportMagic << path << info->references << info->deriver << 0;
hashAndWriteSink
<< exportMagic
<< printStorePath(path);
writeStorePaths(*this, hashAndWriteSink, info->references);
hashAndWriteSink
<< (info->deriver ? printStorePath(*info->deriver) : "")
<< 0;
}
Paths Store::importPaths(Source & source, std::shared_ptr<FSAccessor> accessor, CheckSigsFlag checkSigs)
StorePaths Store::importPaths(Source & source, std::shared_ptr<FSAccessor> accessor, CheckSigsFlag checkSigs)
{
Paths res;
StorePaths res;
while (true) {
auto n = readNum<uint64_t>(source);
if (n == 0) break;
@ -77,16 +83,15 @@ Paths Store::importPaths(Source & source, std::shared_ptr<FSAccessor> accessor,
if (magic != exportMagic)
throw Error("Nix archive cannot be imported; wrong format");
ValidPathInfo info;
info.path = readStorePath(*this, source);
ValidPathInfo info(parseStorePath(readString(source)));
//Activity act(*logger, lvlInfo, format("importing path '%s'") % info.path);
info.references = readStorePaths<PathSet>(*this, source);
info.references = readStorePaths<StorePathSet>(*this, source);
info.deriver = readString(source);
if (info.deriver != "") assertStorePath(info.deriver);
auto deriver = readString(source);
if (deriver != "")
info.deriver = parseStorePath(deriver);
info.narHash = hashString(htSHA256, *tee.source.data);
info.narSize = tee.source.data->size();
@ -97,7 +102,7 @@ Paths Store::importPaths(Source & source, std::shared_ptr<FSAccessor> accessor,
addToStore(info, tee.source.data, NoRepair, checkSigs, accessor);
res.push_back(info.path);
res.push_back(info.path.clone());
}
return res;
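For reference, the per-path record layout that exportPath() writes and importPaths() reads back above is summarised below (field order as in the code; the leading continuation flag is inferred from the n == 0 check in importPaths()):

    // <continuation flag>              non-zero = another record follows, 0 = end of stream
    // <NAR dump of the path contents>
    // exportMagic                      integer marker checked by importPaths()
    // printStorePath(path)             string
    // references                       writeStorePaths(): count followed by the paths
    // deriver or ""                    string
    // 0                                end-of-record integer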

(Some files were not shown because too many files have changed in this diff.)