Build Input Caching #1
We are currently in the midst of going through the IPFS internals (specifically libp2p) in order to create a Haskell implementation and achieve a deeper integration of build input source hashes with the IPFS architecture.
The biggest benefit is the ability to version srcs that are currently not versionable. Loads of package distributors don't keep historical versions.
We're halfway through implementing the Haskell multiaddr library: https://github.com/MatrixAI/haskell-multiaddr (see the improved branch). Still to do: binary encoding/decoding. Perhaps we should meet up and compare notes?
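The binary encoding that remains to be done follows the multiaddr wire format: each component is an unsigned varint protocol code followed by that protocol's address payload (fixed-size for ip4 and tcp). A minimal sketch in Python for illustration (the function names are ours, not from the Haskell library):

```python
import ipaddress
import struct

def uvarint(n: int) -> bytes:
    """Encode a non-negative integer as an unsigned varint
    (7 bits per byte, high bit set on all but the last byte)."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

# Protocol codes from the multiaddr protocol table.
IP4, TCP = 4, 6

def encode_ip4_tcp(host: str, port: int) -> bytes:
    """Binary-encode /ip4/<host>/tcp/<port>."""
    return (uvarint(IP4) + ipaddress.IPv4Address(host).packed
            + uvarint(TCP) + struct.pack(">H", port))

# /ip4/127.0.0.1/tcp/4001 -> 047f000001060fa1
print(encode_ip4_tcp("127.0.0.1", 4001).hex())
```

Decoding is the mirror image: read a varint, look up the protocol's payload size, consume that many bytes, repeat.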
This script prints all tarballs for an expression:
There is also maintainers/scripts/copy-tarballs.pl, which uses it to upload to S3.
Besides distributing binary files, the build inputs of a derivation could be cached in IPFS.
That would speed up CI quite a bit, since the cache could move "closer" to the build instance (using an IPFS gateway within the local network).
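The "move the cache closer" idea boils down to an ordering of download attempts: a gateway on the local network first, then a public gateway, then the original upstream URL as a last resort. A hedged sketch (the gateway hostnames here are made-up placeholders, not part of any existing setup):

```python
def candidate_urls(ipfs_hash: str, upstream_url: str,
                   gateways=("http://ipfs.gateway.local:8080",  # hypothetical LAN gateway
                             "https://ipfs.io")) -> list:
    """Return download URLs in preference order: local gateway,
    public gateway, then the original upstream as a fallback."""
    urls = [g + "/ipfs/" + ipfs_hash for g in gateways]
    urls.append(upstream_url)
    return urls

print(candidate_urls("QmExampleHash", "https://example.org/hello-2.10.tar.gz"))
```

A fetcher would simply try each URL in turn and verify the result against the expected sha256 regardless of where it came from.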
Current Status:
@knupfer created fetchIPFS: NixOS/nix#859 (comment)
@CMCDragonkai is doing something similar @ https://github.com/MatrixAI/Forge-Package-Archiving
NAR replacement discussion: NixOS/nix#1006
There are different methods on how to approach this:

1. A sha256 → IPFS hash mapping inside nixpkgs (a huge set)
- Migration: easy (can be solved by a script that builds all /nix/store/ paths and puts them into IPFS)
- Support: every fetch derivation needs to be patched to look up the IPFS hash first, either fetching it from the locally running daemon or from a gateway. If no hash is found, each derivation falls back to its normal operation (curl, git, svn etc.)

2. A dedicated fetchIPFS derivation
- Migration: takes a lot of effort (each derivation must be touched)
- Support: already implemented, however it must be added manually to each package
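The lookup-with-fallback behaviour of the first method can be sketched in a few lines. This is only an illustration of the control flow, with an invented mapping and placeholder hashes; the real mapping would be the "huge set" maintained in nixpkgs:

```python
# Hypothetical sha256 -> IPFS content hash mapping, as it might live in nixpkgs.
SHA256_TO_IPFS = {
    "sha256-of-hello-2.10": "QmExampleHashOfHello",  # placeholder values
}

LOCAL_GATEWAY = "http://127.0.0.1:8080"  # gateway of the locally running daemon

def resolve_src(sha256: str, fallback_url: str) -> str:
    """Where a patched fetch derivation would download from: the IPFS
    gateway if the mapping knows the hash, otherwise normal operation."""
    ipfs_hash = SHA256_TO_IPFS.get(sha256)
    if ipfs_hash is not None:
        return LOCAL_GATEWAY + "/ipfs/" + ipfs_hash
    return fallback_url  # fall back to curl, git, svn etc.

print(resolve_src("sha256-of-hello-2.10", "https://example.org/hello-2.10.tar.gz"))
```

Because every fetcher already carries a sha256, the mapping key comes for free; only the lookup step needs to be patched in.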
Should we produce an intermediate cache format (tar? -> it can be chunked efficiently in IPFS) that is used to cache each `src`? It could then be used within each `fetch` derivation. As a big plus, the build inputs would be deduplicated and share blocks with the official package. However, it would be better to just reuse an already archived `src` (like tar.*, .zip etc.) and not archive it again -> we need some form of manifest -> we need IPLD.
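The manifest mentioned above could be as small as an IPLD-style object that links a src's Nix hash to the already-archived file in IPFS, so nothing gets re-archived. A hypothetical sketch (the field layout is invented for illustration, not an actual IPLD schema; only the `{"/": cid}` link notation follows IPLD's DAG-JSON convention):

```python
import json

def src_manifest(name: str, sha256: str, archive_cid: str) -> dict:
    """Build a minimal manifest describing an already-archived src.
    In a real system this object would itself be stored as an IPLD
    node, with the link resolved by IPFS."""
    return {
        "name": name,
        "sha256": sha256,               # Nix's fetch hash, for verification
        "archive": {"/": archive_cid},  # IPLD link to the archived src
    }

m = src_manifest("hello-2.10.tar.gz", "sha256-example", "QmExampleCid")
print(json.dumps(m, indent=2))
```

A fetcher could then resolve the link, download the original tar.*/.zip from IPFS, and check it against the stored sha256 exactly as it does today.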