Yarn workspaces and upgrade to v2 #7446
Conversation
The recommendation for yarn v2 is to check |
Oh wow! Maybe we can discard some of the (heaviest) dependencies we have? Any way to assess them? |
To answer this: it seems that won't be necessary after all. What will be good is to rely on the yarn cache on rebuild. There is something called "Plug'n'Play" in yarn 2, and if we manage to activate the cache, installing packages will take only a few seconds once it's set up. The main thing now is to create a proper multi-stage docker image that builds directly from the monorepo, with stage 1 being the current core image and the other targets building the different packages and apps (using docker buildkit). If we can use the yarn cache properly, it will be much faster. We can also bundle all the alpine/apk deps inside stage 1, so we don't have to rebuild those either. Stages per service can then be used directly in compose, which is one less hassle. There is a yarn prod-install plugin that helps package things for Docker. |
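A minimal sketch of the multi-stage layout described above; the base image, stage names, paths, and workspace name are assumptions for illustration, not the final setup:

```sh
# Sketch only: stage names, paths, and workspace names are assumed.
cat > Dockerfile <<'EOF'
# Stage 1: the current core image, with all alpine/apk deps baked in
FROM node:16-alpine AS core
RUN apk add --no-cache git python3 make g++
WORKDIR /app
COPY package.json yarn.lock .yarnrc.yml ./
COPY .yarn/ .yarn/
RUN yarn install --immutable

# One target per package/app, built on top of the shared core stage
FROM core AS unlock-app
COPY unlock-app/ unlock-app/
RUN yarn workspace @unlock-protocol/unlock-app build
EOF

# BuildKit only builds the stages required by the requested target,
# so each service image can be built (and referenced from compose) separately
DOCKER_BUILDKIT=1 docker build --target unlock-app -t unlock/unlock-app .
```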
Looks good to me! |
😱 all of the conflicts :( |
Yes, that's a lot of conflicts, but again, we can get rid of all the yarn locks, so it is going to be fine. I am in the middle of cleaning the deps anyway, running some utils to check them. What works best so far is shown in the sketch below.
|
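A hedged sketch of the kind of dependency-checking utilities that fit here (the thread does not name the exact tools, so these are assumptions): `depcheck` flags dependencies that are declared but never imported, and yarn 2's built-in `dedupe` collapses duplicated version ranges in the lockfile.

```sh
# Report dependencies declared in package.json but never imported in code
npx depcheck

# yarn 2 ships a dedupe command; --check exits non-zero if duplicates remain
yarn dedupe --check
```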
OK, time to write down what I have been doing with the yarn v2 + docker build workflow, to assess what we should do. The monorepo approach with yarn relies on workspaces.

**Cross dependencies**

With workspaces, you can reference local packages from the monorepo itself. For instance, we are now loading the eslint conf by adding it as a workspace dependency.

**Node modules & Cache**

When running

**Yarn v2 + Docker**

For the CI, we want to build a Docker image for each of the workspaces, with only the relevant code and deps inside. To achieve this, there are two main issues:
|
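To make the cross-dependency point concrete, here is how yarn 2's `workspace:` protocol references a sibling package locally (the package and directory names are assumptions):

```sh
# From inside one workspace, add a sibling workspace as a local dependency;
# the workspace: protocol resolves it from the monorepo, not the registry
cd unlock-app
yarn add --dev @unlock-protocol/eslint-config@workspace:^
```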
**Export Yarn cache**

So far I have used
There are possible workarounds by exporting the cache to an archive with

An alternative is what Yarn names Zero installs, which basically implies that you check the Yarn cache directly into the repository.

Another idea is to create a simple

There may be other solutions; I am still digging. More soon :) |
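A sketch of the cache options mentioned above, assuming a per-project cache (the archive name and commit message are illustrative):

```sh
# Keep fetched packages in the project-local .yarn/cache, not the global one
yarn config set enableGlobalCache false
yarn install

# Workaround: export the cache to an archive that CI / docker can restore
tar czf yarn-cache.tgz .yarn/cache

# "Zero installs" alternative: check the cache itself into the repository
git add .yarn/cache
git commit -m "chore: check in yarn cache"
```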
Thanks for the clarification here! Maybe one thing we need to understand is why we have more than 500MB of deps. Any clear culprit that we can remove or clean up? |
Yes, that's a better approach. Here are the 20 largest entries in `node_modules`:

```sh
$ du -sh ./node_modules/* | sort -nr | grep '[0-9]M' | head -n 20
142M	./node_modules/@netlify
141M	./node_modules/@storybook
113M	./node_modules/truffle
95M	./node_modules/@openzeppelin
87M	./node_modules/@vue
70M	./node_modules/hardlydifficult-ethereum-contracts
66M	./node_modules/aws-sdk
63M	./node_modules/@truffle
61M	./node_modules/mcl-wasm
61M	./node_modules/ethers
58M	./node_modules/typescript
58M	./node_modules/microbundle-crl
54M	./node_modules/detective-typescript
54M	./node_modules/@firebase
49M	./node_modules/ethereumjs-testrpc
46M	./node_modules/solidity-coverage
45M	./node_modules/ganache-core
43M	./node_modules/hardhat
33M	./node_modules/hardlydifficult-eth
30M	./node_modules/next
```

What we can do:
I think we can already cut down the size to <150M with this |
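Before cutting anything, `yarn why` (available in yarn 2) shows which workspaces pull in a given heavy package; using two entries from the listing above:

```sh
# Explain why a package is installed and which workspaces depend on it
yarn why aws-sdk
yarn why truffle
```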
Sounds good! Please do move smart-contract-extensions to hardhat first! I think that's low-hanging fruit and you have experience with it already :) I can look at the
I will also look into Storybook and see if we can make it lighter. |
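For reference, scaffolding hardhat in a workspace is a small first step (the directory name is taken from the comment above; migrating the truffle config itself is a manual job):

```sh
cd smart-contract-extensions
yarn add --dev hardhat
# with no hardhat.config present, this launches the interactive setup wizard
yarn hardhat
```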
OK, lots of learning here, and also some complexity. I am going to close this and split the problems into smaller PRs.
**Description**

`yarn dlx @yarnpkg/doctor`

**Issues**

Refs #7427

**Checklist:**