
Support read-only cache #153

Closed · wants to merge 9 commits from marco/read-only-cache-support

Conversation

@MarcoPolo (Contributor)

In draft until I try this branch out somewhere

@marten-seemann (Contributor) left a comment

What's the use case for this?

@MarcoPolo (Contributor, Author)

Other implementations can benefit without having write access to the cache bucket (which would be a security concern).

@marten-seemann (Contributor)

> Other implementations

Who specifically would use this? Is this for @thomaseizinger who's clearing his caches (too) frequently, or are there any other users?

@MarcoPolo (Contributor, Author)

zig-libp2p is using this. It will also really help the nim-libp2p folks, since a recent build of theirs took ~30 min: https://github.com/status-im/nim-libp2p/actions/runs/4385644603/jobs/7678638102.

At some point, without caching, the runners would run out of space. Building all these things takes a fair bit of space.

@MarcoPolo force-pushed the marco/read-only-cache-support branch from 56d811f to 107f408 on March 10, 2023 at 22:47
@MarcoPolo (Contributor, Author)

cc @Menduist. This will speed these jobs up by about 20 min. Ping me on Slack and I'll share the read-only keys :)

@MarcoPolo marked this pull request as ready for review on March 10, 2023 at 22:48
@marten-seemann (Contributor)

> zig-libp2p is using this. It will also really help the nim-libp2p folks, since a recent build of theirs took ~30 min: https://github.com/status-im/nim-libp2p/actions/runs/4385644603/jobs/7678638102.
>
> At some point, without caching, the runners would run out of space. Building all these things takes a fair bit of space.

Thanks! Sounds good to me, let's ship it!

@thomaseizinger (Contributor) left a comment

Thanks @MarcoPolo, this has been on my list for a while and I hadn't gotten to it.

I did envision it slightly differently though. At the moment, CI runs from forks of our repos don't have access to the secrets and thus don't have access to the caches, right?

I figured instead of opting into a read-only cache, we should make that the default and configure the read-only keys as regular variables in our CI to allow forks access to it.

Writing to the cache should then be conditional on the actual secret being present. What do you think?
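[Editor's note] A minimal sketch of that idea, assuming hypothetical secret/variable names (S3_CACHE_*, S3_READONLY_*) and an assumed region; this is not the repo's actual configuration. The secret is mirrored into an env var because the secrets context is not available inside step-level if: expressions:

# Hedged sketch only; names and region are assumptions, not this repo's config.
env:
  CACHE_WRITE_SECRET: ${{ secrets.S3_CACHE_SECRET }}

steps:
  # Runs only where the secret exists (i.e. not on forks): read-write cache.
  - name: Build with read-write cache
    if: ${{ env.CACHE_WRITE_SECRET != '' }}
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.S3_CACHE_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.S3_CACHE_SECRET }}
    run: |
      docker buildx build \
        --cache-from "type=s3,bucket=libp2p-by-tf-aws-bootstrap,region=us-east-1" \
        --cache-to "type=s3,bucket=libp2p-by-tf-aws-bootstrap,region=us-east-1,mode=max" \
        .

  # Forks have no secrets and fall back to reading the cache only, using
  # plain repository variables for the read-only keys.
  - name: Build with read-only cache
    if: ${{ env.CACHE_WRITE_SECRET == '' }}
    env:
      AWS_ACCESS_KEY_ID: ${{ vars.S3_READONLY_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ vars.S3_READONLY_SECRET }}
    run: |
      docker buildx build \
        --cache-from "type=s3,bucket=libp2p-by-tf-aws-bootstrap,region=us-east-1" \
        .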

Review threads (outdated, now resolved) on:
- .github/actions/run-interop-ping-test/action.yml
- multidim-interop/src/generator.ts
@MarcoPolo (Contributor, Author)

> I figured instead of opting into a read-only cache, we should make that the default and configure the read-only keys as regular variables in our CI to allow forks access to it.

True. I ran into some issues with this, but I'd like it to work as well. I believe the bucket allows public listing, but not public gets. I'll try to get this working.

@MarcoPolo (Contributor, Author)

Ugh, unfortunately you cannot simply omit the credentials: the AWS SDK wants credentials to make signed requests to S3 (see the note here). I was able to create a new IAM user on my personal account with the following permissions and have it work:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "arn:aws:s3:::libp2p-by-tf-aws-bootstrap/*"
        }
    ]
}

Not as nice as seamlessly working with zero setup, but at least individuals can configure it.
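[Editor's note] For reference, a hedged sketch of how such read-only keys could be wired in: buildkit's S3 cache backend accepts credentials either via the standard AWS environment variables or as access_key_id / secret_access_key attributes on the cache entry. The variable names and region below are assumptions:

# Sketch: importing the shared cache with explicit read-only credentials.
- name: Build using the read-only S3 cache
  env:
    RO_KEY_ID: ${{ vars.S3_READONLY_KEY_ID }}
    RO_SECRET: ${{ vars.S3_READONLY_SECRET }}
  run: |
    docker buildx build \
      --cache-from "type=s3,bucket=libp2p-by-tf-aws-bootstrap,region=us-east-1,access_key_id=${RO_KEY_ID},secret_access_key=${RO_SECRET}" \
      .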

@thomaseizinger (Contributor)

That is kind of annoying, because plain HTTP requests work without any setup?

@MarcoPolo (Contributor, Author)

Yup!

@galargh (Contributor) · Mar 17, 2023

This will allow using anonymous credentials for public buckets: moby/buildkit#3692

@thomaseizinger (Contributor)

> Ugh, unfortunately you cannot simply omit the credentials: the AWS SDK wants credentials to make signed requests to S3 (see the note here). I was able to create a new IAM user on my personal account with the following permissions and have it work: […]
>
> Not as nice as seamlessly working with zero setup, but at least individuals can configure it.

Can we create a set of credentials that we default to?

To properly make this work for forks, it needs to be the same workflow, i.e. we can't use a flag on the action to choose between read-only and read-write.

Instead, I'd suggest we default to a read-only user in the dockerBuildWrapper.sh script. That would also allow us to use this cache locally. If a user is specified via the env variables, we override the default and expect those credentials to have read-write permissions.

Thoughts?
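[Editor's note] A hedged sketch of that fallback idea, shown here as a CI step; in practice the logic would live in (or wrap) dockerBuildWrapper.sh, and the literal key values are placeholders, not real credentials. Shell parameter expansion keeps any caller-supplied read-write credentials intact:

# Sketch of the proposed default-to-read-only behavior.
- name: Build (read-only cache by default)
  run: |
    # Use hypothetical public read-only keys unless real creds are already set.
    export AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID:-READ_ONLY_KEY_ID_PLACEHOLDER}"
    export AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY:-READ_ONLY_SECRET_PLACEHOLDER}"
    ./dockerBuildWrapper.sh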

@galargh (Contributor) · Mar 24, 2023

Personally, I'd wait for moby/buildkit#3692 to be merged and then start using the buildkit version from master. It's already approved, so hopefully it won't be long until it's merged.

@thomaseizinger (Contributor)

> Personally, I'd wait for moby/buildkit#3692 to be merged and then start using the buildkit version from master. It's already approved, so hopefully it won't be long until it's merged.

Sounds good.

@MarcoPolo (Contributor, Author)

With #165, this is done.

@MarcoPolo closed this on Apr 11, 2023