
Update Go & JS tests to conform to the multidim interop test spec. #121

Merged · 11 commits · Feb 11, 2023
Changes from 8 commits
32 changes: 29 additions & 3 deletions .github/actions/run-interop-ping-test/action.yml
@@ -9,9 +9,38 @@ inputs:
description: "Space-separated paths to JSON files describing additional images"
required: false
default: ""
s3-cache-bucket:
description: "Which S3 bucket to use for container layer caching"
required: false
default: ""
s3-access-key-id:
description: "S3 Access key id for the cache"
required: false
default: ""
s3-secret-access-key:
description: "S3 secret key id for the cache"
required: false
default: ""
aws-region:
description: "Which AWS region to use"
required: false
default: "us-east-1"
 runs:
   using: "composite"
   steps:
+    - run: |
+        echo "AWS_BUCKET=${{ inputs.s3-cache-bucket }}" >> $GITHUB_ENV
+        echo "AWS_REGION=${{ inputs.aws-region }}" >> $GITHUB_ENV
+      shell: bash
+
+    - name: Configure AWS credentials for S3 build cache
+      if: ${{ inputs.s3-access-key-id }} != "" && ${{ inputs.s3-secret-access-key }} != ""
+      uses: aws-actions/configure-aws-credentials@v1
+      with:
+        aws-access-key-id: ${{ inputs.s3-access-key-id }}
+        aws-secret-access-key: ${{ inputs.s3-secret-access-key }}
+        aws-region: ${{ env.AWS_REGION }}
+
Member commented:
rust-libp2p executes this step on each pull request. Some pull requests originate from forks, and those don't have access to the above secrets, so this step fails.

See https://github.com/libp2p/rust-libp2p/actions/runs/4142906998/jobs/7164133938 on libp2p/rust-libp2p#3344 for an example.

I don't yet have the full picture of the caching approach via an S3 bucket. Would mounting the bucket as read-only from forks be an option? Worst case, we don't mount the bucket on forks at all and build all images from scratch.

@MarcoPolo (Contributor, Author) commented on Feb 10, 2023:

> Thus this step fails.

The original plan was that the `if` condition would skip this step. I'll check why that isn't happening.

> Would mounting the bucket as read-only from forks be an option?

Unfortunately, no: the S3 cache needs read-write access. Maybe there's a hidden option I'm not seeing, but when I tried a read-only key, it failed.

My original plan was to have forks build from scratch.

@MarcoPolo (Contributor, Author) commented:
I had the if statement wrong. #131 fixes it.
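For reference, a likely shape of that fix (a sketch; the confirmed patch is in #131): GitHub Actions substitutes the `${{ }}` parts first and then treats the remaining non-empty string as truthy, so the comparison above never actually runs. Putting the whole comparison inside one expression avoids that:

```yaml
# Sketch of the corrected condition; see #131 for the actual change.
if: ${{ inputs.s3-access-key-id != '' && inputs.s3-secret-access-key != '' }}
```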

Contributor commented:

Did you try omitting the --cache-to option when you tried with a read-only key?

According to https://docs.docker.com/build/cache/backends/, those two options operate separately and I am surprised that --cache-from by itself would need write permissions.

If we conditionally define --cache-to, we could set our bucket to be publicly readable and have forks use our cache.

Alternatively, I see that Docker also offers a registry cache backend, which might be interesting to explore.

I really want our CI to always be fast, not just for our own PRs. (I also don't like wasting compute power.)
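A minimal sketch of that idea for dockerBuildWrapper.sh, assuming the S3 cache backend really does work with read-only credentials when only --cache-from is supplied (an untested assumption; variable names follow the wrapper below):

```bash
# Always read from the shared cache; only write back when we hold
# write-capable credentials (i.e. not running from a fork).
CACHE_FLAGS=(--cache-from "type=s3,bucket=$AWS_BUCKET,region=$AWS_REGION,prefix=buildCache,name=$IMAGE_NAME")
if [[ -n "${AWS_ACCESS_KEY_ID}" ]]; then
  CACHE_FLAGS+=(--cache-to "type=s3,mode=max,bucket=$AWS_BUCKET,region=$AWS_REGION,prefix=buildCache,name=$IMAGE_NAME")
fi
docker buildx build --load -t "$IMAGE_NAME" "${CACHE_FLAGS[@]}" "$@"
```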

Contributor commented:

Happy to help with this, but the earliest I could get to it is in ~5 days, as I am off the grid for the next few.

@MarcoPolo (Contributor, Author) commented:

> Did you try omitting the --cache-to option when you tried with a read-only key?

I hadn't, that's probably it!

Probably worth figuring out how to make this bucket public and then defaulting to that public bucket if no keys are passed. I'll open an issue.
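The fallback itself could be one line in the wrapper; PUBLIC_CACHE_BUCKET here is a hypothetical placeholder, not something defined in this PR:

```bash
# Hypothetical: use a public, read-only cache bucket when none is configured.
AWS_BUCKET="${AWS_BUCKET:-$PUBLIC_CACHE_BUCKET}"
```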


     # This depends on where this file is within this repository. This walks up
     # from here to the multidim-interop folder
     - run: |
@@ -24,9 +53,6 @@ runs:
       with:
         node-version: 18
 
-    - name: Expose GitHub Runtime # Needed for docker buildx to cache properly (See https://docs.docker.com/build/cache/backends/gha/#authentication)
-      uses: crazy-max/ghaction-github-runtime@v2
-
     - name: Set up Docker Buildx
       id: buildx
       uses: docker/setup-buildx-action@v2
8 changes: 4 additions & 4 deletions .github/workflows/multidim-interop.yml
@@ -9,11 +9,11 @@ name: libp2p multidimensional interop test

 jobs:
-  run-multidim-interop:
-    uses: "./.github/workflows/run-testplans.yml"
-    with:
-      dir: "multidim-interop"
   run-multidim-interop-ng:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
       - uses: ./.github/actions/run-interop-ping-test
+        with:
+          s3-cache-bucket: libp2p-by-tf-aws-bootstrap
+          s3-access-key-id: ${{ vars.S3_AWS_ACCESS_KEY_ID }}
+          s3-secret-access-key: ${{ secrets.S3_AWS_SECRET_ACCESS_KEY }}
101 changes: 0 additions & 101 deletions .github/workflows/run-testplans.yml

This file was deleted.

7 changes: 4 additions & 3 deletions multidim-interop/Makefile
@@ -1,13 +1,14 @@
 GO_SUBDIRS := $(wildcard go/*/.)
 JS_SUBDIRS := $(wildcard js/*/.)
+RUST_SUBDIRS := $(wildcard rust/*/.)
 
-all: $(GO_SUBDIRS) $(JS_SUBDIRS) rust/.
+all: $(GO_SUBDIRS) $(JS_SUBDIRS) $(RUST_SUBDIRS)
 $(JS_SUBDIRS):
 	$(MAKE) -C $@
 $(GO_SUBDIRS):
 	$(MAKE) -C $@
-rust/.:
+$(RUST_SUBDIRS):
 	$(MAKE) -C $@
 
 
-.PHONY: $(GO_SUBDIRS) $(JS_SUBDIRS) rust/. all
+.PHONY: $(GO_SUBDIRS) $(JS_SUBDIRS) $(RUST_SUBDIRS) all
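For orientation, these targets follow the standard recursive-make idiom: each subdirectory target just invokes make in that directory. A typical invocation from the repository root (target name per the all rule above) would be:

```bash
# Build every Go, JS, and Rust test image in one pass.
make -C multidim-interop all
```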
9 changes: 5 additions & 4 deletions multidim-interop/dockerBuildWrapper.sh
@@ -1,12 +1,13 @@
 #!/usr/bin/env /bin/bash
 
-# If in CI use caching
-if [[ -n "${CI}" ]]; then
+# If in CI and we have a defined cache bucket, use caching
+if [[ -n "${CI}" ]] && [[ -n "${AWS_BUCKET}" ]]; then
 	docker buildx build \
 		--load \
 		-t $IMAGE_NAME \
-		--cache-to type=gha,mode=max,scope=$GITHUB_REF_NAME-$IMAGE_NAME \
-		--cache-from type=gha,scope=$GITHUB_REF_NAME-$IMAGE_NAME "$@"
+		--cache-to type=s3,mode=max,bucket=$AWS_BUCKET,region=$AWS_REGION,prefix=buildCache,name=$IMAGE_NAME \
+		--cache-from type=s3,mode=max,bucket=$AWS_BUCKET,region=$AWS_REGION,prefix=buildCache,name=$IMAGE_NAME \
+		"$@"
 else
 	docker build -t $IMAGE_NAME "$@"
 fi
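For context, the wrapper is driven entirely by environment variables. A hypothetical local invocation (the image tag is a placeholder; in CI each test's Makefile sets it) looks like:

```bash
# Outside CI this falls through to the plain `docker build` branch.
IMAGE_NAME=go-v0.22-interop ./dockerBuildWrapper.sh .
```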
98 changes: 63 additions & 35 deletions multidim-interop/go/v0.22/main.go
@@ -2,7 +2,9 @@ package main

 import (
 	"context"
+	"encoding/json"
 	"fmt"
+	"log"
 	"os"
 	"strconv"
 	"time"
@@ -24,22 +26,26 @@
 func main() {
 	var (
 		transport      = os.Getenv("transport")
-		secureChannel  = os.Getenv("security")
 		muxer          = os.Getenv("muxer")
+		secureChannel  = os.Getenv("security")
 		isDialerStr    = os.Getenv("is_dialer")
 		ip             = os.Getenv("ip")
-		testTimeoutStr = os.Getenv("test_timeout")
-		redisAddr      = os.Getenv("REDIS_ADDR")
+		redisAddr      = os.Getenv("redis_addr")
+		testTimeoutStr = os.Getenv("test_timeout_seconds")
 	)
 
-	var testTimeout = 10 * time.Second
+	var testTimeout = 3 * time.Minute
 	if testTimeoutStr != "" {
 		secs, err := strconv.ParseInt(testTimeoutStr, 10, 32)
 		if err == nil {
 			testTimeout = time.Duration(secs) * time.Second
 		}
 	}
 
+	if ip == "" {
+		ip = "0.0.0.0"
+	}
+
 	if redisAddr == "" {
 		redisAddr = "redis:6379"
 	}
@@ -80,56 +86,70 @@ func main() {
 		options = append(options, libp2p.Transport(libp2pquic.NewTransport))
 		listenAddr = fmt.Sprintf("/ip4/%s/udp/0/quic", ip)
 	default:
-		panic("Unsupported transport")
+		log.Fatalf("Unsupported transport: %s", transport)
 	}
 	options = append(options, libp2p.ListenAddrStrings(listenAddr))
 
-	switch secureChannel {
-	case "tls":
-		options = append(options, libp2p.Security(libp2ptls.ID, libp2ptls.New))
-	case "noise":
-		options = append(options, libp2p.Security(noise.ID, noise.New))
-	default:
-		panic("Unsupported secure channel")
-	}
+	// Skipped for certain transports
+	var skipMuxer bool
+	var skipSecureChannel bool
+	switch transport {
+	case "quic":
+		skipMuxer = true
+		skipSecureChannel = true
+	}
 
-	switch muxer {
-	case "yamux":
-		options = append(options, libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport))
-	case "mplex":
-		options = append(options, libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport))
-	case "quic":
-	default:
-		panic("Unsupported muxer")
-	}
+	if !skipSecureChannel {
+		switch secureChannel {
+		case "tls":
+			options = append(options, libp2p.Security(libp2ptls.ID, libp2ptls.New))
+		case "noise":
+			options = append(options, libp2p.Security(noise.ID, noise.New))
+		default:
+			log.Fatalf("Unsupported secure channel: %s", secureChannel)
+		}
+	}
+
+	if !skipMuxer {
+		switch muxer {
+		case "yamux":
+			options = append(options, libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport))
+		case "mplex":
+			options = append(options, libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport))
+		default:
+			log.Fatalf("Unsupported muxer: %s", muxer)
+		}
+	}
 
 	host, err := libp2p.New(options...)
 
 	if err != nil {
-		panic(fmt.Sprintf("failed to instantiate libp2p instance: %s", err))
+		log.Fatalf("failed to instantiate libp2p instance: %s", err)
 	}
 	defer host.Close()
 
+	log.Println("My multiaddr is: ", host.Addrs())
+
 	if is_dialer {
 		val, err := rClient.BLPop(ctx, testTimeout, "listenerAddr").Result()
 		if err != nil {
-			panic("Failed to wait for listener to be ready")
+			log.Fatal("Failed to wait for listener to be ready")
 		}
 		otherMa := ma.StringCast(val[1])
-		fmt.Println("My multiaddr is: ", host.Addrs())
-		fmt.Println("Other peer multiaddr is: ", otherMa)
+		log.Println("Other peer multiaddr is: ", otherMa)
 		otherMa, p2pComponent := ma.SplitLast(otherMa)
 		otherPeerId, err := peer.Decode(p2pComponent.Value())
 		if err != nil {
-			panic("Failed to get peer id from multiaddr")
+			log.Fatal("Failed to get peer id from multiaddr")
 		}
 
+		handshakeStartInstant := time.Now()
 		err = host.Connect(ctx, peer.AddrInfo{
 			ID:    otherPeerId,
 			Addrs: []ma.Multiaddr{otherMa},
 		})
 		if err != nil {
-			panic("Failed to connect to other peer")
+			log.Fatal("Failed to connect to other peer")
 		}
 
 		ping := ping.NewPingService(host)
@@ -138,19 +158,27 @@ func main() {
 		if res.Error != nil {
 			panic(res.Error)
 		}
+		handshakePlusOneRTT := time.Since(handshakeStartInstant)
 
+		testResult := struct {
+			HandshakePlusOneRTTMillis float32 `json:"handshakePlusOneRTTMillis"`
+			PingRTTMilllis            float32 `json:"pingRTTMilllis"`
+		}{
+			HandshakePlusOneRTTMillis: float32(handshakePlusOneRTT.Microseconds()) / 1000,
+			PingRTTMilllis:            float32(res.RTT.Microseconds()) / 1000,
+		}
+
 		fmt.Println("Ping successful: ", res.RTT)
 
-		rClient.RPush(ctx, "dialerDone", "").Result()
-	} else {
-		_, err := rClient.RPush(ctx, "listenerAddr", host.Addrs()[0].Encapsulate(ma.StringCast("/p2p/"+host.ID().String())).String()).Result()
+		testResultJSON, err := json.Marshal(testResult)
 		if err != nil {
-			panic("Failed to send listener address")
+			log.Fatalf("Failed to marshal test result: %v", err)
 		}
-		_, err = rClient.BLPop(ctx, testTimeout, "dialerDone").Result()
+		fmt.Println(string(testResultJSON))
+	} else {
+		_, err := rClient.RPush(ctx, "listenerAddr", host.Addrs()[0].Encapsulate(ma.StringCast("/p2p/"+host.ID().String())).String()).Result()
 		if err != nil {
-			panic("Failed to wait for dialer conclusion")
+			log.Fatal("Failed to send listener address")
 		}
+
+		time.Sleep(testTimeout)
+		os.Exit(1)
 	}
 }
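For readers following along, the harness above is configured entirely through environment variables and, on the dialer side, prints its result as a single JSON line on stdout. A hypothetical local run (the binary name and values are illustrative, not from the PR) could look like:

```bash
# Dialer side; a Redis instance must be reachable at redis_addr.
transport=tcp muxer=yamux security=noise is_dialer=true \
redis_addr=redis:6379 test_timeout_seconds=180 ./main
# Expected output shape (values illustrative):
# {"handshakePlusOneRTTMillis":123.45,"pingRTTMilllis":4.56}
```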