ci: update interop workflow #3331

Merged: 14 commits, Jan 19, 2023

53 changes: 29 additions & 24 deletions .github/workflows/interop-test.yml
@@ -1,32 +1,37 @@
name: Interoperability Testing

on:
  pull_request:
  push:
    branches:
      - "master"

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
jobs:
# NOTE: during a pull request run, github creates a merge commit referenced in `github.sha`
# that merge commit is not a regular commit. You won't find it with a regular `git checkout SHA` and
# tools like `go get repo@SHA` won't find it.
#
# As a workaround, we generate a path to the actual pull request's commit, it looks like:
# `github.com/external-org/go-libp2p@latest-commit-on-their-branch`
run-ping-interop-cross-version:
uses: "libp2p/test-plans/.github/workflows/run-composition.yml@master"
with:
composition_file: "ping/_compositions/rust-cross-versions.toml"
custom_git_target: github.com/${{ github.event.pull_request.head.repo.full_name || github.event.repository.full_name }}
custom_git_reference: ${{ github.event.pull_request.head.sha || github.sha }}
run-ping-interop-cross-implementation:
uses: "libp2p/test-plans/.github/workflows/run-composition.yml@master"
build-ping-container:
name: Build Ping interop container
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Build image
working-directory: ./test-plans
**@thomaseizinger** (Contributor) commented on Jan 19, 2023:

Sometimes we call things test-plans, sometimes we call them interop-tests.

I'd be in favor of calling all of this interop-tests. test-plans is too generic IMO and doesn't tell me anything about what they are testing.

run: make
- name: Upload ping versions info
uses: actions/upload-artifact@v3
with:
name: ping-versions
path: ./test-plans/ping-versions.json
- name: Upload image tar
uses: actions/upload-artifact@v3
with:
name: ping-image
path: ./test-plans/ping-image.tar
Comment on lines +34 to +38
**Contributor**:

Now that we are using plain docker compose, can't we just build the image as part of the workflow here, upload it to GitHub's container registry and reference it here? That seems a lot simpler than storing it as an artifact.

cc @MarcoPolo

**Contributor**:

In more detail, why don't we have the following:

  1. A workflow per repository that only runs on pushes to master that builds a docker container and uploads it to the container registry under the tag "master".
  2. A workflow per repository that only runs on tag pushes and builds a docker container and uploads it to the container registry under the Git tag, i.e. 0.45.0
  3. A PR workflow that builds a container of the current commit, uploads it to the container registry under the tag of the PR number.

Once (3) finishes, we can then kick off a dependent workflow that runs a matrix of containers against each other.
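
Workflow (1) from the list above could be sketched roughly as follows. This is a hypothetical illustration, not part of this PR: the image name, Dockerfile path, and action versions are assumptions.

```yaml
# Hypothetical sketch of workflow (1): publish an interop image to GHCR on
# every push to master. Names and paths are illustrative.
name: Publish interop image (master)

on:
  push:
    branches:
      - master

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v3
        with:
          context: .
          file: test-plans/Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}/interop:master
```

Workflows (2) and (3) would differ only in their trigger and in the tag they push (the Git tag, or the PR number).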

**Contributor**:

You can upload to a container registry if you want. If the containerImageID is a tag, then docker will automatically try to fetch that tag. You don't have to upload image artifacts if you don't want to.

That said, I think the approach of building the container from the commit is better. For one it's guaranteed to be reproducible. You don't have multiple commits from the same PR mutating the tag number. Two, it's the same workflow on master, PRs, or tags. Three, when you introduce a container registry, now you need to deal with authentication, limits, and garbage collecting from that container registry.

The one benefit I see to container registries is them acting as a cache for faster tests. But we can cache layers already using GitHub Actions cache.

I'm not sure the extra work involved in setting up and maintaining a registry is worth it. But of course, if you want to feel free. I'm just sharing my thoughts from thinking about this as well.
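
As context for "we can cache layers already using GitHub Actions cache": with BuildKit this is typically done via the `gha` cache backend. A hedged sketch — the action versions, file path, and tag here are assumptions, not taken from this PR:

```yaml
# Illustrative: cache Docker layers in the GitHub Actions cache via BuildKit,
# keeping the built image locally instead of pushing it to a registry.
- uses: docker/setup-buildx-action@v2
- uses: docker/build-push-action@v3
  with:
    context: .
    file: test-plans/Dockerfile
    load: true                 # load the image into the local Docker daemon
    tags: rust-libp2p-head
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

Note that this cache still counts against the repository's GitHub Actions cache quota mentioned later in the thread.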

**@thomaseizinger** (Contributor) commented on Jan 17, 2023:

Thanks for sharing your thoughts. Perhaps a mix of both would be the best solution?

  • The master tag would only ever produce one container so no GC needed, i.e. the container for master would be updated every time the master branch gets updated. That hurts reproducibility a bit but it is not a big deal IMO. We already accept that merging one PR affects other PRs.
  • The containers for releases are useful for other implementations too. I don't think we can utilize the GH Actions Cache here? i.e. go-libp2p testing against releases of rust-libp2p
  • Not uploading the container of the current commit/PR is a good idea, those would just be garbage we need to collect.

If we can remove container builds of master and tags from this workflow, then the Dockerfile also gets simpler because we always just build for the current source, no overrides needed.

**Contributor**:

> I'm not sure the extra work involved in setting up and maintaining a registry is worth it. But of course, if you want to feel free. I'm just sharing my thoughts from thinking about this as well.

GitHub makes authenticating and interacting with its own registry really easy so that wouldn't be much work in my experience.

**Contributor**:

> I think the ability to check out an arbitrary commit of test-plans and be able to run those exact tests is a very good feature (reproducibility). As long as we hold that, I don't mind using tagged images in a registry.
>
> But what is the benefit of tagging images in a registry? To me, the only benefit is faster builds.

The benefit is faster builds, but it also just feels wrong. The source code for a test for e.g. libp2p 0.45 should never change. The version is out there, our users are using it. If we want to take interoperability seriously, I think it is good to constrain ourselves such that changing the test for a particular released version is hard.

Building the binary every time doesn't sit well with me on that front.

> Maybe standard GHA cache isn't good enough (since it caches per branch, I believe), but we could cache the layers from the latest test-plans master builds as well, and I think that would cover all the use cases of tagged images.

It caches per user-defined key, which can be completely arbitrary and is not restricted to branches or anything. Bear in mind that each repo only has 10 GB here, and at least in rust-libp2p we already have to jump through hoops to not blow this cache limit for our regular jobs :(

> > The master tag would only ever produce one container so no GC needed, i.e. the container for master would be updated every time the master branch gets updated.
>
> I don't see the benefit here since you are building this every time. When do you get to reuse this tag?

It gets reused across all pushes on all open PRs until the next one merges. I myself usually have anywhere from 2-10 PRs open that I work on in parallel, i.e. I switch to another PR once I have incorporated feedback and am waiting for the next round of reviews.

> > If we can remove container builds of master and tags from this workflow, then the Dockerfile also gets simpler because we always just build for the current source, no overrides needed.
>
> What overrides are used? I imagine you are just building from the current source.

My bad, I wrongly assumed that we would still need the "sed"-ing inside the Dockerfile. That should be gone now in any case.

> One idea for released versions is that inside the test-plans repo we specify that the way to build the interop image is to check out this repo at this commit, then cd into test-plans and run make.

This exact step could have happened when the tag was pushed. It is like pre-built binaries of any other application here on GitHub.

**Contributor**:

Thanks. I think we're starting to understand each other's points and we agree on the fundamentals.

Just a couple of questions from me:

> It gets reused across all pushes on all open PRs until the next one merges.

Maybe there's a misunderstanding somewhere? An open rust-libp2p PR wouldn't do an interop test against rust-libp2p master (we could do that, I'm just not sure it's useful). It runs interop tests against every other released version of libp2p.

> The source code for a test for e.g. libp2p 0.45 should never change. The version is out there, our users are using it. If we want to take interoperability seriously, I think it is good to constrain ourselves such that changing the test for a particular released version is hard.

I agree with this. I don't think building from source prevents this. If we constrain ourselves to never mutate an image tag, then I think we agree except on the little details. To me, a sha256 hash is by definition an immutable tag of an image.

I don't have strong opinions on how rust-libp2p wants to handle building their image. If the project wants to simply submit a container tag that will be used in the interop tests, I only ask that the tag is immutable.

**@thomaseizinger** (Contributor) commented on Jan 18, 2023:

> > It gets reused across all pushes on all open PRs until the next one merges.
>
> Maybe there's a misunderstanding somewhere? An open rust-libp2p PR wouldn't do an interop test against rust-libp2p master (we could do that, I'm just not sure it's useful). It runs interop tests against every other released version of libp2p.

Interesting, I thought we also ran interop tests against our own master branch. @mxinden do I remember this incorrectly?

If not, then I agree that it is not useful to have a moving master tag.

> > The source code for a test for e.g. libp2p 0.45 should never change. The version is out there, our users are using it. If we want to take interoperability seriously, I think it is good to constrain ourselves such that changing the test for a particular released version is hard.
>
> I agree with this. I don't think building from source prevents this. If we constrain ourselves to never mutate an image tag, then I think we agree except on the little details. To me a sha256 hash is by definition an immutable tag of an image.
>
> I don't have strong opinions on how rust-libp2p wants to handle building their image. If the project wants to simply submit a container tag that will be used in the interop tests, I only ask that the tag is immutable.

The tag should definitely be immutable. As far as I know, one can append the sha256 hash of a Docker image to the overall ID and thus guarantee that anyone pulling this image from the registry will get exactly the image with this hash. That is how I would specify those images in test-plans.

Example:

docker.io/library/hello-world@sha256:faa03e786c97f07ef34423fccceeec2398ec8a5759259f94d99078f264e9d7a
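
Applied to the versions file, pinning by digest could look like the entry below. This is a hypothetical sketch: the `id`, registry path, and digest placeholder are made up for illustration, not taken from this PR.

```json
{
  "id": "rust-v0.50.0",
  "containerImageID": "ghcr.io/libp2p/rust-libp2p/interop@sha256:<digest-of-released-image>",
  "transports": ["ws", "tcp", "quic-v1"],
  "secureChannels": ["tls", "noise"],
  "muxers": ["mplex", "yamux"]
}
```

Because a digest reference names exact image content, re-pushing the tag cannot change what tests run against.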

**Member**:

> > > It gets reused across all pushes on all open PRs until the next one merges.
> >
> > Maybe there's a misunderstanding somewhere? An open rust-libp2p PR wouldn't do an interop test against rust-libp2p master (we could do that, I'm just not sure it's useful). It runs interop tests against every other released version of libp2p.
>
> Interesting, I thought we also ran interop tests against our own master branch. @mxinden do I remember this incorrectly?

If I recall correctly, we did. I was planning to have each pull request tested against master with the wo-testground tests, though based on intuition, without specific reasoning. Thinking about it some more, I don't see what kind of bugs this would catch, thus I suggest not investing in PR-to-master testing for now. Thoughts?

**Member**:

> But what is the benefit of tagging images in a registry?

Another benefit would be for someone to easily test their libp2p-xxx against libp2p-yyy v1.2.3 locally without having to check out a repository, build a Docker image, or install the libp2p-yyy toolchain: simply pulling and running a pre-built image from a public registry. Just mentioning it here. I am fine with either, and fine with Go doing ad-hoc builds and Rust pushing Docker images.

run-multidim-interop:
name: Run multidimensional interoperability tests
needs: build-ping-container
uses: "libp2p/test-plans/.github/workflows/run-testplans.yml@master"
**Contributor**:

I think we could save some complexity here if this were just a composite action that we call after building the images rather than a separate job.

That would allow the action within test-plans to access local state like built Docker images and configuration files, without us having to upload and download artifacts.

@MarcoPolo What was the reasoning for making this a workflow rather than a composite action?

**Contributor**:

For example, within the same job, we wouldn't need to tar up the container because it would still be accessible by tag.
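
A composite action as discussed here would live in test-plans as an `action.yml`. The sketch below is hypothetical — the action name, input, and directory layout are assumptions, not the actual test-plans code:

```yaml
# Hypothetical action.yml inside libp2p/test-plans. A composite action runs in
# the caller's job, so locally built images stay visible by tag; no tar needed.
name: "Run multidim interop tests"
inputs:
  test-filter:
    description: "Only run tests whose name matches this filter"
    required: false
runs:
  using: "composite"
  steps:
    - run: npm ci
      shell: bash
      working-directory: multidim-interop
    - run: npm test -- --name-filter="${{ inputs.test-filter }}"
      shell: bash
      working-directory: multidim-interop
```

Composite actions require an explicit `shell` on each `run` step, unlike regular workflow jobs.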

**Contributor**:

Good point! I didn't realize I could make a composite action. My knowledge here was out of date. Thanks!

I'll try this out soon. I agree it'll be nice not to have to upload the image. And the upload artifacts are always directories, so it'll be nice to avoid that.

with:
composition_file: "ping/_compositions/all-interop-latest.toml"
custom_git_target: github.com/${{ github.event.pull_request.head.repo.full_name || github.event.repository.full_name }}
custom_git_reference: ${{ github.event.pull_request.head.sha || github.sha }}
custom_interop_target: rust
dir: "multidim-interop"
extra-versions: ping-versions
**Contributor**:

What does extra-versions here mean? @MarcoPolo

We seem to be referencing the same image in there that we are also tarring up.

**Contributor**:

The source of truth for the capabilities of a released libp2p version is defined in versions.ts. The extra versions are additional version data that is added to that versions.ts array. This lets you add a "rust-libp2p-head" version to the defined versions and run an interop test between all defined versions across all capabilities. This, combined with the ability to filter tests, lets you run only the interop tests that include "rust-libp2p-head".
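
For readers unfamiliar with the setup, the matrix idea can be sketched as follows. `buildTestMatrix`, the `Version` shape, and the pairing rule are simplifications for illustration, not the actual test-plans code (the real runner also matches secure channels and muxers):

```typescript
// Simplified sketch of the versions array + filter mechanism.
type Version = {
  id: string;
  containerImageID: string;
  transports: string[];
  secureChannels: string[];
  muxers: string[];
};

// Pair every version with every other version that shares a transport.
// An optional name filter keeps only pairs involving the given id,
// e.g. "rust-libp2p-head".
function buildTestMatrix(
  versions: Version[],
  nameFilter?: string
): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (const dialer of versions) {
    for (const listener of versions) {
      const shared = dialer.transports.filter((t) =>
        listener.transports.includes(t)
      );
      if (shared.length === 0) continue;
      if (nameFilter && dialer.id !== nameFilter && listener.id !== nameFilter)
        continue;
      pairs.push([dialer.id, listener.id]);
    }
  }
  return pairs;
}
```

Extending the array with a "rust-libp2p-head" entry and filtering by that name yields exactly the cross-version tests involving the current commit.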

**Contributor**:

Thanks for explaining, this makes sense now.

Is there a use case where we extend versions.ts with more than one image and/or don't filter by the one we just added? It looks like the configuration API of this action could be simpler, i.e. it just needs the contents of one entry in versions.ts, and we always want to filter the tests for this one version.

**Contributor**:

In particular, this seems to be sufficient and perhaps a bit easier to understand:

  run-multidim-interop:
    name: Run multidimensional interoperability tests
    needs: build-ping-container
    uses: "libp2p/test-plans/.github/workflows/run-testplans.yml@master"
    with:
      imageId: "rust-libp2p-head"
      transports: ["ws", "tcp", "quic-v1", "webrtc"]
      secureChannels: ["tls", "noise"]
      muxers: ["mplex", "yamux"]
  • The name of the version can be implied from the image ID (by convention we should make this a tag and not just a hash).
  • With the composite action suggested in https://github.com/libp2p/rust-libp2p/pull/3331/files#r1073550520, we don't need to pass the artifact name of the tar'd image.
  • By directly specifying the transports, secure channels and muxers, we can avoid a configuration file in our repository.
  • Not sure I understand why dir: "multidim-interop" needs to be specified, can't this be implied by the GitHub action?

**Contributor**:

> Not sure I understand why dir: "multidim-interop" needs to be specified, can't this be implied by the GitHub action?

This should be changed. Originally the "run test-plans" workflow was generic enough that it would work for any npm-test-like thing. But there's little value in this, and it would be nicer to be clearer here.

> By directly specifying the transports, secure channels and muxers, we can avoid a configuration file in our repository.

It's true. With the tradeoff of not being able to run this locally as easily. I think it's nice to be able to locally run: npm test -- --extra-versions=$RUST_LIBP2P/test-plans/version.json --name-filter="rust-libp2p-head". Inputs are also only strings, so you'd have to stringify the given example.

I agree with all the other points :)

**Contributor**:

> > By directly specifying the transports, secure channels and muxers, we can avoid a configuration file in our repository.
>
> It's true. With the tradeoff of not being able to run this locally as easily. I think it's nice to be able to locally run: npm test -- --extra-versions=$RUST_LIBP2P/test-plans/version.json --name-filter="rust-libp2p-head". Inputs are also only strings, so you'd have to stringify the given example.

How often do we need that? At the moment, this config file just sits there, not co-located with its consumer, and it requires a fair amount of knowledge to figure out what it is for.

**Contributor**:

> It's true. With the tradeoff of not being able to run this locally as easily. I think it's nice to be able to locally run: npm test -- --extra-versions=$RUST_LIBP2P/test-plans/version.json --name-filter="rust-libp2p-head". Inputs are also only strings, so you'd have to stringify the given example.

Would it make sense to unify the API of the test runner to just accept a series of JSON files with one version each? I.e. it builds the matrix of tests out of all files that are passed to it. Each JSON file would contain exactly one entry.

We could then pass the JSON as a string into the workflow, and it writes it to a file. If you really need to run things locally, you just have to quickly copy-paste that and make the file.

To me, that seems like an acceptable trade-off for having one less configuration file around per repository :)
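
The "pass the JSON as a string and write it to a file" idea could look roughly like this inside the reusable workflow. This is a hypothetical sketch: the input name and paths are made up.

```yaml
# Hypothetical: a reusable workflow receives one version entry as a JSON
# string and materializes it as a file for the test runner to consume.
on:
  workflow_call:
    inputs:
      extra-version:
        type: string
        required: true

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Write extra version to a file
        shell: bash
        run: |
          mkdir -p extra-versions
          echo '${{ inputs.extra-version }}' > extra-versions/version.json
```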

**@MarcoPolo** (Contributor) commented on Jan 20, 2023:

> Would it make sense to unify the API of the test runner to just accept a series of JSON files with one version each? I.e. it builds the matrix of tests out of all files that are passed to it. Each JSON file would contain exactly one entry.

Yeah, this seems reasonable 👍.

> We could then pass the JSON as a string into the workflow and it writes it to a file. If you really need to run things locally, you just have to quickly copy-paste that and make the file.

The less logic in the GitHub Actions YAML, the better. At least for me, I find this stuff hard to work with since it only runs in CI. The simpler the YAML file is, the better.

I don't see why having this configuration file is bad. I actually think it's a good thing because:

  1. It defines what the capabilities of the current rust-libp2p are.
  2. If a new capability is added (e.g. WebTransport), then you modify this JSON file, not a stringified version in a GitHub Actions YAML file.
  3. When a release is made, you simply have to copy this JSON file to test-plans/multidim-interop/versions.ts so other implementations test against it.

**Contributor**:

> > Would it make sense to unify the API of the test runner to just accept a series of JSON files with one version each? I.e. it builds the matrix of tests out of all files that are passed to it. Each JSON file would contain exactly one entry.
>
> Yeah, this seems reasonable 👍.

Nice, I'll open an issue on the repo.

> I don't see why having this configuration file is bad. I actually think it's a good thing because:
>
>   1. It defines what the capabilities of the current rust-libp2p are.
>   2. If a new capability is added (e.g. WebTransport), then you modify this JSON file, not a stringified version in a GitHub Actions YAML file.
>   3. When a release is made, you simply have to copy this JSON file to test-plans/multidim-interop/versions.ts so other implementations test against it.

You are mentioning some good points. Happy to keep the file, thanks for getting to the bottom of this :)

image-tar: ping-image
test-filter: "rust-libp2p-head"
21 changes: 21 additions & 0 deletions test-plans/Cargo.toml
@@ -0,0 +1,21 @@
[package]
edition = "2021"
name = "testcases"
version = "0.1.0"

# Required to avoid conflicts with parent workspace.
[workspace]

[dependencies]
anyhow = "1"
async-trait = "0.1.58"
env_logger = "0.9.0"
futures = "0.3.1"
if-addrs = "0.7.0"
log = "0.4"
redis = { version = "0.22.1", features = ["tokio-native-tls-comp", "tokio-comp"] }
tokio = { version = "1.24.1", features = ["full"] }

**Contributor**:

Any reason for this newline?

libp2p = { path = "../", default_features = false, features = ["websocket", "quic", "mplex", "yamux", "tcp", "tokio", "ping", "noise", "tls", "dns", "rsa", "macros", "webrtc"] }
rand = "0.8.5"
strum = { version = "0.24.1", features = ["derive"] }
19 changes: 19 additions & 0 deletions test-plans/Dockerfile
@@ -0,0 +1,19 @@
FROM rust:1.65-bullseye as builder
WORKDIR /usr/src/libp2p

# TODO fix this, it breaks reproducibility
RUN apt-get update && apt-get install -y cmake protobuf-compiler
**Contributor**:

Do we still need the protobuf compiler? I thought we fixed this by inlining generated content?

**Contributor**:

That isn't merged yet :)

**Member**:

For completeness, this is tracked in #3024.


COPY ./ ./

RUN cd ./test-plans/ && cargo build # Initial build acts as a cache.

ARG BINARY_NAME
RUN cd ./test-plans/ \
&& cargo build --bin=${BINARY_NAME} \
&& mv /usr/src/libp2p/test-plans/target/debug/${BINARY_NAME} /usr/local/bin/testplan

FROM debian:bullseye-slim
COPY --from=builder /usr/local/bin/testplan /usr/local/bin/testplan
ENV RUST_BACKTRACE=1
ENTRYPOINT ["testplan"]
10 changes: 10 additions & 0 deletions test-plans/Makefile
@@ -0,0 +1,10 @@
all: ping-image.tar

ping-image.tar: Dockerfile Cargo.toml src
**Contributor**:

Does this properly depend on all files in src? I thought you needed wildcard, but not sure.

**Member (Author)**:

Yeah, good point Marco. It seems to only depend on the directory creation/modification itself. Updated to use wildcard.

cd .. && docker build -t rust-libp2p-head -f test-plans/Dockerfile --build-arg BINARY_NAME=ping .
docker image save -o $@ rust-libp2p-head

.PHONY: clean

clean:
rm ping-image.tar
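
Following up on the wildcard point above, a rule that depends on the actual source files rather than the `src` directory entry could look like this. The `SRC_FILES` variable and the `find` invocation are illustrative, not the exact change made in this PR:

```make
# Depend on every Rust source file under src/, not just the directory entry,
# so the image rebuilds whenever a source file changes.
# Note: GNU make's $(wildcard) does not recurse; $(shell find ...) covers
# nested modules.
SRC_FILES := $(shell find src -name '*.rs')

ping-image.tar: Dockerfile Cargo.toml $(SRC_FILES)
	cd .. && docker build -t rust-libp2p-head -f test-plans/Dockerfile --build-arg BINARY_NAME=ping .
	docker image save -o $@ rust-libp2p-head
```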
34 changes: 34 additions & 0 deletions test-plans/README.md
@@ -0,0 +1,34 @@
# test-plans test implementation

This folder defines the implementation for the test-plans interop tests.

# Running this test locally
**Member**:

🚀 worked like a charm.


You can run this test locally by having a local Redis instance and another
peer that this test can dial or listen for. For example, to test that we can
dial/listen for ourselves, we can do the following:

1. Start Redis (needed by the tests): `docker run --rm -it -p 6379:6379 redis/redis-stack`.
2. In one terminal, run the dialer: `REDIS_ADDR=localhost:6379 ip="0.0.0.0" transport=quic-v1 security=quic muxer=quic is_dialer="true" cargo run --bin ping`
3. In another terminal, run the listener: `REDIS_ADDR=localhost:6379 ip="0.0.0.0" transport=quic-v1 security=quic muxer=quic is_dialer="false" cargo run --bin ping`


To test the interop with other versions, do something similar, except replace
one of these nodes with the other version's interop test.

# Running all interop tests locally with Compose

To run this test against all released libp2p versions, you'll need to have
[libp2p/test-plans](https://github.com/libp2p/test-plans) checked out. Then do
the following:

1. Build the image: `make`.
2. Build the images for all released versions in `libp2p/test-plans`: `(cd <path to >/libp2p/test-plans/multidim-interop/ && make)`.
3. Make a folder for the specified extra versions: `mkdir extra-versions && mv ping-versions.json extra-versions`
4. Run the test:
```
RUST_LIBP2P_TEST_PLANS="$PWD"; (cd <path to >/libp2p/test-plans/multidim-interop/ && npm run test -- --extra-versions-dir=$RUST_LIBP2P_TEST_PLANS/extra-versions --name-filter="rust-libp2p-head")
```
20 changes: 20 additions & 0 deletions test-plans/ping-versions.json
@@ -0,0 +1,20 @@
[
{
"id": "rust-libp2p-head",
"containerImageID": "rust-libp2p-head",
"transports": [
"ws",
"tcp",
"quic-v1",
"webrtc"
],
"secureChannels": [
"tls",
"noise"
],
"muxers": [
"mplex",
"yamux"
]
}
]
207 changes: 207 additions & 0 deletions test-plans/src/bin/ping.rs
@@ -0,0 +1,207 @@
use std::collections::HashSet;
use std::env;
use std::time::Duration;

use anyhow::{Context, Result};
use async_trait::async_trait;
use futures::{AsyncRead, AsyncWrite, StreamExt};
use libp2p::core::muxing::StreamMuxerBox;
use libp2p::core::transport::Boxed;
use libp2p::core::upgrade::EitherUpgrade;
use libp2p::swarm::{keep_alive, NetworkBehaviour, SwarmEvent};
use libp2p::websocket::WsConfig;
use libp2p::{
core, identity, mplex, noise, ping, webrtc, yamux, Multiaddr, PeerId, Swarm, Transport as _,
};
use testcases::{run_ping, Muxer, PingSwarm, SecProtocol, Transport};

fn build_builder<T, C>(
builder: core::transport::upgrade::Builder<T>,
secure_channel_param: SecProtocol,
muxer_param: Muxer,
local_key: &identity::Keypair,
) -> Boxed<(libp2p::PeerId, StreamMuxerBox)>
where
T: libp2p::Transport<Output = C> + Send + Unpin + 'static,
<T as libp2p::Transport>::Error: Sync + Send + 'static,
<T as libp2p::Transport>::ListenerUpgrade: Send,
<T as libp2p::Transport>::Dial: Send,
C: AsyncRead + AsyncWrite + Send + Unpin + 'static,
{
let mux_upgrade = match muxer_param {
Muxer::Yamux => EitherUpgrade::A(yamux::YamuxConfig::default()),
Muxer::Mplex => EitherUpgrade::B(mplex::MplexConfig::default()),
};

let timeout = Duration::from_secs(5);

match secure_channel_param {
SecProtocol::Noise => builder
.authenticate(noise::NoiseAuthenticated::xx(&local_key).unwrap())
.multiplex(mux_upgrade)
.timeout(timeout)
.boxed(),
SecProtocol::Tls => builder
.authenticate(libp2p::tls::Config::new(&local_key).unwrap())
.multiplex(mux_upgrade)
.timeout(timeout)
.boxed(),
}
}

#[tokio::main]
async fn main() -> Result<()> {
let local_key = identity::Keypair::generate_ed25519();
let local_peer_id = PeerId::from(local_key.public());

let transport_param: Transport =
testcases::from_env("transport").context("unsupported transport")?;

let ip = env::var("ip").context("ip environment variable is not set")?;

let is_dialer = env::var("is_dialer")
.unwrap_or("true".into())
.parse::<bool>()?;

let redis_addr = env::var("REDIS_ADDR")
.map(|addr| format!("redis://{addr}"))
.unwrap_or("redis://redis:6379".into());

let client = redis::Client::open(redis_addr).context("Could not connect to redis")?;

let (boxed_transport, local_addr) = match transport_param {
Transport::QuicV1 => {
let builder =
libp2p::quic::tokio::Transport::new(libp2p::quic::Config::new(&local_key))
.map(|(p, c), _| (p, StreamMuxerBox::new(c)));
(builder.boxed(), format!("/ip4/{ip}/udp/0/quic-v1"))
}
Transport::Tcp => {
let builder = libp2p::tcp::tokio::Transport::new(libp2p::tcp::Config::new())
.upgrade(libp2p::core::upgrade::Version::V1Lazy);

let secure_channel_param: SecProtocol =
testcases::from_env("security").context("unsupported secure channel")?;

let muxer_param: Muxer =
testcases::from_env("muxer").context("unsupported multiplexer")?;

(
build_builder(builder, secure_channel_param, muxer_param, &local_key),
format!("/ip4/{ip}/tcp/0"),
)
}
Transport::Ws => {
let builder = WsConfig::new(libp2p::tcp::tokio::Transport::new(
libp2p::tcp::Config::new(),
))
.upgrade(libp2p::core::upgrade::Version::V1Lazy);

let secure_channel_param: SecProtocol =
testcases::from_env("security").context("unsupported secure channel")?;

let muxer_param: Muxer =
testcases::from_env("muxer").context("unsupported multiplexer")?;

(
build_builder(builder, secure_channel_param, muxer_param, &local_key),
format!("/ip4/{ip}/tcp/0/ws"),
)
}
Transport::Webrtc => (
webrtc::tokio::Transport::new(
local_key,
webrtc::tokio::Certificate::generate(&mut rand::thread_rng())?,
)
.map(|(peer_id, conn), _| (peer_id, StreamMuxerBox::new(conn)))
.boxed(),
format!("/ip4/{ip}/udp/0/webrtc"),
),
};

let swarm = OrphanRuleWorkaround(Swarm::with_tokio_executor(
boxed_transport,
Behaviour {
ping: ping::Behaviour::new(ping::Config::new().with_interval(Duration::from_secs(1))),
keep_alive: keep_alive::Behaviour,
},
local_peer_id,
));

// Use peer id as a String so that `run_ping` does not depend on a specific libp2p version.
let local_peer_id = local_peer_id.to_string();
run_ping(client, swarm, &local_addr, &local_peer_id, is_dialer).await?;

Ok(())
}

#[derive(NetworkBehaviour)]
struct Behaviour {
ping: ping::Behaviour,
keep_alive: keep_alive::Behaviour,
}
struct OrphanRuleWorkaround(Swarm<Behaviour>);

#[async_trait]
impl PingSwarm for OrphanRuleWorkaround {
async fn listen_on(&mut self, address: &str) -> Result<String> {
let id = self.0.listen_on(address.parse()?)?;

loop {
if let Some(SwarmEvent::NewListenAddr {
listener_id,
address,
}) = self.0.next().await
{
if address.to_string().contains("127.0.0.1") {
continue;
}
if listener_id == id {
return Ok(address.to_string());
}
}
}
}

fn dial(&mut self, address: &str) -> Result<()> {
self.0.dial(address.parse::<Multiaddr>()?)?;

Ok(())
}

async fn await_connections(&mut self, number: usize) {
let mut connected = HashSet::with_capacity(number);

while connected.len() < number {
if let Some(SwarmEvent::ConnectionEstablished { peer_id, .. }) = self.0.next().await {
connected.insert(peer_id);
}
}
}

async fn await_pings(&mut self, number: usize) -> Vec<Duration> {
let mut received_pings = Vec::with_capacity(number);

while received_pings.len() < number {
if let Some(SwarmEvent::Behaviour(BehaviourEvent::Ping(ping::Event {
peer: _,
result: Ok(ping::Success::Ping { rtt }),
}))) = self.0.next().await
{
received_pings.push(rtt);
}
}

received_pings
}

async fn loop_on_next(&mut self) {
loop {
self.0.next().await;
}
}

fn local_peer_id(&self) -> String {
self.0.local_peer_id().to_string()
}
}