
test: refactor to benchmark suite and add test-benchmark target #383

Merged 3 commits into runfinch:main on Apr 27, 2023

Conversation

@ningziwen (Member)

#345

Description of changes:
Refactor the benchmarks into a benchmark suite so that we can insert `vm init` between the VM benchmarks and the container benchmarks.

Testing done:

✗ make test-benchmark-container
cd benchmark/container && go test -ldflags "-X github.com/runfinch/finch/pkg/version.Version=v0.6.0-2-g4b15328.modified -X github.com/runfinch/finch/pkg/version.GitCommit=4b15328755164eeea3549dbc02926e0fd4e1e14f.m" -bench=. -benchmem --installed="false"
goos: darwin
goarch: arm64
pkg: github.com/runfinch/finch/benchmark/container
BenchmarkContainer/BenchmarkContainerRun-8                     3         429230791 ns/op                 1.822 %cpu_avg/op         41.41 %cpu_peak/op               0.4292 cpu_seconds/op       -2730 disk_bytes/op           27608 B/op           464 allocs/op
BenchmarkContainer/BenchmarkImageBuild-8                       1        1687634542 ns/op                 1.396 %cpu_avg/op         85.71 %cpu_peak/op               1.688 cpu_seconds/op      5341184 disk_bytes/op           83360 B/op          1668 allocs/op
PASS
ok      github.com/runfinch/finch/benchmark/container   12.491s

✗ make test-benchmark-vm
cd benchmark/vm && go test -ldflags "-X github.com/runfinch/finch/pkg/version.Version=v0.6.0-2-g4b15328.modified -X github.com/runfinch/finch/pkg/version.GitCommit=4b15328755164eeea3549dbc02926e0fd4e1e14f.m" -bench=. -benchmem --installed="false"
goos: darwin
goarch: arm64
pkg: github.com/runfinch/finch/benchmark/vm
BenchmarkVM/BenchmarkVMInit-8                  1        68261909667 ns/op                1.473 %cpu_avg/op             100.0 %cpu_peak/op          68.26 cpu_seconds/op    1472135168 disk_bytes/op         3097904 B/op      65977 allocs/op
BenchmarkVM/BenchmarkVMStart-8                 1        23871741458 ns/op                1.143 %cpu_avg/op             100.0 %cpu_peak/op          23.87 cpu_seconds/op       1945600 disk_bytes/op         1079840 B/op      23035 allocs/op
PASS
ok      github.com/runfinch/finch/benchmark/vm  177.623s

✗ make test-benchmark   
cd benchmark/all && go test -ldflags "-X github.com/runfinch/finch/pkg/version.Version=v0.6.0-2-g4b15328.modified -X github.com/runfinch/finch/pkg/version.GitCommit=4b15328755164eeea3549dbc02926e0fd4e1e14f.m" -bench=. -benchmem --installed="false"
goos: darwin
goarch: arm64
pkg: github.com/runfinch/finch/benchmark/all
BenchmarkAll/BenchmarkVMInit-8                 1        70063292500 ns/op                0.9251 %cpu_avg/op            100.0 %cpu_peak/op          70.06 cpu_seconds/op    1439682560 disk_bytes/op         3175968 B/op      67751 allocs/op
BenchmarkAll/BenchmarkVMStart-8                1        23542149250 ns/op                1.204 %cpu_avg/op             100.0 %cpu_peak/op          23.54 cpu_seconds/op        942080 disk_bytes/op         1066296 B/op      22697 allocs/op
BenchmarkAll/BenchmarkContainerRun-8           3         425348166 ns/op                 2.545 %cpu_avg/op              44.17 %cpu_peak/op          0.4253 cpu_seconds/op           0 disk_bytes/op           27061 B/op        459 allocs/op
BenchmarkAll/BenchmarkImageBuild-8             1        3012537750 ns/op                 0.5392 %cpu_avg/op             31.08 %cpu_peak/op          3.013 cpu_seconds/op      3842048 disk_bytes/op          144184 B/op       2941 allocs/op
PASS
ok      github.com/runfinch/finch/benchmark/all 240.898s

- [x] I've reviewed the guidance in CONTRIBUTING.md

License Acceptance

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Signed-off-by: Ziwen Ning <ningziwe@amazon.com>
@ningziwen ningziwen requested a review from pendo324 April 27, 2023 21:38
@pendo324 pendo324 (Member) left a comment

Maybe out of scope, but you might want to delete the persistent volume before/after the container tests, or between tests in general.

LGTM other than the comments I left

@ningziwen (Member, Author)

> Maybe out of scope, but you might want to delete the persistent volume before/after the container tests, or between tests in general.
>
> LGTM other than the comments I left

Is it necessary to delete the persistent volume? I already removed the container and image.

@ningziwen ningziwen requested a review from pendo324 April 27, 2023 22:24
@pendo324 (Member)

> > Maybe out of scope, but you might want to delete the persistent volume before/after the container tests, or between tests in general.
> > LGTM other than the comments I left
>
> Is it necessary to delete the persistent volume? I already removed the container and image.

It's probably not strictly necessary, but there could be other things cached in nerdctl's data stores from previous runs of the e2e tests or benchmarks. Also, when you run vm init, it technically takes time to create a new persistent volume (but this will never happen if you already have a persistent volume).

It's up to you, I don't really have a strong opinion one way or the other. You may or may not notice weird bugs in the future if there are any issues with lingering images/containers/layers, but I guess we don't need to address that until it actually happens.

@ningziwen (Member, Author)

> > > Maybe out of scope, but you might want to delete the persistent volume before/after the container tests, or between tests in general.
> > > LGTM other than the comments I left
> >
> > Is it necessary to delete the persistent volume? I already removed the container and image.
>
> It's probably not strictly necessary, but there could be other things cached in nerdctl's data stores from previous runs of the e2e tests or benchmarks. Also, when you run vm init, it technically takes time to create a new persistent volume (but this will never happen if you already have a persistent volume).
>
> It's up to you, I don't really have a strong opinion one way or the other. You may or may not notice weird bugs in the future if there are any issues with lingering images/containers/layers, but I guess we don't need to address that until it actually happens.

I see. Benchmarking should measure what users experience daily. If something is cached on the persistent disk, that applies to users too, since they don't clean up the persistent disk daily either. So I think we can start with the direct scenarios now and add cleaner scenarios later if needed.

@ningziwen ningziwen closed this Apr 27, 2023
@ningziwen ningziwen reopened this Apr 27, 2023
@pendo324 (Member)

> I see. Benchmarking should measure what users experience daily. If something is cached on the persistent disk, that applies to users too, since they don't clean up the persistent disk daily either. So I think we can start with the direct scenarios now and add cleaner scenarios later if needed.

I agree with that generally, but you also want to have a static reference point so that benchmarks can be compared over time. If there is any corruption or weird artifacts left on the persistent disk over time, it may skew the benchmark results. We don't need to act on that right now though, we can wait and see if that type of issue presents itself first.

@ningziwen ningziwen enabled auto-merge (squash) April 27, 2023 23:12
@ningziwen ningziwen merged commit 00182c9 into runfinch:main Apr 27, 2023
@ningziwen ningziwen deleted the test-suite branch April 27, 2023 23:30