Execute yaml examples via go tests #2541
Conversation
This is definitely progress! 🙌
examples/yaml_test.go
Outdated
var stderr, stdout bytes.Buffer
cmd.Stderr = &stderr
cmd.Stdout = &stdout
if err := cmd.Run(); err != nil {
Maybe we can use CombinedOutput instead?
TIL! thanks :D
I'm not sure I do want to use combinedoutput in this case tho, cuz im using stderr if the command fails, and if not, im returning stdout. in the error case, dumping both seems fine, but in the success case, not including stderr seems like it makes sense to me. what do you think?
examples/yaml_test.go
Outdated
"serviceaccounts",
"persistentvolumeclaims",
}
for _, c := range crdTypes {
Eventually we should be able to parse the YAML file and create the resource using a client, instead of involving kubectl.
For this specifically we can just use the CRD client to delete the types.
It'd be fine to have a TODO for that for now but that's a better final state I think.
Yes and no. We will need to also test some kubectl create commands as this is something the user will do.
I'm not 100% sure which is better - if we want to use the clients, we need to load the yamls + find the right client for the thing being created. i think im like 60% convinced this is better in the long run, but there's also something nice about just blasting it all out and watching it. i do think the part where i look for the word "run" in the ko output is pretty hacky, ill put a comment in about that for sure
examples/yaml_test.go
Outdated
// replaceDockerRepo will look in the content f and replace the hard coded docker
// repo with the one provided via the KO_DOCKER_REPO environment variable
func replaceDockerRepo(t *testing.T, f string) string {
Ugh, this is kinda gross. I think we could just drop any test that requires ko-building a package?
we could - this is what the original yaml tests do tho, do you mind if i keep this in the context of this pull request and remove in another PR?
examples/yaml_test.go
Outdated
// pollRun will use kubectl to query the specified run to see if it
// has completed. It will timeout after timeoutSeconds.
func pollRun(t *testing.T, run string, wg *sync.WaitGroup) {
This could Watch instead and maybe end up faster?
hm could you give me an example of how that would work? ive never used kubectl watch - trying it out now it seems like id have to stream output from it which seems a bit more complicated given im calling kubectl with exec.Command - i think im being a bit lazy for sure but this seems pretty much fine for now? i can put in a comment to explore using watch
test/e2e-tests-yaml.sh
Outdated
done
done

failed=$(go test -timeout 15m ./examples)
🙌🙌🙌🙌🙌
The build failure is interesting 😛
/test check-pr-has-kind-label
I dunno what was up with that weird prow error but it stopped :D This should be ready for a real review now! Might still be some kinks to work out... I'm hoping after this we could migrate these tests to a separate test triggered via prow maybe, tho it's a bit more complicated b/c we'll have to invoke boskos too...
examples/admission/tasks.yaml
Outdated
#!/usr/bin/env bash

# TODO: assert something about the expected contents of $(params.output)
# TODO: assert something about the expected results in the cluster
oh no
🤦♀️ my bad for not seeing that sooner! If this next round of tests passes, im suggesting (#2685 (comment)) we merge this and then add improvements from #2685 on top - means more work for @thomaschandler tho :(
This PR cannot be merged: expecting exactly one kind/ label
test/e2e-tests.sh
Outdated
@@ -30,8 +30,9 @@ install_pipeline_crd
failed=0

# Run the integration tests
header "Running Go e2e tests"
go_test_e2e -timeout=20m ./test/... || failed=1
# TODO HACK HACK HACK HACK
Is this dead code? If not could the comment do a bit more to describe why these lines have been left in but commented out?
WHOOPS
This is the bit that runs our integration tests! Unfortunately they all run BEFORE the yaml tests so I had temporarily commented them out so I could check if the tests were succeeding without having to wait for these to complete 🤦♀️
In the long run hopefully we can run them in parallel! The main complication that stops us from doing that right away is that both tests require a boskos cluster
examples/examples_test.go
Outdated
if err != nil {
	t.Fatalf("couldnt read contents of %s: %v", f, err)
}
return strings.Replace(string(read), "gcr.io/christiewilson-catfactory", r, -1)
Would it be worth putting "gcr.io/christiewilson-catfactory" into a named constant? I'm a bit confused why it appears here.
good call!
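The named-constant suggestion boils down to hoisting the hard-coded repo out of the call site so its purpose is self-documenting. A minimal sketch; the const name and the `replaceRepo` helper here are illustrative (the PR itself later uses a constant called horribleHardCodedRegistry).

```go
package main

import (
	"fmt"
	"strings"
)

// hardCodedRepo is the registry baked into the example YAML files.
const hardCodedRepo = "gcr.io/christiewilson-catfactory"

// replaceRepo swaps the baked-in repo for the one the test
// environment actually pushes to (e.g. the value of KO_DOCKER_REPO).
func replaceRepo(content, repo string) string {
	return strings.Replace(content, hardCodedRepo, repo, -1)
}

func main() {
	fmt.Println(replaceRepo("image: "+hardCodedRepo+"/app", "gcr.io/my-project"))
	// → image: gcr.io/my-project/app
}
```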
There's some good stuff in this doc but it's hard to remember what's in it cuz it's kinda all over the place - maybe a TOC will help!
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: sbwsg The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
can has lgtm?
Thank you for this, looking great!
The only concern I have is about stdout in case of error, but that's something we can improve on later.
cmd.Stderr = &stderr
cmd.Stdout = &stdout
if err := cmd.Run(); err != nil {
	logf("couldn't run command %s %v: %v, %s", c, args, err, stderr.String())
It looks to me like in case of error we do not get to see the stdout at all.
I think we should print out both out and err, perhaps in two different log commands.
I tend to agree: if the command failed, I tend to want to see stderr and stdout. gotest.tools/v3/icmd would come in handy there 😝
	return
}

switch status {
In future we might want to allow for example metadata of some kind (annotations?) to specify an expected target status and more details about it.
}

// getYamls will look in the directory in examples indicated by version and run for yaml files
func getYamls(t *testing.T, version, run string) []string {
I wonder if some of these helpers could use a unit test... we do run them as part of the tests anyways, so they most likely all do what they are expected to :)
haha probably!!!
/hold
I'm afraid something is wrong, the YAML tests are timing out, logged as "FAILED" but still the CI job is marked as green:
whoa, that's no good at all! thanks for noticing that @afrittoli 🙏
@bobcatfish I've managed to get tests passing on #2685. How would you feel about merging #2685 instead of this PR?
Few comments 👼
"sync"
"testing"
"time"
nit: extra space not needed 😝
// we may want to consider either not running examples that require registry access
// or doing something more sophisticated to inject the right registry in when folks
// are executing the examples
horribleHardCodedRegistry = "gcr.io/christiewilson-catfactory"
nit: Why not run a local registry (in the test namespace)? (in any case, it would be a follow-up)
could be a good follow up! unless the test is deploying to a different namespace 🤔
t.Helper()
r := os.Getenv("KO_DOCKER_REPO")
if r == "" {
	t.Fatalf("KO_DOCKER_REPO must be set")
Should we error out or skip? (like we do in the e2e go tests — mainly to not break openshift-pipelines CI 😝)
// logRun will retrieve the entire yaml of run in namespace n and log it
func logRun(t *testing.T, n, run string) {
	t.Helper()
t.Helper might not be needed here (as it is called by other funcs that are already calling t.Helper) — see stretchr/testify#933
for i := 0; i < (timeoutSeconds / sleepBetween); i++ {
	status, err := cmd(t.Logf, "kubectl", []string{"--namespace", n, "get", run, "--output=jsonpath={.status.conditions[*].status}"}, "")
	if err != nil {
		t.Fatalf("couldnt get status of %s: %v", run, err)
t.Fatalf does t.FailNow, which calls runtime.Goexit, so wg.Done() seems unnecessary 🙃
This means it will quit the test on this error… If that's not what we want, we need to use t.Errorf.
t.Logf("Applying %s to namespace %s", y, n)
content := replaceDockerRepo(t, fmt.Sprintf("%s/%s/%s", version, run, y))
output, err := cmd(t.Logf, "ko", []string{"create", "--namespace", n, "-f", "-"}, content)
s/ko/kubectl 🤔
the existing scripts are using ko :O
// getYamls will look in the directory in examples indicated by version and run for yaml files
func getYamls(t *testing.T, version, run string) []string {
	t.Helper()
	_, filename, _, _ := runtime.Caller(0)
Any reason to make the test dependent on the test file? 🤔 (and use runtime.Caller)
@thomaschandler good call, I think you're able to get to this faster than me! lemme go over to #2685 and review and we can merge your PR instead/first :D
Apologies for closing this after your careful review @vdemeester @afrittoli @imjasonh but @thomaschandler is making much faster progress over in #2685 and he's using the client libs so it's a bit cleaner so im closing this PR in favor of it.
Changes
In #2540 we are seeing that some yaml tests are timing out, but it's
hard to see what yaml tests are failing. This commit moves the logic out
of bash and into individual go tests - now we will run an individual go
test for each yaml example, completing all v1alpha1 before all v1beta1
and cleaning up in between. The output will still be challenging to read
since it will be interleaved, however the failures should at least
be associated with a specific yaml file.
This also makes it easier to run all tests locally, though if you
interrupt the tests you end up with your cluster in a bad state and it
might be good to update these to execute each example in a separate
namespace (in which case we could run all of v1alpha1 and v1beta1 at the
same time as well!)
I have a feeling this won't work on the first try and that I've still got a few issues to work out, not to mention that the code is a bit icky, esp. since im using t.Helper so profusely.
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
See the contribution guide for more details.
Double check this list of stuff that's easy to miss:
cmd dir, please update the release Task to build and release this image.
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.