GitHub Actions Integration Tests #2326
Conversation
- this leaves some room for future reuse (like min CAPI and Windows)
- not sure we should use upload-artifact (maybe we can save just the environment name from Toolsmiths and pull down the metadata file when needed?)
- install tools when needed rather than use the cf-deployment-concourse-tasks container
- so moved the directory
- also fix a bug with a wrong variable name in unclaim
- adding Shwetha because all the work has been cribbed from her initial start

Co-authored-by: Shwetha Gururaj <gururajsh@vmware.com>
Co-authored-by: Michael Oleske <moleske@pivotal.io>
Co-authored-by: Al Berez <aberezovsky@vmware.com>
Optimize step order

Co-authored-by: Michael Oleske <moleske@pivotal.io>
Co-authored-by: Michael Oleske <moleske@pivotal.io>
I propose to have workflow names use shared, namespace-like prefixes.
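A sketch of what such prefixing could look like; the prefix and file names below are hypothetical, not the ones in this PR:

```yaml
# Hypothetical file-to-name mapping; actual workflow names may differ.
# .github/workflows/setup-cf-env.yml
name: "cf-env / claim"
---
# .github/workflows/integration.yml
name: "cf-env / run-integration-tests"
---
# .github/workflows/unclaim-env.yml
name: "cf-env / unclaim"
```

The shared prefix groups the related runs together in the Actions sidebar, which is sorted alphabetically by workflow name.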
```yaml
if: ${{ inputs.capi-version == 'edge' }}
run: |
  # find latest capi
  FILENAME="$(aws s3 ls capi-releases --no-sign-request --recursive --region us-east-1 | sort | tail -n 1 | awk '{print $4}')"
```
I assume this means it is possible to have the same release version and we're just grabbing the latest? Or that, since this is edge, we always grab the latest and don't actually know the latest version of CAPI?
We don't know what the latest edge is; we just grab it. This is from the output of capi-create-release, which passed unit tests but has not passed integration tests.
Future PRs will allow specification of CAPI, to support running tests with a minimum supported version of CAPI
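The `sort | tail | awk` pipeline from the diff above can be exercised locally without AWS access; the S3 listing lines below are made up, but the filter is the same:

```shell
#!/bin/sh
# Simulated `aws s3 ls --recursive` output: date, time, size, key.
# Lexicographic sort on the leading timestamp puts the newest upload last.
listing='2023-01-10 08:00:00  1024 capi/capi-1.140.0.tgz
2023-03-02 09:30:00  1024 capi/capi-1.141.0-edge.tgz
2023-02-15 11:45:00  1024 capi/capi-1.140.5-edge.tgz'

# Same pipeline as the workflow step: newest line, 4th column (the key).
FILENAME="$(printf '%s\n' "$listing" | sort | tail -n 1 | awk '{print $4}')"
echo "$FILENAME"
```

So "latest" here means most recently uploaded, not highest version number; the two coincide only if uploads happen in version order.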
- use absolute path (`/tmp/aws/install`) instead of relative path (`./tmp/aws/install`)
.github/workflows/setup-cf-env.yml
```shell
wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -P /tmp
unzip -d /tmp /tmp/awscli-exe-linux-x86_64.zip
./tmp/aws/install
```
This needs to be the absolute path `/tmp/aws/install` instead of the relative path `./tmp/aws/install`.
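A minimal sketch of why the leading `./` matters: the relative form resolves against the current working directory, so it only finds the installer if you happen to be sitting at `/`. The `aws-demo` path below is a stand-in for `/tmp/aws`:

```shell
#!/bin/sh
# The unzip step extracts to /tmp, so the installer lives under /tmp.
# A leading `./` makes the shell look under $PWD instead.
cd "$(mktemp -d)"               # scratch dir standing in for the runner workspace
mkdir -p /tmp/aws-demo          # stand-in for /tmp/aws
printf '#!/bin/sh\necho ok\n' > /tmp/aws-demo/install
chmod +x /tmp/aws-demo/install

absolute_out="$(/tmp/aws-demo/install)"                               # found
relative_out="$(./tmp/aws-demo/install 2>/dev/null || echo missing)"  # resolved under $PWD
echo "$absolute_out / $relative_out"
```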
```yaml
- name: Download metadata
  uses: actions/download-artifact@v3
  with:
    name: ${{ inputs.identifier-for-metadata-file }}
```
Some testing in a private space seems to indicate that this line is not quite correct. I'm getting a problem where this input is empty (`''`), causing all artifacts to be downloaded, which results in a pathing problem. For example, if the environment name is `rosebud`, then instead of downloading just the metadata JSON file, it puts the JSON file in a directory called `rosebud`. This causes the next step to fail on `cat metadata.json | jq -r .name`, as the file is actually `rosebud/metadata.json`. I'm hoping I'm just not seeing something silly, like the input being badly passed in `integration.yml` or incorrectly exported out of `setup-cf-env.yml`. GitHub docs for reference for anyone trying to help.
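The failure mode can be reproduced without GitHub at all; the sketch below recreates the nested layout the download action produces when the artifact name is empty (the `sed` expression is a jq-free stand-in for `jq -r .name` and only handles this simple JSON):

```shell
#!/bin/sh
# With an empty artifact name, download-artifact fetches every artifact
# and nests each one under a directory named after the artifact.
scratch="$(mktemp -d)"
cd "$scratch"
mkdir rosebud                                   # env name from the comment above
echo '{"name": "rosebud"}' > rosebud/metadata.json

# What the next workflow step expects: metadata.json at the top level.
layout="flat"
[ -f metadata.json ] || layout="nested"

# jq-free stand-in for `jq -r .name`:
ENV_NAME="$(sed -n 's/.*"name": *"\([^"]*\)".*/\1/p' rosebud/metadata.json)"
echo "$layout $ENV_NAME"
```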
OK, after thinking harder for a few minutes, the problem is actually over here:
```yaml
outputs:
  environment-name:
    description: "Name of claimed environment"
    value: ${{ jobs.setup-cf-env.outputs.environment-name }}
```
This should be `${{ steps.claim-toolsmiths-env.outputs.environment-name }}`, as we want the output of the step, not an output of the job. See this for the original issue:
- previously was trying to get the output of a job, but the output actually comes from a step, so it would be blank
- you also need to output on the step, which you then pass to the job output
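Putting the whole chain together for a hypothetical reusable workflow (names assumed from the discussion): the step writes to `$GITHUB_OUTPUT`, the job re-exports the step output, and `workflow_call` exposes the job output:

```yaml
on:
  workflow_call:
    outputs:
      environment-name:
        description: "Name of claimed environment"
        value: ${{ jobs.setup-cf-env.outputs.environment-name }}   # job -> caller

jobs:
  setup-cf-env:
    runs-on: ubuntu-latest
    outputs:
      environment-name: ${{ steps.claim-toolsmiths-env.outputs.environment-name }}  # step -> job
    steps:
      - id: claim-toolsmiths-env
        run: echo "environment-name=rosebud" >> "$GITHUB_OUTPUT"   # step output
```

If any link in the chain is missing, the value silently comes through as an empty string, which matches the blank-input symptom above.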
aws tools already exist on [github provided images](https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md#cli-tools)
also add prints of `bosh --version` and `bbl --version` just to make sure they are available
missed this in the last commit :(
resolve the following error caused by tar permission issues:
```
tar: .: Cannot utime: Operation not permitted
tar: .: Cannot change mode to rwxr-xr-x: Operation not permitted
tar: Exiting with failure status due to previous errors
```
GitHub-provided machines don't allow tar to create folders in /tmp for some reason, so this really resolves the error from the previously mentioned commit
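A hedged sketch of the workaround: archives that contain a `./` entry make tar restore mode and mtime on the extraction directory itself, which fails when you don't fully own that directory. Extracting into a fresh directory the runner user owns sidesteps the errors. Paths here are illustrative:

```shell
#!/bin/sh
# Build an archive whose first entry is `.` (mirrors the failing case).
scratch="$(mktemp -d)"
cd "$scratch"
mkdir payload
echo hello > payload/file.txt
tar -cf bundle.tar -C payload .   # archive records the `.` entry

# Extract into a directory we just created and own, so tar is free to
# chmod/utime it without "Operation not permitted".
mkdir extracted
tar -xf bundle.tar -C extracted
content="$(cat extracted/file.txt)"
echo "$content"
```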
Unclaiming the Toolsmiths environment does not require the integration test to have completed with an output in order to run the unclaim job
- use quotes on some exports when getting passwords out of credhub
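A small illustration of why the quotes matter, with a made-up password value: an unquoted expansion word-splits credentials that contain spaces:

```shell
#!/bin/sh
# Credhub-style passwords can contain spaces and glob characters;
# the value here is made up.
secret='p4ss w0rd'

unquoted_words=$(printf '%s\n' $secret | wc -l)    # splits into 2 words
quoted_words=$(printf '%s\n' "$secret" | wc -l)    # stays intact as 1
echo "$unquoted_words $quoted_words"
```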
- because `make integration-tests-full-ci` expects cf cli to be on the path
CF Toolsmiths environments don't have an OIDC provider configured
- we removed this before, but it is needed to make sure we don't run the unclaim job before the integration tests run. Otherwise we've unclaimed and destroyed the environment before we even run integration tests!
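One way to express that ordering while still unclaiming when the tests fail; the job and file names here are assumed, not necessarily the ones in the PR:

```yaml
jobs:
  run-integration-tests:
    uses: ./.github/workflows/integration.yml

  unclaim-env:
    needs: run-integration-tests   # wait for the tests to finish...
    if: ${{ always() }}            # ...but run even if they fail or are cancelled
    uses: ./.github/workflows/unclaim-env.yml
```

Without `needs`, both jobs start in parallel and the environment can be destroyed before the tests run; without `always()`, a test failure would skip the unclaim and leak the environment.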
Chatted with @ccjaimes about this. The integration tests are now running. There are some test failures, but we think there is value in getting this merged so we can start seeing if tests are flaky and how we feel about the tests running here. We don't think these integration tests should be a blocker to merging PRs until we fix the known test failures. We'll start opening other PRs to fix the test failures. Here's a list of the test failures we've seen so far
Once those test failures are addressed, we can make passing the integration tests a blocker for PRs. If things go really wrong, we can always roll back or disable these integration tests.
This looks good to me. I agree that integration tests should not be a blocker on PRs until we've addressed the failures/flakiness.
LGTM, and I agree with Ryker's comment
Description of the Change
This introduces CF CLI integration tests.