[BUG] - PR pytests are failing, possibly because repo env vars are not available #2783
Thanks for raising the issue, @tylergraff. Indeed, this has come to our attention since last week. We are not entirely sure why this started happening now. My main guess is that this was related to our change in branches and some internal global environment variables from the ….

This is quite troublesome since it blocks development in any given PR upstream that alters any files under ….

The best-case scenario would be to decouple such checks completely from contests since we already have a separate workflow. However, a more likely fix will be to mock the cloud API requests that require credentials using ….

Another quick fix for any affected PR right now would be to copy the ….
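For illustration, a minimal sketch of that mocking approach using unittest.mock — the helper name and canned return value below are placeholders for this example, not Nebari's actual code path:

```python
from unittest import mock


def fetch_kms_key_arns():
    """Stand-in for a provider check that would normally call AWS with real credentials."""
    raise RuntimeError("requires real AWS credentials")


def test_provider_check_without_credentials():
    # Replace the network-bound helper with a canned response so the unit test
    # never needs repository secrets or cloud access.
    with mock.patch(f"{__name__}.fetch_kms_key_arns", return_value=["example-arn"]):
        assert fetch_kms_key_arns() == ["example-arn"]
```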
@viniciusdc the …
@tylergraff @viniciusdc I'm curious to see if this commit from last month is causing an error to be reported.
Re-opening as the issue still persists, though the work done in the PR above helped clear some related matters.
Hello @viniciusdc @marcelovilla, I'm wondering: is there a reason this commit did not include .github/workflows/test.yaml in the …? The …
Hey @joneszc. No particular reason other than I probably missed that one. Happy to open another PR removing it.
@marcelovilla Will you have time to add the PR today for removing …?
Looking into this, I don't think the reason for the pytest failures has anything to do with branches or env vars. Here is what I found.

I wanted to make sure this wasn't due to the PR coming from a forked repo, since secrets are not shared in PRs from a forked repo. To test this, I made a branch in the nebari repo and merged @joneszc's PR (#2788) into my branch. The result was that CI still failed. This eliminated the forked repo as the cause.

I then wanted to eliminate the possibility that the secrets were not available for some reason. I could not see a reason they wouldn't be, but more importantly, I could not see a way that the secrets were actually being used in the unit test workflow. Regardless, I explicitly made them available in e37bc5c. The result was that the error changed, but the workflow still failed: instead of a key error, it said the provided token was invalid. This told me two important things. First, the action had access to the secrets. Second, the action was not using the secrets when it passed. This led me to believe that these were genuinely failing unit tests, not a problem with the pipeline.

I next wanted to investigate how the other tests were passing. Looking at tests/tests_unit/conftest.py, I saw that there are pytest fixtures mocking other assets such as instances, Kubernetes versions, etc. This PR introduces a new AWS resource check, a KMS key check, but does not add the corresponding pytest mock for unit tests. This is why tests were previously passing but now are failing. The relevant fixture is at tests/tests_unit/conftest.py, line 40 (commit 215bbd6).
In summary, I believe that if @joneszc adds the relevant mock to the pytest fixture, his code will pass and this issue can be closed.
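For reference, a rough sketch of what adding such a stub to the conftest.py fixture setup might look like — the function name, patch target, and return value are placeholders, not the actual Nebari fixture:

```python
from unittest.mock import patch

import pytest


def kms_key_arns():
    """Stand-in for the new AWS KMS check; the real one lives in Nebari's provider code."""
    raise RuntimeError("requires real AWS credentials")


@pytest.fixture(autouse=True)
def mocked_kms_key_arns():
    # Autouse: every unit test sees the stub instead of the real call, mirroring
    # how conftest.py already stubs instance types and Kubernetes versions.
    with patch(f"{__name__}.kms_key_arns", return_value=["example-arn"]) as stub:
        yield stub


def test_new_provider_check_passes_offline():
    assert kms_key_arns() == ["example-arn"]
```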
Nice, that's great! It explains why we only saw the failures when a change was made to the provider checks. So this needs to be documented in the unit-tests file or the provider checks so that we are more aware of this in the future. Thanks a lot for the summary and for having a look at this @dcmcand 🚀
The suggested changes resolved this issue. Closing.
Describe the bug
Pytests are failing for several PRs. Logs indicate that an exception is being raised by many tests because various env vars are not set.
See full logs here:
https://github.com/nebari-dev/nebari/actions/runs/11386526924/job/31851899812?pr=2752
Expected behavior
PR pytests should pass.
OS and architecture in which you are running Nebari
This is an issue with the existing GitHub Nebari test pipeline.
How to Reproduce the problem?
Example failing PRs:
#2752
#2730
Command output
No response
Versions and dependencies used.
No response
Compute environment
None
Integrations
No response
Anything else?
No response