Fix xFail errors from pytest #6940
Conversation
nikelite commented on Oct 26, 2024 (edited)
- Re-enable TensorFlow eager mode to prevent unintended failures in other tests.
- Update the test class module name to align with the tests.
- Include test data files in the TFX package to prevent test failures.
- Resolve artifact type name conflicts from pytest.
…gAccessError" This reverts commit 4942170.
… since the mock decorator does not work well with pytest
.github/workflows/ci-test.yml (Outdated)
@@ -62,4 +62,4 @@ jobs:
       - name: Run unit tests
         shell: bash
         run: |
-          pytest -m "${{ matrix.which-tests }}"
+          pytest -vv -m "${{ matrix.which-tests }}" tfx
We specifically chose not to enable verbose output originally because it often produces an overwhelming amount of output on GitHub, and developers can run pytest with verbose output on their local machines instead. If you would really like verbose output, leave it, by all means.
CC: @peytondmurray
Also, is specifying the `tfx` directory strictly necessary here? I believe it is already specified in the pytest config in pyproject.toml.
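(For reference, a sketch of the kind of pyproject.toml pytest section being referred to; the exact keys and values in tfx's file are an assumption here:)

```toml
[tool.pytest.ini_options]
# If testpaths is set, a bare `pytest` invocation already collects only
# this directory, making the explicit `tfx` argument redundant.
testpaths = ["tfx"]
```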
I've reverted this change.
@@ -25,6 +25,7 @@
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import create_complex_graph
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import graph_partition
+
+tf.compat.v1.enable_eager_execution()  # Re-enable eager mode
If we need eager execution mode for some tests, my strong preference and suggestion is to use a pytest yield fixture to enable and disable it for the tests that need it. The fixture can be stored in a conftest.py and reused anywhere it is needed.
CC: @peytondmurray
☝️ To add context to this point:
- `tfx` has a hard dependency on `tensorflow>=2.15, <2.16`. Eager execution is enabled by default for `tensorflow>=2`, so I think this may not be necessary?
- This setting unilaterally applies global changes to the testing environment, impacting much more than just the tests here. Even if `tfx` did allow users to install `tensorflow<2`, I'd still vote to move this into a test fixture to limit the impact on all the other tests in the test run.
On a related note, how many folks are still running TF<2? TF 2.0 came out over 5 years ago. If `tfx` still has tests that target deprecated 1.x functionality, it may be worth starting an issue/discussion to remove them, as I don't think `tfx` can be expected to support these at this point.
The issue stemmed from the create_complex_graph module, which forcibly disables eager execution upon initialization. I've added a pytest yield fixture to re-enable it when the tests using this module end.
Line 25 in ee6eaf2:
tf.compat.v1.disable_eager_execution()  # Disable eager mode
The [pytest yield fixture](https://docs.pytest.org/en/stable/how-to/fixtures.html#yield-fixtures-recommended) approach is not working, since disable_eager_execution() isn't behaving as expected. This might be because disable_eager_execution() is called during pytest's collection phase, as it also runs during module initialization. This code is from an older part of the project, and I'm unsure whether it's safe to remove. However, since it's within experimental code, I'll comment it out for now.
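A minimal sketch of the ordering problem described above (a dict stands in for TensorFlow's global flag; the real call is `tf.compat.v1.disable_eager_execution()` at module level in create_complex_graph): pytest imports test modules and their dependencies during collection, so a module-level call runs before any fixture setup can intervene.

```python
# Stand-in for TensorFlow's process-wide eager-execution flag.
EAGER = {"enabled": True}


def disable_eager_execution():  # stands in for tf.compat.v1.disable_eager_execution()
    EAGER["enabled"] = False


def enable_eager_execution():   # stands in for tf.compat.v1.enable_eager_execution()
    EAGER["enabled"] = True


# In create_complex_graph this call sits at module level, so it executes
# as soon as the module is imported -- which pytest does during its
# collection phase, before any test or fixture runs:
disable_eager_execution()

assert not EAGER["enabled"]  # flag is already flipped before tests begin
```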
@@ -23,6 +23,7 @@
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import graph_partition
 from google.protobuf import text_format
+
+tf.compat.v1.enable_eager_execution()  # Re-enable eager mode