Fix xFail errors from pytest #6940

Merged · 47 commits merged into tensorflow:master · Oct 29, 2024

Conversation

@nikelite (Contributor) commented Oct 26, 2024

  • Re-enable TensorFlow eager mode to prevent unintended failures in other tests.
  • Update the test class module names to align with the tests.
  • Include test data files in the TFX package to prevent test failures (one common approach is sketched after this list).
  • Resolve artifact type name conflicts from pytest.
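
On the test-data bullet, a minimal sketch of one common setuptools approach; the glob pattern and mechanism here are assumptions, not necessarily what this PR actually changed:

```python
# Hypothetical setup.py fragment: one common way to ship test data files
# inside a built package. The pattern below is illustrative; the PR may use
# MANIFEST.in or more specific package_data entries instead.
from setuptools import find_packages, setup

setup(
    name="tfx",
    packages=find_packages(),
    # Include any files under testdata/ directories within tfx subpackages.
    package_data={"tfx": ["*/testdata/*", "*/*/testdata/*"]},
)
```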

… since the mock decorator does not work well with pytest
@nikelite marked this pull request as ready for review on October 28, 2024 02:17
```diff
@@ -62,4 +62,4 @@ jobs:
       - name: Run unit tests
         shell: bash
         run: |
-          pytest -m "${{ matrix.which-tests }}"
+          pytest -vv -m "${{ matrix.which-tests }}" tfx
```
@smokestacklightnin (Contributor) commented Oct 28, 2024

We originally chose not to use verbose output because it can produce an overwhelming amount of output on GitHub, and developers can run pytest with verbose output on their local machines instead. If you would really like verbose output, by all means leave it.

CC: @peytondmurray

Contributor

Also, is specifying the tfx directory strictly necessary here? I believe it is already specified in the pytest config in pyproject.toml.
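
For reference, this is the shape of the pytest configuration being referred to; the exact contents of the repo's pyproject.toml are an assumption here:

```toml
# Illustrative pyproject.toml fragment; the repo's actual values may differ.
[tool.pytest.ini_options]
testpaths = ["tfx"]
```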

@nikelite (Contributor, Author)

I've reverted this change.

```diff
@@ -25,6 +25,7 @@
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import create_complex_graph
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import graph_partition

+tf.compat.v1.enable_eager_execution()  # Re-enable eager mode
```
Contributor

If we need eager execution mode for some tests, my strong preference and suggestion is to use a pytest yield fixture as a decorator to enable and disable it for the tests that need it. The fixture can be stored in a conftest.py and reused anywhere it is needed.

CC: @peytondmurray
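
For concreteness, a minimal sketch of the suggested fixture. The fixture name is illustrative, and as the follow-up below notes, TensorFlow may not honor these toggles mid-process:

```python
# conftest.py: sketch of the suggested yield fixture. The fixture name is
# illustrative. TensorFlow generally only honors these toggles at program
# startup, which is the limitation discussed further below.
import pytest
import tensorflow as tf


@pytest.fixture
def eager_mode():
    """Enable TF1-style eager execution for one test, then switch it back."""
    tf.compat.v1.enable_eager_execution()
    yield
    tf.compat.v1.disable_eager_execution()
```

A test then opts in by requesting the fixture, e.g. `def test_partition(eager_mode): ...`, so the toggle never leaks into unrelated tests.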

@peytondmurray (Contributor) commented Oct 28, 2024

☝️ To add context to this point:

  • tfx has a hard dependency on tensorflow>=2.15, <2.16. Eager execution is enabled by default for tensorflow>=2, so I think this may not be necessary?
  • This setting unilaterally applies global changes to the testing environment, impacting much more than just the tests here. Even if tfx did allow users to install tensorflow<2, I'd still vote to move this into a test fixture to limit the impact on all the other tests in the test run.

On a related note, how many folks are still running TF<2? TF 2.0 came out over 5 years ago - if tfx still has tests that target deprecated 1.x functionality, it may be worth starting an issue/discussion to remove them, as I don't think tfx can be expected to support them at this point.
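
For reference, a quick way to confirm the default mentioned in the first bullet:

```python
import tensorflow as tf

# Eager execution is enabled by default in TensorFlow >= 2.0.
assert tf.executing_eagerly()
```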

@nikelite (Contributor, Author)

The issue stemmed from the create_complex_graph module, which forcibly disables eager execution upon initialization. I've added a pytest yield fixture to re-enable it when the tests using this module end.

@nikelite (Contributor, Author)

The [pytest yield fixture](https://docs.pytest.org/en/stable/how-to/fixtures.html#yield-fixtures-recommended) approach is not working, because disable_eager_execution() does not behave as expected. This might be because disable_eager_execution() is invoked during pytest's collection phase, since it is also called at module initialization. This code is from an older part of the project, and I'm unsure whether it's safe to remove. However, since it's within experimental code, I'll comment it out for now.
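
To illustrate the failure mode described above (the module shape here is illustrative, not a verbatim copy of create_complex_graph):

```python
# Illustrative module: this top-level statement runs the moment pytest
# imports the file during collection, before any test fixture executes,
# so a per-test enable_eager_execution() fixture cannot cleanly undo it.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # side effect at import time


def build_example_graph():
    # Hypothetical graph-mode code that relies on eager being disabled.
    ...
```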

```diff
@@ -23,6 +23,7 @@
 from tfx.experimental.distributed_inference.graphdef_experiments.subgraph_partitioning import graph_partition
 from google.protobuf import text_format

+tf.compat.v1.enable_eager_execution()  # Re-enable eager mode
```
@keerthanakadiri self-assigned this Oct 28, 2024
@nikelite removed the request for review from briron on October 29, 2024 01:49
@nikelite merged commit c770a51 into tensorflow:master on Oct 29, 2024
15 checks passed