
[Docs] De-flake doctests #37162

Merged 1 commit into ray-project:master on Jul 7, 2023

Conversation

bveeramani (Member) commented:

Why are these changes needed?

:book: Doctest (CPU) fails in 25% of runs due to a few flaky tests. This PR de-flakes those tests.


This test is flaky because the order of columns is non-deterministic.

______________________ [doctest] working-with-images.rst _______________________

  | 111
  | 112             ds = (
  | 113                 ray.data.read_tfrecords(
  | 114                     "s3://anonymous@air-example-data/cifar-10/tfrecords"
  | 115                 )
  | 116                 .map(decode_bytes)
  | 117             )
  | 118
  | 119             print(ds.schema())
  | 120
  | Differences (unified diff with -expected +actual):
  | @@ -1,4 +1,4 @@
  | Column  Type
  | ------  ----
  | +label   int64
  | image   numpy.ndarray(shape=(32, 32, 3), dtype=uint8)
  | -label   int64
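
One generic way to make a schema doctest insensitive to column order is to print the fields in a deterministic order yourself. This is only a sketch of the idea, not necessarily the fix made in this PR, and it assumes the object returned by ds.schema() exposes names and types attributes:

    import ray

    ds = ray.data.read_tfrecords(
        "s3://anonymous@air-example-data/cifar-10/tfrecords"
    )

    # Print fields sorted by column name so the expected doctest output no
    # longer depends on the (non-deterministic) column order of the reader.
    schema = ds.schema()
    for name, dtype in sorted(zip(schema.names, schema.types)):
        print(name, dtype)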

This test is flaky because, on rare occasions, all of the outputs from the previous code block show up in the previous testoutput block. (Usually, a few of the outputs spill over into this testoutput block.)

___________________________ [doctest] async_api.rst ____________________________
  | 062     (AsyncActor pid=40293) finished
  | 063
  | 064 .. testcode::
  | 065     :hide:
  | 066
  | 067     # NOTE: The outputs from the previous code block can show up in subsequent tests.
  | 068     # To prevent flakiness, we wait for the async calls finish.
  | 069     import time
  | 070     time.sleep(3)
  | 071
  | Expected:
  | ...
  | Got nothing

These tests are flaky because the outputs contain extra logs if the dataset isn't cached.

______________________ [doctest] working-with-pytorch.rst ______________________
  | 353
  | 354     import torchvision
  | 355     import ray
  | 356
  | 357     mnist = torchvision.datasets.MNIST(root="/tmp/", download=True)
  | 358     ds = ray.data.from_torch(mnist)
  | 359
  | 360     # The data for each item of the torch dataset is under the "item" key.
  | 361     print(ds.schema())
  | 362
  | Differences (unified diff with -expected +actual):
  | @@ -1,2 +1,18 @@
  | +Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
  | +Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /tmp/MNIST/raw/train-images-idx3-ubyte.gz
  | +Extracting /tmp/MNIST/raw/train-images-idx3-ubyte.gz to /tmp/MNIST/raw
  | +<BLANKLINE>
  | +Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
  | +Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to /tmp/MNIST/raw/train-labels-idx1-ubyte.gz
  | +Extracting /tmp/MNIST/raw/train-labels-idx1-ubyte.gz to /tmp/MNIST/raw
  | +<BLANKLINE>
  | +Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
  | +Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to /tmp/MNIST/raw/t10k-images-idx3-ubyte.gz
  | +Extracting /tmp/MNIST/raw/t10k-images-idx3-ubyte.gz to /tmp/MNIST/raw
  | +<BLANKLINE>
  | +Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
  | +Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to /tmp/MNIST/raw/t10k-labels-idx1-ubyte.gz
  | +Extracting /tmp/MNIST/raw/t10k-labels-idx1-ubyte.gz to /tmp/MNIST/raw
  | +<BLANKLINE>
  | Column  Type
  | ------  ----
__________________________ [doctest] loading-data.rst __________________________
  | 613         .. testcode::
  | 614
  | 615             import ray
  | 616             import tensorflow_datasets as tfds
  | 617
  | 618             tf_ds, _ = tfds.load("cifar10", split=["train", "test"])
  | 619             ds = ray.data.from_tf(tf_ds)
  | 620
  | 621             print(ds)
  | 622
  | Differences (unified diff with -expected +actual):
  | @@ -1,4 +1,6 @@
  | +Downloading and preparing dataset 162.17 MiB (download: 162.17 MiB, generated: 132.40 MiB, total: 294.58 MiB) to /root/tensorflow_datasets/cifar10/3.0.2...
  | +Dataset cifar10 downloaded and prepared to /root/tensorflow_datasets/cifar10/3.0.2. Subsequent calls will reuse this data.
  | MaterializedDataset(
  | -   num_blocks=...,
  | +   num_blocks=200,
  | num_rows=50000,
  | schema={
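
A common way to keep one-time download logs out of a doctest's visible output is to warm the cache in a hidden block first. The following is only a sketch of that pattern for the MNIST example above; the actual change in this PR may differ:

    .. testcode::
        :hide:

        # Pre-download MNIST in a hidden block so the visible code block never
        # emits the one-time "Downloading ... / Extracting ..." log lines.
        import torchvision

        torchvision.datasets.MNIST(root="/tmp/", download=True)

The same idea applies to the tensorflow_datasets example: loading "cifar10" once in a hidden block keeps the "Downloading and preparing dataset ..." lines out of the visible output.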

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Balaji Veeramani <balaji@anyscale.com>
@bveeramani changed the title from [Data][Docs] De-flake doctests to [Docs] De-flake doctests on Jul 6, 2023
@@ -67,6 +67,7 @@ async frameworks like aiohttp, aioredis, etc.
# NOTE: The outputs from the previous code block can show up in subsequent tests.
# To prevent flakiness, we wait for the async calls finish.
import time
print("Sleeping...")
Contributor

what is this for?

Member Author

Usually, there are outputs from the previous code block (which prints asynchronously), and this code block is intended to catch them. However, sometimes everything gets caught in the previous testoutput block. In that case, this code block produces no output, and the test errors.

This print ensures there is always some output from this code block.
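
In doctest terms, the pattern looks roughly like this. It is a sketch based on the diff above; the exact testoutput contents in the PR may differ:

    .. testcode::
        :hide:

        # NOTE: Outputs from the previous code block can show up in subsequent
        # tests. Sleeping gives the async calls time to finish, and the print
        # guarantees this block always produces some output, so the matching
        # testoutput block never fails with "Got nothing".
        import time

        print("Sleeping...")
        time.sleep(3)

    .. testoutput::
        :options: +ELLIPSIS

        Sleeping...
        ...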

@amogkam (Contributor) left a comment

data changes lgtm

@@ -296,7 +296,7 @@ For more details, see the :ref:`Batch inference user guide <batch_inference_home
Saving Datasets containing torch tensors
----------------------------------------

Datasets containing torch tensors can be saved to files, like parquet or numpy.
Contributor

Suggested change:
- Datasets containing torch tensors can be saved to files, like parquet or numpy.
+ You can save Datasets containing torch Tensors to files, like parquet or NumPy.

@amogkam merged commit a6f13e3 into ray-project:master on Jul 7, 2023
bveeramani added a commit to bveeramani/ray that referenced this pull request on Jul 10, 2023

:book: Doctest (CPU) fails 25% of runs due to a few flaky tests. This PR deflakes those tests.

Signed-off-by: Balaji Veeramani <balaji@anyscale.com>
bveeramani added a commit that referenced this pull request Jul 10, 2023
📖 Doctest (CPU) fails 25% of runs due to a few flaky tests. This PR deflakes those tests.

Signed-off-by: Balaji Veeramani <balaji@anyscale.com>
arvind-chandra pushed a commit to lmco/ray that referenced this pull request Aug 31, 2023
Signed-off-by: Balaji Veeramani <balaji@anyscale.com>

:book: Doctest (CPU) fails 25% of runs due to a few flaky tests. This PR deflakes those tests.

Signed-off-by: e428265 <arvind.chandramouli@lmco.com>