
add: pytest log installation #7313

Merged: 2 commits merged into main from fix-nightly-ci on Mar 14, 2024
Conversation

sayakpaul (Member) commented on Mar 14, 2024

What does this PR do?

Add the pytest-reportlog dependency. Without it, failures like https://github.com/huggingface/diffusers/actions/runs/8273408523/job/22637123355 happen.
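For context, pytest's --report-log option is provided by the pytest-reportlog plugin, so the workflow's dependency-install step needs something along these lines (a sketch, not the literal workflow diff):

pip install pytest-reportlog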

What did I test?

Created a simple yet reasonable test suite:

# coding=utf-8
# Copyright 2024 HuggingFace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import unittest

import numpy as np
import torch

from diffusers import (
    DiffusionPipeline,
    LCMScheduler,
)
from diffusers.utils.import_utils import is_accelerate_available
from diffusers.utils.testing_utils import (
    load_image,
    numpy_cosine_similarity_distance,
    require_torch_gpu,
    slow,
)


if is_accelerate_available():
    from accelerate.utils import release_memory


@slow
@require_torch_gpu
class LoraSDXLIntegrationTests(unittest.TestCase):
    def tearDown(self):
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def test_sdxl_1_0_lora(self):
        generator = torch.Generator("cpu").manual_seed(0)

        pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
        pipe.enable_model_cpu_offload()
        lora_model_id = "hf-internal-testing/sdxl-1.0-lora"
        lora_filename = "sd_xl_offset_example-lora_1.0.safetensors"
        pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)

        images = pipe(
            "masterpiece, best quality, mountain", output_type="np", generator=generator, num_inference_steps=2
        ).images

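        # Compare a 3x3 corner slice of the last channel against hard-coded reference values.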
        images = images[0, -3:, -3:, -1].flatten()
        expected = np.array([0.4468, 0.4087, 0.4134, 0.366, 0.3202, 0.3505, 0.3786, 0.387, 0.3535])

        max_diff = numpy_cosine_similarity_distance(images, expected)
        assert max_diff < 1e-4

        pipe.unload_lora_weights()

        release_memory(pipe)

    def test_sdxl_lcm_lora(self):
        pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        pipe.enable_model_cpu_offload()

        generator = torch.Generator("cpu").manual_seed(0)

        lora_model_id = "latent-consistency/lcm-lora-sdxl"

        pipe.load_lora_weights(lora_model_id)

        image = pipe(
            "masterpiece, best quality, mountain", generator=generator, num_inference_steps=4, guidance_scale=0.5
        ).images[0]

        expected_image = load_image(
            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/lcm_lora/sdxl_lcm_lora.png"
        )

        image_np = pipe.image_processor.pil_to_numpy(image)
        expected_image_np = pipe.image_processor.pil_to_numpy(expected_image)

        max_diff = numpy_cosine_similarity_distance(image_np.flatten(), expected_image_np.flatten())
        assert np.allclose(image_np.flatten(), expected_image_np.flatten())  # Purposefully making it fail to exercise the failure-reporting path.

        pipe.unload_lora_weights()

        release_memory(pipe)

Ran the test (from the DGX):

RUN_SLOW=1 CUDA_VISIBLE_DEVICES=1 python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
    --make-reports=tests_torch_cuda \
    --report-log=torch_cuda.log \
    tests/lora/test_lora_layers_peft_shorter.py::LoraSDXLIntegrationTests
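With --report-log, the pytest-reportlog plugin writes one JSON object per line to torch_cuda.log, which is what scripts/log_reports.py consumes. A minimal sketch of reading such a file (an illustration of the format, not the actual scripts/log_reports.py logic; field names follow pytest's serialized TestReport):

import json

# Each line of a pytest-reportlog file is a standalone JSON object.
failed = []
with open("torch_cuda.log") as f:
    for line in f:
        record = json.loads(line)
        # TestReport entries carry per-phase outcomes; "call" is the test body itself.
        if record.get("$report_type") == "TestReport" and record.get("when") == "call":
            if record.get("outcome") == "failed":
                failed.append(record["nodeid"])

print(f"{len(failed)} failed test(s)")
for nodeid in failed:
    print(nodeid)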

Exported GITHUB_RUN_ID to an arbitrary value so that scripts/log_reports.py can run without failing.
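For reference, the equivalent shell step (the value is arbitrary; it only surfaces in the "Check Action results" URL in the output below):

export GITHUB_RUN_ID=sayak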

Then ran python scripts/log_reports.py. It prints:

### *torch_cuda.log: 1 failed test*


+----+-----------------+--------------+--------------------+
|    | Test Location   | Test Case    | Test Name          |
+====+=================+==============+====================+
|  0 | tests/lora/t    | LoraSDXLInte | test_sdxl_lcm_lora |
|    | est_lora_lay    | grationTests |                    |
|    | ers_peft_sho    |              |                    |
|    | rter.py         |              |                    |
+----+-----------------+--------------+--------------------+

[{'type': 'header', 'text': {'type': 'plain_text', 'text': '🤗 Results of the Diffusers scheduled nightly tests.'}}, {'type': 'section', 'text': {'type': 'mrkdwn', 'text': '*torch_cuda.log: 1 failed test*\n\n```\n+----+-----------------+--------------+--------------------+\n|    | Test Location   | Test Case    | Test Name          |\n+====+=================+==============+====================+\n|  0 | tests/lora/t    | LoraSDXLInte | test_sdxl_lcm_lora |\n|    | est_lora_lay    | grationTests |                    |\n|    | ers_peft_sho    |              |                    |\n|    | rter.py         |              |                    |\n+----+-----------------+--------------+--------------------+\n```'}}, {'type': 'section', 'text': {'type': 'mrkdwn', 'text': '*For more details:*'}, 'accessory': {'type': 'button', 'text': {'type': 'plain_text', 'text': 'Check Action results', 'emoji': True}, 'url': 'https://github.com/huggingface/diffusers/actions/runs/sayak'}}, {'type': 'context', 'elements': [{'type': 'plain_text', 'text': 'Nightly test results for 2024-03-14'}]}]

So, that confirms the report log is written and parsed as expected.

sayakpaul requested a review from DN6 on March 14, 2024 at 04:26
sayakpaul (Member, Author) commented:

Merging because the PR is just about adding a dependency to a workflow file.

sayakpaul merged commit 95de198 into main on Mar 14, 2024
8 checks passed
sayakpaul deleted the fix-nightly-ci branch on March 14, 2024 at 04:31