[microTVM] Refactor pytest fixtures #12207

Merged (8 commits) on Aug 3, 2022
Changes from 3 commits
12 changes: 11 additions & 1 deletion python/tvm/micro/testing/evaluation.py
@@ -28,6 +28,7 @@
import tempfile

import tvm
from tvm.relay.op.contrib import cmsisnn


def tune_model(
@@ -94,6 +95,15 @@ def create_aot_session(
executor = tvm.relay.backend.Executor("aot")
crt_runtime = tvm.relay.backend.Runtime("crt", {"system-lib": True})

if use_cmsis_nn:
with tvm.transform.PassContext(
config={
"tir.disable_vectorize": True,
"relay.ext.cmsisnn.options": {"mcpu": target.mcpu},
}
):
mod = cmsisnn.partition_for_cmsisnn(mod, params, mcpu=target.mcpu)
Member:

Let's put this into the ExitStack() block below. Remove lines 98-106 and insert:

mod = cmsisnn.partition_for_cmsisnn(mod, params, mcpu=target.mcpu)

in the if use_cmsis_nn block
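
For reference, a minimal sketch of what the restructured block might look like (the opt_level value and the second if are assumptions for illustration; only the config lines appear in this diff):

with ExitStack() as stack:
    config = {"tir.disable_vectorize": True}
    if use_cmsis_nn:
        config["relay.ext.cmsisnn.options"] = {"mcpu": target.mcpu}
    stack.enter_context(tvm.transform.PassContext(opt_level=3, config=config))
    if use_cmsis_nn:
        # Partitioning now runs inside the PassContext entered above,
        # so the CMSIS-NN options apply to it as well.
        mod = cmsisnn.partition_for_cmsisnn(mod, params, mcpu=target.mcpu)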


with ExitStack() as stack:
config = {"tir.disable_vectorize": True}
if use_cmsis_nn:
@@ -153,4 +163,4 @@ def evaluate_model_accuracy(session, aot_executor, input_data, true_labels, runs
num_correct = sum(u == v for u, v in zip(true_labels, predicted_labels))
average_time = sum(aot_runtimes) / len(aot_runtimes)
accuracy = num_correct / len(predicted_labels)
return average_time, accuracy
return average_time, accuracy, predicted_labels
Member:

Why are we changing this to return predicted_labels? There are a few cases where we'll want to override evaluate_model_accuracy (say, with the MLPerf Tiny model for anomaly detection), but there won't be a 1:1 correspondence of samples to predicted_labels (because anomaly detection uses an area-under-the-curve metric). I'd prefer to keep it as is.

Member Author:

I made this change since we were using the predicted labels outside of this function for a validity check in some tests. If you think it's out of the scope of this function, I can replicate the whole function in the other repo. Thoughts?

Member:

Ah, I'd forgotten we need the labels for some hardware-in-the-loop tests. That seems fine. For anomaly detection and other AUC metrics using the same "template", we could use confidence values (or even just None) in place of predicted_labels. This LGTM now.
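(Callers that don't need the labels can simply discard the extra return value, as the updated test_autotune.py later in this diff does: time, _, _ = tvm.micro.testing.evaluate_model_accuracy(...).)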

126 changes: 126 additions & 0 deletions python/tvm/micro/testing/pytest_plugin.py
@@ -0,0 +1,126 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

# pylint: disable=invalid-name,redefined-outer-name
""" microTVM testing fixtures used to deduce testing argument
values from testing parameters """

import pathlib
import json
import os
import datetime
import pytest

import tvm
from tvm.contrib.utils import tempdir


def zephyr_boards() -> dict:
Member:

Why are we treating zephyr_boards specially? I'd rather we use get_boards("zephyr") in all those cases.

Member Author:

That was left over, thanks for catching it!

"""Returns a dict mapping board to target model"""
Member:

We already have a function for this purpose - get_supported_boards in tvm/python/tvm/micro/testing/utils.py. Let's either move that or use it instead.

Member Author:

Removed get_boards.

with open(
pathlib.Path(tvm.micro.get_microtvm_template_projects("zephyr")) / "boards.json"
) as f:
board_properties = json.load(f)

boards_model = {board: info["model"] for board, info in board_properties.items()}
return boards_model


def get_boards(platform: str) -> dict:
Member:

See above comment. This should be removed, and get_supported_boards inside tvm/python/tvm/micro/testing/utils.py should be used instead.

Member Author:

Removed get_boards.

"""Returns a dict mapping board to target model"""
with open(
pathlib.Path(tvm.micro.get_microtvm_template_projects(platform)) / "boards.json"
) as f:
board_properties = json.load(f)

boards_model = {board: info["model"] for board, info in board_properties.items()}
return boards_model
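
As a quick illustration of the mapping this returns (board and model names below are hypothetical; the real values come from each platform's boards.json):

# e.g. get_boards("zephyr") might yield
# {"qemu_x86": "host", "nucleo_l4r5zi": "stm32l4r5zi", ...}
zephyr_boards = get_boards("zephyr")
all_board_names = list(zephyr_boards) + list(get_boards("arduino"))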


def pytest_addoption(parser):
"""Adds more pytest arguments"""
parser.addoption(
"--board",
Member:

Love this change! It has been a long time in the works.

Member:

We should also explicitly document the difference between platforms and boards in our usage; it might not be obvious to others.

Member Author:

Added more details.

required=True,
choices=list(get_boards("zephyr").keys()) + list(get_boards("arduino").keys()),
help="microTVM boards for tests",
)
parser.addoption(
"--test-build-only",
action="store_true",
help="Only run tests that don't require physical hardware.",
)
parser.addoption(
"--tvm-debug",
action="store_true",
default=False,
help="If set true, it will keep the project directory for debugging.",
)
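
A hypothetical invocation combining these options (the board name and test directory are for illustration only):

pytest tests/micro/zephyr --board qemu_x86 --test-build-only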


@pytest.fixture(scope="module")
Member:

I'd rather all the microTVM fixtures lived in tests/micro, where they have existed previously. I don't think they can be reused very easily; if someone wanted to do additional testing elsewhere, they wouldn't need test-build-only or similar fixtures.

def board(request):
return request.config.getoption("--board")


@pytest.fixture(scope="module")
Member:

Also, any fixtures that are only based on command line arguments should be session scoped IMO. They can't change during the session, so I'd argue it is more appropriate.

Member Author:

I think that makes sense. I've changed them.

def tvm_debug(request):
Member:

Nit: I don't love the name tvm_debug if all this flag does is keep the project directory; IMO --keep-project-dir or --preserve-project makes more sense. If it does things besides this, we should document them in the help string.

Member Author:

It used to do other things, but not anymore. I will change the name.

Member Author:

Let me correct myself: it is also used in the project generation config to show more logs during the build/test process, so I will update the description instead.

return request.config.getoption("--tvm-debug")


def pytest_collection_modifyitems(config, items):
if config.getoption("--test-build-only"):
skip_hardware_tests = pytest.mark.skip(reason="--test-build-only was passed")
for item in items:
if "requires_hardware" in item.keywords:
item.add_marker(skip_hardware_tests)
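
For example, a test that needs a physical device would opt in with the marker (test name and body are hypothetical):

@pytest.mark.requires_hardware
def test_flash_and_infer(board, workspace_dir):
    # Runs against a real device; skipped when --test-build-only is passed.
    ...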


@pytest.fixture
def workspace_dir(request, board, tvm_debug):
"""Creates workspace directory for each test."""
parent_dir = pathlib.Path(os.path.dirname(request.module.__file__))
board_workspace = (
parent_dir / f"workspace_{board}" / datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
)
board_workspace_base = str(board_workspace)
number = 1
while board_workspace.exists():
board_workspace = pathlib.Path(board_workspace_base + f"-{number}")
number += 1

if not os.path.exists(board_workspace.parent):
os.makedirs(board_workspace.parent)

keep_for_debug = tvm_debug if tvm_debug else None
test_temp_dir = tempdir(custom_path=board_workspace, keep_for_debug=keep_for_debug)
return test_temp_dir


@pytest.fixture(autouse=True)
def skip_by_board(request, board):
"""Skip test if board is in the list."""
if request.node.get_closest_marker("skip_boards"):
if board in request.node.get_closest_marker("skip_boards").args[0]:
pytest.skip("skipped on this board: {}".format(board))


def pytest_configure(config):
config.addinivalue_line(
"markers",
"skip_boards(board): skip test for the given board",
)
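
Usage of the marker would look like this (board name hypothetical):

@pytest.mark.skip_boards(["nano33ble"])
def test_large_model(board):
    # Skipped by the autouse skip_by_board fixture when --board=nano33ble.
    ...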
45 changes: 4 additions & 41 deletions tests/micro/arduino/conftest.py
@@ -15,35 +15,19 @@
# specific language governing permissions and limitations
# under the License.

import pytest
pytest_plugins = [
"tvm.micro.testing.pytest_plugin",
]

from test_utils import ARDUINO_BOARDS
import pytest


def pytest_addoption(parser):
parser.addoption(
"--arduino-board",
nargs="+",
required=True,
choices=ARDUINO_BOARDS.keys(),
help="Arduino board for tests.",
)
parser.addoption(
"--arduino-cli-cmd",
Member:

Is it worth abstracting this parameter to "build tool path"? I could be convinced either way.

Member Author:

I think this way is clearer, especially when used in tvmc.

Member:

Yeah, you're right; "build tool" is just too abstract.

default="arduino-cli",
help="Path to `arduino-cli` command for flashing device.",
)
parser.addoption(
"--test-build-only",
action="store_true",
help="Only run tests that don't require physical hardware.",
)
parser.addoption(
"--tvm-debug",
action="store_true",
default=False,
help="If given, enable a debug session while the test is running. Before running the test, in a separate shell, you should run: <python -m tvm.exec.microtvm_debug_shell>",
)


def pytest_configure(config):
@@ -52,27 +36,6 @@ def pytest_configure(config):
)


def pytest_collection_modifyitems(config, items):
if config.getoption("--test-build-only"):
skip_hardware_tests = pytest.mark.skip(reason="--test-build-only was passed")
for item in items:
if "requires_hardware" in item.keywords:
item.add_marker(skip_hardware_tests)


# We might do project generation differently for different boards in the future
# (to take advantage of multiple cores / external memory / etc.), so all tests
# are parameterized by board
def pytest_generate_tests(metafunc):
board = metafunc.config.getoption("arduino_board")
metafunc.parametrize("board", board, scope="session")


@pytest.fixture(scope="session")
def arduino_cli_cmd(request):
return request.config.getoption("--arduino-cli-cmd")


@pytest.fixture(scope="session")
def tvm_debug(request):
return request.config.getoption("--tvm-debug")
5 changes: 0 additions & 5 deletions tests/micro/arduino/test_arduino_error_detection.py
@@ -23,11 +23,6 @@
import test_utils
import tvm.testing

# A new project and workspace dir is created for EVERY test
@pytest.fixture
def workspace_dir(request, board):
return test_utils.make_workspace_dir("arduino_error_detection", board)


@pytest.fixture
def project(board, arduino_cli_cmd, tvm_debug, workspace_dir):
5 changes: 0 additions & 5 deletions tests/micro/arduino/test_arduino_rpc_server.py
@@ -37,11 +37,6 @@

import test_utils

# A new project and workspace dir is created for EVERY test
@pytest.fixture
def workspace_dir(board):
return test_utils.make_workspace_dir("arduino_rpc_server", board)


def _make_session(model, arduino_board, arduino_cli_cmd, workspace_dir, mod, build_config):
project = tvm.micro.generate_project(
11 changes: 5 additions & 6 deletions tests/micro/arduino/test_arduino_workflow.py
@@ -36,23 +36,22 @@
6. Use serial connection to ensure model behaves correctly
"""


# Since these tests are sequential, we'll use the same project/workspace
# directory for all tests in this file
@pytest.fixture(scope="module")
def workspace_dir(request, board):
def workflow_workspace_dir(request, board):
Member:

Like this change!

return test_utils.make_workspace_dir("arduino_workflow", board)


@pytest.fixture(scope="module")
def project_dir(workspace_dir):
return workspace_dir / "project"
def project_dir(workflow_workspace_dir):
return workflow_workspace_dir / "project"


# We MUST pass workspace_dir, not project_dir, or the workspace will be dereferenced too soon
@pytest.fixture(scope="module")
def project(board, arduino_cli_cmd, tvm_debug, workspace_dir):
return test_utils.make_kws_project(board, arduino_cli_cmd, tvm_debug, workspace_dir)
def project(board, arduino_cli_cmd, tvm_debug, workflow_workspace_dir):
return test_utils.make_kws_project(board, arduino_cli_cmd, tvm_debug, workflow_workspace_dir)


def _get_directory_elements(directory):
31 changes: 4 additions & 27 deletions tests/micro/common/conftest.py
@@ -14,10 +14,11 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations

import pytest
pytest_plugins = [
"tvm.micro.testing.pytest_plugin",
]

from ..zephyr.test_utils import ZEPHYR_BOARDS
from ..arduino.test_utils import ARDUINO_BOARDS
import pytest


def pytest_addoption(parser):
@@ -27,32 +28,8 @@ def pytest_addoption(parser):
choices=["arduino", "zephyr"],
help="Platform to run tests with",
)
parser.addoption(
"--board",
required=True,
choices=list(ARDUINO_BOARDS.keys()) + list(ZEPHYR_BOARDS.keys()),
help="microTVM boards for tests",
)
parser.addoption(
"--test-build-only",
action="store_true",
help="Only run tests that don't require physical hardware.",
)


@pytest.fixture
def platform(request):
return request.config.getoption("--platform")


@pytest.fixture
def board(request):
return request.config.getoption("--board")


def pytest_collection_modifyitems(config, items):
if config.getoption("--test-build-only"):
skip_hardware_tests = pytest.mark.skip(reason="--test-build-only was passed")
for item in items:
if "requires_hardware" in item.keywords:
item.add_marker(skip_hardware_tests)
2 changes: 1 addition & 1 deletion tests/micro/common/test_autotune.py
@@ -75,7 +75,7 @@ def test_kws_autotune_workflow(platform, board, tmp_path):
labels = [0, 0, 0]

# Validate performance across random runs
time, _ = tvm.micro.testing.evaluate_model_accuracy(
time, _, _ = tvm.micro.testing.evaluate_model_accuracy(
session, aot_executor, samples, labels, runs_per_sample=20
)
# `time` is the average time taken to execute model inference on the