From df75b1970e73f1925aa8061514bee2fedd3c71c1 Mon Sep 17 00:00:00 2001 From: Nat Kershaw Date: Wed, 30 Sep 2020 17:36:52 -0700 Subject: [PATCH] Fix broken links #4 --- docs/how-to/tune-performance.md | 13 ++- docs/reference/api/csharp-api.md | 9 +- docs/reference/api/winrt-api.md | 2 +- .../DNNL-ExecutionProvider.md | 97 ++++++++++++++++-- .../DirectML-ExecutionProvider.md | 39 +++---- .../MIGraphX-ExecutionProvider.md | 2 +- .../Nuphar-ExecutionProvider.md | 48 +++++---- .../TensorRT-ExecutionProvider.md | 28 +++-- docs/resources/compatibility.md | 8 +- ...h-level_design.md => high-level-design.md} | 3 +- docs/tutorials/fasterrcnn_csharp.md | 2 +- docs/tutorials/mnist_java.md | 21 ++-- docs/tutorials/samples_catalog.md | 10 +- images/mkl-dnn_node.png | Bin 0 -> 51197 bytes 14 files changed, 198 insertions(+), 84 deletions(-) rename docs/resources/{high-level_design.md => high-level-design.md} (98%) create mode 100644 images/mkl-dnn_node.png diff --git a/docs/how-to/tune-performance.md b/docs/how-to/tune-performance.md index 6005adb480641..febd844399742 100644 --- a/docs/how-to/tune-performance.md +++ b/docs/how-to/tune-performance.md @@ -7,7 +7,7 @@ nav_order: 1 # ONNX Runtime Performance Tuning {: .no_toc } -ONNX Runtime gives high performance across a range of hardware options by providing "Execution Providers" to interface to different execution environments. See: [design overview](../resources/high-level-design.md), [supported execution providers](https://github.com/microsoft/onnxruntime#supported-accelerators). +ONNX Runtime gives high performance across a range of hardware options by providing "Execution Providers" to interface to different execution environments. See: [design overview](../resources/high-level-design.md), [supported execution providers](../resources/execution-providers). Along with this flexibility comes decisions for tuning and usage. For each model running with each execution provider, there are settings that can be tuned (e.g. thread number, wait policy, etc) to improve performance. @@ -172,27 +172,32 @@ The most widely used environment variables are: * ACTIVE will not yield CPU, instead it will have a while loop to check whether the next task is ready * Use PASSIVE if your CPU usage already high, and use ACTIVE when you want to trade CPU with latency - ## Troubleshooting model performance issues + The answers below are troubleshooting suggestions based on common previous user-filed issues and questions. This list is by no means exhaustive and there is a lot of case-by-case fluctuation depending on the model and specific usage scenario. Please use this information to guide your troubleshooting, search through previously filed issues for related topics, and/or file a new issue if your problem is still not resolved. ### Performance Troubleshooting Checklist + Here is a list of things to check through when assessing performance issues. * Are you using OpenMP? OpenMP will parallelize some of the code for potential performance improvements. This is not recommended for running on single threads. * Have you enabled all [graph optimizations](../resources/graph-optimizations.md)? The official published packages do enable all by default, but when building from source, check that these are enabled in your build. * Have you searched through prior filed [Github issues](https://github.com/microsoft/onnxruntime/issues) to see if your problem has been discussed previously? Please do this before filing new issues. 
* If using CUDA or TensorRT, do you have the right versions of the dependent libraries installed? -### I need help performance tuning for BERT models. -For BERT models, sometimes ONNX Runtime cannot apply the best optimization due to reasons such as framework version updates. We recommend trying out the [BERT optimization tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert), which reflects the latest changes in graph pattern matching and model conversions, and a set of [notebooks](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert/notebooks) to help get started. +### I need help performance tuning for BERT models + +For BERT models, sometimes ONNX Runtime cannot apply the best optimization due to reasons such as framework version updates. We recommend trying out the [BERT optimization tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers), which reflects the latest changes in graph pattern matching and model conversions, and a set of [notebooks](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers/notebooks) to help get started. ### Why is the model graph not optimized even with graph_optimization_level set to ORT_ENABLE_ALL? + The ONNX model from IR_VERSION 4 only treats initializers that appear in graph input as non-constant. This may fail some of the graph optimizations, like const folding, operator fusion and etc. Move initializers out of graph inputs if there is no need to override them, by either re-generating the model with latest exporter/converter or with the tool [remove_initializer_from_input.py](https://github.com/microsoft/onnxruntime/tree/master/tools/python/remove_initializer_from_input.py). ### Why is my model running slower on GPU than CPU? + Depending on which execution provider you're using, it may not have full support for all the operators in your model. Fallback to CPU ops can cause hits in performance speed. Moreover even if an op is implemented by the CUDA execution provider, it may not necessarily assign/place the op to the CUDA EP due to performance reasons. To see the placement decided by ORT, turn on verbose logging and look at the console output. ### My converted Tensorflow model is slow - why? + NCHW and NHWC are two different memory layout for 4-D tensors. Most TensorFlow operations used by a CNN support both NHWC and NCHW data format. The Tensorflow team suggests that on GPU NCHW is faster but on CPU NHWC is sometimes faster in Tensorflow. However, ONNX only supports NCHW. As a result, if the original model is in NHWC format, when the model is converted extra transposes may be added. The [tensorflow-onnx](https://github.com/onnx/tensorflow-onnx) and [keras-onnx](https://github.com/onnx/keras-onnx) converters do remove many of these transposes, but if this doesn't help sufficiently, consider retraining the model using NCHW. diff --git a/docs/reference/api/csharp-api.md b/docs/reference/api/csharp-api.md index 49018e1553941..231f6667a6b6b 100644 --- a/docs/reference/api/csharp-api.md +++ b/docs/reference/api/csharp-api.md @@ -17,13 +17,14 @@ The ONNX runtime provides a C# .Net binding for running inference on ONNX models {:toc} ## NuGet Package + The Microsoft.ML.OnnxRuntime Nuget package includes the precompiled binaries for ONNX runtime, and includes libraries for Windows and Linux platforms with X64 CPUs. The APIs conform to .Net Standard 1.1. 
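For reference, one way to pull the package into a project is with the .NET CLI (a minimal sketch; the GPU package shown below is only relevant for CUDA-capable machines):

```bash
dotnet add package Microsoft.ML.OnnxRuntime
# or, for the GPU-enabled package:
dotnet add package Microsoft.ML.OnnxRuntime.Gpu
```
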
## Sample Code The unit tests contain several examples of loading models, inspecting input/output node shapes and types, as well as constructing tensors for scoring. -* [../csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L166](../csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L166) +* [Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs](https://github.com/microsoft/onnxruntime/tree/master/csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L166) ## Getting Started Here is simple tutorial for getting started with running inference on an existing ONNX model for a given input data. The model is typically trained using any of the well-known training frameworks and exported into the ONNX format. To start scoring using the model, open a session using the `InferenceSession` class, passing in the file path to the model as a parameter. @@ -96,9 +97,10 @@ using (var outputs1 = session1.Run(inputs1)) If the model have fixed sized inputs and outputs of numeric tensors, you can use "FixedBufferOnnxValue" to accelerate the inference speed. By using "FixedBufferOnnxValue", the container objects only need to be allocated/disposed one time during multiple InferenceSession.Run() calls. This avoids some overhead which may be beneficial for smaller models where the time is noticeable in the overall running time. An example can be found at `TestReusingFixedBufferOnnxValueNonStringTypeMultiInferences()`: -* [../csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L1047](../csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L1047) +* [Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L1047](https://github.com/microsoft/onnxruntime/tree/master/csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L1047) ## Running on GPU (Optional) + If using the GPU package, simply use the appropriate SessionOptions when creating an InferenceSession. ```cs @@ -253,6 +255,3 @@ class OnnxRuntimeException: Exception; ``` The type of Exception that is thrown in most of the error conditions related to Onnx Runtime. - - - diff --git a/docs/reference/api/winrt-api.md b/docs/reference/api/winrt-api.md index 7d81bc6a24a2b..2469b2cd12d08 100644 --- a/docs/reference/api/winrt-api.md +++ b/docs/reference/api/winrt-api.md @@ -16,7 +16,7 @@ The WinML API is a WinRT API that shipped inside the Windows OS starting with bu Many customers have asked for a way to use this offering as an application redistributable package. -With our new [layered architecture](InferenceHighLevelDesign.md#the-onnx-runtime-and-windows-os-integration) you can now do this, with some limitations. The WinML APIs have been lifted and mirrored into the Microsoft.AI.MachineLearning namespace in the redistributable. +With our [layered architecture](../../resources/high-level-design.md#the-onnx-runtime-and-windows-os-integration) you can now do this, with some limitations. The WinML APIs have been lifted and mirrored into the Microsoft.AI.MachineLearning namespace in the redistributable. 
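As an illustrative sketch only (the model path and binding names below are placeholders, not taken from this page), code written against the in-box WinML API typically needs little more than a namespace change to target the redistributable:

```cs
using Microsoft.AI.MachineLearning; // instead of Windows.AI.MachineLearning

// Load a model, create a session on the default device, and evaluate it.
var model = LearningModel.LoadFromFilePath("model.onnx");
var device = new LearningModelDevice(LearningModelDeviceKind.Default);
using var session = new LearningModelSession(model, device);
var binding = new LearningModelBinding(session);
// binding.Bind("input", inputFeatureValue); // bind your model's inputs here
var results = session.Evaluate(binding, "run-1");
```
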
## Contents {: .no_toc } diff --git a/docs/reference/execution-providers/DNNL-ExecutionProvider.md b/docs/reference/execution-providers/DNNL-ExecutionProvider.md index 57b4cf4b2ffb3..8e0f7d7b5165f 100644 --- a/docs/reference/execution-providers/DNNL-ExecutionProvider.md +++ b/docs/reference/execution-providers/DNNL-ExecutionProvider.md @@ -21,20 +21,26 @@ For information on how DNNL optimizes subgraphs, see [Subgraph Optimization](./M {:toc} ## Build + For build instructions, please see the [BUILD page](../../how-to/build.md#dnnl-and-mklml). ## Supported OS + * Ubuntu 16.04 -* Windows 10 +* Windows 10 * Mac OS X ## Supported backend + * CPU ## Using the DNNL Execution Provider + ### C/C++ + The DNNLExecutionProvider execution provider needs to be registered with ONNX Runtime to enable in the inference session. -``` + +```c++ string log_id = "Foo"; auto logging_manager = std::make_unique (std::unique_ptr{new CLogSink{}}, @@ -47,35 +53,38 @@ InferenceSession session_object{so,env}; session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime:: DNNLExecutionProvider >()); status = session_object.Load(model_file_name); ``` + The C API details are [here](../api/c-api.md). ### Python + When using the python wheel from the ONNX Runtime built with DNNL execution provider, it will be automatically prioritized over the CPU execution provider. Python APIs details are [here](https://aka.ms/onnxruntime-python). ## Performance Tuning + For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md) ## Subgraph Optimization -DNNL uses blocked layout (example: nhwc with channels blocked by 16 – nChw16c) to take advantage of vector operations using AVX512. To get best performance, we avoid reorders (example. Nchw16c to nchw) and propagate blocked layout to next primitive. +DNNL uses blocked layout (example: nhwc with channels blocked by 16 – nChw16c) to take advantage of vector operations using AVX512. To get best performance, we avoid reorders (example. Nchw16c to nchw) and propagate blocked layout to next primitive. Subgraph optimization achieves this in the following steps. + 1. Parses ONNX Runtime graph and creates an Internal Representation of subgraph.. 2. Subgraph Operator (DnnlFunKernel) iterates through DNNL nodes and creates a vector DNNL Kernels 3. Compute Function of DnnlFunKernel iterates and binds data to DNNL primitives in the vector and submits vector for execution. - ### Subgraph (IR) Internal Representation + DnnlExecutionProvicer::GetCapability() parses ONNX model graph and creates IR (Internal Representation) of subgraphs of DNNL operators. -Each subgraph contains a vector DnnlNodes, inputs, outputs and attributes for all its DnnlNodes. There can be attributes of same name. So, we prefix attribute names with Node name and its index. -Unique id for subgraph is set as an attribute. +Each subgraph contains a vector DnnlNodes, inputs, outputs and attributes for all its DnnlNodes. There can be attributes of same name. So, we prefix attribute names with Node name and its index. Unique id for subgraph is set as an attribute. DnnlNode has an index to its inputs and outputs and pointer to its parent nodes. DnnlNode directly reads blocked memory from its parent to avoid data reordering.

- ### Subgraph Classes + Primitive like DnnlConv, DnnlPool, etc are derived from DnnlKernel base class. The following UML diagram captures Subgraph classes. @@ -87,11 +96,78 @@ The following UML diagram captures Subgraph classes. DnnlExecutionProvicer::Compute() function creates DnnlFuncKernel and call it’s Compute Function. - DnnlFuncKernel::Compute function creates SubgraphPrimitve pool and add the object to a map. SubgraphPrimitve constructor calls the following member functions + +```c++ +SubgraphPrimitve::CreatePrimitives() + for (auto& mklnode : mklnodes) { + if (mklnode.name == "Conv") { + kernel.reset(new DnnlConv()); + kernels.push_back(kernel); + } else if (mklnode.name == "BatchNormalization-Relu") { + kernel.reset(new DnnlBatchNorm()); + context_.kernels.push_back(kernel); + } else if (mklnode.name == "MaxPool") { + kernel.reset(new DnnlPool()); + context_.kernels.push_back(kernel); + } + . + . + . +``` + +In CreatePrimitives method, we iterate DnnlNodes and creates DnnlKernel objects and add DNNL primitive to a vector. It also reads attributes. This is done only once, at first iteration. + +```c++ +SubgraphPrimitve::Compute() + for (auto& kernel : kernels) { + kernel->Bind(input_tensors, output_tensors); + } + stream->submit(net); ``` + +In SubgraphPrimitve::Compute() method, we iterate thru Dnnl Kernels and bind input data. Then we submit the vector of Primitives to DNNL stream. + + +### Subgraph Optimization + +DNNL uses blocked layout (example: nhwc with channels blocked by 16 – nChw16c) to take advantage of vector operations using AVX512. To get best performance, we avoid reorders (example. Nchw16c to nchw) and propagate blocked layout to next primitive. + +Subgraph optimization achieves this in the following steps. + +1. Parses ONNX Runtime graph and creates an Internal Representation of subgraph.. +2. Subgraph Operator (DnnlFunKernel) iterates through DNNL nodes and creates a vector DNNL Kernels +3. Compute Function of DnnlFunKernel iterates and binds data to DNNL primitives in the vector and submits vector for execution. + +#### Subgraph (IR) Internal Representation + +DnnlExecutionProvicer::GetCapability() parses ONNX model graph and creates IR (Internal Representation) of subgraphs of DNNL operators. +Each subgraph contains a vector DnnlNodes, inputs, outputs and attributes for all its DnnlNodes. There can be attributes of same name. So, we prefix attribute names with Node name and its index. +Unique id for subgraph is set as an attribute. + +DnnlNode has an index to its inputs and outputs and pointer to its parent nodes. DnnlNode directly reads blocked memory from its parent to avoid data reordering. + +

+ +#### Subgraph Classes + +Primitive like DnnlConv, DnnlPool, etc are derived from DnnlKernel base class. + +The following UML diagram captures Subgraph classes. + +

+ +#### Subgraph Execution + +DnnlExecutionProvicer::Compute() function creates DnnlFuncKernel and call it’s Compute Function. + +DnnlFuncKernel::Compute function creates SubgraphPrimitve pool and add the object to a map. + +SubgraphPrimitve constructor calls the following member functions + +```c++ SubgraphPrimitve::CreatePrimitives() for (auto& mklnode : mklnodes) { if (mklnode.name == "Conv") { @@ -107,10 +183,11 @@ SubgraphPrimitve::CreatePrimitives() . . . -``` +``` + In CreatePrimitives method, we iterate DnnlNodes and creates DnnlKernel objects and add DNNL primitive to a vector. It also reads attributes. This is done only once, at first iteration. -``` +```c++ SubgraphPrimitve::Compute() for (auto& kernel : kernels) { kernel->Bind(input_tensors, output_tensors); diff --git a/docs/reference/execution-providers/DirectML-ExecutionProvider.md b/docs/reference/execution-providers/DirectML-ExecutionProvider.md index 8988e0c40a324..a28d8cd8257b4 100644 --- a/docs/reference/execution-providers/DirectML-ExecutionProvider.md +++ b/docs/reference/execution-providers/DirectML-ExecutionProvider.md @@ -32,8 +32,6 @@ The DirectML execution provider requires any DirectX 12 capable device. Almost a DirectML is compatible with Windows 10, version 1709 (10.0.16299; RS3, "Fall Creators Update") and newer. - - ## Building from source For general information about building onnxruntime, see [BUILD.md](../../how-to/build.md). @@ -44,38 +42,42 @@ Requirements for building the DirectML execution provider: To build onnxruntime with the DML EP included, supply the `--use_dml` parameter to `build.bat`. e.g. - build.bat --config RelWithDebInfo --build_shared_lib --parallel --use_dml +```powershell +build.bat --config RelWithDebInfo --build_shared_lib --parallel --use_dml +``` The DirectML execution provider supports building for both x64 (default) and x86 architectures. Note that building onnxruntime with the DirectML execution provider enabled causes the the DirectML redistributable package to be automatically downloaded as part of the build. Its use is governed by a license whose text may be found as part of the NuGet package. - - ## Using the DirectML execution provider -When using the [C API](../C_API.md) with a DML-enabled build of onnxruntime (see [Building from source](#building-from-source)), the DirectML execution provider can be enabled using one of the two factory functions included in `include/onnxruntime/core/providers/dml/dml_provider_factory.h`. +When using the [C API](../api/c-api.md) with a DML-enabled build of onnxruntime (see [Building from source](#building-from-source)), the DirectML execution provider can be enabled using one of the two factory functions included in `include/onnxruntime/core/providers/dml/dml_provider_factory.h`. ### `OrtSessionOptionsAppendExecutionProvider_DML` function Creates a DirectML Execution Provider which executes on the hardware adapter with the given `device_id`, also known as the adapter index. The device ID corresponds to the enumeration order of hardware adapters as given by [IDXGIFactory::EnumAdapters](https://docs.microsoft.com/windows/win32/api/dxgi/nf-dxgi-idxgifactory-enumadapters). A `device_id` of 0 always corresponds to the default adapter, which is typically the primary display GPU installed on the system. A negative `device_id` is invalid. 
- OrtStatus* OrtSessionOptionsAppendExecutionProvider_DML( - _In_ OrtSessionOptions* options, - int device_id - ); +```c +OrtStatus* OrtSessionOptionsAppendExecutionProvider_DML( + _In_ OrtSessionOptions* options, + int device_id + ); +``` ### `OrtSessionOptionsAppendExecutionProviderEx_DML` function Creates a DirectML Execution Provider using the given DirectML device, and which executes work on the supplied D3D12 command queue. The DirectML device and D3D12 command queue must have the same parent [ID3D12Device](https://docs.microsoft.com/windows/win32/api/d3d12/nn-d3d12-id3d12device), or an error will be returned. The D3D12 command queue must be of type `DIRECT` or `COMPUTE` (see [D3D12_COMMAND_LIST_TYPE](https://docs.microsoft.com/windows/win32/api/d3d12/ne-d3d12-d3d12_command_list_type)). If this function succeeds, the inference session once created will maintain a strong reference on both the `dml_device` and `command_queue` objects. - OrtStatus* OrtSessionOptionsAppendExecutionProviderEx_DML( - _In_ OrtSessionOptions* options, - _In_ IDMLDevice* dml_device, - _In_ ID3D12CommandQueue* cmd_queue - ); +```c +OrtStatus* OrtSessionOptionsAppendExecutionProviderEx_DML( + _In_ OrtSessionOptions* options, + _In_ IDMLDevice* dml_device, + _In_ ID3D12CommandQueue* cmd_queue + ); +``` -**See Also** +### See Also [DMLCreateDevice function](https://docs.microsoft.com/windows/win32/api/directml/nf-directml-dmlcreatedevice) [ID3D12Device::CreateCommandQueue method](https://docs.microsoft.com/windows/win32/api/d3d12/nf-d3d12-id3d12device-createcommandqueue) @@ -91,7 +93,7 @@ The DirectML execution provider does not support the use of memory pattern optim If using the onnxruntime C API, you must call `DisableMemPattern` and `SetSessionExecutionMode` functions to set the options required by the DirectML execution provider. -See [onnxruntime\include\onnxruntime\core\session\onnxruntime_c_api.h](../.https://github.com/microsoft/onnxruntime/tree/master/include//onnxruntime/core/session/onnxruntime_c_api.h). +See [onnxruntime\include\onnxruntime\core\session\onnxruntime_c_api.h](https://github.com/microsoft/onnxruntime/tree/master/include//onnxruntime/core/session/onnxruntime_c_api.h). OrtStatus*(ORT_API_CALL* DisableMemPattern)(_Inout_ OrtSessionOptions* options)NO_EXCEPTION; @@ -103,7 +105,7 @@ Additionally, as the DirectML execution provider does not support parallel execu ## Samples -A complete sample of onnxruntime using the DirectML execution provider can be found under [samples/c_cxx/fns_candy_style_transfer](../.https://github.com/microsoft/onnxruntime/tree/master/samples//c_cxx/fns_candy_style_transfer). +A complete sample of onnxruntime using the DirectML execution provider can be found under [samples/c_cxx/fns_candy_style_transfer](https://github.com/microsoft/onnxruntime/tree/master/samples//c_cxx/fns_candy_style_transfer). ## Performance best practices The DirectML execution provider works most efficiently when tensor shapes are known at the time a session is created. This provides a few performance benefits: @@ -119,7 +121,6 @@ In this case, there are three options: - Specify values of named dimensions within model inputs when creating the session using the OnnxRuntime *AddFreeDimensionOverrideByName* ABI. - Edit the model to ensure that an input's free dimension has a [denotation](https://github.com/onnx/onnx/blob/master/docs/DimensionDenotation.md) (such as "DATA_BATCH," or a custom denotation). 
Then when creating the session, specify the dimension size for each denotation. This can be done using the OnnxRuntime *AddFreeDimensionOverride* ABI. - ## See also [DirectML documentation \(docs.microsoft.com\)](https://docs.microsoft.com/en-us/windows/win32/direct3d12/dml) diff --git a/docs/reference/execution-providers/MIGraphX-ExecutionProvider.md b/docs/reference/execution-providers/MIGraphX-ExecutionProvider.md index 76b274f02c408..3e22fb60a7ff8 100644 --- a/docs/reference/execution-providers/MIGraphX-ExecutionProvider.md +++ b/docs/reference/execution-providers/MIGraphX-ExecutionProvider.md @@ -42,7 +42,7 @@ The C API details are [here](../api/c-api.md). ### Python When using the Python wheel from the ONNX Runtime build with MIGraphX execution provider, it will be automatically prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution -provider. Python APIs details are [here](../python/api_summary.rst#api-summary). +provider. Python APIs details are [here](/python/api_summary). You can check [here](https://github.com/scxiao/ort_test/tree/master/python/run_onnx) for a python script to run an model on either the CPU or MIGraphX Execution Provider. diff --git a/docs/reference/execution-providers/Nuphar-ExecutionProvider.md b/docs/reference/execution-providers/Nuphar-ExecutionProvider.md index a956bbd6c66b3..7b9ede1f33207 100644 --- a/docs/reference/execution-providers/Nuphar-ExecutionProvider.md +++ b/docs/reference/execution-providers/Nuphar-ExecutionProvider.md @@ -10,7 +10,7 @@ nav_order: 8 NUPHAR stands for Neural-network Unified Preprocessing Heterogeneous ARchitecture. As an execution provider in the ONNX Runtime, it is built on top of [TVM](https://github.com/dmlc/tvm) and [LLVM](https://llvm.org) to accelerate ONNX models by compiling nodes in subgraphs into optimized functions via JIT. It also provides JIT caching to save compilation time at runtime. -Developers can tap into the power of Nuphar through ONNX Runtime to accelerate inferencing of ONNX models. The Nuphar execution provider comes with a common ONNX to TVM lowering [library](../../onnxruntime/core/codegen) that can potentially be reused by other execution providers to leverage TVM. With the Nuphar execution provider, the ONNX Runtime delivers better inferencing performance on the same hardware compared to generic X64 CPU acceleration, especially for quantized recurrent neural networks. Various products at Microsoft have seen up to a 5x improvement in performance with no loss of accuracy, by running quantized LSTMs via the Nuphar execution provider in the ONNX Runtime. +Developers can tap into the power of Nuphar through ONNX Runtime to accelerate inferencing of ONNX models. The Nuphar execution provider comes with a common ONNX to TVM lowering [library](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/codegen) that can potentially be reused by other execution providers to leverage TVM. With the Nuphar execution provider, the ONNX Runtime delivers better inference performance on the same hardware compared to generic X64 CPU acceleration, especially for quantized recurrent neural networks. Various products at Microsoft have seen up to a 5x improvement in performance with no loss of accuracy, by running quantized LSTMs via the Nuphar execution provider in the ONNX Runtime. 
## Contents {: .no_toc } @@ -26,19 +26,21 @@ For build instructions, please see the [BUILD page](../../how-to/build.md#nuphar The Nuphar execution provider needs to be registered with ONNX Runtime to enable in the inference session. The C API details are [here](../api/c-api.md). ### Python -You can use the Nuphar execution provider via the python wheel from the ONNX Runtime build. The Nuphar execution provider will be automatically prioritized over the default CPU execution providers, thus no need to separately register the execution provider. Python APIs details are [here](../python/api_summary.rst#api-summary). + +You can use the Nuphar execution provider via the python wheel from the ONNX Runtime build. The Nuphar execution provider will be automatically prioritized over the default CPU execution providers, thus no need to separately register the execution provider. Python APIs details are [here](/python/api_summary). ## Performance and Accuracy Testing -You can test your ONNX model's performance with [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest/README.md), or test accuracy with [onnx_test_runner](../../onnxruntime/test/onnx/README.txt). To run these tools with the Nuphar execution provider, please pass `-e nuphar` in command line options. + +You can test your ONNX model's performance with [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest/README.md), or test accuracy with [onnx_test_runner](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/onnx). To run these tools with the Nuphar execution provider, please pass `-e nuphar` in command line options. Please note that Nuphar uses TVM thread pool and parallel schedule for multi-thread inference performance. When building with OpenMP or MKLML, TVM thread pool would use gomp or iomp as its implementation; otherwise, TVM creates its own thread pool. Because of this, the current default parallel schedule policy is: - Default to on for USE_OPENMP or USE_MKLML. User can use OMP_NUM_THREADS/MKL_NUM_THREADS to control TVM thread pool, as well as TVM_NUM_THREADS - Default to off for none of above. User can use TVM_NUM_THREADS to control TVM thread pool. -This choice is to ensure to get ideal performance with the different build options. When build with USE_OPENMP or USE_MKLML, users would have to avoid thread confliction from OpenMP or MKL with their inference invocations anyway, so parallel schedule is enable to leverage existing thread pool. When not building with gomp or iomp, TVM thread pool is turned off to avoid confliction with user threads. If needed, user can set env or settings with [NUPHAR_PARALLEL_MIN_WORKLOADS](../../onnxruntime/core/providers/nuphar/common/nuphar_settings.cc#L61) to 0 to disable parallel schedule, or to some non-zero value to enable parallel schedule. The non-zero value indicates the minimal number of elements being computed per thread when parallel schedule would be turned on. +This choice is to ensure to get ideal performance with the different build options. When build with USE_OPENMP or USE_MKLML, users would have to avoid thread confliction from OpenMP or MKL with their inference invocations anyway, so parallel schedule is enable to leverage existing thread pool. When not building with gomp or iomp, TVM thread pool is turned off to avoid confliction with user threads. 
If needed, user can set env or settings with [NUPHAR_PARALLEL_MIN_WORKLOADS](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/common/nuphar_settings.cc#L61) to 0 to disable parallel schedule, or to some non-zero value to enable parallel schedule. The non-zero value indicates the minimal number of elements being computed per thread when parallel schedule would be turned on. ## Model Conversion and Quantization -You may use Python script [model_editor.py](../../onnxruntime/core/providers/nuphar/scripts/model_editor.py) to turn LSTM/GRU/RNN ops to Scan ops for a given model, and then use [model_quantizer.py](../../onnxruntime/core/providers/nuphar/scripts/model_quantizer.py) to quantize MatMul ops into MatMulInteger ops. +You may use Python script [model_editor.py](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/scripts/model_editor.py) to turn LSTM/GRU/RNN ops to Scan ops for a given model, and then use [model_quantizer.py](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/scripts/model_quantizer.py) to quantize MatMul ops into MatMulInteger ops. We use dynamic per-row quantization for inputs of LSTM MatMul, so MatMul becomes three parts: quantization, MatMulInteger and dequantization. Weights for MatMulInteger are statically quantized per-column to int8. We have observed good speed-up and no loss of accuracy with this quantization scheme inside Scan for various LSTM models. @@ -57,9 +59,11 @@ As an experiment, you may test conversion and quantization on [the BiDAF model]( Speed-up in this model is ~20% on Intel Xeon E5-1620v4 (Note that AVX2 is required for Nuphar int8 GEMV performance), when comparing CPU execution provider with the floating point model with LSTM ops, vs. the Nuphar execution provider with quantized MatMulInteger inside Scan ops. Profile shows that most of the cost is in input projection outside of Scan ops, which uses MKL SGEMM. It's worth noting that MKL int8 GEMM is about the same speed as SGEMM in this model, so quantization of SGEMMs outside of Scan won't help performance. We are looking at ways to speedup int8 GEMM for better performance on quantized models. ## JIT caching -You may cache JIT binaries to reduce model loading time spent in JIT, using [create_shared.cmd](../../onnxruntime/core/providers/nuphar/scripts/create_shared.cmd) on Windows with Visual Studio 2017, or [create_shared.sh](../../onnxruntime/core/providers/nuphar/scripts/create_shared.sh) on Linux with gcc. + +You may cache JIT binaries to reduce model loading time spent in JIT, using [create_shared.cmd](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/scripts/create_shared.cmd) on Windows with Visual Studio 2017, or [create_shared.sh](../../onnxruntime/core/providers/nuphar/scripts/create_shared.sh) on Linux with gcc. Windows + ``` REM You need to have Visual Studio 2017 for compile and link. Optionally, you can save model checksum to the output dll with FCIV tool from https://support.microsoft.com/en-us/help/841290 set NUPHAR_CACHE_PATH=\path\to\jit\cache @@ -72,6 +76,7 @@ REM Run Nuphar inference again with cached JIT dll ``` Linux + ```bash # You need to have GCC of the same version Nuphar is built with, for compile and link. 
Optionally, you can save model checksum to jit.so with md5sum export NUPHAR_CACHE_PATH=/path/to/jit/cache @@ -83,27 +88,31 @@ create_shared.sh -c /path/to/jit/cache/NUPHAR_CACHE_VERSION [-m optional_model_f # run Nuphar inference again with cached JIT dll ``` - ## Debugging ### NGEMM + NGEMM (Nuphar GEMM) is an optimized low-precision GEMM implementation based on compiler techniques. Please refer to our paper for more details of NGEMM: ["NGEMM: Optimizing GEMM for Deep Learning via Compiler-based Techniques"](https://arxiv.org/abs/1910.00178). #### NGEMM Tiling / Permutation Configuration + NGEMM has default tiling parameters, but users can overwrite them through environment variables: + * NUPHAR_IGEMM_TILE_M / NUPHAR_IGEMM_TILE_N / NUPHAR_IGEMM_TILE_K These 3 parameters are the tiling sizes for the corresponding dimensions of GEMM ([M x K] x [K x N]). + Setting them to different values will generate GEMM with different tiling sizes. * NUPHAR_IGEMM_PERMUTE - This enviornment variable is to control the loop permutation in GEMM. + This environment variable is to control the loop permutation in GEMM. + The default is to not apply any loop permutation. Other options are "inner/outer/all",referring to apply permutations to only inner tile loops / only outer loops / both inner and outer loops, respectively. + There are several [environment variables](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/codegen/common/settings.h) to dump debug information during code generation, plus [some more environment variables](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/common/nuphar_settings.h) to dump/control the Nuphar execution provider. You can set environment variables prior to inference to dump debug info to the console. To list some most useful ones: -There are several [environment variables](../../onnxruntime/core/codegen/common/settings.h) to dump debug information during code generation, plus [some more environment variables](../../onnxruntime/core/providers/nuphar/common/nuphar_settings.h) to dump/control the Nuphar execution provider. You can set environment variables prior to inference to dump debug info to the console. To list some most useful ones: * CODEGEN_DUMP_LOWER Dumps the lowered function from TVM. @@ -129,13 +138,14 @@ There are several [environment variables](../../onnxruntime/core/codegen/common/ Set it to "1" to dump partitions. ## Settings + When there are conflicts of environment variables running Nuphar in multiple processes, user can specify settings string when creating the Nuphar execution provider. The string comprises of comma separated key:value pairs. Keys should be lower cased environment variable names as shown above, and separated from corresponding values with colon. For example, the equivalent string of setting environment variables of NUPHAR_CACHE_PATH/NUPHAR_CACHE_MODEL_CHECKSUM would be "nuphar_cache_path:, nuphar_cache_model_checksum:". 
* Using in C/C++ Settings string could be specified when creating execution provider to specify JIT cache path, as well as model checksum: -``` +```c++ OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_Nuphar(session_options, 1, "nuphar_cache_path:/path/to/cache, nuphar_cache_model_checksum:")); ``` @@ -143,7 +153,7 @@ OrtStatus* status = OrtSessionOptionsAppendExecutionProvider_Nuphar(session_opti Settings string could be specified when creating session options: -``` +```csharp SessionOptions.MakeSessionOptionWithNupharProvider("nuphar_cache_path:/path/to/cache, nuphar_cache_model_checksum:") ``` @@ -151,29 +161,31 @@ SessionOptions.MakeSessionOptionWithNupharProvider("nuphar_cache_path:/path/to/c Settings string should be passed in before InferenceSession is created, as providers are not currently exposed yet. Here's an example in Python to set cache path and model checksum: -``` +```python nuphar_settings = 'nuphar_cache_path:{}, nuphar_cache_model_checksum:{}'.format(cache_dir, model_checksum) onnxruntime.capi._pybind_state.set_nuphar_settings(nuphar_settings) sess = onnxruntime.InferenceSession(model_path) ``` ## Known issues + * ONNX shape inference dependency - To save runtime JIT cost, Nuphar requires models to have shape inference information from ONNX after model is loaded. Some nodes in ONNX can generate dynamic output tensor shapes from input data value, i.e. ConstantOfShape, Tile, Slice in opset 10, Compress, etc. Those ops may block ONNX shape inference and make the part of graph after such nodes not runnable in Nuphar. +To save runtime JIT cost, Nuphar requires models to have shape inference information from ONNX after model is loaded. Some nodes in ONNX can generate dynamic output tensor shapes from input data value, i.e. ConstantOfShape, Tile, Slice in opset 10, Compress, etc. Those ops may block ONNX shape inference and make the part of graph after such nodes not runnable in Nuphar. - User may use Python script [symbolic_shape_infer.py](../../onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py) to run symbolic shape inference in ONNX model. This script adds output tensor shapes in the model in graph.value_info field, by doing symbolic dimension computation using sympy when there are Shape ops in model. Besides, running symbolic shape inference on ONNX model would make the graph more readable. Note that when using [model_editor.py](../../onnxruntime/core/providers/nuphar/scripts/model_editor.py) to convert models with LSTM/GRU/RNN to Scan, the resulting model may have incomplete shape inference. Running symbolic_shape_infer.py is needed to get the Scan ops in the model to run in Nuphar. Besides, please note that quantization should be the last step, after verified accuracy and performance of the edited floating point model. +User may use Python script [symbolic_shape_infer.py](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py) to run symbolic shape inference in ONNX model. This script adds output tensor shapes in the model in graph.value_info field, by doing symbolic dimension computation using sympy when there are Shape ops in model. Besides, running symbolic shape inference on ONNX model would make the graph more readable. 
Note that when using [model_editor.py](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/providers/nuphar/scripts/model_editor.py) to convert models with LSTM/GRU/RNN to Scan, the resulting model may have incomplete shape inference. Running symbolic_shape_infer.py is needed to get the Scan ops in the model to run in Nuphar. Besides, please note that quantization should be the last step, after verified accuracy and performance of the edited floating point model. - In addition, user may also manually add shapes to graph.value_info using [onnx.helper.make_tensor_value_info](https://github.com/onnx/onnx/blob/v1.5.0/onnx/helper.py#L290) with model specific knowledge. For example, if you have Hardmax output casted to bool as Compress input condition, then the unknown dimension of the output of Compress is actually 1. +In addition, user may also manually add shapes to graph.value_info using [onnx.helper.make_tensor_value_info](https://github.com/onnx/onnx/blob/v1.5.0/onnx/helper.py#L290) with model specific knowledge. For example, if you have Hardmax output casted to bool as Compress input condition, then the unknown dimension of the output of Compress is actually 1. * Performance benchmark - Current Nuphar's speed-up in quantized RNNs is optimized for AVX2, when running in single thread and batch size is 1. To help understand RNN performance in different configurations, please use Python script [rnn_benchmark.py](../../onnxruntime/core/providers/nuphar/scripts/rnn_benchmark.py). For older X64 CPUs that do not support AVX2, quantized model may have worse performance than non-quantized ones. +Current Nuphar's speed-up in quantized RNNs is optimized for AVX2, when running in single thread and batch size is 1. To help understand RNN performance in different configurations, please use Python script [rnn_benchmark.py](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/scripts/rnn_benchmark.py). For older X64 CPUs that do not support AVX2, quantized model may have worse performance than non-quantized ones. * Patches to TVM - There are some changes/bug fixes in TVM for Nuphar to work properly. We are in the process of contributing them back to TVM, but for now patches are used in [our forked TVM](https://github.com/microsoft/onnxruntime-tvm). To build cleanly from scratch, please run following commands before running build.bat or build.sh: -``` +There are some changes/bug fixes in TVM for Nuphar to work properly. We are in the process of contributing them back to TVM, but for now patches are used in [our forked TVM](https://github.com/microsoft/onnxruntime-tvm). To build cleanly from scratch, please run following commands before running build.bat or build.sh: + +```bash git submodule sync git submodule foreach --recursive git stash git submodule foreach --recursive git clean -fd diff --git a/docs/reference/execution-providers/TensorRT-ExecutionProvider.md b/docs/reference/execution-providers/TensorRT-ExecutionProvider.md index 78bdc2e10b0c2..e70a9acfef61e 100644 --- a/docs/reference/execution-providers/TensorRT-ExecutionProvider.md +++ b/docs/reference/execution-providers/TensorRT-ExecutionProvider.md @@ -10,7 +10,7 @@ nav_order: 12 The TensorRT execution provider in the ONNX Runtime makes use of NVIDIA's [TensortRT](https://developer.nvidia.com/tensorrt) Deep Learning inferencing engine to accelerate ONNX model in their family of GPUs. 
Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime. -With the TensorRT execution provider, the ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. +With the TensorRT execution provider, the ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. ## Contents {: .no_toc } @@ -18,16 +18,19 @@ With the TensorRT execution provider, the ONNX Runtime delivers better inferenci * TOC placeholder {:toc} - ## Build -For build instructions, please see the [BUILD page](../../how-to/build.md#tensorrt). + +For build instructions, please see the [BUILD page](../../how-to/build.md#tensorrt). The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 7.1.3.4. ## Using the TensorRT execution provider + ### C/C++ + The TensorRT execution provider needs to be registered with ONNX Runtime to enable in the inference session. -``` + +```c++ string log_id = "Foo"; auto logging_manager = std::make_unique (std::unique_ptr{new CLogSink{}}, @@ -40,38 +43,47 @@ InferenceSession session_object{so,env}; session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::TensorrtExecutionProvider>()); status = session_object.Load(model_file_name); ``` + The C API details are [here](../api/c-api.md). #### Shape Inference for TensorRT Subgraphs + If some operators in the model are not supported by TensorRT, ONNX Runtime will partition the graph and only send supported subgraphs to TensorRT execution provider. Because TensorRT requires that all inputs of the subgraphs have shape specified, ONNX Runtime will throw error if there is no input shape info. In this case please run shape inference for the entire model first by running script [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py). #### Sample + This example shows how to run Faster R-CNN model on TensorRT execution provider, First, download Faster R-CNN onnx model from onnx model zoo [here](https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/faster-rcnn). Second, infer shapes in the model by running shape inference script [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py), -``` + +```bash python symbolic_shape_infer.py --input /path/to/onnx/model/model.onnx --output /path/to/onnx/model/new_model.onnx --auto_merge ``` Third, replace original model with the new model and run onnx_test_runner tool under ONNX Runtime build directory, -``` + +```bash ./onnx_test_runner -e tensorrt /path/to/onnx/model/ ``` ### Python + When using the Python wheel from the ONNX Runtime build with TensorRT execution provider, it will be automatically prioritized over the default GPU or CPU execution providers. There is no need to separately register the execution provider. Python APIs details are . -#### Sample -Please see [this Notebook](../python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services. +#### Python Sample + +Please see [this Notebook](https://github.com/microsoft/onnxruntime/blob/master/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services. 
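As a quick sanity check (a minimal sketch, assuming a TensorRT-enabled build of the wheel and a local `model.onnx`), you can confirm the provider priority from the session object:

```python
import onnxruntime as ort

# With a TensorRT-enabled build, the TensorRT EP is registered ahead of CUDA/CPU.
session = ort.InferenceSession("model.onnx")
print(session.get_providers())  # TensorrtExecutionProvider should be listed first
```
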
## Performance Tuning + For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md) When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e tensorrt` ## Configuring environment variables + There are four environment variables for TensorRT execution provider. ORT_TENSORRT_MAX_WORKSPACE_SIZE: maximum workspace size for TensorRT engine. diff --git a/docs/resources/compatibility.md b/docs/resources/compatibility.md index 00c681e8348b3..7fe7e5b48db71 100644 --- a/docs/resources/compatibility.md +++ b/docs/resources/compatibility.md @@ -11,8 +11,8 @@ Supporting models based on the standard [ONNX](https://onnx.ai) format, the runt * [Getting ONNX models - tutorials](https://github.com/onnx/tutorials#getting-onnx-models) -ONNX Runtime is up to date and backwards compatible with all operators (both DNN and traditional ML) since ONNX v1.2.1+. [(ONNX compatibility details)](docs/Versioning.md). Newer versions of ONNX Runtime support all models that worked with prior versions, so updates should not break integrations. +ONNX Runtime is up to date and backwards compatible with all operators (both DNN and traditional ML) since ONNX v1.2.1+. [(ONNX compatibility details)](docs/Versioning.md). Newer versions of ONNX Runtime support all models that worked with prior versions, so updates should not break integrations. -* [Supported operators/types](resources/operators/OperatorKernels.md) - * *Operators not supported in the current ONNX spec may be available as a [Contrib Operator](resource/operators/ContribOperators.md)* -* [Extensibility: Add a custom operator/kernel](docs/AddingCustomOp.md) +* [Supported operators/types](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md) + * *Operators not supported in the current ONNX spec may be available as a [Contrib Operator](https://github.com/microsoft/onnxruntime/blob/master/docs/ContribOperators.md)* +* [Extensibility: Add a custom operator/kernel](../how-to/add-custom-op.md) diff --git a/docs/resources/high-level_design.md b/docs/resources/high-level-design.md similarity index 98% rename from docs/resources/high-level_design.md rename to docs/resources/high-level-design.md index ed3548ae6c2bf..eba475ae4a16b 100644 --- a/docs/resources/high-level_design.md +++ b/docs/resources/high-level-design.md @@ -73,8 +73,9 @@ the default execution provider or other registered execution providers. The ONNXRuntime execution engine is responsible for running this graph. ## Key design decisions + * Multiple threads can invoke the Run() method on the same -inference session object. See [API doc](C_API.md) for more details. +inference session object. See [API doc](../reference/api/c-api.md) for more details. * To facilitate this, the Compute() function of all kernels is const implying the kernels are stateless. * Implementations of the operators by execution providers are called diff --git a/docs/tutorials/fasterrcnn_csharp.md b/docs/tutorials/fasterrcnn_csharp.md index c7b6044c04739..356a2435abe6f 100644 --- a/docs/tutorials/fasterrcnn_csharp.md +++ b/docs/tutorials/fasterrcnn_csharp.md @@ -8,7 +8,7 @@ nav_order: 3 The sample walks through how to run a pretrained Faster R-CNN object detection ONNX model using the ONNX Runtime C# API. -The source code for this sample is available [here](Program.cs). 
+The source code for this sample is available [here](https://github.com/microsoft/onnxruntime/blob/master/csharp/sample/Microsoft.ML.OnnxRuntime.FasterRcnnSample/Program.cs). ## Contents {: .no_toc } diff --git a/docs/tutorials/mnist_java.md b/docs/tutorials/mnist_java.md index 73058029a36d4..2e82479e2e863 100644 --- a/docs/tutorials/mnist_java.md +++ b/docs/tutorials/mnist_java.md @@ -6,43 +6,52 @@ nav_order: 5 # Character recognition with MNIST in Java {: .no_toc } -Here is simple tutorial for getting started with running inference on an existing ONNX model for a given input data. The model is typically trained using any of the well-known training frameworks and exported into the ONNX format. +Here is simple tutorial for getting started with running inference on an existing ONNX model for a given input data. The model is typically trained using any of the well-known training frameworks and exported into the ONNX format. + Note the code presented below uses syntax available from Java 10 onwards. The Java 8 syntax is similar but more verbose. To start a scoring session, first create the `OrtEnvironment`, then open a session using the `OrtSession` class, passing in the file path to the model as a parameter. - + +```java var env = OrtEnvironment.getEnvironment(); var session = env.createSession("model.onnx",new OrtSession.SessionOptions()); +``` Once a session is created, you can execute queries using the `run` method of the `OrtSession` object. At the moment we support `OnnxTensor` inputs, and models can produce `OnnxTensor`, `OnnxSequence` or `OnnxMap` outputs. The latter two are more likely when scoring models produced by frameworks like scikit-learn. The run call expects a `Map` where the keys match input node names stored in the model. These can be viewed by calling `session.getInputNames()` or `session.getInputInfo()` on an instantiated session. The run call produces a `Result` object, which contains a `Map` representing the output. The `Result` object is `AutoCloseable` and can be used in a try-with-resources statement to prevent references from leaking out. Once the `Result` object is closed, all it's child `OnnxValue`s are closed too. - + +```java OnnxTensor t1,t2; var inputs = Map.of("name1",t1,"name2",t2); try (var results = session.run(inputs)) { - // manipulate the results - } + // manipulate the results + } +``` You can load your input data into OnnxTensor objects in several ways. The most efficient way is to use a `java.nio.Buffer`, but it's possible to use multidimensional arrays too. If constructed using arrays the arrays must not be ragged. +```java FloatBuffer sourceData; // assume your data is loaded into a FloatBuffer long[] dimensions; // and the dimensions of the input are stored here var tensorFromBuffer = OnnxTensor.createTensor(env,sourceData,dimensions); float[][] sourceArray = new float[28][28]; // assume your data is loaded into a float array var tensorFromArray = OnnxTensor.createTensor(env,sourceArray); +``` -Here is a [complete sample program](../java/src/test/java/sample/ScoreMNIST.java) that runs inference on a pretrained MNIST model. +Here is a [complete sample program](https://github.com/microsoft/onnxruntime/blob/master/java/src/test/java/sample/ScoreMNIST.java) that runs inference on a pretrained MNIST model. ## Running on a GPU or with another provider (Optional) To enable other execution providers like GPUs simply turn on the appropriate flag on SessionOptions when creating an OrtSession. 
+```java int gpuDeviceId = 0; // The GPU device ID to execute on var sessionOptions = new OrtSession.SessionOptions(); sessionOptions.addCUDA(gpuDeviceId); var session = environment.createSession("model.onnx", sessionOptions); +``` The execution providers are preferred in the order they were enabled. diff --git a/docs/tutorials/samples_catalog.md b/docs/tutorials/samples_catalog.md index 954558edac76c..c896b0beb2b0f 100644 --- a/docs/tutorials/samples_catalog.md +++ b/docs/tutorials/samples_catalog.md @@ -30,8 +30,8 @@ This page catalogs code samples for ONNX Runtime, running locally, and on Azure, ## C/C++ * [C: SqueezeNet](https://github.com/microsoft/onnxruntime/tree/master/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/C_Api_Sample.cpp) -* [C++: model-explorer](https://github.com/microsoft/onnxruntime/tree/master/c_cxx/model-explorer) - single and batch processing -* [C++: SqueezeNet](https://github.com/microsoft/onnxruntime/tree/mastercsharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp) +* [C++: model-explorer](https://github.com/microsoft/onnxruntime/tree/master/samples/c_cxx/model-explorer) - single and batch processing +* [C++: SqueezeNet](https://github.com/microsoft/onnxruntime/tree/master/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp) ## Java @@ -41,7 +41,6 @@ This page catalogs code samples for ONNX Runtime, running locally, and on Azure, * [Inference with Nodejs](https://github.com/microsoft/onnxruntime/tree/master/samples/nodejs) - --- ## Azure Machine Learning @@ -58,7 +57,7 @@ This page catalogs code samples for ONNX Runtime, running locally, and on Azure, * Inferencing on **CPU** with model conversion for existing (CoreML) model: * [TinyYolo](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb) * Inferencing on **GPU** with **TensorRT** Execution Provider (AKS): - * [FER+](.https://github.com/microsoft/onnxruntime/tree/master/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) + * [FER+](https://github.com/microsoft/onnxruntime/tree/master/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) ## Azure IoT Edge @@ -79,5 +78,4 @@ This page catalogs code samples for ONNX Runtime, running locally, and on Azure, ## ML.NET [Object Detection with ONNX Runtime in ML.NET](https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/object-detection-onnx) ---- - +--- \ No newline at end of file diff --git a/images/mkl-dnn_node.png b/images/mkl-dnn_node.png new file mode 100644 index 0000000000000000000000000000000000000000..d2863f8938143c3c4d4a01155e5a9b472d5cc979 GIT binary patch literal 51197 zcmeFZXH=8j`!0xrfE2+>6-1OS9V1;t=?IEQ2azVydkaKBf(l4)QX(%+1gX+X1f-YH zI|QT?dJhROJNW+2{AZnW=FEqgS@U7CmJ11p5Bu5szOVZ#&qvLtiqsU06huTs)XGYa zpA!)g2N4mS=OHHr??kU{2!IdgT%IdFBFgV(UIw3#Sjnl&5fK%IQ{qi8fX^>FD(Sfp z5ncLC_;W6n_YxBkQA?EaW4RZe#;cRmh1aydES?6E*Y$q3ll(b(q^PfFZU>Kx(`010 z`};% zE0QAXtLE9;83Ei}*UsL{c|}3-=8&X5v)(a60e;ap!Z|yyA5$gzpdvw%x|* z$5gZ5ExBNZTJy#+acW28S=SBUB?21V(VHB`ACq){vhhsm(5gaB`HqxVZSDPmXcbr8dJy z*5`jeO_9JZ-;t3={h1AyWijS9+L)@Htlp@@Wb6FUpFf#lJ*f|46l?e0tS7VYP8J_r zCK|`?6~NZAG!X0PMHeTc5U1JZd398hL*23u=fUdetj~=^4k9r=MGbaLs<6eU)0!EV ztDH}F&+%pz+Ub98*dl`B+g4192{=VPD&q4%o1zR5*0Jcn|LAYeoMzvxzz4-OCHya;Z}ss zHX^scg&>RHh24v(KHX#=lu1-~`-IG+Z{^qhnUa7fr26iUny$b2_2!RphYLy?qBbb% zJlDE{zQG9h6_&c+EHk#j4sn<=wE4g%uR#q-O|$B;|EG}^`wR28aIsKObhLV}IH#4{ 