Fix broken links #3
natke committed Sep 30, 2020
1 parent 373d02c commit 9d89cf4
Showing 12 changed files with 25 additions and 22 deletions.
13 changes: 8 additions & 5 deletions docs/how-to/add-custom-op.md
@@ -13,26 +13,29 @@ nav_order: 2
* TOC placeholder
{:toc}

## A new op can be written and registered with ONNXRuntime in the following 3 ways
A new op can be written and registered with ONNXRuntime in the following 3 ways

### 1. Using the experimental custom op API in the C API (onnxruntime_c_api.h)
1. Using the experimental custom op API in the C API (onnxruntime_c_api.h)

Note: These APIs are experimental and will change in the next release. They're released now for feedback and experimentation.

* Create an OrtCustomOpDomain with the domain name used by the custom ops
* Create an OrtCustomOp structure for each op and add them to the OrtCustomOpDomain with OrtCustomOpDomain_Add
* Call OrtAddCustomOpDomain to add the custom domain of ops to the session options
See [this](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/test/shared_lib/test_inference.cc) for an example called MyCustomOp that uses the C++ helper API (onnxruntime_cxx_api.h); a minimal sketch of the registration flow is also shown below.
Currently, the only supported Execution Providers (EPs) for custom ops registered via this approach are the `CUDA` and the `CPU` EPs.
Currently, the only supported Execution Providers (EPs) for custom ops registered via this approach are the `CUDA` and the `CPU` EPs.
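
Below is a minimal sketch of those three steps using the C++ helper API (onnxruntime_cxx_api.h). The custom op itself (for example, MyCustomOp from the linked test) is assumed to be implemented elsewhere, and since the API is experimental the exact signatures may differ between releases.

```c++
// Sketch only: assumes the custom op (e.g. MyCustomOp from test_inference.cc)
// is implemented elsewhere and passed in as an OrtCustomOp*. Signatures may
// change between releases because this API is experimental.
#include <onnxruntime_cxx_api.h>

void RunModelWithCustomOp(OrtCustomOp* custom_op) {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_example");
  Ort::SessionOptions session_options;

  // 1. Create a custom op domain with the domain name used by the custom ops.
  Ort::CustomOpDomain domain("mydomain");
  // 2. Add each OrtCustomOp structure to the domain.
  domain.Add(custom_op);
  // 3. Add the custom op domain to the session options.
  session_options.Add(domain);

  // Ops in "mydomain" are now resolvable when the model is loaded.
  Ort::Session session(env, ORT_TSTR("model_with_custom_op.onnx"), session_options);
}
```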

2. Using RegisterCustomRegistry API

### 2. Using RegisterCustomRegistry API
* Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder).
* Create a CustomRegistry object and register your kernel and schema with this registry.
* Register the custom registry with ONNXRuntime using RegisterCustomRegistry API.

See [this](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/test/framework/local_kernel_registry_test.cc) for an example; a brief sketch of the flow also follows below.
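
A rough sketch of this flow is shown below. It relies on internal onnxruntime headers, and the CustomRegistry/InferenceSession calls are assumptions based on the linked test, so treat it as illustrative rather than authoritative.

```c++
// Sketch only: uses internal headers from the onnxruntime include folder.
// The CustomRegistry / RegisterCustomRegistry usage below is an assumption
// based on local_kernel_registry_test.cc and may differ across versions.
#include <memory>
#include "core/framework/customregistry.h"
#include "core/session/inference_session.h"

onnxruntime::common::Status RegisterMyCustomRegistry(onnxruntime::InferenceSession& session) {
  // Create a registry to hold the custom schema and kernel.
  auto registry = std::make_shared<onnxruntime::CustomRegistry>();

  // Register your schema (if required) and kernel with this registry here,
  // using the OpSchema and OpKernel APIs, e.g. registry->RegisterOpSet(...)
  // and registry->RegisterCustomKernel(...) as in the linked test.

  // Register the custom registry with ONNX Runtime.
  return session.RegisterCustomRegistry(registry);
}
```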

### 3. Contributing the op to ONNXRuntime
3. Contributing the op to ONNXRuntime

This is mostly meant for ops that are in the process of being proposed to ONNX. This way you don't have to wait for approval from the ONNX team
if the op is required in production today.
See [this](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/contrib_ops) for an example.
2 changes: 1 addition & 1 deletion docs/how-to/custom-python-operator.md
@@ -17,7 +17,7 @@ The Python Operator provides the capability to easily invoke any custom Python c

## Design Overview

The feature can be found under [onnxruntime/core/language_interop_ops](../onnxruntime/core/language_interop_ops).
The feature can be found in [onnxruntime/core/language_interop_ops](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/core/language_interop_ops).

Here is a chart of the calling sequence:

4 changes: 2 additions & 2 deletions docs/reference/execution-providers/ACL-ExecutionProvider.md
@@ -33,10 +33,10 @@ session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::ACLExec
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).
## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md)
When/if using [onnxruntime_perf_test](../../onnxruntime/test/perftest), use the flag -e acl
When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest), use the flag -e acl
4 changes: 2 additions & 2 deletions docs/reference/execution-providers/ArmNN-ExecutionProvider.md
@@ -28,9 +28,9 @@ InferenceSession session_object{so, env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::ArmNNExecutionProvider>());
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).

### Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md)

When/if using [onnxruntime_perf_test](../../onnxruntime/test/perftest), use the flag -e armnn
When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest), use the flag -e armnn
@@ -47,7 +47,7 @@ InferenceSession session_object{so,env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime:: DNNLExecutionProvider >());
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).

### Python
When using the Python wheel from the ONNX Runtime built with the DNNL execution provider, it will be automatically prioritized over the CPU execution provider. Python API details are [here](https://aka.ms/onnxruntime-python).
@@ -37,7 +37,7 @@ status = session_object.Load(model_file_name);
```
You can check [here](https://github.com/scxiao/ort_test/tree/master/char_rnn) for a specific C/C++ program.

The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).

### Python
When using the Python wheel from the ONNX Runtime build with MIGraphX execution provider, it will be automatically
@@ -50,7 +50,7 @@ model on either the CPU or MIGraphX Execution Provider.
## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md)

When/if using [onnxruntime_perf_test](../../onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx`
When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e migraphx`

## Configuring environment variables
MIGraphX provides an environment variable ORT_MIGRAPHX_FP16_ENABLE to enable the FP16 mode.
@@ -41,4 +41,4 @@ InferenceSession session_object{so,env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::NnapiExecutionProvider>());
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).
@@ -23,13 +23,13 @@ For build instructions, please see the [BUILD page](../../how-to/build.md#nuphar

## Using the Nuphar execution provider
### C/C++
The Nuphar execution provider needs to be registered with ONNX Runtime to be enabled in the inference session. The C API details are [here](../api/c-api.md.md).
The Nuphar execution provider needs to be registered with ONNX Runtime to be enabled in the inference session. The C API details are [here](../api/c-api.md).

### Python
You can use the Nuphar execution provider via the Python wheel from the ONNX Runtime build. The Nuphar execution provider will be automatically prioritized over the default CPU execution provider, so there is no need to separately register the execution provider. Python API details are [here](../python/api_summary.rst#api-summary).

## Performance and Accuracy Testing
You can test your ONNX model's performance with [onnxruntime_perf_test](../../onnxruntime/test/perftest/README.md), or test accuracy with [onnx_test_runner](../../onnxruntime/test/onnx/README.txt). To run these tools with the Nuphar execution provider, please pass `-e nuphar` in command line options.
You can test your ONNX model's performance with [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest/README.md), or test accuracy with [onnx_test_runner](../../onnxruntime/test/onnx/README.txt). To run these tools with the Nuphar execution provider, please pass `-e nuphar` in command line options.

Please note that Nuphar uses the TVM thread pool and parallel schedule for multi-thread inference performance. When building with OpenMP or MKLML, the TVM thread pool uses gomp or iomp as its implementation; otherwise, TVM creates its own thread pool. Because of this, the current default parallel schedule policy is:
- Defaults to on for USE_OPENMP or USE_MKLML. Users can use OMP_NUM_THREADS/MKL_NUM_THREADS to control the TVM thread pool, as well as TVM_NUM_THREADS
@@ -41,7 +41,7 @@ InferenceSession session_object{so,env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::RknpuExecutionProvider>());
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).


## Supported Operators
@@ -40,7 +40,7 @@ InferenceSession session_object{so,env};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::TensorrtExecutionProvider>());
status = session_object.Load(model_file_name);
```
The C API details are [here](../api/c-api.md.md).
The C API details are [here](../api/c-api.md).

#### Shape Inference for TensorRT Subgraphs
If some operators in the model are not supported by TensorRT, ONNX Runtime will partition the graph and only send supported subgraphs to the TensorRT execution provider. Because TensorRT requires that all inputs of the subgraphs have their shapes specified, ONNX Runtime will throw an error if there is no input shape info. In this case, please run shape inference for the entire model first by running the script [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/providers/nuphar/scripts/symbolic_shape_infer.py).
@@ -69,7 +69,7 @@ Please see [this Notebook](../python/notebooks/onnx-inference-byoc-gpu-cpu-aks.i
## Performance Tuning
For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../../how-to/tune-performance.md)

When/if using [onnxruntime_perf_test](../../onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e tensorrt`
When/if using [onnxruntime_perf_test](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/test/perftest#onnxruntime-performance-test), use the flag `-e tensorrt`

## Configuring environment variables
There are four environment variables for the TensorRT execution provider.
@@ -65,7 +65,7 @@ The C API details are [here](../api/c-api.md).
### Python
When using the Python wheel from the ONNX Runtime built with the nGraph execution provider, it will be automatically prioritized over the CPU execution provider. Python API details are [here](../python/api_summary.rst#api-summary).
When using the Python wheel from the ONNX Runtime built with the nGraph execution provider, it will be automatically prioritized over the CPU execution provider. Python API details are [here](/python/api_summary).
## Performance Tuning
4 changes: 2 additions & 2 deletions docs/tutorials/resnet50_csharp.md
@@ -9,7 +9,7 @@ nav_order: 2

The sample walks through how to run a pretrained ResNet50 v2 ONNX model using the ONNX Runtime C# API.

The source code for this sample is available [here](Program.cs).
The source code for this sample is available [here](https://github.com/microsoft/onnxruntime/tree/master/csharp/sample/Microsoft.ML.OnnxRuntime.ResNet50v2Sample).

## Contents
{: .no_toc }
@@ -23,7 +23,7 @@ To run this sample, you'll need the following things:

1. Install [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet-core/3.1) or higher for your OS (Mac, Windows or Linux).
2. Download the [ResNet50 v2](https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v2-7.onnx) ONNX model to your local system.
3. Download [this picture of a dog](dog.jpeg) to test the model. You can also use any image you like.
3. Download [this picture of a dog](/images/dog.jpeg) to test the model. You can also use any image you like.

## Getting Started
