[DOCS] ARM CPU plugin docs #10885

Merged
merged 19 commits into releases/2022/1 from alvoron-arm-docs
Mar 15, 2022

Conversation

alvoron
Contributor

@alvoron alvoron commented Mar 10, 2022

TODO:

  • get disclaimer wording from Matthew and add it to device page
  • update the list of supported layers

@ilya-lavrenov ilya-lavrenov added this to the 2022.1 milestone Mar 11, 2022
@ilya-lavrenov ilya-lavrenov added the "port to master" label (Required port to master from 2022.3 LTS) Mar 11, 2022
### Read-write properties
All parameters must be set before calling `ov::Core::compile_model()` in order to take effect, or passed as an additional argument to `ov::Core::compile_model()`:

- ov::enable_profiling
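
For illustration, a minimal sketch of both ways to apply such a property, using `ov::enable_profiling`; the `"CPU"` device name and the model path are assumptions made for this example:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Read a model from disk ("model.xml" is a placeholder path).
    auto model = core.read_model("model.xml");

    // Option 1: set the property before compile_model() so it takes effect.
    core.set_property("CPU", ov::enable_profiling(true));

    // Option 2: pass the property as an additional argument to compile_model().
    auto compiled_model = core.compile_model(model, "CPU", ov::enable_profiling(true));

    return 0;
}
```
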
Contributor

I suppose you also support streams, pinning, etc.
@apankratovantonp, please provide a full list of properties.

Contributor

Though these options are in the supported list, they do not provide the expected threading control. We test only in latency (sync) mode. I am going to enable the TBB threading backend and provide full threading control, as in the CPU plugin. It won't give any valuable throughput gain, but it will give full threading control, so we can claim support for these options.
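
As a side note on the full list of properties: a hedged way to see which properties a particular build actually reports, and which of them are read-write, is to query `ov::supported_properties` at runtime (the `"CPU"` device name is an assumption here):

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Print every property the device reports, marking read-write vs read-only.
    for (const auto& prop : core.get_property("CPU", ov::supported_properties)) {
        std::cout << prop << (prop.is_mutable() ? " (read-write)" : " (read-only)") << std::endl;
    }
    return 0;
}
```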

| Activation-Not | Supported |Supported\*\*\*| Supported | Not Supported | ? | Supported |
| Activation-PReLU | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
| Activation-ReLU | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
| Activation-ReLU6 | Supported |Supported\*\*\*| Supported | Not Supported | ? | Supported |
Contributor

There is no spec for ReLU-6

| Cosh | Supported | Supported\*\* | Not Supported | Not Supported |Supported\*\*\*\*| Supported |
| Crop | Supported | Supported | Supported | Supported | ? | Supported |
| CTCGreedyDecoder | Supported\*\* | Supported\*\* | Supported\* | Not Supported |Supported\*\*\*\*| Supported |
| Deconvolution | Supported | Supported | Supported | Not Supported | ? | Supported |
Contributor

There are no specs for Deconvolution and Crop.
However, I've found a mention of a Crop layer in an issue created in 2018.

| Eltwise-Mul | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
| Eltwise-NotEqual | Supported |Supported\*\*\*| Supported | Not Supported | Supported\* | Supported |
| Eltwise-Pow | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
| Eltwise-Prod | Supported |Supported\*\*\*| Supported | Supported | ? | Supported |
Contributor

There is no spec for Eltwise-Prod

| Eltwise-Prod | Supported |Supported\*\*\*| Supported | Supported | ? | Supported |
| Eltwise-SquaredDiff | Supported |Supported\*\*\*| Supported | Not Supported | Supported | Supported |
| Eltwise-Sub | Supported |Supported\*\*\*| Supported | Supported | Supported | Supported |
| Eltwise-Sum | Supported |Supported\*\*\*| Supported | Supported | ? | Supported |
Contributor

There is no spec for Eltwise-Sum. Should it be considered the CumSum layer?

@alalek alalek left a comment

Need to align:

The ARM® CPU plugin
The Arm® CPU plugin

## Introducing the Arm® CPU Plugin
The Arm® CPU plugin is developed to enable deep neural network inference on Arm® CPUs, using [Compute Library](https://github.com/ARM-software/ComputeLibrary) as a backend.

The Arm® CPU plugin is not a part of the Intel® Distribution of OpenVINO™ toolkit and is not distributed in pre-built form. To use the plugin, it should be built from source code. The build procedure is described on the [How to build Arm® CPU plugin](https://github.com/openvinotoolkit/openvino_contrib/wiki/How-to-build-ARM-CPU-plugin) page.

should be built


described on ... page ?


- Floating-point data types:
  - f32
  - f16

F32 as it is used below in this form.

Contributor Author

I set f32 everywhere

@alvoron alvoron marked this pull request as ready for review March 15, 2022 10:03
@alvoron alvoron requested a review from a team as a code owner March 15, 2022 10:03
@alvoron alvoron requested review from a team and avladimi and removed request for a team March 15, 2022 10:03
@azhogov azhogov merged commit 6cf81ad into openvinotoolkit:releases/2022/1 Mar 15, 2022
@alvoron alvoron deleted the alvoron-arm-docs branch March 15, 2022 14:24
ilya-lavrenov pushed a commit to ilya-lavrenov/openvino that referenced this pull request Mar 18, 2022
* initial commit

ARM_CPU.md added
ARM CPU is added to the list of supported devices

* Update the list of supported properties

* Update Device_Plugins.md

* Update CODEOWNERS

* Removed quotes in limitations section

* NVIDIA and Android are added to the list of supported devices

* Added See Also section and reg sign to arm

* Added Preprocessing acceleration section

* Update the list of supported layers

* updated list of supported layers

* fix typos

* Added support disclaimer

* update trade and reg symbols

* fixed typos

* fix typos

* reg fix

* add reg symbol back

Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
@ilya-lavrenov ilya-lavrenov added the "ported to master" label (Ported from 2022.x branches to master) and removed the "port to master" label (Required port to master from 2022.3 LTS) Mar 18, 2022
@ilya-lavrenov
Contributor

Ported as a part of #11040; please check that everything is correct.

azhogov pushed a commit that referenced this pull request Mar 18, 2022
* Added migration for deployment (#10800)

* Added migration for deployment

* Addressed comments

* more info after the What's new Sessions' questions (#10803)

* more info after the What's new Sessions' questions

* generalizing the optimal_batch_size vs explicit value message

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Runtime_UG/automatic_batching.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Perf Hints docs and General Opt Guide refactoring (#10815)

* Brushed the general optimization page

* Opt GUIDE, WIP

* perf hints doc placeholder

* WIP

* WIP2

* WIP 3

* added streams and few other details

* fixed titles, misprints etc

* Perf hints

* movin the runtime optimizations intro

* fixed link

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* some details on the FIL and other means when pure inference time is not the only factor

* shuffled according to general->use-case->device-specifics flow, minor brushing

* next iter

* section on optimizing for tput and latency

* couple of links to the features support matrix

* Links, brushing, dedicated subsections for Latency/FIL/Tput

* had to make the link less specific (otherwise docs compilations fails)

* removing the Temp/Should be moved to the Opt Guide

* shuffled the tput/latency/etc info into separated documents. also the following docs moved from the temp into specific feature, general product desc or corresponding plugins

-   openvino_docs_IE_DG_Model_caching_overview
-   openvino_docs_IE_DG_Int8Inference
-   openvino_docs_IE_DG_Bfloat16Inference
-   openvino_docs_OV_UG_NoDynamicShapes

* fixed toc for ov_dynamic_shapes.md

* referring the openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors

* fixed main product TOC, removed ref from the second-level items

* reviewers remarks

* reverted the openvino_docs_OV_UG_NoDynamicShapes

* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference

* "No dynamic shapes" to the "Dynamic shapes" as TOC

* removed duplication

* minor brushing

* Caching to the next level in TOC

* brushing

* more on the perf counters ( for latency and dynamic cases)

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Updated common IE pipeline infer-request section (#10844)

* Updated common IE pipeline infer-reqest section

* Update ov_infer_request.md

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* DOCS: Removed useless 4 spaces in snippets (#10870)

* Updated snippets

* Added link to encryption

* [DOCS] ARM CPU plugin docs (#10885)

* initial commit

ARM_CPU.md added
ARM CPU is added to the list of supported devices

* Update the list of supported properties

* Update Device_Plugins.md

* Update CODEOWNERS

* Removed quotes in limitations section

* NVIDIA and Android are added to the list of supported devices

* Added See Also section and reg sign to arm

* Added Preprocessing acceleration section

* Update the list of supported layers

* updated list of supported layers

* fix typos

* Added support disclaimer

* update trade and reg symbols

* fixed typos

* fix typos

* reg fix

* add reg symbol back

Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>

* Try to fix visualization (#10896)

* Try to fix visualization

* New try

* Update Install&Deployment for migration guide to 22/1 (#10933)

* updates

* update

* Getting started improvements (#10948)

* Onnx updates (#10962)

* onnx changes

* onnx updates

* onnx updates

* fix broken anchors api reference (#10976)

* add ote repo (#10979)

* DOCS: Increase content width (#10995)

* fixes

* fix

* Fixed compilation

Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Aleksandr Voron <aleksandr.voron@intel.com>
Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>