
Project import generated by Copybara.
GitOrigin-RevId: ec25bf2e416c3689477e82946fb69de2e53b9161
MediaPipe Team authored and chuoling committed Jun 10, 2021
1 parent b48d72e commit b544a31
Showing 32 changed files with 562 additions and 235 deletions.
6 changes: 6 additions & 0 deletions .github/ISSUE_TEMPLATE/00-build-installation-issue.md
@@ -1,3 +1,9 @@
---
name: "Build/Installation Issue"
about: Use this template for build/installation issues
labels: type:build/install

---
<em>Please make sure that this is a build/installation issue and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) documentation before raising any issues.</em>

**System information** (Please provide as much relevant information as possible)
6 changes: 6 additions & 0 deletions .github/ISSUE_TEMPLATE/10-solution-issue.md
@@ -1,3 +1,9 @@
---
name: "Solution Issue"
about: Use this template for assistance with a specific MediaPipe solution, such as "Pose" or "Iris", including inference model usage/training, solution-specific calculators, etc.
labels: type:support

---
<em>Please make sure that this is a [solution](https://google.github.io/mediapipe/solutions/solutions.html) issue.</em>

**System information** (Please provide as much relevant information as possible)
6 changes: 6 additions & 0 deletions .github/ISSUE_TEMPLATE/20-documentation-issue.md
@@ -1,3 +1,9 @@
---
name: "Documentation Issue"
about: Use this template for documentation related issues
labels: type:docs

---
Thank you for submitting a MediaPipe documentation issue.
The MediaPipe docs are open source! To get involved, read the documentation Contributor Guide
## URL(s) with the issue:
@@ -1,3 +1,9 @@
---
name: "Bug Issue"
about: Use this template for reporting a bug
labels: type:bug

---
<em>Please make sure that this is a bug and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) and FAQ documentation before raising any issues.</em>

**System information** (Please provide as much relevant information as possible)
@@ -1,3 +1,9 @@
---
name: "Feature Request"
about: Use this template for raising a feature request
labels: type:feature

---
<em>Please make sure that this is a feature request.</em>

**System information** (Please provide as much relevant information as possible)
@@ -1,3 +1,9 @@
---
name: "Other Issue"
about: Use this template for any other non-support related issues.
labels: type:others

---
This template is for miscellaneous issues not covered by the other issue categories.

For questions on how to work with MediaPipe, or support for problems that are not verified bugs in MediaPipe, please go to [StackOverflow](https://stackoverflow.com/questions/tagged/mediapipe) and [Slack](https://mediapipe.page.link/joinslack) communities.
20 changes: 17 additions & 3 deletions WORKSPACE
@@ -242,6 +242,20 @@ http_archive(
url = "https://github.com/opencv/opencv/releases/download/3.2.0/opencv-3.2.0-ios-framework.zip",
)

http_archive(
name = "stblib",
strip_prefix = "stb-b42009b3b9d4ca35bc703f5310eedc74f584be58",
sha256 = "13a99ad430e930907f5611325ec384168a958bf7610e63e60e2fd8e7b7379610",
urls = ["https://github.com/nothings/stb/archive/b42009b3b9d4ca35bc703f5310eedc74f584be58.tar.gz"],
build_file = "@//third_party:stblib.BUILD",
patches = [
"@//third_party:stb_image_impl.diff"
],
patch_args = [
"-p1",
],
)
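
A hedged sketch (not part of this commit) of how a BUILD target might consume the new repository; the target name `stb_image` is an assumption, since the real name is whatever `third_party/stblib.BUILD` defines:

```
cc_library(
    name = "image_decoder",
    srcs = ["image_decoder.cc"],
    # "@stblib//:stb_image" is a hypothetical label; the actual target name
    # comes from third_party/stblib.BUILD.
    deps = ["@stblib//:stb_image"],
)
```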

# You may run setup_android.sh to install Android SDK and NDK.
android_ndk_repository(
name = "androidndk",
@@ -369,9 +383,9 @@
)

# Tensorflow repo should always go after the other external dependencies.
-# 2021-05-27
-_TENSORFLOW_GIT_COMMIT = "d6bfcdb0926173dbb7aa02ceba5aae6250b8aaa6"
-_TENSORFLOW_SHA256 = "ec40e1462239d8783d02f76a43412c8f80bac71ea20e41e1b7729b990aad6923"
+# 2021-06-07
+_TENSORFLOW_GIT_COMMIT = "700533808e6016dc458bb2eeecfca4babfc482ec"
+_TENSORFLOW_SHA256 = "b6edd7f4039bfc19f3e77594ecff558ba620091d0dc48181484b3d9085026126"
http_archive(
name = "org_tensorflow",
urls = [
32 changes: 22 additions & 10 deletions docs/framework_concepts/calculators.md
@@ -262,7 +262,7 @@ specified, appear as literal values in the `node_options` field of the
output_stream: "TENSORS:main_model_output"
node_options: {
[type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
model_path: "mediapipe/models/active_speaker_detection/audio_visual_model.tflite"
model_path: "mediapipe/models/detection_model.tflite"
}
}
}
@@ -272,28 +272,40 @@ The `node_options` field accepts the proto3 syntax. Alternatively, calculator
options can be specified in the `options` field using proto2 syntax.

```
-node: {
-  calculator: "IntervalFilterCalculator"
+node {
+  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:main_model_input"
  output_stream: "TENSORS:main_model_output"
  node_options: {
-    [type.googleapis.com/mediapipe.IntervalFilterCalculatorOptions] {
-      intervals {
-        start_us: 20000
-        end_us: 40000
-      }
+    [type.googleapis.com/mediapipe.TfLiteInferenceCalculatorOptions] {
+      model_path: "mediapipe/models/detection_model.tflite"
    }
  }
}
```

Not all calculators accept calculator options. In order to accept options, a
calculator will normally define a new protobuf message type to represent its
-options, such as `IntervalFilterCalculatorOptions`. The calculator will then
+options, such as `PacketClonerCalculatorOptions`. The calculator will then
read that protobuf message in its `CalculatorBase::Open` method, and possibly
-also in the `CalculatorBase::GetContract` function or its
+also in its `CalculatorBase::GetContract` function or its
`CalculatorBase::Process` method. Normally, the new protobuf message type will
be defined as a protobuf schema using a ".proto" file and a
`mediapipe_proto_library()` build rule.

```
mediapipe_proto_library(
name = "packet_cloner_calculator_proto",
srcs = ["packet_cloner_calculator.proto"],
visibility = ["//visibility:public"],
deps = [
"//mediapipe/framework:calculator_options_proto",
"//mediapipe/framework:calculator_proto",
],
)
```
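
For context (not part of this commit), a minimal sketch of how a calculator typically reads these options in `Open`, assuming the `PacketClonerCalculatorOptions` message generated from the rule above:

```
#include "mediapipe/framework/calculator_framework.h"

// Sketch only: CalculatorContext::Options<T>() returns the options proto
// attached to this node, whether it was supplied via the proto3
// `node_options` field or the proto2 `options` field.
absl::Status PacketClonerCalculator::Open(CalculatorContext* cc) {
  const auto& options =
      cc->Options<::mediapipe::PacketClonerCalculatorOptions>();
  // ... configure the calculator from `options` here ...
  return absl::OkStatus();
}
```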


## Example calculator

This section discusses the implementation of `PacketClonerCalculator`, which
2 changes: 1 addition & 1 deletion docs/solutions/selfie_segmentation.md
@@ -284,6 +284,6 @@ on how to build MediaPipe examples.
* Google AI Blog:
[Background Features in Google Meet, Powered by Web ML](https://ai.googleblog.com/2020/10/background-features-in-google-meet.html)
* [ML Kit Selfie Segmentation API](https://developers.google.com/ml-kit/vision/selfie-segmentation)
-* [Models and model cards](./models.md#selfie_segmentation)
+* [Models and model cards](./models.md#selfie-segmentation)
* [Web demo](https://code.mediapipe.dev/codepen/selfie_segmentation)
* [Python Colab](https://mediapipe.page.link/selfie_segmentation_py_colab)
4 changes: 4 additions & 0 deletions mediapipe/calculators/core/end_loop_calculator.cc
@@ -28,6 +28,10 @@ typedef EndLoopCalculator<std::vector<::mediapipe::NormalizedRect>>
EndLoopNormalizedRectCalculator;
REGISTER_CALCULATOR(EndLoopNormalizedRectCalculator);

typedef EndLoopCalculator<std::vector<::mediapipe::LandmarkList>>
EndLoopLandmarkListVectorCalculator;
REGISTER_CALCULATOR(EndLoopLandmarkListVectorCalculator);

typedef EndLoopCalculator<std::vector<::mediapipe::NormalizedLandmarkList>>
EndLoopNormalizedLandmarkListVectorCalculator;
REGISTER_CALCULATOR(EndLoopNormalizedLandmarkListVectorCalculator);
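
As a usage sketch (not part of this commit), the newly registered calculator can be declared in a graph config like the other `EndLoopCalculator` specializations; the stream names here are hypothetical:

```
node {
  calculator: "EndLoopLandmarkListVectorCalculator"
  input_stream: "ITEM:landmark_list"
  input_stream: "BATCH_END:batch_end_timestamp"
  output_stream: "ITERABLE:landmark_list_vector"
}
```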
22 changes: 15 additions & 7 deletions mediapipe/calculators/tensor/inference_calculator_cpu.cc
@@ -35,20 +35,28 @@ namespace api2 {

namespace {

int GetXnnpackDefaultNumThreads() {
#if defined(MEDIAPIPE_ANDROID) || defined(MEDIAPIPE_IOS) || \
defined(__EMSCRIPTEN_PTHREADS__)
constexpr int kMinNumThreadsByDefault = 1;
constexpr int kMaxNumThreadsByDefault = 4;
return std::clamp(NumCPUCores() / 2, kMinNumThreadsByDefault,
kMaxNumThreadsByDefault);
#else
return 1;
#endif // MEDIAPIPE_ANDROID || MEDIAPIPE_IOS || __EMSCRIPTEN_PTHREADS__
}

// Returns number of threads to configure XNNPACK delegate with.
-// (Equal to user provided value if specified. Otherwise, it returns number of
-// high cores (hard-coded to 1 for Emscripten without Threads extension))
+// Returns user provided value if specified. Otherwise, tries to choose optimal
+// number of threads depending on the device.
int GetXnnpackNumThreads(const mediapipe::InferenceCalculatorOptions& opts) {
  static constexpr int kDefaultNumThreads = -1;
  if (opts.has_delegate() && opts.delegate().has_xnnpack() &&
      opts.delegate().xnnpack().num_threads() != kDefaultNumThreads) {
    return opts.delegate().xnnpack().num_threads();
  }
-#if !defined(__EMSCRIPTEN__) || defined(__EMSCRIPTEN_PTHREADS__)
-  return InferHigherCoreIds().size();
-#else
-  return 1;
-#endif  // !__EMSCRIPTEN__ || __EMSCRIPTEN_PTHREADS__
+  return GetXnnpackDefaultNumThreads();
}

} // namespace
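
For reference, a hedged graph-config sketch (not part of this commit) of how a caller would pin the thread count that `GetXnnpackNumThreads` reads; the stream names and model path are hypothetical:

```
node {
  calculator: "InferenceCalculator"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:output_tensors"
  node_options: {
    [type.googleapis.com/mediapipe.InferenceCalculatorOptions] {
      model_path: "path/to/model.tflite"
      delegate { xnnpack { num_threads: 2 } }
    }
  }
}
```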
4 changes: 2 additions & 2 deletions mediapipe/calculators/tensor/inference_calculator_gl.cc
@@ -269,8 +269,8 @@ absl::Status InferenceCalculatorGlImpl::InitTFLiteGPURunner(
break;
}
}
-  MP_RETURN_IF_ERROR(
-      tflite_gpu_runner_->InitializeWithModel(model, op_resolver));
+  MP_RETURN_IF_ERROR(tflite_gpu_runner_->InitializeWithModel(
+      model, op_resolver, /*allow_quant_ops=*/true));

// Create and bind OpenGL buffers for outputs.
// The buffers are created once and their ids are passed to calculator outputs
4 changes: 4 additions & 0 deletions mediapipe/calculators/tensor/inference_calculator_metal.cc
@@ -226,6 +226,10 @@ absl::Status InferenceCalculatorMetalImpl::LoadDelegate(CalculatorContext* cc) {

// Configure and create the delegate.
TFLGpuDelegateOptions options;
// `enable_quantization` enables running sparse models, i.e. models with a
// DENSIFY op preceding a DEQUANTIZE op. Both ops are removed from the
// execution graph once the weight tensor has been read.
options.enable_quantization = true;
options.allow_precision_loss = allow_precision_loss_;
options.wait_type = TFLGpuDelegateWaitType::TFLGpuDelegateWaitTypeDoNotWait;
delegate_ =
17 changes: 7 additions & 10 deletions mediapipe/calculators/tensor/tensors_to_segmentation_calculator.cc
@@ -763,9 +763,13 @@ out vec4 fragColor;
#endif // defined(GL_ES);
void main() {
-  vec4 input_value = texture2D(input_texture, sample_coordinate);
-  vec2 gid = sample_coordinate;
+#ifdef FLIP_Y_COORD
+  float y_coord = 1.0 - sample_coordinate.y;
+#else
+  float y_coord = sample_coordinate.y;
+#endif  // defined(FLIP_Y_COORD)
+  vec2 adjusted_coordinate = vec2(sample_coordinate.x, y_coord);
+  vec4 input_value = texture2D(input_texture, adjusted_coordinate);
// Run activation function.
// One and only one of FN_SOFTMAX,FN_SIGMOID,FN_NONE will be defined.
@@ -787,13 +791,6 @@ void main() {
float new_mask_value = input_value.r;
#endif // FN_NONE
-#ifdef FLIP_Y_COORD
-  float y_coord = 1.0 - gid.y;
-#else
-  float y_coord = gid.y;
-#endif  // defined(FLIP_Y_COORD)
-  vec2 output_coordinate = vec2(gid.x, y_coord);
vec4 out_value = vec4(new_mask_value, 0.0, 0.0, new_mask_value);
fragColor = out_value;
})";
34 changes: 25 additions & 9 deletions mediapipe/calculators/tflite/tflite_inference_calculator.cc
@@ -128,23 +128,35 @@ struct GPUData {
} // namespace
#endif // MEDIAPIPE_TFLITE_GPU_SUPPORTED

namespace {

int GetXnnpackDefaultNumThreads() {
#if defined(MEDIAPIPE_ANDROID) || defined(MEDIAPIPE_IOS) || \
defined(__EMSCRIPTEN_PTHREADS__)
constexpr int kMinNumThreadsByDefault = 1;
constexpr int kMaxNumThreadsByDefault = 4;
return std::clamp(NumCPUCores() / 2, kMinNumThreadsByDefault,
kMaxNumThreadsByDefault);
#else
return 1;
#endif // MEDIAPIPE_ANDROID || MEDIAPIPE_IOS || __EMSCRIPTEN_PTHREADS__
}

// Returns number of threads to configure XNNPACK delegate with.
-// (Equal to user provided value if specified. Otherwise, it returns number of
-// high cores (hard-coded to 1 for Emscripten without Threads extension))
+// Returns user provided value if specified. Otherwise, tries to choose optimal
+// number of threads depending on the device.
int GetXnnpackNumThreads(
    const mediapipe::TfLiteInferenceCalculatorOptions& opts) {
  static constexpr int kDefaultNumThreads = -1;
  if (opts.has_delegate() && opts.delegate().has_xnnpack() &&
      opts.delegate().xnnpack().num_threads() != kDefaultNumThreads) {
    return opts.delegate().xnnpack().num_threads();
  }
-#if !defined(__EMSCRIPTEN__) || defined(__EMSCRIPTEN_PTHREADS__)
-  return InferHigherCoreIds().size();
-#else
-  return 1;
-#endif  // !__EMSCRIPTEN__ || __EMSCRIPTEN_PTHREADS__
+  return GetXnnpackDefaultNumThreads();
}

} // namespace

// Calculator Header Section

// Runs inference on the provided input TFLite tensors and TFLite model.
@@ -737,8 +749,8 @@ absl::Status TfLiteInferenceCalculator::InitTFLiteGPURunner(
break;
}
}
-  MP_RETURN_IF_ERROR(
-      tflite_gpu_runner_->InitializeWithModel(model, *op_resolver_ptr));
+  MP_RETURN_IF_ERROR(tflite_gpu_runner_->InitializeWithModel(
+      model, *op_resolver_ptr, /*allow_quant_ops=*/true));

// Allocate interpreter memory for cpu output.
if (!gpu_output_) {
@@ -969,6 +981,10 @@ absl::Status TfLiteInferenceCalculator::LoadDelegate(CalculatorContext* cc) {
const int kHalfSize = 2; // sizeof(half)
// Configure and create the delegate.
TFLGpuDelegateOptions options;
// `enable_quantization` enables running sparse models, i.e. models with a
// DENSIFY op preceding a DEQUANTIZE op. Both ops are removed from the
// execution graph once the weight tensor has been read.
options.enable_quantization = true;
options.allow_precision_loss = allow_precision_loss_;
options.wait_type = TFLGpuDelegateWaitType::TFLGpuDelegateWaitTypeActive;
if (!delegate_)
8 changes: 6 additions & 2 deletions mediapipe/calculators/util/filter_collection_calculator.cc
@@ -32,11 +32,15 @@ typedef FilterCollectionCalculator<std::vector<::mediapipe::NormalizedRect>>
FilterNormalizedRectCollectionCalculator;
REGISTER_CALCULATOR(FilterNormalizedRectCollectionCalculator);

-typedef FilterCollectionCalculator<
-    std::vector<::mediapipe::NormalizedLandmarkList>>
+typedef FilterCollectionCalculator<std::vector<::mediapipe::LandmarkList>>
    FilterLandmarkListCollectionCalculator;
REGISTER_CALCULATOR(FilterLandmarkListCollectionCalculator);

+typedef FilterCollectionCalculator<
+    std::vector<::mediapipe::NormalizedLandmarkList>>
+    FilterNormalizedLandmarkListCollectionCalculator;
+REGISTER_CALCULATOR(FilterNormalizedLandmarkListCollectionCalculator);

typedef FilterCollectionCalculator<std::vector<::mediapipe::ClassificationList>>
FilterClassificationListCollectionCalculator;
REGISTER_CALCULATOR(FilterClassificationListCollectionCalculator);
(Diffs for the remaining changed files in this commit are not shown.)
