
Min-max quantization #183

Merged

Conversation

KazukiYoshiyama-sony (Contributor)

This PR adds the following four functions:

  1. nnabla.functions.min_max_quantize
  2. nnabla.parametric_functions.min_max_quantize
  3. nnabla.parametric_functions.min_max_quantized_affine
  4. nnabla.parametric_functions.min_max_quantized_convolution

1 is implemented as a composite of existing functions in the C++ layer.
2 is a wrapper function, like F.fixed_point_quantize.
3 and 4 are the parametric versions, like F.fixed_point_quantized_affine and F.fixed_point_quantized_convolution.
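Below is a minimal usage sketch (not taken from this PR) showing how the new parametric functions might be called. It assumes their argument layout mirrors PF.convolution and PF.affine (input, output maps / output size, kernel, plus quantization-specific keyword defaults); the exact keyword arguments are not quoted from the diff.

```python
# Minimal sketch: layers built with the new min-max quantization
# parametric functions, assuming they mirror PF.convolution / PF.affine.
import numpy as np
import nnabla as nn
import nnabla.parametric_functions as PF

x = nn.Variable((8, 3, 32, 32))  # batch of 8 RGB 32x32 images

# Convolution whose weights are min-max quantized; positional arguments
# (input, outmaps, kernel) assumed to follow PF.convolution.
with nn.parameter_scope("conv1"):
    h = PF.min_max_quantized_convolution(x, 16, (3, 3), pad=(1, 1))

# Fully connected layer with min-max quantized weights.
with nn.parameter_scope("fc1"):
    y = PF.min_max_quantized_affine(h, 10)

x.d = np.random.randn(*x.shape).astype(np.float32)
y.forward()
print(y.d.shape)  # (8, 10)
```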

@TE-StephenTiedemann TE-StephenTiedemann merged commit 30df1b7 into master Sep 24, 2019
@KazukiYoshiyama-sony KazukiYoshiyama-sony added the release-note-op-layer (Auto-release; function improvement and/or addition) label Oct 9, 2019
@YukioOobuchi YukioOobuchi deleted the feature/20190801-min-max-quantization-function branch January 20, 2020 04:32
Labels: enhancement, release-note-op-layer