
add new QAT_quantization #1732

Merged 58 commits into microsoft:master on Nov 25, 2019

Conversation

@Cjkkkk (Contributor) commented Nov 12, 2019

No description provided.

when the type is int, all quantization types share the same bit length
* **quant_start_step:** int
disable quantization until the model has run for a certain number of steps; this lets the network reach a more stable state, in which the activation quantization ranges do not exclude a significant fraction of values. Default value is 0. (A usage sketch follows below.)
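
For context, here is a minimal sketch of how `quant_start_step` would appear in a config, assuming the NNI 1.x PyTorch API (`nni.compression.torch.QAT_Quantizer`) and the config schema described above; the model and the `op_types` value are illustrative assumptions, not taken from this PR:

```python
import torch
from nni.compression.torch import QAT_Quantizer

# Illustrative model (an assumption for this sketch, not from the PR).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, kernel_size=3),
    torch.nn.ReLU(),
)

config_list = [{
    'quant_types': ['weight', 'output'],
    # When quant_bits is an int, all quantization types share the same bit length.
    'quant_bits': 8,
    # Disable quantization for the first 1000 training steps, so activation
    # ranges can stabilize before they are used for quantization.
    'quant_start_step': 1000,
    'op_types': ['Conv2d'],
}]

quantizer = QAT_Quantizer(model, config_list)
quantizer.compress()  # wraps the matched modules with quantization logic
```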
Review comment on the line above: “where activation quantization ranges do not exclude a significant fraction of values”: I don't understand this sentence, could you explain a little more?

@Cjkkkk Cjkkkk requested a review from chicm-ms November 25, 2019 04:16
@chicm-ms chicm-ms merged commit 06a9837 into microsoft:master Nov 25, 2019