This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Commit: refine
chenbohua3 committed Jul 13, 2021
1 parent 1bae39b commit 5fded17
Showing 1 changed file with 2 additions and 3 deletions.
nni/algorithms/compression/pytorch/quantization/quantizers.py
@@ -124,9 +124,8 @@ def quant_backward(tensor, grad_output, quant_type, scale, zero_point, qmin, qma


 class ObserverQuantizer(Quantizer):
-    """
-    This quantizer uses observers to record weight/activation statistics to get quantization
-    information. The whole process can be divided into three steps:
+    """This quantizer uses observers to record weight/activation statistics to get quantization information.
+    The whole process can be divided into three steps:
     1. It will register observers to the place where quantization would happen (just like registering hooks).
     2. The observers would record tensors' statistics during calibration.
     3. Scale & zero point would be obtained after calibration.
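The three steps in the docstring can be sketched as follows. This is a minimal, self-contained illustration, not NNI's actual implementation: the class and method names below (`MinMaxObserver`, `observe`, `calculate_qparams`) are hypothetical, and the real `ObserverQuantizer` registers observers as hooks on the model's modules rather than being fed data manually.

```python
class MinMaxObserver:
    """Records the running min/max of the tensors it sees (step 2)."""

    def __init__(self):
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def observe(self, values):
        # Update running statistics from one calibration batch.
        self.min_val = min(self.min_val, min(values))
        self.max_val = max(self.max_val, max(values))

    def calculate_qparams(self, qmin=0, qmax=255):
        """Derive scale and zero point after calibration (step 3)."""
        # Include 0.0 in the range so the real value 0 maps exactly
        # to an integer quantized value.
        min_val = min(self.min_val, 0.0)
        max_val = max(self.max_val, 0.0)
        scale = (max_val - min_val) / (qmax - qmin)
        zero_point = round(qmin - min_val / scale)
        zero_point = max(qmin, min(qmax, zero_point))
        return scale, zero_point


# Step 1 in the real quantizer registers such observers where
# quantization would happen; here we feed calibration data by hand.
obs = MinMaxObserver()
obs.observe([-1.0, 0.5, 2.0])   # calibration batch 1
obs.observe([0.0, 3.0])         # calibration batch 2
scale, zero_point = obs.calculate_qparams()
```

With the observed range [-1.0, 3.0] and an 8-bit target range [0, 255], the derived scale is 4/255 and the zero point is 64.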
