Hi,
In `quant_layer.py`, in the `forward` function of `QuantModule`, why is the bias not quantized?
```python
def forward(self, input: torch.Tensor):
    if self.use_weight_quant:
        weight = self.weight_quantizer(self.weight)
        bias = self.bias
    else:
        weight = self.org_weight
        bias = self.org_bias
    out = self.fwd_func(input, weight, bias, **self.fwd_kwargs)
    ...
```
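For context, a common convention in integer-arithmetic quantization schemes (e.g., Jacob et al., 2018) is to leave the bias out of the low-bit weight quantizer and instead quantize it to int32 with scale `s_weight * s_input`, so it can be added directly to the int32 accumulator of the conv/linear op. Below is a minimal sketch of that idea, assuming per-tensor scales; the function name and parameters here are hypothetical and not part of this repo:

```python
import torch

def quantize_bias(bias: torch.Tensor,
                  weight_scale: float,
                  input_scale: float) -> torch.Tensor:
    """Hypothetical sketch: quantize a float bias to int32 with
    scale s_bias = s_weight * s_input and zero-point 0, so it can
    be added directly to the int32 accumulator of a conv/linear op."""
    bias_scale = weight_scale * input_scale
    return torch.round(bias / bias_scale).to(torch.int32)

# Example: a float bias quantized with per-tensor scales.
b = torch.tensor([0.05, -0.12, 0.33])
q_b = quantize_bias(b, weight_scale=0.01, input_scale=0.02)
print(q_b)  # tensor([ 250, -600, 1650], dtype=torch.int32)
```

Since the bias is stored at int32 precision, its quantization error is usually negligible compared to the low-bit weights, which may be why some implementations skip it entirely.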
@yhhhli Could you please help answer the question above?