How to prune part of model with QAT pruner #5037
Comments
I think it's OK. QATQuantizer doesn't need the parameters bound in your optimizer; it only uses the optimizer to count steps. But please pay attention to the config_list and the calibration_config: the module names in these dicts will be relative to the submodule you pass in (model.neck), not to the full model.
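To illustrate what that naming rule means, here is a minimal sketch against the NNI 2.x `QAT_Quantizer` API; the `Detector` class, layer names, and shapes are all invented for the example:

```python
import torch
import torch.nn as nn
from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer

# Toy stand-in for a detector with a backbone/neck/head split.
class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        self.neck = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 4, 1)

    def forward(self, x):
        return self.head(self.neck(self.backbone(x)))

model = Detector()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr is a placeholder

# Module names are resolved against the module you pass to the quantizer,
# so for model.neck this prints '0' and '1', not 'neck.0' / 'neck.1'.
print([name for name, _ in model.neck.named_modules() if name])

config_list = [{
    'quant_types': ['weight'],
    'quant_bits': {'weight': 8},
    'op_names': ['0'],  # the conv inside the neck, named relative to model.neck
}]

quantizer = QAT_Quantizer(
    model.neck, config_list, optimizer,
    dummy_input=torch.randn(1, 16, 32, 32),  # assumed neck input shape; see the dummy_input discussion below
)
quantizer.compress()
```

If I understand the reply correctly, the same relative names would also appear in the calibration_config that `quantizer.export_model(...)` returns after training.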
Thank you for your reply. I am using the mmdetection framework, and the optimizer used is not …
Hello, can anyone provide a solution?
Could you show the error and the optimizer you used?
Hello, I am reading the source code to try to solve the optimizer problem, but I found another issue: if QAT is not given a dummy_input, the bug below appears. Is this normal? I see …
Adding my code here: …
I checked the QAT logic, and I think …
The link I refer to is as follows: …
dummy_input is not used there. Does the shape of dummy_input need to match the input size of the model? If the shapes are inconsistent, QAT will fail, right?
That seems to be an example from an old version; I will update it. Yes, QAT needs to know the input/output shape of each quantized layer.
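Concretely, a hedged sketch of passing dummy_input, reusing the toy `Detector` from the first sketch; the `dummy_input` keyword and the shape are assumptions against the NNI 2.x API:

```python
import torch
from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer

model = Detector()  # toy model from the first sketch above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
config_list = [{'quant_types': ['weight', 'output'],
                'quant_bits': {'weight': 8, 'output': 8},
                'op_types': ['Conv2d']}]

# dummy_input is forwarded through the model once so the quantizer can
# record each quantized layer's input/output shapes; a shape the model
# cannot actually consume makes this tracing step (and thus QAT) fail.
dummy_input = torch.randn(1, 3, 32, 32)  # must match the model's real input size

quantizer = QAT_Quantizer(model, config_list, optimizer, dummy_input=dummy_input)
quantizer.compress()
```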
Describe the issue:
Hello, I have a model that needs to be quantized. The model consists of a backbone, a neck, and a head, and I only want to quantize the neck. The model has many parameters, so I don't want to list every tensor name. Here is the method I use:
quantizer = QAT_Quantizer(model.neck, config_list, optimizer)
But the optimizer holds the parameters of the entire model. Will it cause any problems if I use it like this? Also, what is the role of the optimizer in QAT, and why does QAT require an optimizer to be passed in?
If there is something wrong with my approach, what should I do instead?
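As a possible answer in sketch form, taking the reply above at face value (the quantizer only wraps optimizer.step() to count steps, e.g. for quant_delay) and reusing the toy `Detector` from the earlier sketch; the learning rate, shapes, and training loop are placeholders. Note that `op_types` lets the config cover every convolution in the neck without naming each tensor:

```python
import torch
from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer

model = Detector()  # toy backbone/neck/head module from the sketch above

# Optimizer over the WHOLE model: per the reply above, QAT only wraps
# optimizer.step() so it can count steps; it should not matter that the
# optimizer is bound to more parameters than model.neck.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # lr is a placeholder

config_list = [{
    'quant_types': ['weight', 'output'],
    'quant_bits': {'weight': 8, 'output': 8},
    'op_types': ['Conv2d'],  # covers every conv in the neck, no names needed
}]
dummy_input = torch.randn(1, 16, 32, 32)  # assumed shape of the neck's input
quantizer = QAT_Quantizer(model.neck, config_list, optimizer, dummy_input=dummy_input)
quantizer.compress()

for _ in range(3):  # stand-in for a real training loop
    loss = model(torch.randn(2, 3, 32, 32)).mean()  # dummy loss just to drive steps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # each call also advances QAT's step counter
```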