This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

nni.compression.pytorch.speedup.error_code.UnBalancedGroupError: The number remained filters in each group is different #4864

Open
qi657 opened this issue May 16, 2022 · 5 comments

qi657 commented May 16, 2022

Describe the issue:
Cannot speed up depthwise separable convolutions
[2022-05-16 11:22:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: _backbone._block2.1._dw_conv.0, op_type: Conv2d)
Traceback (most recent call last):
File "D:/nni-master/examples/model_compress/pruning/FD_Prune/FD_simulated_prune.py", line 278, in
pruner.compress()
File "D:\nni-master\nni\algorithms\compression\v2\pytorch\base\scheduler.py", line 194, in compress
task_result = self.pruning_one_step(task)
File "D:\nni-master\nni\algorithms\compression\v2\pytorch\pruning\basic_scheduler.py", line 154, in pruning_one_step
result = self.pruning_one_step_normal(task)
File "D:\nni-master\nni\algorithms\compression\v2\pytorch\pruning\basic_scheduler.py", line 77, in pruning_one_step_normal
ModelSpeedup(compact_model, self.dummy_input, pruner_generated_masks).speedup_model()
File "D:\nni-master\nni\compression\pytorch\speedup\compressor.py", line 519, in speedup_model
self.replace_compressed_modules()
File "D:\nni-master\nni\compression\pytorch\speedup\compressor.py", line 386, in replace_compressed_modules
self.replace_submodule(unique_name)
File "D:\nni-master\nni\compression\pytorch\speedup\compressor.py", line 450, in replace_submodule
leaf_module, auto_infer.get_masks())
File "D:\nni-master\nni\compression\pytorch\speedup\compress_modules.py", line 14, in
'Conv2d': lambda module, masks: replace_conv2d(module, masks),
File "D:\nni-master\nni\compression\pytorch\speedup\compress_modules.py", line 376, in replace_conv2d
raise UnBalancedGroupError()
nni.compression.pytorch.speedup.error_code.UnBalancedGroupError: The number remained filters in each group is different

Environment:

  • NNI version: 2.7
  • Training service (local|remote|pai|aml|etc): local
  • Client OS:
  • Server OS (for remote mode only):
  • Python version: 3.7
  • PyTorch/TensorFlow version: PyTorch 1.10.1
  • Is conda/virtualenv/venv used?: yes
  • Is running in Docker?: no

Configuration:

  • Experiment config (remember to remove secrets!):
  • Search space:

Log message:

  • nnimanager.log:
  • dispatcher.log:
  • nnictl stdout and stderr:

How to reproduce it?:
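A minimal sketch of the kind of setup that typically reproduces this error (assumed model, layer names, and pruner; this is not the original FD_simulated_prune.py script). A depthwise convolution has groups == in_channels, so each group holds exactly one filter; once a mask removes some filters but not others, the groups no longer have the same number of remaining filters and replace_conv2d raises UnBalancedGroupError:

```python
import torch
import torch.nn as nn
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup

class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, in_ch=32, out_ch=64):
        super().__init__()
        # depthwise conv: groups == in_channels
        self.dw_conv = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        # pointwise conv
        self.pw_conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pw_conv(self.dw_conv(x))

model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), DepthwiseSeparableBlock())
dummy_input = torch.rand(1, 3, 224, 224)

# prune all Conv2d layers, including the depthwise one
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner._unwrap_model()

# typically raises UnBalancedGroupError in replace_conv2d because the remaining
# filters are not evenly distributed across the depthwise conv's groups
ModelSpeedup(model, dummy_input, masks).speedup_model()
```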

J-shang (Contributor) commented May 19, 2022

Hello @qi657, a PR will be submitted soon to fix your issue; we will let you know when it's done.

scarlett2018 (Member) commented:
> Hello @qi657, a PR will be submitted soon to fix your issue; we will let you know when it's done.

@J-shang the fix has been released in 2.8, right? Shall we close this issue?

Triple-L commented:
Using NNI 2.8, still facing the same bug when pruning with SlimPruner.

nnzs9248 commented:
Using NNI 2.10, still facing the same bug when pruning.

vladgo329 commented:
With NNI 2.10, the issue still reproduces for all depthwise convolutions.
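A workaround that is sometimes used until grouped convolutions are handled (not confirmed in this thread; it assumes a `model` like the sketch above) is to restrict the pruning config to ordinary convolutions via op_names, so ModelSpeedup never has to rebuild a grouped conv:

```python
import torch.nn as nn

# Collect only the names of non-grouped Conv2d layers and limit the pruner to them,
# leaving depthwise/grouped convolutions (groups > 1) untouched.
prunable = [
    name for name, module in model.named_modules()
    if isinstance(module, nn.Conv2d) and module.groups == 1
]
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d'], 'op_names': prunable}]
```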
