This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Add flops and params counter #2535

Merged
merged 21 commits into from
Jun 30, 2020
7 changes: 7 additions & 0 deletions docs/en_US/Compressor/CompressionReference.md
@@ -31,3 +31,10 @@
:members:

```

## Model FLOPs/Parameters Counter

```eval_rst
.. autofunction:: nni.compression.torch.utils.counter.count_flops_params

```
12 changes: 12 additions & 0 deletions docs/en_US/Compressor/CompressionUtils.md
@@ -118,4 +118,16 @@ When the masks of different layers in a model have conflict (for example, assign
```
from nni.compression.torch.utils.mask_conflict import fix_mask_conflict
fixed_mask = fix_mask_conflict('./resnet18_mask', net, data)
```

### Model FLOPs/Parameters Counter
We provide a model counter for calculating the model FLOPs and parameters. This counter supports calculating the FLOPs/parameters of a normal model without masks; it can also calculate the FLOPs/parameters of a model with mask wrappers, which helps users easily check model complexity during model compression on NNI. Note that, for structured pruning, we only identify the remaining filters according to the mask, without taking the pruned input channels into consideration, so the calculated FLOPs will be larger than the real number (i.e., the number calculated after Model Speedup).

### Usage
```
from nni.compression.torch.utils.counter import count_flops_params

# Given input size (1, 1, 28, 28)
flops, params = count_flops_params(model, (1, 1, 28, 28))
print(f'FLOPs: {flops/1e6:.3f}M, Params: {params/1e6:.3f}M')
```
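To see how the mask affects the reported numbers, here is a minimal sketch in pure Python of the counter's masked-convolution logic (all concrete layer sizes below are illustrative assumptions, not taken from the PR):

```python
# Sketch: how the counter derives the remaining filters of a masked
# Conv2d from its weight mask, following the hook logic in counter.py.
# Layer sizes are illustrative only.

in_channels = 3
kernel_size = 3 * 3                      # Conv2d(3, 16, kernel_size=3)
output_size = 30 * 30                    # 30x30 output feature map
bias_flops = 1                           # the conv has a bias term

# Suppose structured pruning zeroed 8 of the 16 filters: the mask sum
# counts the surviving weights, 8 filters * 3 channels * 9 weights each.
mask_sum = 8 * in_channels * kernel_size
remaining = mask_sum // (in_channels * kernel_size)   # remaining filters

# FLOPs scale down proportionally to the surviving filters; pruned
# *input* channels are not discounted, hence the overestimate noted above.
flops = remaining * output_size * (in_channels * kernel_size + bias_flops)
print(remaining, flops)
```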
127 changes: 127 additions & 0 deletions src/sdk/pynni/nni/compression/torch/utils/counter.py
@@ -0,0 +1,127 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import logging
import torch
import torch.nn as nn
from nni.compression.torch.compressor import PrunerModuleWrapper


_logger = logging.getLogger(__name__)

try:
from thop import profile
except ImportError:
_logger.warning('Please install thop using command: pip install thop')


def count_flops_params(model: nn.Module, input_size, verbose=True):
"""
Count FLOPs and Params of the given model.
This function identifies the mask on the module
and takes the pruned shape into consideration.
Note that, for structured pruning, we only identify
the remaining filters according to the mask, without
taking the pruned input channels into consideration,
so the calculated FLOPs will be larger than the real number.

Parameters
----------
model : nn.Module
target model.
input_size : list, tuple
the input shape of data


Returns
-------
flops : float
total flops of the model
params : float
total params of the model
"""

assert input_size is not None

device = next(model.parameters()).device
inputs = torch.randn(input_size).to(device)

hook_module_list = []
prev_m = None
for m in model.modules():
weight_mask = None
m_type = type(m)
if m_type in custom_ops:
if isinstance(prev_m, PrunerModuleWrapper):
weight_mask = prev_m.weight_mask

m.register_buffer('weight_mask', weight_mask)
hook_module_list.append(m)
prev_m = m

flops, params = profile(model, inputs=(inputs, ), custom_ops=custom_ops, verbose=verbose)

for m in hook_module_list:
m._buffers.pop("weight_mask")

return flops, params

def count_convNd_mask(m, x, y):
"""
The forward hook to count FLOPs and Parameters of convolution operation.

Parameters
----------
m : torch.nn.Module
convolution module to calculate the FLOPs and Parameters
x : torch.Tensor
input data
y : torch.Tensor
output data
"""
output_channel = y.size()[1]
output_size = torch.zeros(y.size()[2:]).numel()
kernel_size = torch.zeros(m.weight.size()[2:]).numel()

bias_flops = 1 if m.bias is not None else 0

if m.weight_mask is not None:
output_channel = m.weight_mask.sum() // (m.in_channels * kernel_size)

total_ops = output_channel * output_size * (m.in_channels // m.groups * kernel_size + bias_flops)

m.total_ops += torch.DoubleTensor([int(total_ops)])


def count_linear_mask(m, x, y):
"""
The forward hook to count FLOPs and Parameters of linear transformation.

Parameters
----------
m : torch.nn.Module
linear to calculate the FLOPs and Parameters
x : torch.Tensor
input data
y : torch.Tensor
output data
"""
output_channel = y.size()[1]
output_size = torch.zeros(y.size()[2:]).numel()

bias_flops = 1 if m.bias is not None else 0

if m.weight_mask is not None:
output_channel = m.weight_mask.sum() // m.in_features

total_ops = output_channel * output_size * (m.in_features + bias_flops)

m.total_ops += torch.DoubleTensor([int(total_ops)])


custom_ops = {
nn.Conv1d: count_convNd_mask,
nn.Conv2d: count_convNd_mask,
nn.Conv3d: count_convNd_mask,
nn.Linear: count_linear_mask,
}
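As a sanity check on the convolution formula used in `count_convNd_mask`, the unmasked FLOPs of a layer can be reproduced by hand. The sketch below evaluates the same expression in pure Python for a hypothetical `Conv2d(3, 16, kernel_size=3)` on a 32x32 input (30x30 output); the concrete numbers are illustrative assumptions:

```python
# Sanity check of the convolution FLOPs formula from count_convNd_mask:
#   total_ops = output_channel * output_size
#               * (in_channels // groups * kernel_size + bias_flops)
# for an unmasked Conv2d(3, 16, kernel_size=3), 32x32 input -> 30x30 output.

in_channels = 3
groups = 1
kernel_size = 3 * 3          # 3x3 kernel, flattened
output_channel = 16
output_size = 30 * 30        # spatial size of the output map
bias_flops = 1               # bias adds one op per output element

total_ops = output_channel * output_size * (
    in_channels // groups * kernel_size + bias_flops)
print(total_ops)  # 16 * 900 * 28
```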