Code for the paper titled "Towards Privacy Aware Deep Learning for Embedded Systems" in ACM SAC'22 and PPML@NeurIPS'20

vasishtduddu/EmbeddedMIA


Towards Privacy Aware Deep Learning for Embedded Systems (ACM SAC'22, PPML@NeurIPS'20)

This is the code repository for the paper "Towards Privacy Aware Deep Learning for Embedded Systems".

Experiments

All the code is provided as Jupyter notebooks for easy reproducibility. The folders and their contents are as follows:

  • Quantization: Contains the privacy risk analysis for binarization and XNOR networks.
  • StdArchitectures: Contains the privacy risk analysis for standard deep learning architectures designed for efficiency, such as SqueezeNet and MobileNet.
  • Pruning: Contains the privacy risk analysis of pruning the models followed by retraining. The "Sparsity" folder provides an alternate implementation in a different ML library.
  • Defences: Contains the code for black-box defences, such as adversarial regularization and differential privacy, for comparison with Gecko models.
  • Knowledge Distillation: Contains the code for homogeneous and heterogeneous knowledge distillation of quantized models, which forms the Gecko training methodology.
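
The privacy risk analysis in each folder is based on membership inference attacks (MIAs). As a minimal illustration of the general idea only — the function name, threshold, and advantage metric below are hypothetical sketches in NumPy, not the paper's actual attack:

```python
import numpy as np

def mia_advantage(member_conf, nonmember_conf, threshold=0.9):
    """Confidence-thresholding membership inference: guess 'member'
    whenever the model's top softmax confidence exceeds the threshold.
    Attack advantage = true-positive rate minus false-positive rate."""
    tpr = (np.asarray(member_conf) > threshold).mean()
    fpr = (np.asarray(nonmember_conf) > threshold).mean()
    return tpr - fpr

# Overconfident predictions on training points leak membership:
adv = mia_advantage([0.99, 0.97, 0.95], [0.60, 0.92, 0.55], threshold=0.9)
```

A larger advantage means the model's behaviour separates training from non-training inputs more cleanly, i.e. higher privacy risk.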
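For the Quantization experiments, binarized and XNOR networks replace full-precision weights with ±1 values plus a scaling factor. A minimal NumPy sketch of XNOR-Net-style weight binarization (per-tensor scaling here for brevity; the referenced implementations typically scale per filter):

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style binarization: approximate W by alpha * sign(W),
    where alpha is the mean absolute value of the weights."""
    alpha = float(np.abs(W).mean())
    W_bin = np.where(W >= 0, 1.0, -1.0)
    return alpha, W_bin

alpha, W_bin = binarize(np.array([[0.5, -0.25], [-1.0, 0.75]]))
# alpha == 0.625; W_bin contains only +1/-1 entries
```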
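The Pruning experiments remove low-magnitude weights and then retrain with the pruning mask held fixed. A hypothetical sketch of global magnitude pruning (NumPy for self-containment; the notebooks themselves use PyTorch):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitude; the returned mask is reapplied after each retraining step."""
    k = int(round(sparsity * W.size))
    if k == 0:
        return W.copy(), np.ones_like(W, dtype=bool)
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    mask = np.abs(W) > threshold
    return W * mask, mask

W_pruned, mask = magnitude_prune(np.array([0.1, -0.5, 0.05, 0.9]), sparsity=0.5)
# keeps only the two largest-magnitude weights: [0.0, -0.5, 0.0, 0.9]
```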
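Knowledge distillation trains a (here, quantized) student to mimic a full-precision teacher. A minimal sketch of the standard temperature-softened distillation loss — the temperature and mixing weight below are illustrative defaults, not the paper's hyperparameters:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - (z / T).max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Distillation loss: cross-entropy between the teacher's and the
    student's temperature-softened distributions, mixed with the
    ordinary hard-label cross-entropy on the true labels."""
    soft = -(softmax(teacher_logits, T) *
             np.log(softmax(student_logits, T) + 1e-12)).sum(axis=-1).mean() * T * T
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]
                   + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```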

Credits

The code for binarization is adapted from https://github.com/itayhubara/BinaryNet.pytorch and https://github.com/jiecaoyu/XNOR-Net-PyTorch. The pruning code is adapted from https://github.com/yangbingjie/DeepCompression-PyTorch, and the membership inference attack (MIA) code from https://github.com/inspire-group/privacy-vs-robustness.
