Paper sharing: adversarial-example related works
a) Relatedness: how closely the paper relates to our topic
- 1 - slightly related
- 2 - related
- 3 - strongly related
b) Familiarity: reading status
- 0 - unread
- 1 - read the introduction
- 2 - know the method
- 3 - understand
- 4 - fully understand
Paper | Relatedness | Familiarity |
---|---|---|
Intriguing properties of neural networks | 3 | 3 |
Explaining and Harnessing Adversarial Examples | 3 | 3 |
Adversarial examples in the physical world | 3 | 3 |
The limitations of deep learning in adversarial settings | 3 | 3 |
DeepFool: a simple and accurate method to fool deep neural networks | 3 | 3 |
Towards Evaluating the Robustness of Neural Networks | 3 | 3 |
Adversarial Diversity and Hard Positive Generation | 3 | 3 |
Learning with a strong adversary | 3 | 3 |
Adversarial Transformation Networks: Learning to Generate Adversarial Examples | 3 | 2 |
Distributional Smoothing with Virtual Adversarial Training | 3 | 2 |
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples | 3 | 2 |
Universal adversarial perturbation | 3 | 3 |
One pixel attack for fooling deep neural networks | 3 | 3 |
Ensemble Adversarial Training: Attacks and Defenses | 3 | 2 |
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN | 3 | 2 |
Practical black-box attacks against deep learning systems using adversarial examples | 3 | 3 |
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples | 3 | 2 |
Delving into Transferable Adversarial Examples and Black-box Attacks | 3 | 1 |
Adversarial Machine Learning at Scale | 3 | 2 |
Machine vs Machine: Defending Classifiers Against Learning-based Adversarial Attacks | 3 | 3 |
Distillation As a Defense to Adversarial Perturbations Against Deep Neural Networks | 3 | 3 |
Defensive Distillation is Not Robust to Adversarial Examples | 3 | 2 |
Extending Defensive Distillation | 3 | 2 |
Towards Deep Neural Network Architectures Robust to Adversarial Examples | 3 | 2 |
Assessing Threat of Adversarial Examples on Deep Neural Networks | 3 | 2 |
Countering Adversarial Images Using Input Transformations | 3 | 2 |
Foveation-Based Mechanisms Alleviate Adversarial Examples | 3 | 2 |
Enhancing Robustness of Machine Learning Systems via Data Transformations | 3 | 2 |
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics | 3 | 1 |
On Detecting Adversarial Perturbations | 3 | 1 |
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly | 3 | 1 |
MagNet: a Two-Pronged Defense against Adversarial Examples | 3 | 3 |
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples | 3 | 1 |
SATYA: Defending against Adversarial Attacks using Statistical Hypothesis Testing | 3 | 1 |
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense | 3 | 1 |
The Best Defense is a Good Offense: Countering Black Box Attacks by Predicting Slightly Wrong Labels | 3 | 1 |
Efficient Defenses Against Adversarial Attacks | 3 | 1 |
Detecting Adversarial Samples from Artifacts | 3 | 2 |
Early Methods for Detecting Adversarial Images | 3 | 0 |
On the (Statistical) Detection of Adversarial Examples | 3 | 0 |
Detecting Adversarial Examples in Deep Networks with Adaptive Noise Reduction | 3 | 2 |
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods | 3 | 1 |
Adversarial Attacks on Neural Network Policies | 2 | 1 |
Tactics of Adversarial Attack on Deep Reinforcement Learning Agents | 2 | 0 |
Delving into adversarial attacks on deep policies | 2 | 0 |
Adversarial Perturbations Against Deep Neural Networks for Malware Classification | 2 | 1 |
Adversarial Examples for Semantic Segmentation and Object Detection | 2 | 0 |
Adversarial examples for generative models | 3 | 0 |
Crafting Adversarial Input Sequences for Recurrent Neural Networks | 2 | 1 |
Vulnerability of deep reinforcement learning to policy induction attacks | 2 | 0 |
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition | 2 | 0 |
Adversarial Learning: A Critical Review and Active Learning Study | 2 | 1 |
Machine Learning in Adversarial Settings | 3 | 2 |
Behavior of Machine Learning Algorithms in Adversarial Environments. | 2 | 1 |
Poisoning attacks against support vector machines | 2 | 0 |
Evasion attacks against machine learning at test time | 2 | 1 |
On the Integrity of Deep Learning Systems in Adversarial Settings | 3 | 3 |
One Network to Solve Them All -- Solving Linear Inverse Problems using Deep Projection Models | 1 | 2 |
Generative Adversarial Nets | 2 | 2 |
A Game-Theoretic Analysis of Adversarial Classification | 2 | 1 |
Distilling the knowledge in a Neural Network | 1 | 1 |
Noniterative algorithms for Sensitivity Analysis Attacks | 2 | 1 |
SoK: Towards the Science of Security and Privacy in Machine Learning | 2 | 1 |
Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos | 1 | 1 |
On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches | 1 | 1 |
Adversarial Cross-Modal Retrieval | 2 | 1 |
Adversarial Training Methods for Semi-Supervised Text Classification | 2 | 1 |
Machine Learning in adversarial environments | 2 | 1 |
On Reliability and Security of Randomized Detectors Against Sensitivity Analysis Attacks | 2 | 1 |
Analyzing stability of convolutional neural networks in the frequency domain | 1 | 1 |
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | 1 | 1 |
Learning in the presence of malicious errors | 1 | 1 |
Adversarial Autoencoders | 1 | 1 |
Adversarial Classification | 2 | 1 |
Standard detectors aren't (currently) fooled by physical adversarial stop signs | 2 | 1 |
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data | 1 | 1 |
Enforcing Agile Access Control Policies in Relational Databases using Views | 1 | 1 |
Security and Science of Agility | 1 | 1 |
No need to worry about adversarial examples in object detection in autonomous vehicles | 2 | 0 |
The Space of Transferable Adversarial Examples | 3 | 2 |
Are Accuracy and Robustness Correlated? | 3 | 1 |
Towards Deep Learning Models Resistant to Adversarial Attacks | 3 | 0 |
Interpretable Explanations of Black Boxes by Meaningful Perturbation | 2 | 0 |
Whitening Black-Box Neural Networks | 3 | 1 |
Deep Neural Networks Are Easily Fooled: High Confidence Predictions for Unrecognizable Images | 3 | 1 |
Ground-Truth Adversarial Examples | 3 | 1 |
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples | 2 | 1 |
Adversarial and Clean Data Are Not Twins | 3 | 1 |
Adversarial Learning | 2 | 1 |
Attacking the Madry Defense Model with L1-based Adversarial Examples | 3 | 0 |
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples | 3 | 1 |
Adversarial Examples: Attacks and Defenses for Deep Learning | 3 | 1 |
When Not to Classify: Anomaly Detection of Attacks on DNN Classifiers at Test Time | 3 | 1 |
Query-efficient Black-box Adversarial Examples | 3 | 1 |
Learning Universal Adversarial Perturbations with Generative Models | 3 | 1 |
ReabsNet: Detecting and Revising Adversarial Examples | 3 | 1 |
Exploring the Space of Black-box Attacks on Deep Neural Networks | 3 | 1 |
Locally Optimal Detection of Adversarial Inputs to Image Classifiers | 3 | 0 |
Adversarial Patch | 3 | 1 |
Adversarial Spheres | 3 | 1 |
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification | 3 | 1 |
Synthesizing Robust Adversarial Examples | 3 | 1 |
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks | 3 | 1 |
High dimensional spaces, deep learning and adversarial examples | 3 | 1 |
The Vulnerability of Learning to Adversarial Perturbation Increases with Intrinsic Dimensionality | 3 | 1 |
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality | 3 | 2 |
Generating adversarial examples with adversarial networks | 3 | 1 |
Defense against Adversarial Attacks Using High-level representation guided denoiser | 3 | 2 |
Adversary A3C for Robust Reinforcement learning | 3 | 1 |
Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples | 3 | 1 |
Boosting Adversarial Attacks with Momentum | 3 | 1 |
A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models | 1 | 1 |
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization | 3 | 1 |
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks | 3 | 1 |
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | 3 | 1 |
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples | 3 | 1 |
On the suitability of Lp-norms for Creating and Preventing Adversarial Examples | 3 | 1 |
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | 3 | 1 |
PixelDefend: Leveraging Generative Models To Understand and Defend Against Adversarial Examples | 3 | 1 |
Mitigating Adversarial Effects Through Randomization | 3 | 1 |
Stochastic Activation Pruning for Robust Adversarial Defense | 3 | 1 |
Thermometer Encoding: One Hot Way To Resist Adversarial Examples | 3 | 1 |
Spatially Transformed Adversarial Examples | 3 | 1 |
Adversarial Vulnerability of Neural Networks Increases with Input Dimension | 3 | 0 |
Adversarial Examples that Fool both Human and Computer Vision | 3 | 0 |
Adversarial vulnerability for any classifier | 3 | 0 |
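
A minimal sketch, assuming the list is tracked in Python, of how the Relatedness/Familiarity scores above could be encoded and queried, for example to pull out the strongly related papers that are still unread or only skimmed. The `PaperEntry` class and the three sample entries are illustrative, not part of the shared list itself.

```python
from dataclasses import dataclass

@dataclass
class PaperEntry:
    title: str
    relatedness: int  # 1 = slightly related, 2 = related, 3 = strongly related
    familiarity: int  # 0 = unread ... 4 = fully understand

# Illustrative entries copied from the table above; the full list would follow the same pattern.
papers = [
    PaperEntry("Intriguing properties of neural networks", 3, 3),
    PaperEntry("Early Methods for Detecting Adversarial Images", 3, 0),
    PaperEntry("Adversarial vulnerability for any classifier", 3, 0),
]

# Strongly related papers that are still unread or only introduced: candidates to read next.
to_read = [p.title for p in papers if p.relatedness == 3 and p.familiarity <= 1]
print(to_read)
```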