PGA

Source code for the paper "Simple and Efficient Partial Graph Adversarial Attack: A New Perspective".

Main Structure

  • models: implementations of the GNN models
  • victims: scripts for training the victim GNN models
    • configs: hyperparameters of the models
    • models: saved checkpoints
  • attackers: implementations of the attack methods
  • attack: scripts for performing attacks
    • configs: hyperparameters of the attacks
    • perturbed_adjs: adversarial graphs generated by the attackers (see the loading sketch below)
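
A minimal, hypothetical sketch of reusing a saved adversarial adjacency from attack/perturbed_adjs (the file name and the torch serialization format are assumptions, not the repository's documented convention):

    import torch

    # Hypothetical file name/format -- check attack/perturbed_adjs/ for the
    # actual files written by gen_attack.py.
    perturbed_adj = torch.load("attack/perturbed_adjs/pga_cora.pt")
    print(type(perturbed_adj), getattr(perturbed_adj, "shape", None))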

Running Steps

  1. Train the GNN models (an illustrative training sketch follows these steps)
> cd victims
> python train.py --model=gcn --dataset=cora
  2. Perform attacks
> cd attack
> python gen_attack.py
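
For context, here is a rough, self-contained sketch of what training a GCN victim on Cora can look like with DeepRobust (listed in the requirements). It is an illustration under those assumptions, not the actual victims/train.py, which also reads configs and saves checkpoints:

    import torch
    from deeprobust.graph.data import Dataset
    from deeprobust.graph.defense import GCN

    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    # Load the standard Cora split shipped with DeepRobust.
    data = Dataset(root='/tmp/', name='cora')
    adj, features, labels = data.adj, data.features, data.labels
    idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

    # Two-layer GCN victim; the hidden size is illustrative, not the paper's setting.
    model = GCN(nfeat=features.shape[1], nhid=16,
                nclass=labels.max().item() + 1, device=device).to(device)
    model.fit(features, adj, labels, idx_train, idx_val)  # semi-supervised training
    model.test(idx_test)                                  # clean test accuracy before any attack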

PGA attack

  1. Train the GNN models
> cd victims
> python train.py
  2. Generate graph statistics such as node degrees and classification margins (a sketch follows these steps)
> cd analysis
> python gen_statistics.py --dataset=cora
  3. Perform the PGA attack
> cd attack
> python gen_attack.py --attack=pga --dataset=cora
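
As a rough illustration of step 2, the snippet below computes two such statistics, node degrees and classification margins, from a scipy-sparse adjacency matrix and the victim's softmax outputs; the exact quantities and output format of analysis/gen_statistics.py are assumptions here:

    import numpy as np

    def node_degrees(adj):
        # Row-sum degrees of a scipy-sparse adjacency matrix.
        return np.asarray(adj.sum(axis=1)).flatten()

    def classification_margins(probs, labels):
        # Per-node margin: p(true class) - max p(any other class),
        # where probs is an (N, C) array of softmax outputs.
        n = len(labels)
        true_p = probs[np.arange(n), labels]
        masked = probs.copy()
        masked[np.arange(n), labels] = -np.inf
        return true_p - masked.max(axis=1)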

Evaluate (evasion attack)

> python evasion_attack.py --victim=robust --dataset=cora
> python evasion_attack.py --victim=normal --dataset=cora
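
In the evasion setting the victim keeps the weights it learned on the clean graph, and only the adjacency fed at test time is replaced by the attacked one. A hedged continuation of the DeepRobust sketch from Running Steps (model, features, labels, and idx_test come from there; the file path is a placeholder):

    import torch

    # Hypothetical path; use whatever gen_attack.py actually writes to perturbed_adjs/.
    perturbed_adj = torch.load("attack/perturbed_adjs/pga_cora.pt")

    logits = model.predict(features, perturbed_adj)  # weights fixed, graph swapped at test time
    pred = logits.argmax(1).cpu().numpy()
    print("evasion accuracy:", (pred[idx_test] == labels[idx_test]).mean())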

Evaluate (poisoning attack)

> python poison_attack.py --victim=gcn --dataset=cora
> python poison_attack.py --victim=gat --dataset=cora
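
In the poisoning setting the victim is instead retrained from scratch on the attacked graph before being tested. Continuing the same hedged sketch (perturbed_adj and the data variables come from the snippets above):

    # Train a fresh victim on the perturbed graph, then test as usual.
    poisoned_model = GCN(nfeat=features.shape[1], nhid=16,
                         nclass=labels.max().item() + 1, device=device).to(device)
    poisoned_model.fit(features, perturbed_adj, labels, idx_train, idx_val)
    poisoned_model.test(idx_test)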

Requirements

  • deeprobust
  • torch_geometric
  • torch_sparse
  • torch_scatter