Simplifying Surrogate Models for Transferable Graph Poisoning Attacks on Link Prediction

Official implementation of Simplifying Surrogate Models for Transferable Graph Poisoning Attacks on Link Prediction

Built on top of GCA, DeepRobust, Viking, and CLGA.

Requirements

Tested with PyTorch 1.7.1 and torch_geometric 1.6.3.
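
A minimal environment sketch is shown below. The environment name `staa` and the CUDA version are assumptions, not part of this repository; adjust them to your setup and see the PyTorch Geometric installation notes for the companion wheels (torch-scatter, torch-sparse) matching your torch/CUDA build.

```bash
# Hypothetical setup sketch: env name "staa" and cudatoolkit=11.0 are assumptions
conda create -n staa python=3.8 -y
conda activate staa
conda install pytorch==1.7.1 cudatoolkit=11.0 -c pytorch -y
# torch_geometric 1.6.3 also requires torch-scatter/torch-sparse wheels built for this torch/CUDA version
pip install torch-geometric==1.6.3
```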

Usage

1. To run the availability attack with STAA:

python main.py --dataset Cora --attack_method aalp --exp_type poisoning --device cuda:0 --attack_rate 0.05 --seed 2 --lp_model deepwalk  --attack_goal='availability'

2. To run the integrity attack with STAA:

python main.py --dataset CiteSeer --attack_method aalp --exp_type poisoning --device cuda:0 --attack_rate 0.1 --seed 2 --lp_model deepwalk  --attack_goal='integrity'

3. To run clean experiments:

python main.py --dataset Cora --lp_model metamodel --exp_type clean --device cuda:0 --seed 1

4. To reproduce all experiments, please configure the scripts in ./run/ (a hypothetical sketch of such a script follows the command below):

bash ./run/run_poisoning.sh
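
The actual contents of ./run/run_poisoning.sh live in this repository; the following is only a hypothetical sketch of what such a sweep could look like, built from the flags shown above. Datasets, attack rates, and seeds here are illustrative assumptions.

```bash
#!/bin/bash
# Hypothetical sweep over datasets, attack rates, and seeds using the flags from the Usage section;
# adjust the lists below to match the experiments you want to reproduce.
for dataset in Cora CiteSeer; do
  for rate in 0.05 0.1; do
    for seed in 1 2; do
      python main.py --dataset "$dataset" --attack_method aalp --exp_type poisoning \
        --device cuda:0 --attack_rate "$rate" --seed "$seed" \
        --lp_model deepwalk --attack_goal='availability'
    done
  done
done
```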