This repository contains additional details and code for reproducing the stated results. Accepted at PETS 2023.
Given the explanation and/or some auxiliary information, can we reconstruct the private graph?
- The generator takes the explanations and/or node features and outputs an adjacency matrix
- It is fully parameterized: it first assigns a learnable weight to every possible edge of each node
- It then optimizes these weights (parameters) over the training epochs via the final loss function, which combines the classification task loss and the self-supervision task loss
- The final adjacency matrix is the one that minimizes the final loss function
- Some nodes might not be connected to any labelled node. In that case, they would retain their initially assigned weights; hence, the self-supervision module helps in learning reasonable edge weights for all possible edges. A minimal sketch of the generator is given below
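A minimal sketch of such a fully parameterized generator, with a hypothetical class name and random initialisation (the repository's actual implementation may differ, e.g. in how the edge weights are initialised):

```python
import torch
import torch.nn as nn


class AdjacencyGenerator(nn.Module):
    """One learnable weight per possible edge (hypothetical sketch)."""

    def __init__(self, num_nodes, init_weights=None):
        super().__init__()
        # Warm-starting from explanation or feature similarities is one option;
        # here the weights are simply initialised at random.
        if init_weights is None:
            init_weights = torch.rand(num_nodes, num_nodes)
        self.edge_weights = nn.Parameter(init_weights)

    def forward(self):
        # Symmetrize and squash into [0, 1] to obtain a dense adjacency matrix.
        sym = (self.edge_weights + self.edge_weights.t()) / 2
        return torch.sigmoid(sym)


generator = AdjacencyGenerator(num_nodes=5)
adj = generator()  # (5, 5) edge probabilities, later refined by the final loss
```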
- The goal is to reconstruct clean features from noisy features
- It consists of a GCN denoising autoencoder that takes as input the generated adjacency matrix and the noisy features, as sketched below
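A minimal sketch of such a denoising module, with dense matrix operations standing in for the sparse GCN layers a real implementation would use (all names here are hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_adj(adj):
    # Symmetric normalisation D^{-1/2} (A + I) D^{-1/2} of a dense adjacency.
    a = adj + torch.eye(adj.size(0), device=adj.device)
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class DenoisingGCNAutoencoder(nn.Module):
    """Reconstructs clean node features from noisy ones over the generated adjacency."""

    def __init__(self, num_features, hidden_size=512):
        super().__init__()
        self.encode = nn.Linear(num_features, hidden_size)
        self.decode = nn.Linear(hidden_size, num_features)

    def forward(self, noisy_x, adj):
        a = normalize_adj(adj)
        h = F.relu(a @ self.encode(noisy_x))  # GCN-style propagation
        return a @ self.decode(h)


# Self-supervision loss: reconstruction error against the clean features,
# back-propagated into both the autoencoder and the adjacency generator.
# loss_ss = F.mse_loss(autoencoder(noisy_x, adj), clean_x)
```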
- Utilizes knowledge of the labels to further optimize the adjacency matrix
- The classification module is a GCN model that takes as input the node features and the adjacency matrix newly optimized via self-supervision (see the sketch below)
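A minimal sketch of the classification module and the combined objective; `normalize_adj` is the helper from the denoising sketch above, and `lambda_ss` is an assumed weighting term between the two losses:

```python
import torch.nn as nn
import torch.nn.functional as F


class GCNClassifier(nn.Module):
    """Two-layer dense GCN classifier over the generated adjacency."""

    def __init__(self, num_features, hidden_size, num_classes, dropout=0.5):
        super().__init__()
        self.lin1 = nn.Linear(num_features, hidden_size)
        self.lin2 = nn.Linear(hidden_size, num_classes)
        self.dropout = dropout

    def forward(self, x, adj):
        a = normalize_adj(adj)  # helper from the denoising sketch above
        h = F.relu(a @ self.lin1(x))
        h = F.dropout(h, p=self.dropout, training=self.training)
        return a @ self.lin2(h)


# Combined objective that drives the generator's edge weights:
# adj = generator()
# loss = F.cross_entropy(classifier(x, adj)[labelled_idx], y[labelled_idx]) \
#        + lambda_ss * F.mse_loss(autoencoder(noisy_x, adj), x)
# loss.backward()  # updates classifier, autoencoder, and the edge weights
```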
All experiments were run 10 times. We present the mean and the standard deviation below.
| Exp | Method | Cora AUC | Cora AP | CoraML AUC | CoraML AP | Bitcoin AUC | Bitcoin AP |
|---|---|---|---|---|---|---|---|
| Baselines | FeatureSim | | | | | | |
| | LSA | | | | | | |
| | GraphMI | | | | | | |
| | SLAPS | | | | | | |
| Grad | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
| Grad-I | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
| Zorro | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
| Zorro-S | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
| GLime | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
| GNNExp | GSEF-Concat | | | | | | |
| | GSEF-Mult | | | | | | |
| | GSEF | | | | | | |
| | GSE | | | | | | |
| | ExplainSim | | | | | | |
- num_layers - 2
- learning_rate - 0.01
- epochs - 2000
- hidden_size - 512
- dropout - 0.5
- noisy_mask_ratio - 20

- num_layers - 2
- learning_rate - 0.001
- epochs - 200
- hidden_size - 32
- dropout - 0.5

- num_layers - 2
- learning_rate - 0.01
- epochs - 200
- hidden_size - 32
- dropout - 0.5
- weight_decay - 5e-4
To simulate a realistic setting (and as described in the paper), we separate the explanation generation environment from the attack environment, so that the explanations and the attacks are executed by different entities (although installing graphlime via `pip install graphlime` in the attack environment should be sufficient; alternatively, all experiments can be run in the explanation environment).
- OS and version: Debian 10.3
- Python version: Python 3.8
- Anaconda version: 2021.05
- Cuda: Cuda 11.1 (explanation environment), Cuda 10.1 (attack environment)
Note: All explanations used for the experiments are provided in the Explanations folders to avoid regenerating them; they only need to be unzipped if necessary. Hence, steps 1 and 2 can be skipped.
- Step 1: Setup explanation environment
./explanation_environment.sh
- Step 2: Generate explanations and save the model
See Generating Explanations section
- Step 3: Setup attack environment
Install libraries in attack_requirements.txt
Refer to https://pytorch.org/get-started/previous-versions/#linux-and-windows-20 for installing the PyTorch versions.
- Step 4: Run attack
See Running Explanation Attacks section
reconstructed auroc mean 0.9239545119093648
reconstructed avg_prec mean 0.9385418527248227
reconstructed auroc std 0.03037829443212373
reconstructed avg_prec std 0.023833043366627218
average attacker advantage 0.8359649121761322
std attacker advantage 0.023077240761854497
total time 501.8652458667755127
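The reported values compare the attack's edge scores for node pairs against the ground-truth adjacency. Below is a minimal sketch of how such metrics can be computed with scikit-learn; in particular, the attacker-advantage definition used here (maximum TPR - FPR over all thresholds) is an assumption, not necessarily the exact definition used in this code base:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve


def reconstruction_metrics(true_adj, edge_scores):
    """AUROC, average precision, and attacker advantage for a reconstructed graph.

    `true_adj` is the 0/1 ground-truth adjacency and `edge_scores` holds the
    attack's score for every node pair (arrays of the same shape).
    """
    y_true = np.asarray(true_adj).ravel()
    y_score = np.asarray(edge_scores).ravel()
    auroc = roc_auc_score(y_true, y_score)
    avg_prec = average_precision_score(y_true, y_score)
    # Assumed definition of attacker advantage: best TPR - FPR gap over thresholds.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    advantage = float(np.max(tpr - fpr))
    return auroc, avg_prec, advantage
```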
Parameters for running the code are enclosed in {}. They take the following values:
- dataset-name ==> ['cora', 'cora_ml', 'bitcoin', 'citeseer', 'credit', 'pubmed']
- explainer ==> ['grad', 'gradinput', 'zorro-soft', 'zorro-hard', 'graphlime', 'gnn-explainer']
- eps ==> [0.0001, 0.001, 0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1]
python3 explanations.py --model gcn --dataset {dataset-name} --explainer {explainer} --save_exp
python3 main.py -model end2end -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -attack_type gsef_concat
python3 main.py -model end2end -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -attack_type gsef_mult
python3 main.py -model end2end -dataset {dataset-name} -explanation_method {explainer} -use_exp_as_reconstruction_loss 1 -ntrials 10 -attack_type gsef
python3 main.py -model end2end -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -attack_type gse
python3 main.py -model pairwise_sim -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -attack_type explainsim
python3 main.py -model end2end -dataset {dataset-name} -ntrials 10 -attack_type slaps
python3 main.py -model pairwise_sim -dataset {dataset-name} -ntrials 10 -attack_type featuresim
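FeatureSim and ExplainSim are pairwise-similarity attacks: node pairs whose feature vectors (FeatureSim) or per-node explanation vectors (ExplainSim) are similar are scored as likely neighbours. A minimal sketch of that idea using cosine similarity, not the repository's exact code:

```python
import torch
import torch.nn.functional as F


def pairwise_similarity_scores(vectors):
    """Cosine similarity between every pair of rows.

    For FeatureSim the rows are node feature vectors; for ExplainSim they are
    the per-node feature explanations produced by the explainer.
    """
    normed = F.normalize(vectors, p=2, dim=1)
    return normed @ normed.t()  # (num_nodes, num_nodes) edge-score matrix
```

The resulting score matrix can then be evaluated against the true adjacency with a metric helper like the one sketched above.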
Download code from their repository and use our data pipeline
Download code from their repository (attack-2) and use our data pipeline
python3 main.py -model pairwise_sim -dataset {dataset-name} -explanation_method zorro-hard -ntrials 10 -attack_type explainsim -use_defense 5 -epsilon {eps}
python3 main.py -model fidelity -get_fidelity 1 -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -use_defense 5 -epsilon {eps}
python3 main.py -model exp_intersection -get_intersection 1 -dataset {dataset-name} -explanation_method {explainer} -ntrials 10 -use_defense 5 -epsilon {eps}
If you use this code or want to build on this research, please cite our paper:
@article{olatunji-gsef2023,
title={Private Graph Extraction via Feature Explanations},
author={Olatunji, Iyiola E and Rathee, Mandeep and Funke, Thorben and Khosla, Megha},
journal={Proceedings on Privacy Enhancing Technologies},
year={2023},
volume={2023},
number={2},
address={Lausanne, Switzerland}
}
Copyright © 2022, Olatunji Iyiola Emmanuel. Released under the MIT license.