This repository contains additional details and the code needed to reproduce the stated results.
Real-world graphs are often associated with sensitive information about individuals and their activities, and hence cannot always be made public. Moreover, graph neural network (GNN) models trained on such sensitive data can leak significant amounts of information, e.g., via membership inference attacks.
The goal is to release a GNN model that is trained on sensitive data yet robust to such attacks and, importantly, comes with differential privacy (DP) guarantees.
PrivGNN can be executed via
python3 knn_graph.py
Please adjust the all_config.py file to change parameters such as the dataset, K, lambda, gamma, the attack, or the baselines.
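As a rough illustration, a configuration file like all_config.py might expose the parameters named above as module-level variables. The variable names and values below are hypothetical; consult the actual all_config.py in the repository for the real names and defaults.

```python
# Hypothetical sketch of an all_config.py — variable names and values
# are assumptions for illustration, not the repository's actual settings.

dataset = "cora"                 # which graph dataset to train on
K = 3                            # K: number of neighbors in the KNN graph
lam = 0.5                        # lambda: trade-off weight (assumed meaning)
gamma = 0.1                      # gamma: privacy-related parameter (assumed meaning)
attack = "membership_inference"  # which attack to evaluate against
run_baselines = False            # whether to also run the baseline methods
```

With such a module, knn_graph.py could simply `import all_config` and read these variables; editing the file and re-running `python3 knn_graph.py` would then pick up the new settings.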