Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks
Published in European Conference on Artificial Intelligence (ECAI), 2024
Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures. Inductive GNNs in particular, which can process previously unseen graph-structured data without relying on a fixed, predefined graph structure, are becoming increasingly important in a wide range of applications. As such, these networks become attractive targets for model-stealing attacks, in which an adversary seeks to replicate the functionality of the targeted network. Significant effort has been devoted to developing model-stealing attacks that extract models trained on images and text; however, little attention has been given to stealing GNNs trained on graph data. This paper introduces a new unsupervised model-stealing attack against inductive GNNs that uses graph contrastive learning with spectral graph augmentations to efficiently extract information from the targeted model. The attack is thoroughly evaluated on six datasets, and the results show that our approach outperforms the current state-of-the-art by Shen et al. (2021). In particular, our attack surpasses the baseline across all benchmarks, attaining higher fidelity and downstream accuracy of the stolen model while requiring fewer queries to the target model.
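To make the pipeline concrete, below is a minimal NumPy sketch of the attack's three ingredients: querying a black-box target GNN for node embeddings, fitting a surrogate to the stolen outputs, and computing a contrastive (InfoNCE) loss between two spectrally augmented graph views. Everything here is a toy stand-in, not the paper's implementation: the graph and features are random, the GNN is a single linear mean-aggregation layer, the surrogate is fit by least squares rather than the paper's contrastive training, and the "spectral" augmentation is a crude Fiedler-vector-based edge drop.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 12, 8, 4                  # nodes, input dim, embedding dim

# Toy graph and features (hypothetical stand-ins for a real benchmark graph)
X = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T  # symmetric adjacency, no self-loops

def gnn_layer(A, X, W):
    """One mean-aggregation GNN layer (simplified GraphSAGE-style)."""
    A_hat = A + np.eye(len(A))                       # add self-loops
    return (A_hat / A_hat.sum(1, keepdims=True)) @ X @ W

W_target = rng.normal(size=(d, k))  # target weights, unknown to the attacker

def query_target(A, X):
    """Black-box query: the attacker only observes the output embeddings."""
    return gnn_layer(A, X, W_target)

def spectral_edge_drop(A, frac=0.25):
    """Drop the edges whose endpoints agree most in the Fiedler vector --
    a crude proxy for a spectrum-aware augmentation."""
    L = np.diag(A.sum(1)) - A                        # graph Laplacian
    fiedler = np.linalg.eigh(L)[1][:, 1]             # 2nd-smallest eigenvector
    edges = np.argwhere(np.triu(A, 1) > 0)
    score = np.abs(fiedler[edges[:, 0]] - fiedler[edges[:, 1]])
    A2 = A.copy()
    for i, j in edges[np.argsort(score)[: max(1, int(frac * len(edges)))]]:
        A2[i, j] = A2[j, i] = 0.0
    return A2

def info_nce(Z1, Z2, tau=0.5):
    """Node-wise contrastive (InfoNCE) loss between two views."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    logits = Z1 @ Z2.T / tau
    logits -= logits.max(1, keepdims=True)           # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    return -np.log(np.diag(p) + 1e-12).mean()

# 1) Query the target once to obtain embeddings for the attacker's nodes.
Y = query_target(A, X)

# 2) Fit a surrogate to the stolen embeddings (least squares here, purely
#    for illustration; the paper trains with a contrastive objective).
H = A + np.eye(n)
H = (H / H.sum(1, keepdims=True)) @ X
W_sur, *_ = np.linalg.lstsq(H, Y, rcond=None)

# 3) Contrastive term over two spectrally augmented views of the graph.
Z1 = gnn_layer(spectral_edge_drop(A), X, W_sur)
Z2 = gnn_layer(spectral_edge_drop(A), X, W_sur)
loss = info_nce(Z1, Z2)

fidelity_mse = ((gnn_layer(A, X, W_sur) - Y) ** 2).mean()
print(f"contrastive loss: {loss:.3f}, fidelity MSE: {fidelity_mse:.3g}")
```

Because the toy target is linear in the same aggregated features the surrogate uses, the least-squares fit reaches near-zero fidelity MSE; in the actual attack setting, the surrogate architecture differs from the target's and the contrastive objective over augmented views is what drives the extraction.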
Recommended citation: Podhajski, M., Dubiński, J., Boenisch, F., Dziedzic, A., Pregowska, A., & Michalak, T. P. (2024). "Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks." In ECAI 2024 (pp. 1438–1445).
Paper
@incollection{podhajski2024efficient,
  title={Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks},
  author={Podhajski, Marcin and Dubi{\'n}ski, Jan and Boenisch, Franziska and Dziedzic, Adam and Pregowska, Agnieszka and Michalak, Tomasz P.},
  booktitle={ECAI 2024},
  pages={1438--1445},
  year={2024},
  publisher={IOS Press}
}