Understanding Universal Adversarial Attack and Defense on Graph

Tianfeng Wang, Zhisong Pan, Guyu Hu, Yexin Duan, Yu Pan
Copyright: © 2022 |Pages: 21
DOI: 10.4018/IJSWIS.308812

Abstract

Compared with traditional machine learning models, graph neural networks (GNNs) have distinct advantages in processing unstructured data. However, their vulnerability cannot be ignored. A graph universal adversarial attack is a special type of attack on graphs that can strike any targeted victim by flipping edges connected to a small set of anchor nodes. In this paper, we propose the forward-derivative-based graph universal adversarial attack (FDGUA). First, we show that a single node as training data is sufficient to generate an effective continuous attack vector. We then discretize the continuous attack vector based on the forward derivative. FDGUA achieves impressive attack performance: three anchor nodes are enough to yield an attack success rate above 80% on the Cora dataset. Moreover, we propose the first graph universal adversarial training (GUAT) to defend against universal adversarial attacks. Experiments show that GUAT effectively improves the robustness of GNNs without degrading model accuracy.

1. Introduction

All kinds of relationships can be represented by graphs, including social relationships (Gupta et al., 2018), paper citation relationships (Sen et al., 2008), and communication topologies (Leskovec et al., 2007). Compared with traditional machine learning methods, graph neural networks (GNNs) model such complex relationships more effectively and have therefore attracted extensive attention. Many representative models have been proposed, such as GCN (Kipf et al., 2017), GAT (Veličković et al., 2017), and GraphSAGE (Hamilton et al., 2017). Generally, GNNs realize information transfer between adjacent nodes through well-designed aggregation operations. The resulting graph representations can be applied to various downstream tasks, such as node classification (Wu et al., 2019; Xu et al., 2019), graph classification (Xie & Ying, 2021; Zhang et al., 2019), and community discovery (Chen et al., 2019; Zhang et al., 2020; Zhang et al., 2019). In addition, abstract content, including images (Nhi et al., 2022) and documents (Stylianou et al., 2022; Ismail et al., 2022; Urkalan & Geetha, 2020), can be interpreted as nodes in a graph, and graph-based methods help discover the relationships among such content.

However, GNNs inherit the vulnerability of deep neural networks (DNNs) and can be misled by unnoticeable perturbations. Research on adversarial attacks against GNNs clarifies this vulnerability and guides the improvement of GNN models. In general, adversarial attacks on GNNs fall into three types according to the attack goal. The first type, the global attack on topology, includes CE-PGD (Xu et al., 2019) and Meta-Attack (Zugner et al., 2019); it aims to degrade the overall performance of GNNs. The second type, the target-dependent attack, includes Nettack (Dai et al., 2018), FGA (Chen et al., 2018), and IG-Attack (Wu et al., 2019); it attacks a single target node directly or indirectly. The third type, the universal attack on graphs, aims at a target-independent attack against all nodes. The graph universal attack (GUA) first defined this form of attack, in which any node can be attacked by flipping the edges connected to a set of anchor nodes (Zang et al., 2021). In this paper, we explore graph universal attack and defense on this basis.
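The edge-flipping perturbation underlying a graph universal attack can be sketched as follows. Given a victim node and a fixed set of anchor nodes, each victim-anchor edge is flipped (an existing edge is removed, a missing one is added). The anchor indices and graph here are illustrative, not the anchors selected by FDGUA or GUA.

```python
import numpy as np

def flip_anchor_edges(A, victim, anchors):
    """Return a copy of adjacency A with victim--anchor edges flipped."""
    A_adv = A.copy()
    for a in anchors:
        A_adv[victim, a] = 1 - A_adv[victim, a]  # flip: add or remove edge
        A_adv[a, victim] = A_adv[victim, a]      # keep the graph undirected
    return A_adv

A = np.zeros((5, 5))
A[0, 1] = A[1, 0] = 1                  # toy graph with one existing edge
A_adv = flip_anchor_edges(A, victim=0, anchors=[1, 3])
print(A_adv[0, 1], A_adv[0, 3])        # 0.0 1.0: edge removed, edge added
```

Because the same anchor set is reused for every victim, the perturbation is target-independent: attacking a new node only requires flipping its own edges to the anchors, not recomputing the attack.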
