Adversarial Attacks on Graph Neural Network: Techniques and Countermeasures

Copyright © 2023 | Pages: 16 | DOI: 10.4018/978-1-6684-6903-3.ch005

Abstract

Graph neural networks (GNNs) are a useful tool for analyzing graph-based data in areas like social networks, molecular chemistry, and recommendation systems. Adversarial attacks on GNNs involve introducing malicious perturbations that manipulate the model's predictions without being detected. These attacks can be structural or feature-based, depending on whether the attacker modifies the graph's topology or the node/edge features. To defend against such attacks, researchers have proposed countermeasures like robust training, adversarial training, and defense mechanisms that detect and correct adversarial examples. These methods aim to improve the model's generalization, enforce regularization, and incorporate defensive components into the model architecture to improve its robustness against attacks. This chapter offers an overview of recent advances in adversarial attacks on GNNs, including attack methods, evaluation metrics, and their impact on model performance.
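As a rough illustration of one of these countermeasures, the sketch below shows adversarial training against feature-based perturbations on a deliberately simplified model: a one-layer linear GCN trained in NumPy, where FGSM-style perturbed node features are mixed into each gradient step. The model, the perturbation scheme, and all names (normalize_adj, adversarial_training, eps, and so on) are illustrative assumptions, not the specific defenses covered in this chapter.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops (GCN-style)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def grads(A_norm, X, W, Y, mask):
    """Gradients of masked cross-entropy for a one-layer linear GCN (logits = A_norm X W)."""
    P = softmax(A_norm @ X @ W)
    G = (P - Y) * mask[:, None] / mask.sum()   # dL/d(logits), labeled nodes only
    grad_W = (A_norm @ X).T @ G
    grad_X = A_norm @ G @ W.T                  # A_norm is symmetric
    return grad_W, grad_X

def adversarial_training(A, X, Y, mask, epochs=200, lr=0.5, eps=0.05):
    """Train on clean features plus FGSM-perturbed features (a feature-based attack)."""
    A_norm = normalize_adj(A)
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        grad_W_clean, grad_X = grads(A_norm, X, W, Y, mask)
        X_adv = X + eps * np.sign(grad_X)            # worst-case (first-order) feature shift
        grad_W_adv, _ = grads(A_norm, X_adv, W, Y, mask)
        W -= lr * 0.5 * (grad_W_clean + grad_W_adv)  # average clean and adversarial gradients
    return W
```

Here A is an n-by-n adjacency matrix, X an n-by-d feature matrix, Y a one-hot label matrix, and mask a 0/1 vector marking labeled nodes; the point is simply that the model sees perturbed inputs during training, so small feature changes at test time move its predictions less.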

Introduction

Graph Neural Networks (GNNs) have emerged as a popular tool for modeling and analyzing complex data structures, such as social networks, biological systems, and infrastructure networks. GNNs learn representations of graph-structured data by propagating information from neighboring nodes and edges, and they have achieved state-of-the-art performance in tasks such as node classification, link prediction, and graph classification (Zhou et al., 2018). However, the growing use of GNNs has also attracted attention from malicious actors who seek to exploit vulnerabilities in these models. Adversarial attacks on GNNs refer to a class of techniques that aim to manipulate a model's behavior by injecting carefully crafted inputs. These attacks can have serious consequences, including privacy violations, financial losses, and safety risks (Zhao et al., 2021).
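To make this propagation step concrete, here is a minimal sketch of a single GCN-style layer in NumPy, using the common symmetric normalization of the adjacency matrix; the names A, X, and W are generic placeholders, and this is an illustrative example rather than a particular architecture from the chapter.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style propagation step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)     # aggregate neighbor features, then ReLU

# Toy usage: 4 nodes, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)
H = gcn_layer(A, X, W)   # (4, 2) matrix of node embeddings
```

Each row of H mixes a node's own features with those of its neighbors, which is precisely the propagation behavior that the attacks discussed below try to exploit.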

Adversarial attacks can be broadly classified into two categories: evasion attacks and poisoning attacks. Evasion attacks aim to manipulate the model's output by modifying the input in a way that is imperceptible to humans but leads to a misclassification or incorrect prediction. Poisoning attacks, on the other hand, aim to modify the training data in a way that alters the model's behavior during inference. In this chapter, we focus on evasion attacks on GNNs. We survey the recent literature on adversarial attacks on GNNs and the countermeasures that have been proposed to mitigate these attacks.
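To make the evasion setting concrete, the sketch below shows one simple attack pattern against a fixed, already-trained surrogate: greedily flipping edges incident to a target node so as to shrink the classification margin of its true class. The two-layer linearized GCN surrogate, the margin score, the edge budget, and all function names are illustrative assumptions for this sketch, not a specific algorithm surveyed in the chapter.

```python
import numpy as np

def propagate(A, X, W1, W2):
    """Two-layer linearized GCN surrogate: logits = A_norm^2 X W1 W2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt
    return A_norm @ A_norm @ X @ W1 @ W2

def margin(logits, target, true_label):
    """Score of the true class minus the best wrong class for the target node."""
    row = logits[target]
    return row[true_label] - np.delete(row, true_label).max()

def greedy_edge_attack(A, X, W1, W2, target, true_label, budget=2):
    """Flip up to `budget` edges at the target node, one at a time, to reduce its margin.

    W1 and W2 are weights of a surrogate model trained beforehand on the clean graph.
    This brute-force search re-evaluates the surrogate for every candidate flip.
    """
    A_pert = A.copy()
    for _ in range(budget):
        best_score = margin(propagate(A_pert, X, W1, W2), target, true_label)
        best_v = None
        for v in range(A.shape[0]):
            if v == target:
                continue
            A_try = A_pert.copy()
            A_try[target, v] = A_try[v, target] = 1 - A_try[target, v]  # toggle edge
            score = margin(propagate(A_try, X, W1, W2), target, true_label)
            if score < best_score:
                best_score, best_v = score, v
        if best_v is None:
            break  # no single flip lowers the margin any further
        A_pert[target, best_v] = A_pert[best_v, target] = 1 - A_pert[target, best_v]
    return A_pert
```

If the margin drops below zero, the surrogate misclassifies the target node even though only a couple of edges have changed; practical targeted attacks in the literature refine this greedy, surrogate-based recipe with smarter candidate scoring and unnoticeability constraints.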
