Can Large Language Models Improve The Adversarial Robustness of Graph Neural Networks?
Chuan Shi†
shichuan@bupt.edu.cn
Beijing University of Posts and Telecommunications
Beijing, China
∗ Both authors contributed equally to this research.
† Corresponding author.

ABSTRACT
Graph neural networks (GNNs) are vulnerable to adversarial perturbations, especially topology attacks, and many methods that improve the robustness of GNNs have received considerable attention. Recently, we have witnessed the significant success of large language models (LLMs), leading many to explore the great potential of LLMs on GNNs. However, existing work mainly focuses on improving the performance of GNNs by utilizing LLMs to enhance node features. Therefore, we ask: Will the robustness of GNNs also be enhanced with the powerful understanding and inference capabilities of LLMs? By presenting empirical results, we find that although LLMs can improve the robustness of GNNs, there is still an average decrease of 23.1% in accuracy, implying that GNNs remain extremely vulnerable to topology attacks. Therefore, another question is how to extend the capabilities of LLMs to graph adversarial robustness. In this paper, we propose an LLM-based robust graph structure inference framework, LLM4RGNN, which distills the inference capabilities of GPT-4 into a local LLM for identifying malicious edges and an LM-based edge predictor for finding missing important edges, so as to recover a robust graph structure. Extensive experiments demonstrate that LLM4RGNN consistently improves the robustness across various GNNs. Even in some cases where the perturbation ratio increases to 40%, the accuracy of GNNs is still better than that on the clean graph.
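To make the framework description above concrete, the following is an illustrative skeleton of the two LLM4RGNN components. Every function here is a hypothetical stub standing in for components the abstract names (the local LLM distilled from GPT-4 and the LM-based edge predictor); it sketches the data flow only and is not the authors' implementation.

```python
# Illustrative skeleton (hypothetical stubs, not the authors' code) of the
# LLM4RGNN pipeline: a local LLM flags malicious edges for removal, and an
# LM-based edge predictor proposes important missing edges to add back.
from typing import List, Tuple

Edge = Tuple[int, int]

def local_llm_verdict(text_u: str, text_v: str) -> str:
    """Stub: the local LLM judges whether an edge between two
    text-attributed nodes looks malicious ('malicious' or 'benign')."""
    return "benign"  # placeholder verdict

def lm_edge_predictor(node_texts: List[str]) -> List[Edge]:
    """Stub: the LM-based edge predictor proposes missing important edges."""
    return []  # placeholder: no edges proposed

def recover_robust_structure(edges: List[Edge],
                             node_texts: List[str]) -> List[Edge]:
    # Step 1: remove edges the local LLM identifies as malicious.
    kept = [(u, v) for (u, v) in edges
            if local_llm_verdict(node_texts[u], node_texts[v]) != "malicious"]
    # Step 2: add back important edges proposed by the edge predictor.
    return kept + lm_edge_predictor(node_texts)
```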
KEYWORDS
graph neural networks, large language models, adversarial robustness

ACM Reference Format:
Zhongjian Zhang, Xiao Wang, Huichi Zhou, Yue Yu, Mengmei Zhang, Cheng Yang, and Chuan Shi. 2024. Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?. In . ACM, New York, NY, USA, 14 pages. https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION
Graph neural networks (GNNs), as representative graph machine learning methods, effectively utilize their message-passing mechanism to extract useful information and learn high-quality representations from graph data [20, 35, 42]. Despite this great success, a host of studies have shown that GNNs are vulnerable to adversarial attacks [18, 23, 29, 33, 40], especially topology attacks [43, 54, 55], where slightly perturbing the graph structure can lead to a dramatic decrease in performance. Such vulnerability poses significant challenges for applying GNNs to real-world applications, especially in security-critical scenarios such as finance networks [38] or medical networks [26].
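As background for the message-passing mechanism referenced above, the following is a minimal GCN-style layer in PyTorch (an illustrative sketch, not code from this paper). It also shows why topology attacks are effective: node representations are computed through the adjacency matrix, so flipping even a few edges changes which neighbors' features are mixed into each node.

```python
# A minimal GCN-style message-passing layer (illustrative sketch, not code
# from this paper). Aggregation runs through the normalized adjacency
# matrix, so perturbing edges directly corrupts the aggregated information.
import torch
import torch.nn as nn

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCN."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Message passing: mix each node's features with its neighbors',
        # then apply a learned transformation and nonlinearity.
        return torch.relu(self.linear(adj_norm @ x))

# Toy usage: a two-node graph with a single edge.
adj = torch.tensor([[0., 1.], [1., 0.]])
x = torch.randn(2, 8)
out = GCNLayer(8, 4)(x, normalize_adj(adj))  # shape (2, 4)
```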
Threatened by adversarial attacks, several attempts have been made to build robust GNNs, which can be mainly divided into model-centric and data-centric defenses [52]. From the model-centric perspective, defenders can improve robustness through model enhancement, either by robust training schemes [7, 21] or new model architectures [17, 50, 53]. In contrast, data-centric defenses typically focus on flexible data processing to improve the robustness of GNNs.
Treating the attacked topology as noisy, defenders primarily purify graph structures by calculating various similarities between node embeddings [5, 19, 22, 41, 48].
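As a deliberately simplified instance of this purification strategy, the sketch below drops edges whose endpoint embeddings have low cosine similarity. The function name and the 0.1 threshold are illustrative assumptions, not the exact procedure of any of the cited methods.

```python
# Similarity-based structure purification (simplified illustration): edges
# whose endpoint embeddings have low cosine similarity are treated as
# suspicious and pruned. The threshold is an illustrative choice.
import torch

def purify_edges(edge_index: torch.Tensor,
                 emb: torch.Tensor,
                 threshold: float = 0.1) -> torch.Tensor:
    """Keep only edges whose endpoints have cosine similarity >= threshold.

    edge_index: (2, E) tensor of (source, target) node indices.
    emb: (N, d) node embeddings or raw features.
    """
    src, dst = edge_index
    sim = torch.cosine_similarity(emb[src], emb[dst], dim=1)
    return edge_index[:, sim >= threshold]

# Toy usage: prune a 3-node graph's edges by feature similarity.
emb = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1], [1, 2]])  # edges 0->1 and 1->2
cleaned = purify_edges(edge_index, emb)
```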
The above methods have received