Web Semantic-Based Robust Graph Contrastive Learning for Recommendation via Invariant Learning

Wengui Dai, Yujun Wang
Copyright: © 2024 | Pages: 15
DOI: 10.4018/IJSWIS.337962

Abstract

The use of contrastive learning (CL) in recommendation has advanced significantly. Recently, some works have used perturbations in the embedding space to obtain augmented views of nodes. This makes the representation distribution of nodes more even and thereby improves recommendation effectiveness. In this article, the authors explain the role of the noise added in the embedding space from the perspective of invariant learning and feature selection. Guided by this view, the authors devise a more reasonable method for generating random noise and put forward web semantic-based robust graph contrastive learning for recommendation via invariant learning, a novel graph CL-based recommendation model named RobustGCL. RobustGCL randomly zeros the values of certain dimensions in the noise vectors at a fixed ratio. In this way, RobustGCL can identify invariant and variant features and then learn invariant and variant representations. Tests on publicly available datasets show that the proposed approach can learn invariant representations and achieve better performance.
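As a rough illustration of the zeroing step described above, here is a minimal sketch under our reading of the abstract; the function name, the Bernoulli mask, and the PyTorch framing are our assumptions, not the authors' code:

import torch

def zero_noise_dims(noise: torch.Tensor, ratio: float = 0.5) -> torch.Tensor:
    """Zero out dimensions of each noise vector at a fixed ratio.

    A Bernoulli mask keeps each dimension with probability (1 - ratio), so
    roughly `ratio` of the dimensions are zeroed in expectation; the paper's
    exact masking scheme may differ. Dimensions whose noise is zeroed leave
    the corresponding embedding features unperturbed (the "invariant" part).
    """
    keep = torch.bernoulli(torch.full_like(noise, 1.0 - ratio))
    return noise * keep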

Web Semantic-Based Robust Graph Contrastive Learning for Recommendation via Invariant Learning

The research and development of information technology has assisted people's lives in various aspects, such as education, healthcare, and transportation (Hu et al., 2022; Xu et al., 2021; Zhou et al., 2022; Deveci et al., 2023; Mohammed et al., 2022; Appati et al., 2022; Rajput et al., 2022; Tripathi & Kumar, 2022; Liu et al., 2022; Gupta et al., 2023; Alakbarov, 2022; Roy et al., 2022). In the age of data explosion, recommendation systems play a significant role (Li et al., 2022; Xiao et al., 2022; George & Lal, 2021; Zhang et al., 2023). Learning high-quality representations is essential for collaborative recommendation. The introduction of graph convolutional networks (GCNs) (Hamilton, Ying & Leskovec, 2017; Kipf & Welling, 2016) enhances representations by offering a principled way to integrate multi-hop neighbors into node representations (Berg, Kipf & Welling, 2017; He et al., 2020; Wang et al., 2019; Ying et al., 2018). Unfortunately, GCN-based recommendation models suffer from the following limitations: sparse supervision signals (Bayer et al., 2017; He & McAuley, 2016), skewed data distributions (Clauset, Shalizi & Newman, 2009; Milojević, 2010), and noise in interactions (Wang et al., 2021). Fortunately, contrastive learning has been shown to address these issues in other fields, because it can extract generic characteristics from large amounts of unlabeled data and generalize representations in a self-supervised manner (Chen et al., 2020; Gidaris, Singh & Komodakis, 2018; Oord, Li & Vinyals, 2018; Devlin et al., 2018; Lan et al., 2019).

Naturally, it is appealing to introduce contrastive learning into recommendation models to address these issues. A growing number of studies have achieved significant success by applying contrastive learning to recommendation (Lin et al., 2022; Wu et al., 2021; Yu et al., 2022a; Yu et al., 2022b; Zhou et al., 2020). Notably, some works (Yu et al., 2022a; Yu et al., 2022b) use embedding perturbation to enhance the robustness of graph contrastive learning-based recommendation models by directly adding random uniform noise to the original representations, which is less time-consuming yet more effective. These methods hypothesize that a more even representation distribution, within a certain scope, can enhance generalization capacity while preserving the intrinsic characteristics of nodes, and their experiments support this hypothesis. We believe that continuing along this line of research is very promising; a sketch of the perturbation scheme is given below.
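A minimal sketch of this SimGCL-style perturbation (Yu et al., 2022a) as we understand it; the function name, the epsilon value, and the PyTorch framing are illustrative assumptions rather than the original implementation:

import torch
import torch.nn.functional as F

def perturbed_view(emb: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Return a stochastically perturbed view of node embeddings.

    A uniform random direction is sampled per node, rescaled to length eps,
    sign-aligned with the embedding so the perturbation stays in the same
    hyperoctant, and added to the original vector.
    """
    noise = F.normalize(torch.rand_like(emb), dim=-1) * eps
    return emb + torch.sign(emb) * noise

# Two calls yield two stochastic views of the same nodes for the contrastive loss.
emb = torch.randn(4, 8)
view1, view2 = perturbed_view(emb), perturbed_view(emb)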

Invariant learning builds on the invariance principle of causality: it leverages features of the observed data that remain invariant across different environments and ignores spurious correlations. Its aim is to capture representations with invariant predictive power across environments (Wang et al., 2022; Zhang et al., 2023). In theory, it offers guaranteed generalization under distribution shift, and it has achieved great success in practice. InvPref (Wang et al., 2022), for example, assumes that observed user actions are determined jointly by an invariant preference, which reflects the user's true preference, and a variant preference influenced by the environment.

Figure 1. The framework of the RobustGCL model
