Semantic-Based Robust Graph Contrastive Learning For Recommendation Via Invariant Learning
The research and development of information technology has benefited many aspects of life, such as education, healthcare, and transportation (Hu et al., 2022; Xu et al., 2021; Zhou et al., 2022; Deveci et al., 2023; Mohammed et al., 2022; Appati et al., 2022; Rajput et al., 2022; Tripathi & Kumar, 2022; Liu et al., 2022; Gupta et al., 2023; Alakbarov, 2022; Roy et al., 2022). In the age of data explosion, recommendation systems play a significant role (Li et al., 2022; Xiao et al., 2022; George & Lal, 2021; Zhang et al., 2023). Learning high-quality representations is essential for collaborative recommendation. The introduction of graph convolutional networks (GCNs) (Hamilton, Ying & Leskovec, 2017; Kipf & Welling, 2016) enhances representations by offering a comprehensive way to integrate multi-hop neighbors into node representations (Berg, Kipf & Welling, 2017; He et al., 2020; Wang et al., 2019; Ying et al., 2018). Unfortunately, GCN-based recommendation models suffer from the following limitations: sparse supervision signals (Bayer et al., 2017; He & McAuley, 2016), skewed data distributions (Clauset, Shalizi & Newman, 2009; Milojević, 2010), and noise in interactions (Wang et al., 2021). Fortunately, contrastive learning techniques have proven able to address these issues in other fields, because they can extract generic characteristics from large amounts of unlabeled data and generalize representations in a self-supervised manner (Chen et al., 2020; Gidaris, Singh & Komodakis, 2018; Oord, Li & Vinyals, 2018; Devlin et al., 2018; Lan et al., 2019).
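The multi-hop aggregation mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it shows a LightGCN-style propagation in which repeated multiplication by a normalized adjacency matrix mixes increasingly distant neighbors into each node's representation (the function name and layer-averaging choice are our own illustration):

```python
import numpy as np

def propagate(A_hat: np.ndarray, E: np.ndarray, layers: int = 2) -> np.ndarray:
    """One illustrative GCN-style propagation.

    A_hat: symmetrically normalized adjacency matrix of the user-item graph.
    E: initial node embeddings (one row per node).
    Each multiplication by A_hat aggregates 1-hop neighbors, so `layers`
    rounds reach `layers`-hop neighborhoods; the per-layer outputs are
    averaged into the final representation (as in LightGCN).
    """
    outputs = [E]
    for _ in range(layers):
        E = A_hat @ E  # aggregate neighbor embeddings
        outputs.append(E)
    return np.mean(outputs, axis=0)
```

On a two-node graph with a single edge, two propagation layers already blend each node's embedding with its neighbor's, which is the representation-smoothing effect the cited works exploit.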
Naturally, introducing contrastive learning into recommendation models is a promising way to address the aforementioned issues. A growing number of studies have achieved significant success by applying contrastive learning to recommendation (Lin et al., 2022; Wu et al., 2021; Yu et al., 2022a; Yu et al., 2022b; Zhou et al., 2020). It is worth mentioning that, in recommendation models based on graph contrastive learning, some works (Yu et al., 2022a; Yu et al., 2022b) enhance robustness through embedding perturbation, directly adding random uniform noise to the original representations, which is less time-consuming yet more efficient. These methods speculate that a more even representation distribution within a certain scope can enhance generalization capacity while maintaining the intrinsic qualities of nodes, and their experiments support this hypothesis. We believe that continuing along this line of research is very promising.
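The embedding-perturbation idea can be sketched as follows. This is a simplified illustration in the spirit of the cited noise-based methods, not their exact procedure; the hyperparameter name `eps` (the perturbation radius) is our own:

```python
import numpy as np

def perturb(E: np.ndarray, eps: float = 0.1, rng=None) -> np.ndarray:
    """Create a contrastive view by adding bounded random noise.

    Each node embedding is shifted by a uniform random direction whose
    L2 norm is rescaled to exactly `eps`, so the perturbed view stays
    within a small ball around the original representation.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(-1.0, 1.0, size=E.shape)
    # rescale each row of noise to norm eps (direction stays random)
    noise = eps * noise / np.linalg.norm(noise, axis=1, keepdims=True)
    return E + noise
```

Two independently perturbed views of the same embedding matrix would then serve as the positive pair in a contrastive loss; because no graph augmentation or extra encoding pass is needed, the construction is cheap, matching the efficiency claim above.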
Invariant learning is based on the invariance principle of causality: it leverages the invariant features of data observed in different environments and ignores spurious correlations. Its aim is to capture representations with invariant predictive capacity across environments (Wang et al., 2022; Zhang et al., 2023). In theory, it can achieve guaranteed generalization under distribution shift, and it has achieved great success in practice. InvPref (Wang et al., 2022) assumes that observed user actions are determined jointly by an invariant preference, which reflects the user's true preference, and a variant preference influenced by the environment.
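One common surrogate for the invariance principle is to penalize how much the training loss varies across environments; a model whose loss is identical in every environment is (by this proxy) relying on environment-invariant features. The sketch below is a generic illustration of that idea, not the objective used by InvPref, and all names are ours:

```python
import numpy as np

def invariance_penalty(env_losses) -> float:
    """Variance of per-environment losses: zero iff the loss is
    identical in every environment."""
    losses = np.asarray(env_losses, dtype=float)
    return float(losses.var())

def total_objective(env_losses, lam: float = 1.0) -> float:
    """Average risk plus a weighted invariance penalty (lam trades off
    fit against cross-environment stability)."""
    losses = np.asarray(env_losses, dtype=float)
    return float(losses.mean()) + lam * invariance_penalty(env_losses)
```

A predictor exploiting a spurious, environment-specific correlation would fit some environments much better than others, inflating the penalty; minimizing the combined objective pushes the model toward the stable, invariant part of the signal.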
Figure 1. The framework of RobustGCL model