Extending LINE for Network Embedding With Completely Imbalanced Labels

Zheng Wang, Qiao Wang, Tanjie Zhu, Xiaojun Ye
Copyright © 2020 | Pages: 17
DOI: 10.4018/IJDWM.2020070102

Abstract

Network embedding is a fundamental problem in network research. Semi-supervised network embedding, which benefits from labeled data, has recently attracted considerable interest. However, existing semi-supervised methods produce biased results in the completely-imbalanced label setting, where the labeled data do not cover all classes. This article proposes a novel network embedding method that benefits from completely-imbalanced labels by approximately guaranteeing both intra-class similarity and inter-class dissimilarity. In addition, the authors prove and adopt the matrix factorization form of LINE (a well-known network embedding method) as the network-structure-preserving model. Extensive experiments demonstrate the superiority and robustness of this method.

1. Introduction

Network embedding (Perozzi et al., 2014; Moyano, 2017) plays a critical role in network analysis (Özyer et al., 2015; Yue et al., 2017). It aims to extract low-dimensional feature vectors, termed embeddings, for the nodes in a network. The learned embeddings encode meaningful relational and structural network information, so they can be used as features for downstream network analysis tasks such as node classification, link prediction, and network visualization.

Most network embedding methods learn embeddings from network structure alone. One of the most well-known is LINE (Tang et al., 2015), which has proven effective on large-scale undirected, directed, and/or weighted networks. In particular, its two sub-models, LINE(1st) and LINE(2nd), preserve the first-order proximity (i.e., the similarity between linked nodes) and the second-order proximity (i.e., the similarity between nodes with shared neighbors) of a network, respectively.
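Concretely, following Tang et al. (2015), LINE(1st) fits the model's joint edge probability to the observed edge weights, while LINE(2nd) fits a conditional "context" distribution over each node's neighbors. In the notation of the original paper, with \sigma the sigmoid function, u_i the embedding of node v_i, and u_i' its context embedding:

```latex
% LINE(1st): first-order proximity
p_1(v_i, v_j) = \sigma\!\left(u_i^{\top} u_j\right), \qquad
O_1 = -\sum_{(i,j) \in E} w_{ij} \log p_1(v_i, v_j)

% LINE(2nd): second-order proximity
p_2(v_j \mid v_i) = \frac{\exp\!\left(u_j'^{\top} u_i\right)}{\sum_{k=1}^{|V|} \exp\!\left(u_k'^{\top} u_i\right)}, \qquad
O_2 = -\sum_{(i,j) \in E} w_{ij} \log p_2(v_j \mid v_i)
```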

Semi-supervised methods, which take advantage of labeled data, have recently attracted considerable interest. Typical studies include LSHM (Jacob et al., 2014), LDE (Wang et al., 2016), and MMDW (Tu et al., 2016). Generally, these methods utilize labeled data by guaranteeing both intra-class similarity (i.e., nodes with the same label are close to each other) and inter-class dissimilarity (i.e., nodes with different labels are far from each other) in the embedding space, as sketched below.
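As an illustration only (a minimal sketch in the spirit of the cited methods, not the exact loss of any of them; the function name and the margin form are assumptions for exposition), such a supervised term can be written as a pairwise penalty over the labeled nodes:

```python
import numpy as np

def supervised_penalty(emb, labels, margin=1.0):
    """Pairwise penalty over labeled nodes (illustrative sketch).

    emb    : (n, d) array of node embeddings
    labels : dict mapping a labeled node index to its class id
    """
    nodes = list(labels)
    loss = 0.0
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            i, j = nodes[a], nodes[b]
            dist = np.linalg.norm(emb[i] - emb[j])
            if labels[i] == labels[j]:
                # intra-class similarity: pull same-labeled nodes together
                loss += dist ** 2
            else:
                # inter-class dissimilarity: push differently labeled
                # nodes at least `margin` apart
                loss += max(0.0, margin - dist) ** 2
    return loss
```

In practice such a term is combined with a structure-preserving objective like O_1 above, trading off supervision against network structure.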

1.1. Problem

Unlike existing semi-supervised methods, which assume the labeled data is balanced, i.e., every class has at least one labeled node, this paper considers the completely-imbalanced case, where some classes have no labeled nodes at all. This case deserves special attention for two reasons. First, it has practical significance. For instance, consider Wikipedia, which has millions of pages on a vast range of topics: it is hard to collect labeled pages for every topic without missing any, and the number of topics may not even be known.

Second, and more importantly, existing semi-supervised methods, which guarantee both intra-class similarity and inter-class dissimilarity, would produce biased results in the completely-imbalanced case. To verify this, we conduct an experiment on the Citeseer dataset (McCallum et al., 2000). We set the label rate to 50% in each class. In the balanced case, we modify the original network by adding edges between nodes with the same label and removing the existing edges between nodes with different labels. Then, we apply LINE(1st) on this modified network to obtain the embeddings. In the completely-imbalanced case, we treat Citeseer's first two classes as unseen and the remaining classes as seen, i.e., the nodes from the first two classes are removed from the labeled data; we then modify the network and apply LINE(1st) as described above. Finally, the embeddings learned in the two cases are visualized by t-SNE (Maaten & Hinton, 2008) in Figure 1. We can clearly see that the embeddings learned with balanced labels look meaningful, whereas the embeddings learned with completely-imbalanced labels are heavily biased towards the seen classes and mix the unseen-class nodes together.
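For concreteness, here is a minimal sketch of the network-modification step described above, assuming networkx and a labels dict that covers only the labeled nodes (in the completely-imbalanced case, only the seen classes); the exact implementation behind Figure 1 may differ:

```python
import itertools
import networkx as nx

def modify_network(G: nx.Graph, labels: dict) -> nx.Graph:
    """Rewire G as described above: connect every pair of nodes with the
    same label, and disconnect pairs of nodes with different labels."""
    H = G.copy()
    for i, j in itertools.combinations(labels, 2):
        if labels[i] == labels[j]:
            H.add_edge(i, j)        # add edge between same-labeled nodes
        elif H.has_edge(i, j):
            H.remove_edge(i, j)     # drop edge between differently labeled nodes
    return H
```

LINE(1st) is then run on the modified network, and the learned embeddings can be projected to two dimensions with sklearn.manifold.TSNE for inspection.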
