1. Introduction
Network embedding (Perozzi et al., 2014; Moyano, 2017) plays a critical role in network analysis (Özyer et al., 2015; Yue et al., 2017). It aims to extract low-dimensional feature vectors, termed embeddings, for the nodes in a network. The learned embeddings encode meaningful relational and structural network information, so they can be used as features for downstream network analysis tasks such as node classification, link prediction, and network visualization.
Most network embedding methods learn embeddings based on network structure. One of the most well-known methods is LINE (Tang et al., 2015) which has shown its effectiveness in dealing with large-scale undirected, directed, and/or weighted networks. Particularly, its two sub-models LINE(1st) and LINE(2nd) preserve the first-order proximity (i.e., the similarity between linked nodes) and second-order proximity (i.e., the similarity between the nodes with shared neighbors) of a network, respectively.
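To make the first-order objective concrete, the following is a minimal sketch (not the authors' implementation) of the LINE(1st) loss: the joint probability of a linked pair is modeled as a sigmoid of the dot product of their embeddings, and the weighted negative log-likelihood over observed edges is minimized, which pushes linked nodes close together in the embedding space.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def line_first_order_loss(emb, edges, weights):
    """Weighted negative log-likelihood of observed edges under LINE(1st):
    p1(v_i, v_j) = sigmoid(u_i . u_j). Minimizing this loss makes the
    embeddings of linked nodes similar (first-order proximity)."""
    loss = 0.0
    for (i, j), w in zip(edges, weights):
        loss -= w * np.log(sigmoid(emb[i] @ emb[j]))
    return loss

# Toy example: 3 nodes with 2-d embeddings and two edges.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 2))
loss = line_first_order_loss(emb, edges=[(0, 1), (1, 2)], weights=[1.0, 1.0])
print(loss)  # always positive, since sigmoid(x) < 1
```

In practice LINE optimizes this with edge sampling and negative sampling for scalability; the loop above only illustrates the objective itself.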
Semi-supervised methods, which take advantage of labeled data, have recently attracted considerable interest. Typical studies include LSHM (Jacob et al., 2014), LDE (Wang et al., 2016), and MMDW (Tu et al., 2016). Generally, these methods utilize labeled data by guaranteeing both intra-class similarity (i.e., nodes with the same label are close to each other) and inter-class dissimilarity (i.e., nodes with different labels are far from each other) in the embedding space.
1.1. Problem
Unlike existing semi-supervised methods, which assume the labeled data is balanced, i.e., every class has at least one labeled node, this paper considers the completely-imbalanced case where some classes have no labeled nodes at all. This case deserves special attention for two reasons. Firstly, it has practical significance. For instance, consider Wikipedia, which has millions of pages about various topics: it is hard to collect labeled pages for every topic without missing any; moreover, the total number of topics may even be unknown.
Secondly and more importantly, existing semi-supervised methods, which guarantee both intra-class similarity and inter-class dissimilarity, would yield biased results in the completely-imbalanced case. To verify this, we conduct an experiment on the Citeseer dataset (McCallum et al., 2000), setting the label rate to 50% in each class. In the balanced case, we modify the original network by adding edges between nodes with the same label and removing existing edges between nodes with different labels. Then, we apply LINE(1st) to this modified network to obtain the embedding results. In the completely-imbalanced case, we choose Citeseer's first two classes as unseen and the remaining classes as seen, i.e., the nodes from the first two classes are removed from the labeled data. Then, we modify the network and apply LINE(1st) as described above. Finally, the learned embeddings in these two cases are visualized by t-SNE (Maaten & Hinton, 2008) in Figure 1. We can clearly see that the embeddings learned with balanced labels look meaningful, whereas the embeddings learned with completely-imbalanced labels are greatly biased towards the seen classes, with the unseen-class nodes heavily mixed together.
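The network-modification step of this experiment can be sketched as follows. This is an illustrative toy implementation under our own naming (the function `modify_network` and its edge-list representation are assumptions, not the paper's code): every pair of labeled nodes sharing a class gets an edge, and existing edges between labeled nodes of different classes are dropped; unlabeled nodes keep their edges unchanged.

```python
def modify_network(edges, labels):
    """Rewire a graph as in the bias experiment: add an edge between
    every pair of labeled nodes with the same class, and remove existing
    edges between labeled nodes of different classes. `labels` maps a
    node id to its class for labeled nodes only; unlabeled nodes are
    left untouched."""
    new_edges = set(frozenset(e) for e in edges)
    labeled = sorted(labels)
    for idx, a in enumerate(labeled):
        for b in labeled[idx + 1:]:
            if labels[a] == labels[b]:
                new_edges.add(frozenset((a, b)))      # intra-class: add edge
            else:
                new_edges.discard(frozenset((a, b)))  # inter-class: drop edge
    return sorted(tuple(sorted(e)) for e in new_edges)

# Toy path graph 0-1-2, where node 1 has a different label from 0 and 2:
# both original edges cross classes and are removed, and a new intra-class
# edge (0, 2) is added.
print(modify_network([(0, 1), (1, 2)], {0: "A", 1: "B", 2: "A"}))  # → [(0, 2)]
```

After this rewiring, running any structure-only method such as LINE(1st) implicitly enforces intra-class similarity and inter-class dissimilarity, which is exactly why labels from unseen classes cannot contribute and the result becomes biased.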