Fundamentals of Graph for Graph Neural Network


Copyright © 2023 | Pages: 18
DOI: 10.4018/978-1-6684-6903-3.ch001

Abstract

Graph theory is the mathematical study of graphs, structures that represent relations between objects by means of pairwise connections. The two primary components of a graph are the vertices, also known as nodes or points, and the edges, which connect the vertices to one another. A graph can therefore be regarded as a visual representation of a mathematical relation. This chapter covers the principles of graph theory.

Introduction

A graph is a useful tool for visually representing any physical scenario that involves distinct objects and some sort of connection between them. Many problems are easy to state and have natural visual representations. Graph theory now has a wide range of real-life applications, such as designing a family tree, modeling a computer network, describing the flow of computation, organizing data, finding the shortest path on a road network, designing circuit connections, parsing a language tree, representing molecular structures, and modeling social networks, among many others. Euler's 1736 publication, in which he solved the Konigsberg bridge problem, is generally regarded as marking the birth of graph theory (Deo, 2017).

The importance of graphs in graph neural networks (GNNs) cannot be overstated. Graphs are the fundamental data structure that GNNs operate on, and they enable the representation of complex relationships and dependencies between entities. In many real-world applications, such as social networks, recommender systems, drug discovery, and traffic flow prediction, the data can be naturally represented as graphs. Graphs provide a flexible and powerful framework for modeling such data and capturing the dependencies between entities (Ray, 2013). GNNs leverage the graph structure to learn meaningful representations of nodes and edges by propagating information across the graph: they use techniques such as message passing and graph convolutions to iteratively aggregate information from neighboring nodes and update node representations. Moreover, graphs provide a natural way to support inductive transfer learning, where representations learned from one graph can be transferred to another graph with a similar structure. This is particularly useful in domains such as drug discovery and recommender systems, where the graph structure is similar across different datasets. In short, the importance of graphs in GNNs lies in their ability to model complex relationships and dependencies between entities, their flexibility in representing different types of data, and their usefulness for inductive transfer learning (Zhou et al., 2020).
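To make the message-passing idea concrete, the following minimal sketch shows one round of aggregating information from neighboring nodes in plain Python with NumPy. The toy graph, feature values, and mean-aggregation rule are illustrative assumptions, not a prescribed GNN architecture.

```python
# A minimal, illustrative sketch of one round of message passing on a small
# graph, using mean aggregation over neighbours. The graph, feature values,
# and the aggregation rule are assumptions chosen for illustration only.
import numpy as np

# Adjacency list of an undirected toy graph: node -> neighbours
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# One feature vector per node (4 nodes, 2 features each)
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0],
                     [0.5, 0.5]])

def message_passing_step(adj, feats):
    """Update each node by averaging its own features with its neighbours'."""
    updated = np.zeros_like(feats)
    for node, neighbours in adj.items():
        neighbourhood = feats[[node] + neighbours]   # self + neighbours
        updated[node] = neighbourhood.mean(axis=0)   # mean aggregation
    return updated

print(message_passing_step(adjacency, features))
```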

A graph may be used to represent a variety of different objects, including social media networks and molecules. In a social media network, consider the nodes to be the users and the edges to be the connections between them. Figure 1 shows an example of what such a graph might look like.

Figure 1. A sample graph for social media
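As an illustration of the kind of graph shown in Figure 1, the short Python sketch below builds a hypothetical social-media graph; the user names and friendships are invented for the example and are not taken from the figure.

```python
# A hypothetical sketch of a social-media graph: users are nodes and
# friendships are undirected edges. Names and connections are invented
# purely for illustration.
friendships = [("Alice", "Bob"), ("Alice", "Carol"),
               ("Bob", "Dave"), ("Carol", "Dave"), ("Dave", "Eve")]

# Build an adjacency representation: user -> set of connected users
network = {}
for u, v in friendships:
    network.setdefault(u, set()).add(v)
    network.setdefault(v, set()).add(u)

for user, friends in network.items():
    print(f"{user} is connected to {sorted(friends)}")
```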

Background of Graph

A network can be represented mathematically as a graph (Deo, 2017), whose purpose is to depict the relationships that exist between points and the lines joining them. The components of a graph are points and the lines that link those points; it makes no difference how long the lines are or where the points are located. Each point of a graph is called a "node" (or vertex). The graph shown in Figure 2 contains five vertices and five edges (Ray, 2013).

Definition: A graph, written G = (V, E), consists of a non-empty set of vertices (or nodes) V and a set of edges E. The letter G identifies the graph; V(G), or simply V, denotes the vertex set of the graph, while E(G), or simply E, denotes its edge set (Deo, 2017).

For example, let G = (V, E) be a graph where V = {P, Q, R, S, T} and E = {{P, Q}, {Q, R}, {P, R}, {P, S}, {P, T}}.

Figure 2. Graph with five vertices and five edges
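The example graph above can be encoded directly in code. The following Python sketch stores the vertex set V and edge set E from the example and derives an adjacency list from them; the adjacency-list construction is an added illustration, not part of the chapter's definition.

```python
# The graph G = (V, E) from the example above, stored as explicit vertex and
# edge sets. The vertices and edges are transcribed from the chapter's example.
V = {"P", "Q", "R", "S", "T"}
E = {frozenset(e) for e in [("P", "Q"), ("Q", "R"), ("P", "R"),
                            ("P", "S"), ("P", "T")]}

# Derive an adjacency list from the edge set (undirected: add both directions)
adj = {v: set() for v in V}
for edge in E:
    a, b = tuple(edge)
    adj[a].add(b)
    adj[b].add(a)

print(len(V), "vertices,", len(E), "edges")   # 5 vertices, 5 edges
print(sorted(adj["P"]))                       # neighbours of P
```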

Directed Graph

A directed graph, or digraph (Trudeau, 2013), is a graph consisting of a set of vertices connected by edges, where each edge also has a direction (Deo, 2017). Figure 3 shows a directed graph with five vertices and five edges.

Figure 3. Directed graph with five vertices and five edges
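The following Python sketch shows one way to store a directed graph with five vertices and five edges, keeping only outgoing neighbors so that edge direction is preserved. The particular edges are assumed for illustration, since the chapter does not list the edges of Figure 3.

```python
# A small sketch of a directed graph with five vertices and five edges.
# The specific edges are an assumption made for illustration.
directed_edges = [("P", "Q"), ("Q", "R"), ("R", "S"), ("S", "T"), ("T", "P")]

# Store outgoing neighbours only, since edge direction matters in a digraph
out_adj = {}
for src, dst in directed_edges:
    out_adj.setdefault(src, []).append(dst)
    out_adj.setdefault(dst, [])  # ensure every vertex appears as a key

for vertex, targets in sorted(out_adj.items()):
    print(f"{vertex} -> {targets}")
```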

Key Terms in this Chapter

Graph Embedding: Graph Embedding is a technique used in Graph Neural Networks (GNNs) to represent each node and the overall graph as a low-dimensional vector or embedding.

Graph Neural Networks (GNNs): GNNs are a type of neural network that is designed to operate on graph-structured data, which is a type of data that is naturally represented as a set of nodes and edges. In a GNN, each node in a graph is associated with a vector representation, which is updated based on the node's own features as well as the features of its neighbors in the graph. The goal of a GNN is typically to perform some kind of prediction or classification task on the graph-structured data, such as predicting the category of a node or predicting the presence of certain types of edges in the graph.
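As a rough illustration of the node-update rule described above, the Python/NumPy sketch below recomputes each node's representation from its own features and those of its neighbors using a single weight matrix and a ReLU, in the spirit of a graph convolution. The adjacency matrix, feature dimensions, and weights are illustrative assumptions; the output is a low-dimensional embedding for each node, in the sense of the Graph Embedding term above.

```python
# An illustrative single-layer, graph-convolution-style node update.
# The graph, feature sizes, and weights are assumptions for demonstration.
import numpy as np

A = np.array([[0, 1, 1, 0],     # adjacency matrix of a 4-node undirected graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)        # current node representations (4 nodes x 3 dims)
W = np.random.rand(3, 2)        # weight matrix mapping 3 dims -> 2 dims

A_hat = A + np.eye(4)           # add self-loops so each node keeps its own features
D_inv = np.diag(1.0 / A_hat.sum(axis=1))      # row-normalisation of the neighbourhood
H_next = np.maximum(0, D_inv @ A_hat @ H @ W)  # averaged neighbourhood, then ReLU

print(H_next.shape)             # (4, 2): new low-dimensional node embeddings
```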

Graph: In mathematics and computer science, a graph is a collection of points, called vertices or nodes, connected by lines or arcs, called edges. Graphs are often used to model relationships between objects, with the nodes representing the objects and the edges representing the relationships between them.

Simple Graph: An undirected graph without parallel edges or self-loops is called a simple graph.
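A small illustrative check of this definition, written as an assumed encoding rather than code from the chapter, might look as follows.

```python
# Check that an edge list describes a simple graph: no self-loops and no
# parallel (repeated) edges. An illustrative encoding of the definition above.
def is_simple_graph(edges):
    seen = set()
    for u, v in edges:
        if u == v:                      # self-loop
            return False
        key = frozenset((u, v))         # undirected: (u, v) is the same as (v, u)
        if key in seen:                 # parallel edge
            return False
        seen.add(key)
    return True

print(is_simple_graph([("P", "Q"), ("Q", "R")]))   # True
print(is_simple_graph([("P", "Q"), ("Q", "P")]))   # False: parallel edge
print(is_simple_graph([("P", "P")]))               # False: self-loop
```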
