Explainable Deep Reinforcement Learning for Knowledge Graph Reasoning

Copyright: © 2023 | Pages: 16
DOI: 10.4018/978-1-6684-9189-8.ch012

Abstract

Artificial intelligence faces a considerable challenge in automated reasoning, particularly in inferring missing data from existing observations. Knowledge graph (KG) reasoning can significantly enhance the performance of context-aware AI systems such as GPT. Deep reinforcement learning (DRL), an influential framework for sequential decision-making, is well suited to uncertain and dynamic environments. The definitions of the state space, action space, and reward function in DRL directly dictate performance. This chapter provides an overview of the pipeline and the advantages of leveraging DRL for knowledge graph reasoning. It examines the challenges of KG reasoning and the features of existing studies, and it offers a comparative study of widely used state spaces, action spaces, reward functions, and neural networks. Furthermore, it evaluates the pros and cons of DRL-based methodologies and compares the performance of nine benchmark models across six datasets and four evaluation metrics.

Introduction

Knowledge graphs (KGs) are structured, graph-formatted representations of entities and their relationships, which enables them to capture complicated information in a way that resembles human cognitive processes. The most recent application is the combination of knowledge graph reasoning and large language models (Carta et al., 2023; Meyer et al., 2023; Trajanoska et al., 2023). Knowledge graph reasoning is the process of drawing inferences and summaries from the structured information collected in the KG, and it is widely applied in question answering, information retrieval, and recommendation systems. Its goal is to find missing information in the form of an 'entity-relation-entity' knowledge triple (h, r, t), where the head entity h has the relation r with the tail entity t. For example, given the KG and the query (h, r, ?), the value of t is inferred. In detail, the objective is to model the probabilistic distribution of triples and to learn explicit inference formulas (Liu et al., 2022).
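To make the triple notation concrete, the following minimal sketch (with illustrative entities and relations, not taken from the chapter) stores a KG as a set of (h, r, t) triples and answers a query (h, r, ?) by direct lookup:

```python
# A toy knowledge graph as a set of (head, relation, tail) triples.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def answer_query(head: str, relation: str) -> set[str]:
    """Answer (h, r, ?): return every tail t such that (head, relation, t) holds."""
    return {t for (h, r, t) in triples if h == head and r == relation}

print(answer_query("Paris", "capital_of"))  # {'France'}
```

Real KG reasoning, of course, must go beyond such lookups and infer triples that are not explicitly stored, which is where the methods surveyed below come in.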

The difficulties and challenges in KG reasoning include the following:

1. Knowledge graphs often contain missing links and entities, which requires the ability to predict the missing values.
2. Knowledge graphs often contain incorrect, noisy, or misleading information, which hampers reasoning tasks.
3. The size of a knowledge graph grows rapidly as the number of entities increases, since the number of possible triples scales quadratically with the entity count for each relation type. Scanning and abstracting large knowledge graphs efficiently is difficult.
4. Indirect relationships must be inferred. For example, if A is related to B and B is related to C, a relation between A and C may follow.
5. The reasoning process must be logically consistent.
6. Uncertain and dynamic data are embedded in knowledge graphs.
7. The reasoning process must be interpretable and explainable to earn users' trust.

Existing research can be divided into three categories based on the inference approach: rule-based, embedding-based, and path-based algorithms (X. Wang et al., 2022). Rule-based methods design rules that capture patterns and dependencies for predicting missing information; the quality of the predefined rules directly determines the performance of the algorithm, so domain knowledge is required. Embedding-based methods learn to represent entities and relationships as continuous vectors in a low-dimensional space, and subsequent inference and prediction operate on the learned embeddings; for example, similarity measures can be used to retrieve related entities or links (a translation-model sketch follows this paragraph). Many related works have been proposed in this field, including translation models, graph neural networks, convolutional neural networks, attention mechanisms, and adversarial training; details are discussed in the section 'RELATED WORK.' Path-based methods explore patterns and dependencies by traversing paths from one entity to another via the links, allowing complicated, deep relationships to be learned.
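As one concrete illustration of the embedding-based idea, the sketch below scores triples in the style of a translation model such as TransE, where a triple (h, r, t) is plausible when h + r lies close to t in embedding space. The embeddings here are random placeholders rather than trained vectors, and all entity and relation names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Placeholder embeddings; in practice these are trained on observed triples.
entities = {e: rng.normal(size=dim) for e in ["Paris", "France", "Berlin", "Germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(h: str, r: str, t: str) -> float:
    """Translation-model score: lower distance between h + r and t is better."""
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

# Rank candidate tails for the query (Paris, capital_of, ?).
candidates = sorted(entities, key=lambda t: score("Paris", "capital_of", t))
print(candidates[0])  # best-scoring tail; arbitrary here since embeddings are untrained
```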

On the other hand, knowledge graph reasoning algorithms can be categorized as single-hop or multi-hop based on the number of intermediate hops traversed between entities (Zhou et al., 2021). Single-hop reasoning studies only direct connections: it infers a relation between A and B from the triple (A, r, B) without traversing any intermediate node. Multi-hop reasoning studies paths through multiple entities: for example, if A is related to B, B is related to C, and C is related to D, multi-hop reasoning studies the path from A to D (see the traversal sketch below).
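The following minimal sketch of multi-hop traversal, assuming a toy graph with illustrative entities A through D, enumerates the relation paths from a head entity to a target within a fixed hop budget:

```python
from collections import defaultdict

edges = defaultdict(list)  # head -> [(relation, tail), ...]
for h, r, t in [("A", "knows", "B"), ("B", "knows", "C"), ("C", "works_at", "D")]:
    edges[h].append((r, t))

def find_paths(start, target, max_hops):
    """Enumerate relation paths from start to target within max_hops."""
    stack = [(start, [])]
    while stack:
        node, path = stack.pop()
        if node == target and path:
            yield path
        if len(path) < max_hops:
            for r, t in edges[node]:
                stack.append((t, path + [r]))

print(list(find_paths("A", "D", max_hops=3)))  # [['knows', 'knows', 'works_at']]
```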

Key Terms in this Chapter

Neural Network: A framework of interconnected layers of weighted neurons, inspired by the human brain.

Knowledge Graph: A structured, graph-formatted representation of entities and their relationships, which enables knowledge graphs to efficiently capture complicated information in a way that resembles human cognitive processes. The most recent application is the combination of knowledge graph reasoning and large language models.

Markov Decision Process: A decision-making process in which the system moves between states in discrete time steps. Actions are selected and states evolve according to transition probabilities, and the whole procedure is reward-driven.

Deep Reinforcement Learning: A learning framework based on the Markov decision process, in which a series of optimal actions is learned through repeated interaction between agent and environment (a minimal formulation for KG reasoning is sketched after this list).

Knowledge Reasoning: An inference-making and summarization process based on the structured information collected in the KG.
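The key terms above can be tied together in a minimal sketch of how KG reasoning is commonly cast as a Markov decision process for DRL. This is an illustrative formulation under stated assumptions, not the chapter's implementation: the state tracks the query and the agent's current entity, actions are the outgoing edges, and a terminal reward of 1 signals that the correct answer entity was reached.

```python
from collections import defaultdict

edges = defaultdict(list)  # toy graph with hypothetical entities and relations
for h, r, t in [("A", "r1", "B"), ("B", "r2", "C")]:
    edges[h].append((r, t))

class KGEnv:
    """Assumed MDP formulation: state = (query head, query relation, current entity, step)."""
    def __init__(self, query_head, query_relation, answer, max_hops=3):
        self.answer, self.max_hops = answer, max_hops
        self.state = (query_head, query_relation, query_head, 0)

    def actions(self):
        _, _, cur, _ = self.state
        return edges[cur]  # available (relation, tail) edges from the current entity

    def step(self, action):
        h, r, _, n = self.state
        _, tail = action
        self.state = (h, r, tail, n + 1)
        done = tail == self.answer or n + 1 >= self.max_hops
        reward = 1.0 if tail == self.answer else 0.0  # sparse terminal reward
        return self.state, reward, done

env = KGEnv("A", "connected_to", "C")
s, rew, done = env.step(env.actions()[0])  # hop A -> B
s, rew, done = env.step(env.actions()[0])  # hop B -> C, the answer
print(rew, done)  # 1.0 True
```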
