Multi-Level Testing Approach for Multi-Agent Systems


Yacine Kissoum, Mohammed Redjimi
Copyright: © 2022 | Pages: 23
DOI: 10.4018/IJOCI.304883

Abstract

Software development cycles require product testing, yet the testing phase of multi-agent systems remains critically underdeveloped. This calls for an investigation of appropriate testing techniques in order to provide adequate software development processes and supporting tools. Among existing testing solutions, the model-based testing (MBT) technique has gained attention with the popularization of models in both software design and development. This technique uses a so-called abstract test model to generate abstract test cases. After their concretization, concrete test cases are submitted to the system under test, and the system's outputs are finally compared with the expected results of the abstract test model. In this context, a model-based testing approach for multi-agent systems based on the Reference net paradigm is proposed in this paper. A running example, supported by a multi-agent testing prototype that aims at simplifying and providing a uniform and automated way of testing multi-agent systems, is presented and discussed.

1. Introduction

Agents and multi-agent systems (Dorri et al., 2018; Ferber and Gutknecht, 1998; Abar et al., 2017) are a useful technology for building complex applications that operate in dynamic domains, often distributed over multiple sites (Ge et al., 2018; Xie and Liu, 2017; Calvaresi et al., 2017; González-Briones et al., 2018; Singh and Chopra, 2017). Testing multi-agent systems is an important task of the development cycle and calls for new methods that deal with the specific nature of such systems (Earle and Fredlund, 2019; Ashamalla et al., 2017). These methods need to be effective and adequate to evaluate agents' behaviours and to build solid user confidence in software correctness (Kissoum and Sahnoun, 2006; Barnier et al., 2017; Elkholy et al., 2020; Mecheraoui et al., 2020).

Among existing testing solutions, the model-based testing (MBT) technique has gained attention with the popularization of models in both software design and development (Prenninger et al., 2005; Kerraoui et al., 2016; Kissoum et al., 2010). The idea of this technique is to have a model of the system and to use it to generate sequences of inputs and expected outputs. The inputs, after concretization, are applied to the system under test, and the system's outputs are compared with the model's outputs as given by the generated sequence (Ouriques et al., 2018; Marques et al., 2014; Ahmad et al., 2019; Petry et al., 2020; Dias et al., 2007; Muniz et al., 2015; Lie et al., 2017; Schieferdecker, 2012).
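
To make the MBT loop concrete, the following minimal Python sketch illustrates the idea under simplified assumptions: a toy transition table stands in for the abstract test model, and a stand-in function plays the role of the system under test. The names used (MODEL, generate_abstract_tests, run_test) are illustrative only and are not taken from the Reference-net-based prototype described in this paper.

```python
# Minimal illustration of the MBT loop: a toy state-machine model of an agent
# protocol generates (input, expected-output) sequences; the inputs are fed to
# a hypothetical system under test and the observed outputs are compared.

# Toy abstract test model (purely illustrative transition table):
# (state, input message) -> (expected reply, next state)
MODEL = {
    ("idle", "request"): ("agree", "busy"),
    ("busy", "request"): ("refuse", "busy"),
    ("busy", "done"):    ("inform", "idle"),
}

def generate_abstract_tests(model, start="idle", depth=3):
    """Enumerate input sequences with their expected outputs up to `depth` steps."""
    frontier = [([], [], start)]
    for _ in range(depth):
        nxt = []
        for inputs, expected, state in frontier:
            for (s, msg), (out, s2) in model.items():
                if s == state:
                    nxt.append((inputs + [msg], expected + [out], s2))
        frontier = nxt
    return [(inputs, expected) for inputs, expected, _ in frontier]

def run_test(sut, inputs, expected):
    """Submit the (concretized) inputs to the SUT and compare with the model's outputs."""
    actual = [sut(msg) for msg in inputs]
    return actual == expected

if __name__ == "__main__":
    # Stand-in SUT: in practice this would be the deployed agent platform.
    state = {"s": "idle"}
    def sut(msg):
        out, state["s"] = MODEL[(state["s"], msg)]
        return out

    for inputs, expected in generate_abstract_tests(MODEL):
        state["s"] = "idle"   # reset the stand-in SUT before each test case
        verdict = "PASS" if run_test(sut, inputs, expected) else "FAIL"
        print(verdict, inputs, expected)
```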

It is important to note that the number of possible tests is normally very large or even infinite, whereas only a finite number of tests can be executed during the testing phase. A finite selection from the infinite exhaustive test suite is therefore necessary. In addition, these test cases have to be concretized. In other words, the concretization step acts as a translator that bridges the abstraction gap between the test model and the system under test by adding missing information and translating entities of the abstract test case into concrete constructs of the test platform's input language. This step is difficult because it generally involves complex algorithms combined with human tester intervention.
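
The following hedged sketch illustrates only the idea of concretization, not the paper's prototype: a small adapter translates an abstract test step into a concrete platform message by adding details (receiver address, content language, ontology) that the abstract model omits. All names used here (AbstractStep, concretize, the FIPA-style fields) are assumptions introduced for illustration.

```python
# Sketch of the concretization step: bridging the abstraction gap by mapping
# entities of an abstract test case onto concrete constructs of the test
# platform's input language.
from dataclasses import dataclass

@dataclass
class AbstractStep:
    performative: str   # e.g. "request", as it appears in the abstract test model
    content: str        # abstract payload, e.g. "book-room"

def concretize(step: AbstractStep, receiver: str) -> dict:
    """Translate an abstract step into a concrete message for the test platform.

    The receiver address, content language, and ontology are the kind of
    missing information the concretization step has to supply.
    """
    return {
        "performative": step.performative.upper(),
        "receiver": receiver,                          # concrete agent address
        "content": f"(action {receiver} {step.content})",
        "language": "fipa-sl",                         # platform-level detail
        "ontology": "booking-ontology",                # platform-level detail
    }

if __name__ == "__main__":
    abstract_case = [AbstractStep("request", "book-room"),
                     AbstractStep("cancel", "book-room")]
    concrete_case = [concretize(s, "hotel-agent@platform") for s in abstract_case]
    for msg in concrete_case:
        print(msg)
```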

To address these problems, we propose a new approach based on a simple testing framework that lets developers build a test suite cheaply and incrementally. The proposed approach supports the developer in creating and executing tests in a uniform and automatic way: uniform because all tests follow the same execution pattern, and automatic because the concretization stage requires no user intervention, either during execution or to decide whether a test has passed or failed. A sketch of such a uniform execution pattern is given below.
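
As a purely illustrative sketch of what a uniform and automatic execution pattern can look like (again, an assumption for exposition, not the actual prototype described later in the paper), every test below is reduced to the same setup/stimulus/oracle triple, and its verdict is computed automatically rather than judged by a tester:

```python
# Every test follows the same pattern: set up the system under test, apply a
# stimulus, and let an oracle compute the verdict automatically.
from typing import Any, Callable, List

class UniformTest:
    def __init__(self, name: str,
                 setup: Callable[[], Any],
                 stimulus: Callable[[Any], Any],
                 oracle: Callable[[Any], bool]):
        self.name = name
        self.setup = setup
        self.stimulus = stimulus
        self.oracle = oracle

    def run(self) -> bool:
        sut = self.setup()            # same three phases for every test
        observed = self.stimulus(sut)
        return self.oracle(observed)  # verdict computed, not inspected manually

def run_suite(tests: List[UniformTest]) -> None:
    for t in tests:
        print(f"{t.name}: {'PASS' if t.run() else 'FAIL'}")

if __name__ == "__main__":
    # Toy example: an "agent" that simply echoes requests.
    tests = [UniformTest("echo-reply",
                         setup=lambda: (lambda msg: msg),
                         stimulus=lambda agent: agent("ping"),
                         oracle=lambda out: out == "ping")]
    run_suite(tests)
```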

The rest of this paper is organized as follows: Section 2 presents theoretical background on software engineering, software quality, and software testing, together with some dedicated tools. Section 3 discusses significant multi-agent system testing approaches. Section 4 gives a short introduction to the multi-agent net architecture. Section 5 describes the proposed approach. Section 6 presents a simple running example. Section 7 provides a concluding discussion of the proposed approach. Finally, Section 8 concludes this work with open issues and future work.
