Bayesian Agent Adaptation in Complex Dynamic Systems

Mair Allen-Williams, Nicholas R. Jennings
DOI: 10.4018/978-1-60566-669-3.ch007

Abstract

Multi-agent systems draw together a number of significant trends in modern technology: ubiquity, decentralization, openness, dynamism and uncertainty. As work in these fields develops, such systems face increasing challenges. Two particular challenges are decision making in uncertain and partially observable environments, and coordination with other agents in such environments. Although uncertainty and coordination have been tackled as separate problems, formal models for an integrated approach are typically restricted to simple classes of problem and are not scalable to problems with many agents and millions of states. We improve on these approaches by extending a principled Bayesian model into more challenging domains, using heuristics and exploiting domain knowledge in order to make approximate solutions tractable. We show the effectiveness of our approach applied to an ambulance coordination problem inspired by the RoboCup Rescue system.
Chapter Preview

Introduction

As computing power and ubiquity increase, the use of multi-agent technology in complex distributed systems is becoming more widespread. Consequently, the scalability of such systems is becoming increasingly important. Furthermore, the inherent dynamism in many of these problems calls for timely online responses, rather than the offline computation of strategies. As an example, consider a street taxi business. Between fares, the taxis roam the streets looking for custom. If taxis are able to automatically share their locations using GPS and a dashboard display, they can attempt to spread out over different areas. Now, for multi-agent problems in which the complete state is not observed by any one agent, recent work has advanced the state of the art for finding offline solutions in networks of stationary agents, with solutions in systems containing at least fifteen agents (Marecki et al., 2008). Building on this, in this chapter we describe a related online approach which is suitable for complex dynamic processes with mobile agents, and which is likewise scalable to tens of agents. In order to provide a focus, and as a motivation for this work, we will consider the disaster response domain. Disaster scenarios form rich grounds for multi-agent distributed problem solving, allowing us to explore several features of complex multi-agent problems. While there are many characteristics which may be present in disaster scenarios, we will find that there are two common themes: uncertainty and coordination.

The first of these, uncertainty, may concern the environment (“What’s going on?”) and the agent’s position in the environment (“Where am I?”); it may be about any other agents which might exist in the environment (“Who else is around? Where are they?”) and their behavior (“What are they going to do?”). In these uncertain situations, each agent must do some form of discovery to determine the essential characteristics of the situation, including the agent’s collaborators, before and alongside directly working to achieve its goals. This discovery phase in a multi-agent system is tightly linked with the presence of other agents in the system. As well as determining which other agents are present, agents may be able to cooperate to search over different regions, sharing information with each other as appropriate.

In addition to explicitly sharing information, observing the behavior of the other agents allows an autonomous agent to make inferences about the system. For example, in a scenario involving a burning building, a rational agent will not enter the building. Outside the disaster domain, consider a car manufacturing company which receives orders from a regular customer base of car dealerships and can make judgements about local economics based on those orders. Beyond discovery, there will continue to be interaction between the agents in a multi-agent system, whether explicit via communications and negotiations, or implicit through activity. Achieving some subgoals may involve a collaboration between several agents, as in a rescue operation where two ambulance crew members are required to carry a stretcher, or a car manufacturer where different parts are manufactured in different local factories.

Now, this general problem of taking others into account, coordination, is the second key issue we have identified for multi-agent systems. In uncertain, changing or open systems, fixed protocols for coordination must function against a background in which agents are not fully aware of the situation: their environment, the resources available to them, or the behavior of the other agents. For example, a particular car part manufacturer may be manufacturing parts in assorted colours without necessarily knowing what orders are coming in or whether a particular colour is newly in vogue in one of the key towns supplied by the factories. The negotiation of coordinated behavior in such systems is intertwined with the discovery phase, as agents interact with one another, perhaps cooperating to determine properties of the situation. Another example might be a team of milkmen needing to prepare for adjustments to their regular standing orders: they will expect that in the summer customers are more likely to go on holiday; they may be able to make inferences about school holidays within a particular catchment area; and the teams may be able to compare notes when they meet at the depot.

Key Terms in this Chapter

Bayesian probability: An interpretation of probability which describes probability as a “personal belief”, formed by combining a prior belief with observed information.

Disaster Response: Large-scale disasters include earthquakes, fires and terrorist attacks, and require a timely, coordinated multi-agency response.

Agent: An agent receives input from the environment through its sensors and interacts with the environment to try to achieve some goal.

Finite State Machine: A finite state machine has a set of internal states, and rules for movement between internal states. When describing the behavior of an intelligent agent, internal states prescribe actions, and movement between states is conditioned on observations from the environment.
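As an illustration, here is a minimal finite state machine sketch in Python for a patrolling agent; the state names, observations and actions are invented for this example rather than taken from the chapter.

# A minimal finite-state-machine sketch (state names, observations and
# actions are invented). Each internal state prescribes an action, and
# movement between internal states is conditioned on observations.

ACTIONS = {"patrol": "roam_streets", "respond": "drive_to_incident"}

TRANSITIONS = {
    ("patrol", "incident_reported"): "respond",
    ("patrol", "nothing_seen"): "patrol",
    ("respond", "incident_cleared"): "patrol",
    ("respond", "nothing_seen"): "respond",
}

def step(state, observation):
    """Take the action prescribed by the current state, then transition."""
    action = ACTIONS[state]
    next_state = TRANSITIONS.get((state, observation), state)
    return action, next_state

state = "patrol"
for obs in ["nothing_seen", "incident_reported", "incident_cleared"]:
    action, state = step(state, obs)
    print(obs, "->", action, "->", state)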

Uncertainty: An agent in an uncertain environment does not know all the parameters of that environment.

Coordination: When several agents are interacting with the same environment, their actions may affect one another, directly or indirectly; coordination is the process of taking these effects into account.

Markov Decision Process: In a Markov Decision Process, changes in the environment in response to an agent’s actions are determined only by the current state and action, and not by any historical information.
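The Markov property can be made concrete with a small sketch: the transition distribution is keyed only on the current state and action, so no history is needed. The toy states, probabilities and rewards below are assumptions for illustration, not from the chapter.

import random

# A toy Markov Decision Process sketch (states, probabilities and rewards
# are invented). The next state depends only on the current state and
# action, never on earlier history.

# T[state][action] = list of (next_state, probability)
T = {
    "road":  {"wait": [("road", 1.0)],
              "move": [("scene", 0.8), ("road", 0.2)]},
    "scene": {"wait": [("scene", 1.0)],
              "move": [("road", 1.0)]},
}

REWARD = {("scene", "wait"): 1.0}  # e.g. treating casualties at the scene

def sample_next(state, action):
    """Sample a successor state from the transition distribution."""
    outcomes, weights = zip(*T[state][action])
    return random.choices(outcomes, weights=weights)[0]

state, total = "road", 0.0
for action in ["move", "wait", "wait"]:
    total += REWARD.get((state, action), 0.0)
    state = sample_next(state, action)
print("final state:", state, "total reward:", total)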

Bayes’ rule: Bayes’ rule is the equation specifying how to update beliefs about the world, given new information: P(world = w | observations) ∝ P(observations | world = w) P(world = w).
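As a worked example in Python (with invented numbers): suppose an agent believes a road is blocked with prior probability 0.3, then receives a “blocked” report from a scout who is right 90% of the time. Normalising the products of likelihood and prior gives the posterior.

# A worked Bayes' rule update; the scenario and numbers are invented.
prior = {"blocked": 0.3, "clear": 0.7}       # P(world = w)
likelihood = {"blocked": 0.9, "clear": 0.1}  # P(report "blocked" | world = w)

# Unnormalised posterior: P(obs | w) * P(w) for each possible world w.
unnorm = {w: likelihood[w] * prior[w] for w in prior}
evidence = sum(unnorm.values())              # P(obs), the normalising constant

posterior = {w: p / evidence for w, p in unnorm.items()}
print(posterior)  # {'blocked': 0.794..., 'clear': 0.205...}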

Belief state: A belief state encapsulates the beliefs an agent has about its current state: that is, probability distributions for each variable within the state.
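Continuing the sketch above, a belief state can be represented as one probability distribution per unobserved state variable, each refined with Bayes’ rule as observations arrive; the variables and probabilities here are again invented for illustration.

# A minimal belief-state sketch (variable names and numbers are invented):
# the agent keeps a distribution over each unobserved state variable and
# refines it with Bayes' rule as observations arrive.

belief = {
    "my_location": {"north": 0.5, "south": 0.5},
    "fire_active": {True: 0.6, False: 0.4},
}

def update(dist, likelihood):
    """Bayes update of one variable's distribution, given P(obs | value)."""
    unnorm = {v: likelihood[v] * p for v, p in dist.items()}
    z = sum(unnorm.values())
    return {v: p / z for v, p in unnorm.items()}

# The agent sees smoke; smoke is far more likely if the fire is active.
belief["fire_active"] = update(belief["fire_active"],
                               {True: 0.8, False: 0.1})
print(belief["fire_active"])  # True now has probability ~0.92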
