A Free Service of IGI Global Publishing House
Below is a list of definitions for the selected term, drawn from multiple scholarly research resources.

What are Autonomous Agents?

Handbook of Research on Technoethics
Programs or programmed devices that act autonomously, without supervision by a human, often in a remote location (e.g., on a remote server or on another planet). Since such agents are by definition required to operate without supervision, attributing responsibility for their actions to a human is especially difficult.
Published in Chapter:
From Coder to Creator: Responsibility Issues in Intelligent Artifact Design
Andreas Matthias (Lingnan University, Hong Kong)
Copyright: © 2009 |Pages: 16
DOI: 10.4018/978-1-60566-022-6.ch041
Abstract
Creation of autonomously acting, learning artifacts has reached a point where humans can no longer justly be held responsible for the actions of certain types of machines. Such machines learn during operation, thus continuously changing their original behaviour in ways uncontrollable by the initial manufacturer. They act without effective supervision and have an epistemic advantage over humans, in that their extended sensory apparatus, their superior processing speed, and their perfect memory render it impossible for humans to supervise the machine's decisions in real time. We survey the techniques of artificial intelligence engineering, showing that there has been a shift in the role of the programmer of such machines from a coder (who has complete control over the program in the machine) to a mere creator of software organisms which evolve and develop by themselves. We then discuss the problem of responsibility ascription to such machines, trying to avoid the metaphysical pitfalls of the mind-body problem. We propose five criteria for purely legal responsibility, which are in accordance both with the findings of contemporary analytic philosophy and with legal practice. We suggest that Stahl's (2006) concept of "quasi-responsibility" might also be a way to handle the responsibility gap.
Full Text Chapter Download: US $37.50 Add to Cart
More Results
Narrative Learning Environments
These are Artificial Intelligence programs that have internal goals to achieve and can decide, without direct human intervention, which actions to execute. They are typically used to implement virtual characters in interactive narrative learning environments.
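The core idea in this definition — an agent holding an internal goal and selecting its own actions without outside intervention — can be illustrated with a minimal sketch. The class and method names (GoalSeekingAgent, decide, step) and the toy one-dimensional world are illustrative assumptions, not taken from any of the cited chapters.

```python
# Minimal sketch of an autonomous agent with an internal goal,
# assuming a toy one-dimensional world. All names are illustrative.

class GoalSeekingAgent:
    """Agent that moves toward an internal goal position,
    choosing each action from its own state alone."""

    def __init__(self, position: int, goal: int):
        self.position = position
        self.goal = goal  # internal goal the agent tries to achieve

    def decide(self) -> int:
        # Select an action (step direction) autonomously,
        # with no external input.
        if self.position < self.goal:
            return 1
        if self.position > self.goal:
            return -1
        return 0  # goal reached: no further action

    def step(self) -> None:
        self.position += self.decide()


agent = GoalSeekingAgent(position=0, goal=3)
while agent.position != agent.goal:
    agent.step()
print(agent.position)  # prints 3
```

A virtual character in a narrative environment would replace the numeric goal with narrative goals (e.g., reaching a plot point) and the step action with dialogue or movement choices, but the decide-then-act loop is the same.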
Contract Negotiation in E-Marketplaces
In pure autonomous agent systems, the designer's concern is with the performance of the individual agent; system-level performance is left to emerge from the agents' interactions, without taking into consideration the interdependency between the system's components.
Multi-Agent Models in Healthcare System Design
Independent entities within a multi-agent system (MAS) that make decisions based on the information available to them.