Machine Creativity

Copyright: © 2022 | Pages: 38
DOI: 10.4018/978-1-7998-7840-7.ch004

Abstract

In this chapter, the theme of machine creativity is examined. The main questions underlying this topic are illustrated by reporting and discussing the views of authors involved in early artificial intelligence (AI) research: scientists engaged in experiments and theoretical studies on expert systems, logical reasoning, logic programming, machine learning, and the philosophy of mind. In addition, the issue of machine intelligence is raised with reference to Alan Turing's question: Can machines think? Knowledge acquisition and representation, as well as machine learning, are also briefly introduced to prepare the ground for discussing the question: Can machines be creative? Finally, a problem that worries many journalists, social philosophers, and the general public is briefly explored, namely: Will technology grow out of human control? Will there be a moment when this transformation becomes irreversible?
Chapter Preview

Introduction

Ada Lovelace, the daughter of the famous Romantic poet Lord Byron, is considered the first programmer for the algorithm she wrote for Babbage’s Analytical Engine (Plant, 1997). In her honor, the U.S. Department of Defense named Ada a programming language developed at the end of the 1970s. Ada Lovelace expressed her ideas about programmable machines in Note G to her English translation of the account of Babbage’s lectures transcribed by the Italian Luigi Menabrea:

It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formula of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. (Toole, 1991, p. 68)

She put forward two principles that were long accepted among engineers and computer programmers:

  • 1. A machine can do whatever we know how to order it to perform.

  • 2. A machine can follow the programmer’s instructions but has no power to produce anything autonomously.

These principles appear to reflect common sense: if a machine is programmed, it is not autonomous, and its behavior therefore depends entirely on the programmer who writes the program. However, this reasoning overlooks a question that has become relevant in light of the advances made since then: Can a machine be programmed to have autonomous behavior?
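As a purely illustrative aside (not drawn from the chapter), the following minimal Python sketch suggests one sense in which a programmed machine’s behavior can go beyond its explicit orders: the programmer specifies only a learning rule, and the preference the agent ends up with emerges from its interaction with a (simulated) environment rather than from any instruction naming that preference. The function name, parameters, and reward probabilities are hypothetical choices made for this example.

    # Illustrative sketch: an epsilon-greedy "bandit" learner.
    # The programmer writes the learning rule, not the final behavior;
    # which action the agent comes to prefer depends on the rewards it
    # happens to observe, not on any explicit order naming that action.
    import random

    def run_bandit(reward_probs, steps=1000, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        estimates = [0.0] * len(reward_probs)  # learned value of each action
        counts = [0] * len(reward_probs)       # how often each action was tried
        for _ in range(steps):
            if rng.random() < epsilon:         # occasionally explore at random
                action = rng.randrange(len(reward_probs))
            else:                              # otherwise exploit current estimates
                action = max(range(len(reward_probs)), key=lambda a: estimates[a])
            reward = 1.0 if rng.random() < reward_probs[action] else 0.0
            counts[action] += 1
            # incremental average: nudge the estimate toward the observed reward
            estimates[action] += (reward - estimates[action]) / counts[action]
        return estimates

    # The preference the learner develops (typically for the highest-payoff
    # action) is nowhere ordered in the code; it emerges from the interaction
    # between the rule and the environment.
    print(run_bandit([0.2, 0.5, 0.8]))

Even in this toy setting the machine remains fully programmed, which is precisely why an operational definition of autonomy is needed, as discussed next.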

To answer this question, one must define, in operational terms, what autonomous behavior is; in other words, what criteria allow a behavior to be identified as autonomous.

In psychology, autonomous behavior refers to self-government and responsible control over one’s actions. Autonomy includes behavioral, emotional, and cognitive self-government, and is essentially the capacity to make free choices.

Nowadays, autonomous vehicles are an example of machine autonomy (Schwarting, Alonso-Mora, & Rus, 2018; Wiseman, 2021). Nevertheless, can one consider an autonomous vehicle as self-directed? And what about the responsibility of a machine? Does the line by Richard L. Gregory (1981, p. 74), that “Machines are seen as free of moral culpability as they are not self-directed, though they can of course be instruments for good or ill”, still hold true?

This is an intricate and still open question of social ethics (Bonnefon, Shariff, & Rahwan, 2016). What, for instance, should the ethical requirements be for algorithms implemented in autonomous vehicles to support decision-making in critical cases, e.g., whether to sacrifice the car’s passengers or to save one or more pedestrians in the event of an impending accident?

Although the question of whether a machine can be creative does not pose such ethical issues, it is no less difficult to answer. To consider the topic of machine creativity, it is therefore useful to first address the old problem of machine intelligence. Indeed, as the previous chapters have illustrated, psychologists have generally considered creativity and intelligence to be strictly related. It will also be necessary to tackle the problems of knowledge acquisition and knowledge representation, which are crucial in AI.

Key Terms in this Chapter

Smart Autonomous Robot: An intelligent machine that acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) or self-learning; it adapts its behavior and actions to its environment.

Popperian Refutation Method: Karl Raimund Popper (1902-1994), an Austrian-British philosopher and one of the most influential philosophers of science of the 20th century, rejected the classical positivist account of the scientific method, according to which a scientific theory is assumed to be true because it has been proven through experiment. Popper argued that if theory A predicts phenomenon p, and phenomenon p is observed through experiment, this does not prove that A is true; at best the theory is corroborated, whereas a single contrary observation is enough to refute it. Popper held that a scientific theory is an act of creation, based more on a scientist’s intuition than on pre-existing empirical data.
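Schematically (an editorial illustration, not part of the chapter’s glossary), the logical asymmetry behind this entry can be written as follows: a confirmed prediction does not establish the theory, while a failed prediction refutes it.

\[
\frac{A \rightarrow p \qquad p}{A}\ \text{(invalid: affirming the consequent)}
\qquad
\frac{A \rightarrow p \qquad \neg p}{\neg A}\ \text{(valid: modus tollens)}
\]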

Subliminal Techniques: Techniques used in marketing and other media to influence people without their being aware of what the messenger is doing. They may involve the use of split-second flashes of text, hidden images, or subtle cues that affect the audience at a level below conscious awareness.

Analytical Engine: A mechanical general-purpose computer designed, and partially built, by the English mathematician and computer pioneer Charles Babbage. It was first described in 1837 and is generally considered the first design for a general-purpose computer.

Heliograf: The Washington Post developed Heliograf to enhance storytelling for large-scale, data-driven coverage of major news events. The technology was first introduced during the 2016 Rio Olympics to assist journalists with reporting the results of medal events ( https://www.washingtonpost.com/pr/2020/10/13/washington-post-debut-ai-powered-audio-updates-2020-election-results/ ).

Intentional Stance: A term coined by Dennett to address “the strategy of interpreting the behavior of an entity (person, animal, artifact, whatever) by treating it as if it were a rational agent who governed its ‘choice’ of ‘action’ by a ‘consideration’ of its ‘beliefs’ and ‘desires’” ( Dennett, 1996 , p. 27).

Autonomous Vehicles: Self-driving vehicles that are expected to be able to operate on public roads without any help from a human driver.

Open-Source Software: Software that is freely distributed with its source code, making it available for use, modification, and redistribution while retaining its original copyright.
