Introduction
Searle's “Chinese Room” argument, I take it, establishes that the behavioristic Turing Test criterion does not afford a standard that is theoretically sufficient for discriminating between the causal properties of systems with and without mentality or, as he uses the term, intelligence (Searle, 1984). That is because it does not distinguish between the input/output behavior of systems involving minds, the input/output behavior of systems using minds combined with look-up tables, and the input/output behavior of mindless look-up tables alone (Fetzer, 1995). Properly understood, therefore, Searle's argument supports the need to differentiate between relations of simulation, where systems display the same input/output behavior; of replication, where simulations are brought about by the same or by similar processes; and of emulation, where those replications are produced by systems composed of the same kind of stuff (Fetzer, 1990).
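The look-up-table point can be made concrete in a few lines. The following sketch (all names hypothetical, not drawn from the sources cited) pairs a responder that computes its answers with a mindless table that merely retrieves them; on the inputs the table covers, the two are indistinguishable by input/output behavior alone:

```python
# Hypothetical illustration: a mindless look-up table whose input/output
# behavior matches a "mindful" responder on the inputs it covers, showing
# why I/O equivalence alone cannot discriminate systems with mentality
# from systems without it.

LOOKUP_TABLE = {
    "Do you understand Chinese?": "Yes, perfectly.",
    "What is 2 + 2?": "4",
}

def mindful_responder(prompt: str) -> str:
    """Stands in for a system that answers by working something out."""
    if prompt == "What is 2 + 2?":
        return str(2 + 2)          # computed, not retrieved
    return "Yes, perfectly."

def lookup_responder(prompt: str) -> str:
    """Answers by retrieval alone; no semantics involved."""
    return LOOKUP_TABLE[prompt]

# Identical input/output behavior on every covered input:
for prompt in LOOKUP_TABLE:
    assert mindful_responder(prompt) == lookup_responder(prompt)
print("indistinguishable by I/O alone")
```

The Turing Test, observing only prompts and replies, has no purchase on the difference between the two functions.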
Since simulation is the weakest similarity relation between animate and inanimate systems, the question I am going to address is whether an inanimate system, such as a robot, can simulate non-trivial behavior that humans display as the effects of their internal states of motives, beliefs, ethics, abilities, and capabilities, relative to those systems' opportunities (the historical situations in which specific behaviors take place). I have in mind the actual behavior of real persons living their historical lives. One reason for thinking not is that digital computers (classic von Neumann machines) are not possessors of minds. At one time, I supposed that this was the key to my argument. But today I think that the ontic and epistemic problems that matter to this question apply across the board, even to other systems that have mentality.
Suppose, for example, that the ontic problems confronted here involve the complex causal interplay of values of those kinds, which might assume the form of deterministic causation (where the same cause yields the same effect, in every case without exception) or of indeterministic causation (where the same cause yields one or another effect within the same class of possible outcomes, without exception). But bear in mind that some classes of cases of deterministic causation are chaotic (which entails acute sensitivity to initial conditions), where the least change can bring about the most drastic alteration in effects, such as the use of a comma instead of a period in the program for Mariner I, which has been described as the most expensive grammatical mistake in history (Littlewood & Strigini, 1992), though exactly which mistake occurred is disputed (Wikipedia, "Mariner 1").
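The sensitivity at issue can be illustrated with a standard textbook example (my illustration, not one the cited authors use): the logistic map at r = 4. The map is strictly deterministic, in that the same input always yields the same output, yet trajectories from initial conditions differing by one part in ten billion diverge within a few dozen iterations:

```python
# Illustration of deterministic yet chaotic behavior: the logistic map
# x' = r * x * (1 - x) at r = 4.0. Same cause, same effect in every run,
# but the least change in initial conditions drastically alters the
# long-run effects.

def trajectory(x0: float, steps: int, r: float = 4.0) -> float:
    """Iterate the logistic map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)   # perturbed by one part in ten billion

assert trajectory(0.2, 50) == a   # determinism: same input, same output
assert a != b                     # chaos: tiny perturbation, different outcome
print(abs(a - b))                 # the accumulated divergence
```

The point is not that such systems are uncaused but that predicting their effects requires initial conditions specified to a precision no measurement can supply.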
Even though future connectionist machines, which employ networks of neuron-like nodes, might eventually be developed that possess the mental capabilities of human minds (a possibility more subtle than it seems, since meanings and minds depend upon their bodies and behavioral abilities), the prospects for simulations in the mode of replication or of emulation will still tend to be unrealizable, in theory as well as in practice, for similar ontic and epistemic reasons (Fetzer, 1992, 1996). Although scripted or stereotypical behaviors, such as restaurant behavior, conventional exchanges, and ordinary discourse, initially appear to pose no problems for the simulation of input/output behavior, they are subject to parallel constraints, since targets may not follow the script, especially when they are affected by unconscious or subconscious factors of which they are unaware.