Introduction
Over the last 60 years automation has played an increasing role in our society. As direct human control of systems has declined, supervision of automated systems has become commonplace; the spread of the computer has made this almost inevitable. Computer-controlled systems are generally more reliable than operator-controlled ones. However, when a computer system does fail, our society demands that some backup human control, or at least human monitoring, exists.
In 1972 Eastern Airlines Flight 401 crashed while the crew were trying to sort out a problem with the undercarriage during the approach to landing; unnoticed by them, the autopilot had disengaged. Preoccupied with the landing procedures, neither officer noticed the problem. The report (NTSB, 1992) concluded that the crew had become complacent and had not monitored the instruments effectively.
This crash, and many other similar events, exhibit pilot over-reliance on automation (Lee & Moray, 1992; Mosier & Skitka, 1994; Riley, 1994). On June 30th 1994 an Airbus A330 crashed while on a test flight intended to ascertain how well the autopilot could control an engine-out situation under different loading conditions. A later investigation concluded that the crew were overconfident and did not intervene early enough to prevent the accident; it was alleged that had they responded 4 seconds earlier, the accident could have been avoided.
Parasuraman (1993) gives a pertinent example of a system where the staff are poorly trained, poorly paid and have little opportunity for advancement: the staff who monitor x-ray apparatus at airports, screening for weapons and hazardous or explosive materials. Despite long hours and poorly presented information, they have an excellent detection record, exceeding 90%.
The accident at the Three Mile Island nuclear plant is another example of a critically complex system in which the monitoring process failed. Bignell and Fortune (1984) show that the failure of one valve, and the way that information was presented to the operators, was the overwhelming cause of the disaster. The confusion in the data and the number of signals to be monitored were too great for effective intervention.
Sheridan (1997) describes the whole process by which human interaction with control systems can be categorised. His description of the stages of supervisory control is reproduced here for clarity. There are five components in Sheridan's description: a sensor, an actuator, a display, a controller and a computer (Figure 1).
He details the five stages required for supervisory control:
- Planning off-line what tasks to do;
- Teaching or programming the computer;
- Monitoring the automatic action on-line to detect failures;
- Intervening, i.e. taking over control in emergencies;
- Learning from experience to do better in future.
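The stages above can be sketched as a minimal simulation of a supervisory control loop. This is only an illustrative model, not Sheridan's formal framework: the setpoint regulation, failure drift and intervention threshold are invented for the sketch, while the roles (programmed computer control, human monitoring of a display, intervention on detected failure) follow the five stages listed in the text.

```python
class SupervisedSystem:
    """A computer regulates a process toward a programmed setpoint;
    a sensor/display reports the current state to the human supervisor.
    (Hypothetical dynamics, for illustration only.)"""

    def __init__(self, setpoint):
        self.setpoint = setpoint   # stages 1-2: planned task, programmed goal
        self.state = setpoint
        self.failed = False        # a failure makes automatic control drift

    def step(self):
        if not self.failed:
            # Normal automatic control: close half the error each step.
            self.state += 0.5 * (self.setpoint - self.state)
        else:
            # After a failure the state drifts uncontrolled.
            self.state += 1.0

    def display(self):
        # What the human supervisor can observe.
        return self.state


def supervise(system, steps, tolerance):
    """Stages 3-4: monitor the display on-line and intervene (here,
    simply restoring automatic control) when the error grows too large."""
    interventions = 0
    for _ in range(steps):
        system.step()
        error = abs(system.display() - system.setpoint)
        if error > tolerance:      # failure detected via the display
            system.failed = False  # intervention: take over / reset control
            interventions += 1
    return interventions
```

In this sketch the supervisor never sees the system's internal `failed` flag, only the displayed state, mirroring the point made about Three Mile Island: detection depends entirely on how the information is presented.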
In his seminal paper on trends in man-machine systems, Sheridan (1985) relates how the operator's mental model (Figure 2) affects all parts of the control process.
Figure 2. Mental model modified from Sheridan