Collaborative interactions are exchanges carried out in real time, for example over the Internet, such as chat, video/audio conferencing, and shared applications.
Published in Chapter:
Classroom Critical Incidents
John M. Carroll (The Pennsylvania State University, USA), Dennis C. Neale (Virginia Tech, USA), and Philip L. Isenhour (Virginia Tech, USA)
Copyright: © 2009
Pages: 7
DOI: 10.4018/978-1-60566-198-8.ch040
Abstract
Evaluating the quality and effectiveness of user interaction in networked collaborative systems is difficult. There is more than one user, and often the users are not physically proximal. The “session” to be evaluated cannot be comprehensively observed or monitored at any single display, keyboard, or processor. Typically, none of the human participants has an overall view of the interaction (a common source of problems for such interactions). The users are not easily accessible either to evaluators or to one another. In this article we describe an evaluation method that recruits the already-pervasive medium of Web forums to support the collection and discussion of user critical incidents. We describe a Web forum tool created to support this discussion, the Collaborative Critical Incident Tool (CCIT). The notion of “critical incident” is adapted from Flanagan (1956), who debriefed test pilots in order to gather and analyze episodes in which something went surprisingly well or badly. Flanagan’s method has become a mainstay of human factors evaluation (Meister, 1985). In our method, users can post a critical incident report to the forum at any time. Subsequently, other users, as well as evaluators and system developers, can post threaded replies. This improves the critical incident method by permitting follow-up questions and other conversational elaboration and refinement of the original reports.
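The forum structure the abstract describes (a critical incident report that accumulates threaded replies from users, evaluators, and developers) can be sketched as a simple recursive data model. This is a minimal illustration, not the actual CCIT implementation; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    """A forum post; replies nest to form a discussion thread."""
    author: str
    body: str
    replies: List["Post"] = field(default_factory=list)


@dataclass
class CriticalIncidentReport(Post):
    """Top-level post reporting an episode that went surprisingly well or badly."""
    valence: str = "negative"  # hypothetical field: "positive" or "negative"


def thread_depth(post: Post) -> int:
    """Depth of the reply thread rooted at this post (1 = no replies)."""
    if not post.replies:
        return 1
    return 1 + max(thread_depth(reply) for reply in post.replies)


# A user posts an incident; an evaluator asks a follow-up; the user refines the report.
report = CriticalIncidentReport(
    author="student",
    body="The shared whiteboard froze during our group session.",
)
report.replies.append(
    Post(author="evaluator", body="What were you doing just before it froze?")
)
report.replies[0].replies.append(
    Post(author="student", body="Dragging an image while a partner was typing.")
)
```

The nesting is what distinguishes this from Flanagan's one-shot debriefing: each reply can itself be replied to, so an original report can be elaborated and refined through conversation.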