1. Introduction
Throughout our lives, we often find ourselves in situations where our well-being (or indeed our survival) depends on the intentions and actions of others. The engineering of secure information systems is only one of many examples. As we cannot fully escape such dependencies, it is essential that we manage them effectively. Trust allows for such management, either by anticipating others' intentions or by influencing them, and in that regard trust plays a central role in almost all human relationships. Alternatively, dependencies can be managed through control (Möllering, 2005; Cofta, 2007).
Trust is a very important psychological and social construct, pervading our lives, contributing to risk-taking, combating complexity and uncertainty, enabling survival and cooperation, facilitating communication and trade, and shaping identity (Cofta, 2007). Unfortunately, the definition of trust is far from settled: trust suffers from a relative conceptual 'fuzziness' and is notoriously hard to specify. Suffice it to say that trust has 17 different definitions (McKnight & Chervany, 1996) and that its meaning varies greatly between disciplines. Conceptualisations of trust (e.g. Li, 2008) and formal ontologies of trust (Huang & Fox, 2006) attempt to bridge this gap.
The unfortunate effect of this situation is that any interdisciplinary discussion about trust (i.e. any discussion that addresses the real position of trust in modern life) falls apart (Cofta & Lacohée, 2006). This makes it nearly impossible to present a coherent view of trust that would make a theory of trust useful to practitioners. As a result, trust, whilst central to our lives, is consistently sidelined in practice by constructs that have developed a more coherent approach, mostly the control-based constructs of security and power.
The most important rift that the authors observed lies between the broadly understood social sciences (including social psychology, cognitive science, economics, politics, etc.) and technology research (most notably digital communication technology and, more specifically, information security). While both frequently use the term 'trust', they apparently perceive it in very different ways.
The purpose of this paper is to propose an approach to trust that bridges this rift and identifies a minimum set of concepts needed to explain a variety of trust-related considerations and actions. Once such a minimum set is accepted, the discussion of trust can be re-structured and common elements can be established, hopefully making trust (in all its richness) ready for a more pragmatic discussion.
Throughout this paper we adopt a behavioural definition of trust as the recognition of another's trustworthiness through one's own trusting behaviour. Such behaviour is usually characterised by a willingness to accept vulnerability and to depend on someone who has been found trustworthy (Mayer, Davis, & Schoorman, 1995). In this context the typical behavioural definition of personal trust states that trust is:
The willingness of a party (the trustor) to be vulnerable to the actions of another party (the trustee), based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.
This paper begins with a brief discussion of the perceived rift that motivates the research presented here, supported by a literature review. Next, we outline a proposition for modelling trust as an interdisciplinary signalling game. The paper then presents the proposition in detail, re-examining trust from the perspective of survival strategies to identify, and then elaborate upon, two main variants of trust.
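To fix intuitions before the detailed treatment, the core mechanics of a signalling game as applied to trust can be sketched in a few lines of code. This is a generic, illustrative simulation only: the trustee types, signalling probabilities, and payoff values below are assumptions chosen for the sketch, not the model proposed in this paper. A trustee of hidden type sends a (possibly deceptive) signal; the trustor decides, on the basis of the signal alone, whether to accept vulnerability.

```python
import random

def play_round(trustee_type, rng):
    """One trustor-trustee interaction: the trustee signals, the trustor
    decides whether to accept vulnerability based only on the signal."""
    # Trustworthy trustees signal honestly; untrustworthy ones mimic the
    # honest signal with some probability (the 0.6 is an assumed value).
    if trustee_type == "trustworthy":
        signal = "cooperate"
    else:
        signal = "cooperate" if rng.random() < 0.6 else "defect"

    # Trustor strategy: extend trust only on a cooperative signal.
    if signal != "cooperate":
        return 0  # trust withheld: no gain, no loss
    # Trust extended: the outcome depends on the trustee's true type.
    return 1 if trustee_type == "trustworthy" else -1

def expected_payoff(p_trustworthy, n=100_000, seed=42):
    """Average trustor payoff when a fraction p_trustworthy of the
    trustee population is actually trustworthy."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        t = "trustworthy" if rng.random() < p_trustworthy else "untrustworthy"
        total += play_round(t, rng)
    return total / n
```

The sketch makes the paper's motivating point concrete: because the signal is cheap to imitate, the trustor's willingness to be vulnerable pays off only when the population of trustees is sufficiently trustworthy, e.g. `expected_payoff(0.9)` is positive while `expected_payoff(0.1)` is negative under these assumed payoffs.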