On the Development of Adaptive and User-Centred Interactive Multimodal Interfaces

David Griol, Zoraida Callejas, Ramón López-Cózar, Gonzalo Espejo, Nieves Ábalos
DOI: 10.4018/978-1-4666-0954-9.ch013

Abstract

Multimodal systems have attracted increasing attention in recent years, which has made possible important improvements in the technologies for the recognition, processing, and generation of multimodal information. However, many issues related to multimodality remain open, for example, the principles that would make it possible for such systems to resemble human-human multimodal communication. This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It also describes current efforts to develop intelligent, adaptive, proactive, portable, and affective multimodal interfaces.
Chapter Preview

1. Introduction to Multimodal Interfaces

With advances in speech, image, and video technology, human-computer interaction (HCI) has reached a new phase in which multimodal information is key to enhancing communication between humans and machines. Unlike traditional keyboard- and mouse-based interfaces, multimodal interfaces offer greater flexibility in input and output, as they permit users to employ different input modalities and to obtain responses through different means, for example, speech, gestures, and facial expressions. This is especially important for users with special needs, for whom traditional interfaces might not be suitable (McTear, 2004; López-Cózar & Araki, 2005; Wahlster, 2006).
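
To make this flexibility concrete, the following minimal Python sketch (all class and function names are hypothetical, not taken from any system cited in this chapter) shows how events from different input modalities can share a common representation and a single entry point into the interface:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable

class Modality(Enum):
    SPEECH = auto()
    GESTURE = auto()
    FACIAL_EXPRESSION = auto()
    KEYBOARD = auto()

@dataclass
class InputEvent:
    modality: Modality
    payload: str       # e.g., a recognized utterance or gesture label
    confidence: float  # recognizer confidence in [0, 1]

class MultimodalInterface:
    """Routes events from any registered modality to a single handler."""
    def __init__(self, handler: Callable[[InputEvent], None]):
        self.handler = handler

    def receive(self, event: InputEvent) -> None:
        # A single entry point lets users interact through whichever
        # modality suits them, including assistive input devices.
        self.handler(event)

if __name__ == "__main__":
    ui = MultimodalInterface(
        lambda e: print(f"{e.modality.name}: {e.payload} ({e.confidence:.2f})"))
    ui.receive(InputEvent(Modality.SPEECH, "open the calendar", 0.87))
    ui.receive(InputEvent(Modality.GESTURE, "point_at_screen", 0.74))
```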

In addition, the widespread use of mobile devices with wireless communications, such as personal digital assistants (PDAs) and smartphones, enables a new class of advanced applications for accessing information. As the number of ubiquitous, connected devices grows, so do the heterogeneity of client capabilities and the number of methods for accessing information services. As a result, users can access vast amounts of information and services from almost anywhere and through different communication modalities.

Multimodality has traditionally been addressed from two perspectives. The first is human-human multimodal communication. Within this area, the literature includes studies of speech-gesture systems (Catizone et al., 2003), the semiotics of gestures (Radford, 2003; Flecha-García, 2010), the structure and functions of face-to-face communication (Bailly et al., 2010), emotional relations (Cowie & Cornelius, 2003; Schuller et al., 2011), and intercultural variations (Endrass et al., 2011; Edlund et al., 2008). The second is human-machine communication and interfaces. Topics of interest in this area include, among others, talking faces, embodied conversational agents (Cassell et al., 2000), integration (fusion) of multimodal input, fission of multimodal output (Wahlster, 2003), and the understanding of signals from speech, text, and visual images (Benesty et al., 2008).
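
The integration of multimodal input can be illustrated with a short sketch. Assuming a speech recognizer and a gesture recognizer that both timestamp their outputs, a simple decision-level fusion strategy resolves a spoken deictic reference ("that") against a pointing gesture that occurred within a short time window. The names and the 1.5-second window are illustrative assumptions in the spirit of classic "put-that-there" systems, not the method of any cited work:

```python
from dataclasses import dataclass

@dataclass
class SpeechInput:
    text: str
    timestamp: float  # seconds since session start

@dataclass
class GestureInput:
    target: str       # identifier of the object the user pointed at
    timestamp: float

def fuse(speech: SpeechInput, gesture: GestureInput,
         window: float = 1.5) -> str:
    """Late fusion: bind a spoken deictic reference to a pointing
    gesture if the two events occurred close enough in time."""
    if "that" in speech.text and abs(speech.timestamp - gesture.timestamp) <= window:
        return speech.text.replace("that", gesture.target)
    return speech.text

print(fuse(SpeechInput("delete that", 10.2), GestureInput("photo_42", 10.6)))
# -> "delete photo_42"
```

Fission works in the opposite direction: a single system response is decomposed and distributed across output channels (e.g., synthesized speech plus a facial expression on an embodied agent).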

This chapter focuses on some of the most important challenges that researchers have recently envisioned for future multimodal interfaces. It describes current efforts to develop intelligent, adaptive, proactive, portable, and affective multimodal interfaces. These concepts are not mutually exclusive; for example, a system's intelligence may be realized through its adaptation capabilities, which in turn enable better portability to different environments.

A system can adapt to the user at different levels (Jokinen, 2003). The simplest is through personal profiles, in which users make static choices to customize the interaction (e.g., whether they prefer a male or female system voice); this can be further improved by classifying users into preference groups. Systems can also adapt to the users' environment, as in Ambient Intelligence (AmI) applications such as ubiquitous proactive systems. The main research topics are the adaptation of systems to users' expertise levels (Hassel & Hagen, 2005), knowledge (Forbes-Riley & Litman, 2004), and special needs. The latter topic is currently receiving considerable attention, in terms of how to make systems usable by people with disabilities and elderly users (Heim et al., 2007; Batliner et al., 2004; Langner & Black, 2005), and how to adapt them to user features such as age, proficiency in the interaction language (Raux et al., 2003), or expertise in using the system (Hassel & Hagen, 2005).
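
As a concrete illustration of the simplest adaptation level, the sketch below (hypothetical names and thresholds, not drawn from the cited systems) stores static preferences in a user profile and adds a crude expertise heuristic that shortens system prompts as the user gains experience:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    voice: str = "female"      # static preference chosen by the user
    expertise: str = "novice"  # updated as the user interacts
    turns_completed: int = 0

def system_prompt(profile: UserProfile) -> str:
    # Novices get verbose, guided prompts; experts get terse ones.
    if profile.expertise == "novice":
        return "Please say the name of the city you want to travel to."
    return "Destination?"

def update_expertise(profile: UserProfile) -> None:
    # An illustrative heuristic: promote the user after enough
    # successful dialogue turns (the threshold is an assumption).
    profile.turns_completed += 1
    if profile.turns_completed > 20:
        profile.expertise = "expert"

if __name__ == "__main__":
    p = UserProfile()
    print(system_prompt(p))   # verbose prompt for a novice
    for _ in range(25):
        update_expertise(p)
    print(system_prompt(p))   # terse prompt once promoted to expert
```

Richer adaptation (to the environment, to user knowledge, or to special needs) would replace this static profile with models inferred from the interaction itself.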
