Evaluation of Personalised Software in Cultural Heritage: An Exploratory Study


Katerina Kabassi
DOI: 10.4018/IJCMHS.2017010103

Abstract

Personalised software has been extensively used in museum guides and recommendation systems for tours because it provides added value to the interaction of the user with cultural heritage. However, this added value can only be confirmed through an evaluation experiment. Therefore, this paper reviews the evaluation experiments of personalised software for cultural heritage. More specifically, the paper categorises the experiments with respect to the method of evaluation and the criteria used. Finally, it provides a discussion of the main conclusions drawn by the researchers who conducted these experiments.

Introduction

Information and Communication Technologies (ICTs) have changed the way users interact with cultural heritage. Visitors have gained easy access to cultural heritage regardless of place and time through the Internet. Additionally, even physical access to cultural heritage in museums or outdoors has changed significantly with the use of mobile guides. However, the interaction with cultural heritage still encounters several problems. The vast amount of information presented in museums is often overwhelming to a visitor, making it difficult to select personally interesting exhibits (Ardissono et al., 2012). Indeed, as Jeong and Lee (2006) have noted, museum visitors may suffer physical and/or psychological fatigue during long periods of walking and focusing on exhibits. This often results in visitors quitting the tour and feeling tired, mainly due to the tediousness of uninteresting exhibits.

As Falk (2009) points out, museum visitors differ, and their visit experience is composed of the physical, personal and socio-cultural contexts, as well as identity-related aspects. It is therefore very difficult for a museum to cover the interests and desires of all its different users. A solution to this problem may be offered by the personalisation of the user experience. Indeed, Petrelli et al. (1999) report that most museum visitors are willing to receive recommendations and assistance during their interaction with the exhibits. As a result, different recommendation systems have been used for personalising interaction both indoors and outdoors.

In this respect, personalisation methods have been applied to adapt the route followed in a museum (LISTEN, PEACH, PIL, MNEMOSYNE, ec(h)o, Museum Wearable, etc.) or in a city (GUIDE), the menus of a system (e.g. CHESS), the virtual reality route (e.g. MoMa), the text accompanying an exhibit (PEACH, AmI, MyMuseum), and so on. For this purpose, these systems use different user modelling techniques to acquire and maintain information about the users' interests, knowledge and other characteristics.
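As a purely illustrative sketch of what such user modelling can look like in practice, the snippet below maintains a set of interest weights per exhibit topic and updates them from the visitor's dwell time; the class, topics and update rule are assumptions for illustration only and do not reproduce the user models of the systems cited above.

```python
# Minimal, hypothetical sketch of a user-interest model for a personalised guide.
# The class, topic names and update rule are illustrative assumptions only;
# they are not taken from LISTEN, PEACH, CHESS or any other cited system.

class VisitorModel:
    def __init__(self, topics):
        # Start with a uniform (uninformed) interest weight per topic.
        self.interest = {t: 1.0 / len(topics) for t in topics}

    def observe(self, exhibit_topics, dwell_seconds, learning_rate=0.1):
        """Raise the weight of topics the visitor dwells on, then renormalise."""
        evidence = min(dwell_seconds / 60.0, 1.0)  # cap the evidence at one minute
        for t in exhibit_topics:
            self.interest[t] += learning_rate * evidence
        total = sum(self.interest.values())
        self.interest = {t: w / total for t, w in self.interest.items()}

    def score(self, exhibit_topics):
        """Predicted interest in an exhibit: mean weight of its topics."""
        return sum(self.interest[t] for t in exhibit_topics) / len(exhibit_topics)


# Example use: recommend the candidate exhibit with the highest predicted interest.
model = VisitorModel(["sculpture", "painting", "archaeology"])
model.observe(["sculpture"], dwell_seconds=90)
candidates = {"Bronze Kouros": ["sculpture", "archaeology"],
              "Icon Gallery": ["painting"]}
best = max(candidates, key=lambda name: model.score(candidates[name]))
```

A real guide would combine such interest estimates with contextual information (location, time available, group composition), which is precisely why the evaluation of these systems is non-trivial.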

Despite all the interesting approaches to personalising cultural heritage, some researchers express concerns about the effectiveness of these approaches (e.g. Marty, 2011). As Lanir et al. (2011) point out, it is not clear how reducing choice, in terms of the number of content items presented to the visitor, affects visitor behaviour and satisfaction. This shows that personalisation methods are not enough on their own to improve a cultural recommendation system or cultural website. Their effectiveness can only be confirmed through a proper evaluation experiment. In this direction, Hendrix (2010) argues that personalisation methods should be evaluated based on their ability to meet user needs. Similarly, Bowen & Filippini Fantoni (2004) add that personalisation methods come at a cost, which is only justified if they bring added value to the museum for a substantial percentage of its visitors. This added value can only be confirmed through an evaluation experiment.

Evaluation as part of the personalised software's life cycle has not been studied extensively in the personalisation literature. Evaluation of ubiquitous computing systems is extremely complex (Spasojevic & Kindberg, 2001), and only a few articles focus exclusively on the evaluation phase of a personalised system (Kadobayashi et al., 1998). Most evaluation experiments are part of a paper that presents both the personalised system and the evaluation experiment. Some of them use ad hoc evaluation approaches borrowed from other, better-established domains (Hatala & Wakkary, 2005). As a result, Pechenizkiy & Calders (2007) try to draw attention to the problem of the scientific evaluation of personalisation. The seriousness as well as the complexity of evaluating personalised software is emphasised by Ardissono et al. (2012) in a review of personalisation methods used in cultural heritage.
