1. Introduction
The use of social media provides access to real-time data generated by citizens, news outlets, organizations and companies. Using Twitter communication during disasters is a major challenge because access to tweets is real-time and short-lived, which requires fast decisions on which information to select. This hidden implicit knowledge could add significant value to disaster management. Many studies during the last decade have analyzed social media in disaster management, mainly in the USA (starting with Murphy and Jennex (2006) on PeopleFinder and ShelterFinder following Hurricane Katrina, and Palen and Liu (2007), who anticipated a future of ICT-supported public participation), but only a few case studies exist for countries such as Germany (Reuter et al., 2012). However, Twitter is used differently in Germany than in the US, e.g. in usage frequency: in Germany, 56% of internet users are active on Facebook, whereas just 6% are active on Twitter (BITKOM, 2013). The question thus arises whether German tweets in general contain relevant information comparable to that found in US disaster management studies (e.g. Vieweg, Hughes, Starbird, & Palen, 2010). Furthermore, the applicability of existing mining methods to non-English tweets and the selection of appropriate technology remain a challenge.
The availability of data sources, along with a taxonomy and ontology for guiding search, retrieval and storage, has been identified as a key point for organizations to focus on when considering social media (Jennex, 2010). Suggestions for dynamic quality assessment of citizen-generated content (Ludwig et al., 2015), implemented as tailorable quality assessment services (Reuter, Ludwig, Ritzkatis, et al., 2015), can only be successful if these requirements are fulfilled.
In order to address these points, our study (1) first examines whether German emergency tweets contain additional and relevant information useful for forecasting, prevention or crisis intervention. This objective is evaluated through a retrospective tweet analysis of data from the 2013 European Flood in Germany. Following the structure suggested by Vieweg et al. (2010), this study also investigates (2) whether existing computational data mining systems can be applied to German crisis tweets. Furthermore, we examine (3) which methods (computational versus manual-supervised) are valuable and practical in producing trustworthy and secure information (see Figure 1).
Figure 1. Research gap and research questions
There has been much research on the use of social media in emergencies. For more than a decade, social media has been used by the public in crisis situations (Reuter et al., 2012): after the terrorist attacks of September 11, 2001, wikis created by citizens were used to collect information on missing people (Palen & Liu, 2007). During the 2004 Indian Ocean tsunami (Liu et al., 2008) as well as the 2007 Southern California wildfires (Shklovski et al., 2008), photo repository sites were used by citizens to exchange information.
Many other published research papers focus on the use of Twitter during disasters, mainly in the USA since 2008 (Reuter, 2014). The use of Twitter has been analyzed scientifically in the context of various crises: the 2008 hurricanes Gustav and Ike, leading to the observation of differences between the use of Twitter in crises and its general use (Hughes & Palen, 2009); the 2008 Tennessee River technological failure, outlining the phenomenon of broadcasting (Sutton, 2010); and the 2009 Red River Floods, highlighting broadcasting by people on the ground as well as activities of directing, relaying, synthesizing, and redistributing (Vieweg et al., 2010).