Introduction
A huge amount of new data is created and stored every minute, and users expect it to be retrievable and discoverable. In modern organisations, in both the private and public sector, textual information in electronic documents is stored in large volumes in Document Management Systems (DMS). DMS were first introduced in enterprise environments over 30 years ago to receive, track, manage and store documents. Over time, despite the dramatic increase in the pace of data creation and growing storage needs, these systems saw little improvement in their information retrieval functionality. This resulted in difficulties in locating, identifying and retrieving information in collections that often grow to millions of documents. The reason is that these systems cannot “look into” the textual information they store, but rather treat it as a black box described by user-provided metadata. Inevitably, this human-created metadata often suffers from low quality.
With the rise of open government and open data rhetoric and practice, some public sector DMS publish their content as open data on the Web to improve transparency and accessibility. Besides increased transparency, the benefits of open data include democratic control, improved or new public products and services, improved government services, innovation, new knowledge created from combined data sources and the possibility of identifying patterns in large data volumes, among others (Pereira et al., 2017). For these reasons, in the last decade, open government policies have gained ground in an increasing number of countries, while several projects based on open data are being carried out all over the world (Mohamed et al., 2020; Zuiderwijk & Janssen, 2014).
This is the case with the Greek portal DIAVGEIA1 (in English: Clarity), on which all public sector administrative decisions are published, as mandated by law, forming a huge and fast-growing collection of more than 43 million documents. Essentially, DIAVGEIA provides access to the governmental DMS that stores all the documents. The huge volume of this textual information, combined with the lack of high-quality, standardised metadata, poses several processing challenges, justifying the use of the term “big data” to describe such a corpus.
Open (big) data must be available in a convenient and modifiable form so that it is easy to exploit, i.e., so that data interoperability increases and different datasets can be combined. To improve information and knowledge extraction, Semantic Web technologies such as RDF and OWL were developed and standardised; they represent (meta-)data as graphs consisting of elementary vertex-edge-vertex triples of the form (subject, predicate, object) (Zaveri et al., 2016). Tim Berners-Lee (2010), the inventor of the Web and initiator of linked data, suggested a 5-star deployment scheme for open data quality that constitutes the status quo in Semantic Web best practice (Hasnain & Rebholz-Schuhmann, 2018). This scheme proposes publishing machine-readable structured data, using open W3C standards and linking the data to other linked open data. These linked data principles can also provide a basis for aligning data with other recommendations adopted by the research community, such as the Findable, Accessible, Interoperable, Reusable (FAIR) principles, which state that data resources should support discovery and reuse by different stakeholders (Garijo & Poveda-Villalón, 2020).
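To make the triple model concrete, the following is a minimal sketch in plain Python of how RDF-style (subject, predicate, object) triples form a queryable graph. The identifiers (e.g., ex:decision42, dc:title) are hypothetical illustrations, not real DIAVGEIA data, and a production system would use a proper RDF library and full URIs rather than bare tuples.

```python
# A graph in the RDF model is simply a set of (subject, predicate, object)
# triples; each triple is one vertex-edge-vertex statement.
# All names below are invented for illustration only.
graph = {
    ("ex:decision42", "dc:title", "Procurement decision"),
    ("ex:decision42", "dc:publisher", "ex:ministryOfFinance"),
    ("ex:ministryOfFinance", "rdfs:label", "Ministry of Finance"),
}

def match(triples, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    mirroring the basic pattern matching behind SPARQL queries."""
    return [
        (ts, tp, to)
        for (ts, tp, to) in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    ]

# All statements about the example decision:
for triple in sorted(match(graph, s="ex:decision42")):
    print(triple)
```

Because every fact is reduced to the same triple shape, two independently published datasets can be merged by a simple set union, which is the interoperability property the 5-star scheme builds on.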