Beyond RoboDebt: The Future of Robotic Process Automation


Michael D'Rosario, Carlene D'Rosario
DOI: 10.4018/978-1-6684-3694-3.ch013

Abstract

Automated decision support systems applied to high-stakes decision processes are frequently controversial. The Online Compliance Intervention (hereafter "OCI" or "RoboDebt") is a compliance system implemented with the intention of automatically issuing statutory debt notices to individuals in receipt of welfare payments who appear to have exceeded their entitlement. The system appears to employ rudimentary data scraping and expert systems to determine whether notices should be validly issued. However, many individuals who received debt notices assert that they were issued in error. Commentary on the system has frequently conflated it with other system types and has caused many to question the role of decision support systems in public administration, given the potentially deleterious impacts of such systems on the most vulnerable. The authors employ a taxonomy of Robotic Process Automation (RPA) issues to review the OCI and RPA more generally. This paper identifies potential problems of bias, inconsistency, procedural fairness, and overall systematic error. The research also considers a series of RoboDebt-specific issues regarding contractor arrangements and the potential impact of the system on Australia's Indigenous population. The authors offer a set of recommendations based on the observed challenges, emphasizing the importance of moderation, independent algorithmic audits, and ongoing reviews. Most notably, the paper emphasizes the need for greater transparency and a broadening of the criteria used to determine vulnerability to encompass temporal, geographic, and technological considerations.
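The kind of rule-based discrepancy check at the centre of the RoboDebt disputes can be sketched in a few lines. The sketch below is illustrative only: it assumes the widely reported income-averaging rule (an annual ATO income figure smoothed across 26 fortnights and compared with fortnightly reports), all function and variable names are hypothetical, and the production system's actual logic has not been published.

```python
# Illustrative sketch only: a simplified reconstruction of the income-averaging
# rule reported in public accounts of the OCI. Names are hypothetical; the real
# system's logic is not publicly available.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_ato_income: float) -> float:
    """Smooth an annual income figure evenly across all fortnights,
    regardless of when the income was actually earned."""
    return annual_ato_income / FORTNIGHTS_PER_YEAR

def flag_discrepancy(annual_ato_income: float,
                     reported_fortnightly_income: float,
                     tolerance: float = 0.0) -> bool:
    """Flag a potential overpayment when the averaged figure exceeds what
    the recipient reported for a fortnight. Averaging is where systematic
    error can enter: a person who worked for six months and then claimed
    support for six months may be flagged despite reporting correctly."""
    averaged = averaged_fortnightly_income(annual_ato_income)
    return averaged > reported_fortnightly_income + tolerance

# Example: $26,000 earned entirely in the first half of the year averages to
# $1,000 per fortnight, flagging fortnights in which nothing was earned.
print(flag_discrepancy(26_000, reported_fortnightly_income=0.0))  # True
```

The sketch makes the source of systematic error concrete: the flag is a function of the averaging assumption, not of the recipient's actual fortnightly earnings.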
Chapter Preview

Australian And International Examples Of Automation In The Public Sector

Decision support systems are certainly not a new fad or phenomenon. Automated decision-making within the Australian public sector has been employed for nearly thirty years (see, inter alia, Perry 2019 and Elvery 2019 for a useful precis). In 1994, the Department of Veterans' Affairs introduced what is asserted to be the first automated decision-making system in Australia, though the OCI is arguably the first high-stakes decision support system.

Technologies have been trialled to identify aggressive behaviours amongst crowds (see Sjarif et al. 2012). Further examples of supported decision platforms include the use of algorithmic approaches and decision scorecards, such as the FVRAT in domestic violence cases; a minimal sketch of such a scorecard follows this paragraph. Beyond simple decision support, more advanced applications of machine learning and deep learning include the use of computer vision to identify individuals committing traffic offences, such as cameras and computer vision used to detect illegal mobile phone use by drivers, as noted by Perry (2019). Within other jurisdictions, the use of machine learning and automated decision-making more broadly is so prolific that city-based task forces have been established to review its use. Some applications appear to be beyond reproach, for example, computer vision in risk management and risk identification. An instructive example is the use of a tool by the New York City Fire Department to analyze data from other city agencies to predict the buildings most vulnerable to fire outbreaks and to prioritize these for inspection (Perry, 2019).
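A decision scorecard of the kind referenced above reduces to weighted factors and a triage threshold. The factors, weights, and threshold in the sketch below are invented for illustration; they do not reproduce the FVRAT or any real instrument.

```python
# Minimal sketch of a decision scorecard. All factors and weights are
# invented for illustration and do not reflect any real risk instrument.

RISK_FACTORS = {
    "prior_incidents": 3,
    "recent_escalation": 2,
    "weapon_access": 2,
    "separation_pending": 1,
}

def score_case(case: dict) -> int:
    """Sum the weights of the risk factors present in a case."""
    return sum(weight for factor, weight in RISK_FACTORS.items()
               if case.get(factor))

def triage(case: dict, threshold: int = 4) -> str:
    """Map a score to a coarse triage band; a human reviewer decides from there."""
    return "elevated" if score_case(case) >= threshold else "standard"

print(triage({"prior_incidents": True, "weapon_access": True}))  # "elevated"
```

The appeal of such tools is their transparency relative to opaque models: the factors and weights can be published, audited, and contested, which is precisely what is lost when proprietary algorithms replace them.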

Other applications have been highly contentious; consider, for example, the use of the COMPAS algorithm in support of judicial decision-making. The much-criticized COMPAS algorithm, designed to determine the viability of diversionary programs and non-custodial sentences rather than to inform sentencing per se, has been used to determine whether individuals are likely to recidivate, and has influenced the outcomes of judicial processes within a number of jurisdictions. Its application beyond its original scope and the opacity of the method itself call into question the application of such technologies beyond their original design. Originally developed by psychologists at Northpointe (now Equivant), the proprietary algorithm is not available for public scrutiny. The highly cited work of ProPublica (2016) highlights the inconsistencies of the COMPAS algorithm; its race-based classification divergences with regard to false positives are well known. Dressel and Farid (2018) have shown that COMPAS is no more accurate than untrained individuals.
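The divergence ProPublica documented concerns error rates conditioned on group membership: two groups can see similar overall accuracy while one bears a much higher false positive rate. A minimal sketch of that check follows, computed on synthetic data rather than real COMPAS records.

```python
# Sketch of the fairness check underlying the ProPublica analysis: comparing
# false positive rates across groups. All data below is synthetic.

from typing import List, Tuple

def false_positive_rate(cases: List[Tuple[bool, bool]]) -> float:
    """cases: (predicted_high_risk, actually_reoffended) pairs.
    FPR = flagged-but-did-not-reoffend / all who did not reoffend."""
    negatives = [pred for pred, actual in cases if not actual]
    return sum(negatives) / len(negatives)

group_a = [(True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (True, False), (False, False), (True, True)]

# Similar headline accuracy can mask divergent error rates by group.
print(false_positive_rate(group_a))  # ~0.33
print(false_positive_rate(group_b))  # ~0.67
```

An audit of this kind requires access to predictions and outcomes, which is exactly what the proprietary status of tools such as COMPAS makes difficult and what independent algorithmic audits are intended to secure.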
