Building Democratisation and Agency: The Adoption of Ethical AI Practice in Pedagogy

Copyright: © 2024 |Pages: 19
DOI: 10.4018/979-8-3693-1666-5.ch003

Abstract

The aims of this chapter are threefold: first, to consider global ethics and the potential for AI to deepen societal inequalities rooted in existing infrastructure; second, to provide insight into the developmental and progressive use of AI across organizational infrastructures in pedagogic practice; and third, to embed the concept of ethical AI and the potential for its praxis across all aspects of its integration in the building of global democracy and agency. Debates and sensationalized presentations of artificial intelligence (AI) across the media and in scientific and industrial contexts have shaped public perception of both its potential benefits and the profound ways in which its potential for harm ought to be acknowledged. This chapter provides a theoretical insight into how AI can be objectively debated amidst the controversy surrounding its implementation and the potential for the inaccessible to be made accessible over the coming months and years.
Chapter Preview

Introduction

“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

– Alan Turing (1912-1954)

This chapter will consider applied ethics in relation to the potential impact of Artificial Intelligence (AI) in pedagogic practice. The extent to which AI has been progressively integrated into society over the last three years has exponentially increased media and scientific debate about what is, and what might be, if all we are told about AI is realised (Bareis & Katzenbach, 2022). As with any landmark paradigmatic shift in society’s use of and access to technological advances, the widespread introduction of AI has both positive and negative aspects, which can be harnessed for human progression or decimation (Mikalef et al., 2022). Initial debates surrounding AI focused predominantly on its functionalist capacity to reduce the complexity and challenge of largely physical tasks, where algorithmic decision-making could be used as an adjunct to mundane and burdensome human work (Mirbabaie et al., 2022). This has now progressed to widespread cognitive debates about how AI can address tasks which previously seemed impossible and largely inaccessible in terms of the decision-making processes necessary to undertake them (Madhav & Tyagi, 2022). One key aspect of these debates is how humanity has often designed AI in human form, to the extent that technological artefacts are frequently perceived as robots with a high degree of sentient ambition (Owe & Baum, 2021). It is often thought that AI can compete for cognitive advantage, rather than simply being a design artefact used to extend the reach of humanity’s applied intellect (Mele & Russo-Spena, 2023; Yamin et al., 2021). This has led to widespread hyperbole that AI somehow has the capacity to override human cognition, and that its capacity for extended algorithmic thinking may eventually pose a huge threat to mankind, alongside offering some of the greatest technological developments of our age (Cools, Van Gorp & Opgenhaffen, 2022).
Unlike other technological advances, where the choice to engage with them was always an option, AI poses a wider societal issue in which that choice may no longer be possible, should the self-advancement of algorithmic decision-making pose an overriding threat to humanity in terms of speed and capacity for action (Igna & Venturini, 2023). As such, the ethical principles of AI are factors that all organisations must now contemplate, so that the integration of ethical practice becomes a societal norm in the use of AI. Beyond an anthropological perspective, social ethics and the philosophies underpinning them all bear upon organisational decision-making: how AI may remain fully controllable, and how the algorithms within which it operates may be constructively aligned with those affective attributes of humanity that, as a society, we would wish to promulgate, rather than any degree of negativity (Henin & Le Métayer, 2021). Whilst every organisational infrastructure operated by humans is ostensibly designed on principles of altruism, equity and equality, the integration of AI poses several questions which necessitate critical reflexivity about the situated nature of its use within highly sensitive contexts such as healthcare, law, and education, all of which have the potential for an inordinate impact on society as it is currently known (Cheng, Varshney & Liu, 2021). Perhaps, then, it is an issue of the design of AI, rather than its implementation, which ought to inform the hyperbole and hypothesising that surround it (Bareis & Katzenbach, 2022).
This chapter will also explore how the social implications of AI are being posited, often sensationalised as a threat to humanity, rather than framed as something humanly designed that ought to remain within the control of its maker, transparent in its capacity to undertake complex decision-making and, most importantly, accountable for every individual action made in terms of design and programming (Novelli, Taddeo & Floridi, 2023).

Key Terms in this Chapter

Algorithm: An algorithm is a process or set of rules to be followed in decision-making or other problem-solving operations, especially by computing technology.

Epistemic Bias: The lens of subjective interpretation which influences systematic research practice by failing to acknowledge or detail the ideals of human impartiality and value-freedom that may be shaping it.

Sentient/Sentience: Sentience is the capacity of a being to experience feelings and sensations.

Reliability: The extent to which a research instrument can repeatedly provide the same results in temporally separated incidences of measurement.

Hacking: The gaining of illegal, unlawful, or otherwise unauthorised access to data within the context of computing and technology.

Agency: The capacity for action or intervention producing a particular effect.

Validity: The state of being officially true or legally acceptable.

Existentialism: The philosophy of the nature of human existence as determined by capacity and capability for free will and free choice.

AI Safety: The interdisciplinary field concerned with preventing the misuse of, accidents involving, or other harmful consequences resulting from an AI system.
