AI as an Accessibility Tool: Using Generative AI to Support Universal Design for Learning Approach


Copyright: © 2024 | Pages: 13
DOI: 10.4018/979-8-3693-0240-8.ch009

Abstract

While generative AI is often discussed in terms of challenging notions of academic integrity, it is increasingly viewed as a potential tool for breaking down barriers in post-secondary education. In particular, this chapter considers how generative artificial intelligence technologies could be leveraged to provide multiple means of engagement, representation, and expression (CAST, n.d.) — in other words, how the use of generative AI can support social justice in higher education. An ethical approach to generative AI requires respect for the diversity among learners; it must also reflect fairness, which requires recognizing the inherent biases built into post-secondary learning environments. Using generative AI to support universal design for learning shifts this narrative, considering how technology acceptance could support social justice in post-secondary learning.

A Social Justice Approach To Generative Artificial Intelligence

A social justice approach to education starts with an awareness of how embedded practices and educational traditions have privileged particular learners over others (Evans et al., 2018; Hanesworth et al., 2019). For example, Dolmage (2017), in his analysis of post-secondary educational institutions, highlights how these institutions have been developed to meet the needs of certain learners along gender, race, class, and ability lines. Learners who differ from these norms do not have the privilege of assuming their needs will be met in these settings; as a result, non-normative learners are not given the same opportunity for success. In the context of individuals with disabilities, Dolmage highlights that these students must work through an accommodation model to have their needs addressed, whereas normative students can meet their needs without this labour.

The rise of machine learning and generative AI has created new concerns regarding equity in education. One common concern is the use of student data to determine educational approaches and placement decisions. Braun et al. (2023) highlight how much of the decision-making associated with AI involves the recognition of variation; in this way, machine learning products often result in either an invisibility or a hypervisibility of difference. In the education context, both outcomes may perpetuate existing biases toward marginalized groups. Beyond perpetuating existing harms, these tools may create new harms by failing to distribute the benefits of the technology equally. For example, machine learning applications in education may produce positive results only for normative learners, because data bias or a lack of data means the models do not account for marginalized learners (Baker & Hawn, 2022; Pessach & Shmueli, 2023; Zajko, 2022).

In contrast to concerns about unexpected bias within machine learning tools, other authors highlight how these models can also be used to identify inequalities. Graham and Hopkins (2022) note how machine learning can counter the oppression embedded in statistical data by providing new ways to code and understand patterns in that data. In this way, they position AI as a tool to reveal oppression that might otherwise remain implicit or overlooked.
