Empowering Students From Assumptions to Knowledge: Making Integrity Everyone's Business

Copyright: © 2024 | Pages: 18
DOI: 10.4018/979-8-3693-0240-8.ch014

Abstract

The arrival of ‘free to access’ generative AI platforms in late 2022 has irrevocably changed the educational landscape. Of concern to many is how to uphold academic integrity in assessment, ensuring students are able to demonstrate their learning in a new environment where a ‘machine’ can research and write for them. In this chapter, the authors present a process designed to uncover the true nature of the ‘work’ students are expected to perform during assessment, which then allows GenAI tools to be knowingly used by students as just another research tool. While it is still early days, shifting our understanding so that what is assessed is ‘the work’ expected of students rather than the ‘output’ they produce suggests that all is not lost when it comes to academic integrity.
Chapter Preview

Introduction

This chapter seeks to support teaching academics with a process for designing assessment that addresses some of the integrity concerns raised by the ‘explosion’ in access to, and use of, generative forms of artificial intelligence (GenAI) since late 2022. The widespread availability of GenAI has left many in higher education concerned about issues such as the authorship of assessment, academic integrity, and the impact on student learning and knowledge acquisition. While GenAI will and does affect other areas of education such as admissions, student support services, research and scholarship, this chapter considers only assessment design and academic integrity concerns. It touches only briefly on the student perspective, noting that students may be less aware of when and how they are using GenAI products, and that the importance of demonstrating their academic ability, along with mastery of knowledge, remains central to their success at university and beyond.

The reach and impact of artificial intelligence (AI) extend well beyond learning environments, with a growing acceptance that AI will also affect the world of work and society more broadly (Bannon, 2023). In this uncertain environment, fundamental to all these concerns is the question of trust: how can we trust who ‘did the work’ and how ‘the work’ was undertaken or produced? In higher education, questions such as how to ensure that students have done the work submitted for assessment, and therefore have the level of knowledge and skills we are ‘certifying’, have kept many awake at night since the advent of the internet (Hinman, 2002). Our discussions with students tell us that they also care about ensuring no student is awarded or rewarded unfairly. This becomes even more important when a course is a means of credentialing professional expertise, for example for lawyers, doctors, engineers, and accountants. In academia at the end of 2023, it is GenAI that is attracting the most attention, because GenAI can produce a largely fully formed output much as a person would write it.

The advent of the internet changed access to information (Fuller & Kuromiya, 1981; Luo et al., 2018); no longer is education (particularly higher education) meant to be the mere transfer of knowledge from expert to student. In part, this is a result of the acceptance of employability as a central outcome of a university education (Prokou, 2008; Collini, 2012; Wheaton, 2020). The ideal of ‘learning for learning’s sake’ has been replaced (if it ever truly existed) with the idea of learning as a means to improve personal futures through access to professional careers and higher paid jobs. At the same time, the rise of student fees and loans in systems that previously offered ‘free’ higher education, as well as changing economic conditions more broadly, means that students often approach higher education as a transaction or task to be completed. Competing demands on students’ time can pressure them to become more strategic learners, minimising effort while still seeking to maximise outcomes. The broad access to GenAI now available requires us to reimagine both teaching and learning approaches and the associated assessment practices used to measure learning outcomes (OECD, 2023a). This chapter proposes a model to support academics in exploring what appropriate assessment may look like in the face of GenAI, while at the same time informing students of their responsibilities when using such tools.

Academic integrity matters. According to the Australian Tertiary Education Quality and Standards Agency (TEQSA), which governs and oversees educational standards in Australia, academic integrity is “the expectation that teachers, students, researchers and all members of the academic community act with: honesty, trust, fairness, respect and responsibility” (TEQSA, 2022). The ever-changing landscape means constantly updated resources, and in November 2023, Lodge et al. noted in TEQSA’s “Assessment reform for the age of artificial intelligence” that:

Good assessment design that allows for ‘rich portrayals’ of student learning is critical. Thus, we take as given that assessment should engage students in learning, provide a partnership between teachers and students, and promote student participation in feedback. These key elements of assessment can then guide how best to consider the role of AI in assessment design.
