Mitigation of Test Bias in International, Cross-National Assessments of Higher-Order Thinking Skills

Raffaela Wolf, Doris Zahner
DOI: 10.4018/978-1-4666-9441-5.ch018

Abstract

The assessment of higher-order skills in higher education has gained popularity internationally. Accurately measuring the skills required for work in the 21st century calls for a shift in assessment strategies. More specifically, assessments that only require the recall of factual knowledge have been on the decline, whereas assessments that evoke higher-order cognitive skills are on the rise. The purpose of this chapter is to discuss and offer strategies for mitigating bias in a computer-administered, performance-based assessment of higher-order skills. Strategies to abate the effects of bias are discussed within the test design and test implementation stages. A case study of a successful adaptation and translation of CAE's Collegiate Learning Assessment (CLA+) is presented to guide the discussion throughout the chapter.

Background

Increased globalization, among other factors, has generated growing interest in cross-national comparisons of the underlying constructs that research instruments purport to measure. Occasionally, such comparisons necessitate translating an instrument into a different language.

Because populations differ in culture and language, among other characteristics, examining the degree to which an instrument measures the same construct across cultural and language groups is a precursor to drawing valid score interpretations. Valid score inferences rest on the assumption that individuals who earn the same observed score have the same standing on the constructs underlying the measurement instrument. Evaluating several criteria can help establish this assumption:

  1. The construct measured exists across nations.
  2. The construct is measured in the same manner across nations.
  3. Items that are believed to be equivalent across nations are linguistically and statistically equivalent (a statistical screen for such equivalence is sketched after this list).
  4. Similar scores across different adapted versions of the assessment reflect similar degrees of proficiency.
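
One common way to examine criterion 3 statistically is a differential item functioning (DIF) screen. The sketch below is illustrative only and is not the procedure used for CLA+: it applies a logistic-regression DIF check (in the style of Swaminathan and Rogers) to synthetic data, and the variable names (response, ability, group) and the simulated effect are assumptions made for the example.

```python
# Minimal sketch of a logistic-regression DIF screen on synthetic data.
# Variable names and the simulated effect are hypothetical, not from the chapter.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
ability = rng.normal(size=n)            # matching criterion (proxy for skill level)
group = rng.integers(0, 2, size=n)      # 0 = reference nation, 1 = focal nation

# Simulate an item that is harder for the focal group at equal ability (uniform DIF).
logit = 1.2 * ability - 0.5 - 0.6 * group
response = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"response": response, "ability": ability, "group": group})
df["interaction"] = df["ability"] * df["group"]

# Model 1: matching criterion only. Model 2 adds group (uniform DIF)
# and the ability-by-group interaction (non-uniform DIF).
X1 = sm.add_constant(df[["ability"]])
X2 = sm.add_constant(df[["ability", "group", "interaction"]])
m1 = sm.Logit(df["response"], X1).fit(disp=False)
m2 = sm.Logit(df["response"], X2).fit(disp=False)

# Likelihood-ratio test: a significant improvement flags the item for review.
lr = 2 * (m2.llf - m1.llf)
print(f"Likelihood-ratio statistic (2 df): {lr:.2f}")
print(m2.params)
```

A significant group coefficient suggests uniform DIF (one nation finds the item harder at every skill level), while a significant interaction suggests non-uniform DIF; flagged items are then reviewed for translation or cultural artifacts.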

Key Terms in this Chapter

CLA+: A college outcome assessment that purports to measure critical thinking and writing skills.

Confirmatory Factor Analysis: A statistical procedure for testing whether data fit a hypothesized measurement model (a minimal illustration follows).
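
As a hedged illustration only, not drawn from the chapter, a single-factor model could be specified with the Python package semopy, which uses lavaan-style syntax; the factor name, item names, and data file below are hypothetical.

```python
# Hypothetical single-factor CFA sketch using semopy (lavaan-style syntax).
import pandas as pd
from semopy import Model, calc_stats

# One latent construct measured by four observed indicators (names are invented).
desc = "CriticalThinking =~ item1 + item2 + item3 + item4"

data = pd.read_csv("responses.csv")   # assumed file: one column per item
model = Model(desc)
model.fit(data)

print(model.inspect())    # factor loadings and error variances
print(calc_stats(model))  # fit indices such as CFI and RMSEA
```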

Differential Item Functioning: Occurs when individuals from different groups (e.g., gender or ethnic groups) with the same ability or skill level have a different probability of giving a certain response on a measurement instrument.

Psychometrics: Field of study concerned with the theory and technique of educational and psychological measurement.

Construct: A proposed attribute of an individual that often cannot be measured directly, but can be assessed using a number of indicators or manifest variables.

Item Response Theory: Theory grounded in the idea that the probability of a correct response to an item on an assessment is a mathematical function of person and item parameters (one common form is shown below).
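
For example, the two-parameter logistic (2PL) model, a widely used IRT model (cited here as a standard illustration, not as the chapter's own choice), writes the probability that person j answers item i correctly as:

```latex
% 2PL model: \theta_j is person j's ability; a_i and b_i are item i's
% discrimination and difficulty parameters.
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
```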

Measurement Equivalence: Statistical property of measurement that indicates that the same construct is being measured across groups.

Bias: Any systematic differences in the meaning of test scores that are associated with group membership.

Structural Equation Modeling: Statistical technique for testing and estimating causal relationships.
