Journal article
Generalizability Theory for Randomly Parallel Testing
Journal of Educational Measurement, Vol. 63(1), e70029
Spring 2026
DOI: 10.1111/jedm.70029
Abstract
Advancements in artificial intelligence (AI) have brought significant changes to testing practices, including the emergence of randomly parallel testing (RPT), in which examinees receive different but psychometrically similar sets of items generated from templates or AI‐based systems. This paper presents a generalizability theory (GT) framework for estimating conditional standard errors of measurement (CSEMs) and related reliability indices, with a particular focus on design structures commonly encountered in RPT within domain‐referenced testing contexts. The proposed framework supports the evaluation of score precision across a variety of operational designs, including crossed, nested, and multivariate configurations. Several illustrative examples are provided to demonstrate the methodology in practical settings. The paper also addresses key psychometric and interpretive challenges associated with RPT and outlines promising directions for future research.
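The abstract's notion of a conditional standard error of measurement (CSEM) can be illustrated with a small sketch. In a standard GT person-crossed-with-items (p × I) design, the conditional absolute SEM for one examinee is the standard error of that examinee's observed mean over the n sampled items; for dichotomous items this reduces to sqrt(p̄(1 − p̄)/(n − 1)). The code below is an illustrative example of that textbook estimator, not the specific framework developed in this paper; the function name and example scores are hypothetical.

```python
import math

def conditional_sem(item_scores):
    """Conditional absolute SEM for one examinee under a p x I design.

    Estimates sigma(Delta_p) = sqrt( sum_i (X_pi - mean_p)^2 / (n * (n - 1)) ),
    i.e., the standard error of the examinee's mean item score over the
    n items sampled for that examinee.
    """
    n = len(item_scores)
    mean = sum(item_scores) / n
    ss = sum((x - mean) ** 2 for x in item_scores)
    return math.sqrt(ss / (n * (n - 1)))

# Hypothetical dichotomous example: 7 correct out of 10 items.
# For 0/1 scores this equals sqrt(0.7 * 0.3 / 9) ~= 0.1528.
scores = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(round(conditional_sem(scores), 4))
```

Because each examinee in RPT receives a different random sample of items, a per-person quantity like this, rather than a single test-level SEM, is the natural unit of score-precision reporting.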
Details
- Title
- Generalizability Theory for Randomly Parallel Testing
- Creators
- Won‐Chan Lee, University of Iowa; Stella Y. Kim, University of North Carolina at Charlotte; Seungwon Shin, University of Iowa
- Resource Type
- Journal article
- Publication Details
- Journal of Educational Measurement, Vol. 63(1), e70029
- DOI
- 10.1111/jedm.70029
- ISSN
- 0022-0655
- eISSN
- 1745-3984
- Publisher
- Wiley
- Language
- English
- Electronic publication date
- 01/21/2026
- Date published season
- Spring 2026
- Date published
- 2026
- Academic Unit
- Psychological and Quantitative Foundations
- Record Identifier
- 9985132066202771