
Scott Wood, Dongmei Li, and Sungjin Nam

Authors' Note

After we completed this research, ACT increased the threshold for confidence scoring. The exact agreement rate, exact-plus-adjacent agreement rate, and quadratic weighted kappa statistics will all be larger than the values listed in this report. Distributions of the overall writing score, total raw score, and writing scale score will now align more closely with the scores of record obtained from human scoring. Therefore, the concordance between ACT confidence scores and the scores of record will be stronger than the results in this report suggest.

1. Introduction

In October 2022, ACT started using CRASE®, its automated scoring engine, to provide one of the two scores for online essays in the international administration of the ACT® test with writing. Since then, CRASE has been used to score essays for ACT District (spring 2023), ACT State (fall 2023), and ACT National (December 2023) online testing. A validity argument for using CRASE to score ACT writing test essays appears in the June 2023 CRASE+® for ACT Writing Technical Report.

Beginning in April 2026, ACT confidence scoring will be applied to selected essays on the ACT writing test for National online testing. ACT confidence scoring is defined as using only automated scoring for the subset of writing test essays on which CRASE is highly confident in its scores, while sending all other essays to a human rater to determine the second score, as ACT has traditionally done since October 2022.

In this report, Section 2 provides a concise history of CRASE and its newest version, CRASE5. Section 3 briefly explains the essay-scoring process. Section 4 describes the data and research questions for this study. Section 5 provides the results from applying ACT confidence scoring to ACT writing test essays, including information about rater accuracy and the expected differences in raw and scale scores. Finally, Section 6 presents our conclusions about ACT confidence scoring.

2. Automated Scoring and CRASE5

Automated scoring (or automated essay scoring) uses a computer algorithm to emulate a human's scoring behavior on constructed-response items or essays. The scoring algorithm is called the engine, and preparing the scoring algorithm for operational use is called training the engine. A scoring engine has four parts: (a) a means of reading text data, (b) a preprocessor that standardizes and initially processes the text, (c) a means of extracting the quantitative characteristics of the text (called features), and (d) a means of mapping these characteristics to the human-scored data. For engines that capitalize on advanced deep learning methods, such as neural networks, feature extraction and the mapping of features to human-scored data may be combined into a single process.

ACT created CRASE (Constructed Response Automated Scoring Engine) in 2007 for a U.S. state's summative assessment program. The system has since been enhanced to include methods for scoring additional types of free-response items and to incorporate new technologies in text processing and modeling. CRASE has been used operationally in multiple state-testing programs (formative and summative) and in many research programs, including scoring the assessments for a U.S. Department of Education Enhanced Assessment Grant. As mentioned in the introduction, ACT has used CRASE to score online essays for the ACT with writing since 2022.

When ACT first used CRASE to score essays, the engine was called CRASE+, the fourth major version of the software. Since September 2025, ACT has been using CRASE5, an updated fifth version, to score essays for the ACT writing test. All analyses presented in this report were produced with CRASE5 scores.

This report assumes readers are familiar with basic automated scoring concepts. For those new to automated scoring, the CRASE research team recommends the following resources:

Wood, S., Yao, E., Haisfield, L., & Lottridge, S. (2021). Establishing standards of best practice in automated scoring. ACT. https://www.act.org/content/dam/act/unsecured/documents/R2100-auto-scoring-standards-2021-07.pdf

McCaffrey, D., Casabianca, J., Ricker-Pedley, K., Lawless, R., & Wendler, C. (2021). Best practices for constructed-response scoring. ETS. https://www.ets.org/content/dam/ets-org/pdfs/about/cr_best_practices.pdf

Yan, D., Rupp, A. A., & Foltz, P. W. (Eds.). (2020). Handbook of automated scoring: Theory into practice. CRC Press.

Lottridge, S., Burkhardt, A., & Boyer, M. (2020). Digital module 18: Automated scoring. Educational Measurement: Issues and Practice, 39(3), 141–142.

Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. Routledge/Taylor & Francis Group.

3. ACT Writing Test Essay-Scoring Process

ACT writing test essays are scored on a rubric with four domains of writing: Ideas and Analysis, Development and Support, Organization, and Language Use and Con
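The four engine parts described in Section 2 (reading text, preprocessing, feature extraction, and mapping features to human scores) can be illustrated with a toy pipeline. Everything below is an illustrative assumption for exposition only, not the CRASE implementation: the function names, the simple word-count features, and the hand-picked linear weights all stand in for components that, in a real engine, would be far richer and trained on human-scored essays.

```python
import re

def read_text(raw_bytes: bytes) -> str:
    """(a) Read text data."""
    return raw_bytes.decode("utf-8", errors="replace")

def preprocess(text: str) -> list[str]:
    """(b) Standardize (lowercase) and tokenize the text."""
    return re.findall(r"[a-z']+", text.lower())

def extract_features(tokens: list[str]) -> list[float]:
    """(c) Extract quantitative characteristics (features):
    token count, type-token ratio, and average word length."""
    n = len(tokens)
    ttr = len(set(tokens)) / n if n else 0.0
    avg_len = sum(map(len, tokens)) / n if n else 0.0
    return [float(n), ttr, avg_len]

def map_to_score(features: list[float], weights: list[float], bias: float) -> float:
    """(d) Map features to the human-scored scale.
    A linear model stands in for whatever trained model an engine uses."""
    return bias + sum(w * f for w, f in zip(weights, features))

# Hypothetical weights standing in for a model fit to human-scored data.
essay = b"The essay argues that automation and human judgment can complement each other."
score = map_to_score(extract_features(preprocess(read_text(essay))), [0.05, 1.0, 0.2], 1.0)
```

In a deep learning engine, as the text notes, steps (c) and (d) collapse into a single learned model that consumes the preprocessed text directly.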
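The agreement statistics named in the authors' note (exact agreement, exact-plus-adjacent agreement, and quadratic weighted kappa) are standard measures of how closely two raters' scores align, and they can be computed directly from paired scores. The sketch below is a minimal reference implementation in plain Python; the function name and the 1–6 score range (the range of each ACT writing domain score) are assumptions for illustration, not code from this study.

```python
from collections import Counter

def agreement_stats(human, machine, score_range=(1, 6)):
    """Exact agreement, exact-plus-adjacent agreement, and quadratic
    weighted kappa (QWK) for two paired lists of integer scores."""
    n = len(human)
    lo, hi = score_range
    levels = range(lo, hi + 1)
    k = hi - lo + 1

    exact = sum(h == m for h, m in zip(human, machine)) / n
    adjacent = sum(abs(h - m) <= 1 for h, m in zip(human, machine)) / n

    # Observed joint counts and marginal totals.
    obs = Counter(zip(human, machine))
    row, col = Counter(human), Counter(machine)

    # QWK = 1 - (weighted observed disagreement) / (weighted chance disagreement),
    # with quadratic weights (i - j)^2 / (k - 1)^2.
    num = den = 0.0
    for i in levels:
        for j in levels:
            w = (i - j) ** 2 / (k - 1) ** 2
            num += w * obs[(i, j)]
            den += w * row[i] * col[j] / n
    qwk = 1.0 - num / den if den else 1.0
    return exact, adjacent, qwk
```

For example, for human scores [3, 4, 4, 5, 2, 6] against machine scores [3, 4, 5, 5, 3, 6], four of six pairs match exactly and all six are within one point, so exact agreement is about 0.67 and exact-plus-adjacent agreement is 1.0. Raising a confidence threshold, as described in the authors' note, routes harder-to-score essays to human raters and so pushes these statistics upward on the machine-scored subset.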




