
Kevin Cappaert

Director, Psychometrics

About

Kevin Cappaert earned a PhD in Educational Psychology, with a concentration in Educational Statistics and Measurement, from the University of Wisconsin–Milwaukee. He spent a year as a visiting professor, teaching graduate courses in statistics and measurement to a wide range of graduate students as well as undergraduate courses in classroom assessment for future teachers. He then became an applied psychometrician, working to ensure a high degree of validity and reliability for end-of-year accountability assessments. After three years, wanting to make a greater impact in the classroom, he shifted from summative to formative assessment by joining Curriculum Associates, where he has now spent six years.

At CA, Kevin has focused heavily on the logic for, and maintenance of, the adaptive diagnostic assessment, drift analysis, and Spanish reading assessments. He has since moved into a Director role, managing the teams responsible for operational psychometric work on the i-Ready Diagnostic, the Assessment of Spanish Reading, and literacy efforts. In his free time, Kevin enjoys curling, downhill skiing, waterskiing, cooking, and rooting for the Green Bay Packers and Wisconsin Badgers.

Publications

  • Morrison, K., & Cappaert, K. (2023, April). Investigating conditional adaptivity for targeting item development. Presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
  • Cappaert, K., Morrison, K., & Su, Y. (2022, April). Relaxed CAT: An extended b-matching method for content relaxed item selection. Presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.
  • Morrison, K., & Cappaert, K. (2022, April). School versus remote testers: Rating the consistency of test scores. Presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.
  • Hochstetter, A., Page, A., Chang, Y., & Cappaert, K. J. (2018, April). Validity inferences for different types of technology-enhanced items. Presented at the annual meeting of the National Council on Measurement in Education, New York, NY.
  • Cappaert, K. J., Wen, Y., & Chang, Y. (2017, April). Evaluating item parameter drift methods in a CAT environment. Presented at the annual meeting of the National Council on Measurement in Education, San Antonio, TX.
  • Chang, Y., Wang, C., Bynum, B., Griff, G., & Cappaert, K. (2017, April). Item parameter drift analysis for a computer adaptive test. Presented at the annual meeting of the National Council on Measurement in Education, San Antonio, TX.
  • Cappaert, K., & Chang, Y. (2016, August). Assessment 101. Presented at the annual Minnesota Assessment Conference, Saint Paul, MN.
  • Cappaert, K. J. (2015, April). The influence of DIF/DBF on ability estimation. Presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
  • Cappaert, K. J., Finch, W. H., & Walker, C. M. (2014, April). Simultaneous uniform and non-uniform DBF detection: A MIMIC model approach. Presented at the annual meeting of the National Council on Measurement in Education, Philadelphia, PA.
  • Cappaert, K. J., & Wen, Y. (2014, April). Missing not at random: A cause of DIF? Presented at the annual meeting of the National Council on Measurement in Education, Philadelphia, PA.
  • Cappaert, K. J., Walker, C. M., & Zhang, B. (2013, April). Partial cancellation in differential bundle functioning: Influences on the detection rate, ability estimates, and the standard error of the beta statistic. Presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA; also presented at the 7th Conference of the International Test Commission, Hong Kong.
  • Cappaert, K., Peterson, J., & Luo, W. (2012, April). Modeling partially cross-classified multilevel data. Presented at the annual meeting of the American Educational Research Association, Vancouver, BC.
  • Walker, C. M., Zhang, B., Banks, K., & Cappaert, K. (2011, April). Establishing effect size guidelines for interpreting the results of differential bundle functioning analyses using SIBTEST. Presented at the annual meeting of the National Council on Measurement in Education, New Orleans, LA. An earlier version was presented in July 2010 at the 7th Conference of the International Test Commission, Hong Kong.
Distinctions

National Council on Measurement in Education