
Laurie Laughlin Davis, Ph.D.

Associate Vice President, Psychometrics

About

Dr. Davis is the Associate Vice President for Psychometrics at Curriculum Associates. She has more than 20 years of experience in measurement and assessment in both applied and research settings. She directs psychometric support for the i-Ready suite of assessments, including the i-Ready Inform (formerly i-Ready Diagnostic) adaptive test. Prior to joining Curriculum Associates, Dr. Davis worked at ACT, where she directed psychometric support for the ACT and affiliated assessment programs, and at Pearson, where she led work supporting assessment programs for the states of Texas and Virginia as well as for the PARCC consortium. She has published research in the areas of device comparability, computer-based testing (CBT), and computerized adaptive testing (CAT). Dr. Davis holds a Ph.D. in Educational Psychology with a specialization in Quantitative Methods and Psychometrics from the University of Texas. Her current research interests include the incorporation of technology to enhance assessment and measurement.

Publications

  • Camara, W., & Davis, L. L. (2022). Fairness concerns resulting from innovations and applications of technology to assessment. In J. Johnson & K. Geisinger (Eds.), Fairness in educational and psychological testing (pp. 153–186). Washington, DC: American Educational Research Association.
  • Davis, L. L., Morrison, K., Schnieders, J., & Marsh, B. (2021). Developing authentic digital math assessments. Journal of Applied Testing Technology, 21(2), 12–24.
  • Kong, X., Davis, L. L., McBride, Y., & Morrison, K. (2017). Response time differences between computers and tablets. Applied Measurement in Education, 31(1), 17–29.
  • Davis, L. L., Kong, X., McBride, Y., & Morrison, K. (2017). Disaggregated effects of device on score comparability. Educational Measurement: Issues and Practice, 36(3), 35–45.
  • Davis, L. L., Kong, X., McBride, Y., & Morrison, K. (2017). Device comparability of tablets and computers for assessment purposes. Applied Measurement in Education, 30(1), 16–26.
  • Davis, L. L., Orr, A., Kong, X., & Lin, C. (2015). Assessing student writing on tablets. Educational Assessment, 20(3), 180–198.
  • Way, W. D., Davis, L. L., Keng, L., & Strain-Seymour, E. (2015). From standardization to personalization: The comparability of scores based on different testing conditions, modes, and devices. In F. Drasgow (Ed.), Technology and testing: Improving educational and psychological measurement (NCME Applications of Educational Measurement and Assessment, Vol. 2, pp. 260–284). New York: Routledge.
  • Keng, L., McClarty, K. L., & Davis, L. L. (2008). Item-level comparative analysis of online and paper administrations of the Texas Assessment of Knowledge and Skills. Applied Measurement in Education, 21(3), 207–226.
  • Davis, L. L., & Dodd, B. G. (2008). Strategies for controlling item exposure in computerized adaptive testing with the partial credit model. Journal of Applied Measurement, 9(1), 1–17.
  • Davis, L. L. (2004). Strategies for controlling item exposure in computerized adaptive testing with the generalized partial credit model. Applied Psychological Measurement, 28(3), 165–185.
  • Davis, L. L., & Dodd, B. G. (2003). Item exposure constraints for testlets in the verbal reasoning section of the MCAT. Applied Psychological Measurement, 27(5), 335–356.
  • Davis, L. L., Pastor, D., Dodd, B. G., Chiang, C., & Fitzpatrick, S. J. (2003). An examination of exposure control and content balancing restrictions on item selection in CATs using the partial credit model. Journal of Applied Measurement, 4, 24–42.

Academic Presentations

  • Davis, L. L. (2021, April). The resurgence of interim assessment—bringing teaching and testing back together. Paper presented at the annual meeting of the National Council on Measurement in Education, held remotely.
  • Davis, L. L. (2020, August). CATs, BATs, and RATs—the value of CAT for educational assessment. Paper presented at the annual meeting of the National Council on Measurement in Education, held remotely.
  • Davis, L. L. (2018, February). Interaction of device hardware and software usability—implications for score comparability. Poster presented at the annual meeting of the Association of Test Publishers, San Antonio, TX.
  • Davis, L. L., Schwartz, R., Janiszweska, I., Holland, L., Businovski, B., & Lazendic, G. (2017, April). Evaluation of device effects in NAPLAN online assessments. Paper presented at the annual meeting of the National Council on Measurement in Education, San Antonio, TX.
  • Schwartz, R., Davis, L. L., Janiszweska, I., Holland, L., Businovski, B., Cohen, A., Treacy, K., & Lazendic, G. (2017, April). Interaction of device effects and item type in the NAPLAN online assessments. Paper presented at the annual meeting of the National Council on Measurement in Education, San Antonio, TX.
  • Davis, L. L., Kong, X., McBride, Y., & Morrison, K. (2016, June). Device comparability: Score range and subgroup analysis. Presented at the annual meeting of the Council of Chief State School Officers, Philadelphia, PA.
  • Davis, L. L. (2016, June). Virginia's perspective on scoring for technology enhanced items. Presented at the annual meeting of the Council of Chief State School Officers, Philadelphia, PA.
  • Keng, L., Davis, L. L., McBride, Y., Glaze, R., & Steedle, J. (2015, April). Study of device comparability within the PARCC field test. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago, IL.
  • Davis, L. L., Strain-Seymour, E., & Bleil, B. (2014, June). What you need to know before testing on tablets. Presented at the annual meeting of the Council of Chief State School Officers, New Orleans, LA.
  • Davis, L. L., & Monfils, L. (2014, June). PARCC mode and device comparability research. Presented at the annual meeting of the Council of Chief State School Officers, New Orleans, LA.
  • Davis, L. L., Strain-Seymour, E., & Bleil, B. (2013, June). Technology diversity: Testing, tablets, and transitions. Presented at the annual meeting of the Council of Chief State School Officers, National Harbor, MD.
  • Davis, L. L., Strain-Seymour, E., Lin, C., & Kong, X. (2008, March). Evaluating the comparability between online and paper assessments of essay writing in the Texas Assessment of Knowledge and Skills. Presented at the annual meeting of the Association of Test Publishers, Dallas, TX.
  • Davis, L. L., Way, W. D., Fitzpatrick, S. J., & Meyers, J. (2006, April). Design and evaluation of a state assessment for Limited English Proficient students using computerized adaptive testing. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
  • Way, W. D., Davis, L. L., & Fitzpatrick, S. J. (2006, April). Score comparability of online and paper administrations of the Texas Assessment of Knowledge and Skills. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
  • Keng, L., McClarty, K. L., & Davis, L. L. (2006, April). Item-level comparative analysis of online and paper administrations of the Texas Assessment of Knowledge and Skills. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
  • McClarty, K. L., Keng, L., & Davis, L. L. (2006, April). Secondary analysis methods in comparability research: A review of methods used with the Texas Assessment of Knowledge and Skills. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.

Distinctions

AERA and NCME