
Screen Time in Schools: It’s Right to Ask Questions
3-min. read
By: Dr. Kristen Huff

Over the past few weeks, I’ve been following the growing conversation about screen time in schools. Much of it is thoughtful and raises important concerns. But a few comments draw sweeping conclusions that don’t reflect how teaching and learning actually work in classrooms.
The concern about screen time is valid. Families and educators are navigating an increasingly complex landscape both in school and at home. Parents are concerned not just about how much time is spent on screens, but what content their children are engaging with online. In school, students are often interacting with multiple digital tools across subjects, and not all those experiences are equally meaningful. It’s reasonable to ask about using digital tools for learning: Is this time well spent?
The way the question is framed matters.
Right now, much of the debate reduces everything to a single measure: how much time students spend on screens. That framing is intuitive. It is also incomplete. It treats all digital experiences as equivalent—collapsing entertainment, disconnected practice, passive use, and meaningful learning into the same category.
A more useful question is: What is happening when a student is using a screen, and is it helping students learn?
The design of technology use matters. Its implementation matters—and time spent also matters. I’ve spent more than 25 years designing assessments, and one of the cardinal rules is: assessments must be designed from the outset to support the intended purpose and use. When Curriculum Associates set out to design i-Ready, an integrated assessment and personalized learning system, our goals were to:
Today, classrooms are filled with students who are performing on or above grade level on some skills and below grade level on others. To measure all students' abilities with precision, we needed a computer-adaptive model that could assess students with these complex profiles of on- and off-grade-level knowledge and skills and provide feedback on each.
A computer-based assessment model was the most effective way to efficiently gauge performance across a classroom and give teachers usable information to drive their instructional approaches throughout the school year. And digital assessments can make learning more accessible for students with disabilities through built-in supports like audio, captions, and screen readers.
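The adaptive idea described above can be illustrated with a toy sketch: administer each item near the learner's currently estimated level, then move the estimate up after a correct answer and down after a miss. This is a generic staircase illustration of the principle only; it is not i-Ready's actual algorithm, and every name and number in it is invented.

```python
# Illustrative staircase sketch of computer-adaptive item selection.
# NOT i-Ready's algorithm -- just the general principle: each item is
# chosen near the learner's estimated level, which updates per response.

def run_adaptive_session(respond, start_level=0.0, step=0.5, n_items=10):
    """respond(level) -> True/False for a (simulated) learner's answer."""
    level = start_level
    history = []
    for _ in range(n_items):
        correct = respond(level)      # administer an item at this difficulty
        history.append((level, correct))
        # harder items after a correct answer, easier after a miss
        level += step if correct else -step
        step = max(step * 0.8, 0.1)   # shrink steps to home in on a level
    return level, history

# Simulated learner who answers correctly up to difficulty 1.2
final_level, history = run_adaptive_session(lambda lvl: lvl <= 1.2)
```

Because each item's difficulty depends on the previous responses, a short adaptive session can locate a student's level across a wide on- and off-grade range, which a fixed form of the same length cannot.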
For decades, research has shown that one of the most powerful drivers of learning is the ability to adjust instruction in response to student understanding. Assessments designed to inform instruction work because they give teachers insight into what students know—and the ability to act on it. Those insights can be used throughout the year, helping teachers make the most of the data to guide daily, off-screen instruction, even in classrooms where students have varied strengths and needs.
High-quality assessment systems are designed to make student learning and outcomes[i] more visible and more actionable by answering two essential questions:
These questions are not theoretical. They are the daily work of teaching. And technology can significantly aid educators in answering both questions.
We designed i-Ready Personalized Instruction (PI) to be delivered via computerized technology for a reason. It uses the information from the assessment to provide a digital supplement of lessons aligned to the needs of each student. i-Ready PI does not replace teacher-led instruction or a teacher’s judgment on how a digital (or any) instructional supplement is used. Instead, it helps augment practice for students, under the direction of a teacher, in targeted areas where they need help.
Without digital tools, ensuring that each student is working on the lesson most advantageous to them in the moment is challenging at scale in today's classrooms.
Yes. This is not hypothetical. The evidence includes large-scale studies conducted across states, districts, and diverse student populations.
Across those studies, the pattern is consistent: when i-Ready PI is used as recommended, students demonstrate stronger outcomes in reading and mathematics than comparable peers. These are not isolated findings. They come from:
The results are typically effect sizes of 0.14 to 0.24, which education research considers meaningful gains in student learning over time.[ii]
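For readers unfamiliar with the metric, an effect size in this sense is a standardized mean difference: the gap between two group means divided by the pooled standard deviation. The sketch below shows the generic calculation (Cohen's d) with made-up scores; it is not data or analysis code from any i-Ready study.

```python
# Illustrative only: how a standardized mean difference ("effect size")
# like the 0.14-0.24 figures above is typically computed.
# The scores below are invented, not data from any i-Ready study.
import statistics

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)   # sample variance
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Hypothetical scale scores for two small groups of students
treated = [520, 540, 535, 560, 545, 530]
comparison = [515, 525, 530, 540, 520, 535]
d = cohens_d(treated, comparison)
```

An effect size of 0.14 to 0.24 means the average treated student scores roughly a seventh to a quarter of a standard deviation above the average comparison student, which Kraft (2020) classifies as a meaningful effect for an education intervention.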
Just as important, those results are tied to how the tool is used—in short, structured sessions within a broader, teacher-led instructional model. We work with each of our schools to ensure that i-Ready PI connects with their curricular materials and instructional approach. Implementation fidelity, which includes teachers in the room with students while they are working on digital lessons, matters.
Some recent articles describe classrooms where students spend large portions of their day disengaged on screens, completing tasks disconnected from everything else happening in the classroom, and with little teacher interaction.
Where that is happening, it is a real problem—and should be addressed. But it is not an accurate representation of how tools like i-Ready PI are designed to be used, nor is it aligned with the professional learning that we provide to educators using i-Ready that clearly states the importance of limited, responsible use of technology in classrooms.
i-Ready PI is built around a clear model:
Our implementation model for digital instruction represents a very small portion of the school day. It is not the dominant learning experience. The majority of class time should focus on teacher-led instruction, discussion, reading, and hands-on learning.
When implementation drifts from those principles, outcomes can suffer. We acknowledge that, and we agree. That is why we train and support educators to use our tools in line with the guidelines above, and why we work hand in hand with district leaders on data analysis and monitoring to promote practice consistent with limited, responsible use of screens. This kind of implementation oversight is not unique to technology. It is true of any instructional approach.
In classrooms where i-Ready PI is used as intended, it is part of a broader instructional cycle:
That cycle is not new. It is the foundation of effective teaching. Technology, when thoughtfully designed and implemented, can make that cycle more visible and more responsive—especially in classrooms where one teacher is responsible for many students, each with very different needs.
We should hold all educational tools to a high standard. That includes asking:
Those are the right questions. i-Ready PI has been built and rigorously studied against these high standards. That does not mean every implementation is perfect. No tool guarantees outcomes on its own. But it does mean we should evaluate it—and any tool—based on evidence and use, not assumptions.
We hold ourselves accountable to these high standards in many ways. We submit our research for review to the What Works Clearinghouse and Evidence for ESSA. We work with highly respected institutions like the Center for Research and Reform in Education at Johns Hopkins University, NORC at the University of Chicago, and HumRRO to conduct quasi-experimental and longitudinal efficacy research on our products. And, in 2018, we established an Efficacy Advisory Committee of external, nationally recognized experts in educational research to review and advise on our study designs, analyses, and interpretations.
Learning is a sociocultural phenomenon. Technology cannot and should not replace teaching. It cannot replicate the relationships, judgment, and responsiveness that define effective classrooms and effective educators. Teacher–student and student–student interactions must be centered for learning to take hold.
In a limited school day, every minute matters. Educators and their partners, like us, deserve to ask and be asked the tough questions about effective use of instructional tools. The goal should not be to eliminate technology altogether, but to use it as a means to an end. When technology clearly serves the end goal of teaching and learning, we design from the outset with that purpose and use in mind. The ultimate goal is to ensure that any time students spend using digital tools is intentional, instructionally aligned, safe, and effective, so that it is worthy of educators' and students' time and attention. That is the standard we should apply, not just to i-Ready, but to every tool we choose to bring into classrooms.
Learn more about i-Ready’s research-backed approach.
[i] Huff, K., Nichols, P., & Schneider, M. C. (2025). Designing and developing educational assessments. In L. L. Cook & M. J. Pitoniak (Eds.), Educational measurement (5th ed., pp. 441–511). Oxford University Press.
[ii] Kraft, M. A. (2020). Interpreting effect sizes of education interventions. Educational Researcher, 49(4), 241–253.
