Newly published, peer-reviewed research out of Michigan State University and the University of California, Irvine suggests that one-to-one laptop programs improve student academic achievement in K-12 classrooms.
The primary author of the study (Binbin Zheng, Ph.D., Assistant Professor in the Department of Counseling, Educational Psychology, and Special Education at Michigan State University) was kind enough to provide me with a review copy and permission to blog about the findings. Here is the citation for my review copy:
Zheng, B., Warschauer, M., Lin, C.-H., & Chang, C. (in press). Learning in one-to-one laptop environments: A meta-analysis and research synthesis. Review of Educational Research. DOI: 10.3102/0034654316628645.
Given the top-line finding, I suspect the study will garner much attention and – at the same time – be subject to much spin. While you can read the full 30+ page study yourself (and I’d certainly encourage you to do so if this is a topic of interest), here is my summary, with accompanying analysis of what we should reasonably take away from the findings.
How was the study conducted?
Dr. Zheng and colleagues did not collect original data; rather, they used a statistical technique (meta-analysis) to combine the findings of prior studies meeting specific quality criteria in an effort to identify larger trends in the emerging body of empirical evidence on laptop programs’ impact on learning. While literature reviews and research syntheses have been conducted previously on laptop programs, this meta-analytic study is the first ever on this topic.
A total of ten (10) studies met the criteria for inclusion in the meta-analysis, meaning all ten studies:
- Were published in the English language after having gone through a formal peer review process;
- Were conducted between 2001 and 2015 (more specifically, the included studies were published between 2005 and 2012);
- Were conducted in K-12 school settings in the U.S. and abroad;
- Were focused on one-to-one laptop (i.e., not tablet, desktop, smartphone or other technologies) programs rather than on shared laptop programs or other interventions that occurred within a laptop environment;
- Had experimental and control groups, or an identified reference norm;
- Reported quantitative findings of the impact on students’ academic achievement;
- Employed measures of academic achievement that were standardized assessments or norm-referenced district- or school-wide tests;
- Reported the duration of the study; and,
- Otherwise provided sufficient statistical data to calculate effect sizes.
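For readers new to the technique: a meta-analysis converts each study's results into a standardized effect size (e.g., Cohen's d, a difference expressed in standard deviations) and pools them, weighting more precise studies more heavily. The sketch below shows the classic inverse-variance (fixed-effect) pooling with hypothetical study values – the numbers are illustrative only and are not drawn from the studies in the paper, which may also use a more elaborate random-effects model:

```python
import math

def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled effect size.

    Each study is weighted by 1/variance, so more precise studies
    count for more; the pooled standard error is 1/sqrt(sum of weights).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical per-study effect sizes (in standard deviations) and
# their variances -- illustrative numbers only, not the paper's data.
effects = [0.10, 0.20, 0.18]
variances = [0.010, 0.020, 0.015]

pooled, se = pool_fixed_effect(effects, variances)
print(f"pooled d = {pooled:.3f}, SE = {se:.3f}")
```

The weighting is why a meta-analysis can detect a modest overall effect even when individual small studies are individually inconclusive.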
What did the meta-analysis reveal?
Pooled across all studies, one-to-one laptop programs were found to increase overall academic achievement by a statistically significant (p < .001) 0.16 standard deviations. Broken out by subject, one-to-one laptop programs:
- Improved English language arts achievement by a statistically significant 0.15 standard deviations (p < .05; based on 19 effect sizes within 6 studies);
- Had no statistically significant effect on reading achievement (based on 13 effect sizes within 4 studies);
- Improved writing achievement by a statistically significant 0.20 standard deviations (based on 11 effect sizes across 3 studies);
- Improved mathematics achievement by a statistically significant 0.16 standard deviations (based on 21 effect sizes across 7 studies); and,
- Improved science achievement by a statistically significant 0.25 standard deviations (based on 3 effect sizes from 2 studies).
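To put effects of this size in perspective: a gain of d standard deviations moves the average treated student from the 50th percentile to the Φ(d) percentile of the comparison-group distribution, assuming roughly normally distributed scores. This quick conversion is my own illustration (not a calculation from the paper), using only the standard library:

```python
import math

def sd_gain_to_percentile(d):
    """Percentile of the comparison distribution reached by the average
    treated student, assuming normally distributed scores.
    Normal CDF: Phi(d) = 0.5 * (1 + erf(d / sqrt(2)))."""
    return 100 * 0.5 * (1 + math.erf(d / math.sqrt(2)))

# Effect sizes reported in the meta-analysis, converted to percentiles.
for label, d in [("overall", 0.16), ("writing", 0.20), ("science", 0.25)]:
    print(f"{label}: d = {d:.2f} -> {sd_gain_to_percentile(d):.1f}th percentile")
```

In other words, the overall 0.16 SD effect corresponds to moving the average student from roughly the 50th to roughly the 56th percentile – real, but modest.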
What should we make of these findings?
First, a finding of negative effects would have and should have sounded a clarion call on the use of laptops for learning; however, the study found small positive effects in the main and also in many (but not all) academic subject areas. In reporting modest effect sizes, the study does not support the notion that one-to-one laptop programs are a silver bullet solution to raising student achievement or to closing achievement gaps. Rather, it suggests that one-to-one laptop programs can contribute to a broader academic improvement strategy and set of solutions. Moreover, this study did not evaluate other important decision making factors in deploying or enhancing a one-to-one program, such as the cost effectiveness of deploying devices for all students and teachers, whether or how to implement a personalized or blended learning program, or what specific instructional software or tools work best for which types of students. Finally, the authors themselves caution that the findings of this study should not be considered generalizable to deployments of tablets, desktops, smartphones or other classroom technologies, as those technologies have different affordances.
Second, it is only common sense that how and why one-to-one laptop programs are implemented in K-12 education matters, and the study provides insights into other important features of one-to-one laptop programs and the sorts of student and teacher outcomes that could be expected beyond traditional measures of academic achievement, albeit on a weaker evidence base. Of note, the authors find that existing high-quality studies of these programs provide only weak empirical evidence for the effects of laptops on gains in students’ ‘21st century learning skills’ and reveal a complex relationship between one-to-one laptop programs and equity goals. That is, positive equity-related outcomes for at-risk learners were not achieved in all one-to-one laptop programs.
Third, because the question matters, there will always be debates about the quality of the research base on technology in schools and for learning, about what we can infer from that research base (case in point: the 2015 OECD study), and about how best to study new innovations. What is incontrovertible, however, is that policymakers and practitioners need and deserve high-quality, empirically based insights on how best to meet the academic needs of children and youth, including with technology-based interventions. That only ten high-quality empirical studies of one-to-one laptop programs were suitable for inclusion in this meta-analysis speaks volumes about the research-to-practice gap and about federal priorities for education research.
My congratulations to Dr. Zheng and colleagues for a solid contribution to the emerging research base on the impact of technology in K-12 schools on student academic achievement.