Prashant Loyalka
Encina Hall East, 4th floor
616 Jane Stanford Way
Stanford, CA 94305-6055
Prashant Loyalka is an Associate Professor at the Graduate School of Education and a Senior Fellow at the Freeman Spogli Institute for International Studies at Stanford University. His research focuses on examining and addressing inequalities in the education of children and youth and on understanding and improving the quality of education they receive in multiple countries, including China, India, Russia, and the United States. He also conducts large-scale evaluations of educational programs and policies that seek to improve student outcomes.
Assessing College Critical Thinking: Preliminary Results from the Chinese HEIghten® Critical Thinking Assessment
Assessing student learning outcomes has become a global trend in higher education. In this paper, we report on the validation of the Chinese HEIghten® Critical Thinking assessment with a nationally representative sample of Electrical Engineering and Computer Science students from 35 institutions in China. Key findings suggest a test delivery mode effect favoring the paper tests over the online tests. In general, the psychometric quality of the items was satisfactory for low-stakes, group-level uses, but a few items with low discrimination await further investigation. The relationships between test scores and external variables such as college entrance examination scores, elite university status, and student perceptions of the test were as expected. We conclude by discussing possible explanations for the key findings and directions for future research.
The Impact of Online Computer Assisted Learning at Home for Disadvantaged Children in Taiwan: Evidence from a Randomized Experiment
In Taiwan, thousands of students from Yuanzhumin (aboriginal) families lag far behind their Han counterparts in academic achievement. When they fall behind, they often have no way to catch up. There is increased interest among both educators and policymakers in helping underperforming students catch up using computer-assisted learning (CAL). The objective of this paper is to examine the impact of an intervention aimed at raising the academic performance of students using an in-home CAL program. According to intention-to-treat estimates, in-home CAL improved the overall math scores of students in the treatment group relative to the control group by 0.08 to 0.20 standard deviations (depending on whether the treatment lasted one or two semesters). Furthermore, we used an Average Treatment Effect on the Treated analysis to account for imperfect compliance in our experiment; this analysis shows that in-home CAL raised academic performance by 0.36 standard deviations among compliers. This study thus presents preliminary evidence that an in-home CAL program has the potential to boost the learning outcomes of disadvantaged students.
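The distinction between the two estimands can be illustrated with a small simulation. The Python sketch below uses hypothetical data and variable names (it is not the study's analysis code): it computes an intention-to-treat estimate as the difference in mean outcomes by random assignment, then rescales it by the difference in take-up rates (the standard Wald/instrumental-variables ratio) to approximate the effect among compliers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: random assignment, imperfect take-up, standardized test scores.
n = 5000
assigned = rng.integers(0, 2, n)                 # 1 = offered in-home CAL
took_up = assigned * (rng.random(n) < 0.55)      # only some assigned students comply
score = 0.36 * took_up + rng.normal(0, 1, n)     # assumed effect accrues only to users

# Intention-to-treat: compare outcomes by assignment, ignoring actual usage.
itt = score[assigned == 1].mean() - score[assigned == 0].mean()

# Wald / IV estimate: scale the ITT effect by the difference in take-up rates.
first_stage = took_up[assigned == 1].mean() - took_up[assigned == 0].mean()
complier_effect = itt / first_stage

print(f"ITT effect:      {itt:+.2f} SD")
print(f"Complier effect: {complier_effect:+.2f} SD")
```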
Examining Mode Effects for an Adapted Chinese Critical Thinking Assessment
We examine the effects of computer-based versus paper-based assessment of critical thinking skills, adapted from English (in the U.S.) to Chinese. Using data collected under random assignment between the two modes in multiple Chinese colleges, we investigate mode effects from multiple perspectives: mean scores, measurement precision, item functioning (i.e., item difficulty and discrimination), response behavior (i.e., test completion and item omission), and user perceptions. Our findings shed light on assessment and item properties that could be the sources of mode effects. At the test level, we find that the computer-based test is more difficult and more speeded than the paper-based test. We speculate that these differences are attributable to the test’s structure, its high demands on reading, and the test-taking flexibility afforded under the paper testing mode. Item-level evaluation allows us to identify item characteristics that are prone to mode effects, including the targeted cognitive skill, the response type, and the amount of adaptation between modes. Implications for test design are discussed, and actionable design suggestions are offered with the goal of minimizing mode effects.
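For readers unfamiliar with the item-level quantities involved, the sketch below shows how classical item difficulty (proportion correct) and discrimination (corrected point-biserial correlation) can be computed and compared across two delivery modes. The simulated responses, the 0.3 "mode penalty," and all names are hypothetical; this is not the assessment's actual analysis.

```python
import numpy as np

def item_stats(responses):
    """Classical item statistics for a 0/1 response matrix (rows = examinees, cols = items)."""
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)                       # proportion correct per item
    rest = responses.sum(axis=1, keepdims=True) - responses   # total score excluding the item
    discrimination = np.array([
        np.corrcoef(responses[:, j], rest[:, j])[0, 1]        # corrected point-biserial
        for j in range(responses.shape[1])
    ])
    return difficulty, discrimination

# Hypothetical data: simulate the same 20 items under each mode, with the
# computer-based mode made slightly harder, then compare per-item statistics.
rng = np.random.default_rng(1)
ability = rng.normal(size=(500, 1))
b = np.linspace(-1.5, 1.5, 20)                                # item difficulty parameters

def simulate(mode_shift):
    prob = 1 / (1 + np.exp(-(ability - b - mode_shift)))
    return (rng.random((500, 20)) < prob).astype(int)

paper_diff, paper_disc = item_stats(simulate(0.0))
online_diff, online_disc = item_stats(simulate(0.3))          # 0.3 = assumed mode penalty

print("Mean proportion-correct gap (paper - online):", round(float((paper_diff - online_diff).mean()), 3))
print("Mean discrimination gap (paper - online):    ", round(float((paper_disc - online_disc).mean()), 3))
```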
Schooling and Covid-19: Lessons from Recent Research on EdTech
The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes. Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling. In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We found that EdTech was a perfect substitute for traditional learning. In Russia, we further explored how much EdTech can substitute for traditional learning. We found that EdTech substitutes for traditional learning only to a limited extent. The findings from these large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren.
The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes, and that limited evidence is mixed.1,2 For example, previous studies that examine the performance of students in online courses generally find that they do not perform as well as students in traditional courses. On the other hand, recent large-scale evaluations of supplemental computer-assisted learning programs show large positive effects on test scores. One concern, however, is that EdTech is often evaluated as a supplemental after-school program instead of as a direct substitute for traditional learning. Supplemental programs inherently have an advantage in that they provide additional time spent learning the material.
Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling.3,4 In both, we focused on whether and how EdTech can substitute for in-person instruction (being careful to control for time on task). In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We followed students ages 9–13 for several months over the academic year. When we examined the impacts of each supplemental program, we found that EdTech and workbook exercise sessions of equal time and content outside of school hours had the same effect on standardized math test scores and grades in math classes. As such, EdTech appeared to be a perfect substitute for traditional learning.
In Russia, we built on these findings by further exploring how much EdTech can substitute for traditional learning. We examined whether providing students ages 9–11 with no EdTech, a base level of EdTech (~45 min per week), or double that level of EdTech can improve standardized test scores and grades. We found that EdTech can substitute for traditional learning only to a limited extent. There is a diminishing marginal rate of substitution for traditional learning from doubling the amount of EdTech use (that is, when we double the amount of EdTech used, we do not find that gains in test scores double). We even find that additional time on EdTech decreases schoolchildren’s motivation and engagement with the subject material.
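A stylized simulation can make the dosage comparison concrete. In the Python sketch below, the three arms and the effect sizes are invented purely for illustration (they are not the study's estimates); the point is simply how the incremental gain from doubling the dose is computed and why it can be smaller than the gain from the base dose.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical three-arm dosage comparison: no EdTech, a base dose (~45 min/week),
# and double the base dose. Effect sizes are invented to illustrate a diminishing
# marginal return; they are not the study's estimates.
n_per_arm = 10_000
effects = {"none": 0.00, "base": 0.20, "double": 0.25}   # assumed gains, in SD units

scores = {arm: mu + rng.normal(0, 1, n_per_arm) for arm, mu in effects.items()}
means = {arm: s.mean() for arm, s in scores.items()}

print(f"Base dose vs none:   {means['base'] - means['none']:+.2f} SD")
print(f"Double vs base dose: {means['double'] - means['base']:+.2f} SD (smaller incremental gain)")
```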
The findings from the large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren. There are two general takeaways: First, to a certain extent, EdTech can successfully substitute for traditional learning. Second, there are limits on how much EdTech may be beneficial. Admittedly, we need to be careful about extrapolating from the smaller amount of technology substitution in our experiments to the full-scale substitution in the face of the coronavirus pandemic. However, these studies may offer important lessons. For example, a balanced approach to learning in which schoolchildren intermingle work on electronic devices and work with traditional materials might be optimal. Schools could mail workbooks to students or recommend that students print out exercises to break up the amount of continuous time schoolchildren spend on devices. This might keep students engaged throughout the day and avoid problems associated with removing the structure of classroom schedules. Schools and families can devise creative remote learning solutions that include a combination of EdTech and more traditional forms of learning. Activities such as reading books, running at-home experiments, and art projects can also be used to break up extensive use of technology in remote instruction.
The Impact of Computer Assisted Learning on Rural Taiwanese Children: Evidence from a Randomized Experiment
The effectiveness of educational technology (EdTech) in improving the outcomes of poor, marginalized students has primarily been documented by studies conducted in developing countries, while relevant research involving randomized studies in developed country contexts is relatively scarce. The objective of the current study is to examine whether an in-school computer assisted learning (CAL) intervention can improve the math performance (the primary outcome) and academic attitudes (secondary outcomes) of rural students in Taiwan, including a marginalized subgroup of rural students called Xinzhumin. We also seek to identify which factors are associated with the effectiveness of the intervention. To do so, we conducted a randomized controlled trial involving 1,840 sixth-grade students at 95 schools in four relatively poor counties and municipalities of Taiwan during the spring semester of 2019. The intention-to-treat (ITT) analysis shows that the CAL intervention had no significant impacts on the primary outcome of student math performance or on most secondary outcomes for the overall treatment group (which, on average, used the software for only about one quarter of the protocol’s minimum required time of 30 minutes per week, indicating that compliance was low). However, a LATE analysis revealed significant improvements in the math performance of the 30% most active students in the treatment group (who used the software for about two thirds of the minimum required time). Effect sizes for active users overall (0.16–0.22 SD) increased with usage and were larger for active Xinzhumin users specifically (0.21–0.35 SD). A wide range of student-level and (in particular) teacher-level characteristics were associated with the low compliance with the intervention; these findings may help inform educational policymakers and administrators about the potential challenges of introducing school-based interventions that depend heavily on teacher adoption and integration.
Education and EdTech during COVID-19: Evidence from a Large-Scale Survey during School Closures in China
In response to the COVID-19 epidemic, many education systems have relied on distance learning and educational technologies to an unprecedented degree. However, rigorous empirical research on the impacts on learning under these conditions is still scarce. We present the first large-scale, quantitative evidence detailing how school closures affected education in China. The data set covers households and teachers of 4,360 rural and urban primary school students. We find that although the majority of students engaged in distance education, many households encountered difficulties, including barriers to learning (such as a lack of access to appropriate digital devices and study spaces), curricular delays, and costs to parents equivalent to about two months of income. We also find significant disparities between rural and urban households.
Isolating the "Tech" from EdTech: Experimental Evidence on Computer Assisted Learning in China
EdTech, which includes computer assisted learning (CAL), online education, and remote instruction, was expanding rapidly even before the current full-scale substitution for in-person learning at all levels of education around the world because of COVID-19. Studies of CAL interventions often find positive effects; however, these “CAL programs” often include non-technology-based inputs, such as additional learning time and instructional support from facilitators, in addition to technology-based components. In this paper, we discuss the possible channels by which CAL programs affect academic outcomes among schoolchildren. We isolate the technology-based effects of CAL from the total program effects by designing a novel multi-treatment field experiment with more than four thousand schoolchildren in rural China. For the full sample, we find null effects for both the total CAL program and the technology-based effects of CAL (which are measured relative to a traditional pencil-and-paper learning treatment) on math test scores. For boys, however, we find a positive and statistically significant effect of the CAL program, but do not find evidence of a positive technology-based effect of CAL. When focusing on grades, we find evidence of positive CAL program effects but null effects when we isolate the technology-based effects of CAL. Our empirical results suggest that the “Tech” in EdTech may have relatively small additional effects on academic outcomes, and that tech programs can nonetheless substitute, at least to a certain extent, for traditional learning.
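To make the decomposition concrete, the Python sketch below contrasts a CAL arm with both a pure control arm and a pencil-and-paper arm of identical content: the first difference approximates the total program effect, the second the technology-based effect. The arm assignments, effect sizes, and variable names are hypothetical illustrations, not the experiment's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical three-arm design: pure control, pencil-and-paper workbooks of identical
# content, and CAL. Effect sizes are invented for illustration only.
n = 4500
arm = rng.integers(0, 3, n)                    # 0 = control, 1 = workbook, 2 = CAL
true_effect = np.array([0.00, 0.08, 0.10])     # assumed gains, in SD units
score = true_effect[arm] + rng.normal(0, 1, n)

# OLS with treatment-arm dummies (control is the omitted category).
X = np.column_stack([np.ones(n), arm == 1, arm == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

program_effect = beta[2]                # total CAL program effect: CAL vs control
tech_effect = beta[2] - beta[1]         # technology-based effect: CAL vs same content on paper
print(f"Total CAL program effect: {program_effect:+.2f} SD")
print(f"Technology-based effect:  {tech_effect:+.2f} SD")
```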
Institutions, Implementation, and Program Effectiveness: Evidence from a Randomized Evaluation of Computer-Assisted Learning in Rural China
There is limited evidence on the degree to which differences in implementation among institutions matter for program effectiveness. To examine this question, we conducted an experiment in rural China in which public schools were randomly assigned to one of three treatments: a computer-assisted learning (CAL) program implemented by a government agency, the same program implemented by an NGO, and a pure control. Results show that, compared to the pure control condition and unlike the NGO program, the government program did not improve student achievement. Analyzing impacts along the causal chain, we find that government officials were more likely to substitute CAL for regular instruction (contrary to protocol) and less likely to directly monitor program progress. Correlational analyses suggest that these differences in program implementation were responsible for the lack of impacts.