Joyce, K. & Cartwright, N. (forthcoming) "How Should Evidence Inform Education Policy?" in Routledge Handbook of Philosophy of Education.

Abstract: This chapter explores how evidence from various sources can support education policy decisions. Although policy arguments include some normative premises, we focus on the evidence needed to support their descriptive premises, homing in on predictions about how candidate policies are likely to perform in specific target sites. Although evidence from RCTs is viewed as the gold standard, we argue that it is neither sufficient nor necessary for predictions about education policies. Trustworthy predictions require information about how the policy operates, the conditions under which it can do so, and the conditions present in the target setting; this information comes from a mix of research methods, theory, and local sources. This evidence is also useful for feasibility assessments and implementation planning.

Joyce, K. (2019) The Key Role of Representativeness in Evidence-based Education, Educational Research and Evaluation 25 (3-4).

(Reprinted in The Evidential Basis of Evidence-based Education, Routledge 2020; ISBN 9780367520335)

Abstract: Within evidence-based education, results from randomised controlled trials (RCTs), and meta-analyses of them, are taken as reliable evidence for effectiveness – they speak to “what works”. Extending RCT results requires establishing that study samples and settings are representative of the intended target. Although widely recognised as important for drawing causal inferences from RCTs, claims regarding representativeness tend to be poorly evidenced. Strategies for demonstrating it typically involve comparing observable characteristics (e.g., race, gender, location) of study samples to those in the population of interest to decision makers. This paper argues that these strategies provide insufficient evidence for establishing representativeness. Characteristics typically used for comparison are unlikely to be causally relevant to all educational interventions. Treating them as evidence that supports extending RCT results without providing evidence demonstrating their relevance undermines the inference. Determining what factors are causally relevant requires studying the causal mechanisms underlying the interventions in question.

Joyce, K. & Cartwright, N. (2019) Bridging the Gap Between Research and Practice: Predicting What Will Work Locally, American Educational Research Journal 57 (3).

Abstract: This article addresses the gap between what works in research and what works in practice. Currently, research in evidence-based education policy and practice focuses on randomized controlled trials. These can support causal ascriptions (‘‘It worked’’) but provide little basis for local effectiveness predictions (‘‘It will work here’’), which are what matter for practice. We argue that moving from ascription to prediction by way of causal generalization (‘‘It works’’) is unrealistic and urge focusing research efforts directly on how to build local effectiveness predictions. We outline various kinds of information that can improve predictions and encourage using methods better equipped for acquiring that information. We compare our proposal with others advocating a better mix of methods, like implementation science, improvement science, and practice-based evidence.

Joyce, K. & Cartwright, N. (2018) Meeting Our Standards for Educational Justice: Doing Our Best with the Evidence, Theory and Research in Education 16 (1).

Abstract: The United States considers educating all students to a threshold of adequate outcomes to be a central goal of educational justice. The No Child Left Behind Act introduced evidence-based policy and accountability protocols to ensure that all students receive an education that enables them to meet adequacy standards. Unfortunately, evidence-based policy has been less effective than expected. This article pinpoints under-examined methodological problems and suggests a more effective way to incorporate educational research findings into local evidence-based policy decisions. It identifies some things educators need to know and do to determine whether available interventions can play the right causal role in their setting to produce desired effects. It examines the value and limits of educational research, especially randomized controlled trials, for this task.

Joyce, K. (2018) Teaching Philosophy through a Role-Immersion Game: Reacting to the Past, with Andy Lamey and Noel Martin, Teaching Philosophy 41 (2), June 2018.

Abstract: A growing body of research suggests that students achieve learning outcomes at higher rates when instructors use active-learning methods rather than standard modes of instruction. To investigate how one such method might be used to teach philosophy, we observed two classes that employed Reacting to the Past (hereafter, Reacting), an educational role-immersion game. We chose to investigate Reacting because role-immersion games are considered a particularly effective active-learning strategy. Professors who have used Reacting to teach history, interdisciplinary humanities, and political theory agree that it engages students and teaches general skills like collaboration and communication. We investigated whether it can be effective for teaching philosophical content and skills like analyzing, evaluating, crafting, and communicating arguments in addition to bringing the more general benefits of active learning to philosophy classrooms. Overall, we find Reacting to be a useful tool for achieving these ends. While we do not argue that Reacting is uniquely useful for teaching philosophy, we conclude that it is worthy of consideration by philosophers interested in creative active-learning strategies, especially given that it offers a prepackaged set of flexible, user-friendly tools for motivating and engaging students.