Philosophy of Education

My primary research in this area concerns the use of evidence-based policy/practice in education. My publications on this topic (three coauthored with Nancy Cartwright) focus on methodological and epistemological issues. My works in progress (see below) consider the role of values in evidence-based decision-making and evaluate the evidence-based approach in terms of fairness and equality. I also have related interests in pedagogical methods.

Background: In 2001, the No Child Left Behind Act (NCLB) introduced an evidence-based strategy for school improvement intended to narrow achievement gaps and raise achievement for all students. Essentially, it seeks to improve the quality of schools nationwide by encouraging them to base their instructional decisions on scientific research that speaks to the causal effectiveness of available interventions (e.g., programs, practices). The U.S. Institute of Education Sciences sponsors rigorous experimental research to identify effective educational interventions. Schools nationwide are encouraged to implement these 'evidence-based' interventions with fidelity to produce similar effects, thereby improving student outcomes and opportunities for learning. Unfortunately, evidence-based education has yet to yield the intended results. This is partly because evidence-based interventions that produce positive effects in research settings often fail to produce similar effects when used in practice. This is known as the 'research-practice gap.' Nancy Cartwright and I argue that this gap stems in part from unsupported causal inferences, and we discuss ways in which the knowledge produced by various research methods can evidence the causal claims relevant to decision-making within education.


"Prioritizing Disadvantaged Students in Principle and in Practice" (forthcoming) in Philosophical Inquiry in Education

Abstract: The U.S. uses an evidence-based approach to education (US-EBE) as a strategy for pursuing two major goals: (1) raising achievement in the U.S. overall by facilitating improvement among all students, including students in disadvantaged groups; and (2) narrowing achievement gaps between socially advantaged and disadvantaged groups by leveling up achievement among disadvantaged students. While both goals prioritize improvement among disadvantaged students in absolute terms, only the second attempts to address unequal achievement by prioritizing improvement among disadvantaged students relative to advantaged students. I argue that US-EBE can reasonably be expected to advance either the first goal or the second, but not both simultaneously as intended. This descriptive point raises a normative question: which goal should we pursue using US-EBE? I explore moral considerations that bear on this question, focusing on costs and benefits for students. I argue, provisionally, that we ought to use US-EBE to narrow gaps because the costs associated with doing so are morally justifiable, whereas those associated with the alternative are not.

"Fair Accountability for Educators in the Context of Evidence-based Education" (forthcoming) in Public Affairs Quarterly

Abstract: It’s only fair to hold someone accountable for outcomes over which they have sufficient control. The evidence-based approach to education (EBE) promises to give educators sufficient control over their students’ outcomes by providing access to interventions that are effective according to scientific research. I argue that EBE fails to secure sufficient control because the research on which it relies doesn’t establish that interventions are generally effective. If they are to be fair, accountability practices must reflect the limited control educators have, even when using evidence-based interventions that have improved outcomes in other settings. I glean relevant insights by considering accountability for medical practitioners within the context of evidence-based medicine (EBM).

"Revisiting the Role of Values in Evidence-based Education" (forthcoming) in The Journal of Philosophy of Education.

Abstract: Evidence-based practice in education involves basing decisions about what to do on evidence about the relative effectiveness of available interventions (e.g., programs, products, practices). This paper considers two influential critiques of evidence-based education (EBE) pertaining to its treatment of values. The ‘general critique’ condemns EBE for excluding values from decisions about what to do in education. The ‘specific critique’ condemns EBE for relying on a deterministic view of causality in education which disregards the complex, value-laden nature of educational contexts. I argue that virtually all versions of EBE escape the general critique, including the dominant intervention-centered approach that relies on experimental research to discover ‘what works,’ because the predictions EBE aims to support are only one premise in a broader normative argument. Further, intervention-centered EBE can avoid much of the specific values-based critique because it is consistent with a probabilistic, rather than deterministic, understanding of causality. However, I argue that only a context-centered approach to EBE that relies on evidence about the specific target setting from local sources in addition to evidence from theory and research can fully address the specific critique by accommodating critics’ descriptive claims about the nature of educational contexts.

Joyce, K. & Cartwright, N. (2022) "How Should Evidence Inform Education Policy?" in Routledge Handbook of Philosophy of Education.

This chapter explores how evidence from various sources can support education policy decisions. Although policy arguments include some normative premises, we focus on the evidence needed to support their descriptive premises, homing in on predictions about how candidate policies are likely to perform in specific target sites. While evidence from RCTs is widely viewed as the gold standard, we argue that it is neither sufficient nor necessary for supporting predictions about education policies. Trustworthy predictions require information about how a policy operates, the conditions under which it can do so, and the conditions present in the target setting; this information comes from a mix of research methods, theory, and local sources. Such evidence is also useful for feasibility assessments and implementation planning.

Joyce, K. & Cartwright, N. (2020) "Bridging the Gap Between Research and Practice: Predicting What Will Work Locally," American Educational Research Journal 57 (3)

Abstract: This article addresses the gap between what works in research and what works in practice. Currently, research in evidence-based education policy and practice focuses on randomized controlled trials. These can support causal ascriptions (‘‘It worked’’) but provide little basis for local effectiveness predictions (‘‘It will work here’’), which are what matter for practice. We argue that moving from ascription to prediction by way of causal generalization (‘‘It works’’) is unrealistic and urge focusing research efforts directly on how to build local effectiveness predictions. We outline various kinds of information that can improve predictions and encourage using methods better equipped for acquiring that information. We compare our proposal with others advocating a better mix of methods, like implementation science, improvement science, and practice-based evidence.

Joyce, K. (2019) "The Key Role of Representativeness in Evidence-based Education," Educational Research and Evaluation 25 (3-4)

(Reprinted in The Evidential Basis of Evidence-based Education, Routledge 2020; ISBN 9780367520335)

Abstract: Within evidence-based education, results from randomised controlled trials (RCTs), and meta-analyses of them, are taken as reliable evidence for effectiveness – they speak to “what works”. Extending RCT results requires establishing that study samples and settings are representative of the intended target. Although widely recognised as important for drawing causal inferences from RCTs, claims regarding representativeness tend to be poorly evidenced. Strategies for demonstrating it typically involve comparing observable characteristics (e.g., race, gender, location) of study samples to those in the population of interest to decision makers. This paper argues that these strategies provide insufficient evidence for establishing representativeness. Characteristics typically used for comparison are unlikely to be causally relevant to all educational interventions. Treating them as evidence that supports extending RCT results without providing evidence demonstrating their relevance undermines the inference. Determining what factors are causally relevant requires studying the causal mechanisms underlying the interventions in question.

Joyce, K. & Cartwright, N. (2018) "Meeting Our Standards for Educational Justice: Doing Our Best with the Evidence," Theory and Research in Education 16 (1)

Abstract: The United States considers educating all students to a threshold of adequate outcomes to be a central goal of educational justice. The No Child Left Behind Act introduced evidence-based policy and accountability protocols to ensure that all students receive an education that enables them to meet adequacy standards. Unfortunately, evidence-based policy has been less effective than expected. This article pinpoints under-examined methodological problems and suggests a more effective way to incorporate educational research findings into local evidence-based policy decisions. It identifies some things educators need to know and do to determine whether available interventions can play the right causal role in their setting to produce desired effects. It examines the value and limits of educational research, especially randomized controlled trials, for this task.

Pedagogy Research

Joyce, K., Lamey, A., & Martin, N. (2018) "Teaching Philosophy through a Role-Immersion Game: Reacting to the Past," Teaching Philosophy 41 (2).

Abstract: A growing body of research suggests that students achieve learning outcomes at higher rates when instructors use active-learning methods rather than standard modes of instruction. To investigate how one such method might be used to teach philosophy, we observed two classes that employed Reacting to the Past (hereafter, Reacting), an educational role-immersion game. We chose to investigate Reacting because role-immersion games are considered a particularly effective active-learning strategy. Professors who have used Reacting to teach history, interdisciplinary humanities, and political theory agree that it engages students and teaches general skills like collaboration and communication. We investigated whether it can be effective for teaching philosophical content and skills like analyzing, evaluating, crafting, and communicating arguments in addition to bringing the more general benefits of active learning to philosophy classrooms. Overall, we find Reacting to be a useful tool for achieving these ends. While we do not argue that Reacting is uniquely useful for teaching philosophy, we conclude that it is worthy of consideration by philosophers interested in creative active-learning strategies, especially given that it offers a prepackaged set of flexible, user-friendly tools for motivating and engaging students.