Educational policy-makers around the world constantly make decisions about how to use scarce resources to improve the education of children. Unfortunately, their decisions are rarely informed by evidence on the consequences of these initiatives in other settings. Nor are decisions typically accompanied by well-formulated plans to evaluate their causal impacts. As a result, knowledge about what works in different situations has been very slow to accumulate. Over the last several decades, advances in research methodology, administrative record keeping, and statistical software have dramatically increased the potential for researchers to conduct compelling evaluations of the causal impacts of educational interventions, and the number of well-designed studies is growing. Written in clear, concise prose, Methods Matter: Improving Causal Inference in Educational and Social Science Research offers essential guidance for those who evaluate educational policies. Using numerous examples of high-quality studies that have evaluated the causal impacts of important educational interventions, the authors go beyond the simple presentation of new analytical methods to discuss the controversies surrounding each study, and provide heuristic explanations that are also broadly accessible. Murnane and Willett offer strong methodological insights on causal inference, while also examining the consequences of a wide variety of educational policies implemented in the U.S. and abroad. Representing a unique contribution to the literature surrounding educational research, this landmark text will be invaluable for students and researchers in education and public policy, as well as those interested in social science.
1 The Challenge for Educational Research
1.1 The Long Quest
1.2 The Quest Is World-Wide
1.3 What This Book Is About
1.4 What to Read Next
2 The Importance of Theory
2.1 What Is Theory?
2.2 Theory in Education
2.3 Voucher Theory
2.4 What Kind of Theories?
2.5 What to Read Next
3 Designing Research to Address Causal Questions
3.1 Conditions to Strive for in All Research
3.2 Making Causal Inferences
3.3 Past Approaches to Answering Causal Questions in Education
3.4 The Key Challenge of Causal Research
3.5 What to Read Next
4 Investigator-Designed Randomized Experiments
4.1 Conducting Randomized Experiments
4.1.1 An Example of a "Two-Group" Experiment
4.2 Analyzing Data from Randomized Experiments
4.2.1 The Better Your Research Design, the Simpler Your Data Analysis
4.2.2 Bias and Precision in the Estimation of Experimental Effects
4.3 What to Read Next
5 Challenges in Designing, Implementing, and Learning from Randomized Experiments
5.1 Critical Decisions in the Design of Experiments
5.1.1 Defining the Treatment Being Evaluated
5.1.2 Defining the Population from Which Participants Will Be Sampled
5.1.3 Deciding Which Outcomes to Measure
5.1.4 Deciding How Long to Track Participants
5.2 Threats to Validity of Randomized Experiments
5.2.1 Contamination of the Treatment-Control Contrast
5.2.2 Cross-Overs
5.2.3 Attrition from the Sample
5.2.4 Participation in an Experiment Itself Affects Participants' Behavior
5.3 Gaining Support for Conducting Randomized Experiments: Examples from India
5.3.1 Evaluating an Innovative Input Approach
5.3.2 Evaluating an Innovative Incentive Policy
5.4 What to Read Next
6 Statistical Power and Sample Size
6.1 Statistical Power
6.1.1 Reviewing the Process of Statistical Inference
6.1.2 Defining Statistical Power
6.2 Factors Affecting Statistical Power
6.2.1 The Strengths and Limitations of Parametric Tests
6.2.2 The Benefits of Covariates
6.2.3 The Reliability of the Outcome Measure Matters
6.2.4 The Choice between One-Tailed and Two-Tailed Tests
6.3 What to Read Next
7 Experimental Research When Participants Are Clustered within Intact Groups
7.1 Using the Random-Intercepts Multilevel Model to Estimate Effect Size When Intact Groups of Participants Were Randomized to Experimental Conditions
7.2 Statistical Power When Intact Groups of Participants Were Randomized to Experimental Conditions
7.2.1 Statistical Power of the Cluster-Randomized Design and Intraclass Correlation
7.3 Using Fixed-Effects Multilevel Models to Estimate Effect Size When Intact Groups of Participants Are Randomized to Experimental Conditions
7.3.1 Specifying a "Fixed-Effects" Multilevel Model
7.3.2 Choosing between Random- and Fixed-Effects Specifications
7.4 What to Read Next
8 Using Natural Experiments to Provide "Arguably Exogenous" Treatment Variability
8.1 Natural- and Investigator-Designed Experiments: Similarities and Differences
8.2 Two Examples of Natural Experiments
8.2.1 The Vietnam Era Draft Lottery
8.2.2 The Impact of an Offer of Financial Aid for College
8.3 Sources of Natural Experiments
8.4 Choosing the Width of the Analytic Window
8.5 Threats to Validity in Natural Experiments with a Discontinuity Design
8.5.1 Accounting for the Relationship between the Forcing Variable and the Outcome in a Discontinuity Design
8.5.2 Actions by Participants Can Undermine Exogenous Assignment to Experimental Conditions in a Natural Experiment with a Discontinuity Design
8.6 What to Read Next
9 Estimating Causal Effects Using a Regression-Discontinuity Approach
9.1 Maimonides' Rule and the Impact of Class Size on Student Achievement
9.1.1 A Simple "First Difference" Analysis
9.1.2 A "Difference-in-Differences" Analysis
9.1.3 A Basic "Regression-Discontinuity" Analysis
9.1.4 Choosing an Appropriate "Window" or "Bandwidth"
9.2 Generalizing the Relationship between Outcome and Forcing Variable
9.2.1 Specification Checks Using Pseudo-Outcomes and Pseudo-Cutoffs
9.2.2 RD Designs and Statistical Power
9.3 Additional Threats to Validity in an RD Design
9.4 What to Read Next
10 Introducing Instrumental Variables Estimation
10.1 Introducing Instrumental Variables Estimation
10.1.1 Investigating the Relationship between an Outcome and a Potentially Endogenous Question Predictor Using OLS Regression Analysis
10.1.2 Instrumental Variables Estimation
10.2 Two Critical Assumptions That Underpin Instrumental Variables Estimation
10.3 Alternative Ways of Obtaining the IV Estimate
10.3.1 Obtaining an IV Estimate by the Method of Two-Stage Least Squares
10.3.2 Obtaining an IVE by Simultaneous Equations Estimation
10.4 Extensions of the Basic IVE Approach
10.4.1 Incorporating Exogenous Covariates into IV Estimation
10.4.2 Incorporating Multiple Instruments into the First-Stage Model
10.4.3 Examining the Impact of Interactions between the Endogenous Question Predictor and Exogenous Covariates in the Second-Stage Model
10.4.4 Choosing Appropriate Functional Forms for the Outcome/Predictor Relationships in the First- and Second-Stage Models
10.5 Finding and Defending Instruments
10.5.1 Proximity of Educational Institutions
10.5.2 Institutional Rules and Personal Characteristics
10.5.3 Deviations from Cohort Trends
10.5.4 The Search Continues
10.6 What to Read Next
11 Using IVE to Recover the Treatment Effect in a Quasi-Experiment
11.1 The Notion of a "Quasi-Experiment"
11.2 Using IVE to Estimate the Causal Impact of a Treatment in a Quasi-Experiment
11.3 Further Insight into the IVE (LATE) Estimate, in the Context of Quasi-Experimental Data
11.4 Using IVE to Resolve "Fuzziness" in a Regression-Discontinuity Design
11.5 What to Read Next
12 Dealing with Bias in Treatment Effects Estimated from Non-Experimental Data
12.1 Reducing Observed Bias by the Method of Stratification
12.1.1 Stratifying on a Single Covariate
12.1.2 Stratifying on Covariates
12.2 Reducing Observed Bias by Direct Control for Covariates Using Regression Analysis
12.3 Reducing Observed Bias Using a "Propensity Score" Approach
12.3.1 Estimation of the Treatment Effect by Stratifying on Propensity Scores
12.3.2 Estimation of the Treatment Effect by Matching on Propensity Scores
12.3.3 Estimation of the Treatment Effect by Weighting by the Inverse of the Propensity Scores
12.4 A Return to the Substantive Question
12.5 What to Read Next
13 Substantive Lessons from High-Quality Evaluations of Educational Interventions
13.1 Increasing School Enrollments
13.1.1 Reduce Commuting Time
13.1.2 Reduce Out-of-Pocket Educational Costs
13.1.3 Reduce Opportunity Costs
13.1.4 Does Increasing School Enrollment Necessarily Lead to Improved Long-Term Outcomes?
13.2 Improving School Quality
13.2.1 Provide More or Better Educational Inputs
13.2.1.1 Provide More Books
13.2.1.2 Teach Children in Smaller Classes
13.2.1.3 Recruit Skilled Teachers or Provide Training to Enhance Teachers' Effectiveness
13.2.2 Improve Incentives for Teachers
13.2.3 Improve Incentives for Students
13.2.4 Increase Families' Schooling Choices
13.3 Summing Up
14 Methodological Lessons from the Long Quest
14.1 Be Clear about Your Theory of Action
14.2 Learn about Culture, Rules, and Institutions in the Research Setting
14.3 Understand the Counterfactual
14.4 Worry about Selection Bias
14.5 Measure All Possible Important Outcomes
14.6 Be on the Lookout for Longer-Term Effects
14.7 Develop a Plan for Examining Impacts on Subgroups
14.8 Interpret Your Research Results Correctly
14.9 Pay Attention to Anomalous Results
14.10 Recognize That Good Research Always Raises New Questions
14.11 Final Words
"Policy discussions today routinely demand that proposals be evidence-based -- without really understanding that the reliability and validity of what passes as evidence varies widely. Murnane and Willett have done a remarkable job of helping both producers and consumers to understand what is good evidence and how it can be produced. Methods Matter explains lucidly how the causal impact of educational and social interventions can be estimated from quantitative data, using a panoply of innovative empirical approaches." --Eric A. Hanushek, Senior Fellow, Hoover Institution, Stanford University "Methods Matter is about research designs and statistical analyses for drawing valid and reliable causal inferences from data about real-world problems. The book's most telling feature is the wide range of education research examples that it uses to illustrate each point made. By presenting powerful research methods in the context of important research questions the authors are able to draw readers quickly and deeply into the material covered. New and experienced researchers from many fields will learn a lot from reading Methods Matter and will enjoy doing so."--Howard S. Bloom, Chief Social Scientist, MDRC "Richard J. Murnane and John B. Willett provide a broadly accessible account of causal inference in educational research. They consider basic principles- how to define causal effects, frame causal questions, and design experiments- while also gently introducing important topics that have previously been obscure to non-specialists: randomization by group, natural experiments, instrumental variables, regression discontinuity, and propensity scores. Using a wide range or examples, the authors teach their readers to identify and challenge key assumptions underlying claims about what works in education. This book will improve educational research by challenging researchers and policy-makers to think more rigorously about the evidence and assumptions underlying their work." 
-- Stephen W. Raudenbush, Lewis Sebring Distinguished Service Professor, Department of Sociology, University of Chicago "I strongly recommend Methods Matter to anyone who intends to conduct research on the causal impact of education programs and policies. Henceforth, a graduate course in education research methods that doesn't rely on it should be considered suspect. Methods Matter should also be essential reading for those who want to be critical consumers of advanced education research. Methods Matter very much, and so does this book. It is a very good book that signals a coming of age of the field." --Grover Whitehurst, Director, Brown Center on Education Policy, Brookings Institute "To be useful for development policy, educational research has to shed more light on how resources for education can produce more learning, more knowledge, more skills. In this book, Professors Richard Murnane and John Willett discuss a range of empirical methods for estimating causal relationships and review their applications in educational research. They translate complex statistical concepts into clear, accessible language and provide the kind of analytical guidance that a graduate student or young researcher might obtain only after years of experience with these methods. This volume is a very readable companion to any statistics textbook or statistical program on evaluation methods." --Elizabeth M. King, Director, Education, The World Bank
Selling point: A clear and concise guide for evaluating causal impacts of educational interventions
Selling point: Includes numerous useful examples of high-quality studies
Selling point: Helps readers produce better educational research
Selling point: For students and researchers in education and public policy, social science, economics, and educational psychology
Richard J. Murnane, Juliana W. and William Foss Thompson Professor of Education and Society at Harvard University, is an economist who focuses his research on the relationships between education and the economy, teacher labor markets, the determinants of children's achievement, and strategies for making schools more effective. John B. Willett, Charles William Eliot Professor of Education at Harvard University, is a quantitative methodologist who has devoted his career to improving the research design and data-analytic methods used in education and the social sciences, with a particular emphasis on the design of longitudinal research and the analysis of longitudinal data.

Product details

ISBN: 9780199753864
Published: 2010
Publisher: Oxford University Press Inc
Weight: 726 g
Height: 237 mm
Width: 163 mm
Depth: 26 mm
Age level: UP, 05
Language: English
Format: Hardcover
Pages: 416
