Building in an Evaluation Component for Active Labor Market Programs


Book Description

The guide outlines the main evaluation challenges associated with ALMPs and shows how to obtain rigorous impact estimates using two leading evaluation approaches. The most credible and straightforward evaluation method is a randomized design, in which a group of potential participants is randomly divided into a treatment and a control group. Random assignment ensures that the two groups would have had similar experiences in the post-program period in the absence of the program intervention. The observed post-program difference therefore yields a reliable estimate of the program impact. The second approach is a difference-in-differences design that compares the change in outcomes between the participant group and a selected comparison group from before to after the completion of the program. In general, the outcomes of the comparison group may differ from the outcomes of the participant group even in the absence of the program intervention. If the difference observed prior to the program would have persisted in the absence of the program, however, then the change in the outcome gap between the two groups yields a reliable estimate of the program impact. The guide reviews the various steps in the design and implementation of ALMPs, and in the subsequent analysis of the program data, that will ensure a rigorous and informative impact evaluation using either of these two techniques. Keywords: active labor market programs; policy evaluation; randomized trials; difference in differences; average treatment effect on the treated; development effectiveness.
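As a minimal sketch of the two estimators described above (all group means below are hypothetical placeholders, not figures from the guide), the randomized design takes the post-program difference in means, while difference in differences takes the change in the gap between participants and a comparison group:

    # Hypothetical sketch of the two estimators; the numbers are
    # illustrative placeholders, not data from the guide.

    # Randomized design: treatment and control groups are comparable by
    # construction, so the post-program difference in mean outcomes
    # estimates the program impact.
    treat_post_mean = 0.62      # e.g. post-program employment rate, treatment group
    control_post_mean = 0.55    # e.g. post-program employment rate, control group
    impact_randomized = treat_post_mean - control_post_mean

    # Difference in differences: the pre-program gap between participants
    # and the comparison group is assumed to persist absent the program,
    # so the change in the gap estimates the impact on participants.
    treat_pre, treat_post = 0.40, 0.62
    comp_pre, comp_post = 0.48, 0.58
    impact_did = (treat_post - treat_pre) - (comp_post - comp_pre)

    print(f"Randomized impact estimate: {impact_randomized:.2f}")
    print(f"Difference-in-differences estimate: {impact_did:.2f}")

The sketch makes the key assumption of each design explicit: randomization guarantees comparability of the two groups, whereas difference in differences relies on the pre-program gap remaining stable in the absence of the intervention.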







Impact Evaluation


Book Description




Evaluating the Labor-market Effects of Social Programs


Book Description

"Jointly sponsored by the Industrial Relations Section and the Office of the Assistant Secretary for Policy, Evaluation and Research of the U.S. Department of Labor." Includes bibliographies.







Active Labor Market Policy Evaluations


Book Description

This paper presents a meta-analysis of recent microeconometric evaluations of active labor market policies. Our sample contains 199 separate "program estimates" (estimates of the impact of a particular program on a specific subgroup of participants) drawn from 97 studies conducted between 1995 and 2007. For about one-half of the sample we have both a short-term program estimate (for a one-year post-program horizon) and a medium- or long-term estimate (for a two- or three-year horizon). We categorize the estimated post-program impacts as significantly positive, insignificant, or significantly negative. By this criterion we find that job search assistance programs are more likely to yield positive impacts, whereas public sector employment programs are less likely to do so. Classroom and on-the-job training programs yield relatively positive impacts in the medium term, although in the short term these programs often have insignificant or negative impacts. We also find that the outcome variable used to measure program impact matters. In particular, studies based on registered unemployment are more likely to yield positive program impacts than those based on other outcomes (such as employment or earnings). On the other hand, neither the publication status of a study nor the use of a randomized design is related to the sign or significance of the corresponding program estimate. Finally, we use a subset of studies that focus on post-program employment to compare meta-analytic models for the "effect size" of a program estimate with models for the sign and significance of the estimated program effect. We find that the two approaches lead to very similar conclusions about the determinants of program impact.
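As a hedged illustration of the sign-and-significance coding described above (the helper function, the two-sided 5% threshold, and the example values are assumptions for illustration, not the paper's own code or data), each program estimate can be classified from its point estimate and standard error:

    # Hypothetical sketch: code a program estimate as significantly positive,
    # insignificant, or significantly negative at the two-sided 5% level.
    def classify_estimate(point_estimate: float, std_error: float) -> str:
        t_stat = point_estimate / std_error
        if t_stat > 1.96:
            return "significantly positive"
        if t_stat < -1.96:
            return "significantly negative"
        return "insignificant"

    # Illustrative placeholder values, not estimates from the meta-analysis.
    print(classify_estimate(0.08, 0.03))   # significantly positive
    print(classify_estimate(0.02, 0.04))   # insignificant

This three-way coding is what allows estimates measured on different outcome scales (employment, earnings, registered unemployment) to be compared within a single meta-analytic framework, alongside the effect-size models mentioned above.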










The Development Dimension: Strengthening Accountability in Aid for Trade


Book Description

This book looks at what the trade and development community needs to know about aid-for-trade results, what past evaluations of programmes and projects reveal about trade outcomes and impacts, and how the trade and development community could improve the performance of aid-for-trade interventions.




Impact Evaluation in Practice, Second Edition


Book Description

The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs. The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations. The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.