Experimental Evaluation Design for Program Improvement


Book Description

The concepts of cause and effect are critical to the field of program evaluation. Experimentally designed evaluations—those that randomize to treatment and control groups—offer a convincing means for establishing a causal connection between a program and its effects. Experimental Evaluation Design for Program Improvement considers a range of impact evaluation questions, particularly those questions that focus on the impact of specific aspects of a program. Laura R. Peck shows how a variety of experimental evaluation design options can provide answers to these questions, and she suggests opportunities for experiments to be applied in more varied settings and focused on program improvement efforts.




Research Handbook on Program Evaluation


Book Description

In the Research Handbook on Program Evaluation, an impressive range of authors take stock of the history and current standing of key issues and debates in the evaluation field. Surveying the current literature of program evaluation, the Research Handbook assesses the field's status in a post-pandemic and social justice-oriented world, examining today's theoretical and practical concerns and proposing how they might be resolved by future innovations. This title contains one or more Open Access chapters.




Evaluating Programs to Increase Student Achievement


Book Description

This updated edition on evaluating the effectiveness of school programs provides an expanded needs-assessment section, additional methods for data analysis, and tools for communicating program results.




Evaluating AIDS Prevention Programs


Book Description

With insightful discussion of program evaluation and the efforts of the Centers for Disease Control, this book presents a set of clear-cut recommendations to help ensure that the substantial resources devoted to the fight against AIDS will be used most effectively. This expanded edition of Evaluating AIDS Prevention Programs covers evaluation strategies and outcome measurements, including a realistic review of the factors that make evaluation of AIDS programs particularly difficult. Randomized field experiments are examined, focusing on the use of alternative treatments rather than placebo controls. The book also reviews nonexperimental techniques, including a critical examination of evaluation methods that are observational rather than experimental, a necessity when randomized experiments are infeasible.




Program Evaluation and Performance Measurement


Book Description

Program Evaluation and Performance Measurement: An Introduction to Practice, Second Edition offers an accessible, practical introduction to program evaluation and performance measurement for public and non-profit organizations, and has been extensively updated since the first edition. Using examples, it covers topics in a detailed fashion, making it a useful guide for students as well as practitioners who are participating in program evaluations or constructing and implementing performance measurement systems. Authors James C. McDavid, Irene Huse, and Laura R. L. Hawthorn guide readers through conducting quantitative and qualitative program evaluations, needs assessments, cost-benefit and cost-effectiveness analyses, as well as constructing, implementing and using performance measurement systems. The importance of professional judgment is highlighted throughout the book as an intrinsic feature of evaluation practice.




Small-Scale Evaluation


Book Description

How can evaluation be used most effectively, and what are the strengths and weaknesses of the various methods? Colin Robson provides guidance in a clear and uncluttered way. The issue of collaboration is examined step-by-step; stakeholder models are compared with techniques such as participatory evaluation and practitioner-centred action research; ethical and political considerations are placed in context; and the best ways of communicating findings are discussed. Each chapter is illustrated with helpful exercises to show the practical application of the issues covered, making this an invaluable introduction for anyone new to evaluation.




10-Step Evaluation for Training and Performance Improvement


Book Description

Written with a learning-by-doing approach in mind, Yonnie Chyung’s 10-Step Evaluation for Training and Performance Improvement gives students actionable instruction for identifying, planning and implementing a client-based program evaluation. The book introduces readers to multiple evaluation frameworks and uses problem-based learning to guide them through a 10-step evaluation process. As students read the chapters, they produce specific deliverables that culminate in a completed evaluation project.




Evidence Matters


Book Description

Researchers use a variety of tools to determine the impact and efficacy of education practices, including sample surveys, narrative studies, and exploratory research. However, randomized field trials, which are commonly used in other disciplines, are rarely employed to measure the impact of education practice. Evidence Matters explores the history and current status of research in education and encourages the more frequent use of such trials.