Experimental Evaluation Design for Program Improvement


Book Description

The concepts of cause and effect are critical to the field of program evaluation. Experimentally designed evaluations—those that randomize participants to treatment and control groups—offer a convincing means for establishing a causal connection between a program and its effects. Experimental Evaluation Design for Program Improvement considers a range of impact evaluation questions, particularly those that focus on the impact of specific aspects of a program. Laura R. Peck shows how a variety of experimental evaluation design options can provide answers to these questions, and she suggests opportunities for experiments to be applied in more varied settings and focused on program improvement efforts.




Evaluating Programs to Increase Student Achievement


Book Description

This updated edition on evaluating the effectiveness of school programs provides an expanded needs-assessment section, additional methods for data analysis, and tools for communicating program results.




Evaluating AIDS Prevention Programs


Book Description

With insightful discussion of program evaluation and the efforts of the Centers for Disease Control, this book presents a set of clear-cut recommendations to help ensure that the substantial resources devoted to the fight against AIDS will be used most effectively. This expanded edition of Evaluating AIDS Prevention Programs covers evaluation strategies and outcome measurements, including a realistic review of the factors that make evaluation of AIDS programs particularly difficult. Randomized field experiments are examined, focusing on the use of alternative treatments rather than placebo controls. The book also reviews nonexperimental techniques, including a critical examination of evaluation methods that are observational rather than experimental—a necessity when randomized experiments are infeasible.




Research Handbook on Program Evaluation


Book Description

In the Research Handbook on Program Evaluation, an impressive range of authors take stock of the history and current standing of key issues and debates in the evaluation field. Surveying the current literature of program evaluation, the Research Handbook assesses the field's status in a post-pandemic and social justice-oriented world, examining today's theoretical and practical concerns and proposing how they might be resolved by future innovations. This title contains one or more Open Access chapters.




Program Evaluation and Performance Measurement


Book Description

Program Evaluation and Performance Measurement: An Introduction to Practice, Second Edition offers an accessible, practical introduction to program evaluation and performance measurement for public and non-profit organizations, and has been extensively updated since the first edition. Using examples throughout, it covers each topic in detail, making it a useful guide for students as well as practitioners who are participating in program evaluations or constructing and implementing performance measurement systems. Authors James C. McDavid, Irene Huse, and Laura R. L. Hawthorn guide readers through conducting quantitative and qualitative program evaluations, needs assessments, cost-benefit and cost-effectiveness analyses, as well as constructing, implementing, and using performance measurement systems. The importance of professional judgment is highlighted throughout the book as an intrinsic feature of evaluation practice.




How to Design a Program Evaluation


Book Description

The objective of this book is to acquaint the reader with the ways in which evaluation results can be made more credible through careful choice of a design that prescribes when, and from whom, the data will be gathered. The book helps the reader choose a design, put it into operation, and analyze and report the data that have been gathered.




10-Step Evaluation for Training and Performance Improvement


Book Description

Written with a learning-by-doing approach in mind, Yonnie Chyung’s 10-Step Evaluation for Training and Performance Improvement gives students actionable instruction for identifying, planning and implementing a client-based program evaluation. The book introduces readers to multiple evaluation frameworks and uses problem-based learning to guide them through a 10-step evaluation process. As students read the chapters, they produce specific deliverables that culminate in a completed evaluation project.




Evidence Matters


Book Description

Researchers use a variety of tools to determine the impact and efficacy of education programs and practices, including sample surveys, narrative studies, and exploratory research. However, randomized field trials, which are commonly used in other disciplines, are rarely employed to measure the impact of education practice. Evidence Matters explores the history and current status of research in education and encourages the more frequent use of such trials.




Designing Educational Project and Program Evaluations


Book Description

Drawing upon experience with state- and local-level project evaluation, and based on current research in the professional literature, Payne presents a practical, systematic, and flexible approach to educational evaluations. Evaluators at all levels (state, local, and classroom) will find ideas useful in conducting, managing, and using evaluations. The book is aimed especially at state department of education personnel and local school system administrators, and it can be used by those conducting evaluation projects in the field or as a text for introductory graduate courses.

The book begins with an overview of the generic evaluation process. Chapter Two is devoted to the criteria for judging the effectiveness of evaluation practice, and Chapter Three addresses the all-important topic of evaluation goals and objectives. Chapters Four, Five, and Six are concerned with the approach, framework, or design of an evaluation study: Chapter Four discusses four major philosophical frameworks, or metaphors, and their implications for conducting an evaluation, while Chapters Five and Six describe predominantly quantitative and qualitative designs, respectively. Design, implementation, and operational issues related to instrumentation (Chapter Seven), management and decision making (Chapter Eight), and reporting and utilization of results (Chapter Nine) are addressed next. The final chapter (Chapter Ten) considers the evaluation of educational products and materials.