Author : Michael S. Trevisan
Publisher : SAGE Publications
Page : 282 pages
File Size : 36.98 MB
Release : 2014-08-06
Category : Social Science
ISBN : 1483324621
Book Description
Evaluability assessment (EA) can lead to the development of sound program theory, increased stakeholder involvement and empowerment, better understanding of program culture and context, enhanced collaboration and communication, process and findings use, and organizational learning and evaluation capacity building. Evaluability Assessment: Improving Evaluation Quality and Use, by Michael Trevisan and Tamara Walser, provides an up-to-date treatment of EA, clarifies what it is and how it can be used, demonstrates EA as an approach to evaluative inquiry with multidisciplinary and global appeal, and identifies and describes the purposes and benefits of using EA. Using case examples contributed by EA practitioners, the text illustrates important features of EA use and showcases how EA is applied in a variety of disciplines and evaluation contexts. The text is appropriate as an instructional text for graduate-level evaluation courses and training, and as a resource for evaluation practitioners, policymakers, funding agencies, and professional trainers.

"The most impressive aspect of this book is that it positions EA as an approach that perfectly fits within the current philosophical views on program evaluation… The authors do a great job connecting these theories to practice, and provide good guidelines." —Sebastian Galindo-Gonzalez, University of Florida

"This book is focused on one very important topic in the scope of program evaluation content. It establishes the foundation for a variety of applications: impact assessment, program development, and formative evaluation. This text provides new insights and methods for conducting evaluability assessment." —S. Kim MacGregor, Louisiana State University

"The book is written in a very readable style, and is well organized and referenced. I like the inclusion of case studies, guidelines for actually doing EA, and the extensive discussion of its alignment with other models of evaluation process." —Iris Smith, Emory University