Evaluating Practice


Book Description

Evaluating Practice comes with a free CD-ROM featuring numerous programs, including the unique and innovative SINGWIN program for analyzing single-system design data (created by Charles Auerbach, David Schnall, and Heidi Heft Laporte of Yeshiva University); the CASS and CAAP programs for managing cases and scoring scales (created by Walter Hudson); and a NEW set of Microsoft Excel Workbooks and interactive exercises. Book jacket.




Impact Evaluation in Practice, Second Edition


Book Description

The second edition of the Impact Evaluation in Practice handbook is a comprehensive and accessible introduction to impact evaluation for policy makers and development practitioners. First published in 2011, it has been used widely across the development and academic communities. The book incorporates real-world examples to present practical guidelines for designing and implementing impact evaluations. Readers will gain an understanding of impact evaluations and the best ways to use them to design evidence-based policies and programs.

The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation. The handbook is divided into four sections: Part One discusses what to evaluate and why; Part Two presents the main impact evaluation methods; Part Three addresses how to manage impact evaluations; Part Four reviews impact evaluation sampling and data collection. Case studies illustrate different applications of impact evaluations.

The book links to complementary instructional material available online, including an applied case as well as questions and answers. The updated second edition will be a valuable resource for the international development community, universities, and policy makers looking to build better evidence around what works in development.




Program Evaluation Theory and Practice


Book Description

This engaging text takes an evenhanded approach to major theoretical paradigms in evaluation and builds a bridge from them to evaluation practice. Featuring helpful checklists, procedural steps, provocative questions that invite readers to explore their own theoretical assumptions, and practical exercises, the book provides concrete guidance for conducting large- and small-scale evaluations. Numerous sample studies, many with reflective commentary from the evaluators, reveal the process through which an evaluator incorporates a paradigm into an actual research project. The book shows how theory informs methodological choices (the specifics of planning, implementing, and using evaluations). It offers balanced coverage of quantitative, qualitative, and mixed methods approaches. Useful pedagogical features include:

* Examples of large- and small-scale evaluations from multiple disciplines.
* Beginning-of-chapter reflection questions that set the stage for the material covered.
* "Extending your thinking" questions and practical activities that help readers apply particular theoretical paradigms in their own evaluation projects.
* Relevant Web links, including pathways to more details about sampling, data collection, and analysis.
* Boxes offering a closer look at key evaluation concepts and additional studies.
* Checklists for readers to determine if they have followed recommended practice.
* A companion website with resources for further learning.




A Social Worker's Guide to Evaluating Practice Outcomes


Book Description

"Thyer and Myers have written an easy-to-read primer on the topic of empirically evaluating the outcomes of social work practice. This resource, for social work students (graduate and undergraduate) and for social work practitioners, presents outcome studies using both group-research and single-case designs. Unlike other books dealing with the topic of evaluating practice, which use theoretical cases, Thyer and Myers use real-life examples of evaluating social work practice, ranging from those fairly low on the scale of internal validity to those that are quite rigorous. The book begins with a refresher on evaluation research, provides a balanced approach to both single-system and group-evaluation designs, and closes with a discussion of ethical issues, myths, misconceptions, and practical considerations in evaluation" --Back cover.




Evaluating in Practice


Book Description

Evaluation is not a self-contained phase of social work practice - one more dimension of the process - but a dimension of every phase. In this fully rewritten and updated second edition of his groundbreaking text Evaluating in Practice, Ian Shaw demonstrates how evaluation and inquiry are just as much practice tasks as planning, intervention and review. By demonstrating that good evaluating in practice helps sustain a commitment to evidence, understanding and justice, Shaw shows that for this to be achieved, evaluating in practice must permeate every aspect of social work. He:

1. Develops a framework for embedding evaluation and inquiry as a dimension of good practice in social work.
2. Demonstrates the central significance of a 'methodological practice' in social work that adapts, infuses, and translates social research methods as a dimension of the different aspects of social work, viz. assessment, planning, intervention, review and outcomes.
3. Facilitates good practice by exemplifying the argument through extensive worked examples and exercises.

This book has much to say about the demanding skills that are necessary to achieve this shaping of practice and is a must-read for any social work student or practitioner.




Evaluating Public Communication


Book Description

Evaluating Public Communication addresses the widely reported lack of rigorous outcome- and impact-oriented evaluation in advertising; public relations; corporate, government, political and organizational communication; and specialist fields such as health communication. This transdisciplinary analysis integrates research literature from each of these fields of practice, as well as interviews, content analysis and ethnography, to identify the latest models and approaches. Chapters feature:

• a review of 30 frameworks and models that inform processes for evaluation in communication, including the latest recommendations of industry bodies, evaluation councils and research institutes in several countries;
• recommendations for standards based on contemporary social science research and industry initiatives, such as the IPR Task Force on Standards and the Coalition for Public Relations Research Standards;
• an assessment of metrics that can inform evaluation, including digital and social media metrics, 10 informal research methods and over 30 formal research methods for evaluating public communication;
• evaluation of public communication campaigns and projects in 12 contemporary case studies.

Evaluating Public Communication provides clear guidance on theory and practice for students, researchers and professionals in PR, advertising and all fields of communication.




Evaluating e-Learning


Book Description

How can novice e-learning researchers and postgraduate learners develop rigorous plans to study the effectiveness of technology-enhanced learning environments? How can practitioners gather and portray evidence of the impact of e-learning? How can the average educator who teaches online, without experience in evaluating emerging technologies, build on what is successful and modify what is not? By unpacking the e-learning lifecycle and focusing on learning, not technology, Evaluating e-Learning attempts to resolve some of the complexity inherent in evaluating the effectiveness of e-learning. The book presents practical advice in the form of an evaluation framework and a scaffolded approach to an e-learning research study, using divide-and-conquer techniques to reduce complexity in both design and delivery. It adapts and builds on familiar research methodology to offer a robust and accessible approach that can ensure effective evaluation of a wide range of innovative initiatives, including those covered in other books in the Connecting with e-Learning series. Readers will find this jargon-free guide is a must-have resource that provides the proper tools for evaluating e-learning practices with ease.




Validity Assessment in Clinical Neuropsychological Practice


Book Description

Practical and comprehensive, this is the first book to focus on noncredible performance in clinical contexts. Experts in the field discuss the varied causes of invalidity, describe how to efficiently incorporate validity tests into clinical evaluations, and provide direction on how to proceed when noncredible responding is detected. Thoughtful, ethical guidance is given for offering patient feedback and writing effective reports. Population-specific chapters cover validity assessment with military personnel; children; and individuals with dementia, psychiatric disorders, mild traumatic brain injury, academic disability, and other concerns. The concluding chapter describes how to appropriately engage in legal proceedings if a clinical case becomes forensic. Case examples and sample reports enhance the book's utility.




Evaluating R&D Impacts: Methods and Practice


Book Description

A critical issue in research and development (R&D) management is the structure and use of evaluative efforts for R&D programs. The book introduces the different methods that may be used in R&D evaluation and then illustrates these methods by describing actual evaluations that put them into practice. The book is divided into two sections. The first section provides an introduction and details on several popular methodologies used in the evaluation of research and development activities. The second section focuses on evaluation in practice and comprises several chapters offering the perspectives of individuals in different types of organizations. The book concludes with an annotated bibliography of selected R&D evaluation literature, focusing on post-1985 publications.


