This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing. The book is organized around a seven-step model developed by the authors, which has been tested and refined in workshops and in practice. Vignettes and case studies—representing evaluations from a variety of geographic regions and sectors—demonstrate adaptive possibilities ranging from small projects with budgets of a few thousand dollars to large-scale, long-term evaluations of complex programs. The text incorporates quantitative, qualitative, and mixed-method designs, and this Second Edition reflects important developments in the field since the publication of the First Edition.
"This book represents a significant achievement. The authors have succeeded in creating a book that can be used in a wide variety of locations and by a large community of evaluation practitioners."—Michael D. Niles, Missouri Western State University
"This book is exceptional and unique in the way that it combines foundational knowledge from social sciences with theory and methods that are specific to evaluation."—Gary Miron, Western Michigan University
"The book represents a very good and timely contribution worth having on an evaluator's shelf, especially if you work in the international development arena."—Thomaz Chianca, independent evaluation consultant, Rio de Janeiro, Brazil
What's new in the Second Edition?
· A greater focus on responsible professional practice, codes of conduct, and the importance of ethical standards for all evaluations.
· Some new perspectives on the debate over the "best" evaluation designs. While experimental designs can address the important issue of selection bias, such statistical designs are vulnerable to a number of important limitations. These include weaknesses in process and contextual analysis, in collecting information on sensitive topics and from difficult-to-reach groups, and in adapting to changes in the evaluation design and implementation strategies. Experience also suggests that strong statistical designs can be applied in only a very small proportion of evaluations.
· There are many instances in which well-designed non-experimental designs will be the best option for assessing program outcomes, particularly for evaluating complex programs and even "simple" programs that involve complex processes of behavioral change.
· The importance of understanding the setting within which the evaluation is designed, implemented and used.
· Program theory as a central building block of most evaluation designs. The expanded discussion incorporates theory of change, contextual and process analysis, multi-level logic models, using competing theories and trajectory analysis.
· The range of evaluation design options has been considerably expanded, and case studies are included (Appendix 6) to illustrate how each of the 19 designs has been applied in the field.
· Greater emphasis is given to the benefits of mixed-method evaluation designs.
· A new chapter has been added on the evaluation of complicated and complex development interventions. Conventional pretest-posttest comparison group designs can rarely be applied to the increasing proportion of development assistance channeled through complex interventions, so a range of promising new approaches, still very much works in progress, are presented.
· Two new chapters on organizing and managing evaluations and on strengthening evaluation capacity. This includes a discussion of strategies for promoting the institutionalization of evaluation systems at the sector and national levels.
· The discussion of quality assurance and threats to validity has been expanded, and checklists and worksheets are included (Appendices 1-5) on how to assess the validity of quantitative, qualitative, and mixed-method designs.