Evaluation and Improvement

Compendium of Measures of Quality

Quality in Early Childhood Care and Education Settings: A Compendium of Measures (2nd ed.) (Mirjam Neunning, Debra Weinstein, Tamara Halle, Laurie Martin, Kathryn Tout, Laura Wandner, Jessica Vick Whittaker, Heather See, Meagan McSwiggan, Megan Fletcher, Juli Sherman, Elizabeth Hair, and Mary Burkhauser, 2010).
This compendium was prepared by Child Trends for the Office of Planning, Research and Evaluation of the Administration for Children and Families to provide uniform information about quality measures and a consistent framework with which to review existing measures of the quality of early care and education settings.

Data and Data Systems Resources

“About CLASP” (Center for Law and Social Policy, n.d.).
The CLASP DataFinder is a custom, easy-to-use tool developed to provide select demographic information as well as administrative data on programs that affect low-income people and families.

“Frequently Asked Questions on the Statewide Longitudinal Data Systems (SLDS) Grant Program” (National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education, n.d.).
This website provides information and resources on the Statewide Longitudinal Data Systems (SLDS) Grant Program, which helps states make better decisions through improved data and information. Through grants and a growing range of services and resources, the program supports the successful design, development, implementation, and expansion of K–12 and P–20W (prekindergarten through workforce) longitudinal data systems.

KIDS COUNT Data Center (Annie E. Casey Foundation, n.d.).
A project of the Annie E. Casey Foundation, KIDS COUNT is the premier source for data on child and family well-being in the United States. Users can access hundreds of indicators, download data, and create reports and graphics that support smart decisions about children and families.

“Using Qualitative Data in Program Evaluation: Telling the Story of a Prevention Program” (FRIENDS National Resource Center for Community-Based Child Abuse Prevention, 2009).
This guide was developed for program administrators, managers, direct-service practitioners, and others expanding and enhancing current and future evaluation efforts using qualitative methods.

Outcome-Based Evaluation Tools

“Evaluation Toolkit” (FRIENDS National Center for Community-Based Child Abuse Prevention, n.d.).
The FRIENDS Evaluation Toolkit is a resource for developing an individualized outcome evaluation plan from the ground up. It is an online compendium of information and resources. The toolkit is not intended to take the place of hands-on training or technical assistance; rather, it is intended to serve as an entry-level guide for programs to help build evaluation capacity.

ORS Impact (ORS Impact, n.d.).
Since 1989, ORS Impact has delivered outcome-based knowledge, understanding, and application to public and private organizations, helping them pursue the change they seek and improve their communities’ health, well-being, and prospects to flourish. Through this website, ORS Impact shares these resources to build capacity for evaluation and for outcome-based thinking and acting in organizations doing good work around the world.

W.K. Kellogg Foundation Logic Model Development Guide (W.K. Kellogg Foundation, updated 2004).
This guide focuses on the development and use of the program logic model. Logic models and their processes facilitate thinking, planning, and communication about program objectives and actual accomplishments. Through this guide, the W.K. Kellogg Foundation provides an orientation to the underlying principles and language of the program logic model so that it can be effectively used in program planning, implementation, and dissemination of results. The premise behind this guide is simple: good evaluation reflects clear thinking and responsible program management.

Resources for Evaluating Systems Initiatives and Complexity

A Framework for Evaluating Systems Initiatives (Julia Coffman, 2007).
This paper introduces a framework to help advance the discussion about evaluating systems initiatives. The framework helps clarify what complex systems initiatives are doing and aiming to accomplish and thereby supports both initiative theory-of-change development and evaluation planning. Because this paper grew out of a symposium focused on early childhood, concepts presented throughout are illustrated with examples from that field. The framework and ideas presented also apply, however, to systems initiatives in other fields.

“An Introduction to Context and Its Role in Evaluation Practice” (Jody L. Fitzpatrick, 2012).
This publication reviews the evaluation literature on context and discusses the two areas in which context has been most carefully considered by evaluators: (1) the culture of program participants when their culture differs from the predominant one, and (2) the cultural norms of program participants in countries outside the West. We have learned much—and should continue learning—about how the culture of participants or communities can affect evaluation. Evaluators also need to expand their consideration of context to include the program itself and its setting as well as the political norms of audiences, decision makers, and other stakeholders of the program.

“Putting the System Back into Systems Change: A Framework for Understanding and Changing Organizational and Community Systems” (Pennie G. Foster-Fishman, Branda Nowell, and Huilan Yang, 2007).
This paper provides one framework—grounded in systems thinking and change literatures—for understanding and identifying fundamental system parts and interdependencies that can help explain system functioning and leverage systems change. The proposed framework highlights the importance of attending to the deep and apparent structures within a system as well as interactions and interdependencies among system parts. This includes attending to the value of engaging critical stakeholders in problem definition, boundary construction, and systems analysis.

The “Most Significant Change” (MSC) Technique: A Guide to Its Use (Rick Davies and Jess Dart, 2005).
This publication is aimed at organizations, community groups, students, and academics who wish to use the MSC technique to help monitor and evaluate their social-change programs and projects or to learn more about how it can be used. The technique is applicable in many different sectors, including education and health, and in many different cultural contexts. MSC has been used by a range of organizations in diverse communities and countries.

“Unique Methods in Advocacy Evaluation” (Julia Coffman and Ehren Reed, 2009).
There are systematic approaches for gathering qualitative and quantitative data that can be used to determine whether a program or strategy is making progress or achieving its intended results. Evaluations draw on a familiar list of traditional data collection methods, such as surveys, interviews, focus groups, or polling. However, some early childhood programs, policies, and initiative processes can be complex, fast-paced, and dynamic, which can make data collection a challenge. This brief describes four new methods that were developed to respond to unique measurement challenges in the early childhood field.