In a widely circulated 2013 essay, the philanthropist Bill Gates extolled the role that measurement plays in improving the human condition. He offered examples of ways in which measurement had improved the delivery of vital services worldwide. But he also offered a rueful observation. “This may seem basic,” he wrote, “but it is amazing how often it [measurement] is not done and how hard it is to get right.”
We wholeheartedly agree with Gates: measurement matters greatly, particularly in the nonprofit sector, and it is done too rarely. Indeed, in due diligence that we conducted for the Henry R. Kravis Prize in Nonprofit Leadership, more than 75 percent of the organizations that we reviewed did not have reliable impact data. Similarly, in the Stanford Survey on Leadership and Management in the Nonprofit Sector—a study that we conducted, which drew responses from more than 3,000 people—respondents cited “inadequate and/or unreliable measurement/evaluation of organizations’ impact and performance” as one of the top three challenges faced by the sector.
Impact measurement, or impact evaluation, is one component of the engine of impact that every nonprofit organization must build and tune to become truly effective. Nonprofit leaders who want their organization to achieve maximum impact must embrace the essentials of strategic leadership. We compare this kind of intentional leadership to a high-performance engine. Impact evaluation—the third essential part of this engine, after mission and strategy—helps an organization to know if its strategy is working and whether it is achieving its mission.
Challenging though impact evaluation may be, it’s hardly impossible. But an organization must have both the motive and the means to do it. Donors, philanthropists, and grant-making professionals can lead the way here by demanding that nonprofit grantees regularly evaluate the impact of their work. At the same time, these funding entities also need to pay for evaluation efforts. Too many of them are still reluctant to invest in impact measurement. Organizations like GuideStar, meanwhile, can play a vital role in underscoring the importance of impact evaluation and in providing tools that help donors evaluate a nonprofit’s efficacy.
In an ideal world, impact evaluation starts early. Indeed, in the best-case scenario, an organization will evaluate its programs from the moment it launches them, and it will use the results to guide its growth and development. But if you haven’t yet embraced impact evaluation, don’t despair. It’s never too late.
When nonprofits develop impact measures, they should make sure that those measures are quantifiable. Doing so may require an organization to translate qualitative factors into quantitative ones, but taking that step will strengthen its intervention. Randomized controlled trials, or RCTs, can be an especially beneficial tool not only for demonstrating impact but also for guiding strategic decision making. RCTs are not suited to every kind of intervention; they often aren’t feasible in public policy efforts, for instance. But they have emerged as the gold standard in some fields—development economics, for example—because they establish causality and permit a comparison between an intervention and a counterfactual case in which that intervention did not occur.
Youth Villages is an organization that has benefited greatly from conducting rigorous external evaluation of its programs. (In our book, Engine of Impact, we offer numerous examples of organizations that have used impact evaluation effectively.) The Memphis-based nonprofit provides support services to at-risk and foster youth and their families in 20 U.S. states. Currently, it is conducting an RCT of its Transitional Living program. According to early results from this evaluation, one year after participants completed the program, they had achieved average earnings that were 17 percent higher than the earnings of people in a control group. Housing stability was also higher among program participants than among the control group, and the study found improvements in some outcomes related to health and safety. However, the RCT did not reveal statistically significant improvements in outcomes related to education, social support, or criminal involvement. After learning of these early results, Youth Villages began using the data to improve its programs.
Impact evaluation has the power to unleash a dynamic feedback loop that can drive strategic thinking. It thus becomes part of a virtuous cycle in which an organization adapts its mission into a theory of change (which is a core aspect of strategy) and then uses evaluation to test and hone that theory of change. We exhort you to join the impact evaluation movement and promise that you, like us, will learn a great deal from it.
Originally published by GuideStar