Measuring What Matters: Rethinking Impact Evaluation
In Canada’s charitable sector, “impact” has become a buzzword. It is embedded in grant applications, strategic plans, and funder conversations as a signal of seriousness and accountability. Driven by well-intentioned demands from donors and funders, organizations are increasingly pressured to prove the profound, long-term change they create. However, data from the Charity Insights Canada Project (CICP) reveal a significant gap between what funders expect and what organizations can realistically deliver.
Confusion in the Language of Evaluation
A significant hurdle is the inconsistent use of terminology. In evaluation practice, outputs, outcomes, and impact are distinct. Outputs describe the direct products of activities, such as “the number of workshops delivered”. Outcomes capture the short- to medium-term changes for participants, such as “improved knowledge or skills”. Impact, by contrast, refers to broader and longer-term systemic change, for example “reduction in unemployment rates”.
In practice, however, these distinctions are often blurred. Reviewing evaluation reports in Canada and elsewhere, Susan Phillips and Victoria Carlan found that nonprofits frequently equate “impact evaluation” with simpler measures such as outputs or outcomes. As one CICP panellist put it, “Not sure how to measure impact and outcomes other than collecting numbers of participants and some qualitative feedback from participants.” (CICP 2.10.38) It is therefore worth asking: are Canadian organizations evaluating impact, or counting outputs?
Findings from the CICP: Are Charities Truly Measuring “Impact”?
At its core, impact measurement or evaluation seeks to determine whether an intervention causes a particular outcome. It relies on counterfactual or quasi-experimental designs that compare what happened with what would have happened without the intervention in order to isolate causality.
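The counterfactual logic described above can be sketched in a few lines of code. This is a minimal illustrative example, not drawn from the CICP data: the program, groups, and all numbers are hypothetical. It contrasts a naive “impact” claim (reporting only the treated group’s outcome) with a counterfactual estimate (comparing the treated group to a similar comparison group that stands in for what would have happened without the intervention).

```python
# Illustrative sketch of counterfactual reasoning in impact evaluation.
# All names and numbers below are hypothetical, for demonstration only.

def mean(values):
    """Average of a list of numbers."""
    return sum(values) / len(values)

# Hypothetical post-program employment scores (0 to 1 scale)
treated = [0.62, 0.71, 0.58, 0.66, 0.69]  # program participants
control = [0.51, 0.55, 0.49, 0.53, 0.52]  # similar non-participants

# Naive claim: report only the treated group's outcome,
# which says nothing about what caused it
naive_outcome = mean(treated)

# Counterfactual estimate: the difference between the treated group
# and the comparison group approximates the program's causal effect
estimated_effect = mean(treated) - mean(control)

print(f"Treated mean outcome: {naive_outcome:.3f}")
print(f"Estimated effect vs. counterfactual: {estimated_effect:.3f}")
```

The point of the sketch is the subtraction in the last step: without a credible comparison group, an organization can report outcomes but cannot attribute them to its intervention, which is exactly the capacity gap the CICP findings below describe.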
CICP findings show that most evaluation in the sector focuses on monitoring and learning, not causal proof. While 81% of charities say they measure impact in some form (CICP 2.10.38), only 8% conduct evaluations using counterfactual methods, the methodological standard for establishing causation (CICP 1.11.44). Most charities rely on self-designed questionnaires (50%) and recorded outputs or outcomes (around 50%), rather than externally validated tools from government (10%) or researchers (8%) (CICP 2.10.38).
These patterns reveal that charities are indeed evaluating, but primarily to learn, improve, and remain accountable to their missions, not to prove causality. Similar observations have been reported in earlier sector studies, including the 2003 VSERP report on evaluation in the voluntary sector and Imagine Canada’s 2018 State of Evaluation report.
The question, then, is whether the push for “impact” has outpaced the sector’s actual capacity, and even its purpose.
A Cautionary Tale from the “Impact Revolution” in the US and the UK
Experiences in the United States and the United Kingdom offer a cautionary lesson. In the 2000s, charities were encouraged to measure and publish their own impact. Reflecting later on this “impact revolution,” philanthropy advisors Caroline Fiennes and Ken Berger acknowledged that the practice backfired because the system was stacked against producing reliable evidence.
Fiennes argues that expecting charities to conduct rigorous impact evaluations is often unrealistic. Organizations may feel pressure to present flattering results, or what Ben McNamee from DARO calls “vanity metrics.” Many charities also lack the methodological expertise, financial resources, and sample sizes needed for rigorous evaluation. In short, charities are experts in delivering services, not necessarily in conducting causal social science research.
Rethinking What Counts as Evaluation
If rigorous experimental impact evaluations are not feasible for most charities, what does meaningful evaluation look like instead?
Research by Phillips and Carlan suggests that evaluation may be more productive when centered on learning, reflection, and adaptation. Several approaches reflect this shift. Developmental and participatory evaluation, for example, support innovation by helping organizations adapt their programs as conditions evolve, and they involve staff, communities, and sometimes funders in defining what success looks like. Collective approaches, where funders and grantees jointly design learning frameworks, can better align expectations with the realities of organizational capacity. Utilization-focused evaluation, meanwhile, prioritizes the practical use of findings so organizations can adjust programs and make decisions in real time.
Concluding remarks
The bottom line is that evaluation should ultimately be about learning, not just compliance. For charities, this means listening closely to clients, making better use of their own data, and drawing on high-quality external research rather than trying to produce impact studies on their own. But meaningful evaluation also requires resources. Funders therefore have a critical role to play, not only by investing in independent evaluations that can serve as public goods, but also by supporting the learning-oriented evaluation activities that charities undertake as part of their everyday work.
Want to receive our blog posts directly to your email? Sign up for our newsletter at the following link, and follow us on social media for regular project updates:
- Newsletter sign up: https://confirmsubscription.com/h/t/3D0A2E268835E2F4
- LinkedIn: https://www.linkedin.com/company/cicp-pcpob/
- Socials: @CICP_PCPOB