by Magda Goemans, CFICE Evaluation and Analysis Working Group RA

As the new RA for the Evaluation and Analysis working group, I was recently tasked with developing a literature review on the subject of ‘community impact’ in community-campus engagement (CCE). Scholars have drawn attention to the need for greater focus on community impact in CCE (Mitchell, 2008; Chupp & Joseph, 2010). In addition, almost all of the literature I have reviewed to date notes that evaluations of CCE [most often relating to the impact of community service-learning (CSL) projects] have historically focused more on assessing impacts on students than on community partners (see, for example, Cruz & Giles, 2000; Basinger & Bartholomew, 2006; Miron & Moely, 2006; Sandy & Holland, 2006; Worrall, 2007).

There are many frames through which community impact may be perceived, and each requires an understanding of what is meant by ‘community’ in that context. The community that is impacted by a CCE project may consist of a community-based organization (CBO) (in some cases along with associated funders), or may also include a wider community (e.g. a marginalized population) that the CBO represents (Cruz & Giles, 2000; Sandy & Holland, 2006; Carpenter, 2011).

Impacts may be tangible or intangible (Sandy & Holland, 2006). Tangible impacts may include physical improvements to a community, increased availability of human resource hours to support CBO efforts, or extended networking connections fostered among community members (Pillard Reynolds, 2014; Srinivas et al., 2015). Intangible impacts of CCE, beyond general goals of advancing social justice, may include a strengthened sense of pride or empowerment for CBOs or within wider communities (Pillard Reynolds, 2014; Srinivas et al., 2015).

Impacts may be perceived as beneficial (or not) to the community, and particular outcomes may be unintentional, unanticipated, or uncertain (Marullo et al., 2003; Worrall, 2007). There may be outright negative outcomes for community partners in CCE projects, such as the drain on community partner resources that can result from supervising CSL students, or a CCE project may benefit students more than community partners (Basinger & Bartholomew, 2006). Impacts may evolve during the process of engagement, or may only become evident after active engagement between community and academic partners has ended (Worrall, 2007).

[Image: a row of smiling figures being examined under a magnifying glass that reveals one frowning figure amid the bunch. Caption: “Impacts may be perceived as beneficial (or not) to the community…”]

My initial review of the literature has also revealed several elements to consider when conducting an evaluation of community impact in CCE. One significant consideration is whose voices are consciously included in the evaluation, as scholars contend that adequate assessment ideally includes extended input and guidance from community partners. Pillard Reynolds (2014) notes: “an intentional focus on the community’s perspectives leads to a broader conceptualization of outcomes…and highlights more nuanced views of how communities perceive and understand outcomes in partnerships.”

Scholars also note that evaluation of community impact should address the specific contexts within which community partners work, and should correspond with CCE project goals “to redistribute power” in support of marginalized populations (Chupp & Joseph, 2010; see also Marullo et al., 2003). Academic and community partners may have very different impressions of community impact, as may members of the same community, and what academic partners recognize as valid or measurable outcomes may not be seen the same way by community partners. Perceptions of community impact may also change over time. This may require an iterative approach to evaluation that allows sufficient time for community reflection, and for modifications to the evaluation process to accommodate unintended impacts and revised community goals. It may also require that additional evaluation data be gathered, or that different data-gathering methods be employed (Basinger & Bartholomew, 2006; Sandy & Holland, 2006; Srinivas et al., 2015).

[Image: a mail truck parked on a hill with houses in the background, framed so the truck sits level and the houses appear slanted to the left. Caption: “It’s all about different perceptions of the same reality.”]

As part of her examination of literature related to nonprofit service-learning engagements, Carpenter (2011) cites several examples of evaluation tools that address community impact in CCE projects. She describes commonly employed data collection methods, including surveys and focus groups, that measure immediate and longer-term impacts for communities. Carpenter notes that researchers must consider the time and resources community partners have available to participate in evaluations of community impact, as well as the degree to which community partners understand the evaluation methods being employed and the criteria being considered (a point also made by Chupp & Joseph, 2010).

Carpenter also presents specific evaluation methods that include:

  • Clarke’s (2003) 3-I model (Initiator – Initiative – Impact), which measures the community impact of a CSL project from planning through implementation, places substantial emphasis on the CCE process in addition to outcomes, and was developed with input from a diverse group of academic and community partners; and
  • Gelmon’s (2003) evaluation work, which measures organizational benefits as well as economic and social impacts for community partners, and aims for an approach that will not be interpreted as a “performance review” of the CBO.

One additional example of an evaluation tool is Srinivas et al.’s (2015) Community Impact Scale, which was developed with the active involvement of community partners to assist communities in evaluating their own experiences and outcomes in CCE. The authors note that the tool was designed in part to avoid “imposing a framework that categorizes outcomes according to benefit and cost a priori”.

In general, my review of the literature so far has revealed the complexity of issues surrounding community impact in CCE, and the need for further exploration to more fully grasp how community impacts may be understood. The bulk of the research I have examined has been oriented primarily toward CSL-related experiences, and I’m curious about what additional insights may be gleaned from an examination of literature more focused on impacts coming out of other community-based research practices in CCE. I have learned that determining the community impact of CCE over time may be somewhat like aiming at a moving target, and that there may be a range of perspectives about impacts within communities that participate in these engagements. As my review of the literature continues, I look forward to learning more.

References

Basinger, N., & Bartholomew, K. (2006). Service-Learning in Nonprofit Organizations: Motivations, Expectations, and Outcomes. Michigan Journal of Community Service Learning, 12(2), 15-26.

Carpenter, H. (2011). How We Could Measure Community Impact of Nonprofit Graduate Students’ Service-Learning Projects: Lessons from the Literature. Journal of Public Affairs Education, 17(1), 115-131.

Chupp, M.G., & Joseph, M.L. (2010). Getting the Most Out of Service Learning: Maximizing Student, University and Community Impact. Journal of Community Practice, 18, 190-212.

Clarke, M.M. (2003). Finding the community in service-learning research: The 3-“I” model. In Billig, S.H., & Eyler, J. (Eds.), Deconstructing service-learning (pp. 3-21). Nashville: Information Age Publishing.

Cruz, N.I., & Giles, D.E., Jr. (2000). Where’s the Community in Service-Learning Research? Michigan Journal of Community Service Learning, 7(1), 28-34.

Gelmon, S.B. (2003). Assessment as a means of building service-learning partnerships. In B. Jacoby and Associates (Eds.), Building partnerships for service-learning (pp. 42-64). San Francisco: Wiley.

Marullo, S., Cooke, D., Willis, J., Rollins, A., Burke, J., Bonilla, P., & Waldref, V. (2003). Community-Based Research Assessments: Some Principles and Practices. Michigan Journal of Community Service Learning, 9(3), 57-67.

Miron, D., & Moely, B.E. (2006). Community Agency Voice and Benefit in Service-Learning. Michigan Journal of Community Service Learning, 12(2), 27-37.

Mitchell, T.D. (2008). Traditional vs. critical service-learning: Engaging the literature to differentiate two models. Michigan Journal of Community Service Learning, 14(2), 50-65.

Pillard Reynolds, N. (2014). What Counts as Outcomes? Community Perspectives of an Engineering Partnership. Michigan Journal of Community Service Learning, 21(1), 79-90.

Sandy, M., & Holland, B.A. (2006). Different Worlds and Common Ground: Community Partner Perspectives on Campus-Community Partnerships. Michigan Journal of Community Service Learning, 13(1), 30-43.

Schmidt, A., & Robby, M.A. (2002). What’s the Value of Service-Learning to the Community? Michigan Journal of Community Service Learning, 9(1), 27-33.

Srinivas, T., Meenan, C.E., Drogin, E., & DePrince, A.P. (2015). Development of the Community Impact Scale Measuring Community Organization Perceptions of Partnership Benefits and Costs. Michigan Journal of Community Service Learning, 21(2), 5-21.

Worrall, L. (2007). Asking the Community: A Case Study of Community Partner Perspectives. Michigan Journal of Community Service Learning, 14(1), 5-17.