The Building Blocks of Demonstrating Impact

Monday, September 9, 2013
by Hui Wen Chan, Impact Analytics and Planning Officer, Citi Foundation
and Christine Rhee, Manager, Philanthropy, American Express Corporate Social Responsibility
Recently, Philanthropy New York convened funders for a conversation on the best ways to measure and demonstrate the impact of their organizations' charitable work. Lisa Frantzen, Senior Director of Performance Assessment at Changing Our World, presented five building blocks that she uses in her client work, and we provided real-life examples for each "building block." Philanthropy New York encouraged the 27 attendees to share their own experiences and lessons learned in demonstrating impact in a peer-learning conversation, wherever they fall on the spectrum from a simple, easily executed measurement program to an intensive measurement strategy.
The five building blocks discussed were:
  1. Gain Goal Clarity.
    What are the goals of your program(s)? All stakeholders of a program must first be clear and in agreement on program objectives in order to develop an effective measurement system.

    Some recommended practices to accomplish this include: using SMART Goals, strategic planning, shifting to investment thinking, and processes that ensure appropriate stakeholder involvement. In Christine’s experience, while this is the first and most obvious step, it can be one of the most difficult (and sometimes even emotional) pieces. People may carry personal assumptions about their work, or small differences of opinion on the funding themes may come to light during these conversations. Take the time to discuss, but ensure all stakeholders are in agreement and have firmly set goals for a strong foundation before moving forward.
  2. Decide What to Measure.
    To determine what to measure, consider these questions: Why do you want to measure and who is the audience for the information? What program(s) will you measure (a signature program, all grants, employee engagement program, etc.) and to what depth should your measurement go (outputs vs. outcomes)? Realistically, what you can measure also depends on the resources (time and money) that you can invest in measurement and impact, as well as the resources your grantees can invest.

    Hui emphasized that more is not always better. Reporting and data collection can be challenging, time-consuming, and expensive, and excessive requests for information from funders can place unnecessary burdens and costs on grantees. Her advice is to prioritize and collect only what is necessary to understand your core impacts and make informed strategic decisions, and to make sure you have the internal resources needed to analyze the data you collect and glean meaningful insights from it.

    Sometimes working backwards can help you determine what to measure. What would you like to be able to say about your grants and your impact at the end of the year? Then think about what you need to collect in order to know whether you have achieved that.

  3. Enable Strong Data Collection. 
    In Christine’s experience, this requires making impact measurement and communication an integrated part of every grantmaker’s responsibilities. The American Express Foundation incorporates data collection throughout every step of the grantmaking process. Nonprofits self-identify program goals on the grant application, and then report back on those goals in the final grant report. In this way, each program officer is responsible for working with their grantees to ensure program goals are aligned with the data being measured, collected, and reported. For grantees, it’s not necessarily about hiring an external evaluator to complete a scorecard, but about simply letting us know: “With limited resources, why is this a program you want to focus on? How will you know if you’re successful?”

    Unless you can afford to fund evaluations, you will be relying on self-reported information from grantees. There are ways to gain confidence about the data you receive. Look for internal or external benchmarks. For example, for a college success program, you can compare grantee-reported results against national or local data on college enrollment and completion. You can also compare results to prior data from your grantee and to similar programs that you also fund.

    You can also use data to help you home in on grants that need more attention: if a grant performed significantly above or below expectations, or diverged from similar grants, that is a sign you should spend more time understanding whether the grantee is being realistic about its impact, what worked, and what did not.

  4. Aggregate Multi-Program Results. 
    It’s important to communicate the breadth of the work that your team does, yet different programs may have very different measurements. Aggregating your metrics to tell your story in a cohesive way may include practices like requiring core metrics across grants, showcasing individual stories, and using dashboards.

    Hui explained that the Citi Foundation has established a short set of standardized metrics, focused on participants’ behavior change and outcomes, that it asks all of its direct service grantees to report on. This provides the Foundation with the information it needs and limits the burden on grantees, while making it easy to roll up information across its large portfolio of grants.

    Christine talked about a spectrum of measurement and the balance of working with grantees to allow flexibility while still enabling metrics aggregation. Depending on the funding focus, the American Express Foundation may allow greater diversity in what counts as a success metric, enabling nonprofits to propose more flexible and innovative programs for funding.

  5. Integrate Findings into Strategy & Communications. 
    Every piece of data that’s collected should have a use. Ensuring that you have methods of integrating findings from your data collection into your strategy and your communications will help you grow and develop thought leadership as a funder.

Overall, the conversations focused on how measuring and demonstrating impact is an iterative process. Hui noted that your measurement system and processes can and should evolve over time, as they have at the Citi Foundation; it is impossible to get everything right the first time. If your programs change, or you realize that what you’ve collected is not what you need, you can modify your process. Communications and training for grantees may be needed to ensure they understand what you want, particularly if things have changed.

Christine emphasized that the main goal of any measurement program is to learn and improve behaviors. Learning must be a key part of reporting and measurement: failure to achieve expected results is not necessarily a bad thing if grantees are learning and improving over time. Failure may also reflect your organization’s risk appetite. For example, if you are testing or piloting innovative initiatives, expect higher rates of failure. If senior management cannot accept this, take it into consideration in your grantmaking strategy.
Thanks to Lisa Frantzen for a terrific job moderating the conversation and breaking down the topic into the five building blocks. And thanks to PNY’s Assistant Director of Learning Services Beeta Jahedi and Learning Services Associate Crystal Ovalles for bringing us together for this peer learning session.