A recurring theme in the last few months is the challenge and need for nonprofit and ministry organizations to find a decisive way to self-evaluate. By extension, Catalyst and other grantmakers need criteria to evaluate not only applications but also the outcomes of projects after they receive funding.
There are a few particular difficulties in this:
- Reluctance: Nonprofit leaders are largely engaged in work driven by compassion rather than efficiency, and they are often resistant to a "corporate" emphasis on numerical evaluation. In my years at camps I saw the anger prompted when a board member asked how many campers had made faith commitments during a particular program. Staff found the question offensive and felt it reduced relationships to something merely transactional and manipulative.
- Ambiguity: Social services and spiritual projects are notoriously difficult to quantify. How do you measure the benefits of a relationship? Rarely is there a single point of emphasis, and the people involved may each place different value and priority on the varied outcomes.
- Tools: Cultural change doesn't lend itself to a simple bar graph. There is a shortage of recognizable methods for identifying and communicating the kinds of outputs and outcomes we're interested in supporting.
- Objectivity: Nonprofit workers are almost always passionate about their work. (We wouldn't partner with any who aren't.) Their clientele are, for the most part, understandably grateful for what is being accomplished. It is a lot to ask of either group to provide a relatively unbiased perspective.
- Hope: As may be the case in other fields (though I suspect it is exaggerated in these ones), nonprofit and ministry people are optimists. They look for signs of life in even the most desperate situations. That optimism is a necessary prerequisite for much of what they do, but when applied to evaluation it clearly distorts the picture.
- Narrative: When numbers are hard to generate or interpret, we rely on stories. Funding for charities has always depended more on tugging the heartstrings than swaying the intellect. The traditional pitch of "a sob story and a slideshow" is deeply entrenched and typically effective. Anyone can come up with at least one compelling account of someone whose life is being bettered by their efforts.
Despite all of these impediments, efforts are being made widely to develop useful and relevant ways of measuring the results of nonprofits. As I wrote previously, Jim Collins has produced a monograph companion to Good to Great aimed specifically at the social sector, in which he argues for the necessity of establishing measurable standards of evaluation. It was also a very live discussion among the professionals I met at the recent PIGS conference.
Prior to the start of Catalyst I was involved in starting a new church in our community. We were under the authority and support of the church where I had been on staff for several years, and I reported to the leadership there. When, after more than a year, our new congregation wasn't growing significantly, the leadership began to question the wisdom of continuing. Of course I resisted. I could see the sparks of potential and the impact we were having on the few people who were involved. Ultimately the decision was made to close the new church. It was difficult for all involved (the new congregation, myself, and the leaders of the larger church), and was made all the more difficult because there was no standard of measure by which to evaluate what was happening.
I admit that I find the process of determining objective measurement criteria for matters of spirituality and social justice to be both daunting and dangerous. Obviously we don't want to reduce the efforts of our partners to spreadsheet entries. At the same time, I have become increasingly aware that with those criteria established and agreed upon there is a freedom to pursue a vision with greater confidence that you have defined your purpose and won't be dissuaded by the inevitable swings of energy and enthusiasm.
I am eager to work with our partners to figure out how to fairly and helpfully evaluate their honourable efforts.
1 comment:
thoughts on evaluation from the director of IDE Canada (by email)
Just read through some of your blog posts re: evaluation. Some good thoughts in there. Quick thoughts on your perceived barriers to measurement:
1. Reluctance: This is where non-profits must be challenged. Reluctance to take measurement seriously is often a warning sign that you are losing your way. Too many use “soft” social goals as a smokescreen for avoiding a hard look at our effectiveness. Our clients and customers generally don’t have that luxury.
2. Ambiguity: Well, yes and no... I believe that most important outcomes can be measured. A disciplined look generally reveals measurable indicators of success (don’t confuse measurable with count-able as qualitative measurement can be every bit as rigorous as quantitative methods).
3. Tools: I think this is key. We have sets of academic tools that are far too arcane and complex for normal organizations to use effectively (here are the keys to the helicopter... have fun!). We have sets of management indicators that often don’t get to the heart of our business. I have been interested for some time in developing tools that combine usability and simplicity with the right amount of rigour and precision. Tough but essential.
4. Objectivity: As opposed to all those cold, objective machines in the for-profit world? I’m not sure that non-profit managers are that much more passionate than good for-profit managers (or even bad ones, starry-eyed over the next perfect product). The for-profit guys know that if they don’t temper their enthusiasm with cold doses of reality from time to time, there won’t be any money in the bank. We owe it to our clients and customers to do the same.
5. Hope: I think this is a sub-set of 4 and not unique to non-profits. Many for-profit managers can see hope in the most dismal of P/L statements... to the chagrin of their shareholders.
6. Narrative: No measurement system is any good if it doesn’t tell a story. Stories are often very helpful ways of understanding outcomes in a deeper way. Good assessments use both narrative and numbers to tell a story that is both compelling and credible. I remember once working with an organization in Honduras to help them with the complexities of analyzing nutrition survey data for a CIDA report (post Hurricane Mitch). There was a lot of emphasis on the survey data and we spent a lot of time and effort getting it right. At the end, we threw in some text boxes with individuals’ stories. At a meeting later with the CIDA officer, she did not mention the survey results (they met the standard and were checked off the bureaucratic checklist) but spent the whole meeting animatedly discussing the stories in the text box, which offered so much more colour and insight.
Not sure exactly how relevant this is to the discussion, but I was just reflecting on the sacking of Avram Grant – interim manager for Chelsea. Chelsea lost to Spurs in the Carling Cup Final, came second in the Premiership (decided on the last day) and lost the Champions League Final to Man U on penalties (John Terry slipping on wet grass as he teed up what would have been the winning pen). All in all, a great season for a team that lost its manager close to the start of the season – two finals and a second-place finish in the most competitive league in the world. But the Chelsea manager is hired to win trophies, not to come second. It seems harsh to me that if Terry keeps his footing for that one penalty in the rain in Moscow, Grant likely keeps his job. That’s pretty hard-nosed results-based management. Abramovich – Chelsea’s owner – obviously feels this is necessary to reach the top. Is that good management practice? Doesn’t Grant deserve some credit? Is it just a mechanistic response – No Trophy/No Job? Pretty tough. I’m glad my board knows nothing about English football.