- We (nonprofits) have to do more with less, and need to justify additional funding;
- Our funders, operating in a scarce resource environment, have a right to demand metrics of effectiveness;
- Our successful work is inextricably tied to social and economic impact in the US.
By Susan Davis, Executive Director, Improve International
You can lead a horse to water but you cannot make it drink. You can do an evaluation but you cannot make us think.
The good news is there is a proliferation of evaluation databases. Donors like the US Agency for International Development (USAID) and the Norwegian Agency for Development Cooperation (NORAD) and implementing organizations like CARE and Catholic Relief Services (CRS) are publishing evaluation reports online. This gets them off people’s desks and into the world. (Update: another handy link to multilateral development bank evaluation groups and their reports is here.)
But is anyone learning from them? Evaluations and other reports from many years ago show, for example:
- It has become overwhelmingly clear from both research and field observations (Warford and Saunders, 1976; Elmendorf, 1978; Burton, 1979) that the main obstacle in the use and maintenance of improved water and sanitation systems is not the quality of technology, but the failure “in qualified human resources and in management and organization techniques, including a failure to capture community interest” (Nieves, 1980). An appalling 35 to 50% of systems in developing countries become inoperable after five years (Imboden, 1977; Warford and Saunders, 1976; White et al, 1972) (from USAID, 1981).
- National and central institutions are beginning to recognize that for community management to achieve its promise, long-term nurturing and support will be needed. Water supply and sanitation systems have costs and responsibilities that must be met, whether the systems are operated by local or central authorities (USAID 1992).
Yet, decades later, there are still many organizations that tell their donors that $25 or so can save a life (usually referring simply to the costs of building a water system). And many current budgets and implementation plans still focus on short-term programs for access to water with no plans for long-term monitoring or support when things break (whether by government or local institutions or the organization). And current evaluations show the same problems.
So what will it take to get the horse to drink? Donors typically like to fund tangible things, so time for learning often isn’t considered “paid work.” Furthermore, we are dealing with a big issue – millions or billions still without access. Who has time to slog through one or more 30-page documents? Even those who do might not have the power to change the way their organizations do business or raise funds.
It seems that in addition to empowering practitioners to learn, educating donors, fundraisers, and the executives at implementing organizations must be part of the solution. One way is to make the evaluations more accessible and digestible. WASHfunders.org is beginning to assemble one-page summaries of key WASH reports and evaluations in its Knowledge Center. Improve International is investigating ways to digest the information even further: at the organization level and by theme.
Another method to encourage learning is to get donors together. For example, the SustainableWASH folks are planning a March 12 donor gathering in DC where donors can share challenges and solutions. WASHfunders.org also has a Funders’ Forum and a Funder Toolkit.
Or perhaps we should think more of “horses for courses,” as the Brits say. Maybe formal, expensive evaluations by outsiders are not useful. What if we engaged the customer communities (participatory or empowerment evaluation), practitioner peers, and donors in evaluation? Imagine the learning! This is actually what the Accountability Forum is attempting to do. After the pilot in Honduras, COCEPRADIL (the local organization that was evaluated) is addressing the recommendations and asking for new types of funding. At least one horse is drinking!
By Susan Davis, Executive Director, Improve International
I have always grown from my problems and challenges, from the things that don’t work out, that’s when I’ve really learned. - Carol Burnett
I founded Improve International to help address the underlying causes that lead to the high failure rates for water systems and toilets in developing countries. A big part of this should be charitable organizations and donors learning from past work, and evaluations of programs are one tool for this. However, people need to be able to find the evaluation reports for them to be useful. I’ve assembled a list of organizations with independent evaluations as a start, but I’m more excited about the growing Knowledge Center on WASHfunders.org. The Foundation Center built and manages WASHfunders.org; their expertise in knowledge management makes this a good home for evaluation reports and others related to water and sanitation programs. You can search by keyword or focus area. Improve International has been working with them to identify and summarize reports, and there are many more to come.
Go forth and learn!
By Susan Davis, Executive Director, Improve International
Last week I attended the 1st Pan-Asia-Africa Learn M&E Conference in Bangkok. (M&E = monitoring and evaluation; in this case, for international development programs.) The conference was well organized and there was a good mix of implementing organizations, academics, donors, and a few software folks. I liked the size of the conference because all of us were able to attend all the sessions – no rushing around to other rooms to find particular topics of interest.
Marla Smith-Nilson and I presented on the Accountability Forum / WASH Sustainability Rating. We were surprised to find that we were among the very few people talking about 1) independent evaluations and 2) doing evaluations years after program completion.
Other presenters talked about a dizzying array of acronyms, methods and buzzwords – results based monitoring, outcome mapping, data flow diagrams, causality, value for money. I learned a great deal about M&E. I also learned that for most organizations, the M&E ends when the program ends. Because that’s when the funding ends.
I just don’t think you can measure the true results / outcomes / causality / value of any program while you are doing the program. Based on comments and questions from others at the conference, many of them understood this and were frustrated by the resource limitations and lack of donor interest.
I also sat on a panel where we discussed our thoughts on the statement “Monitoring and evaluation is preoccupied with reporting to donors rather than ensuring projects make a valuable contribution to host communities.” Most of the panelists (except for the CIDA representative) agreed that yes, monitoring and evaluation is mostly about reporting. One panelist suggested that organizations could do a better job explaining to donors how monitoring and evaluation are integral to success and learning. I suggested that we have a huge rich database of past international development projects to review. Rather than trying to guess at what indicators might predict success, we could analyze projects that have led to lasting results and identify what factors contributed to that. This would save us all some time in program design and monitoring.
Many would ask, who would do that? Who would pay for it? I would ask instead, if we really are trying to improve the lives of poor people, how can we afford not to do it?
Links to all of the presentations, including ours, can be found here.
By Susan Davis, Executive Director, Improve International
Last week’s Water & Health conference at the University of North Carolina – Chapel Hill was a good one. It’s always great to see old friends from the water sector and make new ones. Here are brief descriptions of the various things I was helping to promote:
The WASH Monitoring Exchange (WASHME) is a simple online platform for organizations that implement rural water systems to share their functionality data (and see data from other organizations). We got commitments from several organizations to share their data and currently there are about 3000 water points uploaded. Join the movement and share your data!
WASH Advocates, Global Water Challenge, and IRC presented a self-assessment tool for Sustainable WASH that is the next step after endorsing the Charter. While it is still under development, you can test it here.
The Millennium Water Alliance (MWA) Collective Impact Report will soon be ready for external distribution. Susan Dundon of MWA and I presented a poster about the findings.
Marla Smith-Nilson of Water 1st and I presented this poster about the need for a WASH Sustainability Rating & plans for the future.
Emma Bones, Lily Ponitz, and Allie George (left to right), my summer interns from Georgia Tech, also presented a poster with the results of their research in Nicaragua. They have developed a useful rubric for comparing mobile monitoring and mapping tools and they had lots of people interested.
By Susan Davis, Executive Director, Improve International
One of the themes of this year’s Water and Health Conference is monitoring and evaluation. Below I’ve listed side events and presentations that are related to monitoring, evaluation, indicators, and learning (at least what I can tell from the titles). Improve International is proud to be collaborating with several organizations at this conference (see events in blue). I look forward to seeing you there!
For the overall conference program click here.
Monday October 29
8:30 am – 10:15 am – Planning & Monitoring for Greater Sustainability: Rural and Urban Perspectives on Sustainable Solutions for Improved WASH Services. Convened by Rotary International, Aguaconsult, Building Partnerships for Development, Water and Sanitation for the Urban Poor, USAID WASHplus Project, and Ideo.org
10:45 am – 12:15 pm – Establishing Common Indicators for WASH. Convened by Improve International, Water For People, A Child’s Right, Blue Planet Network, Water 1st and IRC
1:15 pm – 5:00 pm – Building Blocks for WASH: How Well are We Addressing Sustainability? Convened by WASH Advocates, GWC, Aguaconsult, IRC, and Improve International
1:15 pm – 3:00 pm – Monitoring Equity and Pro-Poor Performance – Inputs, Outputs, and Outcomes. Convened by SHARE, UNICEF, WHO, WSSCC and UNC Water Institute
3:30 pm – 5:00 pm – Indicators for the Human Right to Water. Convened by UNC Water Institute
Tuesday October 30
8:30 am – 10:15 am – Evidence Based Decision Making for WASH Investments. Convened by Clarissa Brocklehurst
10:45 am – 12:15 pm – After the MDGs: Post-2015 Indicators and Monitoring for Urban WASH. Convened by UNC Water Institute and UNICEF
2:30 pm – 3:30 pm [WASH in Schools track] Impact of a School-Based Water Supply and Treatment, Hygiene, and Sanitation Program on Pupil Diarrhea: A Cluster Randomized Trial. Matthew Freeman, Emory University; Impact of a School-Based Hygiene Promotion and Sanitation Intervention on Pupil Hand Contamination in Western Kenya: A Cluster Randomized Trial. Leslie Green, Emory University
2:30 pm – 3:30 pm [Ecosystem Protection & Drinking Water Safety] Examples of Cross-Program Approaches for Sharing Drinking Water Indicators, Danette Boezio, RTI International
4:00 pm – 5:00 pm [WASH and Child Health] Impact of a Combined Water, Sanitation and Hygiene Intervention on Environmental Fecal Contamination and Child Parasite Infections in Western Kenya, Amy Pickering, Stanford University
5:00 pm – 6:00 pm [Poster presentation] Collective Impact: Do WASH partnerships create more impact than organizations working independently? And how do you measure that? Susan Dundon, Millennium Water Alliance and Susan Davis, Improve International
Wednesday October 31
9:45 am – 10:45 am [WASH and Child Health] Randomized, Controlled Intervention Trial of a Village Level Intervention to Promote Handwashing with Soap in Rural Indian Households, Adam Biran, LSHTM; Assessing Water Filtration and Safe Storage in Households with Young Children of HIV-Positive Mothers: a Randomized, Controlled Trial in Zambia, Rachel Peletz, LSHTM
11:15 am – 12:15 pm [Beyond 2015: Realizing Universal Access and Human Rights] – Best Practice in Hygiene Promotion Programmes: an Evaluation Template to Determine the Cost-Effectiveness of Different Strategies, Dr. Juliet Waterkeyn, Africa AHEAD; A Global Review of Capacity Building Organizations in Water and Sanitation for Developing Countries, Melinda Foran, CAWST
11:15 am – 12:15 pm [Sanitation Achievable and Sustainable to All] Special Challenges in Designing a Health Impact Evaluation of Rural Sanitation: A Cluster-Randomized Trial in Orissa, India, Thomas Clasen, JD, PhD, LSHTM; Sustainability of a School Water, Sanitation and Hygiene Intervention Two Years Following Implementation in Nyanza Province, Kenya, Richard Rheingans, Emory University
1:15 pm – 2:15 pm [Southeastern US Water Challenges] Assessing Contaminant Intrusion in the City of Atlanta Distribution System Using an Automated Monitoring and Sampling Device, Dr. Ethell Vereen, Jr., Emory University
1:15 pm – 2:15 pm [M&E for Sustainability] Lessons Learned Using a Mobile Data Application for Monitoring in a Household Water Program Employing Biosand Filters, Ray Cantwell, Samaritan’s Purse; Sustainability Check: A Rural Water Supply and Sanitation Programme Monitoring Tool, Kristen Downs, UNC Chapel Hill
2:45 pm – 3:45 pm [Governance] Governance as a Predictive Indicator for Water Point Sustainability: Results from Ethiopia and Mozambique, Peter Lochery, CARE
2:45 pm – 3:45 pm [Household Centered WASH] Sustainability and Scalability of Ceramic Water Filters in Households with Inadequate Piped Water: Evidence from Honduras, Dr. Georgia Kayser, UNC Chapel Hill
2:45 pm – 3:45 pm [Water Supply & Water Quality] Development and Evaluation of Behavior Change Campaigns to Increase Fluoride-free Water Consumption: 3 Field Studies in Rural Ethiopia, Alexandra Huber, EAWAG; Resilient Infrastructure for Water Treatment: A Comparative Evaluation of AguaClara Plants and Package Plants in Honduras, Victoria King, Cornell University
5:00 pm – 6:00 pm [Poster presentation] A novel way to promote accountability in WASH: the first Water & Sanitation Accountability Forum and plans for the future, Marla Smith-Nilson, Water 1st and Susan Davis, Improve International
5:00 pm – 6:00 pm [Poster presentation] Mapping the way: testing methods to map water points in developing countries, Lily Ponitz, Allie George, Emma Bones, Georgia Institute of Technology [my summer interns]
Thursday November 1
9:45 am – 10:45 am [Making Sanitation Benefits Achievable and Sustainable for All] An M&E System for Measuring Compliance of Rural Water and Sanitation Projects in South Africa with National Policy, Design Standards, and Norms, L.C. Duncker, CSIR; Evaluation of Strengthening the Enabling Environment for Large Scale Sustainable Rural Sanitation Programs, Eduardo Perez, WSP
9:45 am – 10:45 am [M&E for Sustainability] Monitoring & Evaluation for Sustainable Scale-up: the Case of Dispensers for Safe Water, Jeremy Hand, Innovations for Poverty Action; Sustainability of Water, Sanitation, and Hygiene Interventions in Communities in Central America, Rick Gelting, CDC
11:15 am – 12:15 pm [Making Sanitation Benefits Achievable and Sustainable for All] Community Led Total Sanitation: A Comprehensive Review of the Approach, Its Effectiveness and the Role of Key Internal Actors, Marissa Streyle, UNC-Chapel Hill; Factors Associated with Achieving and Sustaining Open Defecation Free Communities: Learning from East Java, Nilanjana Mukherjee, WSP
11:15 am – 12:15 pm [M&E for Sustainability] Remotely Accessible In-Situ Instrumentation to Improve Accountability in Public Health Interventions, Evan Thomas, Portland State University
Friday November 2
8:30 am -12:15 pm – Measuring Hygiene Behavior Change – A Decade of Community Health Club Case Studies. Convened by Africa AHEAD
8:30 am – 10:15 am – Water Quality and Emerging Contaminants: How to Assess, Improve, and Inform through Measurements. Convened by National Institute of Standards and Technology
10:45 am – 12:15 pm – Networking Session and Forum for mWASH Implementers. Convened by Pacific Institute
By Susan Davis, Executive Director, Improve International
In June 2012, we published a blog post about assembling a list of WASH organizations and identifying which had independent evaluations. We are working with WASHfunders.org to incorporate this information, but in the meantime, based on popular demand, we’re sharing the list of organizations with independent evaluations (out of the approximately 550 WASH organizations identified). We understand this list is not complete or comprehensive. Rather, it is a list of evaluations that we assembled using the tools that a regular donor would have – web searches and word of mouth.
Let us hear from you – has your organization had an independent evaluation? Have you done an independent evaluation? Please send an email or include the link in the comments section below.
This is a guest blog by Vanitha Sivarajan. Her background includes over 10 years of biodiversity conservation and participatory water resource management with local communities, non-governmental organizations, governmental agencies, and the private sector. She has worked both domestically and internationally, with a focus on Latin America and India. Currently Vanitha is an Environmental Consultant who provides NGO clients in the water and climate sectors with a variety of programmatic services.
Improve International is working to promote accountability from groups who build water and sanitation systems in developing countries. One way for those groups to demonstrate accountability (and to learn from their successes and mistakes) is to have an independent evaluation done. Like an independent financial audit, independent program evaluations are an objective way to determine the ability of a water and sanitation organization to successfully deliver safe water and convenient toilets for their constituents. The difference is that rather than looking through the financial files, a program evaluation requires experts to visit all or a sample of the assisted communities.
Out of the 500+ water organizations identified, we could only find 37 independent evaluations by web searches.
This information can be useful for donors, supporters, the communities they are serving, the governments of the communities they are serving, the general public, etc.
I offered to help Improve International find out how many water and sanitation organizations have had independent evaluations. Improve International challenged me to first develop a list of at least 100 water and sanitation organizations, and to find at least 20 independent evaluations. First, I wanted to make sure I knew what I was looking for.
What is an independent evaluation?
According to the World Bank, the definition of an independent evaluation is: “If independent evaluation is to be impartial, its findings, analyses, and conclusions must be free from bias. This means that [the evaluator] must be independent from line management at all stages of the process, including planning of work programs, formulation of terms of reference, staffing of evaluation teams, and clearance of reports.” I used this as a guide in identifying which evaluations were independent.
How many WASH organizations are there?
I began my quest by compiling a list of organizations that are implementing, supporting, and/or funding water, sanitation, and hygiene projects in developing countries. I quickly found that not all water and sanitation organizations are created equal. When I think of water organizations, traditional ones such as CARE, World Vision, Catholic Relief Services, etc. immediately come to mind. However, in the last decade or so, there has been an exponential increase in water organizations, from traditional NGOs, to startup companies, to social enterprises, to bottled water companies that support global water projects with their profits. I couldn’t find one go-to directory for all the organizations that fit these categories. However, I got a good start with the World Bank’s Water and Sanitation Program, Water for the Ages, filtersfast, WASHfunders.org, Twitter lists, and Google searches. Susan at Improve International added in several that she found as well. By the time we were done, we had come up with a list of more than 500 water and sanitation organizations! This list will, in the near future, be included on WASHfunders.org.
How do we know who’s doing good work?
Now that I had a list of organizations, it was time to look for independent evaluations. Looking to the organizations’ websites themselves proved fruitless, as it was rare to find actual independent evaluations listed on their site. While searching, I realized that sometimes independent evaluations were called “external” evaluations or “third-party” evaluations. Out of the 500+ water organizations identified, we could only find 37 independent evaluations by web searches.
I did not find a site that listed organizations that had either a) done independent evaluations, b) rated them using standardized indicators, or c) provided easy and clear information on how to determine the efficiency or success of a water and sanitation organization (or any international aid or development organization for that matter).
Philanthropedia’s Ranked Nonprofits: International Water, Sanitation, and Hygiene site came the closest with opinion-based rankings, while the Better Business Bureau’s 20 Standards for Charity Accountability and Charity Navigator’s Accountability and Transparency Rating provided overall rankings based on the organizations’ self-reported information.
My search resulted in some interesting finds:
- The oldest independent evaluations I found were several for UNICEF dated 1991-1995 by various organizations such as the Swedish International Development Cooperation Agency and the Regional Center for Development Management & Research. UNICEF also evaluates other organizations; for example, it evaluated PlayPumps International in 2007.
- The American Red Cross, which I don’t usually think of as a water organization, has multiple evaluations of water systems they built in Central America after Hurricane Mitch, done by the Centers for Disease Control between 2001 and 2009.
- There were several other evaluations of water and sanitation projects in response to disaster relief such as tsunamis or earthquakes. For example, a group called the Active Learning Network for Accountability and Performance in Humanitarian Action did a Mid-Term Independent Evaluation Report of Save The Children’s Tsunami Response Programme in 2008 that encompasses multiple aspects besides water and sanitation such as construction, health, education, livelihoods, child protection, etc.
- Multilateral banks such as the World Bank, Asian Development Bank, and Inter-American Development Bank have independent evaluation departments that serve as an unbiased separate arm that evaluates the bank’s program work. However, the evaluation documents that I found were long and complex and didn’t highlight their water and sanitation components, as they primarily aimed to provide comprehensive programmatic information.
What I didn’t find
The wide gap between the number of easily found independent evaluations and the total number of water organizations out there indicates that these are an important, but overlooked, aspect of an organization’s monitoring and evaluation program, not to mention its fundraising program. Imagine if I were a donor trying to find objective information on which water organizations had done effective and sustainable work. It was a challenge to find just a comprehensive list of water organizations, not to mention a list of independent evaluations.
Other aspects of this research that were difficult include:
- Easily identifying the stage in the project that the evaluation was done. It would be ideal to do an evaluation mid-project, post-project, and then another at least once 10-50 years later, all using the same indicators.
- Knowing what is done with the evaluations after they are read: do organizations address challenges surfaced in the report and then re-evaluate later?
- Finding evaluations with a completely unbiased or uninvolved team. Some evaluations that claimed to be done independently included a staff member of the organization being evaluated.
- Finding highlighted mentions of failed aspects of the project/program. Who follows up after these types of negative evaluation findings? What and how can other organizations learn from these?
- Who funds the independent evaluations? Are they included in organizational budgets for monitoring & evaluation?
Improve International and I hope that publishing this list of organizations will prompt water and sanitation organizations to share their independent evaluations. We imagine that they might be in paper version, or on someone’s hard drive. We’d like to get the information out there so we can all learn from it. Are there common challenges we all face? Is there someone who’s figured out how to promote sustainable hand washing behaviors? It’s not too late – let us know if your organization has had (or has done) an independent evaluation – email the link or the document to email@example.com
Yesterday I chaired a panel at the Philadelphia Global Water Initiative conference on performance indicators. Our panel (“Perspectives & Experiences from National & International Organizations”) had the honor of being interrupted by the Mayor of Philadelphia, Michael Nutter (a “green city” rock star). Below I’ve shared my talking points for introducing the panel in blog form.
Indicator: A thing, esp. a trend or fact, that indicates the state or level of something
Monitor: Observe and check the progress or quality of (something) over a period of time; keep under systematic review.
We are intimately familiar with performance indicators and monitoring them in our personal lives (body weight, baby length, miles per gallon) and in the world in general (unemployment rates, housing starts, Dow Jones Index). So why do we need to monitor performance indicators in international humanitarian and development work?
- Doing good work: Well, simply because we want to know if we’re doing what we think we’re doing. When monitored during a program, data on these indicators can help us make mid-course corrections. When monitored after program completion, they can help us change the way we design future programs.
- Reporting to donors: Some of our donors want to know if their money accomplished what we said we’d accomplish.
- Advocacy for issue: And it’s helpful for advocacy efforts to be able to say to the general public that our organization is moving the needle on the big social problem we’re trying to solve; advocacy to governments where we work and their role in scaling and sustaining service delivery; and advocacy to our peer organizations on why it’s important to work in coordination, etc.
There are many organizations who have some variation of a monitoring program. Monitoring status of indicators is just part of a cycle, however. We need to also evaluate, learn, and reform our work based on what we’ve learned.
It may actually be a good sign, although it perhaps should have happened earlier in the history of development, that many questions are being debated in the water & sanitation sector:
- What indicators should we measure? Outputs or outcomes? Quantitative or qualitative? For example, the WASH Cost initiative is telling us we need to start considering costs. UNICEF is looking hard at equity of water and sanitation access.
- What should be mandatory (and measured the same way across the board) and what should be optional (recognizing that there are scarce financial resources)? There is an effort underway to identify common core indicators, called the WASH Monitoring & Evaluation initiative.
- Should we / how do we consider cross-sectoral indicators (agriculture, health, economic, environment)?
- Should we / how can we include customer (aka beneficiary) goals and satisfaction?
- Do we need to be able to dis-aggregate the data by gender, socio-economic status, age, ethnicity, or other factors?
- What tools (surveys or assessments) should we use?
- Who should collect the information, how much (every single water point?) and how often (every day, every month, every year)? And for how long?
- Who should pay for collecting information?
- Who should “own” the information? Is it ethical to make this information public if it contains financial and/or health information?
- How do we know the information is accurate? And does access to safe water mean the same thing as use of safe water?
- Who is responsible for doing something about the information we get (for example, if we see failures in water / sanitation projects years after implementation, who should do something about that?)
Okay, I would love to tell you that yesterday’s conference provided all the answers, but I can’t. [There were some good relevant resources highlighted in last week's TweetChat on water, sanitation & hygiene evaluation.] However, it’s only by knowing what we don’t know that we can start learning. What’s your question about performance indicators? Better yet, what’s your answer?
The first Accountability Forum focused on the evaluation of COCEPRADIL (Central Committee for Water and Comprehensive Development Projects in Lempira), a local NGO in Lempira, Honduras that has been implementing water and sanitation programs for over 20 years. Over one week in December 2011, independent evaluators and peer organizations evaluated COCEPRADIL based on 22 criteria of program effectiveness. We also considered the likelihood of continuing long-term service provision. Though this first Forum served as a pilot for the criteria and the evaluation process (including survey instruments and study format), the evaluation of COCEPRADIL provided valuable information on their credibility as an organization as well as lessons that can be shared from both their successes and current challenges.
Bottom line: future funding for COCEPRADIL is highly recommended, particularly for “software” such as training, which is often challenging to secure funding for, yet is key to COCEPRADIL’s continuing success.
Read the full report here: Accountability Forum Evaluation Report Dec 2011
Summary of Results
As shown in the table below, COCEPRADIL meets or exceeds basic standards for 21 of the 22 criteria. There are 11 criteria where COCEPRADIL meets high expectations (blue), 10 where it meets basic expectations (green), and only one where it does not meet basic expectations (yellow).
Based on numeric scores of 3 for “meets high expectations” (blue), 2 for “meets basic expectations” (green), and 0 for yellow and red, COCEPRADIL scores 53 points out of a possible 66.
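For readers who want to apply the same color-coded scoring to their own evaluations, here is a minimal sketch of the arithmetic. The point values and counts come from the summary above; the variable names are my own, not part of the Accountability Forum methodology.

```python
# Point values per color band, as described in the report summary:
# blue ("meets high expectations") = 3, green ("meets basic
# expectations") = 2, yellow and red (below basic) = 0.
SCORES = {"blue": 3, "green": 2, "yellow": 0, "red": 0}

# Counts from the COCEPRADIL evaluation: 11 blue, 10 green, 1 yellow.
results = {"blue": 11, "green": 10, "yellow": 1}

# Total score is the sum of (points * count) over all color bands.
total = sum(SCORES[color] * count for color, count in results.items())

# Maximum possible score: every criterion rated blue (3 points each).
max_possible = max(SCORES.values()) * sum(results.values())

print(f"{total} out of {max_possible} points")  # 53 out of 66 points
```

Any organization evaluated against the same 22 criteria could be scored this way, which makes results comparable across Accountability Forum pilots.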