Partnership Monitoring and Evaluation
By Stella Pfisterer and Cintia Carneiro
Given the growing number of partnerships, monitoring and evaluation of, for and through partnerships faces new challenges. Partnering raises critical questions about the extent to which collaboration actually adds value in terms of both process and outcomes, and how such judgments are made (Atkinson, 2005). Evaluating partnerships is difficult for various reasons, such as the long timescales for achieving impact, different perspectives on what success means, the complexity and variability of partnership interventions, and the different contexts within which partnerships work. Neither the efficiency of the partnering process itself nor its effectiveness in addressing set goals is easy to assess.
Introducing M&E
Monitoring is the routine process of collecting data and measuring progress towards programme/project objectives. Monitoring answers the question “what are we doing?”
Monitoring is done to:
- Track inputs and outputs and compare them to the plan;
- Identify and address problems;
- Ensure effective use of resources;
- Ensure quality and learning to improve activities and services;
- Strengthen accountability;
- Serve as a programme management tool.
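The first monitoring task above, tracking inputs and outputs against the plan, can be illustrated with a small sketch. This is not part of any standard M&E toolkit; the indicator names, figures and the 80% threshold are all hypothetical, chosen only to show the idea of flagging indicators that fall behind their targets.

```python
# Illustrative sketch only: a toy monitoring check that compares
# reported outputs against planned targets and flags shortfalls.
# Indicator names, figures and the threshold are hypothetical.

def monitor(plan, actuals, threshold=0.8):
    """Flag indicators whose actual value falls below
    `threshold` (here 80%) of the planned target."""
    flags = {}
    for indicator, target in plan.items():
        actual = actuals.get(indicator, 0)
        progress = actual / target if target else 0.0
        flags[indicator] = {
            "target": target,
            "actual": actual,
            "progress": round(progress, 2),
            "on_track": progress >= threshold,
        }
    return flags

# Hypothetical quarterly figures for a partnership programme
plan = {"workshops_held": 10, "partners_trained": 50}
actuals = {"workshops_held": 9, "partners_trained": 30}

report = monitor(plan, actuals)
# "workshops_held" is at 90% of target and on track;
# "partners_trained" is at 60% and would be flagged for follow-up.
```

A routine check like this supports the management function of monitoring: problems are identified while there is still time to address them.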
Evaluation is the periodic assessment of the design, implementation, outcome, and impact of a development intervention. Evaluation is a systematic way of learning from experience to improve current activities and promote better planning for future actions. In general, evaluation answers the question “what have we achieved and what impact have we made?” Reasons for evaluation include:
- Determining programme effectiveness;
- Showing impact;
- Strengthening financial responsibility and accountability;
- Promoting a learning culture focused on service improvement;
- Promoting the replication of successful interventions;
- Collecting information that helps to make judgments fairly.
Evaluations almost always involve multiple and diverse audiences: those who use the evaluation to make decisions, individual administrators or legislators, staff, or the large group of consumers who purchase the goods and services being assessed. Other typical audiences are the individuals and groups whose work is being studied, those who are affected by the results, community organizations, and possibly the general public. To ensure that the evaluation has utility, all these details must be worked out early in the programme - the earlier the better. These details are what we refer to here as planning the evaluation.
Evaluation must be carefully planned from the beginning of the project in order to be useful. The evaluation design has one purpose: to provide a framework for planning and conducting the study. Benson and Michael (1990) suggest that there are two major components of evaluation design: (1) defining the criteria by specifying exactly what information is needed to answer substantive questions regarding the effectiveness of the programme and (2) selecting the method by determining an optimal strategy or plan through which to obtain descriptive, exploratory, or explanatory information that will permit accurate inferences concerning the relationship between the programme implemented and the outcomes observed. The evaluation should be designed so that it meets the needs of the programme.
Monitoring and evaluation of partnerships
In the context of partnership monitoring and evaluation, scholars claim that partnerships require specific evaluation and monitoring frameworks since they can be distinguished from other relationship types (Brinkerhoff, 2002). Partnerships are specific collaborative arrangements and collaboration is hard to conceptualize because partnerships usually have many players and components, and tend to operate within complex systems.
There is a wide diversity of criteria for monitoring and evaluating partnerships. Criteria such as financial factors, organizational effectiveness, and efficiency have been applied. Partnership evaluation can be studied from either a societal or a managerial perspective. Instead of objective measures, some researchers have emphasized perceptual measures of partner performance. No consensus has yet been reached on “the best approach”. Some evaluation frameworks for partnerships exist (Brinkerhoff, 2002; Atkinson, 2005; Asthana et al., 2002), but they are either not generic or make insufficient linkages between activities, outcomes and impacts. Most evaluation frameworks have been developed primarily as management tools, to help participating partners identify obstacles and progress in both the process and the objectives of the partnership (Glendinning, 2004). In order to measure the real effectiveness of partnerships, however, we are interested in investigating how the working relationship contributes to the ultimate outcomes and objectives (e.g. whether and how multiple perspectives were considered in the analysis and development of solutions). Therefore, evaluation of process objectives (e.g. characteristics of the implementation process) and impact objectives (e.g. intermediary goals considered essential to the attainment of the outcomes) is essential if we are to understand the contributions of the partnership itself to the attainment of the outcome objectives of the partnership members (Schulz et al., 2003).
Partnerships are characterized by interactions of individuals who represent organizations which are operational in a specific environment. It is therefore relevant to take four perspectives into consideration for the evaluation:
- Individual level: motivations, abilities and skills of individuals to manage the partnerships.
- Organizational level: benefits and costs for a specific partner organization.
- Institutional level: evaluating the partnership as a whole (and comparing it with alternatives).
- Institutional environment: evaluating the context.
Partnership evaluations can have different functions. They can be summarized as follows (based on Stern, 2004):
- Evaluation as design: appraisal or ex-ante exercise.
- Evaluation as development: formative evaluation in the course of the programme which allows for corrections and capacity building.
- Evaluation as management: monitoring progress, providing feedback, supporting reflection and enabling accountability or reinforcing transitions within partnerships.
- Evaluation as explanation: evaluation operating at the boundary between research and evaluation practice.
Often, the purpose and function of an evaluation are not made explicit to each partner involved in a partnership. As a result, the evaluation results may not be fully understood by every partner involved.
Who evaluates what?
Different partners have different information needs. What kind of evaluation output is most useful for whom? When partnerships involve only non-governmental partners, the reasons for conducting evaluations may differ from those of partnerships involving governments. Methodologies developed for use by government partners may be inappropriate for evaluations conducted by non-governmental partners.
When evaluating a partnership, a business partner would likely focus on private costs, such as its financial or in-kind contribution to the partnership, and on the benefits to the business, such as increased profits attributed to the partnership. Governments evaluating the same partnership are inclined to examine both the private and the social costs and benefits (OECD, 2006).
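The difference between the two appraisals is a matter of which costs and benefits enter the calculation. A minimal worked example, with entirely hypothetical figures, makes the contrast concrete:

```python
# Illustrative sketch only: hypothetical figures showing how a business
# partner and a government might appraise the same partnership.

private_costs = 100_000     # firm's cash and in-kind contribution
private_benefits = 130_000  # e.g. extra profit attributed to the partnership
social_costs = 20_000       # e.g. public administration time
social_benefits = 250_000   # e.g. value of improved public services

# Business view: private benefits minus private costs
business_view = private_benefits - private_costs

# Government view: all benefits minus all costs
government_view = (private_benefits + social_benefits) \
                - (private_costs + social_costs)

print(business_view)    # 30000
print(government_view)  # 260000
```

On these (invented) numbers, the partnership looks modestly attractive to the firm but highly attractive from a societal standpoint, which is why the two partners may reach quite different judgments about the same arrangement.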
Who evaluates whom?
M&E in partnerships is a delicate process, since we have to ask the question: who is evaluating whom? A key question is how evaluations should be optimally designed, given the roles and attributions of the different partners. The trend in partnership M&E is to develop stakeholder and/or participatory systems. These interactive approaches are based on the understanding that a range of stakeholder views has to be considered, since different stakeholders will have differential access to, and influence over, the evaluation process. Indeed, Halliday et al. (2004) showed that those partners identified as “key actors” for a partnership evaluation were the most likely to comment extensively on questionnaires, to highlight ambiguities between their understandings of the partnership process, and to rank individuals’ strengths and weaknesses. Particularly in complex partnerships, respondents might be aware of only a particular area of operation of the programme. The literature suggests that evaluation should be collaborative, engaging input from all players to incorporate a range of perspectives. It is acknowledged, however, that good collaborative evaluations are time-consuming.
How to evaluate?
Besides the issue of how to design a partnership evaluation while keeping time and cost in mind, we need to consider how the process of evaluation itself influences the partnership relationship. There might be political issues and resistance dynamics. On the one hand, partnership actors may be disinclined to address issues of trust and other relationship dynamics. On the other hand, managers may be reluctant to state the relative importance of the different impacts, and policymakers may be reluctant to clarify their intentions (Toulemonde et al., 1998). In a partnership, objectives are based on compromise between partners with different political, social and economic aims. Therefore, Toulemonde et al. (1998) state that evaluation teams are often unable to find clear and simple criteria in the official documents on which to base their work.
The most common way to address this problem is to involve external evaluators. Toulemonde et al. (1998) mention that evaluators often settle for statistics and indicators that are too sector-specific to be relevant and fail to examine the programme rationale. Often, the evaluation report amounts to little more than a collection of opinions by the actors involved and by experts. It becomes obvious that partnership evaluators require an additional set of skills in order to do their job effectively. Besides characteristics such as independence and neutrality, knowledge of M&E and sector expertise, a partnership evaluator requires a set of skills falling under research, self-management, reading, writing and listening skills (see partnering skills).
But how can M&E be carried out to improve the partnership itself? A strong M&E system, like the programme itself, must be supported through the use of management tools: a budget for M&E, staffing, and activity planning. Building an effective M&E system involves administrative and institutional tasks such as:
- Establishing data collection, analysis, and reporting
- Setting M&E guidelines
- Designating who will be responsible for which activities
- Establishing means for quality control
- Establishing timelines and costs
- Establishing guidelines on transparency
- Disseminating the information and analysis
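One practical use of the checklist above is to verify, before a programme starts, that every building block of the M&E system has actually been assigned. The sketch below is not a standard instrument; the element names and the example plan entries are hypothetical, chosen only to show how such a completeness check could work.

```python
# Illustrative sketch only: representing the administrative building
# blocks of an M&E system as a simple structured record, so missing
# elements can be detected before the programme starts.
# Element names and plan entries are hypothetical.

REQUIRED_ELEMENTS = {
    "data_collection", "guidelines", "responsibilities",
    "quality_control", "timeline_and_costs",
    "transparency", "dissemination",
}

def missing_elements(me_plan):
    """Return the building blocks the plan has not yet defined."""
    return sorted(REQUIRED_ELEMENTS - set(me_plan))

# A partially completed (hypothetical) M&E plan
me_plan = {
    "data_collection": "quarterly partner survey",
    "guidelines": "shared indicator handbook",
    "responsibilities": {"survey": "NGO partner", "analysis": "evaluator"},
    "timeline_and_costs": "annual review, 5% of programme budget",
}

print(missing_elements(me_plan))
# -> ['dissemination', 'quality_control', 'transparency']
```

In this invented example the check reveals that quality control, transparency guidelines and dissemination have not yet been assigned, exactly the gaps that tend to surface late in a partnership programme.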
Partnership monitoring and evaluation is one of the most pressing challenges for organizations dealing with cross-sector partnerships. It is doubtful whether the common M&E systems in place are sufficient for dealing with the complexity of partnership M&E. Therefore, the challenge for researchers is to further develop and validate a sound, comprehensive and practicable M&E framework for partnerships.
Table 1: Comparison of Monitoring and Evaluation