WORKED EXAMPLE 1

This example focuses on a metrics-based evaluation and the possible format of an evaluation report.

It draws loosely on a UN project, Strengthening the Leadership of Women in Local Democracy; the full version of the original evaluation report is cited here. Some general suggestions on the format of an evaluation report are given below.

Imagine you are seeking to curb physical and verbal violence against women voting and participating in national elections in Pakistan, with a particular focus on town X and town Y in two neighbouring regions.

STEP 1

This involves choosing your type of evaluation. You have chosen, or your funder has required you to use, a metrics-led approach focused on the logical framework. Have a go at filling in the logframe below:

Narrative Summary | Indicators | Means of Verification | Assumptions (& Risks)
------------------+------------+-----------------------+----------------------
Goal              |            |                       |
Outcome           |            |                       |
Output 1          |            |                       |
Output 2          |            |                       |
Activity 1        |            |                       |
Activity 2        |            |                       |

So now you have to fill in your logframe using the advice given here. For a completed, downloadable and printable version of this logframe, please click here.
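
If it helps to see the same structure another way, the short Python sketch below represents the empty logframe as a simple data structure, with two rows part-filled purely for illustration using the scenario above (they are not taken from the completed, downloadable version).

    # A sketch of the logframe as a simple data structure.
    # The part-filled entries are invented for illustration only,
    # loosely based on the Pakistan scenario above.
    COLUMNS = ("narrative", "indicators", "means_of_verification", "assumptions")

    logframe = {
        "Goal": {
            "narrative": "Women in towns X and Y vote and campaign free from violence",
            "indicators": ["Reported incidents of election-related violence against women"],
            "means_of_verification": ["Police and electoral commission records", "media reports"],
            "assumptions": ["Incidents are reported and recorded reasonably consistently"],
        },
        "Output 1": {
            "narrative": "Community members trained on women's right to vote",
            "indicators": ["Number of participants completing the training"],
            "means_of_verification": ["Attendance registers", "pre-/post-workshop quiz scores"],
            "assumptions": ["Trainees remain in the area through the election"],
        },
    }

    # Quick completeness check: every row needs all four logframe columns.
    for level, row in logframe.items():
        missing = set(COLUMNS) - row.keys()
        assert not missing, f"{level} is missing: {missing}"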


STEP 2

So, you have filled out your blank logframe (above) and built M&E into your project from the outset as per the attached worked example.

You have undertaken training workshops and engaged in advocacy and want to know how effective this work has been. How could you check this?

One way of answering this question is to use pre- and post-workshop quizzes. You ask a series of questions at the very start of the training and then ask the same questions at the end.

These questions would home in on people’s understanding of gender issues and of the constraints facing women when it comes to voting (harassment and threats, time constraints, access to polling booths, and false assumptions regarding their right to vote and their capacity to understand electoral issues).

Alternatively, you could use a mini-questionnaire at the start and end of the training. This survey would involve both closed and open-ended questions.
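
To make the before-and-after arithmetic concrete, here is a minimal Python sketch of how such pre- and post-workshop quiz scores might be compared. The participants and scores are invented purely for illustration.

    # Sketch: comparing pre- and post-workshop quiz scores (out of 10).
    # All participants and scores are invented for illustration.
    pre_scores  = {"P01": 4, "P02": 6, "P03": 3, "P04": 5, "P05": 7}
    post_scores = {"P01": 7, "P02": 8, "P03": 6, "P04": 5, "P05": 9}

    changes = {p: post_scores[p] - pre_scores[p] for p in pre_scores}
    improved = sum(1 for delta in changes.values() if delta > 0)
    average_gain = sum(changes.values()) / len(changes)

    print(f"{improved} of {len(changes)} participants improved")
    print(f"Average gain: {average_gain:+.1f} points out of 10")
    for participant, delta in sorted(changes.items()):
        print(f"  {participant}: {pre_scores[participant]} -> {post_scores[participant]} ({delta:+d})")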

STEP 3

So, now you know that you are using a metrics-based evaluation, and a rapid appraisal of your training has shown that it increased local awareness of gender issues, particularly of the need for greater participation by women in elections and campaigns. But you know less about how the campaigns and elections are actually being conducted.

How can you check this and also link it back to your own project?

Clearly, you need to be careful not to assume that more inclusive electoral campaigning or the enhanced presence/visibility of women in polling booths can be attributed directly to your project. You can only claim to have made a contribution to any increase in awareness and participation.

As such, you might decide to undertake key informant interviews with individuals who benefited from the training. You might also complement this with interviews of local government officials responsible for the conduct of the election and possibly snapshot interviews with women leaving polling booths (though this raises ethical concerns and may not be advisable in some countries).

Such interviews would help you check the impact of your advocacy work. Have voters changed their behaviour in ways that reflect the increased awareness you have brought about?

So, you are seeking to establish whether their attitudes have changed and whether their actions have followed. It is not enough for, say, election campaign managers just to claim that their views have changed. How can this be backed up by evidence? Have they changed electoral campaign literature, allowed positive discrimination and so on?

An alternative approach would be participant observation, not so much in and around polling booths, but at events where the election campaign is underway. You could make a subjective judgment regarding the amount of time/ degree of emphasis attached to women’s participation and note down any key quotations, any incidents of violence or aggression.
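
If you take the participant-observation route, a simple structured log makes those subjective judgments easier to compare across events. The Python sketch below shows one possible shape for such a log; the events, timings and quotations are invented for illustration.

    # Sketch: a structured log for participant observation at campaign events.
    # All entries are invented for illustration.
    observations = [
        {"event": "Campaign rally, town X", "minutes_on_women": 6, "total_minutes": 45,
         "incidents": 0, "quote": "Every household should send its women to vote."},
        {"event": "Campaign rally, town Y", "minutes_on_women": 1, "total_minutes": 50,
         "incidents": 1, "quote": None},
    ]

    for obs in observations:
        share = 100 * obs["minutes_on_women"] / obs["total_minutes"]
        print(f"{obs['event']}: {share:.0f}% of speaking time on women's participation, "
              f"{obs['incidents']} incident(s) of aggression noted")
        if obs["quote"]:
            print(f'  Key quotation: "{obs["quote"]}"')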

On the basis of these findings and other data drawn from project records, you may be in a position to equip an independent evaluator with key data or to undertake your own informal evaluation of your project. One format for doing so is suggested below.

Before that, remember:

  • It is unrealistic to expect everything to have gone perfectly to plan and donors will be suspicious if you claim this
  • Do not just say what you did but spell out the difference it made
  • Highlight your learning
  • Spell out whether your assumptions held and what you might do differently in the future
  • Make the most important information easy to find
  • Report against the targets in your original proposal and make your numbers clear (one simple way of laying this out is sketched after this list)
  • Use tables and graphs where appropriate
  • Check your spelling and grammar
  • Where possible share real-life stories
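
Following on from the list above, here is a minimal Python sketch of reporting actuals against the targets in an original proposal. The indicators and figures are invented for the example.

    # Sketch: reporting actual results against original proposal targets.
    # The indicators, targets and actuals below are invented for illustration.
    results = [
        # (indicator, target, actual)
        ("Training workshops delivered", 12, 11),
        ("Women completing training", 240, 265),
        ("Local officials engaged via advocacy", 20, 17),
    ]

    print(f"{'Indicator':<40}{'Target':>8}{'Actual':>8}{'% of target':>14}")
    for indicator, target, actual in results:
        pct = 100 * actual / target
        print(f"{indicator:<40}{target:>8}{actual:>8}{pct:>13.0f}%")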

Writing Your Evaluation Report

You might actually want to go beyond Step 3 and consider writing or commissioning your own evaluation report. For excellent advice on writing evaluation reports, see the resources listed under Going Further below.

For a possible format for your own informal/ self-evaluation report, see below. Note that a formal, independent evaluation would normally be commissioned and would have terms of reference.

Possible Format

The kind of sections/ topics that are normally covered include:

  • Executive Summary
  • Introduction
  • Description of the project
  • Evaluation purpose and methodology (context of the evaluation, questions, team, limitations…)
  • Findings, Conclusions and Recommendations
  • Lessons Learned
  • Appendices:
      ◦ Terms of Reference
      ◦ Evaluation design and methodology (a fuller overview than in the introduction)
      ◦ List of persons interviewed
      ◦ List of documents reviewed

Possible Method

As noted earlier, one of the key methods of evaluation involves using the following five OECD criteria as a way of framing your findings:

Box 18.1 OECD-DAC evaluation criteria
Relevance: The extent to which the aid activity is suited to the priorities and policies of the target group, recipient and donor. To what extent are the objectives of the programme still valid? Are the activities and outputs of the programme consistent with the overall goal and the attainment of its objectives? Are the activities and outputs of the programme consistent with the intended impacts and effects?
Effectiveness: A measure of the extent to which an aid activity attains its objectives. To what extent were the objectives achieved/are likely to be achieved? What were the major factors influencing the achievement or non-achievement of the objectives?
Efficiency: Measuring the outputs in relation to the inputs. Were activities cost-efficient? Were objectives achieved on time? Was the programme or project implemented in the most efficient way compared to alternatives?
Impact: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. What has happened as a result of the programme or project? What real difference has the activity made to the beneficiaries? How many people have been affected?
Sustainability: Measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. To what extent did the benefits of a programme or project continue after donor funding ceased? What were the major factors which influenced the achievement or non-achievement of sustainability of the programme or project?
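
If you are drafting the findings section yourself, one simple way to keep all five criteria in view is to generate a skeleton to write into. The Python sketch below does this; the guiding questions merely paraphrase Box 18.1 above.

    # Sketch: a findings skeleton organised by the five OECD-DAC criteria.
    # The guiding questions paraphrase Box 18.1 above.
    criteria = {
        "Relevance": "Are the objectives still valid, and do the activities and outputs fit the overall goal?",
        "Effectiveness": "To what extent were the objectives achieved, and what influenced this?",
        "Efficiency": "Were activities cost-efficient and delivered on time?",
        "Impact": "What difference, positive or negative, has the project made, and for how many people?",
        "Sustainability": "Are the benefits likely to continue after donor funding ends?",
    }

    for criterion, question in criteria.items():
        print(criterion.upper())
        print(f"Guiding question: {question}")
        print("Findings: ...")
        print("Evidence: ...")
        print()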

GOING FURTHER: Writing Evaluation Reports
The Evaluation Toolbox, developed under the Victorian Local Sustainability Accord, offers a detailed outline of the process of putting together a final evaluation report.

Evaluation Report Layout Checklist (Stephanie D. H. Evergreen): distills best practices in graphic design, created particularly for use on evaluation reports.

Oxfam GB Evaluation Guidelines (accessed 2012-05-08): http://policy-practice.oxfam.org.uk/~/media/Files/policy_and_practice/methods_approaches/monitoring_evaluation/ogb_evaluation_guidelines.ashx

Stetson, Valerie (2008). Communicating and Reporting on an Evaluation: Guidelines and Tools. Catholic Relief Services and American Red Cross, Baltimore and Washington, USA. Download: http://www.crsprogramquality.org/storage/pubs/me/MEmodule_communicating.pdf

Torres, Rosalie T., Hallie Preskill and Mary E. Piontek (2005). Evaluation Strategies for Communicating and Reporting: Enhancing Learning in Organizations (Second Edition). Thousand Oaks, CA: Sage.

USAID (2010). Constructing an Evaluation Report. Retrieved from http://transition.usaid.gov/policy/evalweb/documents/TIPS-ConstructinganEvaluationReport.pdf

For an example of an evaluation report that relies on the logical framework, see the AAP Tanzania Final Evaluation Report.

How to use your evaluation findings to improve your work [NCVO]
How to share your evaluation findings with funders and donors [NCVO] 
Evaluation Reporting: A Guide to Help Ensure Use of Evaluation Findings [Better Evaluation] 
Briefing on Practices and Debates in Evaluation [Bond]: https://www.bond.org.uk/data/files/Effectiveness_Programme/Briefing_on_practices_and_debates_in_evaluation.pdf

Where to next?

Click here to return to the top of this page, here to return to the welcome page and here to move on to Theory-Based Evaluation.