Participation has become a critical concept in development, increasingly employed in the planning and implementation of development programmes. This book takes participation one step further by exploring its use in the monitoring and evaluation of these programmes. Bringing together a broad range of case studies (12 in total) and discussions between practitioners, academics, donors and policy makers, the book explores conceptual, methodological, institutional and policy issues in participatory monitoring and evaluation. It distils the common themes and experiences in participatory monitoring and evaluation to show the challenges - and far-reaching benefits - of the approach. Part 1 opens with a general overview of participatory monitoring and evaluation, followed by a synthesis of case studies and regional reviews of practice and methodological innovations around the globe. Part 2 then presents case studies of learning with communities; these illustrate the diverse range of settings and contexts in which participatory monitoring and evaluation is being applied. Part 3 raises the key issues and challenges for participatory monitoring and evaluation, including the need for institutional change. The book concludes by proposing areas for future research and action.
Local people can generate their own numbers – and the statistics that result are powerful for them and can influence policy. Since the early 1990s there has been a quiet tide of innovation in generating statistics using participatory methods. Across all sectors and at every level from local to national, participatory statistics are being generated in the design, monitoring and evaluation, and impact assessment of development interventions. This book, by describing policy, programme and project research, aims to provide impetus for the adoption and mainstreaming of participatory statistics within international development practice. It lays down the challenge of institutional change that allows a win-win outcome, in which statistics are part of an empowering process for local people and a valuable information flow for those open to it in aid agencies and government departments.
This paper reviews the available literature on participatory monitoring and evaluation, focusing on how and where it is being used, the underlying concepts and issues involved, and the challenges for its use in the field. In addition, an annotated list of manuals and resources on the 'tools' and methods used in participatory monitoring and evaluation is included in the appendices.
NGO-IDEAs is a cooperative venture between about 40 non-governmental organisations (NGOs) from South Asia, East Africa and the Philippines, and 14 German NGOs working in the field of development cooperation. Together they have been developing tools that are specifically relevant for civil society involved in community development in a wider sense. As a result they have produced the Impact Toolbox, which is organised along the lines of the project cycle.
The examples in this publication tell stories of how NGOs have applied elements of this toolbox, with a view to giving people an idea of how the process works in practice. Hence, rather than presenting the impact of NGO development work, they give a practical description of how the tools have been applied, and the difference they have made. They report only on the parts of the practice relevant to this publication, and also highlight errors and lessons learned.
This article draws on literature from both monitoring and evaluation (M&E) and organisational learning to explore synergies between these two fields in support of organisational performance. Two insights from the organisational learning literature are that organisations learn through ‘double-loop’ learning – reflecting on experience and using this to critically question underlying assumptions – and that power relations within an organisation will influence what and whose learning is valued and shared. This article identifies four incentives that can help link M&E with organisational learning: the incentive to learn why; the incentive to learn from below; the incentive to learn collaboratively; and the incentive to take risks. Two key elements are required to support these incentives: (1) establishing and promoting an ‘evaluative culture’ within an organisation; and (2) having accountability relationships where value is placed on learning ‘why’, as well as on learning from mistakes, which requires trust.
Using participatory approaches in impact evaluation means involving stakeholders, particularly the participants in a programme or those affected by a given policy, in specific aspects of the evaluation process. The term covers a wide range of different types of participation, and stakeholders can be involved at any stage of the impact evaluation process, including its design, data collection, analysis, reporting and the management of the study.
Like other agencies involved in international development cooperation, the Swiss Agency for Development and Cooperation (SDC) is committed to enhancing its results orientation, learning and effectiveness through more responsive and accountable programming. Encouraging a culture in which citizens participate in the planning, monitoring and evaluation of programmes and country strategies is essential for achieving these aims. This note is about SDC’s experience with beneficiary assessment (BA), an evaluation approach used to increase its responsiveness and accountability to the citizens who are the intended direct and indirect beneficiaries of its work. It aims both to raise awareness of the potential advantages of using BA to enhance learning, responsiveness, accountability and effectiveness, and to contribute to enhancing capacity and confidence to use BA in the evaluation of projects, programmes and country strategies by providing practical orientation and support.
Bridging and Bonding: Improving the Links Between Transparency and Accountability Actors. Learning and Inspiration Event Report
This report came out of the learning and inspiration event held in Dar es Salaam, Tanzania, from 26 to 28 May 2014, which was part of the Making All Voices Count programme. It is intended for participants and others with an interest in technology for transparency and accountability.