Monitoring and evaluating knowledge-based initiatives in a Caribbean context

 By Valerie Gordon and Yacine Khelladi

24 JANUARY 2014. SANTO DOMINGO, Dominican Republic — The Caribbean has benefitted from significant development assistance in the more than four decades since most of its countries became independent of colonial powers. The form and dollar value of this assistance have varied considerably, and it has supported interventions ranging from infrastructure development and human and institutional capacity building to civil society support and environmental management. However, with most Caribbean countries now classified as “middle income” under the World Bank’s most recent designation, funding priorities for the region are changing, and many traditional funding agencies are instead directing their increasingly scarce development assistance dollars to low-income countries in Asia and Africa.

While a recent reflection on the Caribbean’s status by the UN Secretary-General credited the region for demonstrating resilience in the face of external shocks, such as natural disasters and the global financial crisis[1], it also called on the region to move forward with a development agenda that emphasises equality and sustained growth.

This agenda must address issues such as climate change, crime and violence, and youth unemployment, while maximising opportunities for trade competitiveness and high-productivity activities within a context of good governance, transparency and accountability.

At this transitional juncture, the challenge for many Caribbean societies is to employ the available technologies, and the data and research generated by government and research institutions, to solve the region’s problems.

In seeking to meet this challenge, the Caribbean Open Institute[2] (COI) was established in 2010 with the support of Canada’s International Development Research Centre (IDRC). The COI’s specific mandate is to engage and work with governments, researchers, journalists, technologists, NGOs and academics to raise awareness, strengthen capacity and combine efforts to adopt Open Development (OD) approaches in support of inclusion, participation and innovation.

Alongside the implementation of these activities, there is a need to understand and measure the effectiveness and impact of the process by which such initiatives are transformed into knowledge that informs critical decision-making, improves governance and contributes to sustained growth and development.

Assessing the results or outcomes of development interventions has traditionally been driven by donors’ need to identify the returns on their investments and by their need for accountability.

This type of monitoring and evaluation is implemented as discrete exercises, at specific intervals, by accountability interests external to the project’s principal stakeholders, and is adequate for measuring efficiency rates, outputs and the extent to which the original objectives have been achieved.

Despite the stated focus on results or outcomes of such methodologies, for example Results-Oriented Monitoring (ROM) or Results-Based Management (RBM), and despite project teams’ familiarity with tools such as the “logical framework” of the project cycle management method, there is generally little appreciation of how outputs transform into outcomes, or of what incremental social changes, if any, result from the interventions.

Development practitioners involved in more research-oriented activities, and interested in understanding the effectiveness of processes and how to improve practices, are challenged to employ more learning-oriented methodologies.

Such was the environment in which the Outcome Mapping Methodology (OMM) was born. In the early 1990s, having supported development research for many years and given partners the leeway to define and use their own evaluation mechanisms in the ways that best suited their purposes, the IDRC was challenged to demonstrate the impact of its interventions in ICT for development (ICT4D).

In response, an activity was undertaken in collaboration with partners in Asia, Africa and Latin America and the Caribbean (LAC) and resulted in the development of a set of tools and guidelines suitable for planning, monitoring and evaluating social change interventions.

The approach supports the concept of research for development and decision-making: it is grounded in actor-centred development and behaviour change, and it also:

1) recognises that development change is non-linear, complex and occurs within dynamic systems;

2) supports continuous learning by the project team and partners; and

3) promotes participatory approaches, building and sustaining partnerships, and accountability of all parties.

Figure 1: The three stages of Outcome Mapping
Source: IDRC publication[3]

 

The OMM process involves three main stages:

  • intentional design (vision, purpose of the initiative, partners, etc.);
  • monitoring (identifying the desired changes in the behaviour of, and relationships between, the various target/partner groups); and
  • the evaluation framework.

All of this is done while collaboratively determining the activities necessary to promote the desired change and identifying progress markers by which the extent of change can be measured.
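To make this concrete, here is a minimal, hypothetical sketch of how one element of an intentional design might be represented: a boundary partner, its outcome challenge, and graduated progress markers that are checked off during monitoring. The structure, field names and example data are illustrative assumptions, not part of any official Outcome Mapping toolkit.

```python
# Minimal sketch (illustrative only): one boundary partner with graduated
# progress markers, updated as monitoring observations come in.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProgressMarker:
    description: str        # observable change in the partner's behaviour
    level: str              # "expect", "like" or "love" to see
    observed: bool = False  # updated during monitoring

@dataclass
class BoundaryPartner:
    name: str
    outcome_challenge: str  # the desired behaviour change
    markers: List[ProgressMarker] = field(default_factory=list)

    def progress(self) -> float:
        """Share of progress markers observed so far (0.0 to 1.0)."""
        return sum(m.observed for m in self.markers) / len(self.markers) if self.markers else 0.0

# Hypothetical example: a national statistics office as a boundary partner
partner = BoundaryPartner(
    name="National Statistics Office",
    outcome_challenge="Routinely publishes datasets in open, machine-readable formats",
    markers=[
        ProgressMarker("Attends open data sensitisation workshops", "expect"),
        ProgressMarker("Publishes pilot datasets on an open portal", "like"),
        ProgressMarker("Adopts an open-by-default publication policy", "love"),
    ],
)
partner.markers[0].observed = True
print(f"{partner.name}: {partner.progress():.0%} of progress markers observed")
```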

While the initial set-up of an OMM process benefits from skilled facilitation, participants need not master the sometimes technical expertise required to prepare the logframes on which ROM and RBM evaluations are based. However, like any participatory, iterative and learning-focussed activity, implementing OMM demands time and commitment from internal staff and external partners.

This presents a challenge, as in most research environments time is the most costly commodity, especially if there is inadequate flexibility in overall project timelines, or if project outputs significantly outweigh learning outcomes from the perspective of the funding agency and project management.

A derivative of OMM is Outcome Harvesting (OH),[4] which focuses on, as the name suggests, “harvesting” the outcomes of an initiative. It is not necessary to have utilised OMM as the evaluation methodology of choice for OH to be implemented. In fact, the method does not include measurement of progress towards predetermined outcomes or objectives, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change.

A main characteristic is the involvement of stakeholders in a rigorous review, or “substantiation”, of the outcomes to validate and enhance the credibility of the findings. The outcomes are then organised in a database in order to make sense of them; the data is analysed and interpreted and, from this, evidence-based answers to the useable harvesting questions are derived. The final step is proposing points for discussion to harvest users, including how the users might make use of the findings[5].
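As a purely illustrative sketch (an assumed structure, not the formal Outcome Harvesting protocol), harvested outcomes could be captured as simple records, flagged once they have been substantiated through stakeholder review, and then filtered to answer a harvesting question:

```python
# Minimal sketch (illustrative only): storing harvested outcomes and filtering
# the substantiated ones that speak to a given harvesting question.
from dataclasses import dataclass
from typing import List

@dataclass
class HarvestedOutcome:
    description: str             # the observed change in a social actor
    contribution: str            # how the intervention plausibly contributed
    source: str                  # who reported or evidenced the outcome
    substantiated: bool = False  # confirmed by independent stakeholder review

def answer_question(outcomes: List[HarvestedOutcome], keyword: str) -> List[HarvestedOutcome]:
    """Return substantiated outcomes whose description mentions the keyword."""
    return [o for o in outcomes if o.substantiated and keyword.lower() in o.description.lower()]

# Hypothetical harvest database
harvest = [
    HarvestedOutcome(
        description="Ministry released crime statistics as open data",
        contribution="Workshops and follow-up technical support by the project team",
        source="Ministry ICT officer",
        substantiated=True,
    ),
    HarvestedOutcome(
        description="Journalists cited the open data portal in budget reporting",
        contribution="Data journalism training",
        source="Newspaper editor",
        substantiated=False,  # still awaiting independent confirmation
    ),
]

for outcome in answer_question(harvest, "open data"):
    print(outcome.description, "<-", outcome.contribution)
```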

In its role as a funder of research for development, and given the extent to which it has invested in ICT for development projects, the IDRC has been uniquely steadfast in its quest to have development practitioners and researchers engage in evaluation processes to support learning and understand the impact of ICT4D interventions.

The project “Developing Evaluation & Communication Capacity in Information Society Research” (DECI) commenced in 2007 with the primary objectives of learning about Utilisation Focused Evaluation (UFE)[6], building evaluation capacity among selected IDRC-PAN Asia Networking project partners through action research, and finding ways to make the approach relevant to the five disparate research project teams.

The UFE approach was first developed over 30 years ago and has evolved significantly since then, but its central tenet, evaluation driven by “intended use by intended users”, remains intact. The approach facilitates “a learning process in which people in the real world apply evaluation findings and experiences to their work”.

It is a guiding framework rather than a methodology and, as such, does not prescribe any specific content, method or theory. Critical to the success of the process are active stakeholder involvement and the evaluator stepping into the role of facilitator rather than expert.

In the context of the DECI project, case studies were carried out on the five participating projects, based on their experiences navigating the 12 steps involved in the process and on the final outcomes[7].
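Because teams typically revisit earlier steps as their understanding improves, a minimal sketch (assuming the simple checklist representation below, with step names taken from note [7]) might simply record how often each step has been worked on:

```python
# Minimal sketch (illustrative only): tracking how often a team has worked on,
# or returned to, each of the 12 UFE steps.
UFE_STEPS = [
    "Programme/organisational readiness assessment",
    "Evaluator readiness and capability assessment",
    "Identification of primary intended users",
    "Situational analysis",
    "Identification of primary intended uses",
    "Focusing the evaluation",
    "Evaluation design",
    "Simulation of use",
    "Data collection",
    "Data analysis",
    "Facilitation of use",
    "Metaevaluation",
]

visits = {step: 0 for step in UFE_STEPS}

def visit(step: str) -> None:
    """Record that the team worked on (or returned to) a step."""
    visits[step] += 1

# A team may loop back to an earlier step as understanding improves
visit("Identification of primary intended users")
visit("Focusing the evaluation")
visit("Identification of primary intended users")  # revisited: the process is not linear

for step, count in visits.items():
    if count:
        print(f"{step}: visited {count} time(s)")
```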

 


While all the steps are important, the process is not a linear one, as seen in the diagram below.

Figure 2: The overlap among the 12 UFE steps
Source: Utilization Focused Evaluation: A Primer for Evaluators,
by Ricardo Ramírez and Dal Brodhead[8]

 

The UFE approach was applied to a limited extent in assessing the achievements of the COI over the 2010-2012 period. At its outset, the project did not benefit from the establishment of an M&E framework, but the assessment drew upon several UFE steps, such as the participatory definition, with major stakeholders and the PIUs, of the Key Evaluation Questions (KEQs).

It also applied OH principles, harvesting the outcomes of the individual sub-projects to answer the KEQs and, from those answers, deriving the extent to which the overall initiative had achieved its initial objectives. As in the DECI project, it was challenging in the COI assessment to determine, amid several competing issues, what exactly to evaluate.

Among the achievements reported by the DECI projects were:

(a) a change in the perception of evaluations, which came to be seen not as audits demanded by donors but as learning opportunities; and

(b) one organisation being enabled to reflect on its organisational shortcomings, which stimulated the thinking of its leadership and stakeholders on a new strategic plan.

A significant outcome of DECI-1 was the Primer[9] prepared with the support of IDRC, which guides evaluators through the UFE process, demonstrating how the steps were implemented by reference to the case studies. A second phase (DECI-2) is to be implemented with the objective of providing mentorship in both UFE and Research Communication to selected organisations and projects.

While supporting capacity building in these two fields, the project will test the assumption that simultaneous application of the two approaches will enhance the internal learning culture within projects, and enable projects to focus attention early on communication planning so as to enhance the reach and use of research outcomes.[10] (There is no doubt that the COI can benefit from the outcomes of this project, given that the research communication component of the initiative needs strengthening.)

Another IDRC-supported initiative, “Exploring the Emerging Impacts of Open Data in Developing Countries” (ODDC),[11] will explore the use of evaluation methods to understand the impact of Open Data on social change. The two-year project will develop a shared toolkit of research methods that can be used to understand the nature, use and emerging impacts of Open Data in a range of country and governance contexts around the world. Implemented by the World Wide Web Foundation, the project will collaborate with 17 initiatives being undertaken in Africa, Asia and the LAC.

One initiative collaborating with the ODDC, the Open Data Barometer (ODB),[12] focuses on the context, availability and emerging impacts of Open Government Data (OGD). It aims to support improved understanding of the development of Open Data globally among advocates, researchers and policy makers, while contributing to a growing evidence base on OGD. It builds on the inclusion of a number of Open Data questions in the 2012 Web Index, which covered 61 countries. For 2013, the ODB is a separate study, with an independent expert survey and extended questions and scope, covering 81 countries.

It is clear that there is a growing body of knowledge and evidence of the utility of the various evaluation approaches that have evolved over the last decade, and a number of tools that can support assessment of the social changes brought about by various interventions, including ICT4D and Open Data projects.

The emerging approaches are more participatory, user-driven and user-oriented, and lend themselves to blending in ways that can meet the needs of myriad programme contexts and levels of complexity. The main challenge remains for researchers and institutions intent on influencing policy to make the necessary commitment to engage more closely with the policy community, to understand its needs and how policymakers are influenced, and to develop the communication skills that enable them to tell their stories in language that is responsive to those needs.

In the Caribbean, now that Open Data, Open Access and Open Government are being adopted by our governments[13], it is the right moment to get a grip on these M&E tools and to harvest the best of these investments, along with the relevant emerging and contextually valid best practices.

-30-

 Valerie Gordon is an independent development consultant. Yacine Khelladi is an economist and international ICT4D consultant and project management professional. You can e-mail him at yacine@yacine.net.

 Recommended Readings

Outcome Mapping

- http://www.outcomemapping.ca

Utilisation Focussed Evaluation

- Ramírez, R. & Brodhead, D. (2013). Utilization Focused Evaluation: A Primer for Evaluators. evaluationinpractice.files.wordpress.com/2013/04/ufeenglishprimer.pdf

Patton, M.Q. (2011). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.

Patton, M.Q. (2008). Utilization-Focused Evaluation. 4th ed. Thousand Oaks, CA: Sage.

Roche, C. (1999). Impact Assessment for Development Agencies: Learning to Value Change. Oxford, UK: Oxfam. http://policy-practice.oxfam.org.uk/publications/impact-assessment-for-development-agencies-learning-to-value-change-122808

 

Outcome Harvesting

- http://www.outcomemapping.ca/resource/resource.php?id=374
- http://www.outcomemapping.ca/resource/resource.php?id=391
- http://www.outcomemapping.ca/resource/resource.php?id=324
- http://www.outcomemapping.ca/resource/resource.php?id=377

Open Data

- http://www.opendataresearch.org/sites/default/files/posts/Researching%20the%20emerging%20impacts%20of%20open%20data.pdf
- http://www.opendataresearch.org/emergingimpacts
- http://www.slideshare.net/odrnetwork
- http://www.opendataresearch.org/emergingimpacts/methods
- https://asis.org/asist2012/proceedings/Submissions/341.pdf
- http://www.opendataimpacts.net/2013/02/506/
- http://www.opendataresearch.org/project/2013/odb

 Glossary

Monitoring: The periodic and systematic collection of data regarding the implementation and results of a specific intervention.

 

Outcome:  A change in the behaviour, relationships, actions, activities, policies, or practices of an individual, group, community, organisation, or institution.

 

Developmental evaluation: Informs and supports a change agent who is implementing innovative approaches in complex dynamic situations. The process applies evaluative thinking to project, programme or organisational development by asking evaluative questions, applying evaluation logic, and gathering and reporting evaluative data throughout the innovation process.

 

Utilisation Focussed Evaluation: An approach based on the principle that an evaluation should be judged on its usefulness to its intended users.

 

Open Development: An emerging set of possibilities to catalyse positive change through “open” information-networked activities in international development.

Impact Evaluation: The systematic analysis of the significant changes — positive or negative, intended or not — in peoples’ lives brought about by a given action or series of actions.[14]

Outcome Harvest: The identification, formulation, analysis, and interpretation of outcomes to answer useable questions.

 


[1] United Nations Secretary-General Ban Ki-moon, in comments presented by Alicia Bárcena, ECLAC Chief, at the 34th Conference of Heads of Government of the Caribbean Community (CARICOM), Port of Spain, Trinidad, June 2013.

[2] http://caribbeanopeninstitute.org/

[4] Wilson-Grau, R. & Britt, H. (2012). Outcome Harvesting (Brief).

[5] See the list of recommended readings at the end of this paper.

[6] Patton, M.Q. (2008). Utilization-Focused Evaluation. 4th ed.

[7] The steps include: 1. Programme/organisational readiness assessment; 2. Evaluator readiness and capability assessment; 3. Identification of primary intended users; 4. Situational analysis; 5. Identification of primary intended uses (PIUs); 6. Focusing the evaluation; 7. Evaluation design; 8. Simulation of use; 9. Data collection; 10. Data analysis; 11. Facilitation of use; and 12. Metaevaluation.

[9] Ramírez, R. & Brodhead, D. (2013). Utilization Focused Evaluation: A Primer for Evaluators. http://evaluationandcommunicationinpractice.ca/ufe-primer-get-it-here/

[11] http://www.opendataresearch.org/emergingimpacts

* The number of steps has since 2013 been increased to 17 in Michael Patton’s latest book, Essentials of Utilization-Focused Evaluation.

[13] The Dominican Republic and Trinidad and Tobago have joined the Open Government Partnership. See http://www.opengovpartnership.org/countries

 

 
