
Rama Raphalalani

As an emerging evaluator, I have often asked, "What happens to the evidence and reports we put forward?" As I set my 2020 goals to do better as an evaluator, this was the question I aimed to understand. I am not the first, and will not be the last, to pose questions about evaluation use. So I delved into the challenges and into what I can do differently in my own evaluation practice.

I found that limited or inappropriate use of evaluation findings drastically diminishes the value of M&E and erodes any social justice or public good that the evidence could otherwise influence. Several reasons lead to the poor use of M&E: a lack of understanding of what exactly M&E entails and of its far-reaching benefits, and insufficient skills and capacity to assimilate the evidence provided, to list a few.

Strategic documents, such as the South African Policy Framework for a Government-wide M&E System [1], indicate that M&E should be utilisation-focused and that M&E products should meet knowledge and strategic needs. However, due to complex organisational structures and to political and ideological cycles, this may not match the reality in government departments.

I have observed this in my emerging evaluation experience. Unfortunately, in many cases politics trumps evidence, and key programmatic decisions may be made without the backing of the data: the data that I gathered through, maybe not blood, but definitely sweat and (some) tears.

Why is this the case? Well, M&E is widely seen as a compliance exercise rather than a strategic solution. The process can feel imposed, as the demand for M&E often comes from the providers of resources, not necessarily from all the users of M&E [2]. This is where the paths of evidence, politics and usage separate.

I began to see that the answer may lie not in asking what happens to the report, but in asking how I can change the way programme teams think about evidence. Successful programmes are the product of evidence-based decision-making AND critical thinking [3]. Thomas Archibald (2013) refers to this as evaluative thinking, which he defines as a "cognitive process in the context of evaluation, motivated by an attitude of inquisitiveness and a belief in the value of evidence…".

I shifted my goal-setting gears, from solely pushing for report uptake to exploring ways I can change thinking. As an emerging evaluator, I want to set goals that amount to outcomes in a society that benefits from the appropriate use of evidence. I see this as my role in becoming a leader in the M&E field.

In this endeavour, I listed actions to help change the way organisations think about, and use, evidence and evaluation. Here are my #goals:

  1. Create participatory M&E systems and processes that include touch-points with all stakeholders affected by the project.
  2. Make evidence more accessible by simplifying the language we use and making it more relatable to people.
  3. Build the capacity of the users of M&E.
  4. Package and disseminate our evidence in more user-friendly ways (e.g. stories, data visualisations, short report outputs).


[1] DPME (2007). Policy Framework for the Government-wide Monitoring and Evaluation System. The Presidency, Republic of South Africa.

[2] Fraser, D. (2017). "Poor Monitoring". Webinar for the South African Monitoring and Evaluation Association (SAMEA).

[3] Archibald, T. (2013). "Evaluative Thinking". Free Range Evaluation (blog), 11 November 2013. https://tgarchibald.wordpress.com/2013/11/11/18/.