What stories would you tell with data from your daily life?

In September 2014, two award-winning information designers living on different sides of the Atlantic, Giorgia Lupi and Stefanie Posavec, collaborated on a year-long project to collect, visualize, and share information about their daily lives. Each week they would hand-draw representations of their activities and thoughts as part of a process of using data to become more humane and connected on a deeper level. The end result is Dear Data: an award-winning project that makes data artistic, personal, and open to everyone.


In Spring 2016, we asked online students in our Tech for Monitoring and Evaluation Online Diploma Program to try their hand at a similar month-long project and send postcards to one another. As we have 110 students in 35 countries, many of whom had only just met in the preceding weeks, this provided an excellent opportunity to recreate some of what made Giorgia and Stefanie’s project so special. And it gives me great pleasure to share some examples of their work.

For one project, two of our students (Madison and Ann, who both work with Health Policy Plus) sent one another postcards every week on the following four topics:

  1. Daily activities
  2. Social media
  3. Food
  4. Emotions

These postcards are presented without commentary below, so that you can see them just as their recipients did.

When asked for their reflections as part of the exercise, both Madison and Ann said that the process made them mindful of issues and habits they had previously ignored, and that they were able to discern larger patterns when looking at their week as a whole. Though neither is a professional information designer, their work improved over multiple iterations, and the exercise became a fun, inclusive process.

“Our different styles quickly became apparent and added to the reflective learning. Apparently Madison is calm and cool, and Ann is, well, a bit excitable,” said Madison.

“The involvement of friends and family was an unplanned bonus. Visualization and art attracts attention…they got drawn in (especially Ann’s 10 year-old son Harry) and got a kick out of seeing what arrived in the mail,” said Ann.

And both agreed: “Overall it was a fun challenge—we highly recommend it. Many thanks and acknowledgement to Giorgia Lupi and Stefanie Posavec, the Dear Data creators.”

If you’re interested in learning more about how you can better collect, visualize, and make decisions based on data, check out our 16-week Tech for Monitoring and Evaluation Online Diploma Program and enroll today. First course in the track starts September 12th.

Week 1: Days in My Work Week

[postcard images]

Week 2: Social Media

[postcard images]


Week 3: Food

[postcard images]

Let’s pick up where we left off in Part 1 of our survey design for quality data series, which was inspired by Dobility founder and CEO Dr. Christopher Robert’s presentation in the TechChange course “Technology for Data Collection and Survey Design.” Lesson 1 focused on designing your survey with empathy for field staff and respondents. Lesson 2 highlighted SurveyCTO tools for building in relevance and constraints. With Lesson 3, we’ll jump into a number of ways that SurveyCTO enables you to automate monitoring and workflow.


Lesson 3: Automate Monitoring and Workflow 

The staffing structure for a typical survey might look something like this: a research team designs a survey. Thousands of miles away, a field team conducts the surveys. The collected data then goes back to the research team for analysis.

The research team wants to be able to monitor the field team and audit their work throughout the process. Supervisors on the field team may also want to monitor their enumerators. And, just to get complicated, the research team may also hire a survey firm to conduct the survey itself or to provide an additional layer of monitoring for the field team.

In the case of traditional paper surveys, quality checks might include:

  • a member of the research team accompanies enumerators in the field
  • a supervisor reviews surveys as they come in
  • an independent team conducts “back-checks” after initial surveys are completed, to corroborate the results

Many of the quality checks available when conducting a paper survey occur AFTER the initial surveying is complete. You may not know you have bad data until it’s too costly – and too late – to do anything about it.

One of the most compelling opportunities afforded by SurveyCTO is the ability to easily program a number of quality checks into your survey that can automatically flag issues as they arise. Not only that, with a little extra work up-front, you can prep your data to make the transition to visualization and analysis even faster.

Example 1: Audio audits and speed limits
Back-checks are time-consuming and expensive, so why not listen in from the office? You can program your SurveyCTO surveys to randomly capture audio throughout an interview.

Or, even better, pair audio audits with “speed limits,” which allow you to indicate the minimum time that a particular question should take to ask and answer properly. For example, you can program your survey to automatically start recording after the enumerator violates three speed limits – meaning they didn’t take enough time on three different questions within the same survey.

Since audio audits and speed limits are programmed by the research team, the field team won’t know the specifics – they’ll just know that there’s an additional layer of accountability.
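
How might this logic play out in practice? As a rough illustration – not SurveyCTO’s actual configuration, which happens in the form design as in the sample below – here is a minimal Python sketch that counts, per interview, how many questions were answered faster than a plausible minimum and flags interviews that cross a violation threshold. The file and column names ("question_timings.csv", "interview_id", "duration_seconds") are assumptions made for the example, not SurveyCTO’s export schema.

    import csv
    from collections import Counter

    MIN_SECONDS = 5       # assumed minimum time a question should plausibly take
    MAX_VIOLATIONS = 3    # threshold that might trigger an audio audit

    # Count speed violations per interview from per-question timing data.
    violations = Counter()
    with open("question_timings.csv", newline="") as f:
        for row in csv.DictReader(f):
            if float(row["duration_seconds"]) < MIN_SECONDS:
                violations[row["interview_id"]] += 1

    # Flag interviews that crossed the violation threshold.
    for interview_id, count in violations.items():
        if count >= MAX_VIOLATIONS:
            print(f"Review {interview_id}: {count} speed violations")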

Sample speed limit:
[screenshot]


Example 2: Automated checks
Our most sophisticated users write quality checks in Stata code to automatically flag data that doesn’t behave as expected. But we wanted to ensure this best practice is available to all of our users, which is why we’ve built the feature into SurveyCTO.

Spend a few minutes during the survey design phase setting up at least one automated check and you’ll not only be able to identify and address issues right when they arise, but you’ll also have more reliable data to work with once your surveying is complete.
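
To make the idea concrete, here is a hedged Python sketch of the kind of logic an automated check encodes – whether written in Stata or configured directly in SurveyCTO: flag submissions with out-of-range or internally inconsistent values. The column names ("age", "household_size", "children_in_school", "KEY") are hypothetical stand-ins, not a real export schema.

    import csv

    def check(row):
        """Return a list of quality issues found in one submission."""
        issues = []
        age = int(row["age"])
        if not 15 <= age <= 99:
            issues.append(f"age out of range: {age}")
        if int(row["children_in_school"]) > int(row["household_size"]):
            issues.append("more children in school than household members")
        return issues

    with open("submissions.csv", newline="") as f:
        for row in csv.DictReader(f):
            for issue in check(row):
                print(f"{row['KEY']}: {issue}")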

Sample automated check:
[screenshot]


Example 3: Concatenate and calculate
Let’s say your survey splits first name and last name into two fields, but you would prefer a single field at the analysis stage. You can easily program the form builder to concatenate – or link fields together – so that when you output the data, it’s already formatted the way you want it. You can also set up automated calculations, which can help with analysis or serve as a useful relevance trigger during the survey itself.
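
In the form definition, this is typically done with a calculate field; an expression along the lines of concat(${first_name}, ' ', ${last_name}) – the ODK-style syntax that SurveyCTO forms build on – produces the combined field directly (see the sample below, and consult the SurveyCTO documentation for exact form-builder steps). For teams that prefer to transform data after export instead, an equivalent Python sketch might look like this, with file and column names assumed for illustration:

    import csv

    # Read an exported CSV and add a combined full_name column.
    with open("submissions.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        row["full_name"] = f"{row['first_name']} {row['last_name']}".strip()

    # Write the augmented data back out for analysis.
    fieldnames = list(rows[0].keys())
    with open("submissions_with_names.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)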

Sample calculation:
[screenshot]


Example 4: Visualize and analyze
As soon as your data is uploaded, you can take advantage of our integrations with Statwing, Google Sheets, Google Earth, Stata, Excel, and Salesforce (via OpenFn.org), or export it to JSON or CSV file formats and start analyzing it in the platform of your choice.

Using a mobile data collection platform enables you to skip the laborious and error-prone step of data entry. Instead of spending months entering, checking, and rechecking the data you collected – not to mention storing hundreds (or thousands!) of survey booklets – you can start analyzing your data the day it’s collected.
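
As a sketch of what that first pass might look like once you have a CSV export in hand, here is a short pandas example; the file name and the "enumerator" column are illustrative assumptions about a particular export, not a fixed schema.

    import pandas as pd

    # Load the day's export and get a quick feel for the data.
    df = pd.read_csv("survey_export.csv")

    print(df.describe(include="all"))        # quick summary of every column
    print(df["enumerator"].value_counts())   # submissions per enumerator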

Sample integration with Statwing:
[screenshot]


Final Thoughts

Just remember that even experienced survey designers struggle at times with developing the best structure for exploring a research question and setting up the systems to minimize the risk of collecting bad data. Hopefully this series on survey design for quality data has given you some ideas for how to approach your next project. And if there are any additional topics you’d like us to cover, please leave them in the comments.

Read Part 1 of the series here. This article was originally published on the SurveyCTO blog.

About Alexis
Alexis Ditkowsky is the community and business strategy lead for Dobility, the company behind SurveyCTO. Her experience spans social entrepreneurship, international education policy, higher education, and the arts. She holds a Master of Education from the Harvard Graduate School of Education.


In 2015, TechChange launched the Technology for Monitoring & Evaluation Diploma Program, which combined three TechChange courses (Tech for M&E, Tech for Data Collection and Survey Design, Tech for Data Visualization) into one comprehensive program. The program was meant to give busy working professionals a robust foundation in technology for M&E through the three core courses as well as workshops and office hours with course facilitators. Our first cohort is finishing up the program as we begin 2016, and a new session will launch on January 25.

Today we are very excited to chat with Sonja Schmidt, the Senior M&E Advisor to JSI’s AIDSFree project, who is one of the first participants to complete the Technology for Monitoring & Evaluation Diploma Program: Working Professionals Experience. She discusses her experience with the overall program, how each course influenced her work, and how she was able to better understand the use of ICTs in M&E.

How did you come across the Tech for M&E Diploma Program?

A colleague of mine from JSI had sent around some links for TechChange courses. When I clicked the links I noticed the Diploma Program, and thought that this would be a good option to take advantage of the three courses in order to get a wider foundation on the topic.

Have you taken online courses before? Did the program meet your expectations?

I had never taken an online course before, so this was a very new experience for me. I found it challenging in the beginning, particularly with the first course that I took, because I initially felt overwhelmed and struggled a bit with learning how to move around the platform and managing the material.

That being said, the program far exceeded my expectations. I have to compliment TechChange because, being an M&E expert, I look at most material with a critical eye, but I found that the material that was put together and all of the speakers/guest experts were stellar. I was also quite pleasantly surprised by the group dynamics present on the platform. I did not expect this from a virtual group, but in the end there were names that kept popping up, and I actually had the chance to meet someone from the course in person – I am almost sad that it has ended.

Are you new to the field of M&E? If not, why did you think this would be valuable to your career?

I have many years of experience in the M&E field. Despite this fact, I realized that the concept of ICT and M&E emerged on the scene pretty suddenly – it did not really exist as an articulated concept even as recently as 3 years ago. I remember meeting someone a few years back who had created his own company around an app meant to improve data collection for surveys, and was surprised because I never thought that that would take off. Now, several years later I find it fascinating how this has become mainstream.

So, my main reason for taking the program was to learn more about this new and rapidly changing field, the intersection of technology and monitoring & evaluation, and get a better grasp of it.

How have you been able to use what you learned in the courses in your work, and how has the program overall been helpful to you?

I have definitely been able to use what I learned in the courses, and the Diploma Program, as foundations for my work. The Technology for M&E course, while a bit repetitive for me sometimes, as I’m an experienced M&E professional, still provided me with exposure to new materials as well as to other people’s perspectives and approaches. The Technology for Data Collection & Survey Design course was not as applicable to my personal work, however it did improve my capacities as an M&E advisor in terms of being able to recommend methods or software, or considerations to take into account, to in-country M&E folks who might be the ones actually designing M&E programs themselves. The Technology for Data Visualization course is the one that had the most impact on my work directly, because a big part of my work is reporting to stakeholders and presenting data. The Introduction to Excel for Data Visualization course was also extremely helpful because it is a familiar software, and Excel is something that I will always use; especially for organizations that do not have much funding, Excel is a very powerful and useful tool.

In general, I think the courses were useful in my work in that when I come across a particular issue, I can now think in a way where I ask myself how I can improve or do something better. I can then go back to the material and target specific areas and continue to use the program material as a tool for learning in my work. I am also currently working on developing a training in Tanzania on data quality, and I plan to discuss with my colleagues ways to use, for example, phones to more quickly submit data from site facilities to our central office.

Interested in the TechChange Technology for Monitoring & Evaluation Diploma Program? Get more information and apply here. Enrollment is open and ongoing, but our next batch of courses begins January 25, 2016. It is still not too late to sign up and join this amazing program with participants from all corners of the globe!

About Sonja
Sonja has over 15 years of experience in international public health, with a focus on infectious diseases, including TB, HIV/AIDS and immunization programs. She has long-term country experience in Bangladesh, India, Nepal and Ethiopia and has worked for several UN organizations (UNIFEM, UNICEF, WHO) and numerous USAID-funded projects. Currently as the Senior M&E Advisor to JSI’s AIDSFree project, she oversees and coordinates the monitoring and evaluation of the project and guides country projects in M&E planning, data quality assessment, data analysis and use. Sonja has an MA in medical anthropology and an MPH with a focus on policy and management.

1. Privacy

Responsible data management is not new to development. However, the use of technology-enabled tools for M&E has raised new challenges related to the privacy of individuals. These include the growing use of biometric data for tracking and of sensors to monitor daily habits. The collection of personal financial information and affiliations has also made it vital to consider data security when setting up an M&E framework. This can be addressed through data encryption, ensuring that individual data is not easily identifiable, and developing a policy that ensures responsible data practices. Furthermore, organisations need to be aware of the ethical implications of collecting data on people and the necessity of securing all the permissions and consents required. It is also important to be transparent with the respective individuals about the methods of collection, why data is collected, and how it will be used. Finally, ownership has to be explicit when information is shared, and a plan should be in place for what happens to collected data once a project ends. In South Africa, the Protection of Personal Information Act, 4 of 2013, also lends a relevant and interesting dynamic.

2. The end-user in mind

To select the most suitable technology-enabled tool(s), take a human-centered design approach to the selection process; this will ensure that the organisation does not end up with an irrelevant or unnecessary tool. The approach starts with identifying what is desirable (one should consider project managers as well as community members, i.e. the people who will be using the tool), then viewing the solution through a feasibility and viability lens. This increases the usability of the tool and ensures that no segment of the community is “ignored” as a result of the selected tool, i.e. it forces thinking about the accessibility of the tool and the training that would be required. Once identified, the tool should be piloted on one project before rolling it out.

3. Getting the right balance

Technology facilitates, but does not replace, M&E methodologies such as a well-thought-out theory of change and a quality M&E plan. It may be tempting to fall into the habit of selecting or collecting data based on the easiest tool rather than on what really matters to your program. Furthermore, technology can lead to over-dependence on digital data and to missing the opportunity to observe and interact with communities in order to get a comprehensive picture of an intervention. To get the right balance, one must be very clear on the value the tool will add.

Although there are other factors to contemplate, the above three points offer a good guide to anyone considering the use of technology-enabled tools in their programs. With the ever-growing need to understand and measure impact, the integration of technology – from the delivery of services and the monitoring of interventions to the evaluation of programs – will continue, as it offers possibilities for increasing reach, moving to scale, and improving the efficiency and effectiveness of interventions.

This article was originally posted on the Tshikululu Social Investments blog. Photo courtesy of Jan Truter Creative Commons

About Amira


Amira Elibiary is a Monitoring and Evaluation (M&E) specialist with 10 years of experience in research, grant-making and program management, including over two years in the corporate social investment sector for education, health and social development projects, and with a keen interest and extensive experience in democracy, governance, advocacy and rule-of-law work. Amira holds a Master’s degree in International Affairs from American University and a BA degree in Economics.

Today is International Day of Persons with Disabilities.

Persons with disabilities constitute the world’s largest minority, 80% of whom live in developing countries – an estimated 800 million people. Approximately 20% of the world’s poorest people have a disability. In response to this reality, the United Nations (U.N.) included seven disability-related targets (e.g. Targets 4.5, 11.2, 11.7) in the new Sustainable Development Goals (SDGs) adopted in September 2015. (UN Enable)

Persons with disabilities should therefore be key stakeholders and beneficiaries of any development or humanitarian initiative, and monitoring and evaluation (M&E) systems should be capturing how programs impact them. Below are some resources that outline how technology can act as a tool to facilitate the inclusion of this largely marginalized population in M&E processes.


Core Data
Valuable program or country-specific data about persons with disabilities to collect during M&E may include:

  • Disability prevalence, disaggregated by type of disability, age and gender
  • Definitions of disabilities to compare to World Health Organization (WHO) and other definitions
  • Legal framework
  • Policies on segregation, institutionalization or community-based rehabilitation in health, education or penal systems
  • Education and employment rates
  • Representatives in government and civil society
  • Program administration data (i.e. rates and modes of inclusion in an activity, program or organization)

Early and consistent collection of this data is needed to determine where and how M&E can best occur in collaboration with persons with disabilities.

Using global datasets can increase the efficiency of data collection and facilitate comparative analysis.

E-accessibility
E-accessibility is a measure of the extent to which a product or service can be used by a person with a disability as effectively as it can be used by a person without that disability.

E-accessibility should be among the criteria for choosing a device for data collection or dissemination. Consulting with local Disabled Persons Organizations (DPOs) can help determine if a tech tool will create barriers or enhance participation.

For example, mobile phones with hands-free, voice command features (think “Siri”) can enhance accessibility. Large print and screen reader compatible formats should be used to collect and provide information electronically to persons with impaired vision, blindness or dyslexia. Visual elements of documents, like photos, should have captions that can be read aloud. Radio or audio recordings deposited with a DPO can make evaluation results accessible across a range of disabilities. Real time captioning, or printing hard copies of the main script of a video or lecture, can increase inclusion of the hearing impaired. Braille printers (often available through a local DPO) can produce reports or surveys for the blind. These technologies enable persons with disabilities to participate independently and confidentially in surveys and other feedback mechanisms.

E-accessible technologies not only enhance M&E processes, but also can be low cost. Some operating systems have built-in automated voice read or Assistive Touch technologies. Software to read aloud or translate documents into Braille on-screen is often free, as are DAISY book readers. Self-captioning software, such as Overstream and MAGpie, is also widely available for free. Training sessions for DPOs or enumerators on how to use these features or software may be necessary.


Technology can change an environment that is disabling into one that is empowering by creating channels for persons with disabilities to have an equal voice in the programs that affect them.


About Leigh-Ashley

Leigh-Ashley Lipscomb is an independent analyst and Adjunct Research Fellow with the WSD Handa Center for Human Rights and International Justice at Stanford University.

By Ella Duncan

More than 300,000 people lost their lives in the bloody civil war that ravaged Burundi from 1993 to 2005. Almost twice as many were forced to leave their homes, slowly returning over the last decade. Today, the relative peace that Burundi so arduously achieved is again at risk; violent protests and rising tensions threaten to make the upcoming elections a moment of widespread chaos, rather than democracy and national unity.

Dr. David Niyonzima experienced the horror of the civil war first-hand. A member of the historically pacifist Quaker Church, which introduced him to peaceful social activism and helped him develop his spirituality, David survived a massacre in 1993. “I was teaching a training for young Quaker pastors when soldiers came to exact revenge, because they thought the students belonged to a rival ethnic group. I saw 8 out of my 11 students murdered,” he recalls.

David is now the Director of Trauma Healing and Reconciliation Services (THARS). He has dedicated his life to understanding the impact of peacebuilding programs on the communities they involve, in order to design better strategies. It was his experience as a survivor that inspired his determination to make his country a more peaceful place.

For David, meeting the man who led the soldiers to come to kill him and his students was a life-changing experience. “I extended forgiveness to him,” he says. “It gave me a great sense of transformation, even though I had not planned for it. I was inspired that there must be a reason I was alive, that God wanted me to plant a seed of peace.” Invigorated by the experience, David started community programs to foster reconciliation and healing, but met challenges that seemed insurmountable. “I began to organize peace workshops and seminars to promote peace through Quakerism. Yet we were not making progress, so I asked myself – why are we not making impact? Trauma healing was the missing element of peace.”

David believes that peace is not possible without trauma healing, that “treating the past” is the key component of peacebuilding. “Peace is not only the absence of fighting,” he explains. “It is also the restoration of relationships.” As a result, THARS takes a holistic approach to interpersonal development and relationships through Sensitivity Training, Trauma Healing Capacity Building, Individual and Group Therapy, and Self-Help and Self-Reliance initiatives.

To monitor and measure its work, THARS has adapted an existing Post-Traumatic Stress Disorder tool to the context of Burundi. They assess participants as they enter a training or workshop, and then again six months later. Progress is measured on the basis of self-reported indicators of social behavior, the participants’ approach to conflict in their communities, and their decision-making strategies.

According to David, “the most important monitoring tool is testimony. It gives participants the opportunity to demonstrate through stories how they are self-reliant and addressing trauma on their own.” He strongly believes in the importance of developing a culture of speaking among the participants in THARS’ programs; “empowering people to speak truth about their history contributes to peace,” he says.

One of THARS’ current programs, Addressing the Past, focuses on villages where atrocities occurred during the civil war. It employs testimony and the growth of a culture of speaking to measure healing at all levels, using individual and group therapy to work through trauma issues. Addressing the Past doesn’t focus exclusively on those who were directly involved in the war, but also on people affected by the so-called remnants of war. David gives the example of wives who become victims of the untreated trauma of their husbands, when the latter return from fighting and engage in gender-based violence in the home.

David says that the major challenge in measuring THARS’ work is creating and communicating standard definitions of trauma and forgiveness. However, he and his colleagues have already made a lot of progress. “When THARS began in 2002, there was no discussion of trauma in Burundi,” he remembers. “We have brought awareness, and now people can see what trauma is, what to do to address it, where to go for help.”

David is hopeful that Trauma Healing will further spread as an accepted process in Burundi. He envisions it as an inclusive tool that will unite all Burundians behind the common goal of emerging together, and stronger, from the violence of the past. “Peace will come when those who perpetrate violence join the healing process,” he concludes.

Dr. David Niyonzima

Suggested resources to learn more

Visit THARS’ website at http://thars.org/.
Read a mid-term evaluation of THARS’ work with Search for Common Ground on a 2007 Victims of Torture program, funded by USAID: https://www.sfcg.org/2004-victims-of-torture-mid-term-project-evaluation/
Explore Trauma Healing related resources on DME for Peace http://dmeforpeace.org/category/themes/justice-truth-and-reconciliation/trauma-healing.

About Ella


Ella Duncan

Ella Duncan is the DME for Peace Project Manager. DME for Peace is a project of Search for Common Ground that connects a growing global community of over 4,000 members to over 800 resources on Design, Monitoring, and Evaluation for Peace and Peacebuilding programming. Ella received her B.S. from Cornell University.

This post originally appeared on DME for Peace.
Featured image credit: Dave Proffer Flickr (Creative Commons License)

By Innocent Hitayezu and Ella Duncan

How can peacebuilding programs get a better understanding of trust? This question is at the heart of Innocent Hitayezu’s work with women and issues of reconciliation, and continues to shape his work as a peacebuilder and evaluator in Rwanda.

Innocent was working with a reconciliation project that brought together widows of victims of the Rwandan Genocide with widows of perpetrators of the genocide when he asked himself: how do we know we are bringing community relationships to the next level? During group discussions and interviews, he asked the women if they were working together outside of project activities, and the group replied positively that, yes, the project was changing their interactions and attitudes in daily life. However, during individual interviews, he found that some women were still experiencing extreme challenges of reconciliation and that, outside of the mandatory group meetings, the two groups were not coming together. Individual interviews showed that the project was not building trust in the community in the long term. To Innocent, this moment demonstrated that the power of group-level projects could be better harnessed by including reflective monitoring and evaluation at the individual level, with positive effects on entire families and the community at large.

Innocent works at the community level on the root causes of conflict in Rwanda and on the issue of post-conflict intercommunal trust. Innocent’s work explores how new approaches to monitoring and trust-measuring evaluations can strengthen programs aimed at traditional community building through reconciliation, as well as those focusing on the peaceful reintegration of former refugees.

Innocent’s own experience as a refugee sparked his passion for community building and trust building. In 1994, to escape the war, he and some of his family members fled Rwanda to the Democratic Republic of Congo. Upon returning to Rwanda, the family faced the suspicion and distrust of those who had stayed behind and experienced the genocide. In search of community, Innocent joined with other repatriated refugee youth to create support systems for community action and economic opportunity through informal youth symposiums.

This youth-led community building has been the foundation of his life’s work building peace in Rwanda through inclusive community development. Innocent says, “In some families, people are taught that communities are divided, but they must fight to say instead that ‘Rwanda is for all of us.’”

It is this conviction that reconciliation processes, in Rwanda and elsewhere, must be inclusive that inspired Innocent to pursue a career in Monitoring and Evaluation. Digging into the root causes of conflict, and extensively measuring trust within communities led Innocent to question, “Where is our evidence that we are making positive change?” Innocent, like many others in his field, believes that peacebuilders must be able to demonstrate success.

In the two decades since the Genocide, Rwanda has made great progress toward stabilization. As a result, Rwanda is often cited as a model of stability and security in East Africa. Rwanda’s security has come through traditional methods of community justice and reconciliation such as the Gacaca, a community court system, and Umuganda, which roughly translates to “coming together in a common purpose to achieve an outcome” and is observed through a nationwide monthly day of service. Innocent believes that these traditional group-based reconciliation methods can be strengthened through modern monitoring and evaluation methodologies, because traditional community-based methods are focused on the group level, which may miss tensions and challenges that participants are hesitant to express in front of their peers. In response to this challenge, Innocent advises that group reporting be validated by individual reporting, to make sure minority voices have an opportunity to be heard in the process of evaluating and designing programs.

Giving space to individual voices also helps a program remain vigilant that progress should come from within the community, and that no pressure is applied from the top down.

Based on his experience, Innocent sees individuals’ ability to express their views as an important indicator that communities are healing and maintaining peace. For Innocent, monitoring self-expression is key to understanding what is happening at the community and individual level.

Innocent is proud of the progress of reconciliation in Rwanda, and of the power and strength communities have drawn from traditions like the Gacaca and Umuganda. He is also hopeful that new approaches to reconciliation will help community trust and open expression continue to grow, enabling communities and individuals to build a lasting peace.

* * *

Innocent Hitayezu

Innocent Hitayezu is a peacebuilder and evaluator living and working in Kigali, Rwanda. He has over 13 years of experience, including consultancy work in sustainable agriculture, socio-economic assessment, strategic planning, market research and consumer behavior analysis, and farmers’ trainings, as well as 8 years working with international NGOs. He holds an MBA in NGO Management from Kampala International University (Uganda), a Bachelor’s degree in Sociology, and a Diploma in Philosophical and Religious Studies.


Ella Duncan is the DME for Peace Project Manager. DME for Peace is a project of Search for Common Ground that connects a growing global community of over 4,000 members to over 800 resources on Design, Monitoring, and Evaluation for Peace and Peacebuilding programming. Ella received her B.S. from Cornell University.

This post originally appeared on DME for Peace.
Featured image credit: Leandro Neumann Ciuffo Creative Commons License

By Celestin Nsengiyumva and Ella Duncan

“Rwandan society has suffered the wounds of genocide. To make sure that the heart of the community is healed, to know that there is no more fear in society, we must work in peacebuilding evaluation.”- Celestin Nsengiyumva

When asked how he was introduced to M&E, Celestin Nsengiyumva says that he “joined accidentally”. After graduating university with degrees in applied statistics and development studies, he thought he would become an accountant or statistician. Instead he was accepted for a position as an evaluator. Celestin now describes M&E tools and methodologies as the “cornerstone of success” for peacebuilding programs in his homeland of Rwanda.

Rwanda has achieved stability since its civil war that ended in 1994, but continues to be challenged by its violent past. Celestin advocates for peacebuilding and its measurement, though he faces skepticism from those who say the nation should focus on more tangible things. To peacebuilding skeptics, Celestin counters that building peace creates opportunities for other change. He says “Peace is the building block of economic and social progress”, and believes that M&E is the path to a deeper understanding of what communities need to achieve sustainable peace.

Even before becoming an evaluator, Celestin believed peace and development programs must be contextualized to the needs of the communities they aim to serve. Working on a land dispute program with Landesa Rural Development Institute, Celestin was able to see how DM&E supports that contextualization through program design and learning. Land is a focal point in Rwandan society, and played major roles in the genocide and recurring conflicts the country has experienced. Around this key Rwandan issue of land, Celestin was able to be a part of the program from the very beginning. This involvement allowed him to collaborate with local partners and get feedback from partners and participants as he developed his M&E tools. By being involved and able to incorporate community needs and perspective from the beginning, Celestin believes he helped the program reach a better result, with meaning and relevance for participant communities.

As a method to achieve depth and contextualized understanding, Celestin uses and recommends storytelling as a tool to answer the questions of “How?” and “Why?” With Landesa, Celestin and his team used storytelling to collect feedback and success stories, adding personal elements to the data. When communicating back to participant communities, showcasing stories of disputes the Landesa program had resolved strengthened the presentation of the program’s value, and sharing practical examples of solutions to land conflict helped spread the program’s messages.

Celestin draws not only on his experience as an evaluator, but also on three years he spent as a teacher, for his guidelines on how to collect and tell a good story. For him, the value of a story is that it can be both instructive and engaging, so that the audience doesn’t only learn but also cares, and is able to draw parallels to their own challenges and strengths.

Celestin’s Guidelines for Collecting and Telling a Good Story in Evaluation:

  • Know what kind of story you need;
  • Prepare the questions you will ask; use that structure as a balancing tool, staying open to unexpected statements while still staying on task;
  • Focus on using the story to identify the most significant change resulting from the storyteller being exposed to programming;
  • Do not get hung up on only looking for successes; collect stories on what isn’t working and what is slowing processes;
  • Ask for stories that include not only individual beneficiary experiences but also capture how those around them are affected.

The growth of Peacebuilding M&E in Rwanda depends on individuals like Celestin, who come to value and advocate for contextualized and reflective practice. Celestin’s hope is that there will be more opportunities in Rwanda to study M&E, so that stronger local evaluators can emerge and bring local insight to peacebuilding programs. Stories like Celestin’s will be repeated as peacebuilders are asked to expand their skills and roles, learning by doing, to learn what works, what doesn’t, and why.


Celestin Nsengiyumva
Celestin Nsengiyumva is an M&E professional living and working in Kigali, Rwanda. Celestin received his BA in Applied Statistics from the National University of Rwanda.

Ella Duncan
Ella Duncan is the DME for Peace Project Manager. DME for Peace is a project of Search for Common Ground that connects a growing global community of over 4,000 members to over 800 resources on Design, Monitoring, and Evaluation for Peace and Peacebuilding programming. Ella received her B.S. from Cornell University.

This post originally appeared on DME for Peace.
Featured image credit: Neil Palmer (CIAT) Creative Commons License

Today, most of us can have Pad Thai, a craft cocktail, and a professional masseuse all arrive at our doorstep with a click of a button on our phones, but the same can’t be said about data for our projects. I can’t tell you whether the thousands of schools we paid for last year were actually built and functioning! How about an on-demand service for that data delivery?

The on-demand economy is delivering increasingly brilliant things for our daily lives – at least in advanced economies. There are so many on-demand food delivery options that investors now see the market as saturated. Last year, over $3.89 billion in venture financing alone went to on-demand startups other than Uber.

But it has yet to penetrate how we do business. First Mile Geo wants to change that.

Insights on Demand
We call it Insights On Demand. Drop a pin anywhere in the world, place a bid, task a local to capture data on your behalf, and generate near real-time dashboards, maps, and comparative analytics. No tech team, no GIS specialists, no field managers tabulating survey results. The entire process delivered, on-demand.


How it works
The process is pretty simple. Like all of the other tools you may have seen in First Mile Geo (e.g. mobile, SMS, web surveys, physical sensors), all you have to do is create a form or survey, then select the technology for collection – in this case, ‘on demand’.

Drop a pin (or multiple), set a sample size (running a survey?), set a bid on how much you’re willing to pay, and you’ll see results shortly thereafter.


As data arrives, you’ll be greeted with real-time maps, dashboards, and PowerPoint or PDF executive briefing documents in your preferred language.


A future envisioned
Today, there are over four dozen mobile data collection apps. And that’s not even counting the other ways we use phones, like SMS, IVR, or mass analysis of phone use patterns. But regardless of how we use these tools, data analytics can still be time-consuming: identify the need, allocate resources, create a survey or form, train enumerators, analyze results, write up findings, brief it, and market the successes.

The future of data analytics in development, where systems are smarter and the institutional burden is lessened, is arriving. We think data delivered on demand, through services like our affiliate partners at Findyr, will have a major role to play in realizing it.

We are excited to have Matt present a demo of First Mile Geo’s Insights on Demand in our data collection course tomorrow! Interested in learning how to implement technology for your M&E needs? Check out our courses related to Technology for Monitoring and Evaluation.

About Matt

Matt McNabb
Matt McNabb is CEO of First Mile Geo and a member of the TechChange Board of Advisors. He also serves as an Adjunct Fellow at the American Security Project and a member of the Board at Epirroi, a Beirut-based management consulting firm.

By Kevin Flanagan and Yuting Liao

A few weeks ago, my colleague Yuting Liao and I had the opportunity to attend MERL Tech—an industry conference of sorts designed to bring together M&E practitioners, researchers, technologists, and development professionals—on behalf of the Monitoring, Evaluation, and Learning (MEL) team at the National Democratic Institute (NDI).

At NDI, the MEL team is always on the lookout for innovative M&E practices that can be effectively applied to the democracy and governance sector, and this event seemed like an excellent opportunity to explore the “big ideas” and partake in a larger discussion: what can information and communication technologies (ICTs) offer to monitoring, evaluation, and learning work as the pressure to integrate ICTs into many aspects of development programming continues to rise?

Offering nearly thirty presentations, the event provided us with ample opportunity to sit back and revel in the opinions of the experts, as well as to contribute meaningfully to roundtable discussions and collaborative brainstorming activities. Here are our five takeaways:

1. More data does not necessarily mean more learning

ICT can make data collection easier; however, it’s crucial to ask: is this the data we need? “Big data” is enticing, and a common mistake of the novice researcher is to think: let’s collect as much data as we can. But will that data answer your evaluation questions, or will it simply be distracting? While collecting larger volumes of data could certainly result in unexpected observations, if data collection is not strategically tied to your evaluation questions, it does not necessarily lead to better learning. Quality is more important than quantity.

2. ICT can increase the level of risk for the subjects of the evaluation

Data hacks happen, so start by being scared. Whether we want to admit it or not, ICT implementations introduce additional risks to M&E work, particularly when it comes to privacy and data security. And yet, too often M&E practitioners don’t address the risks until after a breach happens. Worry about this in advance and create a threat model to assess assets, risks, and vulnerabilities.

3. Be a data-led organization, not just data-driven

While ICT does help improve data accuracy, organizations that embrace a “data-led” mentality will empower their users to strive to better understand data and incorporate it into their decision-making processes. Successful learning initiatives rely on better interpretation and analysis of data, and ICT for evaluation is useless without capable analytical and sector experts.

4. ICT can expand your sample size, but be mindful of the unexpected challenges in sample bias

When collecting data, ICTs can expand the reach of your evaluation efforts, creating opportunities to capture data beyond the traditional “beneficiaries” of a program. However, the “digital divide” may perpetuate the issue of sample bias, and your results may be valid only for those segments of the population with digital access.

5. There’s no ICT “quick-fix” to improve monitoring & evaluation

While it’s possible to achieve a high level of methodological rigor through carefully designed ICT studies, it’s not always easy to do so—often being technically complex, expensive, and time-consuming. Most importantly, effective ICT is built on sound monitoring & evaluation strategies, and incorporating ICTs into M&E requires long-term institutional commitment and evaluation capacity development.

Despite the wide breadth of content, there was a common theme: “It’s ok to reinvent the wheel, not the flat tire.” These words, spoken by Susan Davis during a five-minute “lightning” presentation, struck an unexpected chord with the audience, attendees and presenters alike. Whether these are words of comfort for the tech-timid or caution for the tech-tenacious, Davis pointed us all to the indisputable fact that it’s okay to look to new technologies to address old problems in development, as long as we are all aware that any new process, tool, or approach has just as much potential to fall flat as did its predecessors. The successful integration of M&E and ICT is fully reliant on sound monitoring and evaluation strategies and realistic expectations.


Kevin Flanagan
Kevin Flanagan is a TechChange alum of the Technology for Monitoring and Evaluation course. He is a learning technologist on the Monitoring, Evaluation and Learning team at the National Democratic Institute.

Yuting Liao is senior assistant for data analysis and visualization on the Monitoring, Evaluation and Learning team at the National Democratic Institute.

The National Democratic Institute is a nonprofit, nonpartisan organization working to support and strengthen democratic institutions worldwide through citizen participation, openness and accountability in government.

Featured image: Wayan Vota Flickr