By Kevin Flanagan and Yuting Liao

A few weeks ago, my colleague Yuting Liao and I had the opportunity to attend MERL Tech—an industry conference of sorts designed to bring together M&E practitioners, researchers, technologists, and development professionals—on behalf of the Monitoring, Evaluation, and Learning (MEL) team at the National Democratic Institute (NDI).

At NDI, the MEL team is always on the lookout for innovative M&E practices that can be effectively applied to the democracy and governance sector, and this event seemed like an excellent opportunity to explore the “big ideas” and take part in a larger discussion: what can information and communication technologies (ICT) offer monitoring, evaluation, and learning work as the pressure to integrate ICTs into development programming continues to rise?

With nearly thirty presentations, the event gave us ample opportunity to hear from the experts, as well as contribute meaningfully to roundtable discussions and collaborative brainstorming activities. Here are our five takeaways:

1. More data does not necessarily mean more learning

ICT can make data collection easier; however, it’s crucial to ask: is this the data we need? “Big data” is enticing, and a common mistake of the novice researcher is to collect as much data as possible. But will that data answer your evaluation questions, or will it simply be distracting? While collecting larger volumes of data can certainly surface unexpected observations, data collection that is not strategically tied to your evaluation questions does not necessarily lead to better learning. Quality matters more than quantity.

2. ICT can increase the level of risk for the subjects of the evaluation

Data hacks happen, so start by being scared. Whether we want to admit it or not, ICT implementations introduce additional risks to M&E work, particularly when it comes to privacy and data security. And yet, too often M&E practitioners don’t address the risks until after a breach happens. Worry about this in advance and create a threat model to assess assets, risks, and vulnerabilities.
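A threat model can start very simply: list assets, threats, likelihood, and impact, then rank by risk so the scariest items get mitigated first. The sketch below is a hypothetical illustration; the asset names and scores are invented, not drawn from any real project.

```python
# Minimal threat-model register: risk = likelihood x impact (each scored 1-5),
# sorted so the highest-risk items are addressed first.
# All entries are illustrative assumptions, not real project data.

threats = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("survey responses with PII", "database breach",      3, 5),
    ("enumerator phone",          "device theft",         4, 3),
    ("SMS reports in transit",    "network interception", 2, 4),
]

def prioritize(entries):
    """Return entries sorted by risk score (likelihood * impact), highest first."""
    return sorted(entries, key=lambda e: e[2] * e[3], reverse=True)

for asset, threat, likelihood, impact in prioritize(threats):
    print(f"risk={likelihood * impact:2d}  {asset}: {threat}")
```

Even a register this crude forces the conversation about which data you hold and who might want it, before a breach forces it for you.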

3. Be a data-led organization, not just data-driven

While ICT does help improve data accuracy, organizations that embrace a “data-led” mentality empower their people to better understand data and incorporate it into their decision-making processes. Successful learning initiatives rely on sound interpretation and analysis of data, and ICT for evaluation is useless without capable analytical and sector experts.

4. ICT can expand your sample size, but be mindful of the unexpected challenges in sample bias

When collecting data, ICTs can expand the reach of your evaluation efforts, creating opportunities to capture data beyond the traditional “beneficiaries” of a program. However, the “digital divide” may perpetuate the issue of sample bias, and your results may be valid only for those segments of the population with digital access.
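One common correction for this kind of skew is post-stratification: re-weight respondents so each group counts in proportion to its known population share rather than its share of the sample. The sketch below uses invented shares and scores purely for illustration.

```python
# Post-stratification sketch: re-weight a digitally collected sample so that
# groups under-represented online count proportionally to the population.
# All numbers below are invented for illustration.

population_share = {"online": 0.60, "offline": 0.40}  # e.g. from census data
sample_share     = {"online": 0.90, "offline": 0.10}  # who actually responded

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Example: estimate an average satisfaction score (1-5), correcting the skew
group_means = [("online", 4.2), ("offline", 2.8)]  # per-group means, invented
naive    = sum(score * sample_share[g] for g, score in group_means)
weighted = sum(score * sample_share[g] * weights[g] for g, score in group_means)
```

Here the naive estimate (about 4.06) overstates satisfaction relative to the weighted one (about 3.64) because offline respondents, who scored lower, were badly under-sampled; weighting helps only for the groups you reached at all.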

5. There’s no ICT “quick-fix” to improve monitoring & evaluation

While it’s possible to achieve a high level of methodological rigor through carefully designed ICT studies, it’s not always easy: such studies are often technically complex, expensive, and time-consuming. Most importantly, effective ICT is built on sound monitoring and evaluation strategies, and incorporating ICTs into M&E requires long-term institutional commitment and evaluation capacity development.

Despite the wide breadth of content, there was a common theme: “It’s ok to reinvent the wheel, not the flat tire.” These words, spoken by Susan Davis during a five-minute “lightning” presentation, struck an unexpected chord with attendees and presenters alike. Whether they were words of comfort for the tech-timid or caution for the tech-tenacious, Davis pointed us all to an indisputable fact: it’s okay to look to new technologies to address old problems in development, as long as we are aware that any new process, tool, or approach has just as much potential to fall flat as its predecessors did. The successful integration of M&E and ICT relies fully on sound monitoring and evaluation strategies and realistic expectations.

 

Kevin Flanagan
Kevin Flanagan is a TechChange alum of the Technology for Monitoring and Evaluation course. He is a learning technologist on the Monitoring, Evaluation and Learning team at the National Democratic Institute.

Yuting Liao
Yuting Liao is senior assistant for data analysis and visualization on the Monitoring, Evaluation and Learning team at the National Democratic Institute.

The National Democratic Institute is a nonprofit, nonpartisan organization working to support and strengthen democratic institutions worldwide through citizen participation, openness and accountability in government.

Featured image: Wayan Vota, Flickr

This post was originally published on the NDITech DemocracyWorks blog by Lindsay Beck (view original post), a student in TechChange’s recent course at George Washington University. For more information, please consider following @BeckLindsay and @NDITech.

As technology closes the gap between when events happen and when they are shared with the world, understanding which approaches and tools are the best solutions to implement in crisis response and good governance programs is increasingly important. During the “Technology for Crisis Response and Good Governance” course, which I took earlier this month through TechChange at GW, our class simulated different scenarios in which such tools can be used effectively.

The first simulation was on using FrontlineSMS and Crowdmap to track and respond to incidents in the event of a zombie apocalypse. Each team was responsible for managing FrontlineSMS, mapping incidents and other information on Crowdmap, and going into the field to gather more information and verify reports. Managing the incoming data quickly becomes the highest priority. Designating specific responsibilities to individuals, and deciding how to categorize data (reports to be mapped, questions to be answered by other officials, overly panicked individuals, etc.), helps process a large volume of information more efficiently in a short timeframe.
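The categorization step above can be sketched as a simple keyword router that sends each incoming message to the team member handling that category. Everything here (the categories, keywords, and messages) is a hypothetical illustration, not FrontlineSMS’s actual API.

```python
# Hypothetical keyword triage for incoming SMS reports, in the spirit of the
# FrontlineSMS/Crowdmap simulation. Categories and keywords are assumptions.

ROUTES = {
    "map":      ["attack", "incident", "seen", "location"],  # plot on Crowdmap
    "question": ["how", "where", "when", "what", "?"],       # forward to officials
    "panic":    ["help!!", "everywhere", "doomed"],          # reassure, don't map
}

def triage(message):
    """Return the first matching category for a message, or 'review' for a human."""
    text = message.lower()
    for category, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return category
    return "review"
```

In practice the unmatched “review” bucket is the important one: a human still has to read everything automation can’t confidently classify, which is why assigning that responsibility explicitly mattered in the simulation.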

The next simulation was on using a variety of open source tools and resources to conduct an election monitoring mission. While it was tempting to start from what the tools could do for specific aspects of the electoral process, it quickly became clear that workflow and anticipated challenges needed to be identified before choosing technologies. For example, in a country where internet and mobile phone coverage does not reach the entire population, ensuring that outreach also happens through “low-tech” mediums, such as radio broadcasts and leaflets or other informational materials distributed by local community organizers, will reach a wider percentage of citizens. In countries like Liberia, “chalkboard blogs” that share community-relevant information could even be leveraged. Tech alone, even increasingly ubiquitous mobile tech, is not sufficient to reach all potential voters.

Using these tools during significant political and social moments is useful for attracting the attention of, and informing, the local and international community. However, local context has to be taken into consideration: risks can emerge, particularly in countries that discourage citizen engagement and transparency in political processes like elections.

Could sending an SMS about violations being committed against members of a community put the sender at risk? In most countries, a mobile phone user must now provide some degree of personally identifiable information (PII) to purchase a SIM card, ranging from a name and home address to a photocopy of a passport or national ID card and, increasingly, biometric information. Combine this with growing efforts by governments to curtail mobile communications (particularly bulk SMS) and the pre-existing insecurities of the mobile network, and it becomes nearly impossible to exchange information securely over SMS or to report it to a platform like Crowdmap. While encrypted SMS tools like TextSecure exist, they are not available on the feature phones or “dumb” phones most widely used internationally, nor are they easily deployed for crowdmapping efforts.

When using crowdsourcing and mapping applications to track incidents, such as during an election, a large amount of data is collected and can be shared with a wider community. But what happens to that data? Simply putting a map on a governance- or crisis-response-focused project does not ensure its continuity and sustainability. Instead, defining an approach to make greater use of the collected information can strengthen follow-on activities beyond the event date. Establishing a bigger-picture strategy and then incorporating ICT elements where they fit makes for more effective projects than “technology-first” projects that address political and social context only after the tools have been chosen.