We consume and produce data at ever-growing rates, aiming to better understand the past, observe the present, and be better prepared for the future. However, data can only fulfill its purpose when we can make sense of it, generate insights, and put it into action. The process of turning data into insights requires many steps, and doing it effectively involves many strategies.

Step 1: Visualization

One of these key steps is visualization, which is the visual organization of data using various shapes, sizes, colors, and layouts. Visualization creates data charts such as bar graphs, line graphs, scatterplots, and even maps and networks. This step helps us make sense of large volumes of abstract information, without much effort. Effectively using the visual language for data provides a natural, intuitive way to see and understand features and trends in data.

Step 2: Interaction

Another key step is interaction. When you ask questions, focus on certain properties, or change the visual representations, you are engaging in an interactive dialogue with your data. The vast computing capabilities in our digital devices allow us to dynamically filter across categories, re-sort items, pan map views, retrieve details, and explore alternatives. Together, visualization and interaction with data let you find the answers you are looking for, and answer the questions that you didn’t know you had!

Step 3: Putting It All Together

How can you start your data dialogue? There are many tools to help you collect, transform, and visualize data in many different forms. However, with so many options, choosing the best approach based on your needs, your data, and your experience level is not trivial. You may start visualizing your data with tools that offer a graphical interface. These let you import a dataset and construct data charts by selecting chart types or mapping data attributes to various visual elements and components (shapes, colors, layouts, x-axis, etc.). Still, existing graphical charting tools require training to make effective visualization decisions, and they often do not let you easily engage in rich analytical conversations with your data across multiple synchronized perspectives. For customized analysis and design needs, you can use programming-based tools, but these require significant technical knowledge to figure out and execute the best strategies. How can you get the most from your data, with the least amount of effort, and in the shortest time?

The Solution: Keshif

Keshif is a new web-based tool that brings your tabular data to life by converting it into a beautiful interactive visual interface. Unlike other tools, it creates an environment where you focus on interpreting your data, rather than specifying visualization details and getting lost in the many visual options that may slow you down or mislead you. Keshif is designed to fit your data exploration needs and the structure of your data, and it builds on visualization best practices. Your categorical data becomes bar graphs, your numeric data becomes histograms, your time data becomes line graphs, all without any effort. For more in-depth analysis, you can view your data by percentiles and map regions. Each record of your data can be shown individually in a list, grid, map, or network (if your data supports it).
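Keshif itself is a web/JavaScript tool, and the sketch below is not its code or API. It is only a minimal Python illustration of the default-chart idea just described, picking a chart type from each column's data type; the dataset and column names are made up.

    # Not Keshif -- a minimal pandas/matplotlib sketch of choosing a default
    # chart type from a column's data type (categorical -> bar, numeric ->
    # histogram, time -> line). Dataset and column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    def default_chart(df, column):
        series = df[column].dropna()
        ax = plt.gca()
        if pd.api.types.is_datetime64_any_dtype(series):
            # Time data: record counts per month as a line graph.
            series.dt.to_period("M").value_counts().sort_index().plot(ax=ax)
        elif pd.api.types.is_numeric_dtype(series):
            # Numeric data: histogram.
            series.plot.hist(bins=20, ax=ax)
        else:
            # Categorical data: bar graph of category counts.
            series.value_counts().plot.bar(ax=ax)
        ax.set_title(column)
        plt.show()

    df = pd.read_csv("records.csv", parse_dates=["date"])  # hypothetical file
    for col in ["category", "amount", "date"]:
        default_chart(df, col)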

Everything in a Keshif data browser is connected and highly responsive, so that every action can lead to a new insight. You can highlight to get a quick preview, filter to focus on details, or lock to compare different sections of your data easily. You can import your data to Keshif from Google Sheets, CSV, or JSON files, and decide which attributes you want to explore. This can be done by summarizing their characteristics and manipulating your data in various ways to explore different trends and relations. From journalism to government, transportation to finance, and music to sports, Keshif can be used for data in many different domains. With its minimal yet powerful features, Keshif lets you make sense of your tabular data quickly, analyze it from multiple perspectives, and reach new insights.


Keshif is under active development by Ph.D. candidate M. Adil Yalcin and his advisors, Professor Niklas Elmqvist and Professor Ben Bederson, at the Human-Computer Interaction Lab at the University of Maryland, College Park. To learn more about Keshif, visit its homepage www.keshif.me, find a topic that interests you across the 150 specially compiled datasets, and watch the short tutorial video.

About the Author:

M. Adil Yalcin is a Ph.D. candidate in the Department of Computer Science at the University of Maryland, College Park. His goal is to lower human-centered barriers to data exploration and presentation. His research focuses on information visualization and interaction design, implementation, and evaluation. He is the developer of Keshif, a web-based tool for rapid exploration of structured datasets. In his previous work, he developed computer graphics techniques and applications.

If you have any further questions, join Keshif’s email list or contact yalcin@umd.edu.


Imagine a tool where you provide text and a computer automatically highlights key themes. No complex coding, no word counts needed to explore the text, just keywords and phrases identified for you. This is exactly what the tool Textio does for job descriptions. It automatically provides an effectiveness score and identifies words and phrases that affect whether applicants will apply for a job: through color coding, it flags terms that can act as a barrier or an incentive, terms that affect applicants differently based on gender, and repetitive terminology. [Editor’s note: TechChange participated in a closed-beta test of the tool and we will write a separate blog post about Textio and hiring practices. This is not a sponsored post.]

This tool not only has great implications for hiring, but also shows how simple visualizations can be used to analyze qualitative data. As Ann Emery and I have been preparing for the Technology for Data Visualization course, we have been discussing how best to address the topic of data visualization for qualitative data. While data visualizations have been featured in art museums (e.g., Viégas and Wattenberg’s Windmap), most visualizations are designed to convey information first.

Textio uses a custom algorithm to perform a type of sentiment analysis. Typically, sentiment analysis measures how positive or negative a text is based on each word’s meaning, connotation, and denotation. Textio, on the other hand, focuses on how effective words or phrases are at getting people to apply for jobs and whether those applicants are more likely to be female or male. Once a word or phrase reaches their specified threshold for effectiveness or gendered language, they highlight it with colors based on whether it is positive or negative and/or masculine or feminine. The gender tone of the entire listing is shown along a spectrum.
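Textio’s model and lexicons are proprietary, so the following is only a toy Python sketch of the general approach described above: look up phrases in a scored lexicon, flag the matches, and sum a simple gender-tone score. All phrases and scores here are invented for illustration.

    # Illustrative only -- not Textio's algorithm or data.
    import re

    # Hypothetical mini-lexicon: phrase -> (effectiveness, gender lean).
    # Positive gender lean marks masculine-coded terms, negative feminine-coded.
    LEXICON = {
        "rock star": (-0.4, +0.6),
        "competitive": (-0.1, +0.5),
        "collaborative": (+0.3, -0.4),
        "supportive": (+0.2, -0.5),
    }

    def annotate(text):
        """Return flagged phrases and an overall gender-tone score."""
        flagged, tone = [], 0.0
        for phrase, (effect, gender) in LEXICON.items():
            hits = len(re.findall(r"\b" + re.escape(phrase) + r"\b", text, re.I))
            if hits:
                flagged.append({"phrase": phrase, "effect": effect,
                                "gender": gender, "count": hits})
                tone += gender * hits
        return flagged, tone

    flagged, tone = annotate("We need a rock star who is also collaborative.")
    print(flagged)                 # phrases to color-code in the listing
    print("gender tone:", tone)    # >0 leans masculine, <0 leans feminine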

Acumen, a tool created at Al Jazeera’s 2014 Hackathon: Media in Context, is another take on how to visualize sentiment analysis. With a focus on uncovering bias in news articles, it highlights how positive or negative an article is in relation to other articles on the topic. A separate analysis tab shows the two sentiment ratings on a spectrum and ‘weasel words,’ words that are indicative of bias in reporting. The viewer also has the option to highlight the weasel words in the news article.

Both Textio and Acumen are great examples of how qualitative data visualization can aid in the analysis of text. Neither example is immediately suited to generalized needs, and both require programming knowledge to adapt to a particular purpose, which Kevin Hong and I will discuss in a forthcoming blog post. Instead, they can be used as examples of how qualitative data can be visualized to help inform decision making.

Have you used Textio or Acumen? Share your thoughts with us below or by tweeting us at @techchange!

Data visualization requires more than design skills. You need both technical and critical thinking skills to create the best visuals for your audience. It is important to match your visualization to your viewer’s information needs. You should always be asking yourself: “What are they looking for?”

1. Understand your audience before designing your visualization
The first and most important consideration is your audience. Their preferences will guide every other decision about your visualization—the dissemination mode, the graph type, the formatting, and more. You might be designing charts for policymakers, funders, the general public, or your own organization’s leaders, among many others.

What type of decisions do your viewers make? What information do they already have available? What additional information can your charts provide? Do they have time (and interest) to explore an interactive website, or should you design a one-page handout that can be understood at a glance? A chart designed for local government leaders wouldn’t be appropriate for a group of program implementers, and vice versa.

2. Your audience determines the type of visualization you prepare
Spend some time thinking about your dissemination format before you sit down at the computer to design your visualization. The days of 100+ page narrative reports are long gone. Nowadays viewers want visual reports, executive summaries, live presentations, handouts, and more.

  • Visual Reporting
    Traditional M&E reports are 80% text and 20% graphics. Ready to break the mold? This visual report, State of Evaluation 2012 from Innovation Network, is about 20% text and 80% graphics.
  • One-Page Annual Reports
    If you know your viewers won’t read more than a page or two, try a one-page annual report. These “reports” focus on just the highlights of what was accomplished within the past year and leave out the lengthy narrative sections. Here is an annual report I created for the Washington Evaluators.
  • Online Reporting
    Maybe your viewers would respond better to a different reporting style altogether—an online report. These website-based reports can include images, videos, interactive visualizations, and more. My favorites include Datalogy Labs’ Baltimore report and the University of Chicago’s computer science report.

3. Remember that the key is to keep your audience engaged
If you are sharing results in client meetings, staff retreats, conferences, or webinars, try breaking up your charts into several slides so the chart appears to be animated. This storyboarding technique ensures that your audience is looking where you want, when you want.

  • Draw Attention to key charts with handouts
    If you are getting ready to share your M&E results during a meeting, rather than printing your full slide deck, select 3 to 5 key charts and print each of those slides on a full page. Your full slide deck will likely end up in the trash can as soon as the meeting ends, but your curated handouts will get scribbled on, underlined, and saved for future reference. I often see these handouts taped above meeting attendees’ desks weeks and months after my presentation.
  • Tweeting your results
    If you are planning to tweet a chart or two, be sure to adjust your charts to fit a 2:1 aspect ratio; a minimal sketch follows this list. Otherwise, your carefully crafted visualization will get chopped in half, because as you scroll through your Twitter feed, images automatically display about twice as wide as they are tall.
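The sketch below is one way to do this with matplotlib; the figure size, data, and filename are just examples.

    # Export a chart at a 2:1 aspect ratio so the Twitter feed crop does not
    # cut it in half. Data and filename are illustrative.
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots(figsize=(8, 4))                # 8 x 4 inches = 2:1
    ax.bar(["Q1", "Q2", "Q3", "Q4"], [12, 18, 9, 15])     # hypothetical values
    ax.set_title("Workshops delivered per quarter")
    fig.savefig("chart_for_twitter.png", dpi=200)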

That’s all for my top tips to keep in mind when creating your visualization! How do you engage your team when creating and presenting reports for your organization? What types of communications modes are you currently using to share your visualizations? Tweet at us @TechChange and join the conversation!

Interested in learning more about how to best present findings for your team or organization? Join me and Norman Shamas in TechChange’s brand new Technology for Data Visualization and Analysis online certificate course. The course begins on June 1, and you can register with code ‘DATAVIZ50’ for a $50 discount! Click here to register.

About the Author:

Ann K. Emery
Ann K. Emery is a co-facilitator of the Technology for Data Visualization and Analysis course. Through her workshops, webinars, and consulting services, she equips organizations to visualize data more effectively. She leads 50 workshops each year, both domestically and abroad. Connect with Emery through her blog.

Image Source: AidData

How do you analyze data you collect from surveys and interviews?

One way to analyze data is through data visualizations. Data visualization turns numbers and letters into aesthetically pleasing visuals, making it easy to recognize patterns and find exceptions.

We understand and retain information better when we can visualize our data. With our decreasing attention span (8 minutes) and our constant exposure to information, it is crucial that we convey our message in a quick and visual way. Patterns or insights may go unnoticed in a data spreadsheet, but if we put the same information on a pie chart, the insights become obvious. Data visualization allows us to quickly interpret the data and adjust different variables to see their effect, and technology is increasingly making it easier for us to do so.
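As a small illustration of that claim, the snippet below turns the same handful of made-up survey numbers from a table into a pie chart with matplotlib.

    # Illustrative only: the same made-up numbers as a table and as a pie chart.
    import matplotlib.pyplot as plt

    responses = {"Strongly agree": 45, "Agree": 30, "Neutral": 15, "Disagree": 10}
    print(responses)   # in spreadsheet form, the overall split is easy to miss

    plt.pie(list(responses.values()), labels=list(responses.keys()),
            autopct="%1.0f%%")
    plt.title("Survey responses (illustrative data)")
    plt.show()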

So, why is data visualization important?

Patterns emerge quickly

Cooper Center’s Racial Dot Map of the US

This US Census data (freely available online for anyone) is geocoded from raw survey results. Dustin Cable took the 2010 census data and mapped it using a colored dot for every person based on their race. The resulting map provides complex analysis quickly.

It is easy to see some general settlement patterns in the US. The East Coast has a much greater population density than the rest of America. Minority populations are not evenly distributed throughout the US; there are clearly defined regional racial groupings.

Exceptions and Outliers are Made Obvious

San Luis Obispo, CA

As you scan through California, an interesting exception stands out just north of San Luis Obispo. There is a dense population of minorities, primarily African-Americans and Hispanics. A quick look at a map reveals that it is a men’s prison. With more data you can see if there are recognizable patterns at the intersection of penal policy and racial politics.

Quicker Analysis of Data over Time


Google Public Data Explorer

Google’s dynamic visualizations for a large number of public datasets provide four different types of graphs, each with the ability to examine the dataset over a set period of time. It is easy to see patterns emerge and change over time. Data visualization makes recognizing these patterns and outliers as easy as watching a short time-lapse video.

What are some of your favorite data visualization examples or tools? Tweet at us @TechChange or share in the comments section below.

If you are interested in learning about how to better visualize and analyze data for your projects, join us in our new online course on Technology for Data Visualization and Analysis. The course begins on June 1, so save your seats now!

In his 2006 TED talk, Hans Rosling used data visualizations to deconstruct his students’ assumptions about the ‘developed’ and ‘developing’ dichotomy of countries. He looked at the patterns and demonstrated how they were easily recognizable and showed something contrary to the original belief. Pattern recognition is the core power of data visualization and more companies are embracing the notion of  “putting humans back in the decision making process”.

Good data visualizations make patterns and outliers easy to recognize and aesthetically pleasing. The data are “liberated” from numbers and letters into a form that can be easily analyzed and understood by everyone.

Here are some great examples of liberating data through data visualizations:

1. Microsoft’s SandDance Project

With the SandDance project, Microsoft recognized the importance of humanizing data by designing the data exploration experience around “natural user interaction techniques.”

SandDance

2. Cooper Center’s Racial Dot Map of the US

US Census data is made freely available online for anyone to transform into a complex and understandable visualization. The data is available geocoded and as raw survey results. Last summer Dustin Cable took the 2010 census data and mapped it using a colored dot for every person based on their race: blue is White; green, African-American; red, Asian; orange, Hispanics; and brown, all other racial categories. The resulting map provides complex analysis quickly.

USA Racial Dot Map

At a glance, it is easy to see some general settlement patterns in the US. The East Coast has a much greater population density than the rest of America. Density slowly decreases toward the middle of America, where it stays extremely low until the West Coast. Cities act as grouping points: density typically decreases with distance from a city. Minority populations are not evenly distributed throughout the US; there are clearly defined regional racial groupings.

San Luis Obispo, CA

As you scan through California, an interesting exception stands out just north of San Luis Obispo. There is a dense population of minorities, primarily African-Americans and Hispanics. A quick look at a map reveals that it is a men’s prison. With more data you can see if there are recognizable patterns at the intersection of penal policy and racial politics.
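This is not Dustin Cable’s actual pipeline, but the dot-map technique itself is easy to sketch: plot one small dot per geocoded record, colored by a categorical attribute. The file and column names below are assumptions.

    # A toy dot map: one dot per record, colored by category. Hypothetical
    # CSV with "lon", "lat", and "race" columns; not the real census pipeline.
    import pandas as pd
    import matplotlib.pyplot as plt

    COLORS = {"White": "blue", "African-American": "green", "Asian": "red",
              "Hispanic": "orange", "Other": "brown"}

    df = pd.read_csv("geocoded_census_sample.csv")
    plt.figure(figsize=(10, 6))
    for race, group in df.groupby("race"):
        plt.scatter(group["lon"], group["lat"], s=0.5,
                    color=COLORS.get(race, "gray"), label=race)
    plt.legend(markerscale=20)
    plt.title("One dot per person, colored by race (sketch)")
    plt.show()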

3. Google Public Data Explorer

Google has created dynamic visualizations for a large number of public datasets. There are four different graph types, each with the ability to examine the dataset over a set period of time. With the additional element of time, new patterns can emerge.

Examining the World Bank’s World Development Indicators data set to compare fertility rate and life expectancy, a pattern emerges: as life expectancy increases, fertility rate decreases. However, some notable exceptions occur. In 1975, Cambodia had a life expectancy of slightly over 20 years, less than half that of most countries with a similar fertility rate. That is also the year the Khmer Rouge took power, leading to mass killings in Cambodia.

This exception to the normal pattern shows how strong an impact a single event can make. Data visualization makes recognizing these patterns and outliers as easy as watching a short time-lapse video.
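If you wanted to reproduce a single frame of that comparison yourself, a hedged sketch might look like the following. It assumes the indicators have been exported to a CSV with country, year, fertility_rate, and life_expectancy columns (an assumption, not the Public Data Explorer’s format).

    # Sketch of the 1975 fertility-vs-life-expectancy comparison, using a
    # hypothetical CSV export of World Bank WDI data.
    import pandas as pd
    import matplotlib.pyplot as plt

    wdi = pd.read_csv("wdi_subset.csv")
    year = wdi[wdi["year"] == 1975]

    plt.scatter(year["life_expectancy"], year["fertility_rate"], alpha=0.6)
    cambodia = year[year["country"] == "Cambodia"]          # the outlier above
    plt.scatter(cambodia["life_expectancy"], cambodia["fertility_rate"],
                color="red")
    plt.xlabel("Life expectancy (years), 1975")
    plt.ylabel("Fertility rate (births per woman), 1975")
    plt.show()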

I’ve always believed that data are more than just collected information. Data have a purpose and are meant to be analyzed. New technologies have made visualizing data easier than ever and the data are more accessible to everyone.

What are some of the best data visualizations that you have seen, or maybe even created yourself? Please feel free to share in the comments or tweet @normanshamas or @TechChange.

Want to learn more about data visualization and analysis? Enroll now in TechChange’s new online course on Technology for Data Visualization and Analysis  that runs June 1 – June 26, 2015.

On February 26, USAID received the “Best Government Policy for Mobile Development” award at GSMA’s Mobile World Congress 2013. And while the Mobile Solutions team was receiving an award in Barcelona, TechChange and the MS team were also receiving over 1,500 mobile poll responses from recipients in DRC taking part in an online exercise designed by 173 USAID staff and implementing partners in 21 countries. This was possible by harnessing the same potential of public-private partnerships used for external implementation and applying it to internal education and collaboration at USAID.


Fig. 1: MapBox visualization of GeoPoll responses.

The exercise was part of a 4-week online course in Mobile Data Solutions designed to provide a highly interactive training session for USAID mission staff and its implementing partners to share best practices, engage with prominent technologists, and get their hands on the latest tools. Rather than simply simulating mobile data tools, USAID staff ran a live exercise in DRC where they came up with 10 questions, target regions, and a desired audience. The intent was not to teach a tool-centric approach, but instead to begin with a tech-enabled approach to project design and implementation, with an understanding of mobile data for analysis, visualization, and sharing.


Fig. 2: Student locations for TC311 class.

This would have been a formidable exercise for any organization, but fortunately we augmented USAID’s development capacity with the abilities of three organizations. TechChange provided the online learning space, facilitation, and interactive discussions. GeoPoll ran the survey itself using their custom mobile polling tool. And MapBox provided the analysis and visualization needed to turn massive data into a simple and attractive interface. (Want to check out the data for yourself? Check out the raw data Google Spreadsheet from GeoPoll!)
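If you do pull down the raw GeoPoll responses, one basic analysis step is simply tallying answers by region before handing the counts to a mapping tool. The column names in this sketch are assumptions, not GeoPoll’s actual export schema.

    # Tally poll answers by region from a hypothetical export of the responses.
    import pandas as pd

    responses = pd.read_csv("geopoll_responses.csv")      # hypothetical export
    counts = (responses
              .groupby(["region", "q1_answer"])
              .size()
              .unstack(fill_value=0))
    print(counts)                       # responses per answer, per region
    counts.to_csv("q1_by_region.csv")   # hand off to a mapping/visualization tool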

But while an interactive online workshop built for small-group interaction faces barriers to scale, the content is under no such restrictions. One of the videos from our previous course on Accelerating Mobile Money provided an animated history of M-PESA, the successful mobile money transfer program in Kenya, which allows mobile phone users to pay for everything from school fees to utility bills and is proving transformative in cases such as Haiti.


Fig. 3: M-Pesa animation used for TC311 and USAID Video of the Week

But there’s still plenty of work to do. As mobile phones continue their spread to ubiquity, the challenges for applying their potential to development will only increase, along with the continuing possibilities as the technology continues to improve. However, in the short term, we’re focused on increasing mobile access, which is the topic of our next course. If you work at USAID or with an implementing partner, we hope that you’ll consider joining us and lending your voice to this process.

This is a guest post by Matt McNabb, Principal of Caerus Associates. If you are interested in using mapping for digital organizing, consider taking our course Digital Organizing and Open Government. 

 

Today, my colleagues at Caerus Associates and I are able to announce the BETA launch of a new tool that helps businesses, NGOs, and governments collect, visualize, and share geospatial data in less developed emerging markets. We call it CaerusGEO.

Geospatial in the Last Mile

The premise is simple. How can we leverage the cloud to deliver geospatial analysis to non-GIS users most familiar with basic, paper-based workflows?

In our experience, most businesses, government institutions, and organizations in frontier markets rarely use technology across the enterprise. In some cases it’s a cost issue; in others it’s related to social stigma. But whatever the reason, ICTs are often used simply to support manual, tabular processes that already exist.

Want to run a survey? Use Word, Excel, and a printer.

When it comes to spatial data, this challenge is only magnified. Collecting geospatial information can be hard enough; visualizing and sharing it can be even harder. As a result, geospatial information is often relegated to the expert user. Of course, the GIS industry as a whole is trending towards accessibility, but rarely is it truly meaningful for most enterprises in less developed markets that simply want to know where things happen.

This is what got me interested in a tool widely used within the humanitarian response community called Walking Papers. The value proposition of Walking Papers has been that it extends geospatial data collection to pen and paper. Print off a map, mark it up, then convert what’s written into geospatial data. No magic. No optical character recognition. Just a simple paper insert that allows people without GIS units to collect spatial information in a way that could be easily geo-rectified.
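This is not Walking Papers’ actual implementation, but the core georectification step can be sketched in a few lines: if you know the geographic bounding box of the printed map, a mark made at a pixel position on the scan can be converted back to coordinates by linear interpolation (ignoring projection details; the example numbers are invented).

    # Toy sketch: convert a marked pixel on a scanned printed map back to
    # longitude/latitude, given the map's known bounding box.
    def pixel_to_lonlat(px, py, img_w, img_h, bbox):
        """bbox = (west, south, east, north) of the printed map."""
        west, south, east, north = bbox
        lon = west + (px / img_w) * (east - west)
        lat = north - (py / img_h) * (north - south)   # pixel y grows downward
        return lon, lat

    # A mark at pixel (410, 220) on a 1000x800 scan of a Monrovia-area map.
    print(pixel_to_lonlat(410, 220, 1000, 800,
                          bbox=(-10.85, 6.20, -10.70, 6.40)))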

The problem with Walking Papers is that it offers little back to the data collector. There is no visualization or data management. In fact, it’s really only a lightweight tool that lets the user print off a map and, through some gymnastics, lets her then use it to edit a basemap on OpenStreetMap. It offers nothing for the non-technical user simply interested in using paper to collect information about events, or perceptions, or whatever other kinds of information one might be interested in seeing over the basemap.

For the past year, we’ve been wondering what it would take to create a tool that filled this gap. Let normal users capture geospatial data in paper formats and return analytical value once collected.

How It Works

This BETA of CaerusGEO is our first answer to this need. A user is able to create her own survey, find a place in the world where it will be centered, create an atlas and data collection sheets from a standard schema she has defined, and then manage, visualize, and share the data once uploaded. By bridging cloud analytics to paper workflows, we are able to drive value at the enterprise level.

If you’re an NGO and want to integrate mapping into your polling, you can create a survey, manage the data, and facilitate sharing from start to finish. If you’re a business looking to understand your market, you can integrate it into your customer registration process and benefit from basic market intelligence. Although basic in form, the value is derived from a more reality-based understanding of workflows in these markets. Paper matters.

Smarter Public Safety

The very first place we thought to experiment was in the domain of public safety. What could be more obvious than the need to take those antiquated paper-and-pushpin constructions used for crude crime mapping and make them more dynamic, analytical, and transparent?

As the Deputy Minister of Justice in Monrovia told me, ‘we send the police where the people are, not where the crimes are… this could help us see how to use our resources in a smart way.’ We can address this challenge by finding minimally intrusive places to insert paper maps into the pre-existing workflows of policing institutions and fusing them together for digital analytics by a single node with connectivity to the cloud.

In parallel, NGOs and violence observatories have the capacity to collect and share their own data, creating a basic framework opportunity for enhancing social accountability within the security sector domain. Perhaps most interestingly, by integrating paper-based mapping that connects to real geospatial data, the longstanding art of Participatory GIS in conflict management and of Crime Prevention Through Environmental Design can be used in so many more ways.

Driving Value For The Private Sector

Public safety institutions are not the only ones we have learned can find value here. It’s also a pull for private sector development, particularly in the bottom of the pyramid. Microfinance institutions and others engaged in understanding their customer base face similar challenges.

By extending geospatial data capabilities to private sector development institutions and retail organizations, we have the prospect of significantly improving the precision and reach of the private sector, particularly in underserved areas.

So Much To Learn

Bending ICTs to the real-world challenges and workflows found in the last mile holds tremendous value to public and private sector institutions alike. For us, this experiment with geospatial information is only the beginning. We hope you’ll join us and give us feedback as our experiment moves on.

Matt McNabb is a member of the Board of Advisors for TechChange, and a Principal with Caerus Associates. For more, you can follow Matt and CaerusGEO on Twitter:  @mattrmcnabb  @caerusgeo

 

Are you interested in learning with TechChange? Check out our next class: Digital Organizing and Open Government. Class starts April 8, 2013. Apply Now.

How can USAID use mobile technologies to more effectively collect, analyze, and share data? This is the central question we will be addressing as part of a new course TechChange has developed in partnership with the Mobile Solutions team at USAID and QED.

USAID, together with its partners, has the opportunity to increase efficiency, improve the quality of the information it uses, and better meet USAID goals related to its Forward Reforms, Evaluation Policy, and Open Data Initiative by utilizing mobile technologies to collect and disseminate data about people, projects, and programs. This course will help USAID Missions and implementing partners understand how to do just that.

Building off of the success of our 8-week online certificate course this fall on Accelerating Mobile Money, TC311 Mobile Data Solutions will be a four-week online course (February 1-March 1, 2013) designed to build the necessary technical capacity to deploy mobile data collection strategies by bringing together Mission staff and implementing partners. The four weeks are structured as follows to provide a comprehensive overview of mobile devices in data collection.

Week 1: Introduction to mobile data solutions

  • What is mobile data? What are the benefits and challenges associated with collecting data wirelessly?

Week 2: Project design

  • Designing projects and preparing concept notes, scopes of work, and other documents to include mobile technologies.

Week 3: Implementation

  • Study design and programming, training, field operations, data management

Week 4: Analysis, visualization and sharing

  • Utilizing data for decision-making, sharing with partners

The course will go beyond explaining the benefits of this approach. Participants will learn the questions to ask in order to assess projects (Are mobile technologies appropriate?), design them to achieve the maximum benefits possible (How should interventions be designed to take advantage of these technologies?), implement them (What device should we use? How do we train staff? What resources do we need in the field? At the Mission?), and report and share the data (How do we create visuals that can inform decision making? How do we share the results with beneficiaries and partners in-country?).

Featured tools, organizations and projects include: Episurveyor/Magpi, Formhub, Souktel, EMIT, uReport, TexttoChange, RapidSMS, GeoPoll, iFormbuilder, PoiMapper, Catholic Relief Services, DAI, NASA, OpenDataKit at UW, SweetLab, JSI, ICF International, Tangerine at RTI, and Futures Group. The course will be delivered on TechChange’s custom learning platform and will include a mixture of presentations by experts, tool demonstrations, selected readings, and activities including designing and analyzing a survey using mobile software.

This closed course is intended specifically for USAID and its implementing partners. But if you are interested in learning with TechChange and the topic of mobile data, check out our upcoming course on Mobile Phones for International Development. Class starts on March 4, 2013. Apply now!

“Team Rubicon is doing for disaster response what the Obama team did for political campaigns,” said Jonathan Morgenstein while taking a break from tearing down moldy drywall in hurricane-damaged Rockaway, Brooklyn. A New York native and US Marine Corps veteran who served two tours in Iraq, Morgenstein had spent the last month working on the campaign trail with Veterans and Military Families for Obama. He was referring not to the nearly fifty volunteers he was coordinating that afternoon, but rather to the sophisticated software back-end he was relying on to provide the correct information attached to the clipboard he was carrying. In the same way that better technology such as “Narwhal” had been credited only weeks earlier with helping turn out more volunteers, donors, and voters for Obama than in 2008 (“When The Nerds Go Marching In,” The Atlantic, 11/16/12), it was now playing a core role in coordinating disaster response in New York.


Jon Morgenstein in Rockaway, Brooklyn

And on November 18, Morgenstein needed the help. In collaboration with Team Rubicon, he was responsible for supervising 48 Clinton Foundation volunteers gutting ten hurricane-damaged homes in preparation for their restoration by contractors. Morgenstein was one of hundreds of volunteers helping out with Team Rubicon during the Clinton Global Initiative’s “Day of Action for New York,” which pushed Team Rubicon’s organizing capacity to the limit. While it’s difficult to estimate exactly how much value has been returned to the community, gutting just one of the houses was estimated at $5,000-$8,000 for a homeowner without insurance (in this case a 91-year-old), making it a direct value-add beyond food and shelter relief. And each house was tied to a work order and a map on Morgenstein’s clipboard, just like while canvassing before the election.

But this particular software, by Palantir Technologies, wasn’t designed for campaigns; rather, it had recently been used for finding IEDs in war zones like Iraq and Afghanistan. According to a post on CNN (10/4/12), Palantir “software ties together intelligence data to improve information for troops about the possible location of roadside bombs planted by insurgents.” Nonetheless, it was also a perfect fit for an organization like Team Rubicon, which “unites the skills and experiences of military veterans with medical professionals to rapidly deploy emergency response teams into crisis situations.” While the outpouring of people wanting to help has been heartening, new problems arise when organizing large groups of ad-hoc volunteers.

Volunteers from the Clinton Foundation  (Credit: Jon Morgenstein)


Fortunately, the tech fit the mission. Far from having an existing organizational structure or a known set of capabilities (like a proper military unit), this had been a seat-of-the-pants exercise in improvised human logistics, matching those most in need with those most capable. Palantir’s philanthropic team had been discussing disaster-relief simulations to test its capabilities for this kind of use. When Sandy suddenly threatened the eastern seaboard, the drill became the real thing, with Palantir scrambling to set up the server infrastructure and mobile handsets for Team Rubicon’s use. (“Philanthropy Engineers Embed with Team Rubicon for Hurricane Sandy Relief,” Palantir Blog, 11/14/12)

The setup was ready by November 4th, just as recovery operations were swinging into gear. Imagined as an operating system for data problems, Palantir’s software was able to pull in information from multiple sources, fuse it together into a coherent picture of the state of the peninsula, and then allow Team Rubicon operators to efficiently dispatch volunteers (say, a chainsaw team) to where they were needed the most (a list of the fifteen biggest downed trees). But tech isn’t perfect. “Check the data. At the end of the day, just because it’s in Palantir doesn’t make it right,” stated Brian Fishman of Palantir from inside the bus HQ. “Circumstances change, and a functional technology infrastructure requires regular updates to the data in the system.”
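Palantir’s software is proprietary, so the snippet below is only a toy sketch of the fuse-then-dispatch idea described above: combine damage reports from several sources, then rank the downed-tree jobs by severity for the chainsaw team. All field names and values are made up.

    # Toy fuse-and-dispatch sketch; not Palantir's software or data model.
    import pandas as pd

    field_reports = pd.DataFrame({
        "location": ["Beach 90th St", "Rockaway Blvd"],
        "severity": [8, 5],
        "job": ["downed tree", "downed tree"]})
    phone_intake = pd.DataFrame({
        "location": ["Beach 116th St"],
        "severity": [9],
        "job": ["downed tree"]})

    # Fuse the sources into one picture, then queue the worst downed trees first.
    work_orders = pd.concat([field_reports, phone_intake], ignore_index=True)
    queue = (work_orders[work_orders["job"] == "downed tree"]
             .sort_values("severity", ascending=False))
    print(queue.head(15))   # e.g., the fifteen biggest downed trees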

So, will Palantir and Team Rubicon change the way organizations think about disaster response? “I don’t know, maybe,” stated Morgenstein, “In the military we say, ‘Amateurs talk strategy, pros talk logistics’. These tech guys have made the logistics a lot easier at the operational level, and the military culture you see in Team Rubicon of delegating decision-making downwards to the person closest to the problem, is perfectly suited to an operation like this.”

Brian Fishman of Palantir at Team Rubicon FOB Hope

What we do know, however, is that putting the right tools in the right hands has the potential to create a team where the whole is greater than the sum of the parts. With Palantir and Team Rubicon, response operations will continue to iterate and improve over time, with the ultimate goal being to develop better response mechanisms for the next time disaster strikes. The best indicator of Team Rubicon as a learning organization may have nothing to do with the technology. At the end of the “Day of Action,” our team leader Zach (pictured, below right) turned to the group and asked us: “What could we do differently? If you see something you think we could be doing better, please let us know so that we can keep getting better at this.” Even when it comes to disaster response, tech is only ten percent.

TechChange provides online training in Tech Tools for Emergency Management. If you’re interested in learning more, consider applying for our next course. Class starts Jan. 14!

Interested in joining Team Rubicon? Please consider donating time or money to further their work. Learn more about Team Rubicon.


Zach and Dan of Team Rubicon

This post was originally published on the NDITech DemocracyWorks blog by Lindsay Beck (view original post), a student in TechChange’s recent course at George Washington University. For more information, please consider following @BeckLindsay and @NDITech.

As technology closes the time between when events happen and when they are shared with the world, understanding which approaches and tools are the best solutions to implement in crisis response and good governance programs is increasingly important. During the “Technology for Crisis Response and Good Governance” course, offered by TechChange at GW, which I took earlier this month, our class was able to simulate different scenarios of how such tools can be used effectively.

The first simulation we did was on how to use FrontlineSMS and Crowdmap to track and respond to incidents in the event of a zombie apocalypse. Each team was responsible for managing FrontlineSMS, mapping incidents and other information on Crowdmap, and going into the field to get more information and verify reports. Management of the incoming data at this point becomes the highest priority. Designating specific responsibilities to different individuals, and determining how to categorize data (reports to be mapped, questions to be answered by other officials, overly panicked individuals, etc.), helps a team process a large amount of information more efficiently during a short timeframe.
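The categorization step lends itself to simple keyword rules. The sketch below is not part of FrontlineSMS or Crowdmap; it is just an illustrative triage function with invented keywords and categories.

    # Toy triage of incoming SMS reports into the buckets described above.
    def triage(message):
        text = message.lower()
        if "?" in text:
            return "question for officials"
        if any(word in text for word in ("sighting", "attack", "bitten")):
            return "incident to map on Crowdmap"
        if any(word in text for word in ("help!!", "panic", "doomed")):
            return "panicked individual -- reassure, do not map"
        return "needs manual review"

    inbox = ["Zombie sighting at 14th and U St",
             "Where is the nearest shelter?",
             "HELP!! we're all doomed"]
    for sms in inbox:
        print(triage(sms), "<-", sms)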

The next simulation was on how to use a variety of open source tools and resources to conduct an election monitoring mission. While it was tempting to think about what the tools could do for specific aspects of the electoral process, it quickly became clear that workflows and anticipated challenges needed to be identified before turning to these technologies. For example, in a country where internet and mobile phone coverage does not reach the entire population, making sure that outreach is also accomplished through “low-tech” mediums like radio broadcasts, as well as distribution of leaflets or other informational materials through local community organizers, will reach a wider percentage of citizens. In countries like Liberia, use of “chalkboard blogs” that share community-relevant information could even be leveraged. Tech alone, even ever more ubiquitous mobile tech, is not sufficient to reach all potential voters.

Using tools during significant political and social moments is useful for attracting the attention of, and informing, the local and international community. However, local context has to be taken into consideration, particularly in countries that discourage citizen engagement and transparency in political processes like elections, where new risks for participants can emerge.

Could sending an SMS about violations being committed against members of a community put the sender at risk? In most countries now, a mobile phone user must provide some degree of personally identifiable information (PII) in order to purchase a SIM card, ranging from a name and home address to a photocopy of a passport or national ID card, and increasingly even biometric information. Match this with increasing efforts by governments to curtail the use of mobile communications (particularly bulk SMS), along with the pre-existing insecurities of the mobile network, and it becomes nearly impossible to exchange information securely over SMS, or to send reports to a platform like Crowdmap. While encrypted SMS tools like TextSecure exist, they are not available on the feature phones or “dumb” phones that are the most widely used internationally, nor are they easily deployed for crowdmapping efforts.

When making use of crowdsourcing and mapping applications to track incidents, such as during an election, a large amount of data is collected and can be shared with a wider community. But what happens to that data? Simply putting a map on a governance- or crisis response-focused project does not ensure the continuity and sustainability of a project. Instead, defining an approach to make greater use of the collected information can help strengthen follow-on activities beyond the event date. Establishing a bigger-picture strategy and then incorporating ICT elements as they fit makes for more effective projects, rather than creating “technology-first” projects that address political and social considerations only after the tools.