Paul Currion wrote a fairly lengthy guest piece on MobileActive's blog (http://mobileactive.org/how-useful-humanitarian-crowdsourcing) last week. Unfortunately, for the most part he wasn't even criticizing the right people, software or communications.
From January to April this year I coordinated the translation, geolocation and categorization of emergency text messages sent in Haiti in the wake of the January 12 earthquake. This was the first step in the only emergency response service available to people within Haiti during this critical period. I worked with some amazing people during this process, including thousands of Kreyol speakers around the world who translated these messages as they arrived, categorized them and clicked on a map where they knew locations to be. When I say that I was 'coordinating' this, what I really mean is that I was the cheer-squad and tech-support for people across the globe who had come together to help, many of them for weeks on end, to do what the majority of the responders could not: translate from Kreyol to English and identify locations not yet labeled on any map. The messages were streamed back to the emergency responders within Haiti. Shortly after launch, the messages were also taken up by a group of volunteers at Tufts University, who had been mapping crisis information since the day of the earthquake. They added many of these messages to a public map, in many cases refining the coordinates and working with the responders to flag 'actionable' messages. The group at Tufts were using the Ushahidi platform to do this, primarily because Patrick Meier of Ushahidi was there. We were brought together by Josh Nesbit of FrontlineSMS:Medic and there were many other organizations involved (see http://www.mission4636.org/history/ – I'm with Energy for Opportunity), but it is easy to see why Ushahidi received the lion's share of publicity: they were the public face of the effort, hosting the public map and providing the media-friendly images of students working together.
It is understandable that the press have been wrong in thinking that this was the extent of the effort. I have never bothered to correct the press when they have attributed (and in some cases awarded) our individual or collective achievements to the wrong people (or a subset of the right people) because the overall message was spot on: you can make a real-time difference from anywhere in the world. But it is a real disappointment to see someone in the humanitarian space make the same error and unintentionally spread bad information within our community, so it is worth correcting.
Currion’s article was a review of Mission 4636, undertaken with evidence from the popular press and a data-dump from the Ushahidi-Haiti map. It went no deeper than this – he did not contact those of us working on the system or those responding to the messages. He makes an excellent point that only people on the ground can define exactly what constitutes ‘actionable’ data, but then rests his entire argument on a scenario where he simply imagines himself on the ground.
This is not how you carefully review an information system, and it is not surprising that his assumptions were way off. He presupposed that most of these messages were not actionable and that there was no capacity to respond. In fact, the majority of the messages he read were actionable; it was the responders on the ground who rightly defined what was 'actionable'; and within two weeks of the earthquake they told us they had the capacity to respond to more messages than were coming through our system.
This should be enough to simply ignore his post, but ignoring it doesn't help anyone trying to do a better job. Like Currion, I have worked in information management for about 10 years. That alone does not qualify me to review information systems. As part of my work, I have also trained and worked as a systems analyst. That does qualify me. Reviews of humanitarian organizations should take place in private for a number of reasons, not least of which is people's willingness to be honest and open when they are not subject to the scrutiny of commentators working with partial information. I know of half a dozen people/orgs currently conducting critical reviews of crowdsourcing for humanitarian response, mostly very detailed studies about specific aspects of translation and geolocation. They are rightly not blogging as they go. It is very flattering that commentators on the web want to get involved too: systems analysis is not very exciting and won't get CNN banging on your door. A methodology for critical analysis like Currion's would not pass review by engineers in our field, but that doesn't (and perhaps shouldn't) stop Currion and those like him from reporting their opinions through other avenues like blogging. So here are some small (non-exhaustive) pointers on how to conduct reviews of information management systems, with reference to a few of the misconceptions in the original article:
– Review all data. The publicly downloadable data from Ushahidi Haiti (about 3,000 records) is only about 1-2% of all the structured data that went through Mission 4636. There were 80,000 messages to 4636; the public data doesn't contain the 'actionable' flag; and it doesn't contain phone numbers, which are one part of identifying the source of information to establish veracity (and allow a quick 'tell us more' response). It would also be ideal to get as close to the actual reports as possible: strictly speaking, Currion did not read the messages sent to 4636, he read the (crowdsourced) translations. (A sketch of the kind of record a full review would need to examine appears after this list.)
– Establish use-cases with people who used the data. I don't know of any aid workers/emergency responders who simply took data dumps in the manner reported in the article. Those who passively received the data took category- and/or geography-specific reports in real time as they were posted. Actionable items were identified in conjunction with the responders. As Currion said, only people on the ground can define actionable data and the format in which to receive it, which is exactly what happened – why would anyone who knows about humanitarian response assume otherwise? It was misleading to report otherwise, and it created a false dichotomy between us and the broader response community. For example, we collected reports of unaccompanied minors according to specifications given by OXFAM. For obvious reasons, reports of (and sometimes from) unaccompanied minors were excluded from the public map/data, so these are not in the publicly available data-dump. We were able to turn unstructured messages in Haitian Kreyol into structured, geolocated reports with English translations (and return numbers), taking the burden of filtering, translating and structuring reports off the already overstretched workers within Haiti.
– Talk to stakeholders. The article focused on water management. Individual requests for water through 4636 were defined as actionable by the responders, as were clustered requests for food, especially in areas unknown to aid workers at the time of the report. Currion argued that he would not have used these reports for water management; I don't know which would have been the better strategy. It is fine to construct hypothetical situations within your own area of expertise, but for real, past events it might be useful to ask what really happened. If someone within Haiti sent a request for water to 4636, the translation of that message ended up in the hands of one of the responders (in this case, Southern Command), often within just a few minutes. My understanding is that their main concern was guarding against disease outbreaks, but you should ask them about specific response cases and the exact action taken. We are extremely fortunate that we did not see the current cholera outbreak during this period, and it is the people in charge of water management that we have to thank. I am sure they were not idle and were carefully balancing different information sources, so we can presume that they were not wasting their own time when they requested more information from the crisis-affected population.
– Establish appropriate metrics for evaluation. The raw number of messages is not really relevant – the majority of calls to 911 are not emergencies either. It is better to evaluate the uniqueness of individual reports compared to what other information is available (a rough sketch of such a metric follows after this list). For example, the PakReport instance Currion cites as a failure for having too few records is actually the most detailed map of village-level assessment reports, thanks to the efforts of crowdsourced volunteers outside Pakistan who geolocated text reports. We received thanks from MapAction within Pakistan for this last week: "we are in desperate need for better information on village locations, so this is perfect!" I extend this thanks to everyone who helped map these reports! To the doubters who voiced skepticism and did not help, I ask you to please swallow your pride, join us, and contribute just a few hours to help next time.
– Consider multiple interpretations before drawing conclusions. Currion did not find the Ushahidi map to be useful. The director of FEMA called it the most accurate map of the crisis. Neither analysis negates the other, and I respect both positions equally. But perhaps this should give pause before anyone draws the broad conclusion that the entire deployment was not useful, and the even broader conclusion that crowdsourcing is not useful.
– Understand the user community. The biggest users of this system were not humanitarians, they were Haitians. A trickle of messages ended up on the public Ushahidi map and a river went through the entire system, but these pale in comparison to the absolute flood of messages between the crisis-affected population and their friends and relatives outside of Haiti. Members of the Haitian community abroad were using the map when in contact with friends and relatives within Haiti who only had cellphones. They were directing people to the nearest locations where they could obtain food and explaining the system for obtaining it (e.g. "there is a food distribution point 1 kilometer north, and you can only collect food for so many people"). The volunteers I was working with did this for many of the people who texted 4636, too – it was a community helping itself, and this was an order of magnitude greater than anything we in the humanitarian world achieved. People would not have been silent if there was no 4636 service or up-to-date crisis map: they would have been trying to help themselves through whatever information and communication means possible, so the more we can systematize this information, the more we can directly aid the crisis-affected population by helping them help themselves. These are the hardest statistics to quantify, because the people helped this way are precisely those who did not need to clog up the information and aid channels on the ground.
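To make the first pointer above concrete, here is a minimal sketch, in Python, of the kind of record a thorough review would need to examine. The field names are hypothetical (this is not the actual Mission 4636 or Ushahidi schema); the point is simply that the public download exposes roughly the translation, categories and coordinates, while the 'actionable' flag and the sender's phone number were never part of it.

```python
# Illustrative only: hypothetical field names, not the real Mission 4636 schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StructuredReport:
    original_text: str            # raw SMS in Haitian Kreyol (not in the public dump)
    translation: str              # crowdsourced English translation (what the dump contains)
    categories: List[str]         # e.g. food, water, medical, unaccompanied minor
    latitude: Optional[float]     # geolocated by volunteers where a location was identifiable
    longitude: Optional[float]
    actionable: Optional[bool]    # flagged in conjunction with responders; not in the public dump
    sender_phone: Optional[str]   # used to verify sources and send "tell us more" replies;
                                  # stripped from the public dump for privacy
```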
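And as a rough illustration of what I mean by 'uniqueness' as a metric (the function names and the 1 km threshold here are assumptions for the sketch, not how any actual evaluation was run): measure what fraction of crowdsourced reports describe locations that no other available dataset already covers.

```python
# Sketch of a 'novelty' metric: the share of crowdsourced reports whose locations
# are not already covered by other available data. The threshold is an assumption.
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle distance in kilometers between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def novelty(report_coords, baseline_coords, threshold_km=1.0):
    """Fraction of reports farther than threshold_km from every already-known location."""
    if not report_coords:
        return 0.0
    novel = [r for r in report_coords
             if all(km_between(r, b) > threshold_km for b in baseline_coords)]
    return len(novel) / len(report_coords)
```

By a measure like this, a few hundred geolocated village-level reports can be worth far more than tens of thousands of reports that duplicate what responders already know.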
As for my opinion of the Ushahidi platform itself? I have never really used a Ushahidi deployment directly, so I am not qualified to comment. In Haiti, the translation platform was originally hosted on an Ushahidi server by a very talented Ushahidi developer, but the code was adapted from a missing-persons platform, and we later transferred this service to CrowdFlower. A few weeks after the transfer to CrowdFlower, we staggered the transfer of the translation service from volunteers to paid Samasource workers within Haiti. Maybe 90% of all work on what some people have called 'Ushahidi Haiti' did not take place on Ushahidi. But even that misses the point: it was the people, not the technology, that made the biggest difference. I have worked with people from the Ushahidi team twice – in Haiti and in Pakistan. They are among the most professional software developers I have ever worked with, and also among the first to talk about the limitations of their own platform (http://blog.ushahidi.com/index.php/2010/05/19/allocation-of-time-deploying-ushahidi/). The press might be reporting that Ushahidi is the solution for all the world's problems, but I have not observed this behavior in the organization itself. When people in Pakistan needed a (slightly different) crowdsourcing platform for part of the PakReport initiative, the first thing Ushahidi did was admit this and reach out. This is certainly not the behavior of an organization that believes it has a broad solution to all problems. On the back of my experience with their staff, I would certainly like to see them expand.
It is easy to get caught up in our own bubbles and overestimate the level of exposure that Ushahidi has received. Even within the crowdsourcing parts of industry and academia, most people have not heard of Ushahidi. Those who have might struggle to remember what they read about them many months ago. Just as many would ask me if 'Ushahidi' is that new conveyor-belt sushi restaurant (which is a form of crowdsourcing, I suppose). It is not that they have received too much attention, just that the rest of us receive very little, but that is business as usual. It worries me that at least some of the criticism they have received is the result of simple jealousy. We should be happy that an organization with the same goal as many of us is getting recognition.
I understand the concern about allocating resources, so here's a simple comparison. One of the leaders of the search-and-rescue teams in Haiti told me that the average cost per successful rescue was about $1,000,000. If we had paid for every 4636 message to be translated, categorized, mapped and flagged as actionable/non-actionable, it would have cost $200,000-$300,000. In other words, the entire ecosystem (which was 50 times larger than Currion estimated) would probably have cost about one quarter as much as a single search-and-rescue success. Whatever percentage of that million dollars went to gathering intelligence, this system would pay for itself very quickly. In the case of Haiti, it was overwhelmingly a volunteer effort – the first months were free, providing actionable data and supporting the crisis-affected community. In addition to the dozens of lives we know we helped save, and the hundreds that the responders assured us we helped save, we took data-structuring off the hands of those within the crisis-affected region in a way that was a net gain in monetary resources. Funding initiatives like this to be even more scalable, and just as importantly more prepared, is an obvious allocation of future resources.
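For anyone who wants to check the arithmetic behind that comparison, here is a back-of-the-envelope version (every figure is an approximation quoted above, not an audited cost):

```python
# Rough arithmetic only; all figures are the approximations quoted in the post.
messages = 80_000                                   # messages sent to 4636
paid_cost_low, paid_cost_high = 200_000, 300_000    # USD, if all the work had been paid for
rescue_cost = 1_000_000                             # USD, approximate average per successful rescue

print(f"per message: ${paid_cost_low / messages:.2f} to ${paid_cost_high / messages:.2f}")
print(f"whole pipeline vs one rescue: {paid_cost_low / rescue_cost:.0%} to {paid_cost_high / rescue_cost:.0%}")
```

That works out to a few dollars per message, and 20-30% of the cost of one rescue for the entire pipeline.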
All crisis response is an exercise in failure. We cannot help everyone who needs help, so we are simply trying to find the best ways to fall short. Hundreds of lives saved and tens of thousands of people receiving aid sounds large. To the people we helped it was everything, but on the scale of the whole crisis it was small. However, for a short time the ability to respond to requests for help was wider than at any point in Haiti's past: even child-births reported through 4636 were being responded to. This level of response never occurred with the '114' emergency reporting service, which became inoperable at the time of the earthquake, or at any other time in Haiti's history. Hopefully, it will again in the future. Response systems are evaluated on the entire effort, not individual cases, but it is heartening to think about the handful of mothers who reached out to 4636 for help and received a level of aid that would have been beyond their expectations even before the earthquake.
One beauty of crowdsourcing is that anyone can step up to help, even if it is just tagging locations on a map, and this can truly have a multiplier effect for those on the ground. So please step up if you wish to help.
October 28th, 2010 at 9:42 pm
Please clarify the average cost of a successful search and rescue. $1 million?
October 28th, 2010 at 11:54 pm
Fair point, but if you are so concerned about inaccurate analysis, you should criticise both the positive analysis (which gives the impression that Ushahidi is the be-all and end-all, to the detriment of other systems that may be better and adopted by the public) and the negative (which gives the impression it's a waste of time). Both inaccurate analyses do the humanitarian efforts a disservice, and both should be equally rejected.
October 29th, 2010 at 12:21 am
Nice post.
BTW: MobileActive seems to be down at the moment – here’s the cache:
http://webcache.googleusercontent.com/search?q=cache:Zsmdi-PuuOIJ:mobileactive.org/how-useful-humanitarian-crowdsourcing+mobileactive.org+currion&cd=3&hl=en&ct=clnk&gl=au
October 29th, 2010 at 9:32 am
[…] This post was mentioned on Twitter by Patrick Meier, Josh Nesbit, caitlin klevorick, Katie Dowd, anahi ayala iacucci, CrisisCommons and others. Josh Nesbit said: “Evaluating crowdsourcing for humanitarian response” – @WWRob on 4636, @ushahidi, efforts in Haiti: http://bit.ly/crwdeval […]
October 29th, 2010 at 1:08 pm
Robert – thanks for the comprehensive reply to my original post. I’m a little puzzled why you feel that I’m “pouring scorn” on the work of Ushahidi, but the bulk of your article is exactly the sort of discussion that I was hoping for from people like you, people actually developing these tools. I’ll read your response and post a reply as soon as I can.
October 29th, 2010 at 6:07 pm
Thank you for the responses all!
David: yes, US$1 million was the approximate average cost. The more we can do to concentrate resources on the physical rescues themselves, the better.
Peter: For the reasons I gave above, I’m not taking it upon myself to correct everyone who is incorrect, but it looks like Ushahidi themselves are keen to debunk at least some overly-positive accounts (http://irevolution.wordpress.com/2010/06/16/think-again/). Currion singled himself out by claiming the “expertise to dig deeper” but not displaying it. However, your general point is spot on: poor analyses can be equally detrimental, regardless of whether they are perceived as positive, negative, left, right, liberal, conservative, etc.
Will: thanks! It looks like their website is now active (their phones always were).
Paul: My article is not an exercise in generating discussion, or itself an analysis. It is simply correcting an error. When composing your reply, please do take into account that you have already spent much more time writing about crowdsourcing initiatives than contributing to them. This has the potential to be a distraction from the already active discussions and critical analyses that are correctly not taking place on blogs.
October 30th, 2010 at 2:32 pm
Robert: I’m sorry to hear that your article wasn’t an exercise in generating discussion, but perhaps you could direct me to where the active discussions you mention are taking place so I can contribute more constructively?
October 30th, 2010 at 5:59 pm
I did have one question: you point out that the Ushahidi data was only 1-2% of the structured data that went through 4636. What was the other data?
November 1st, 2010 at 5:40 pm
Paul, the nature of the data should be pretty clear from the numbers above combined with your knowledge of which fields are present in the public data of an Ushahidi platform.
I already pointed out in this post, twice, how you can contribute and requested that you please do: a little less conversation, a little more action.
Again, thank you all for the responses and emails.
November 8th, 2010 at 11:29 am
[…] This post was mentioned on Twitter by Ben Parker, peter Murimi. peter Murimi said: RT @swampcottage: Average cost of Search and Rescue per survivor in Haiti: "$1million" http://bit.ly/cdzNAW […]
January 8th, 2011 at 11:22 am
[…] a networked world, which includes an opening article on Internet Freedom by Alec Ross. My colleague Robert Munro and I were invited to write the following piece: The Unprecedented Role of SMS in Disaster […]
April 25th, 2011 at 3:46 pm
Has any thought been given to how Ushahidi could be used to assist reconstruction efforts?
April 25th, 2011 at 11:07 pm
Hi Chloe
There has been a lot of thought about how mapping strategies can assist reconstruction efforts. OpenStreetMap has been especially active. The Noula platform built by the Haitian company Solutions (http://www.noula.ht/) has much of the same functionality as the Ushahidi platform and is used by a number of national and international orgs. People who worked on Ushahidi Haiti helped with its development, although I don't think it actually uses any Ushahidi code or components (the strategy is more important than the actual technology used, as I'm sure both Ushahidi and Solutions would agree).
For Ushahidi itself, the only deployment in Haiti since then that I am aware of is a non-public one that George Chamales and I deployed for relief organizations in the lead-up to Hurricane Tomas. Different organizations within Haiti held different pieces of vital information: official safe shelters; buildings declared strong enough to survive the hurricane; the distribution/population of people living in camps; and the contact details of community leaders in those camps. The various organizations had not yet agreed on data-sharing technologies (we had to collect some data sources by memory-stick/motorcycle), so we used an Ushahidi instance simply to aggregate all the information (the plugin architecture gave it a slight advantage, but other mapping solutions could have been used too). That way, if it became necessary to evacuate the camps, the powers-that-be could quickly contact the camp leaders, who would in turn direct people to shelters/safe buildings with appropriate capacities. Haiti resident Sabina Carlson was the driving force behind the preparation (http://citizenhaiti.com/2010/11/ioms-sabina-carlson-takes-the-crisis-mappers-conference-by-storm.html). It was also tied into the Information Ministry, which had previously used the Noula platform with success to send out cellphone-tower-specific SMS alerts to people in camps facing imminent flood danger – one of the more innovative uses of SMS in Haiti, and one that has been under-reported in the popular media/blogs.
We were all relieved when Tomas ultimately passed through Haiti much more mildly than was first feared. So the alerts were not sent and this particular instance was not made public.
April 27th, 2011 at 4:28 pm
Thanks for the quick response. The information is really helpful. It sounds like there are a lot of interesting prospects, but that more work needs to be done to promote widespread use.
April 28th, 2011 at 2:27 am
[…] couple of days ago, I posted the following comment on the Jungle Light Speed‘s article “Evaluating crowdsourcing for humanitarian response“. Below is their […]