BPM Futures


Flash-mob BPM?

2011 has been an interesting year on this blog. Despite my having written nothing since February, visits have been consistently higher this year than last – up by about 50% overall – and October saw the BPM Futures blog’s highest visitor numbers ever. Actually writing a new post may therefore not be the smartest of moves – why disturb a winning formula? – but I really couldn’t let the year finish without giving some publicity to a great article on orchestration and crowdsourcing.

The source is the Technology Quarterly in the ever-readable Economist. ‘The return of the human computers’ is a must-read for everyone interested in BPM, even though you won’t find the term ‘BPM’ used anywhere in it. The article explores the opportunities and challenges of managing – think ‘orchestrating’ – large numbers of effectively anonymous workers, all accessing tasks through a single giant in-box. And if that doesn’t grab your attention, you’ve got to love a piece that brings together crowdsourcing, Charles Babbage and the Great Depression. Check it out.

The issue of quality assurance is central to the article but, constrained by space and aimed at a relatively broad readership, it leaves some other questions unanswered. Setting aside contractual, ethical and commercial considerations – I’m sure the web is already full of articles on these topics – two of those questions will be familiar to anyone who has worked on a BPM project. How can service levels be achieved in this environment? And what scope is there to automate task completion?

I had a look at probably the best-known crowdsourcing site, Amazon’s MTurk*, for answers. MTurk can be thought of as the world’s biggest in-box – as I write, there are over 200,000 tasks awaiting completion. The relationship between worker and task can be mediated through ‘Qualifications’, over 7,000 of which are online to test skills and, in effect, to allow workers to select the process/company they want to work for at any given time. And MTurk provides a rich set of APIs that could be called by a BPMS, allowing the BPMS to orchestrate as required. (I should say at this point that I have seen no evidence of any COTS BPMS product being used in this way … yet. Orchestration certainly happens – as the Economist piece illustrates – but the specific underlying technology components are not discussed and, I’d guess, are custom-built.)
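
If a BPMS were orchestrating in this way, the ‘push a task to the crowd’ step might look something like the sketch below. It uses the real MTurk API, shown here through Amazon’s current Python SDK (boto3), which post-dates this post; the task content, reward and identifiers are all invented for illustration.

```python
# A minimal sketch: a BPMS step that pushes a task to the crowd by
# creating a HIT (Human Intelligence Task) through the MTurk API.
# All task details below are invented for illustration.
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

question_xml = """<?xml version="1.0"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>munch-1</QuestionIdentifier>
    <QuestionContent><Text>Categorise this record: ...</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

response = mturk.create_hit(
    Title='Data Munching: categorise one record',
    Description='Categorise a single data record for Munch Corporation',
    Keywords='data, categorisation, munch',
    Reward='0.05',                    # USD per completed assignment
    MaxAssignments=3,                 # plurality answers, for quality assurance
    AssignmentDurationInSeconds=600,  # ten minutes to complete, once accepted
    LifetimeInSeconds=86400,          # visible in the in-box for one day
    Question=question_xml,
)

# The BPMS records the HITId against the process instance so that the
# result can be collected, and the service level tracked, later on.
print('Created HIT', response['HIT']['HITId'])
```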

In the traditional corporate or government BPM world, service levels are typically managed through forced prioritisation of work queues, escalation processes, and resource (workforce) management. The last of these is clearly not available in the crowdsourcing world – rostering adequate numbers of shift workers makes no sense when the entire world is your potential workforce. The question then arises: how do I persuade enough of them to do MY work (let’s call it ‘Data Munching’) when I need it, instead of working on the other 200,000 tasks?

One answer is that I can put together a ‘Data Munching’ Qualification, ensuring that only skilled Data Munchers can do the work and giving them the option of filtering the in-box so that they focus exclusively on Data Munching (yes – result!). Alternatively, they could simply use a free-text filter (e.g. on ‘Munch Corporation’) to limit the viewed in-box to my tasks.
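
For what it’s worth, here is a minimal sketch of setting up such a Qualification through the API – again via boto3, with invented names and a hypothetical worker ID. In practice one would attach a scored Test/AnswerKey (QuestionForm XML, omitted here for brevity) so that workers can qualify themselves automatically.

```python
# Sketch: create a 'Data Munching' Qualification and grant it to a worker.
# The worker ID is hypothetical; a real set-up would attach a scored
# Test/AnswerKey so workers self-qualify rather than being granted it by hand.
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')

qual = mturk.create_qualification_type(
    Name='Data Munching (Munch Corporation)',
    Description='Demonstrated competence at categorising Munch records',
    QualificationTypeStatus='Active',
)
qual_id = qual['QualificationType']['QualificationTypeId']

mturk.associate_qualification_with_worker(
    QualificationTypeId=qual_id,
    WorkerId='A1EXAMPLEWORKERID',   # invented for illustration
    IntegerValue=100,               # their score on my Qualification
    SendNotification=True,
)

# Attached to a HIT as a requirement, the Qualification both restricts
# who may do the work and lets workers filter the in-box down to it.
requirement = {
    'QualificationTypeId': qual_id,
    'Comparator': 'GreaterThanOrEqualTo',
    'IntegerValues': [80],          # minimum acceptable score
}
```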

MTurk offers a range of levers for worker retention – reward levels, of course; prompt payment; fair quality-assurance acceptance/rejection rules; even discretionary bonuses. It’s not immediately obvious to me, though, how I would attract my ‘Data Munchers’ to my Qualification and tasks in the first place.
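
Most of those levers are themselves just API calls. A sketch of the review step – prompt, fair acceptance plus a discretionary bonus on top – might look like this, with the QA rule stubbed out and the HIT ID carried over from the earlier step:

```python
# Sketch: retention levers as API calls - prompt review of submitted
# work, fair acceptance/rejection, and a discretionary bonus on top.
import boto3

mturk = boto3.client('mturk', region_name='us-east-1')
hit_id = 'HIT_ID_FROM_EARLIER'      # recorded when the HIT was created

def passes_qa(answer_xml: str) -> bool:
    # Placeholder QA rule; in practice, compare against gold-standard
    # answers or a plurality of other workers' submissions.
    return 'munch' in answer_xml.lower()

submitted = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=['Submitted'])['Assignments']

for assignment in submitted:
    if passes_qa(assignment['Answer']):
        # Pay promptly - slow payment is a retention killer.
        mturk.approve_assignment(AssignmentId=assignment['AssignmentId'])
        # A small discretionary bonus for consistently good workers.
        mturk.send_bonus(
            WorkerId=assignment['WorkerId'],
            BonusAmount='0.02',
            AssignmentId=assignment['AssignmentId'],
            Reason='Consistently accurate Data Munching',
        )
    else:
        mturk.reject_assignment(
            AssignmentId=assignment['AssignmentId'],
            RequesterFeedback='Answer did not pass quality assurance',
        )
```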

Also, once a worker is focussed on Data Munching, how would they know to do the most urgent tasks first? Sorting is the general answer. MTurk, for example, provides six sort criteria (each ascending or descending), including ‘Creation Date’ and ‘Reward Amount’. So provided I mapped my task prioritisation rules onto one of these criteria, ‘my’ workers would have the option of sorting the in-box accordingly, giving me the basis for service level management. Of course ‘option’ is the important word there – many operations managers have pondered the benefits of a free choice of in-box access versus locked-down ‘push’ workflow. ‘Cherry picking’ is always a consideration.
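
One hypothetical way of doing that mapping – and I stress this is my illustration, not an MTurk feature – is to price each task according to its remaining SLA time, so that workers sorting by ‘Reward Amount’ (descending) see my most urgent work first:

```python
# Hypothetical urgency-to-reward mapping (an illustration, not an MTurk
# feature): tasks closer to their SLA deadline pay more, so a worker
# sorting the in-box by 'Reward Amount' (descending) does them first.
from datetime import datetime, timezone

BASE_REWARD = 0.05   # USD, for a relaxed deadline
MAX_REWARD = 0.15    # USD, for a deadline under an hour away

def urgency_priced_reward(deadline: datetime) -> str:
    hours_left = (deadline - datetime.now(timezone.utc)).total_seconds() / 3600
    if hours_left <= 1:
        reward = MAX_REWARD
    elif hours_left >= 24:
        reward = BASE_REWARD
    else:
        # Linear ramp from 24 hours out (base) to 1 hour out (maximum).
        reward = BASE_REWARD + (MAX_REWARD - BASE_REWARD) * (24 - hours_left) / 23
    return f'{reward:.2f}'   # MTurk's create_hit takes Reward as a string
```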

An entirely different – and perhaps easier – approach to meeting service levels (particularly tiered service levels) in this environment would be to use a traditional captive workforce alongside the crowd. Particularly urgent or important tasks, and tasks the crowd has not completed in a timely manner, could be allocated to that workforce automatically – an easy enough job for a BPMS to manage.
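
As a sketch, the BPMS side of that hybrid could be little more than a timer that fires at the SLA deadline, pulls the task back from the crowd if nobody has completed it, and drops it on an internal queue (a plain list stands in for that queue here):

```python
# Sketch of the hybrid model: at the SLA deadline the BPMS withdraws an
# uncompleted task from the crowd and re-queues it for the in-house team.
import boto3
from datetime import datetime, timezone

mturk = boto3.client('mturk', region_name='us-east-1')
internal_queue = []   # stand-in for the captive workforce's work queue

def escalate_if_sla_missed(hit_id: str) -> None:
    done = mturk.list_assignments_for_hit(
        HITId=hit_id,
        AssignmentStatuses=['Submitted', 'Approved'],
    )['Assignments']
    if not done:
        # Nobody in the crowd completed it in time: expire the HIT now...
        mturk.update_expiration_for_hit(
            HITId=hit_id, ExpireAt=datetime.now(timezone.utc))
        # ...and allocate the task to the captive workforce instead.
        internal_queue.append({'hit_id': hit_id, 'reason': 'SLA breach'})
```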

How about task automation? One of the benefits of using a BPMS is that it helps identify – through analysis and measurement – tasks that can be fully automated, and it provides the technical framework to support that automation. Such automation almost always results in faster, and usually cheaper, work processing.

Of course in the corporate world one is in a position to ensure that automation really is adding value. The challenge with the crowd is that some people will try to ‘game’ the system by using ‘bots’ to flood QA processes with sufficient (poor quality or random) results to get a reasonable percentage of acceptances, for which they are paid. This concern appears to be Amazon’s reason for disallowing full automation on MTurk.

Such a prohibition looks like a blunt instrument, though, given that true automation (i.e. with integrity) would provide benefits for businesses and their customers alike, and would be entirely consistent with a free-market model. I wonder whether crowdsourcing providers worry about disintermediation in the event of positive automation experiences. It’s certainly something that can occur on BPM projects – ‘we’ve automated it, so what’s the point of the in-box again?’.

Another interesting angle on this is the identification of opportunities for automation. In the crowdsourcing environment, where work of all sorts is aggregated, the in-box provider is uniquely positioned to spot automation opportunities and, in particular, to carry out accurate cost-benefit analysis on them. Again, I have not been able to find any evidence of anyone bringing automation tools into business processes through opportunities spotted during crowdsourcing, but it would be nice to think that someone out there is improving business process efficiency by doing so.
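
The arithmetic involved need not be sophisticated. A toy payback calculation of the sort an aggregator could run over its own task statistics – all figures invented – might be:

```python
# A toy payback calculation over aggregated task statistics: how many
# months before automating a task type pays for itself? Figures invented.
def automation_payback_months(tasks_per_month: int,
                              crowd_cost_per_task: float,
                              automatable_fraction: float,
                              build_cost: float) -> float:
    # Only the share of tasks the automation can handle stops going
    # to (and being paid to) the crowd.
    monthly_saving = tasks_per_month * crowd_cost_per_task * automatable_fraction
    return build_cost / monthly_saving

# e.g. 50,000 tasks a month at $0.05 each, 90% automatable, $15,000 to build:
print(automation_payback_months(50_000, 0.05, 0.90, 15_000))  # ~6.7 months
```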

OK – I’ll leave it there. Crowdsourcing has been going for some years now, and there are plenty of blogs and provider sites out there**. This post is – clearly – not intended as an expert view on crowdsourcing orchestration, but simply as another way to engage a different community – BPMers – with a topic that seems to have considerable overlap with our own.

* Other similar sites I came across in researching this piece were Microworkers and uTest (for software testing).
** The MTurk site acts as a portal for a variety of value-add businesses, some of which look like they would have significant experience in the topics briefly discussed here. ScalableWorkforce.com’s site is particularly interesting.