2011 has been an interesting year on this blog. Despite my having written nothing since February, visits have been consistently higher this year than last – overall by about 50% – and October saw the BPM Futures blog’s highest visitor numbers ever. Actually writing a new post may therefore not be the smartest of moves – why disturb a winning formula? – but I really couldn’t let the year finish without giving some publicity to a great article on orchestration and crowdsourcing.
The source is the Technology Quarterly in the ever-readable Economist. ‘The return of the human computers’ is a must-read for everyone interested in BPM, even though you won’t find the term ‘BPM’ used. The article explores the opportunities and challenges of managing – think ‘orchestrating’ – large numbers of effectively anonymous workers, all accessing tasks through a single giant in-box. And if that doesn’t grab your attention, you’ve got to love a piece that brings together crowdsourcing, Charles Babbage and the Great Depression. Check it out.
The issue of quality assurance is central to the article but, constrained by space and aimed at a relatively broad readership, it leaves some other questions unanswered. Leaving aside contractual, ethical and commercial considerations – I’m sure the web is already full of articles on these topics – two of these questions will be familiar to anyone who has worked on a BPM project. How can service levels be achieved in this environment? What scope is there to automate task completion?
I had a look at probably the best known crowdsourcing web site, Amazon’s MTurk*, for answers. MTurk can be thought of as the world’s biggest in-box – as I write there are over 200,000 tasks awaiting completion. The relationship between worker and task can be mediated through ‘Qualifications’, over 7,000 of which are online to test skills and, in effect, allow workers to select the process/company that they want to work for at any given time. And MTurk provides a rich set of APIs that could be called by a BPMS, allowing the BPMS to orchestrate as required. (I should say at this point that I have seen no evidence of any COTS BPMS product being used in this way … yet. Orchestration certainly happens – as the Economist piece illustrates – but the specific underlying technology components used are not discussed and, I’d guess, are custom-built.)
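To make the orchestration idea concrete, here is a minimal sketch of how a BPMS task step might publish work to MTurk. The function shape mirrors the parameters of MTurk’s real CreateHIT operation (as exposed by AWS’s boto3 client), but the task name, reward and Qualification id are purely illustrative – this is an assumption-laden sketch, not a tested integration.

```python
# Sketch: a BPMS task step publishing a unit of work to MTurk.
# 'Data Munching' and the qualification id are illustrative, not real.
# The dict matches the parameter shape of boto3's MTurk create_hit call.

def build_hit_request(title, reward_usd, qualification_type_id,
                      max_assignments=1, lifetime_seconds=3600):
    """Build the parameter dict for MTurk's CreateHIT operation."""
    return {
        "Title": title,
        "Description": f"Complete one '{title}' task for Munch Corporation",
        "Reward": f"{reward_usd:.2f}",          # MTurk expects a string
        "MaxAssignments": max_assignments,
        "AssignmentDurationInSeconds": 600,
        "LifetimeInSeconds": lifetime_seconds,
        # Only workers holding the 'Data Munching' Qualification may accept
        "QualificationRequirements": [{
            "QualificationTypeId": qualification_type_id,
            "Comparator": "Exists",
        }],
    }

request = build_hit_request("Data Munching", 0.25, "3XAMPLEQUALID")

# In a live orchestration the BPMS would now call, e.g.:
#   import boto3
#   client = boto3.client("mturk")
#   hit = client.create_hit(**request)
# and store hit["HIT"]["HITId"] against the process instance,
# polling later for completed assignments.
```

The point is simply that the whole exchange is a handful of API calls – well within the reach of any BPMS with a service connector.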
In the traditional corporate or government BPM world, service levels are typically managed through forced prioritisation of work queues; escalation processes; and resource (work force) management. The last of these is clearly not available in the crowdsourcing world – rostering adequate numbers of shift workers makes no sense where the entire world is your potential workforce. However, the question arises – how do I persuade enough of them to do MY work (let’s call it ‘Data Munching’) when I need it, instead of working on the other 200,000 tasks?
One answer is that I can put together a ‘Data Munching’ Qualification, ensuring that only skilled Data Munchers can do the work and giving them the option of filtering the inbox so that they focus exclusively on Data Munching (yes – result!). Alternatively they could simply use a free text filter (eg on ‘Munch Corporation’) to limit the viewed Inbox to my tasks.
In MTurk there are a range of levers to pull for worker retention – reward levels, of course; prompt payment; fair quality assurance acceptance/rejection rules; even discretionary bonuses. It’s not immediately obvious to me how I would attract my ‘Data Munchers’ to my Qualification and tasks in the first place, though.
Also, once a worker is focussed on Data Munching, how would they know to do the most urgent tasks first? Sorting is the general answer. MTurk, for example, provides six sort criteria (each ascending or descending), such as ‘Creation Date’ and ‘Reward Amount’. So, provided I mapped my process onto one of these criteria, ‘my’ workers would have the option of sorting according to my task prioritisation rules, providing me with the basis for service level management. Of course ‘option’ is an important word there – many operations managers have pondered the benefits of a free choice of inbox access versus locked-down ‘push’ workflow. ‘Cherry picking’ is always a consideration.
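The mapping is easy to picture. In the sketch below (task fields and values are made up for illustration), urgency is encoded in the reward, so a worker sorting the inbox by ‘Reward Amount’ descending happens to see the tasks in exactly my preferred order, while ‘Creation Date’ ascending gives a plain oldest-first discipline:

```python
# Sketch: mapping a process's prioritisation rule onto a worker-side sort.
# If urgency is priced into the reward, the crowd's 'Reward Amount'
# sort doubles as my priority queue.

from datetime import date

tasks = [
    {"title": "Data Munching", "created": date(2011, 12, 1), "reward": 0.25},
    {"title": "Data Munching", "created": date(2011, 12, 5), "reward": 0.60},
    {"title": "Data Munching", "created": date(2011, 12, 3), "reward": 0.40},
]

# 'Reward Amount' descending == my urgency order, most urgent first
by_urgency = sorted(tasks, key=lambda t: t["reward"], reverse=True)

# 'Creation Date' ascending == oldest-first (FIFO) service discipline
by_age = sorted(tasks, key=lambda t: t["created"])
```

Either way the sort is the worker’s choice, not mine – which is exactly the cherry-picking risk noted above.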
An entirely different – and perhaps easier – approach to meeting service levels (particularly tiered service levels) in this environment would be to have the option to use a traditional captive work force as well as the crowd. Both particularly urgent/important tasks and those tasks not completed by the crowd in a timely manner could be automatically allocated to this work force instead of the crowd – an easy enough job for a BPMS to manage.
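The escalation rule a BPMS would apply here is simple enough to sketch. The thresholds and field names below are illustrative assumptions, not anything a real product prescribes:

```python
# Sketch of a hybrid crowd/captive routing rule: urgent tasks, and crowd
# tasks that have sat unclaimed past a deadline, go to the captive
# workforce. The 4-hour threshold is an illustrative assumption.

from datetime import datetime, timedelta

CROWD_DEADLINE = timedelta(hours=4)

def route(task, now):
    """Return 'captive' or 'crowd' for a task dict."""
    if task.get("priority") == "urgent":
        return "captive"                  # never risk the SLA on the crowd
    if now - task["submitted"] > CROWD_DEADLINE:
        return "captive"                  # the crowd missed it; pull it back
    return "crowd"

now = datetime(2011, 12, 20, 12, 0)
routing = route({"priority": "normal",
                 "submitted": now - timedelta(hours=1)}, now)
# a one-hour-old, non-urgent task is still within the crowd window
```

Run periodically against the open task list, a rule like this gives tiered service levels without ever asking the crowd to honour a deadline.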
How about task automation? One of the benefits of using a BPMS is that it helps identify – through analysis and measurement – tasks that can be fully automated, and it provides the technical framework to support that automation. Such automation almost always results in faster, and usually cheaper, work processing.
Of course in the corporate world one is in a position to ensure that automation really is adding value. The challenge with the crowd is that some people will try to ‘game’ the system by using ‘bots’ to flood QA processes with sufficient (poor quality or random) results to get a reasonable percentage of acceptances, for which they are paid. This concern appears to be Amazon’s reason for disallowing full automation on MTurk.
Such prohibition looks like a blunt instrument, though, given that true automation (ie with integrity) would provide benefits for businesses and their customers alike, and would be entirely consistent with a free-market model. I wonder whether crowdsourcing providers have concerns about disintermediation in the event of positive automation experiences? It’s certainly something that can occur on BPM projects – ‘we’ve automated it, so what’s the point of the inbox again?’.
Another interesting angle on this is the identification of opportunities for automation. In the crowdsourcing environment, where work of all sorts is aggregated, the inbox provider is uniquely positioned to spot automation opportunities and, in particular, to carry out accurate cost-benefit analysis on them. Again, I have not been able to find any data on anyone bringing automation tools into business processes through opportunities spotted during crowdsourcing. However it would be nice to think that someone out there is improving business process efficiency by doing so.
OK – I’ll leave it there. Crowdsourcing has been going for some years now, and there are plenty of blogs and provider sites out there**. This blog is – clearly – not intended as an expert view on crowdsourcing orchestration, but simply as another way to engage a different community – BPMers – with a topic that seems to have considerable overlap with our own.
* Other similar sites I came across in researching this piece were Micro-Workers and U-Test (for software testing).
** The MTurk site acts as a portal for a variety of value-add businesses, some of which look like they would have significant experience in the topics briefly discussed here. ScalableWorkforce.com’s site is particularly interesting.
The Utzon Room in the Sydney Opera House was the packed venue this week for the Asia-Pac launch of tibbr, the new Enterprise Social Network product from Tibco.
Tibco held earlier launch events in San Francisco and London and as a result the blogosphere is already awash with functional commentary, which I’m not going to duplicate. So for a good functional summary check out David Terrar of Enterprise Irregulars, then step it up a notch with Futurist Ross Dawson (who spoke at the launch event).
These blogs – and the presentation that I attended – make it clear that tibbr is a product that can be used by anyone but is of particular benefit to knowledge workers. As a lifelong knowledge worker and manager of knowledge workers, I feel that this is a technology that I can comment on as a potential user/customer, for a change.
So here goes – if you are the CEO of a company that relies on knowledge workers, then you should simply direct your CTO to do due diligence (are tibbr’s adaptors right for you?) and, subject to that, go ahead and buy. Parts of your organisation have almost certainly already been trying to improve the management of their information overload through the use of ‘wiki’ or similar technologies for years – cheap, open source products, well-meaning but generally ghastly to use – the digital equivalent of your staff putting up posters to cheer up an office that has peeling paint and unsafe electrics.
If you commit to tibbr – or a similar product* – your team will see you as a leader who recognizes the fact of their daily information overload and wants to help them overcome it. They will need little training in the familiar user interface, and there will be ample ‘soft’ benefits as they avoid errors of omission (‘if only I’d known about that I would have …’); react to important news faster; and provide better outcomes through a higher quality of collaboration (and, yes, synergy). Championing it shows respect for their efforts and the challenges that they – and probably you too – face.
If, on the other hand, you or your Board require proof of ‘hard’ benefits, complete with a list of FTEs that will be freed up, you may have a long wait. The folk who will benefit most from this product – your sales team, design engineers, marketing department, lawyers, IT crowd and most of your senior management team – are probably not subjects of your current Six Sigma initiative, so proving time savings will be hard. However for a cost of under $150 per user per year this shouldn’t be insuperable.
The only really significant decision will be how fast and where to roll it out. There are plenty of configuration choices (how are subjects organised, who gets to see what) and there will undoubtedly be learnings and improvements along the way, so starting in one business unit before rolling it out more widely would make sense (this was apparently what launch presenter and reference site Ciber did). And I expect that there will soon be a tibbr user group and the usual swag of books and consultancy to provide guidance (can ‘tibbr for dummies’ be far behind?).
One interesting option would be to consider whether there would be further benefits in encouraging tibbr use up and down your supply chain. Selective sharing of information has been standard for years, so extending that to this space, where process data and discussion naturally come together, could make a lot of sense.
Any risks? ‘Over-transparency’, perhaps. This sort of product has the potential to open up much of the dialogue within your organisation to scrutiny. In most cases this is a very good thing, but it will no doubt sometimes throw up management challenges. In particular expect middle management to come under more pressure as their tibbr participation comes under 360° scrutiny in a way that their emails rarely can.
So a quick decision then, which is handy, since Tibco say that they can have it up and running for you in just hours.
Finally, after you’ve finished talking with your CTO, why not drop a note to your Marketing Manager suggesting the Utzon Room is used for your next customer event – with the wonderful Utzon interior and huge harbour views it’s surely one of the classiest small corporate venues in Sydney?
This post is dedicated to the memory of Charles Brown, Australian BPM pioneer and a friend and colleague to many, particularly in the Staffware world. Charles would have celebrated his 60th birthday this weekend.
* Disclosure – I worked for Staffware, now part of Tibco, for some years. I don’t think that this clouds my judgement but it does mean that a reminder that there may be competing products out there is appropriate.
January in Sydney tends to be a quiet month for business, only really coming to life after Australia Day on the 26th. From the 4th onwards there is a gradual return to workplaces that are abuzz with new projects, new sales opportunities, new marketing campaigns, all scheduled to start towards month end or early February. As a result, energy levels of those at work tend to be higher and minds – less stressed than usual, perhaps – a little more open to reflection. So, with that audience in mind, here is a blog about the bedrock of BPM systems (BPMS), and indeed BPM in general, that is, the reason why any business should bother to use it.
The benefits of using a BPMS can be summarised as:
- Higher productivity
- Faster process times (eg end to end)
- An order of magnitude improvement in process transparency/visibility
- An entry point to a virtuous circle of process understanding, improvement and execution
- Better control over the process (eg through the use of embedded business rules)
- Improved job satisfaction for operational staff using the system
Of these the most significant – in terms of both impact and universality – is undoubtedly process visibility. Businesses that lack a BPMS – or alternatively core or ERP software that fully encompasses their business processes – are like vehicles driven in a thick fog. Data about the past is patchy and unreliable, data about today incomplete and highly reliant upon personal observation, and data about the future seriously compromised. (‘Data about the future’ can be both hard – today’s backlog, how much work is in pending and when it falls due – and relatively soft, for example resource forecasting based upon historic productivity data). Most obviously true of operations teams, this extends further into customer interaction. For example, without accurate end-to-end process times there is little chance of understanding the customer experience except by analysing complaints.
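That ‘soft’ forecasting is worth a concrete illustration. With a BPMS feeding it, the calculation behind a weekly resourcing forecast is just arithmetic over backlog, pending work, forecast inflow and historic productivity – all numbers below are illustrative:

```python
# Sketch: the 'data about the future' a BPMS makes possible.
# All figures are illustrative.

backlog = 1200             # items sitting in today's queues
pending_due_this_week = 300   # diarised/pended work falling due
forecast_new_work = 2500   # from historic arrival rates
items_per_person_day = 40  # historic productivity per FTE
working_days = 5

total_work = backlog + pending_due_this_week + forecast_new_work
people_needed = total_work / (items_per_person_day * working_days)
print(f"{people_needed:.0f} FTEs needed this week")  # prints "20 FTEs needed this week"
```

Trivial as a formula – but without the BPMS, none of the four inputs is reliably available.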
Whilst senior business managers can be passionate in demanding visibility, Boards can demand more. IT investments do not always produce a return and, to continue the fog analogy, we may not be able to see much out the windows but we’ve got this far OK, there is some rear and side vision, and the future’s always going to be largely uncertain.
Which is where the higher productivity benefit comes in handy. Higher productivity in the operations teams can pay for the initial project and the ongoing IT costs, whilst the benefits arising from enhanced visibility (tightly focussed process improvement in operations lowering costs, better informed product development and customer service increasing revenue) are spread across the business.
Some productivity benefits come with the territory – work distribution, management of pended/diarised work and enquiry handling are all areas where the BPMS will automate previously manual tasks, delivering benefits pretty much by default. Other areas like load balancing (between teams or individuals), exception handling (eg duplicate requests) and re-work management can take a little more work to achieve results. Hard benefits from these and other BPMS features are commonly augmented through add-ons such as automated outgoing document management and integration with core systems, which – whilst initially adding to project costs – can be relied on to provide further productivity improvements.
Faster process cycle times, particularly end-to-end times, routinely arise from a BPMS implementation. Naturally, additional focus and effort can drive further improvement. Where tiered service levels are required, the BPMS can use prioritisation to ensure that outcomes are optimised – something that can be especially hard to achieve where processing is manual.
Process control may be a more or less compelling reason to use a BPMS, and may mean high-end technical control and/or regulatory control. Running business processes through a BPMS will prove to the regulator – and to other stakeholders – that the process has been followed in a specific case (through the audit trail), and is followed in general (through sharing the process definition). Automating business rules will ensure 100% compliance with those rules that are automated and typically will make it harder for a careless or rogue employee to break others. It will also provide control where a – highly automated – process must happen so fast that people are no longer able to participate except on exception, that is, straight through processing.
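The two control mechanisms – an embedded rule enforced on every case, and an audit trail recording every action – fit in a few lines. This is a toy sketch with invented case ids and a made-up rule, purely to show the pattern:

```python
# Sketch: embedded business rule + audit trail, the two control
# mechanisms described above. Case ids, actors and the rule itself
# are illustrative.

audit_trail = []

def record(case_id, event, actor):
    """Append an immutable-in-spirit audit event for a case."""
    audit_trail.append({"case": case_id, "event": event, "actor": actor})

def approve_payment(case_id, amount, approver, requester):
    # Automated rule: no one may approve their own request.
    # Because it's in code, compliance is 100% for every case.
    if approver == requester:
        record(case_id, "approval rejected: self-approval", approver)
        raise PermissionError("self-approval is not permitted")
    record(case_id, f"payment of {amount} approved", approver)

approve_payment("C-101", 500, approver="alice", requester="bob")
# audit_trail now shows who did what on case C-101, in order –
# exactly what a regulator asks to see for a specific case
```

The audit trail answers ‘was the process followed in this case?’; the shared process definition answers ‘is it followed in general?’.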
And improved job satisfaction? Well, whilst rarely at the heart of the business case, the evident satisfaction of employees in receiving better tools for their work certainly makes change management that much easier. And doesn’t everyone want happier employees?
OK – that’s it. Your revision for the day is complete. And as a reward, here is my favourite viral video of the holiday – the Brooklyn Space Program. Any connection with BPM? Just inspiration from their ambition and ingenuity, and perhaps aspiration that we could achieve as much with so little. Maybe we need to recruit some younger project team members?
Another IBM Agility seminar at the Shangri-La Hotel, and some BPM announcements. And in contrast with the sunny spring skies warming Sydney’s harbour (for those of you in the northern hemisphere) the best bit in here was the cloud.
But first… Websphere Lombardi Edition is to have drag and drop integration with both FileNet P8 Content Server and Content Manager 8. The extent of the functionality involved wasn’t clear to me – presumably IBM will start with search/retrieval and later move on to others like metadata update and new document insertion? Anyway, further integration will be with Websphere Service Registry and Repository – useful for orchestration purposes – and with iLog, where it will be possible to browse and select an existing Ruleset on a predefined iLog JRules Execution Server.
In the meantime Websphere iLog itself is to be coupled with Websphere Business Events to become Websphere Decision Server, extending IBM’s business events capability, whilst the iLog BRMS SupportPac is to provide Websphere Business Monitoring and predictive analytics integration.
All very worthy, but much less interesting than the next piece of news, which was the launch of Blueworks Live. This combines three elements – the Blueworks BPM collaboration community (blogs, wikis); the highly successful (Lombardi) Blueprint process discovery and definition environment; and a new workflow execution engine. All running in the Cloud and, apparently, available through your browser for a test drive from November 20th. (Yes, that’s this Saturday – perhaps one of the software world’s most specific launch dates ever…!).
Now, Cloud-based BPM is hardly new. Cordys was one of the first to offer it globally, and there are niche players too, such as Australian company OpenSoft, which uses open source products to provide integrated Cloud-based BPM to the burgeoning Australian energy and resources sectors. However, Cloud-based BPM from IBM is something else entirely. IBM’s existing mindshare in the global BPM market and its credibility as a corporate Cloud (and FM) provider mean that the interest in this product will be enormous, and as a result it could well be a game-changer for all BPM stakeholders.
The PowerPoint-based demo that followed included a marketing manager setting up a new process for her latest marketing initiative. Yes, that’s one process for one case/process instance. And if the PowerPoint is to be believed, it only took her a few minutes.
How can this fail? The CIO’s happy because it’s SaaS; the Board because it’s IBM; the Ops Manager is comfortable because it’s running in an IBM Datacentre; the process improvement people have Blueprint to play with; the IT teams can focus on integrated, production BPM system work; and best of all the Business can replace its endless email trails with easy to access, auditable business processes.
So what next? Well, here’s a prediction – Blueworks Live will do for business processes what Microsoft Sharepoint did for enterprise content – it will get everywhere. That means a step change in awareness regarding BPM (how many business – or even IT – people knew of ECM before Sharepoint?) and huge opportunities for BPM professionals to sort out all of those ‘home grown’ processes. Bring it on!
I’m running behind with my blogging. It’s now several weeks since the Pegasystems Business Process Symposium took place here in Sydney, however whilst not quite ‘hot off the press’ the event is easily worth reporting on, even now, for its excellence at three levels – case studies, product and philosophy.
Pega’s philosophy – or at least my understanding of it – puts top priority on ease of use for both developers and end users. This means plenty of functionality that is easy to put together into processes, and thereafter just as easy to maintain. This is a big ask – business processes tend to be complex, and the technology set required to support them is fairly broad – and can only be achieved through a pretty stubborn focus by the vendor.
This philosophy came across quite graphically in a Q&A session towards the end of the day. Alan Trefler, CEO and founder of the company, was asked why Pega wasn’t providing more extended support for custom Java user interface development. Now 9 out of 10 company representatives put in this position would have (a) spoken at length about the support that was already in place and (b) at least implied that further and even more exciting developments were on their way. Not Mr Trefler. He told the questioner that custom Java code was far too slow to develop to be useful in BPM deployments – instead, it was the responsibility of BPM vendors to provide a UI builder, fully integrated with the core product, that was fit for purpose. The Pega roadmap? It would continue to improve the built-in Pega UI builder… and if any customer or prospect felt that there was functionality lacking in it, he would be delighted to make the investment necessary to develop the product further.
Now that’s focus. I have been responsible as a manager – and, going back a few years, as a developer – for BPM implementations with both flavours of UI, native and custom built (ie Java/.Net). From a productivity point of view the native (BPM) UI wins hands-down, both because it is simpler to use and because a single developer can define both the process flow and the accompanying screens together. There is no need for an interface, two sets of data definitions and, worst of all, two different developers each with a slightly different skillset and understanding of the requirements. The native UI has only one catch – without real commitment from the vendor, the UI builder tends to have significant functional gaps. Close those gaps and you have a winner.
On a different topic, he was asked about the rationale for the Chordiant takeover. The answer was interesting in that it emphasised Chordiant’s core differentiator, its predictive and adaptive capabilities, which support more intelligent management of (eg) customer retention, cross-selling and fraud processes. Applying this technology to end-to-end processes, rather than simply the CRM front end, has the potential for significant value-add.
It is perhaps this combination of a practical, experience-based development focus with innovation where it can really make a business impact – rather than simply following the latest technology trend – that explains why Pega tends to have rather interesting case studies. On this occasion it was Mike Efron, eBusiness Manager from Wesfarmers Insurance, who spoke about using Pega to provide a rules- and process-based consumer portal through which Kmart Tyre & Auto Service is selling white-labelled personal lines insurance products. The key here was ‘building for change’ – Pega’s slogan, which this project realised by defining specifically those aspects of the solution that were not required to change, and then leaving it to the system’s designers and the system itself to ensure that everything else could change. He told the audience that once Kmart Tyres was safely live, it took the team just two weeks to change the system sufficiently to support a second ‘white label’ customer.
A second case study that was mentioned at the event was British Airports Authority. This is the sort of innovative case study that refreshes one’s interest in BPM. How many BPM solutions have as their primary input channel not email, not scanned mail … but radar? Rather than my re-writing it, check out Gartner’s take on it here.
The final topic is of course the latest product news. This is well-documented on the Pega site, and the highlights for me were:
– A new Case Management version of the product with a slick user interface and a process architecture that includes effectively unlimited nesting of cases. So a motor claim can include separate sub-processes for vehicle repair and personal injury; the personal injury claims can include separate processes for the several individuals involved, each with multiple different types of injury, and so on. All neatly tied together into the Case Manager’s desktop.
– Other Case Management features include ad hoc tasks, delegation, support for multiple parties and related cases, correspondence management and reporting.
– New Process Designer features that are used for Process Discovery. These are similar to those introduced by a number of other vendors in recent years with the important addition of requirements traceability. I understand this is made available as a cloud service to the Pega Developer Network.
– Project management tools (eg for task, risk and issue management, and including wiki and twitter-like functionality) that use Pega core technology and can be configured to fit the desired SDLC approach (waterfall, agile etc). This looks well-developed enough to use, though the overlap with third party systems is obvious. It’ll be interesting to see how this area develops.
Overall this was an excellent event, showcasing a product that is increasingly differentiating itself from its peers, and was much enhanced by the presence of the CEO himself in Sydney.
I attended an IBM ‘Business Agility’ workshop at Sydney’s Shangri-La Hotel yesterday – the first IBM event to feature BPM that I’ve managed to get to since the Lombardi purchase. It was a Websphere event, which meant that it included Lombardi and excluded FileNet, so I was a little concerned that the BPM section might be dominated by talk of process orchestration and middleware layers, rather than end-to-end processes.
I needn’t have worried. The Websphere team has embraced IBM Lombardi (as we must now know Teamworks) with great enthusiasm, and started a day of real (yes, live) demonstrations with several that showed off Lombardi to good effect. Point and click SLA setup; process stats (such as wait or execution times) displayed through a mouse-over in the unified process model-define-simulate view; colourful monitoring views populated with whatever defined field you required – just click that checkbox on the field definition dialogue; and so on.
There were also Websphere Dynamic Process Edition (Process Server, as was) demos. The emphasis there was on architecture, integration and transactional integrity. The latter featured a high-wire demo, with 100 updates to two databases on separate servers, interrupted by the speaker who pulled out the connecting cable to the second (Oracle, as it happened) with a flourish. 56 updates had been processed successfully and, to the relief of all, the other 44 were in a ‘failed’ queue, from which they were dispatched – to a successful completion – by a single click on the ‘resume’ button once the cable was re-connected. We were told that the product was unique amongst BPMSs in fully supporting two-phase commit, with resume, restart and ‘compensate’ options for system administrators.
All of which provided – to this viewer – a pretty clear, if unspoken, message. For the human side of BPM (the typical financial services back office, perhaps), Lombardi is IBM’s answer, packed with business-friendly features. Alternatively, if the business depends on multiple integration points that require sophisticated sequencing, error handling and recovery options – bullet-proof delivery, in other words – WDPE does the job (telco provisioning comes to mind). And for the business that needs both, well, integration between the two is currently available through web services, with work under way to convert Lombardi to IBM’s Service Component Architecture, the basis of the Websphere product range.
One other demonstrated feature of WDPE that I liked, by the way, is the easy way in which routing rules can be changed without re-deploying (or even opening for editing) the process itself. This seems like an obvious feature, but by no means all BPMSs share it. Isolating the change eliminates the need for system and regression testing and even (depending upon the process design and one’s perception of risk) UAT. Now there’s something that offers Business Agility.
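The principle is worth a sketch. If the deployed process looks routing decisions up in a table held outside the process definition, operations can re-point work at runtime without touching the process at all – the rule keys and queue names below are invented for illustration:

```python
# Sketch: routing rules externalised from the process definition.
# The process calls route_case(); the table can be changed at runtime
# with no redeployment. Keys and queue names are illustrative.

routing_rules = {
    "claim": "claims_team",
    "complaint": "service_team",
}

def route_case(case_type):
    """Look up the target queue for a case; fall back to a default."""
    return routing_rules.get(case_type, "default_queue")

# Operations re-points complaints without touching the deployed process:
routing_rules["complaint"] = "priority_team"
```

Because only the table changed, the scope of testing shrinks to the rule itself – which is the agility point being made above.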
Conversations that include the words ‘Pegasystems’ and ‘buy’ in the same sentence tend to cast Pega as the vendor, so this move has the immediate benefit of surprise.
I’m not going to attempt too much learned reflection on the purchase, since I know very little about Chordiant. However, I do know that CRM / BPM mergers aren’t always easy – Staffware bought a small US CRM vendor in the late 90s (whose original name escapes me now) and my own sense was that in the end this was more distraction than synergy. It’s hard to make a world-beating BPM system and, no doubt, the same goes for CRM. Trying to maintain both of those market positions whilst simultaneously promoting an entirely new ‘CRM+BPM’ market position as well requires a near-superhuman organisation of the engineering teams, not to mention sales & marketing.
What gives me some interest and excitement about Pega is that their vision has for some years been focussed on all-round excellence in BPM, with an emphasis on BPM’s original mission – providing systems that render the exceedingly complex (UI + process + rules + integration + data) sufficiently easy for business process deployment to be affordable and – crucially – for subsequent process change (= agility) to be realistic.
Given this track record, it could just be that Pega will use Chordiant’s technology to push ‘Build for Change’ in quite new and original directions. BPM needs a lift at the moment – perhaps from a ‘should buy’ to a ‘must buy’ – and the vendor that delivers this transition will be richly rewarded. My money to date has been on the big players – IBM, Oracle, SAP – simply because of the size of the challenge in terms of both re-engineering and re-launching into the market. However, perhaps Pegasystems will leverage its intense focus to take the lead, showing the rest of the market the way.
Let’s hope that the fact of the purchase is the least of the surprises Pegasystems has in store.
For more facts and figures, check out this article. Good stuff.
A full house today at Sydney’s Sofitel Wentworth for the first 2010 Tibco User Group meeting. Networking – with cold beer, very civilised – was followed by the corporate positioning pitch and then on to a tour of the 2010 roadmap. Whilst many individual points were of interest, the overall message was clear – we’re all business process specialists now, whether we use BPM tooling as such (Tibco’s iPE) or Complex Event Processing (‘BusinessEvents’) as the ‘pointy end’, the ‘stack’ is all about business process outcomes.
Highlights on the traditional BPM front include new Organisational Modelling extensions to iPE; new Forms options including Google Web Toolkit and Windows Presentation Foundation (the key here will be the depth of integration with the rest of the stack); and new process optimisation functionality arising from the convergence of iPE and Spotfire (the latter being a rich/easy to use BI tool that threatens to be as much loved by the business, and as hard to control by the IT department, as Sharepoint). And tibbr, Tibco’s corporate answer to Twitter, was also featured – very compelling, it takes the concept of ‘following the customer’ to a whole new level.
The session was topped off with a case study from Vodafone Hutchison Australia, a 2009 merger that claims 27% of the Aussie mobile market, and growing fast. Presentations like this, whilst always well-meaning, can be a bit repetitive – we’ve all heard similar ones before. This one stood out in two respects. Firstly, it related the replacement of VHA’s core provisioning and customer service system, handling 100k+ transactions/day, with the Tibco stack in just 6.5 months – this old hand was impressed.
And secondly, VHA used BusinessEvents rather than iPE, despite the latter having a significant track record in the telco space (very high volume provisioning, MNP and others). This remained unremarked in the presentation, and I was unable to reach the front of the queue to speak with their Architect afterwards. I did find myself speaking with one of VHA’s competitors though, who confided that if he had the choice he would really like to use both products, with BE orchestrating iPE. A topic I shall try to delve into further in future blogs….
Isn’t it time we re-thought the definition of BPM? It seems to be getting increasingly jelly-like – wobbling and spreading to encompass every possible interest group. Check this out – the ‘official’ definition (at least until the editing debate settles down) from Wikipedia: “Business process management (BPM) is a management approach focused on aligning all aspects of an organization with the wants and needs of clients.” Presumably contrasting sharply with previous generations of business management methodologies, which focussed on aligning all aspects of an organization with the wants and needs of small furry animals. I love Wikipedia, but if this is the wisdom of crowds, then civilization is surely doomed.
My recollection is that the term BPM came into use in the late 90s as a way for new entrants to the existing workflow automation market (Savvion, Lombardi, Metastorm and Ultimus come to mind) to differentiate themselves from the incumbents. A key aspect of these newcomers was a serious attempt to make the process definition environment more friendly and useful to business process analysts/modellers, for example with simple BPMN flowcharts and built-in simulation.
However, the newcomers could not emphasise this aspect alone, partly because the incumbents already used graphical process definition (albeit, they would argue, a less business friendly version), and partly because their prospective clients were also concerned with other features, such as ease of integration and reporting. So “BPM” became associated not only with built-in business modelling/simulation but also better integration and reporting (think Business Activity Monitoring) – something not contested by workflow automation incumbents, since some already had excellent integration and they could swiftly match any reporting improvements. Very quickly, everyone sold “BPM” products.
And everyone bought them. The tremendous success of BPM technology, particularly in banking/financial services, telcos and the public sector, over the last 12-13 years (since the explosion of new “BPM” products in the late 90s) has had a further, diluting effect on the terminology. BPM projects are now routinely enterprise-scale, and are therefore attracting a spending level significantly higher than most process improvement/modelling initiatives. This in turn means that a much wider constituency of professionals wants to be involved in BPM projects – and often rightly so, given that enterprise roll-outs do require a wider range of skills, particularly in relation to business (process) analysis. Unfortunately, some of these folk are taking positions in relation to BPM terminology that owe much more to their history in process improvement than to the technology that created BPM.
Is this just technology bias? Well, consider your favourite BPM project, and imagine the impact if all of the BPM technology was suddenly removed. What new analytical or process improvement method would remain to distinguish the activities of business improvement folk today from those you might have seen 15 years ago? Six Sigma, Lean and more general process improvement techniques are tremendously important – but have they changed so much in recent years that they constitute a new business management methodology called BPM?
So is BPM defined by technology alone? Think again of your favourite BPM project, and this time remove the entire concept of process improvement, Six Sigma and Lean. What are you left with? I suspect something that looks a lot like workflow automation, at least in its primitive form – a process defined and automated … then the project finishes and the process improvement team (suddenly reappearing) pulls out its Visio charts and starts to negotiate future change with the IT department.
If BPM is to have any substantive and paradigm-changing meaning, it must include both technology and process improvement in a way that reflects their roles and illuminates their synergies. A suggestion:
BPM is the superior state of process management attained when business process analysis and improvement activities are supported by technology workbenches that are themselves deeply integrated with the systems in which the processes are to be executed.
This definition addresses the relationship between BPM, process modelling/simulation, workflow and ERP (and other types of ‘Core’ systems). The following statements become true:
• “BPM” products that do not include deeply integrated workbenches for process modelling, simulation and analysis are not BPM – they are workflow. Nothing wrong with that – many, perhaps most, of the world’s “BPM” implementations to date probably fall into this category and they have provided significant return on investment to their customers.
• Where a workflow product does include deeply integrated workbenches for process modelling, simulation and analysis it is indeed BPM or, perhaps better, ‘BPM-enabled workflow automation’.
• ERP (and other) systems that include deeply integrated workbenches for process modelling, simulation and analysis may also be classified as BPM – or perhaps ‘BPM-enabled ERP’.
• Process improvement professionals can state that they are practising BPM (or process improvement in a BPM environment) if and only if they are using BPM-enabled technology. They might be practising BPM in relation to processes executed in a workflow automation system or an ERP system.
Such a definition would set a standard for all stakeholders and provide a target for both business and vendors to aim at, with a vision of process improvement and execution progressing in harmony. That vision is both radical and ‘back to basics’: radical in the context of where the “BPM”/workflow journey has actually been over the last 20 years – whilst a number of products would pass the BPM test above, many solutions built on them remove the levers that would put process improvement professionals in the driving seat – and ‘back to basics’ in that this is very much what the pioneers of workflow automation envisaged when they introduced ‘graphical process definition’ in the early 90s.
It remains a compelling vision, and one that could drive competitive advantage for those that adopt it in the decade to come. The first step is to collectively recognise the vision and the terminology underpinning it for what they are, and discard all wobbly-jelly BPM definitions along with sub-prime loans and easy credit, as relics of the decade we’ve just left.