BPM Futures


IBM to buy Lombardi

A few thoughts to add to those already covered by other BPM bloggers such as Sandy Kemsley, Bruce Silver, and Neil Ward-Dutton (who gets the prize for the best title so far) …

Lombardi’s particular strengths in relation to other IBM BPM products are its fully built-in process simulation function – including use of historic ‘live’ data – and the genuine business agility that arises from the 360° functionality (integration, rules, process, user interface) that is managed from a single development environment. Whilst it shares one or both of these with other ‘pure play’ BPM vendors, Lombardi has won enough gongs from industry analysts and others in recent years to regard itself as a leader of the ‘pure play’ pack – no doubt a reason for IBM’s buy decision.

IBM says that it will be targeting Lombardi at ‘departmental’ and ‘human-centric’ solutions; it references speed of build (‘fast start for immediate value’), and, intriguingly, is looking in 2010 to ‘leverage Lombardi to expand BPM offerings in emerging markets’.

All of which suggests that IBM’s selling price for Lombardi products is likely to become the epicentre of activity for IBM’s BPM strategists over the coming vacation. Departmental and emerging markets – both great targets for Lombardi’s technology – are not known for big spending. Competition in these markets from still-independent (and local) BPM vendors will be intense, putting downward pressure on prices. Meanwhile technical differentiation with WebSphere BPM and FileNet (in particular – if ‘content-centric’ BPM isn’t meant for humans, who is it for?) may well be less than IBM’s initial positioning suggests. As a user of the WebSphere stack, or of FileNet content management, considering moving on to BPM and wanting to stay with IBM, why wouldn’t you buy Lombardi, particularly if the price is attractive?

IBM’s internal product management challenges aside, this purchase is likely to produce more winners than losers. Those who have invested in Lombardi already (an elite club here in Australia to date) and those considering doing so will be pleased to have their decision underwritten by IBM, with the prospect of improvements in support and service options to come; similarly, this is good news for WebSphere and FileNet customers who have yet to invest heavily in BPM – it expands their choices too. And with what should be a significant boost to their market, some of the biggest winners could be Lombardi service providers. Watch out for skills shortages.

No doubt FileNet customers who have already invested in BPM will be looking for reassurance that their product roadmap will not be adversely influenced by the Lombardi purchase. In particular users of the Business Process Framework, FileNet’s equivalent of Lombardi’s forms builder, will be asking questions of IBM regarding future BPM forms developments; questions that may also interest some Lombardi customers. Does Lombardi’s ‘human-centricity’ imply that one day all IBM BPM users will use its forms? Or will an entirely new forms paradigm be available to all FileNet, WebSphere and Lombardi users? My bet is on the latter.

And whilst IBM technologists ponder the future of enterprise BPM forms, where business rules (ILOG?) drive super-flexible UI components (why do BPM vendors so readily refer to ‘orchestrating’ SOA components but never ‘orchestrating’ the user interface?), perhaps their thoughts will turn to the product that will really change the market. One that IBM is prepared to position as both enterprise level and – like Lombardi – truly agile.

btw if you want to read the IBM announcement, you can find it here. Onward links are top right – the ‘FAQ’ document expands considerably on the press release.


Business python management

In exciting news, bpm.futures announced the world’s first python-driven BPM solution. Inspired by a recent office visitor, Business Python Management will speed your processes with unparalleled flexibility.

See here for the full story (or don’t, if pythons aren’t your thing!)…


Some light reading

There is likely to be a gap of a couple of weeks until my next blog. In the meantime if you’re at a loose end here are a couple of recommendations.

Sandy Kemsley’s Column 2 blog is always worth checking – she’s an experienced practitioner who writes clearly on a range of topics, most relating to BPM, and also monitors and comments on a swag of other relevant blogs. My favourite BPM blogger so far!

And if you’re getting bored with everyday BPM then check out the latest thinking on Complex Event Processing here or on Paul Vincent’s blog for TIBCO CEP.

I find PV’s blog clearly written (a top preference!), thoughtful and thought-provoking. One does need to duck the TIBCO product pitches – TIBCO is not the only vendor in this space, with Oracle and IBM, amongst others, now represented. The topic of how CEP can interact with – or conceivably replace – BPM should cleanse the most jaded BPM palate!

Happy trails…


Let data make that process dance

I was recently discussing BPM agility with a senior IT manager when the topic of configuration files came up. “Many of my colleagues don’t like them,” he said. “We have hundreds of them, and they don’t like changing them because they don’t know what the consequences will be.”

It’s easy to smile at this, but practically all guardians of customised systems are in a similar position. I’m writing about it because config files (or config data, to generalise) can, used correctly, be a powerful way to drive change in BPM systems: data can be significantly easier to change and deploy than code.

What sort of data might one want to externalise in order to improve BPM agility? A few examples:

  • Field labels
  • Text to be included in reports/alerts
  • Route/path ids and sub-process ids (to permit config-driven re-routing)
  • Data relating to rules – ‘if value of loan > $1000 then…’ will obviously be more flexible if the value ($1000) is a variable – where the rules are not already externalised through a rules engine (which would be better still). A sketch of the general approach follows this list.
  • Data that drives which cases/process instances should adopt changed config values. Should changed values be picked up by closed or archived cases? Should they only apply to new cases, or cases with particular characteristics?
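To make this concrete, here is a minimal sketch – in Python, with entirely hypothetical names and values, not drawn from any particular BPMS – of process logic reading externalised config data rather than hard-coding it:

    import json

    # Hypothetical externalised config - in production this would live in a
    # config file or (better still) an RDBMS table, not inline in the code.
    CONFIG = json.loads("""
    {
        "labels":      {"loan_amount": "Loan amount ($)"},
        "alert_text":  "Loan {id} referred for manual approval",
        "routes":      {"manual_approval": "subprocess_credit_check_v2"},
        "rule_values": {"approval_threshold": 1000},
        "applies_to":  {"new_cases": true, "closed_cases": false}
    }
    """)

    def route_loan(loan_id, amount):
        """Route a loan using config values instead of hard-coded literals."""
        threshold = CONFIG["rule_values"]["approval_threshold"]
        if amount > threshold:
            # Changing the threshold, alert text or route id is now a data
            # change, not a code change requiring a full development cycle.
            print(CONFIG["alert_text"].format(id=loan_id))
            return CONFIG["routes"]["manual_approval"]
        return "auto_approve"

    print(route_loan("L-0042", 1500))   # subprocess_credit_check_v2

The point is not the mechanism (JSON, database table, whatever) but that the values most likely to change have been pulled out of the code.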

As the last point suggests, there is an important system design aspect to this, regarding when config data should be (re-)read and how much granularity is required to control change impact.

And of course, the data to be held will vary widely, depending both on the business processes and the BPMS used.

For this data-driven approach to deliver significant improvements to BPM agility, the mechanisms used must be thoroughly tested – part of ‘Agility Testing’. Unless one can be confident that changes to the config data will result in predictable outcomes, abstracting it will provide few benefits, since changes will once more need to be system tested, delaying deployment.

One significant source of comfort for the hard-pressed project manager is that Agility Testing can be carried out after the initial go-live. Not perfect, obviously, but a promise that ‘the system will become easier to change after the second deployment’ will put you ahead of many BPM solutions.

So are config files, as such, the best way of delivering the config data into production? Well, no. Config files are hard to add CRUD functions to (and so are often edited in Notepad, where it’s easy to make a mistake) and must be deployed (albeit a very simple deployment).

Using a relational database (RDBMS) helps a lot in terms of basic validation, and if one is already in use on your project it will be easy to piggyback on existing or planned resilience and recovery features. However, without a user interface (UI), the standard method of changing data in an RDBMS is via a stored procedure (for audit and repeatability reasons), and this, once more, will require testing and deployment.

A custom UI would be best. It needn’t be pretty – it will only be used by Admin users or Prod Support, so some shortcuts may be possible in terms of corporate standards. However, it will need to provide Read/Update (possibly Create/Delete) functions for quite an assortment of data, so some extendable way to segment the data, such as tabbing, will help. Most importantly it must include an audit log and, ideally, a function that will import an existing audit log and apply the changes it contains.

The reason for the audit log import is procedural. Even though no technical issues should arise from changes made via the UI, there is still potential for the Business to have erred in their change request. So an initial deployment into a User Acceptance Test (UAT) environment may well be required. The UI may be used to directly change the UAT config data – indeed this should be so easy that it could be done with a business representative alongside. Once the Business is satisfied with the result, the audit log can be deployed into Production and run against the Production UI (as an import) with no further testing required, since the resulting Production changes will necessarily be identical to those in UAT.
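A minimal sketch of the audit-log import, again in Python and again with hypothetical names – the detail will depend entirely on your BPMS and database:

    import csv

    class ConfigStore:
        """Stand-in for the real config tables - a dict keyed by (table, key)."""
        def __init__(self, data):
            self.data = data

        def get(self, table, key):
            return self.data[(table, key)]

        def set(self, table, key, value):
            self.data[(table, key)] = value

    def apply_audit_log(store, log_path):
        """Replay a UAT audit log against another environment's config store.
        Each row records one change: timestamp, user, table, key,
        old_value, new_value."""
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                current = store.get(row["table"], row["key"])
                # Guard: only apply the change if the target still holds the
                # value the UAT change started from - otherwise stop and ask.
                if current != row["old_value"]:
                    raise ValueError(f"{row['key']}: found {current!r}, "
                                     f"expected {row['old_value']!r}")
                store.set(row["table"], row["key"], row["new_value"])

The guard against unexpected current values is what makes the import safe to run without retesting: if Production has drifted from what UAT started with, the import stops rather than compounding the difference.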

The BPM project steering committee will need to assess the value of this sort of approach, since it clearly involves a cost. The degree to which config data can be used to drive agility in a BPM solution will vary, as will the value that the Business places on it.

And of course all of this works best within an A-BPM (Agile BPM – here today, Gartner tomorrow!) development framework.

Happy trails…


Getting Agile – Rules Engines show the way

So what are the really major product enhancements that would drive BPM into Agile territory? What new functionality would allow businesses to see change routinely happening within 24 hrs of the business request, and the disappearance of the dreaded ‘change backlog’?

The single biggest obstacle to Agility is the need to bring down the system whilst making changes, leaving the business at a standstill. With business operations under seemingly relentless pressure to extend their operating hours, the windows available for system change are getting ever smaller. At the same time, the number and complexity of systems used to support the business are growing, so competition for the available windows is intense. This in itself can push system change back to weekends, quickly settling into a 2-4 weekly change cycle for any given system (such as BPM).

So one important way forward is to find ways to reduce deployment times (downtime) and keep risk associated with change to an absolute minimum. The latter is important since risk drives contingency planning, such as allowing for a multi-hour system restore period in the change plan – something that can be a big factor in pushing change back to the weekend.

Extreme Programming (XP) sets the pace in at least one aspect of this, by demanding that no development be carried out that cannot be tested, built and deployed in 10 minutes or less. Yes, you read that right – 10 minutes. If we take that as our goal, it has one obvious corollary – the entire process must be automated. That means a scripted build, scripted deployment and – the tough one – scripted testing. XP addresses this head-on by requiring that code is only developed after an automated test harness has been built. For the record, this is intended to drive more rigorous business requirement analysis before coding, as well as facilitating change after coding.
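For a BPM solution that might look something like the sketch below – the script names are placeholders for whatever your BPMS and build tooling actually provide, so treat this as an illustration of the shape, not a recipe:

    import subprocess
    import sys
    import time

    # Placeholder scripts - each must run without any manual console steps.
    STEPS = [
        ["./run_tests.sh"],       # automated test suite - XP says build this first
        ["./build_package.sh"],   # scripted build of the process application
        ["./deploy_package.sh"],  # scripted deployment to the target environment
    ]

    start = time.time()
    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            sys.exit(f"Pipeline failed at {step[0]}")
    print(f"Test/build/deploy completed in {time.time() - start:.0f}s")
    # XP's yardstick: the whole cycle in 10 minutes (600 seconds) or less.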

So what can BPMers learn from this? Well, an easy point is that BPM product selection criteria should include the ability to script build and deployment tasks. Not a very big ask – it simply brings BPM products into line with custom code – and not a requirement that will trouble serious BPM vendors. Just remember when evaluating the responses, though: it only takes one exception to prevent a fully automated build/deploy cycle.

The big and hairy beast in this discussion is automated testing. This has enormous benefits in terms of minimising the deployment risks I mentioned above, as well as saving time on the critical path of implementing change. As a result it is favoured by software ‘factories’, such as product vendors themselves (for their internal testing) and others where repeatability is attainable. The challenge for BPM solutions is that business processes are often unique to one business unit, which has to bear the entire overhead of creating and maintaining the tests and test harnesses as part of the cost of change. This is made worse by typical IT role allocations, where responsibility for test scripting (in the sense of both business logic and automation) is more often than not given to the Test Manager, who quickly concludes that only the Developers have the knowledge and skills to support what is required. And – guess what – the Developers are too busy ‘developing’.

Happily we have – even outside of the world of XP – a shining example of how this can be tackled in a way that works. Business Rule Engines (BREs) are increasingly challenging BPM products as the ‘must have’ technology for BPM solutions. One of the reasons for this is that testing is typically built into development: the same toolset that the Developer uses to define the rules compels – or at least encourages – them to create a test harness whereby the rule set, and any changes to it, can be tested in seconds. There are perhaps two reasons for this. Firstly, rules are the hardest aspect of IT development to test intuitively and the easiest for people to test incompletely or erroneously. Secondly, automating rules testing is relatively easy: most rule sets have limited inputs and outputs, with the complexity lying (inside the rule set) in the relationships between the inputs. So BRE product vendors have been able to include a user interface that captures test data and expected results alongside the rules, and an automated test execution mechanism so easy to use that Developers are happy to use it for unit testing (traditionally part of their job), seeing it as beneficial to their world rather than as an overhead.
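The pattern is easy to mimic even without a BRE. Here is a toy illustration, in Python – hypothetical rule and data, for flavour only – of test cases living alongside the rule they verify:

    def loan_rule(amount, credit_score):
        """The 'rule set': limited inputs, a single output."""
        if credit_score < 500:
            return "decline"
        if amount > 1000:
            return "refer"
        return "approve"

    # Test data and expected results live next to the rule, BRE-style, so any
    # change to the rule can be re-verified in seconds rather than days.
    TEST_CASES = [
        ((500, 700),  "approve"),
        ((5000, 700), "refer"),
        ((500, 450),  "decline"),
    ]

    for (amount, score), expected in TEST_CASES:
        actual = loan_rule(amount, score)
        assert actual == expected, f"loan_rule({amount}, {score}) -> {actual}, expected {expected}"
    print("All rule tests passed")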

The prize here for BPM vendors is significant – the chance to demonstrate real Agility in comparison to their peers. And whilst the problem will not be an easy one to solve, it isn’t impossible (not least because those BPM product developers are clever people!). I expect the better solutions will draw clear boundaries around the product and/or aspects of it in such a way that change can be fully encapsulated, and then render activities at these boundaries clearly visible in the Developer’s UI. This will favour a native solution – the purchase by a BPM vendor of a third-party testing product would at best only help a little; in practice it would likely just eat funds better spent on native development.

And what can those responsible for BPM solutions do to aid Agility, whilst waiting for their BPM product to include native test automation? Well quite a lot, as it happens. Mostly these have to do with solution design – I bet you didn’t read this far without wondering where a BRE could usefully fit into your BPM solution, for example – and the recognition that Agility requires focus. It is unlikely to happen by itself.

And if you are absolutely unable to reconsider your BPM solution design, then putting some investment dollars into a faster backup/restore solution might give a better RoI, in terms of positive impact on speed of change, than most.

Finally, the more I consider Agile BPM – in the sense of BPM that is easy to change – the more facets to it appear. As a result I’m starting a small and very reasonably priced consulting offering around it. So if this is a subject that keeps you awake at night, you might like to read more on www.bpmfutures.com.au/SpringOffer.html and get in touch.

Happy trails….


“Agile” – time for a change

Back in the early 90s we used to demonstrate workflow by building a 3-step ‘Leave Application’ process from scratch, run it from the user perspective, change it, then run it again. The message, reinforced by our patter and marketing collateral, was clear – workflow was Agile. And you can read the same from every BPM vendor today – BPM will result in (or facilitate, support – pick your own weasel word) Agility.

The reality is rather different. Most operations managers have to prioritise the changes they need to their BPM system and these typically get delivered according to a release schedule that is measured in weeks rather than hours.

Why? Well, there’s the complexity – real business processes are usually far more complex than their users first thought, defined in tools that are smart but not quite perfect (think workarounds), with multiple integration points (so we may need to change the Java code too) and a user interface that is shared with other systems and environments. So the typical end result of a single process implementation is something that is quite hard to ‘get your head around’ and requires special skills to change. For a multi-process deployment, this gets even harder. And this complexity of interacting components results in genuine risk – of developer error, of user error (in terms of clearly thinking through what is required), and of deployment error.

These risks are typically mitigated through a series of tests. Systems Testing is specifically designed to test that the changed components work together as expected by the developer. Regression testing will test that the rest of the system has not been impacted by the change. Acceptance testing ensures that the user gets what they (thought they) asked for, and gives them a chance to change their minds. And post-implementation verification testing validates that the smorgasbord of changed components that has been deployed hasn’t destabilized the live system. All of this testing and deployment activity takes time and is more efficiently carried out on batches of process changes, rather than one change at a time. The actual deployment may well need to happen outside of working hours, too, as a further risk mitigation strategy. It therefore tends to happen every few weeks, not daily.

Is this inevitable? I don’t think so. After all, there are some changes that are always carried out swiftly – on the same day or, at worst, overnight. Adding a new user, complete with a required permissions profile, has far-reaching effects – not only on that user’s system access, but also on reporting, enquiries, and supervisor access – yet is typically carried out within hours. Why shouldn’t the same apply to process changes?

The answer, today, is that your current BPM solution – typically a BPM product that has been configured, customized and extended to meet your requirements – will not have been designed to support true Agility.

Does any of this matter? Well, think back to the much-derided paper-based process that the BPM system replaced. The process was largely determined by the contents of a tick sheet stapled to the front of the manila folder that contained the case documents. The users knew the process rules, which were based upon what was ticked and what wasn’t.  So changing the process involved changing the tick sheet (thanks, MS Word) and giving new instructions to the team. It could be done in hours. Sure, the end result was much more error prone, less efficient and lacking in MIS. But are we currently trading enormous improvements in all of these, when we deploy BPM, for a loss in agility?

I believe this to be the single biggest challenge to BPM, with truly agile BPM providing potentially one of the most radical changes that technology can contribute to business processing. As an industry, can we rise to the challenge? What did The Man say….? Yes We Can?

To be continued. Happy trails….


A Big Blue bird

I’ve been tweeted by IBM – via the BPM Network, admittedly – announcing the latest news on IBM’s community for BPM process fiends, BPM BlueWorks (beta). I’m glad I caught it because BPM BlueWorks looks like it could add real value – and it’s only 3 weeks old (always nice to catch innovation early).

The idea seems to be that companies are encouraged to join the community, each operating within its own private area, with employees defining and sharing process strategies, capabilities and definitions with fellow employees. At the same time employees can break out into communal areas, to blog and discuss issues that they – most likely – have in common with other similar groups. A great deal of relevant content (including white papers, process maps, case studies) has already been made available by IBM itself, and a partnership with APQC has added more.

It’ll be interesting to see how this develops. Perhaps it will particularly appeal to BPM champions within smaller organizations that lack an existing, coherent process repository. The tools, combined with the community, should be attractive. I can also imagine it being useful to BPM specialists within larger organizations, such as those already participating in a BPM Centre of Excellence, though more as one information source amongst many.

I’d write more, but although I could register for the site, logging on – to access full functionality – proved impossible due to ‘temporary capacity problems’. Looks like the marketing tweeters are slightly ahead of the rest of the big blue bird. Never mind, I’ll try again later….


BPM – the future starts here

Having read a few other ‘first blogs’ there seems to be a tradition of keeping the first one light. Well, I’m not going to do that – I want to really get stuck into a current preoccupation: what is the future of BPM? Now I admit that I’m biased towards a positive view, having worked in BPM (née workflow) since 1990, and the title of this blog, ‘BPM Futures’, also indicates a certain confidence.

However, having been focused pretty much exclusively on business unit, account and project management for the last 5 years (mostly relating to BPM, btw), I feel in need of a domain refresh. So this is it – I hope you’ll also find it of some interest and use.

So why do organizations buy BPM, rather than other solutions available in the global software market? My answers are listed below, along with some observations that will provide recurring topics for future blogs.

1. The Spot Solution “We want to manage our process better, whilst keeping most or all of the standard data in systems other than the BPM system”.

Comment – essentially a tactical view, this generally happens because the client has calculated that it’s cheaper/easier to use a BPM system than to extend their existing system(s) for the purposes of process management.

Examples – streamlining a home loan approval process that straddles an enterprise customer information system and a ‘stovepipe’ home loans system; managing an accounts payable process in which several thousand employees are occasional users and only half a dozen users – in the a/p section of the accounts department – need to use the a/p module of the accounting system itself.

Challenges and alternatives – if the cost of replacing the existing system were acceptable, most newer ‘core’ systems – such as home loans and a/p – include workflow (if not BPM) functionality appropriate to the specific application, eliminating the need for integration.

Opportunities and the future – merger and acquisition (M&A) activity, if nothing else, will maintain this market for BPM. And of course the gradual adoption of industry-standard interfaces and standards by core system vendors will make the integration challenges easier over time. That said, there is plenty of room for improvement in the BPM solutions offered.

2. The Strategist “We see benefit in using the same process management tool for a wide range of processes”.

Comment – Why? Because it makes skills management easier and allows standardization on common process management features such as process definition, version and release management; data management/reporting; full cycle process improvement; and process simulation. At a more strategic level, BPM is seen as protecting process assets from change at the transaction system level, in particular providing flexibility following merger or acquisition.

Examples – there are many large organizations that have invested heavily in BPM and deployed it very widely indeed, with enterprise process management as an explicit goal. Particular examples exist in retail banking, life insurance, wealth management, P&C insurance and telcos.

Challenges – naturally, big enterprise solutions raise the biggest problems: agility/ease of change of processes; adherence to standards; the difficulty of full lifecycle process improvement in a highly diverse technical environment; the role for rules management; and managing the explosion of UI requirements that arise from enterprise BPM deployment.

Alternatives – the current wave of core system replacements in the banking industry and the increasing adoption of eTOM-based enterprise telco solutions seem to pose a medium-term challenge to BPM (and, perhaps to a lesser extent, SOA) as a separate industry, though the concepts should live on within the replacement products.

Opportunities and the future – not only core systems, but also enterprise BPM systems are potential subjects for upgrade, particularly as the limits of ‘first generation enterprise BPM systems’ become apparent to user organizations. There are a number of areas that can benefit from improvement and, at least until core system upgrades are complete (at least 10 years away), there is demand willing to pay for them.

3. The Document Manager “We started to look at document management – scanning our incoming mail and working from this and our incoming faxes through an image viewer, rather than paper – and realized that workflow and, better, BPM, was a significant value add”.

Comment – this is a common sense and frequently used reason for BPM adoption, particularly in a departmental and/or small/medium enterprise context. The current strategic challenge for document management product vendors – increasing commoditization and therefore lower margins – provides opportunities for solution/service providers and customers alike.

Opportunities and the future – whilst this customer perspective starts a little differently (with document management) it soon returns to the familiar territory of specific process management (what are the scanned documents being used for?) and interactions with other systems. So the challenges are similar to ‘The Spot Solution’ and others.

4. The Administrator “We’re looking for a cheap and cheerful way to improve our simpler, administrative processes that our business requires but none of our core systems provide”.

Comment – classic examples are leave processing; timesheets; expense claims; many aspects of employee on-boarding and maintenance, such as security card applications; and so on. Also, processes needed to support short term projects often fall into this general category.

Of course even ‘simple’ processes like these would ideally include integration (do we need to see receipts before approving expense claims? shouldn’t timesheet and leave processes interact with the payroll system?). Most BPM systems offer the technology to support this integration, though the services required might make the overall business case tough.

Opportunities and the future – this has always been a popular area for workflow, particularly if integration can be ignored (ie worked around by a human operator). Today the relatively simple functionality required combines naturally with BPM delivered through the web as a service, with power provided by cloud computing. No doubt it will soon be as easy and cheap to build an online process as it is today to build a web site (just start with a template…), providing exciting opportunities for a huge number of users and a (very?) few cloud-based suppliers, whilst removing this as a market for traditional workflow/BPM solution providers.

5. The Case Manager “We need a Case Management system”.

Comment – this is an interesting one. BPM products have frequently been used as part of, or even as the core product for, case management systems. The reason is the obvious one: BPM products include much of the functionality required for case management, plus the promise of some interesting ‘extras’ (simulation, full cycle process improvement); in addition, many have a significantly larger user base than specifically ‘case management’ products.

Challenges – there are many points of divergence between case management requirements and those of other processes, and many of these can be seen as gaps in BPM product functionality. Two examples:

(1) Case management workflows tend to mix status (‘state’) and workflow process rules. A case may be worked on by a case worker, who may carry out a range of actions, some of which include their own workflows, such as a referral to a colleague. Once a defined status/state is reached, the case must be sent for approval (another workflow); once approved, the case is returned to the case worker, who now has a different range of permitted activities, because the state has changed. Standard ways of representing process flows, including BPMN (Business Process Modeling Notation – think swim lanes), are very good for workflows but clumsy or long-winded for defining and illustrating state changes – ‘state transitions’, in the jargon (a toy sketch follows below).

(2) Case management puts a lot of emphasis on data relationships. In a legal case, for example, a law firm may need to record many clients, witnesses, third parties, etc in respect of related cases; BPM systems tend to most easily support only simple data relationships, at both data storage and UI levels, requiring customization to support more.

These ‘gaps’ provide opportunities for service providers to craft custom solutions today, and product vendors to extend their products tomorrow.
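To illustrate the first point, here is a toy Python sketch – entirely hypothetical, and far cruder than any real case management product – of state gating the actions a case worker may perform:

    # Which actions are permitted depends on the case's state, not on its
    # position in a flow diagram - awkward to express as swim lanes.
    PERMITTED_ACTIONS = {
        "in_progress":  {"add_document", "refer_to_colleague", "submit_for_approval"},
        "under_review": {"approve", "reject"},
        "approved":     {"close_case", "reopen"},
    }

    # State transitions: some actions move the case to a new state,
    # changing what its case worker may do next.
    TRANSITIONS = {
        ("in_progress", "submit_for_approval"): "under_review",
        ("under_review", "approve"):            "approved",
        ("under_review", "reject"):             "in_progress",
        ("approved", "reopen"):                 "in_progress",
    }

    def perform(state, action):
        if action not in PERMITTED_ACTIONS[state]:
            raise PermissionError(f"{action!r} not allowed in state {state!r}")
        return TRANSITIONS.get((state, action), state)

    state = perform("in_progress", "submit_for_approval")   # -> "under_review"

The permitted-actions table and the transition table are trivial to write down and to change; drawing the same behaviour as swim lanes is where the clumsiness creeps in.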

For the best summary of the special features of case management I have seen recently, check out the following, published by the ever-readable Business Process Trends:

http://www.bptrends.com/publicationfiles/07-09-WP-CaseMgt-CombiningKnowledgeProcess-White.doc-final.pdf

Yes, it’s produced by a vendor – Singularity – but it’s a great contribution to the literature on the subject, nonetheless.

Opportunities and the future – demand for, and therefore interest in, case management appears high at the moment. This probably reflects its concentration in the Public Sector, where every new government initiative requires administrators/administration and, usually, case management. And there is no equivalent of eTOM on the near horizon, as far as I’m aware (any ideas from anyone else on that?).

So the market for case management systems that manage case data, processes and reporting flexibly and reliably appears secure for now. Improvements are, again, needed. The recent signs of movement in the OMG, triggered by a draft RFP from the guys at Cordys, might result in some standards in this area, and/or might stimulate a product vendor to push ahead with some real innovation.  Lots of topics here for future blogs…

If you read this far, well done. Not an easy read, and I can assure you, not an easy write either!

Happy trails……..