BPM Futures



Pegasystems to buy Chordiant

Conversations that include the words ‘Pegasystems’ and ‘buy’ in the same sentence tend to cast Pega as the vendor, so this move has the immediate benefit of surprise.

I’m not going to attempt too much learned reflection on the purchase, since I know very little about Chordiant. However, I do know that CRM / BPM mergers aren’t always easy – Staffware bought a small US CRM vendor in the late 90s (whose original name escapes me now), and my own sense was that in the end this was more distraction than synergy. It’s hard to make a world-beating BPM system and, no doubt, the same goes for CRM. Trying to maintain both of those market positions whilst simultaneously promoting an entirely new ‘CRM+BPM’ market position requires a near-superhuman organisation of the engineering teams, not to mention sales & marketing.

What leaves me interested and excited about Pega is that its vision has for some years been focussed on all-round excellence in BPM, with an emphasis on BPM’s original mission – providing systems that render the exceedingly complex (UI + process + rules + integration + data) sufficiently easy for business process deployment to be affordable and – crucially – for subsequent process change (= agility) to be realistic.

Given this track record, it could just be that Pega will use Chordiant’s technology to push ‘Build for Change’ in quite new and original directions. BPM needs a lift at the moment – perhaps from a ‘should buy’ to a ‘must buy’ – and the vendor that delivers this transition will be richly rewarded. My money to date has been on the big players – IBM, Oracle, SAP – simply because of the size of the challenge in terms of both re-engineering and re-launching into the market. However, perhaps Pegasystems will leverage its intense focus to take the lead, showing the rest of the market the way.

Let’s hope that the fact of the purchase is the least of the surprises Pegasystems has in store.

For more facts and figures, check out this article. Good stuff.


Let data make that process dance

I was recently discussing BPM agility with a senior IT manager when the topic of configuration files came up. “Many of my colleagues don’t like them,” he said. “We have hundreds of them, and they don’t like changing them because they don’t know what the consequences will be.”

It’s easy to smile at this, but practically all guardians of customised systems are in a similar position. The reason I’m writing about it is that config files (or config data, to generalise) can be a powerful way to drive change in BPM systems if used correctly, because data can be significantly easier to change and deploy than code.

What sort of data might one want to externalise in order to improve BPM agility? A few examples (a short Java sketch follows the list):

  • Field labels
  • Text to be included in reports/alerts
  • Route/path IDs and sub-process IDs (to permit config-driven re-routing)
  • Data relating to rules (‘if value of loan > $1000 then’ will obviously be more flexible if the value – $1000 – is a variable), if the rules are not already externalised through a rules engine (which would be better still).
  • Data that drives which cases/process instances should adopt changed config values. Should changed values be picked up by closed or archived cases? Should they only apply to new cases, or cases with particular characteristics?
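To make this concrete, here is a minimal Java sketch of the idea. The property names, the defaults and the change-scope values are invented for illustration, not a recipe:

    import java.io.FileReader;
    import java.io.IOException;
    import java.math.BigDecimal;
    import java.util.Properties;

    /** Minimal sketch: externalised config values a BPM solution might read at runtime. */
    public class ProcessConfig {
        private final Properties props = new Properties();

        public ProcessConfig(String path) throws IOException {
            try (FileReader reader = new FileReader(path)) {
                props.load(reader);
            }
        }

        /** Rule threshold held as data, so 'loan > $1000' can change without a code release. */
        public BigDecimal loanReferralThreshold() {
            return new BigDecimal(props.getProperty("rules.loan.referral.threshold", "1000"));
        }

        /** Which cases pick up a changed value, e.g. NEW_CASES_ONLY, OPEN_CASES or ALL_CASES. */
        public String changeScope() {
            return props.getProperty("config.change.scope", "NEW_CASES_ONLY");
        }
    }

Note that the last accessor reflects the final bullet above: the scope of a config change is itself just another piece of config data.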

As the last point suggests, there is an important system design aspect to this, regarding when config data should be (re-)read and how much granularity is required to control change impact.

And of course, the data to be held will vary widely, depending both on the business processes and the BPMS used.

For this data-driven approach to deliver significant improvements to BPM agility, the mechanisms used must be thoroughly tested – part of ‘Agility Testing’. Unless one can be confident that changes to the config data will result in predictable outcomes, abstracting it will provide few benefits, since changes will once more need to be system tested, delaying deployment.

One significant source of comfort for the hard-pressed project manager is that Agility Testing can be carried out after the initial go-live. Not perfect, obviously, but a promise that ‘the system will become easier to change after the second deployment’ will put you ahead of many BPM solutions.

So are config files, as such, the best way of delivering the config data into production? Well, no. Config files are hard to add CRUD functions to (and so are often edited in Notepad – where it’s easy to make a mistake) and must still be deployed (albeit a very simple deployment).

Using a relational database (RDBMS) helps a lot in terms of basic validation, and if one is already in use on your project it will be easy to piggyback on existing or planned resilience and recovery features. However, without a user interface (UI), the standard method of changing data in an RDBMS is via a stored procedure (for audit and repeatability reasons), and this, once more, will require testing and deployment.
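For illustration – assuming a simple key/value table, here called PROCESS_CONFIG (an invented name), and a JDBC driver on the classpath – reading that data is straightforward; it is the controlled writing of it that needs the machinery discussed next:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    /** Sketch: config held in an RDBMS table rather than a file. */
    public class DbConfigReader {
        // Assumed table: PROCESS_CONFIG(CONFIG_KEY VARCHAR PRIMARY KEY, CONFIG_VALUE VARCHAR)
        private static final String SQL =
            "SELECT config_value FROM process_config WHERE config_key = ?";

        private final String jdbcUrl;

        public DbConfigReader(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }

        /** Returns the stored value, or null if the key is absent. */
        public String read(String key) throws SQLException {
            try (Connection con = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = con.prepareStatement(SQL)) {
                ps.setString(1, key);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }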

A custom UI would be best. It needn’t be pretty – it will only be used by Admin users or Prod Support, so some shortcuts may be possible in terms of corporate standards. However, it will need to provide Read/Update (possibly Create/Delete) functions for quite an assortment of data, so some extendable way to segment the data, such as tabbing, will help. Most importantly, it must include an audit log and, ideally, a function that will import an existing audit log, applying the changes it records.

The reason for the audit log import is procedural. Even though no technical issues should arise from changes made via the UI, there is still potential for the Business to have erred in their change request. So an initial deployment into a User Acceptance Test (UAT) environment may well be required. The UI may be used to directly change the UAT config data – indeed this should be so easy that it could be done with a business representative alongside. Once the Business is satisfied with the result, the audit log can be deployed into Production and run against the Production UI (as an import) with no further testing required, since the resulting Production changes will necessarily be identical to those in UAT.
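A minimal sketch of that import mechanism, assuming each audit entry records the value both before and after the change (all names here are illustrative):

    import java.util.List;

    /** Sketch: replay a config audit log so UAT changes are reproduced exactly in Production. */
    public class AuditLogImporter {

        /** One recorded change from the config UI's audit log. */
        public record AuditEntry(String configKey, String oldValue, String newValue, String changedBy) {}

        /** Whatever holds the config data - file, RDBMS table, etc. */
        public interface ConfigStore {
            String current(String key);
            void update(String key, String value);
        }

        /** Applies entries in order, refusing any change whose precondition no longer holds. */
        public void replay(List<AuditEntry> log, ConfigStore store) {
            for (AuditEntry entry : log) {
                String current = store.current(entry.configKey());
                if (!entry.oldValue().equals(current)) {
                    throw new IllegalStateException("Config drift on " + entry.configKey()
                        + ": expected " + entry.oldValue() + " but found " + current);
                }
                store.update(entry.configKey(), entry.newValue());
            }
        }
    }

Checking the ‘before’ value on replay is what makes the guarantee hold: either Production ends up identical to UAT, or the import fails loudly because the two environments had already drifted apart.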

The BPM project steering committee will need to assess the value of this sort of approach, since it clearly involves a cost. The degree to which config data can be used to drive agility in a BPM solution will vary, as will the value that the Business places on it.

And of course all of this works best within an A-BPM (Agile BPM – here today, Gartner tomorrow!) development framework.

Happy trails…


Getting Agile – Rules Engines show the way

So what are the really major product enhancements that would drive BPM into Agile territory? What new functionality would allow businesses to see change routinely happening within 24 hrs of the business request, and the disappearance of the dreaded ‘change backlog’?

The single biggest obstacle to Agility is the need to bring down the system whilst making changes, leaving the business at a halt. With business operations under seemingly relentless pressure to extend their operating hours, the windows available for system change are getting ever smaller. At the same time, the number and complexity of systems used to support the business are growing, so competition for the available windows is intense. This in itself can push system change back to weekends, quickly producing a change cycle of two to four weeks for any given system (such as BPM).

So one important way forward is to find ways to reduce deployment times (downtime) and keep risk associated with change to an absolute minimum. The latter is important since risk drives contingency planning, such as allowing for a multi-hour system restore period in the change plan – something that can be a big factor in pushing change back to the weekend.

Extreme Programming (XP) sets the pace in at least one aspect of this, by demanding that no development be carried out that cannot be tested, built and deployed in 10 minutes or less. Yes, you read that right – 10 minutes. If we take that as our goal, it has one obvious corollary – the entire process must be automated. That means a scripted build, scripted deployment and – the tough one – scripted testing. XP addresses this head-on by requiring that code is only developed after an automated test harness has been built. For the record, this is intended to drive more rigorous business requirement analysis before coding, as well as facilitating change after coding.
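Translated into BPM terms, that discipline might look like the JUnit-style sketch below. LoanRouter, its threshold and its queue names are hypothetical – the point is simply that the test exists alongside the code and runs unattended as part of the build:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class LoanRoutingTest {

        /** The (trivial) routing logic under test; in XP the test below would be written first. */
        static class LoanRouter {
            private final int threshold;
            LoanRouter(int threshold) { this.threshold = threshold; }
            String route(int amount) { return amount > threshold ? "SUPERVISOR_QUEUE" : "AUTO_APPROVE"; }
        }

        @Test
        void loansAboveThresholdAreReferred() {
            LoanRouter router = new LoanRouter(1000);
            assertEquals("SUPERVISOR_QUEUE", router.route(1500));
            assertEquals("AUTO_APPROVE", router.route(500));
        }
    }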

So what can BPMers learn from this? Well, an easy point is that BPM product selection criteria should include the ability to script build and deployment tasks. Not a very big ask – and one that simply brings BPM products into line with custom code – this isn’t a requirement that will trouble serious BPM vendors. Just remember when evaluating the responses, though: it only takes one exception to prevent a fully automated build/deploy cycle.

The big and hairy beast in this discussion is automated testing. This has enormous benefits in terms of minimising the deployment risks I mentioned above, as well as saving time on the critical path of implementing change. As a result it is favoured by software ‘factories’, such as product vendors themselves (for their internal testing) and others where repeatability is attainable. The challenge for BPM solutions is that business processes are often unique to one business unit, which has to bear the entire overhead of creating and maintaining the tests and test harnesses as part of the cost of change. This is made worse by typical IT roles, where responsibility for test scripting (in the sense of both business logic and automation) is more often than not given to the Test Manager, who quickly concludes that only the Developers have the knowledge and skills to support what is required. And – guess what – the Developers are too busy ‘developing’.

Happily we have – even outside of the world of XP – a shining example of how this can be tackled in a way that works. Business Rule Engines (BREs) are increasingly challenging BPM products as the ‘must have’ technology for BPM solutions. One of the reasons for this is that testing is typically built into development. That is, the same toolset that is presented to the Developer to define the rules compels – or at least encourages – the Developer to create a test harness whereby the rule set, and any changes to it, can be tested in seconds.

There are perhaps two reasons for this: firstly, rules are the hardest aspect of IT development to test intuitively and the easiest for people to test incompletely or erroneously; secondly, automating rules testing is relatively easy. Most rule sets have limited inputs and outputs, with the complexity lying (inside the rule set) in the relationships between the inputs. So BRE product vendors have been able to include a user interface that captures test data and expected results alongside the rules, and an automated test execution mechanism that is so easy to use that Developers are happy to use it for unit testing (traditionally part of their job), and see it as beneficial to their world, rather than as an overhead.
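A rough sketch of the pattern those vendors have productised: test data and expected results captured alongside the rule set, executed automatically in seconds. The rule and the data rows here are invented for illustration – in a real BRE the ‘decide’ logic would be the deployed rules, not Java:

    import java.util.List;

    /** Sketch: table-driven rule testing - inputs and expected outcomes live beside the rules. */
    public class RuleSetTester {

        /** A test row: inputs plus the outcome the business expects. */
        public record Case(int loanAmount, int creditScore, String expectedDecision) {}

        /** Stand-in for the rule set under test. */
        static String decide(int loanAmount, int creditScore) {
            if (creditScore < 500) return "DECLINE";
            return loanAmount > 1000 ? "REFER" : "APPROVE";
        }

        public static void main(String[] args) {
            List<Case> cases = List.of(
                new Case(500, 700, "APPROVE"),
                new Case(1500, 700, "REFER"),
                new Case(1500, 450, "DECLINE"));
            for (Case c : cases) {
                String actual = decide(c.loanAmount(), c.creditScore());
                if (!actual.equals(c.expectedDecision())) {
                    throw new AssertionError("For " + c + " expected "
                        + c.expectedDecision() + " but got " + actual);
                }
            }
            System.out.println(cases.size() + " rule tests passed in milliseconds");
        }
    }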

The prize here for BPM vendors is significant – the chance to demonstrate real Agility in comparison to their peers. And whilst the problem will not be an easy one to solve, it isn’t impossible (not least because those BPM product developers are clever people!). I expect the better solutions will draw clear boundaries around the product and/or aspects of it in such a way that change can be fully encapsulated, and then render activities at these boundaries clearly visible in the Developer’s UI. This will favour a native solution – the purchase by a BPM vendor of a third-party testing product would at best only help a little; in practice it would likely just eat funds better spent on native development.

And what can those responsible for BPM solutions do to aid Agility whilst waiting for their BPM product to include native test automation? Well, quite a lot, as it happens. Mostly this has to do with solution design – I bet you didn’t read this far without wondering where a BRE could usefully fit into your BPM solution, for example – and the recognition that Agility requires focus. It is unlikely to happen by itself.

And if you are absolutely unable to reconsider your BPM solution design, then putting some investment dollars into a faster backup/restore solution might give a better RoI, in terms of positive impact on speed of change, than most.

Finally, the more I consider Agile BPM – in the sense of BPM that is easy to change – the more facets to it appear. As a result I’m starting a small and very reasonably priced consulting offering around it. So if this is a subject that keeps you awake at night, you might like to read more on www.bpmfutures.com.au/SpringOffer.html and get in touch.

Happy trails….


“Agile” – time for a change

Back in the early 90s we used to demonstrate workflow by building a 3-step ‘Leave Application’ process from scratch, running it from the user perspective, changing it, then running it again. The message, reinforced by our patter and marketing collateral, was clear – workflow was Agile. And you can read the same from every BPM vendor today – BPM will result in (or facilitate, support – pick your own weasel word) Agility.

The reality is rather different. Most operations managers have to prioritise the changes they need to their BPM system and these typically get delivered according to a release schedule that is measured in weeks rather than hours.

Why? Well, there’s the complexity – of real business processes that are usually much more complex than their users first thought, defined in tools that are smart but not quite perfect (think workarounds), including multiple integration points (so we may need to change the Java code too), and a user interface that is shared with other systems and environments. So the typical end result of a single process implementation is something that is quite hard to ‘get your head around’ and requires special skills to change. For a multi-process deployment, this gets even harder. And this complexity of interacting components results in genuine risk – of developer error, of user error (in terms of clearly thinking through what is required), and of deployment error.

These risks are typically mitigated through a series of tests. Systems Testing is specifically designed to test that the changed components work together as expected by the developer. Regression testing will test that the rest of the system has not been impacted by the change. Acceptance testing ensures that the user gets what they (thought they) asked for, and gives them a chance to change their minds. And post-implementation verification testing validates that the smorgasbord of changed components that has been deployed hasn’t destabilised the live system. All of this testing and deployment activity takes time and is more efficiently carried out on batches of process changes, rather than one change at a time. The actual deployment may well need to happen outside of working hours, too, as a further risk mitigation strategy. It therefore tends to happen every few weeks, not daily.

Is this inevitable? I don’t think so. After all, there are some changes that are always carried out swiftly – on the same day or, at worst, overnight. Adding a new user, complete with a required permissions profile, will have significantly complex effects – not only on that user’s system access, but also on reporting, enquiries, and supervisor access – but is typically carried out within hours. Why shouldn’t the same apply to process changes?

The answer, today, is that your current BPM solution – typically a BPM product that has been configured, customised and extended to meet your requirements – will not have been designed to support true Agility.

Does any of this matter? Well, think back to the much-derided paper-based process that the BPM system replaced. The process was largely determined by the contents of a tick sheet stapled to the front of the manila folder that contained the case documents. The users knew the process rules, which were based upon what was ticked and what wasn’t. So changing the process involved changing the tick sheet (thanks, MS Word) and giving new instructions to the team. It could be done in hours. Sure, the end result was much more error-prone, less efficient and lacking in MIS. But are we currently trading enormous improvements in all of these, when we deploy BPM, for a loss in agility?

I believe this to be the single biggest challenge to BPM, with truly agile BPM providing potentially one of the most radical changes that technology can contribute to business processing. As an industry, can we rise to the challenge? What did The Man say….? Yes We Can?

To be continued. Happy trails….