Process modeling has been going through an evolution. If you haven't noticed, you have either been living in a vacuum or you are still using flowcharts in Visio. Everywhere I turn, people are talking about processes and process improvement. At least this is one good thing to come out of the economic recession.
The other trend we are seeing more of in this decade is the use of more events and fewer tasks. A task that says something happened is not a task at all; it's an event. To be a task, it has to be something performed by a person, system, or process. One could argue that everything is performed somehow, so everything is a task. So let me ask you this: Is it a task for the weather when it rains? Is it a task for the highway when traffic is backed up? Is it a task for the stock market when the NASDAQ drops by 100 points? If I can't put a performer to the task, it can't be a task.
Events can basically do two things: start activity or interrupt activity. The weather changed, so what are you going to do about it? The traffic is bad, so maybe try another route? The stock market is down, so maybe you should buy stocks (unlike the herds of people who sell every time there is a jitter in the market).
Condition or event?
You could also argue that the bad weather is a condition, not an event, and you might be right. But what caused the weather to be bad? It was likely an event. And when the condition comes into existence, is that not an event? The BPMN specification says that a condition is a category of event. There are two types of condition events: start and intermediate. As stated above, an event can either initiate activity or interrupt activity. The BPMN 2.0 specification also adds new non-interrupting variations of the start and intermediate condition events. These shapes still serve the same purpose, but they add context to where they can be used. For example, an intermediate non-interrupting shape is essentially a means to start an activity or flow that is relevant to the subprocess the event is attached to.
In the world of event processing you have essentially two things: events and conditions. Conditions describe a combination of one or more events. For example, it's raining outside and the traffic is terrible. This condition suggests that the weather caused the terrible traffic. (One could argue that all of the cars are causing the weather to get worse through global warming, but that's another topic entirely.) Weather can be bad on its own, and so can traffic. The two are not necessarily connected until a stream of events (which we could also call facts) is correlated: for example, a weather report in combination with an emergency dispatch for a car hydroplaning on a flooded road. Combined and correlated in time, traffic and weather become the event I was looking for. When both events occur within a window of time, a condition is born.
Conditions act as a filter on events. Billions of events occur every millisecond, but obviously we are not interested in all of them: only the ones relevant to our business process activity. When I figure out a way to filter the events down to something interesting, I have a "condition event". In other words, billions of events have been aggregated down to one single event that is important to my process.
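The filter-and-correlate idea above can be sketched in a few lines of code. This is a minimal illustration, not a real CEP engine: the event records, the "hazardous_roads" condition name, and the 30-minute window are all invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical raw event stream: (timestamp, type, detail).
# In a real system this would be millions of events; here, just three.
events = [
    (datetime(2010, 3, 1, 8, 0), "weather_report", "heavy rain"),
    (datetime(2010, 3, 1, 8, 5), "stock_tick", "NASDAQ -3"),
    (datetime(2010, 3, 1, 8, 20), "dispatch", "car hydroplaning"),
]

WINDOW = timedelta(minutes=30)  # assumed correlation window

def correlate(stream, type_a, type_b, window):
    """Filter the stream to two event types, then emit one condition
    event if an instance of each occurs within `window` of the other."""
    a_times = [t for t, kind, _ in stream if kind == type_a]
    b_times = [t for t, kind, _ in stream if kind == type_b]
    for ta in a_times:
        for tb in b_times:
            if abs(ta - tb) <= window:
                # Many raw events aggregated down to one condition event
                return ("condition", "hazardous_roads", max(ta, tb))
    return None  # no condition: the events never lined up in time

print(correlate(events, "weather_report", "dispatch", WINDOW))
```

The weather report and the hydroplaning dispatch fall inside the window, so a single "hazardous_roads" condition event comes out; the stock tick is filtered away as irrelevant to this condition.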
BPM: meet CEP. CEP: meet BPM.
Now that introductions are made, let's talk about what they are and why both BPM and CEP are so important and relevant in this decade.
BPM was not designed to handle billions of events. The BPMN notation is far too simple to handle the sophistication of monitoring millions of stock ticker streams, or monitoring the millions of cars travelling a highway each day. The individual events are so insignificant that they go largely unnoticed by the larger business processes that everyone is familiar with. Up until the last decade we simply called this an application and didn't bother modeling it. But now that Complex Event Processing (CEP) is coming of age, a new approach is emerging in process modeling that efficiently handles complex events.
Until recently, I honestly couldn't figure out how to use the condition event in a real-world process model. What changed is that lately I've been experimenting with complex event processing concepts. Once I realized that a condition is the result of a complex event, it was easy to put condition events everywhere.
The job of the complex event system is to filter through millions or even billions of events per second and find something interesting that I might want to act upon. Once captured, this finding is emitted as a so-called complex event. So is it an event or a condition? Why not a condition event? This led to my new nickname for the CEP acronym: instead of "complex," I call it "conditional event processing." I wonder if this will catch on? Probably not, but at least it might help you make sense of all this.
When CEP generates an event, BPM decides what to do with it. There are two basic use cases here: you can send the event to a process participant, or the BPM system can further aggregate the condition into a decision of whether or not to take action. For example, a condition event is detected, which causes a flow into a rule, which determines that either no action is required or an activity should be routed to a participant. However, if either of these paths is taken too often, that is yet another condition that could be used. For example, too many condition events are causing too many people tasks, and the organization is overloaded with activity. This overload condition can feed back to the CEP system to relax the thresholds at which it triggers its complex events. This is an environment where BPM and CEP help each other do what they do best.
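The feedback loop just described can be sketched as two cooperating components. Everything here is an illustrative assumption: the class names, the events-per-minute threshold, the task cap, and the 1.5× relaxation factor are invented, not taken from any BPM or CEP product.

```python
class CepFilter:
    """Stands in for the CEP side: turns an observed event rate
    into a condition event when it crosses a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold  # events/minute that triggers a condition

    def check(self, rate):
        return rate >= self.threshold

class BpmRouter:
    """Stands in for the BPM side: routes condition events to people,
    and feeds overload back to CEP."""
    def __init__(self, cep, max_open_tasks):
        self.cep = cep
        self.open_tasks = 0
        self.max_open_tasks = max_open_tasks

    def on_condition(self):
        if self.open_tasks < self.max_open_tasks:
            self.open_tasks += 1  # route an activity to a participant
        else:
            # Overload: relax (raise) the CEP trigger threshold so that
            # fewer condition events reach the overloaded organization.
            self.cep.threshold *= 1.5

cep = CepFilter(threshold=100)
bpm = BpmRouter(cep, max_open_tasks=2)
for rate in [120, 150, 180, 200]:  # observed event rates over time
    if cep.check(rate):
        bpm.on_condition()

print(bpm.open_tasks, cep.threshold)  # → 2 225.0
```

The first two condition events become people tasks; once the cap is hit, each further condition raises the threshold instead, which is the "relax the thresholds" feedback from the paragraph above.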
Even Olympic-size pools have only a limited number of lanes.
I know that if I suggest that swimlanes will be a thing of the past I'll never hear the end of it. So go ahead and start your comments now, because yes, I'm about to go there. I'm not saying we'll see our friend the swimlane disappear anytime soon. But I am saying that swimlanes are becoming less relevant. In an event-based world, we don't necessarily have a performer for a task until runtime. I don't know who is going to do what. So why would I model my process in a way that assumes a particular person is doing something?
Swimlanes have their place. I stand by my advice that if you have more than five lanes in a pool, you seriously need to stop and think about what you are doing. Also, if you have more than five pools in a diagram, a problem should leap off the page and smack you in the face. Organizations are not flat like this, and in practice more than five roles would be unmanageable. Instead, a hierarchy exists to manage the complexity. So what I'm suggesting here is to select process participants that are more in line with how the organization works.
If you have too much activity happening in one place (a diagram), it could best be described as chaos. Chaos is complex, and there are likely to be far too many events to process with BPM, or to draw with BPMN. You cannot possibly draw all of the events and conditions that might occur in a diagram with more than five or six participants. Likewise, for a single participant, if you are subdividing the role into many lanes, there are likely far too many events to handle (too much participation for one participant). So why not try the event approach instead?
Understanding complexity, events, conditions, and process activity
For better understanding, let's look at a complexity analogy. Imagine a large, crowded room full of people having a formal dinner evening. The attendees (participants) walk around the room introducing themselves to others, and conversations begin. As the conversations increase, the noise increases, and you can no longer hear a conversation more than a meter or two from where you are standing. A simple task such as getting to the dinner buffet is interrupted by hundreds of events: people bumping into you and trying to cut in line ahead of you. As a participant in this dinner process, if you happen to notice it's getting quiet in the room, it might be a good idea to stop shouting at the top of your voice, because someone probably has an announcement to make. Or, if you happen to notice everyone running for the exit, maybe you should go too. There might be a fire.
The point of this analogy is that most of the dinner evening was not planned. Instead, it was a series of events triggering micro-processes. The overall objective of socializing, rubbing elbows with the important people in the room, and having a good time was achieved. Everything else was random occurrence. But even in the randomness there is order and process. For example, everyone got in line when it was time for the dinner buffet, and the other agenda items occurred according to schedule.
The basics of management theory state that the more people are involved in a meeting or gathering, the less productive the outcome. The same can be said for processes. By keeping the number of swimlanes down to just the important participants, you can actually show more relevant detail. The other participants are involved, but there is no point in showing the detail of what they do; it's out of scope. In the dinner evening process above, it only makes sense to model the overall agenda. There might be hundreds of participants in the event, but only a few are important.
At the same time, it's important that all of the guests are having a good time. For example, the host detects a high percentage of people complaining about the food or getting sick. This might be a problem that could be fixed before the party becomes a disaster. But the only way to know about this condition is to mingle with the crowd and ask everyone if they are having a good time. Too many people not having a good time (individual events) is a condition that can be brought to the attention of the organization (the process context).
Event driven business processes
The same can be said for many processes in the business world today. There is the general high-level process and all of the subprocesses that support the main objective. In addition, there should be some sort of feedback mechanism to govern the process flow. Otherwise, the highest-level objectives will likely fail. For example, if I have a manufacturing business and I don't watch the market for signs of growth or slowing, how do I know how many units to make? And when I slow down production, do I simply cut back on my labor force, or will that negatively affect my ability to operate? These are questions that cannot be solved in the process modeling realm; it's an event problem. However, these events, filtered into conditions, are something that can be modeled. I can set up market indicator monitors that create condition events, which tell me when to speed up or slow down production. I can also set up a means to collect information on bottlenecks in my production line, and correlate this data with employee morale data from the HR department. Too few people on staff can cost me just as much as too many (overtime, employee retention and training, product quality issues, etc.).
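A market-indicator monitor like the one described above boils down to a small decision function. The indicator values and the speed-up/slow-down rules below are entirely made up for illustration; a real monitor would be tuned to the business.

```python
def production_adjustment(demand_index, backlog_days):
    """Turn raw signals into a condition event for the process.
    demand_index: assumed market indicator, 1.0 = baseline demand.
    backlog_days: assumed days of unfilled orders."""
    if demand_index > 1.10 and backlog_days > 5:
        return "speed_up"    # condition: demand growing and we're behind
    if demand_index < 0.90:
        return "slow_down"   # condition: market contracting
    return None              # no condition event; stay the course

print(production_adjustment(1.15, 7))  # → speed_up
print(production_adjustment(0.85, 2))  # → slow_down
print(production_adjustment(1.00, 3))  # → None
```

Only the first two inputs produce a condition event; the third is the normal case, where the raw market events are filtered out and the process never hears about them.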
The old approach to process modeling was to go about business as usual and hope that everything works out. The new approach with the event-enabled process is to provide a way to enable the process to self-optimize. This is not to say that everything can be automated. But the information needs to get back to the people who make the decisions, at the right time, and filtered to what is relevant. As organizations become larger, the events become more important. This is because the lines of communication are long, and much information is lost in the chain of command. We cannot model the entire process in one big picture, but we can model the events and conditions that affect related processes.
– Rick Geneva