Showing posts with label event flow. Show all posts

Monday, June 28, 2010

On event flow and control flow


Teaching another internal course in event processing, one of the questions asked by the participants, which is actually a frequently asked question when people view our event processing network notation, is whether we are reinventing workflows.

The answer is -- no.

Viewing control flow and event processing networks (both figures were taken from IBM developerWorks)


The common denominator is that both of them look like graphs with nodes and edges of various types; but there are also some differences:

In control flow systems, an edge typically means that when the task executing on a node ends, control is (conditionally) passed to the task represented by the successor node in the graph. There can be and/or/xor conditions, as seen in the diagram above, and they typically refer to the termination status of the task (success/fail).

In an event processing network, the nodes represent EPAs (event processing agents), and the edges represent events that flow between these agents. The fact that an EPA emits an event does not necessarily mean that it has terminated: an EPA can create a derived event in an "immediate mode", which might designate that a pattern was matched, while the source EPA continues to run. Likewise, an event that is sent to a target EPA does not necessarily mean that the EPA is invoked; it does not even indicate that the target EPA will use this event, since it may filter it out on entry. Even if it uses the event, it may not react to it directly, since the event may be part of a pattern that has not been completed yet. In general, all an edge in the graph says is that one node sends an event to another, and nothing more.
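The three behaviors described above (filtering on entry, partial pattern state, and emitting a derived event without terminating) can be sketched in code. This is a minimal illustration, not any particular product's API; the class name, the event shapes, and the "two high temperature readings" pattern are all invented for the example.

```python
class PatternEPA:
    """A toy event processing agent: filters events on entry, accumulates
    partial pattern state, and emits a derived event when the pattern
    completes -- without terminating itself."""

    def __init__(self, emit):
        self.emit = emit          # callback that sends derived events onward
        self.high_readings = []   # partial state of an incomplete pattern

    def on_event(self, event):
        # Filtering on entry: a delivered event may be dropped with no reaction.
        if event.get("type") != "temperature":
            return
        # The event may merely contribute to a pattern that is not complete yet.
        if event["value"] > 100:
            self.high_readings.append(event)
        # Pattern matched: emit a derived event in "immediate mode"; the EPA
        # keeps running and can match the pattern again later.
        if len(self.high_readings) >= 2:
            self.emit({"type": "overheat-alert",
                       "evidence": list(self.high_readings)})
            self.high_readings.clear()

derived = []
epa = PatternEPA(derived.append)
epa.on_event({"type": "humidity", "value": 80})       # filtered out on entry
epa.on_event({"type": "temperature", "value": 120})   # partial pattern, no output
epa.on_event({"type": "temperature", "value": 130})   # pattern completes, alert emitted
```

Note that receiving the first temperature event produced no visible reaction at all, which is exactly why an edge into an EPA says nothing about what the EPA will do with the event.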

One cannot represent an event processing network using a regular workflow; however, a workflow can be represented using an event processing network!
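To see why the containment goes in that direction, here is a hypothetical sketch of a two-step workflow encoded as an event processing network: each task becomes an agent that subscribes to the completion event of its predecessor, and a conditional workflow edge becomes an entry filter on the termination status. All names here are invented for illustration.

```python
log = []

def task_a(event, emit):
    log.append("A ran")
    emit({"type": "A-done", "status": "success"})

def task_b(event, emit):
    # The conditional workflow edge (proceed only on success) becomes
    # an entry filter on the status carried by the completion event.
    if event.get("status") == "success":
        log.append("B ran")
        emit({"type": "B-done"})

# A trivial broker: subscriptions map event types to interested agents.
subscriptions = {"start": [task_a], "A-done": [task_b]}

def publish(event):
    for agent in subscriptions.get(event["type"], []):
        agent(event, publish)

publish({"type": "start"})   # runs A, whose completion event triggers B
```

The control-flow semantics (task B starts only after task A succeeds) falls out as a special case of event flow, whereas the richer edge semantics of an EPN cannot be squeezed back into "control passes on termination".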


Wednesday, February 18, 2009

On Event Flow and Decoupling


This is a simulation of an anesthesia workstation; it can simulate various cases that create a flow of events inside this configuration, e.g., what happens when there is a power failure.

I was recently asked whether there is a contradiction between two concepts:
  • The decoupling concept: each event processing agent is independent; it subscribes to some events, publishes derived events, and is independent of any other agent. Furthermore, it is decoupled and does not know anything about the other agents.
  • The event flow concept, in which there is explicit modeling of event flows.
My answer is that there is not really a contradiction, since these two principles live at two different levels. The decoupling is at the execution level: event processing agents indeed do not need to communicate with one another, since there is no RPC or any other synchronous communication among them. The event flow concept exists in the modeling and management layers. In the modeling layer, there should be a view of the entire "event processing network" to ensure that the whole orchestra plays together; in the management layer, there should be a possibility to trace back the provenance of a certain decision or action, or trace forward the consequences of any event. This still does not require violating the decoupling at the execution layer; that's the beauty of model-driven architecture... more - later.
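The separation of the two levels can be made concrete with a small sketch. At the execution level, agents only publish and subscribe through a broker and hold no references to each other; at the management level, the broker's delivery record supports exactly the provenance and consequence tracing described above. This is a toy illustration under invented names, not a real product architecture.

```python
from collections import defaultdict

class Broker:
    """Execution level: agents interact only through publish/subscribe.
    Management level: the delivery trace reconstructs the event flow."""

    def __init__(self):
        self.subs = defaultdict(list)   # event type -> (agent name, callback)
        self.trace = []                 # (source, target, event type) deliveries

    def subscribe(self, agent_name, event_type, callback):
        self.subs[event_type].append((agent_name, callback))

    def publish(self, source, event):
        for agent_name, callback in self.subs[event["type"]]:
            self.trace.append((source, agent_name, event["type"]))
            callback(event)

broker = Broker()
# Neither agent knows the other exists; each only knows the broker.
broker.subscribe("enricher", "order",
                 lambda e: broker.publish("enricher", {"type": "enriched-order"}))
broker.subscribe("auditor", "enriched-order", lambda e: None)

broker.publish("source", {"type": "order"})
```

After the run, `broker.trace` lets a management tool trace forward the consequences of the original order event, or trace back from the auditor to its provenance, without the agents themselves ever having been coupled.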