Saturday, March 5, 2011
Event processing - manual, build, or buy?
The various chapters of the event processing manifesto, published yesterday, hold some interesting insights about the state of the practice and the future of the event processing area. Chapter 1, written by a team led by Robert Berry, provides insights and directions about why to use event processing systems. The figure below, taken from the manifesto (figure 1.5, page 20), addresses a frequently asked question: should a computerized event processing system be used at all, and if so, should it be implemented as part of conventional programming or using a dedicated COTS event processing system?
As seen in this figure, the decision is a function of complexity. There are various complexity factors, defined earlier in the document:
1. Degree to which the application is expected to change over time, e.g., with new event sources, new interactions and new responses expected
2. Numbers and types of event sources
3. Numbers of consumers of information communicated in the events
4. State and context management
5. Opportunity to create new value, e.g., by introducing reflection and introspection
Back to the decision: in some cases the complexity is low, and there is really no processing required except for getting the events and sending them directly to some person or displaying them on a dashboard.
In this case the most cost-effective solution is "manual", namely, a human is the event processor and no computerized system is required to do any processing, except for routing, which can be done by any messaging or DBMS system.
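To make the "manual" case concrete, here is a minimal routing sketch; the Event record and the alert_channel queue are hypothetical stand-ins for whatever messaging or DBMS system is at hand, and the point is that no processing logic appears anywhere:

```python
import queue
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    payload: dict

# Hypothetical channel standing in for any messaging or DBMS system;
# a person (or a dashboard) consumes from it and does the actual "processing".
alert_channel: "queue.Queue[Event]" = queue.Queue()

def route(event: Event) -> None:
    # No filtering, transformation, or aggregation - pure routing.
    alert_channel.put(event)

route(Event(source="sensor-1", payload={"temperature": 71}))
print(alert_channel.get())  # the human event processor reads this
```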
The first vertical bar (from left) is the complexity break-even point between the "manual" approach and the "build" approach. Beyond this bar it is cost-effective to construct a computerized event processing solution, but the complexity of such a system is quite low: typically filtering, some transformations, maybe simple aggregations, usually without strict timing constraints. In such cases, learning and using a COTS system might be overkill, and it is relatively simple to develop the functionality as part of regular programming.
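As an illustration of this "build" region, here is a sketch of the kind of filter-transform-aggregate logic that is easy to hand-code; the TemperatureReading event and the 60-second window are assumptions made up for the example, not something the manifesto prescribes:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class TemperatureReading:
    sensor_id: str
    celsius: float
    timestamp: float  # seconds since epoch

WINDOW_SECONDS = 60.0
window: deque = deque()  # readings from the last 60 seconds

def on_event(reading: TemperatureReading) -> None:
    # Filter: ignore obviously bad sensor values.
    if not (-40.0 <= reading.celsius <= 125.0):
        return
    # Transform: convert to Fahrenheit for the consumer.
    fahrenheit = reading.celsius * 9 / 5 + 32
    # Aggregate: average over a sliding time window.
    window.append(reading)
    while window and window[0].timestamp < reading.timestamp - WINDOW_SECONDS:
        window.popleft()
    avg = sum(r.celsius for r in window) / len(window)
    print(f"{reading.sensor_id}: {fahrenheit:.1f}F, 60s avg {avg:.1f}C")
```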
The second vertical bar is the complexity break-even point between "build" and "buy". Beyond this bar it becomes cost-effective to invest in a COTS product (the right one, which satisfies the requirements, of course).
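To see why hand-coding stops being cost-effective, consider what happens once state and context management enter the picture. The sketch below partitions a sliding-window pattern per customer; every additional pattern, context dimension, or timing constraint multiplies this bookkeeping, which is exactly the machinery a COTS engine provides out of the box. The withdrawal events and thresholds are invented for illustration:

```python
from collections import defaultdict, deque

# Per-customer context partitions: each customer gets its own window state.
withdrawals = defaultdict(deque)  # customer_id -> deque[(timestamp, amount)]

def on_withdrawal(customer_id: str, amount: float, ts: float) -> None:
    win = withdrawals[customer_id]
    win.append((ts, amount))
    # Expire entries older than the 10-minute window (hand-rolled state management).
    while win and win[0][0] < ts - 600:
        win.popleft()
    # Pattern: three or more withdrawals totalling over 1000 within 10 minutes.
    if len(win) >= 3 and sum(a for _, a in win) > 1000:
        print(f"suspicious activity for {customer_id}")
        win.clear()  # reset this partition after the alert fires
```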
Note that sometimes the decision should be forward looking: if there is a plan or prediction that the complexity of the application will increase within the short or medium range, this should also be a consideration, to avoid rewriting the system within a relatively short period of time.
I think that we need some empirical research to determine how to measure these two break-even points in a more exact way.
Our next mission is probably to devise best practices for the community on this and other issues, and this has already been proposed as one of the next working items for EPTS.