Still in Orlando: the Gartner Event Processing Summit is now behind us. The Gartner folks are happy with the turnout, and have promised another summit in August 2008. The closing keynote speaker was David Luckham, who needs no introduction in this community, and who talked about the past, present and future of event processing. David has positioned himself as the prophet who sets challenges for the community. He talked about "creeping event processing" (the present), "event processing as a first-class citizen in computing" (five years from now), and ubiquitous event processing, in which event-driven architecture and event processing become fundamental to the computing infrastructure, with a vision of dynamic event processing networks consisting of an Internet-scale number of agents that are dynamically created and destroyed.

Another interesting talk was that of Robert Almgren, who has a long academic record but currently works as Managing Director, Head of Quantitative Strategies at Banc of America Securities. He provided an excellent introduction to algorithmic trading, one of the most pervasive applications of event processing products today. One sentence in his talk caught my attention: "vendors are talking about 200,000 events per second; this is great, but far from what we need. The typical load in our case is 7,000 events per second, with a peak of 13,000 events per second, and any product in the market can deal with these loads with no problem". He further pointed out that the main effort has been invested in connectivity with other applications, not in setting up the algorithmic trading system itself. This brings me to earlier thoughts about the "mythical event per second". Old-timers may remember "The Mythical Man-Month", which discussed how problematic it is to estimate time durations in software projects.
While some vendors present "events per second" as a headline property, there are two questions. The first: what does it mean? Is it throughput? One can have very large buffers within the boundaries of the system and make sure that every incoming event lands in the buffer and will be processed eventually; this is a useful property, but it does not make the system high performance. What matters is how many events can actually be processed in a second (and thus latency is affected). But what do we mean by "process"? This is like talking about "transactions per second" without specifying what happens between the beginning and the end of the transaction; it can mean anything from merely filtering out the event to using it (and its descendants) in 2,000 pattern detections. Thus, without a benchmark, the term is meaningless. The second question is even more interesting: how critical is high performance? Some vendors have an interest in making it a major requirement, and if you ask a customer "do you want high performance?", nobody will say no. This reminds me of a customer interview in which I asked whether there was a need for high throughput; the answer was "yes, of course". When I asked about the quantity, the answer was "around 10,000 events per hour" -- and if you think about it, for a human that is indeed high throughput, though it is fewer than three events per second. While there are certainly some applications that need high performance, the evidence shows that the majority (say 95 percent) of candidate applications do not require it, since the main value of event processing lies in the abstractions that mitigate complexity, not in raw throughput. I remember a discussion with the CIO of a bank, who said: today we are doing these things in batch; let's learn how to walk before we start dancing. I think that teaching customers how to walk first -- using the right abstractions, integrating with their systems, and so on -- is the higher priority.
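To make the buffering point concrete, here is a minimal sketch (in Python, with hypothetical numbers; the rates are only the ones quoted above, the model itself is mine) of a simple fluid model: events are absorbed into a buffer at one rate and processed at another. Absorbing 200,000 events per second says little about performance if the engine only works through 7,000 per second; the backlog, and hence the latency, grows without bound.

```python
def simulate(arrival_rate, processing_rate, seconds):
    """Deterministic fluid model: events arrive and are processed at
    constant per-second rates. Returns the backlog left in the buffer
    and the worst-case latency (how long the last buffered event must
    wait for the whole backlog to drain)."""
    backlog = 0.0
    for _ in range(seconds):
        backlog += arrival_rate                    # events absorbed into the buffer
        backlog -= min(backlog, processing_rate)   # events actually processed
    worst_latency = backlog / processing_rate
    return backlog, worst_latency

# 200,000 events/second absorbed, but only 7,000/second processed:
# after 10 seconds the backlog is 1,930,000 events, and the last one
# waits roughly 276 seconds -- minutes, not milliseconds.
backlog, latency = simulate(arrival_rate=200_000, processing_rate=7_000, seconds=10)
print(backlog, latency)

# The workload Almgren described (13,000/s at peak) against an engine
# that processes, say, 15,000/s: no backlog ever builds up.
backlog, latency = simulate(arrival_rate=13_000, processing_rate=15_000, seconds=10)
print(backlog, latency)   # prints 0.0 0.0
```

The point of the sketch is only that "events per second" absorbed and "events per second" processed are different numbers, and that the gap between them shows up as latency.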
Bottom line -- I think that "high performance" in event processing is somewhat hyped; it is only one of a series of considerations, and certainly not the most important one for most applications. I started with David Luckham's talk, and will end by saying that EP frameworks like the dynamic event processing network mentioned above will, by definition, resolve scalability issues at the framework level rather than at the engine level.
More on frameworks vs. engines - later.