Showing posts with label TCO. Show all posts

Saturday, August 25, 2012

Acting faster than the speed of thinking

Chris Taylor from TIBCO has written a post entitled "getting there faster than your opponent" in his blog, which carries the nice title "a place for good ideas in a fast changing world". In the spirit of TIBCO's "two-second advantage", it makes the point that event processing is vital for acting faster than others. Chris enlists the famous OODA loop, which originated in the US Air Force, to argue that faster decisions can make the difference on the battlefield. Note that OODA is one variation of a control loop; another variation I have written about is the 4D variation.
It is interesting to note that speed of reaction was one of the initial reasons for using event processing technology in applications like high frequency trading, where trading programs compete on speed. I guess that military applications also gain from competition in fast reaction, as does cyber warfare.
While these are notable applications, a common misconception is that event processing is restricted to this type of application. In fact, there are many other applications, in which competing on speed is not an issue, that can benefit from event processing, thanks to the reduction in development and maintenance cost that its higher-level abstractions provide. Indeed, one of the first posts in this blog, almost five years ago, tried to answer the question of whether the only motivation to use event processing is high performance. In that post I discussed the Total Cost of Ownership as a function; I guess it is still applicable today.
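To make the Total Cost of Ownership argument concrete, here is a toy sketch of TCO as a function of the number of event processing rules, comparing a hand-coded solution with a platform that offers higher-level abstractions. All cost figures and parameter names are invented for illustration only; they are not taken from any real project.

```python
# Toy illustration: even when speed is not an issue, a platform with
# higher-level abstractions can win on total cost of ownership.
# All figures are invented for illustration only.

def tco_hand_coded(n_rules, cost_per_rule=10.0, maint_factor=0.5):
    """Each pattern is coded by hand; maintenance scales with the code written."""
    development = n_rules * cost_per_rule
    maintenance = development * maint_factor
    return development + maintenance

def tco_ep_platform(n_rules, license_cost=100.0, cost_per_rule=2.0,
                    maint_factor=0.2):
    """Pay a fixed platform cost; each rule is cheap to express and maintain."""
    development = license_cost + n_rules * cost_per_rule
    maintenance = n_rules * cost_per_rule * maint_factor
    return development + maintenance

def breakeven(max_rules=1000):
    """Smallest number of rules at which the platform becomes cheaper."""
    for n in range(1, max_rules + 1):
        if tco_ep_platform(n) < tco_hand_coded(n):
            return n
    return None
```

With these invented numbers the platform pays for itself once the application has more than a handful of rules; the point is the shape of the curves, not the specific values.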

The two main observations are: first, acting faster than the speed of human thinking is an important class of event processing applications, but not the only reason to use the technology; second, in many cases event processing alone is not enough, and some real-time decision mechanism (reactive or proactive) needs to be applied to achieve autonomous action, since the required speed demands eliminating the human from the loop.


Sunday, March 22, 2009

On Event processing as part of DBMS

Paris. I arrived in Paris a few hours ago, and went for a walk in the streets to stretch my legs after the flight. My hotel is not far from the Bastille, so I went there and watched the monument that you can see in the picture, and the people watching it. Now I have returned to my hotel to check Email and rest, before my hosts come to take me to dinner.

Today's topic is a short reply to a discussion claiming that event processing should actually be done as part of a DBMS. This is not a new claim; it is repeated from time to time by one database person or another. In my past I dealt with active databases, which attempted to put some form of event processing functionality into a DBMS engine; overall this approach did not gain much traction in DBMS products. The main idea was to add language constructs in the form of ECA (event-condition-action) rules, which also supported composite events, to DBMS engines. The only traction in products from this work is the notion of a "trigger", which does not really do justice to what the active database community tried to achieve...

Anyway, twenty years have passed, and event processing thinking has evolved from the early thinking on active databases. As said, the main issue here is not performance, as some vendors claim, but TCO. Much of what is called "complex event processing" deals with the detection of patterns over multiple event instances and types, and SQL may not be a natural language to express such patterns, partly due to its set-oriented thinking and some other limitations. In fact, in some cases customers reported that they could save 75% of development cost by using a language that can express patterns more naturally. This difference may not materialize in languages that are themselves variations or extensions of SQL, but those are only part of the EP universe.
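To illustrate why such patterns feel more natural outside set-oriented SQL, here is a minimal sketch of a temporal sequence pattern over an event stream: detect event types occurring in a given order within a time window. This is generic pseudocode-style Python, not any particular product's pattern language, and the event representation is invented for the example.

```python
def detect_sequence(events, pattern, window):
    """Detect an ordered sequence of event types within a time window.

    events:  iterable of (timestamp, event_type) pairs, in time order
    pattern: list of event types that must occur in this order
    window:  the whole sequence must fit inside this time span
    Returns a list of (start_time, end_time) pairs, one per full match.
    """
    matches = []
    partial = []  # each entry: (start_time, index of next type to match)
    for ts, etype in events:
        # drop partial matches whose time window has expired
        partial = [(start, i) for start, i in partial if ts - start <= window]
        advanced = []
        for start, i in partial:
            if etype == pattern[i]:
                if i + 1 == len(pattern):
                    matches.append((start, ts))   # sequence completed
                else:
                    advanced.append((start, i + 1))
            else:
                advanced.append((start, i))       # keep waiting
        partial = advanced
        if etype == pattern[0] and len(pattern) > 1:
            partial.append((ts, 1))               # start a new candidate
    return matches
```

Expressing the same "A then B then C within 30 minutes" logic as a set-oriented SQL self-join over an events table is possible, but it is considerably harder to write, read, and maintain, which is exactly the TCO point above.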

Of course, the DBMS community could return to the idea of active databases and add language constructs to express patterns in the DBMS engine, and I guess this may be a valid variation of event processing, but it will not blend naturally into SQL; it will have to be a hybrid language. More about this - later.

Sunday, February 1, 2009

On Off-Line Event Processing



A comment made by Hans Glide on one of my previous postings on this blog prompted me to dedicate today's posting to off-line event processing. Well - as a person who is constantly off any line, I feel at home here...

Anyway -- some people may wonder and think that the title above is an oxymoron, since they put "real-time" in the definition of event processing. I have used this picture before; it best describes some of what is written about event processing - by everybody:



This, of course, illustrates the well-known story of the blind men touching an elephant: each of them describes the elephant quite differently. The phenomenon of people saying "event processing is only X", where X is a subset of the area, is quite common. In our case X = "on-line".

The best thing here is to tell you about a concrete example of a customer's application I am somewhat familiar with. The customer is a pharmaceutical company which monitors its supplier-related activities. It looks at events related to those activities and checks them against its internal regulations. The volume is several thousand events per day, and from a business point of view there is no real-time requirement: a regulation violation can be observed, and action taken, on the next day. The way the system works is to accumulate events during the day and activate the event processing system at the end of each day - effectively batch processing done off-line.

An interesting question is why this customer chose to use an event processing system, rather than the more traditional approach of putting everything in a database and using SQL queries. The answer is quite simple: this application has some interesting properties:
  • The number of regulations is relatively high (in the higher range of three digits);
  • Many of the regulation rules are in fact detections of temporally oriented patterns that involve multiple events;
  • Regulations are inserted or modified frequently.
Given all this, it turned out that off-line use of an event processing system was the most cost-effective solution. While using SQL is nominally possible, writing these regulations in SQL is not easy, and their sheer number makes the investment in development and maintenance quite high.
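The end-of-day batch run described above can be sketched as follows. The event fields, the rule shown, and the 48-hour "late delivery" regulation are all invented for illustration; the real company's regulations are of course not public. The point is the shape of the system: accumulate a day's events, then run every rule over the batch.

```python
from datetime import datetime, timedelta

# One invented regulation rule: every shipment must have a matching
# 'delivered' event within 48 hours. A rule is any function that takes the
# day's time-ordered events and returns a list of violation descriptions.
def late_delivery_rule(events, limit=timedelta(hours=48)):
    shipped = {}  # shipment id -> ship time, for shipments still undelivered
    for ev in events:
        if ev["type"] == "shipped":
            shipped[ev["id"]] = ev["time"]
        elif ev["type"] == "delivered":
            shipped.pop(ev["id"], None)
    end_of_batch = max(ev["time"] for ev in events)
    return [f"shipment {sid} not delivered within {limit}"
            for sid, t in shipped.items() if end_of_batch - t > limit]

def run_batch(days_events, rules):
    """The off-line step: apply every regulation rule to the accumulated events."""
    violations = []
    for rule in rules:
        violations.extend(rule(days_events))
    return violations
```

In a real deployment `rules` would hold hundreds of such functions, and the ease of adding or modifying one independently of the others is precisely what drives the TCO advantage over a pile of hand-written SQL.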

So - the benefit of using event processing here is neither the real-time aspect nor high-throughput support, but simple TCO considerations.

This is not the only application of this type; in fact, I have seen several other cases in which event processing has been used off-line. There is also another branch of off-line processing which combines on-line and off-line processing together, but I'll write about that in another posting...

More - Later.

Tuesday, November 20, 2007

"The only motivation to use EP COTS is to cope with high performance requirements" - true or false ?



Somehow, I find myself using my collection of analysts' presentations to help me make some points. This time I am showing a slide from Roy Schulte's presentation at the Gartner EPS summit - I'll return to the content of this slide shortly, but will start with a discussion I had yesterday about the reasons that enterprises use COTS for event processing. I have heard (and not for the first time) the assertion that the only reason one would want to use EP software rather than a hard-coded solution is the ability to cope with high throughput / low latency - in short, to deal with high performance requirements. If there are no high performance requirements, there are other solutions: e.g., the database people think that in this case one can insert all events into a database and use simple SQL queries for CEP patterns, or one can just use good old C/Java programming for the purpose. This is somewhat inconsistent with my own experience, in which customers that did not have "high performance" requirements were eager to use CEP technologies.

Indeed, high performance is a reason to use CEP COTS; however, as indicated in Roy Schulte's slide above, it is actually a minor one. According to Gartner, the high end is somewhere between 5 and 10 percent of the candidate applications, and in the prediction for 2012 the high end accounts for 8% out of the 27% total use. Note also that Roy Schulte defines the high end as 250 events per second, which is really far from any strict definition of "high performance", so the numbers are even lower. It seems that the market for "non high performance CEP" is much larger, and will grow faster.

If that is so, where does the misconception that EP always equals high performance come from? I think there are two sources. The first is that the early adopters were from the capital markets industry, where some (not all!) of the candidate applications indeed have high performance characteristics. However, with the growth of the market and the use of EP software in other applications and industries, these types of applications, while continuing to grow, will not match the higher growth of non-high-performance applications. The other reason is that some vendors make high performance their main message, and try to convince the market that it is indeed the most important property.

So - if high performance is not the only reason to use EP COTS, what are the other reasons? This is a matter for investigation, but IMHO the main one is "high level programming" and agility - in short, the ability to reduce the Total Cost of Ownership.
I'll provide more insights about the TCO issue in a future post.