Thursday, August 6, 2009

On the criteria to evaluate event processing products


This is a map of Finland, the location of our family vacation this year. The vacation is planned to start on Saturday, and I'll be disconnected from cyberspace for 15 days. Meanwhile, I am working late at night to make progress on the EPIA book we are writing.

It seems that this is the time of year for analyst reports; the community blogland was full of references to the Forrester Wave report, "Complex Event Processing Platforms, Q3 2009", dated August 4, 2009.

I will not comment on the grades they gave the different products. The reason is that, in my role as chair of EPTS, I see my job as working on the "coop" side of coopetition, and leaving the competition side of coopetition to others. I think that the main competition is not between the different vendors (though they compete on specific points), but against the barriers that prevent the event processing area from fulfilling its potential and becoming a pervasive mainstream technology.

The Forrester report starts the executive summary by saying:
"Forrester evaluated nine complex event processing (CEP) platforms using 114 criteria".

Without getting into the long list of criteria (not part of the report itself, but I managed to look at it), I have some doubts about the ability to provide meaningful information to customers by weighing 114 criteria. There are two reasons: one methodological and one practical.

On the methodology side, the compensatory model of decision making advocates weighing many criteria; however, experience shows that the actual decision-making model is lexicographic, meaning that the criteria are ordered by importance, and the decision is made according to the most important ones. People may use a compensatory model that weighs a lot of criteria if their organization requires them to work this way, but this is done only as justification for a decision that has already been made by the lexicographic model.
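To make the difference between the two decision models concrete, here is a toy sketch; the criteria names, weights, and scores are entirely invented for illustration, not taken from the Forrester report:

```python
# Toy illustration of the two decision models discussed above.
# All criteria names, weights, and scores are invented for illustration.

products = {
    "A": {"latency": 9, "expressiveness": 6, "tooling": 4},
    "B": {"latency": 7, "expressiveness": 8, "tooling": 9},
}

def compensatory(scores, weights):
    """Weighted sum: every criterion can compensate for every other."""
    return sum(weights[c] * scores[c] for c in weights)

def lexicographic(products, priority_order):
    """Order criteria by importance; decide on the first criterion
    that discriminates between the remaining candidates."""
    candidates = list(products)
    for criterion in priority_order:
        best = max(products[p][criterion] for p in candidates)
        candidates = [p for p in candidates if products[p][criterion] == best]
        if len(candidates) == 1:
            break
    return candidates[0]

weights = {"latency": 0.3, "expressiveness": 0.3, "tooling": 0.4}
# Compensatory: B's strong tooling score compensates for weaker latency.
# Lexicographic with latency as the top criterion: A wins outright.
```

The point of the sketch: the two models can pick different winners from the same scores, which is why a weighted sum over 114 criteria may say little about the decision a customer actually makes.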

Let's move from decision making theory to the event processing universe. The event processing universe is diverse from both functional and non-functional points of view. I really don't believe in "one size fits all" in anything related to this area, and this goes for sets of evaluation criteria as well. Criteria that are good across a variety of cases, weighed together, may not yield a good solution for any particular case. The more practical approach is to segment the space of applications, and assign a relatively small set of important criteria to each segment. Anybody who manages to do this will do more to help customers make the best decision for their particular case. I hope that EPTS, through its use cases workgroup, will be able to provide this segmentation, and that this will be the starting point for analysts to come up with the right criteria for each segment.

Tuesday, August 4, 2009

On the Gartner 2009 application architecture hype cycle

Here is a revised version of my Blog entry relating to the Gartner application architecture hype cycle report (Gartner report ID G00168300, dated July 16, 2009). The revision was done at the request of Gartner, who asked that I cite their report exactly, and make a clear distinction between what is quoted from the Gartner report and my own remarks.

Here is a collection of citations from the report that are of interest from the event processing perspective:


  1. "Event-driven architecture (EDA) is an architectural style in which a component (or several components) in a software system executes in response to receiving one or more event notifications". In the report, EDA is positioned in the hype cycle phase "Climbing the slope of enlightenment", which according to Gartner's terminology is defined as "Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process".
  2. CEP is positioned under the hype cycle phase of "Technology Trigger" which according to Gartner's terminology is defined as "A breakthrough, public demonstration, product launch or other event generates significant press and industry interest", and is the phase that precedes the "peak of inflated expectations" phase.
  3. For CEP: "market penetration is 1% to 5% of target audience"
  4. CEP use is expected to grow at approximately 25% per year from 2009 to 2014, but the use of COTS CEP products is expected to grow more than 40% per year in this time frame
  5. For CEP COTS products: "Most of these products are immature and incomplete".
  6. "Most business analysts do not know how to identify business situations that could be addressed through CEP, and that is limiting the rate at which CEP use can expand. Most software engineers are not familiar with CEP development"
  7. "The Event Processing Technical Society (EPTS) was launched in June 2008, and it is expected to facilitate the adoption of CEP".


Here are my own comments:

  • Note that EDA and CEP are positioned in different phases of the hype cycle.
  • The fact that market penetration is low indicates that there is still substantial growth potential, if we can overcome the adoption challenges.
  • The adoption challenges consist of product maturity and market awareness. We are still in the first generation of products in this area, and maturity is typically achieved in later generations. Awareness and understanding of the value and positioning are indeed a challenge.
  • EPTS indeed has been formed to facilitate the adoption of event processing. It addresses both challenges mentioned here: advancing the state of the art to accelerate the next generations of products, and educating the general community about the value and positioning of event processing within enterprise computing.


Sunday, August 2, 2009

Concentrating on the important things with event processing

I would like to welcome to the Blogland Mani Chandy, seen in this picture, one of the pioneers of the event processing area, the person who came to the first event processing symposium in 2006 to tell us about the obstacles on the way to success in this area, and my partner in organizing the event processing Dagstuhl seminar in 2007, and the second one in 2010.
Mani has recently started a Blog named Smart systems that sense and respond, so be sure to bookmark it. Mani writes his Blog posts as mini-papers, including references.

Inspired by the last posting on Mani's Blog, I would like to write about a role of event processing that is sometimes overlooked -- not generating more information, but generating less information, or more precisely, focusing our attention on the right information.

Mani says in this posting that human attention is the world's scarcest resource; thus, focusing attention on the right stuff has a very strong impact on what happens. This is true in many senses. I think that in one of the many management courses I took over the years, the instructor said something like: you can do around 20% of what you plan to do, but the difference between success and failure is whether you can identify the right 20%. This is true in many areas of life: a smart student knows what is important to study in depth before the final exam, a smart physician knows which of the symptoms are important, a smart reporter knows how to identify the news item that will bring him fame, and there are many other examples.

Event processing is the basis for smart systems that can detect when it is important to attract a person's attention. In this case, the role of the software is not to create more events, but to highlight existing ones, though the highlighted ones may be derived events rather than raw events...
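As a minimal sketch of this "less is more" role: the system consumes many raw events and emits only the few derived events worth a person's attention. The event fields and the threshold here are my own invention for illustration:

```python
# Minimal sketch: consume many raw events, surface only the few
# that merit human attention as derived "alert" events.
# Event fields and the threshold are invented for illustration.

raw_events = [
    {"sensor": "temp", "value": 21.5},
    {"sensor": "temp", "value": 95.0},
    {"sensor": "pressure", "value": 1.0},
]

def highlight(events, threshold=90.0):
    """Derive an attention-worthy event only when a reading
    crosses the threshold; everything else is suppressed."""
    return [
        {"alert": "check " + e["sensor"], "reading": e["value"]}
        for e in events
        if e["value"] >= threshold
    ]

alerts = highlight(raw_events)  # one derived event out of three raw ones
```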

So -- focus your attention on the other smart articles in the smart systems Blog. More - later.

Wednesday, July 29, 2009

On event processing and philosophy

In case you don't know, I have a BA in philosophy; at later phases in life I also studied for an MBA and a Ph.D. in computer science, but I still have a soft spot for philosophy. It was the most fun to study, and I also think it was very useful in teaching me clear and exact thinking, an important asset I got from these studies. As I moved on, I did not keep much contact with philosophy, although during my Ph.D. studies I took an advanced logic course in the philosophy department with the late Professor Hugues Leblanc, one of the greatest scholars I have ever met, one of the leading logicians of our times, and a great person.

Recently I had an Email exchange with an interesting person, Ken Archer from Telogical Systems, whom I met in Nashville earlier this month. Ken recently participated in a tutorial on the ontology of events, given by philosophers; I have copied the description he sent me:

These were philosophical ontologists in dialogue with ontology engineers (primary in biomedical domains, where ontologies have found relatively high acceptance). The philosophers’ starting premise was that the top level distinction in ontology is between continuants (entities that endure through time while maintaining their identity) and occurrents (entities that happen, unfold or develop in time). Continuants and occurrents are orthogonal to each other, as continuants participate in occurrents (e.g. surgeons and patients participate in surgery). Following from this premise was the claim that, while continuants have received much attention historically in philosophy, occurrents have not. As a result, we are much more comfortable talking and generalizing about continuants (objects) than we are about occurrents (processes, events), and we often tend to reify occurrents into continuants as a result.

From this starting point, the philosophers embarked on a philosophical ontology of occurrents that they said reflected the best results of current philosophical research. The bulk of what they had to say can be found in the Basic Formal Ontology, beginning on page 59 (the diagram for their occurrent ontology is copied below). I’ve found the Basic Formal Ontology to be very helpful in all of my modeling. As a result, these philosophers would definitely say that we can talk of types of events or processes, as we do in event processing, and argue that the reason we tend not to speak of types of events/processes is this bias for continuants over occurrents. The research project of these ontologists, then, is to provide an upper ontology of occurrents.

I think that while we are constructing our terminology and conceptual models, we can go back to the roots and look at the formal ontology work. I need some free time to digest it, but it looks interesting.
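As a toy sketch of the continuant/occurrent distinction in code, following the tutorial description quoted above (this is my own illustration, not part of the Basic Formal Ontology itself; the surgery example is from the quoted text):

```python
# Toy sketch of the top-level ontological distinction described above:
# continuants endure through time while keeping their identity;
# occurrents happen, unfold or develop in time;
# continuants participate in occurrents.

from dataclasses import dataclass, field

@dataclass
class Continuant:
    """An entity that endures through time while maintaining its identity."""
    name: str

@dataclass
class Occurrent:
    """An entity that happens, unfolds or develops in time."""
    name: str
    participants: list = field(default_factory=list)  # Continuants taking part

    def add_participant(self, c: Continuant):
        self.participants.append(c)

# The example from the quoted text:
# surgeons and patients participate in surgery.
surgeon = Continuant("surgeon")
patient = Continuant("patient")
surgery = Occurrent("surgery")
surgery.add_participant(surgeon)
surgery.add_participant(patient)
```

Notice that events in event processing are occurrents, while the entities they refer to (customers, sensors, traders) are continuants; the participation relation is exactly what event payloads usually encode.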

More on this - later.

Tuesday, July 28, 2009

On social networks and event processing


Today I noticed that the number of my LinkedIn contacts has reached a round number of 600. Unfortunately, two of these 600, Klaus Dittrich and Shlomit Zak, have passed away, but their LinkedIn records are still alive; I hope the rest of the 598 are alive and well. I was introduced to LinkedIn three or four years ago, when I received a LinkedIn invitation from Mark Palmer, who became my first contact, and I accumulated the rest of the links over the years. Some of them invited me, and some were invited by me. I have there classmates from high school, colleagues from all the places I have worked, students I taught or supervised, some friends, and of course the members of the event processing community. LinkedIn was my first social network; currently I don't even know exactly how many social networks I am a member of, but LinkedIn is the only one I am active in. So I'll take this opportunity to say a few things about how event processing can become part of social network platforms.

There is a lot of buzz recently about Twitter events; Twitter is also part of the Web 2.0 family, but I'll take the LinkedIn example now, since LinkedIn has some structure, which creates various event types. While there are applications that attempt to use Twitter events to get information about the stock market, in social networks the events are about individuals. Some of the interesting event patterns that may be obtained about individuals:
  • A person got more than 3 recommendations during the last week -- probably looking for a job now; this is information that can interest various people, such as head hunters or enterprises seeking employees.
  • A certain number of contacts joined a certain group -- may be an indication of a group of interest.
  • LinkedIn has microblogging of 140 characters, like Twitter -- various things can be obtained from this microblogging.
  • Postings that people make in groups may indicate something about a person.
  • The level of activity in groups may also indicate something about that group.
Many other possibilities can be thought of. I foresee that event processing will be embedded in social network platforms, and maybe will become the basis for paid services. Some social networks send me an Email every time somebody looks at my profile, and offer to reveal the identity of this person if I become a paying member. I have never been tempted to do it; however, maybe the use of event processing will enable services that are worth paying for, though I guess most people expect to get these kinds of services for free... Building such a service can be a good exercise for students. That's all for today, almost midnight...
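The first pattern in the list above can be sketched as a small sliding-window detector. The event structure, window size, and threshold are my own invention for illustration; LinkedIn exposes no such feed:

```python
# Toy sketch: detect "more than 3 recommendations during the last week"
# over a stream of timestamped recommendation events.
# The event structure and threshold are invented for illustration.

from collections import defaultdict, deque

WEEK_SECONDS = 7 * 24 * 3600

class RecommendationPattern:
    def __init__(self, threshold=3, window=WEEK_SECONDS):
        self.threshold = threshold
        self.window = window
        self.history = defaultdict(deque)  # person -> recent timestamps

    def on_event(self, person, timestamp):
        """Feed one recommendation event; return a derived event when
        the pattern (more than `threshold` in `window`) is detected."""
        q = self.history[person]
        q.append(timestamp)
        while q and q[0] <= timestamp - self.window:
            q.popleft()  # slide the window forward
        if len(q) > self.threshold:
            return {"pattern": "possibly job-seeking", "person": person}
        return None

detector = RecommendationPattern()
day = 24 * 3600
# Five recommendations for the same person on five consecutive days:
alerts = [detector.on_event("alice", t * day) for t in range(5)]
```

The derived event ("possibly job-seeking") is exactly the kind of higher-level information a head hunter would pay for, while the raw recommendation events individually say very little.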

Friday, July 24, 2009

On use of technology and unfairness


This is somewhat related to event processing, but the issue is more general. It relates to an article published today in the NYTimes, claiming that high speed traders gain an unfair advantage over regular traders. The relationship to event processing is obvious, since event processing systems typically run under the covers of "high speed trading"; however, the point is whether technological progress should be stalled due to fairness considerations. I live in a country with socialist roots. When I took the matriculation exams in high school, we were forbidden from using calculators, since at that time they were relatively expensive, and not every high school student could purchase one; I actually learned how to use a slide rule, which I think can be found only in museums today, and which is somewhat equivalent to a calculator. It did not take long until calculators were allowed. Another example from Israeli history: when TV started in Israel, it was black and white, but programs purchased from other countries were in color, so people started to purchase more expensive TV sets that supported color. The government thought it was not fair that the "rich" could watch movies in color while the "poor" could watch only in black-and-white, and ordered the TV station (in those days there was only one, owned by the government) to erase the colors, so nobody could see anything in color, even with a color TV set. However, Israel is also a high-tech country, and an engineer called Mooly Eden (who is now a senior executive at Intel) developed a device called the "anti eraser" that restored the color by reversing the erase function. In the end, the government gave in, and the "anti eraser" became obsolete. These are two examples where fairness considerations tried to beat technological progress, but not for long.

Technology is not the only area where fairness is an issue. We send our children to a private school, since we see that the public schools here are inferior; not really fair towards those who cannot afford it. The same goes for health care. This actually extends to every facet of life; citing the immortal phrase from "Star Trek: The Next Generation" -- resistance is futile.

Thursday, July 23, 2009

On logical and physical interpretations of EPN and EPA


My youngest daughter Daphna finished her summer course at the Technion last week, in the framework of the "science seeking youth" program. She studied her first programming course using Microworlds, a variation of the rather old Logo language; this is of course translated to a lower level language when executed in practice, but that fact is totally transparent to those who program in Microworlds. I am using this analogy since there seems to be some terminology discussion going on recently about the terms EPA and EPN. These terms were introduced in the past by David Luckham, who used them to describe a physical, operational view of an event processing application. Thus, an EPA is mapped in 1-1 fashion to a software module, and the EPN describes the running software modules and the connections among them using physical channels; the first version of the EPTS glossary reflects this view.

However, the way I use the terms EPN and EPA is slightly different. The physical view is of interest to system administrators, but for users, designers and developers, the logical view is more relevant, so I use these terms in a logical way rather than a physical way. To demonstrate the difference, let's look at the following simple example. There are many patterns related to the management of a call center; one of them is frustrated customer detection: if a gold customer complains three times within a single day (possibly about multiple issues), then a supervisor should call this customer immediately.
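For concreteness, the frustrated customer pattern can be sketched as a single logical function, independent of how it is later mapped to software modules. The event structure and names here are my own illustration, not tied to any particular product:

```python
# A single logical EPA for the frustrated-customer pattern:
# three complaints by a gold customer within one day trigger
# a derived "supervisor call" event.
# Event structure and names are illustrative, not tied to any product.

from collections import defaultdict, deque

DAY_SECONDS = 24 * 3600

class FrustratedCustomerEPA:
    def __init__(self, count=3, window=DAY_SECONDS):
        self.count = count
        self.window = window
        self.complaints = defaultdict(deque)  # customer -> timestamps

    def on_complaint(self, customer, is_gold, timestamp):
        """Process one complaint event; emit a derived event
        when the pattern is matched."""
        if not is_gold:
            return None  # the pattern applies to gold customers only
        q = self.complaints[customer]
        q.append(timestamp)
        while q and q[0] <= timestamp - self.window:
            q.popleft()  # keep only complaints within the last day
        if len(q) >= self.count:
            q.clear()  # consume the matched events
            return {"action": "supervisor_call", "customer": customer}
        return None

epa = FrustratedCustomerEPA()
results = [epa.on_complaint("Alice", True, t) for t in (100, 200, 300)]
```

Note that nothing in this logical definition says whether one instance of the class handles all customers, or one instance is spawned per customer; that is exactly the implementation spectrum discussed next.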

However, there is a spectrum of ways in which this application can be implemented in practice:
  • It is possible to have a centralized implementation, with a single software module that executes all the different functions within this application; the EPN is then internal to this module;
  • At the other extreme, we can have a software module implement a single function instance; for example, one agent detects the frustrated customer pattern for Alice, while a different agent detects the frustrated customer pattern for Bob;
  • Another possibility is a context-oriented implementation --- all patterns related to Alice are processed within a single software module;
  • Yet another possibility is a functional partition -- there is a single module for detecting the frustrated customer pattern for all customers;
  • There can also be further combinations.
Should the user / system designer / developer care about this and build a different EPN for each variation? In the past, when event processing was hard coded in general purpose programming languages, the logical EPN was also the physical EPN, but one of the gains from using dedicated event processing languages is the ability to abstract the implementation away.
The actual mapping of functions to software modules is left to an optimizer, and can be changed dynamically based on changes in system behavior, load balancing, etc. Actually, the paper we presented at DEBS 2009 is part of such an optimization scheme. Thus, the way I use the term EPA is as a single logical function, not necessarily a software module. In the EPIA book we are building our entire conceptual framework on a logical level meta-language that can be translated to various implementations, and even programming styles. As said, there is also interest in the physical realization of an EPN, but that is mainly of interest to system administrators and implementers of event processing products; it should be transparent to the user of event processing applications. More on this topic - later.