Wednesday, December 19, 2007

On deleted event, revised event and converse event

First, congratulations to my student Ayelet Biger, who successfully passed her M.Sc. thesis defense exam today. Ayelet's thesis topic was "Complex Event Processing Scalability by Partition", which deals with parallel execution of CEP logic when there are complex dependencies among the different agents. I'll discuss this issue in a later posting - we still need to compose a paper on this thesis for one of the coming conferences. Ayelet is the 17th M.Sc. student I have graduated (together with 5 Ph.D. students, that makes 22 thesis exams). Most of the students have done theses on active databases, temporal databases (my past interest areas) and, in the last few years, event processing. Supervising graduate students is a great way to work on new ideas that I don't have the ability to work on in my regular work; the only thing needed is three more hours in each day...

Today's topic is inspired by a recent blog post by Marco Seiriö. Marco is one of the pioneers in EP blogging; I started reading his blog in January 2006, when he started it as "Blog on ESP". At some point it became "Marco writes about complex event processing" - another piece of evidence that the name ESP has disappeared. Anyway, in his blog Marco talks about the event model. I'll not discuss event models today, but will concentrate on one interesting point that Marco raises about "undoing events". This is indeed a pragmatic issue with some semantic difficulties. There are systems in which events can be deleted, and some actions can be triggered by the event deletion. However, an event is not regular data and cannot be treated as such: since an event represents something that happened in reality, events are conceptually "append only" - in database terms, one can only insert events, never modify or delete them. Deleting events also blocks the ability to trace decisions/actions or to process events retrospectively. So, when do we actually need to delete/undo/revise events:

  1. when an event is entered by mistake - typically not the event itself but some of its attribute values, so we need the ability to revise an event.
  2. when we wish an event to no longer affect the processing.
  3. when the event itself has expired or we'll not need it anymore, in any processing - including retrospective processing.

The first case is a revision case: if we are in an "append only" mode, then the way to do it is to enter another event that can override an existing event (or set of events) for the purpose of processing. Example: somebody sends a bid in an electronic auction and realizes that one of the details (say, the price he is ready to pay) is wrong; he can then add another bid that overrides the first one. Why not delete the original bid? It may be that the original bid is already in process and the overriding cannot stop that process; even if not, there is a possibility that for retrospective processing we'll need to reconstruct a past state which includes the original bid. (These considerations are actually not new - we thoroughly discussed these issues within the temporal database community a decade ago, when we (Sushil Jajodia, Sury Sripada and myself) edited a book about temporal database research and practice.)
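To make the "override, don't delete" idea concrete, here is a minimal Python sketch of an append-only log in which a revising event overrides an earlier one without removing it; the event shape, the `revises` attribute and the method names are my own illustration, not taken from any particular product:

```python
from dataclasses import dataclass, field
from typing import Optional
import itertools
import time

_ids = itertools.count(1)

@dataclass
class Event:
    kind: str                      # e.g. "bid"
    payload: dict
    revises: Optional[int] = None  # id of the event this one overrides
    id: int = field(default_factory=lambda: next(_ids))
    ts: float = field(default_factory=time.time)

class AppendOnlyLog:
    def __init__(self):
        self._events = []          # events are only ever appended

    def append(self, event):
        self._events.append(event)

    def effective(self):
        """Current view: revised events are overridden, never removed."""
        overridden = {e.revises for e in self._events if e.revises is not None}
        return [e for e in self._events if e.id not in overridden]

    def as_of(self, ts):
        """Retrospective view: reconstruct the state at a past point in
        time, which may still include the original (later revised) event."""
        past = [e for e in self._events if e.ts <= ts]
        overridden = {e.revises for e in past if e.revises is not None}
        return [e for e in past if e.id not in overridden]

# The auction example: the second bid overrides the first for processing,
# but the first stays in the log for retrospective reconstruction.
log = AppendOnlyLog()
bid = Event("bid", {"price": 100})
log.append(bid)
log.append(Event("bid", {"price": 90}, revises=bid.id))
```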

The second case is even more interesting, but requires a similar type of thinking: here we would like to prevent an event from taking effect, which can be done by sending a "converse event" that reverses the effect of the original event - e.g., cancel bid. The implementation problem is that this event, and possibly its descendant events, may have been flowing all over the event processing network, with some having already left the EPN with actions triggered, some still in process, and some being part of a state but not processed yet (e.g., because a pattern has not been detected yet). Theoretically there is a possibility to apply something similar to a "truth maintenance system" in AI that also tracks the actions and compensates for all of them, but this complicates the system, so it is recommended only when it is critical to do it (I'll discuss such cases in other postings). When the event has not yet left the EPN, it is still possible to stop it, but most systems do not provide a language primitive to do this globally in an EPN; recently I watched a concrete customer case where they had to do it manually.
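To illustrate only the "still inside the EPN" part, here is a toy Python sketch of a pattern-detecting agent that can retract a pending event when a converse event arrives in time; the event kinds and the `cancel` convention are illustrative assumptions, and the hard part - compensating for actions that already left the EPN - is deliberately not handled:

```python
class PatternAgent:
    """Toy agent: fires when an 'order' and a 'payment' with the same key
    are both in its state. A converse ('cancel') event can retract a
    pending event, but only while the pattern has not been detected."""

    def __init__(self):
        self.pending = {}   # (kind, key) -> event payload

    def on_event(self, kind, key, payload):
        if kind == "cancel":
            # Converse event: retract the target if it is still in our state.
            removed = self.pending.pop((payload["target_kind"], key), None)
            return "retracted" if removed else "too late: needs compensation"
        self.pending[(kind, key)] = payload
        if ("order", key) in self.pending and ("payment", key) in self.pending:
            self.pending.pop(("order", key))
            self.pending.pop(("payment", key))
            return "pattern detected, action triggered"
        return None
```

A converse event arriving after "pattern detected" is exactly the case where a truth-maintenance-like mechanism, or manual compensation, would be needed.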

The third case is the "vacuuming" case - when an event is no longer needed (in an agent's state, in the global state, etc.). I never got deep into this issue and intuitively thought it was a relatively easy problem; however, when it was discussed at the Dagstuhl seminar last year, the claim was that the general issue of event vacuuming is still an open question.

I'll stop here now -- spent enough time on this one... more - later

7 comments:

Richard Veryard said...

The following case probably falls under your second category, but I think it is worth a special mention.

Wikipedia defines a composite event in terms of inference. But such an inference can be overturned (cancelled) by additional information, which creates an alternative (and more likely) explanation of the original data.

Let's say a given combination of basic events makes it likely that a car accident has occurred, and this inference triggers some emergency response. The car accident is represented as a composite (inferred) event. But it is possible to discover in a particular instance that what appeared to be a car accident was in fact something else. This discovery (presumably itself a new event) causes the "carAccident" event to be cancelled, and any emergency services to be notified and stood down.

Indeed, an unstable system might change its mind several times about whether a particular event had happened or not, and this instability would be a cause of concern for a system engineer.

Opher Etzion said...

Hello Richard. This is indeed a valid case in which, due to new information, a "derived event" should be retracted; it is actually category 1 in my posting, since it represents some error in the basic information about events that we need to correct, and it amounts to a kind of "non-monotonic logic". I'll refer to non-monotonic event derivation in one of the next postings, but today this is not something that products support as a first-class concept.

cheers,

Opher

Richard Veryard said...

True, but we need to be careful how we define "error". We are talking about a situation where the inference was correct based on the information available at the time, which may have been incomplete rather than inaccurate. I believe this is known in law as Error Coram Nobis.

Opher Etzion said...

Hello Richard.

There are indeed different types of "errors" - some of them lead to thinking about "event processing under uncertainty"; see
my posting on that topic: http://epthinking.blogspot.com/2007/12/on-sources-for-uncertainty-in-event.html

There can be the case that the inference was right, but the one who defined the pattern defined it in an inaccurate way, since some relevant cases were not anticipated; it can also be the case that a pattern is rightly detected (or not detected) according to the information available, but more information referring to the past arrives late.

cheers,

Opher

Anonymous said...

Ah, so my stealth name change got noted. I think the main reason for it was how others used the terms. Slowly CEP started to stick as the term used to describe this area, so I thought why not use it, in order not to confuse people. I still think we are going to divide the whole EP area into subsections in the future, when we understand it better.


About canceled events... A pragmatic approach that I have been thinking of, which works for an important special case, is to delay the processing of events for a short while. Let's say we hold onto an event for 5 seconds before processing it and allow the event generator to send an "oops" event during that time. It solves part of the problem. Not usable for all kinds of applications, but if you're not in a hurry it solves some of the problems. Most importantly, it stops the effects of revoked events from entering the core of the event processor.
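A small Python sketch of that idea - the class and method names are mine, and the 5-second window comes straight from the description above:

```python
import heapq
import time

class DelayBuffer:
    """Hold each event for `delay` seconds before releasing it to the
    processor; an 'oops' naming the same event id that arrives within
    the window silently drops the event."""

    def __init__(self, delay=5.0):
        self.delay = delay
        self._heap = []        # (release_time, event_id, event)
        self._revoked = set()

    def submit(self, event_id, event):
        heapq.heappush(self._heap, (time.time() + self.delay, event_id, event))

    def oops(self, event_id):
        self._revoked.add(event_id)

    def release_due(self):
        """Return events whose window expired and that were not revoked."""
        out, now = [], time.time()
        while self._heap and self._heap[0][0] <= now:
            _, eid, ev = heapq.heappop(self._heap)
            if eid not in self._revoked:
                out.append(ev)
            else:
                self._revoked.discard(eid)
        return out
```

As noted, the price is 5 seconds of added latency on every event, which is why it only fits applications that are not in a hurry.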

In Richard's accident example I would prefer attaching a probability to each event. 10 primitive events, each with 99% probability, could easily be used to calculate the probability of the detected situation. This probability could then be used to decide what actions to take.
A situation with 100% probability could trigger an irreversible action, but a 20%-probable situation might require that you put some mechanisms in place to execute an undo at a later point.
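Under the (strong) assumption that the primitive events are independent, the combined probability is just the product - a tiny Python illustration of the numbers above, with thresholds that are purely illustrative:

```python
from math import prod

def situation_probability(event_probs):
    # Assumes the primitive events are independent -- a strong assumption
    # that rarely holds exactly in practice.
    return prod(event_probs)

p = situation_probability([0.99] * 10)  # ten primitive events at 99% each
print(f"{p:.3f}")                       # ~0.904

# Policy sketch: irreversible action only when near-certain, otherwise
# keep an undo mechanism ready.
if p > 0.95:
    action = "act, irreversibly"
elif p > 0.20:
    action = "act, but prepare an undo"
else:
    action = "wait for more evidence"
```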

Although this contradicts my idea of keeping the event model as simple as possible: by adding probability to the event model, you require that each part of the system can deal with it. Just another example of how a really small change in the event model can have huge implications for the design of your event processor.

Opher Etzion said...

Hello Marco. About the cancelled event - the approach that you describe has been called by some the "buffer approach": let events "catch up" before proceeding. This is also used to order events that arrive out of order. It applies to some cases and not to others.

stay well,

Opher

Richard Veryard said...

Marco suggests we attach probabilities to events, and I can see that this could be useful sometimes. However, I am not convinced about propagating these probabilities (and the accompanying complexity) throughout the system. I can't see that it is necessary that "each part of the system" can deal with this. I'd prefer to find a way of containing the complexity, so that some parts of the system are presented with a simple binary event statement (either it happened or it didn't) while other parts of the system may be presented with a more complex probabilistic event statement (it might have happened, with probability X%). This is a form of attenuation.

This attenuation could be managed architecturally by layering - for example we might separate a process coordination layer (which knows about the probabilities) from a process execution layer (which doesn't).
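A minimal sketch of that layering in Python - the layer names, the threshold and the event shapes are all illustrative assumptions, not an established design:

```python
def attenuate(probabilistic_events, threshold=0.95):
    """Boundary between the layers: the coordination layer works with
    (event, probability) pairs; the execution layer only ever receives
    plain binary 'it happened' events that crossed the threshold."""
    return [event for event, p in probabilistic_events if p >= threshold]

# Coordination layer: knows about probabilities.
candidates = [("carAccident#17", 0.99), ("carAccident#18", 0.20)]

# Execution layer: sees only simple events, no probabilities attached.
for event in attenuate(candidates):
    print("dispatch emergency response for", event)
```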