Tuesday, December 31, 2013

Summary of 2013 as reflected in this Blog

2013 is drawing to a close, so it is time for a short summary of the year as reflected in this blog.

This year I traveled abroad several times. Two trips were vacations - a long one in New Zealand (pictures on Facebook) and a very short one in Paris. I have also been twice in the USA (January and July), where I attended DEBS 2013 in Arlington, Texas; once in Luxembourg (for the negotiation of two EU projects); once in Brussels (as a reviewer for another EU project); once in Barcelona (for the ACM Multimedia conference); and once in Hong Kong (for ER 2013).

The main activity in 2013 was around the event model. I explained some of the background early in the year, presented it first at ER 2013, and towards the end of the year we also produced a YouTube video clip.
We made great progress on this front, and we will see which shape and direction it takes.

The most read post on this blog this year was the comparison between S4 and Storm. Some other well-read posts were: causality vs. correlation, Web services triggered by SAP ESP, "Is philosophy dead?" (an "off topic" post, but I have an academic degree in philosophy on my record...), and "Event Model - what comes first, the logic model or the data model?"

What is coming for me in 2014?  --   stay tuned!

One more thing: this year I 'gained' a Wikipedia entry about me (some revisions are needed, but according to Wikipedia rules, I am not allowed to update it myself).

Happy New Year. 

Wednesday, December 25, 2013

58 sensor applications

I came across a site that lists the "top 50 sensor applications for the smarter world". It actually lists 58 applications, partitioned into the following areas: smart cities, smart environment, smart water, smart metering, security & emergencies, retail, logistics, industrial control, smart agriculture, smart animal farming, domestic & home automation, and eHealth.

It is worth digging into the different areas to check the potential applications, and the role of event processing in each of them.

Friday, December 20, 2013

On reversing the roles - the kid got it!

Earlier this week I spent two hours with a group of high school students who were selected to be part of the "President of Israel program to discover and cultivate the inventors and scientists of the future". The IBM Haifa Research Lab took part in this program by conducting a sequence of sessions, each with one of our local scientists (I was the last in the sequence).
It was a very interactive session, and as part of it I described four scenarios of event processing in different areas (typical examples I use in my talks: the car theft example, the intensive care unit scenario, the never-lost luggage scenario, and the cold chain scenario that we use in our recent TEM video clip).
I asked them what they think is the common denominator among all these scenarios. They said many right things, but one kid said the most important thing: "in these scenarios the roles are reversed; instead of the usual way, where the person tells a computer what to do, here the computer tells a person what to do".
This kid will  definitely have a bright future... 

Gartner's recent predictions about business intelligence and analytics

A recent set of predictions by Gartner states that business intelligence and analytics will remain a top focus for CIOs through 2017.

It mentions two interesting observations. One is that confusion in the market about the term "big data" and its tangible results constrains spending and limits the growth of BI and analytics software. The second is that by 2017, more than 50 percent of analytics implementations will make use of event data streams generated from instrumented machines, applications, and/or individuals.

This is consistent with Gartner's term "two-tier analytics", where event processing is the second tier, after historical data analytics. While the need to consolidate analytics and event processing is becoming more pervasive, the utilization barrier and the need to battle complexity are still a common denominator.

Saturday, December 14, 2013

The Event Model: short promo on YouTube

Following the first exposure of "The Event Model" at ER 2013, we have produced a 5-minute video clip briefly explaining the idea. The screenwriters were Fabiana Fournier and Sarit Arcushin, and the video was produced by Tammy Dekel, Hanan Singer, and Chani Sacharen (who is also the narrator).

The video clip issues a call for partnership in investigating this model -  either by working with us on use cases to validate the model, or working with us on the challenges in further developing the model.

Sunday, December 8, 2013

On Rolls Royce's engine health management

In DEBS 2013, Roger Barga from Microsoft mentioned in his keynote talk that Rolls Royce is offering its customers a model of engine hours as a service, and used it as an example that event processing can be an enabler of changing business models. I recently talked with somebody about these types of systems and decided to follow up and learn more about the Rolls Royce system from the information available on its website. The service is enabled by "Engine Health Management" (EHM). The illustration above shows some of the engine sensors. The monitoring follows the scheme: Sense-Acquire-Transfer-Analyze-Act.

The sense phase deals with the activation and capture of the sensors.
The acquire phase combines routine reports at various milestones (takeoff, climb, summary at landing) with detection of abnormal situations (this is the "derive" part of event processing).
The transfer phase deals with communication to the ground operations center.
The analyze phase is a manual phase that takes the input from the previous phases and adds manual control and decisions about next actions.
The act phase deals with the actions required -- such as servicing and part replacement -- and determines their urgency and location.

This is consistent with the 4D (Detect-Derive-Decide-Do) model. Here the decision is mainly manual. It seems that this is one of the early cases where sensors and event-driven applications are used to enable new business models. More on business model changes -- later.
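The 4D model can be sketched as a small pipeline. This is a hypothetical illustration, not Rolls Royce's actual system; the event type, field names, and the temperature threshold are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class EngineReading:
    """A hypothetical raw sensor event from one engine."""
    engine_id: str
    phase: str          # e.g. "takeoff", "climb", "landing"
    oil_temp_c: float

def detect(readings):
    """Detect: capture the raw sensor events (the 'sense' phase)."""
    return list(readings)

def derive(events, temp_limit=120.0):
    """Derive: flag abnormal situations among the routine reports."""
    return [e for e in events if e.oil_temp_c > temp_limit]

def decide(situations):
    """Decide: in the EHM case this step is mainly manual; ranking
    situations by severity stands in for the human decision here."""
    return sorted(situations, key=lambda e: e.oil_temp_c, reverse=True)

def do(decisions):
    """Do: emit the required actions, e.g. schedule servicing."""
    return [f"schedule service for {e.engine_id}" for e in decisions]

stream = [
    EngineReading("RR-1", "takeoff", 95.0),
    EngineReading("RR-2", "climb", 131.5),
]
print(do(decide(derive(detect(stream)))))  # -> ['schedule service for RR-2']
```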

Wednesday, December 4, 2013

From health persona to societal health by Ramesh Jain

Ramesh Jain uploaded an interesting presentation to Slideshare. The presentation, entitled "From health persona to societal health", talks about using personal events from mobile devices to connect sick people to medical services upon detection of situations. It seems that mobile and personalized healthcare is an emerging area for using events coupled with mobile devices. This can also be a good area for non-programmer control (e.g., by physicians) of situation detection applications. I'll write more on this combination soon.

Sunday, December 1, 2013

More on event processing and mobile devices

More than a year ago I first wrote about event processing and mobile devices, and illustrated it with this picture, showing the possible roles of mobile devices as producers, consumers, front-ends, and dashboard carriers. Now I am concentrating on "EPA on mobile", meaning that a mobile device (phone, tablet, smart sensor, etc.) can host the actual execution engine of event processing. One of the questions raised is why this is needed: typically people think of mobile devices as end points, front ends, and collection points, while the actual processing is executed on a server (cloud, mobile back-end server, etc.).
There might be multiple reasons to run event processing on mobile:

First -- a mobile device may be off-line (e.g., when I am travelling abroad, my mobile phone is off-line wherever there is no WiFi connection, since mobile data connection is very expensive out of the country).

Second -- there are various sensors attached to a mobile device, whether the internal sensors of the device itself or a sensor network wired locally to a mobile device (e.g., a tablet). In many cases events are emitted frequently, and the raw events are relevant only at the local level. Sending them to the back-end incurs communication cost and takes a high toll on power consumption, which is still the weak spot of mobile devices.

Third --- there are also privacy considerations,  such as processing events that the owner does not want to share with the rest of the universe.

Recently I have learned about some implementations now being done in this area. They are hybrid implementations: part of the overall logic runs on a mobile device, while part runs in the back-end -- for example, running open source event processing on mobile devices. I guess we will see much more of this, as the world moves to mobility and to the Internet of Everything.
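A minimal sketch of the second reason above: a local EPA on the device aggregates frequent raw sensor readings and forwards only derived events to the back-end. The shake-detection scenario, the threshold, and the window size are invented for illustration:

```python
# Hypothetical sketch: an accelerometer emits raw samples many times a
# second; only a derived "shake detected" event is worth the radio cost.
def local_epa(samples, threshold=2.5, window=5):
    """Run on the device: aggregate a sliding window of raw readings
    and emit a derived event only when the average crosses a threshold."""
    derived = []
    for i in range(len(samples) - window + 1):
        win = samples[i:i + window]
        if sum(win) / window > threshold:
            derived.append({"event": "shake", "at": i})
    return derived

raw = [0.1, 0.2, 3.0, 3.2, 3.1, 3.3, 2.9, 0.2, 0.1, 0.1]
to_backend = local_epa(raw)
# Only the few derived events leave the device, not every raw sample.
print(len(raw), "raw samples ->", len(to_backend), "derived events")
```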

Saturday, November 30, 2013

On the PLAY project

I spent some time this week in cold Brussels, in my role as a reviewer of the PLAY project, a project within the framework of the European Union's ICT program. There are quite a few projects that have event processing at their core; in fact, early in 2014 we'll be involved in two new projects, SPEED and FERARI, about which I'll write at a later phase.

As a reviewer, I have accompanied the PLAY project since its beginning -- starting with the "kick-off" review and continuing through the three annual reviews. My role is both to evaluate what was done and provide comments and evaluations, and also to be a kind of mentor for the project and try to help it go in the right direction. The project has evolved during these years: it started with event-driven services as its motivation, and along the way touched topics like the Internet of Things and events coming from sensors. It uses RDF and semantic web technology to describe events and patterns, and also plays with the idea of an event marketplace, an idea that deserves more discussion in one of the next posts.

As for the event processing part, they have developed a distributed ETALIS. I guess this will be replaced if they want to take it to real life, as logic-programming-based languages are great for the few people who understand how to program with them, and a barrier to everyone else. While this is a research project, and in a real-life setting this implementation would probably be replaced, the approach taken has promise. There will also be some follow-ups to this project, which is desirable for such projects -- the "after life". On the whole, this was an opportunity both to assist and to learn, and I hope to hear about the "after life" in the future.

Saturday, November 23, 2013

On Dynamic M2M Event Processing

M2M is one of the realizations of the Internet of Things, and it has attracted a lot of work recently. Event processing is at the core of such applications: they don't follow the traditional Internet model of store-and-search, but are intended to alert or act now.
An interesting presentation for EclipseCon 2014 (planned for March 2014) is entitled "On Dynamic M2M Event Processing". This presentation (marked as a draft) is by Hitachi and Oracle. It talks about event processing within remote devices embedded in the OSGi component model. Worth reading -- and we'll see a lot more in this direction.

Saturday, November 16, 2013

On ER 2013

This week I spent several days in Hong Kong attending the ER 2013 conference. I have attended this conference several times before, but not recently, and came back because I have a renewed interest in conceptual modeling due to my work on event modeling. It was interesting to observe what the key topics in conceptual modeling research are today. The keynote talks by David Embley, Surajit Chaudhuri, and Marie-Aude Aufaure all dealt with conceptual modeling in big data, enterprise analytics, and business intelligence, so these seem to be the key topics of interest. Other topics were the classic ones: business process modeling, data semantics, and ontology-based modeling. Conceptual modeling has contributed by providing abstractions over computing and data; modeling plays a vital role in the current IT industry, and moreover we see standards emerging in various areas, such as BPMN and recently DMN. You can look at the conference program to get a feeling for the various presentations. This was my 5th visit to Hong Kong over the last 9 years, and the place has its own charm.

Thursday, November 14, 2013

The event model - first exposure

Yesterday, at the ER 2013 conference in Hong Kong, I delivered a tutorial entitled "Modeling in the event-driven world". This was the first exposure of the work we have been doing, entitled "The Event Model". The main motivation behind this work is the observation that the next frontier in the event processing area is to overcome the complexity of event processing logic. The main idea is to raise the center of gravity from the code level to the modeling level, aspiring to radical simplification and making event processing logic accessible to larger audiences. I'll write a series of blog posts around this model in the next few weeks.
For a start - anybody who is interested in this topic can look at the tutorial's slides on Slideshare.


More - later. 

Thursday, November 7, 2013

On the role of storytelling

Storytelling has become a popular trend. While Steve Jobs made it popular, it is quite an ancient art. My first comprehensive written document, submitted for promotion in the Israeli Air Force, started with a story telling that when I began writing it, I had a terrible toothache. The examiners were quite surprised, especially when in the first few pages they had to go through a series of caricatures (I never knew how to draw, but I enlisted people who did to help). Their first reaction was: this guy is not serious; this is supposed to be a serious professional project. However, as they kept reading, they understood why the story served as a good introduction to the project and provided the ultimate motivation for what I was doing. Since then I have practiced storytelling from time to time. I find it very powerful, especially as a way to start a presentation, or, in long presentations, as a way to regain attention. The art is to make the storytelling natural to the presentation; it has to be relevant and have a message or emphasize a point, not be a distraction on an out-of-context issue. I notice that now there are even competitions on digital storytelling.
The IBM CEO, Ginni Rometty, is famous for practicing storytelling.
Now I am looking for a good story for next week's tutorial at ER 2013. I have to talk for three hours alone, so good stories are essential to get by...

Saturday, November 2, 2013

On monitoring human behavior

At the event-based multimedia workshop I met Pil Ho Kim from the University of Trento. He wears a camera that takes a picture of his life every minute, along with other indicators such as temperature, and creates his personal detailed log. This is part of his study on human behavior. He posted on Slideshare a presentation given at the ACM Multimedia conference (it seems he borrowed one slide from my presentation) entitled "How to monitor human behavior". He is using Esper for event processing, based on the multimedia events collected. This is a good example of using complementary technologies along with event processing.

Saturday, October 26, 2013

On events in multimedia research

Back at home -- I promised to provide a summary of impressions from the events-in-multimedia workshop I attended earlier this week. Meanwhile some of the presentations have been uploaded to the workshop website, as well as some pictures. I recommend viewing the presentation of Ansgar Scherp; it is the most interesting one (to me).

I was not familiar with multimedia research until being invited to this conference to have a dialogue with that community. My impression is that what they are doing is complementary to what we do in the event processing universe. We assume that events exist and we need to process them; they start from pictures and video streams and try to detect what event is described in the picture, and they care less about what somebody does with the event after it is detected. They have long discussions about "what is an event"; the events discussed in some talks were social events, sports events, etc. I think that synergy between the different communities dealing with different phases of the event processing story is vital in order to exploit events that come from multimedia sources, and I will continue the dialogue with them.

Tuesday, October 22, 2013

On event semantics -- my talk in the event-based multimedia workshop

Still in Barcelona for the event-based multimedia workshop. I still need to write about the second day and the closing panel (in which I also participated), and about my impressions of the event-based multimedia community.

Meanwhile -- I have uploaded my presentation to Slideshare. Parts of it reuse other presentations (e.g., the slides explaining the notion of context); the new material is about the semantics -- who are the players in the event game. I plan to give a long tutorial on event modeling at ER 2013 next month - stay tuned.

Workshop on event-based media integration and processing - the first day

This is the logo of the ACM Multimedia 2013 conference, which is taking place these days in Barcelona.
Co-located with this conference (we share the coffee breaks) is the event-based media workshop that I am attending. My talk is planned for today, so yesterday I was just a listener on the first day. Here are some impressions:
The most interesting talk was the one by Ramesh Jain, who has been working on events in multimedia for a while.
Ramesh talked about cybernetics in society, and about smart systems defined as smart actions in response to smartly detected situations. He also talked about social networks and Maslow's hierarchy of needs.

Ramesh said that social media concentrates on the third and fourth layers - love/belonging and esteem - and posed as a challenge getting more focused on the basic needs, the two lower layers.
Ramesh also talked about the 4Vs of big data, saying that volume is the least important one -- as there are methods to partition the space -- while the other three are the actual issues: event processing in multimedia under uncertainty. Ramesh mentioned the Israeli company Waze (recently acquired by Google) as an example of a smart social system.

Some of the other talks concentrated on analyzing pictures posted on social networks and trying to identify the event they describe: first, whether they describe an event at all, and then which event it is - sports events, public events, family events, and more. It is funny, since some event definitions view an event as an instantaneous occurrence - and then a sports event (say, a football game) is not an event, since it takes (net) 90 minutes. I guess there are different views of events -- I'll mention this in my talk today.

Saturday, October 19, 2013

There is no Internet of Things - (yet?)

"There is no Internet of Things" is the title of a promotion for a recent Forrester report. The report has one more word in its title: "yet". The (right) claim is that while there are a lot of sensors, mobile devices, wearable computing devices, and a vision of an "Internet of Things" (and recently an "Internet of Everything"), the vision has not been fulfilled yet, and the world of sensors is still very fragmented.
What made the (current) Internet successful is a combination of standard protocols, the ease of creating content, and the ease of retrieving its content with the combination of web browsers and search engines.  
In order for the Internet of Things to become an "Internet", there is a need to make it much simpler and standards oriented. Imagine if retrieval from the current Internet had required everybody to write SQL queries -- do you think the Internet would have become pervasive?
What is the equivalent? A standard and simple way to perform the functionality of IoT. I'll be able to report on our related work in a few weeks -- stay tuned.

Monday, October 14, 2013

Start of a new semester and upcoming talks

I have not written recently -- partly due to a short vacation (in Paris), and partly due to seasonal laziness...
In any event - this week the semester started in Israel, and I am giving the event processing course again at the Technion; this time I am making some changes in the course. It is still based on the EPIA book, but I am updating it to teach event processing through a model-based approach, which corresponds to the project we have been working on recently. We are going to make the first public exposure of this work at ER 2013 in Hong Kong. I'll write about it later.
Next week I am giving a keynote talk at an event-oriented workshop co-located with ACM Multimedia 2013, in which I'll talk about the semantics and modeling of situations and contexts. Both talks will become public after they are given.

Monday, September 30, 2013

The low latency game: reported 176 ns for high performance trading

While I don't have any way to verify the reported results, the low latency portal reports "groundbreaking results" by Argon Design. The reported latency is 176 nanoseconds; for those unfamiliar with the term, there are a billion (1,000,000,000) nanoseconds in one second... Recently I wrote on the time perception of different animals. If the time perception of a person is limited to 1/60 of a second, then you can do the arithmetic to calculate how much faster this technology is than human perception. It is interesting to ask whether there is a limit to the time perception of a computerized system. Building low-latency systems can help build robots that are very fast, and one can think of the potential of what such robots could do to compensate for the relatively slow reactions of humans...
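The arithmetic can be spelled out, assuming human perception is bounded by one "frame" per 1/60 of a second:

```python
HUMAN_PERCEPTION_S = 1 / 60          # ~16.7 ms per perceptual "frame"
TRADE_LATENCY_NS = 176               # the reported trading latency
NS_PER_S = 1_000_000_000

human_ns = HUMAN_PERCEPTION_S * NS_PER_S   # ~16,666,667 ns
ratio = human_ns / TRADE_LATENCY_NS
print(f"{ratio:,.0f}")  # roughly 94,697 times faster than human perception
```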

Saturday, September 28, 2013

On time perception - depends who you are

I have not written in the last 12 days -- some of it was a vacation in Eilat (a resort town in Israel near the Red Sea) and some was just lazy time. The holiday period is over; tomorrow, back to work! And back to blog writing. I have always been fascinated by the notion of time, and wondered whether everybody has the same perception of it. Today I came across an article in the science and technology section of The Economist dealing with time perception in different animals. It turns out there is a metric for the perception of time, called CFF - critical flicker-fusion frequency - which has to do with the frequency at which the eyes can refresh an image, and thus determines the rate at which information arrives at the brain. The human CFF is 60Hz (so 60 refreshes per second); movies and TV are adjusted to this frequency. A dog's CFF is 80Hz, so according to the article it has no interest in TV movies, since the mismatch makes the movie seem out of sync. A fly has a CFF of 250Hz, which is a biological advantage, because a fly can react faster to threats and opportunities -- a movement we make towards capturing a fly looks like slow motion to the fly, so it can easily escape. The subjective perception of time depends on the CFF: what we grasp as one second, a fly grasps as 4.15 seconds, so its subjective perception is that it lives much longer than the time we perceive it living. On the other side of the spectrum, turtles have a CFF of 15Hz, so one second of our time seems to a turtle like a quarter of a second -- a turtle thinks it actually moves fast... Anyway -- can we take advantage of this to process events at different frequencies to provide different views of the universe? I think the potential of this should be investigated.
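The scaling is just the ratio of CFF values. Note that 250/60 comes out to about 4.17, slightly different from the 4.15 the article reports (presumably it used slightly different figures):

```python
HUMAN_CFF_HZ = 60.0  # human critical flicker-fusion frequency

def subjective_seconds(animal_cff_hz: float, human_seconds: float = 1.0) -> float:
    """How long `human_seconds` feels to an animal, scaled by CFF ratio."""
    return human_seconds * animal_cff_hz / HUMAN_CFF_HZ

print(subjective_seconds(250))  # fly: one human second feels like ~4.17 s
print(subjective_seconds(15))   # turtle: one human second feels like 0.25 s
```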

Monday, September 16, 2013

On six years of blogging

Recently the blog completed six years. A lot has happened during this time: buzzwords have come and gone, and the understanding of event thinking has sharpened over time as the hype cycle has progressed. Looking at Google Analytics statistics, the most popular postings on this blog are:

1. On unicorn, professor and infant - a debate about event processing and analytics. The topic of the relationship between event processing and various kinds of analytics (a buzzword in its own right) is still popular and relevant.
2. On family trees - an off-topic post about a Passover vacation in which I spent a few days tracing the roots of my family.
3. On Dave Mayer's keynote in DEBS 2012 - this one is relatively recent, and was the champion of last year's posts. It talks about the phenomenon of reinventing wheels; in particular, Dave Mayer in his DEBS 2012 keynote invented frames, which have a large overlap with the notion of context.

As for visitors - over the years more than 150,000 distinct readers have passed through the blog, and around 3,000 have visited at least 200 times. The countries with the most visitors are (in descending order): USA, UK, Germany, India, Israel, Canada, Philippines, France, Australia, and Japan (I don't know anybody from the Philippines... so it would be interesting to get feedback from a reader there). The cities with the most visitors are: London, New York, Tel-Aviv, Bangalore, Manila, Paris, Singapore, Karlsruhe, and Sydney.

The most popular referring site is the almighty Google, but there are also many references from complexevent.com, manning.com, and TIBCO blogs.

Next year's blog will continue to follow trends and opinions, and will go deeper into my current work on event modeling - after we publicly expose it at ER 2013 (November, Hong Kong).

I always wonder why people read what I write - but the several hundred readers per day encourage me to continue... although I have been writing less recently (a matter of mood)...

Tuesday, September 10, 2013

On the reactive manifesto

Today I am writing from Luxembourg, where I participated in the negotiation meetings of two EU projects that are being launched. I'll write about them when they are actually signed.
Not far from here, in Dagstuhl in 2010, we worked on the "event processing manifesto". Today I discovered another manifesto, which I think can be traced to EPFL in Lausanne (I'm not completely sure, since the manifesto's authors don't identify themselves). It is called "The REACTIVE MANIFESTO" and is dated July 15, 2013. The picture above is copied from the manifesto; as you can see, they define reactive as event-driven, scalable, resilient, and interactive. I wonder about the background and motivation for it; perhaps one of this blog's readers will be able to shed light...

Friday, September 6, 2013

Gartner's Hype cycle on emerging technologies 2013

It is hype cycle time of the year, and Gartner has published the emerging technologies hype cycle for 2013.

Some insights: 

  1. "Complex Event Processing" is still near the peak of inflated expectations, having actually moved a little below the peak. This means it has started the process of filtering out the hype and getting to realistic contributions.
  2. On the analytics front -- predictive analytics is now at the plateau of productivity and on its way to becoming a commodity, while prescriptive analytics is on the rise but still in the innovation phase.
  3. Human augmentation, brain-computer interfaces, quantum computing, and mobile robots are on the rise. In fact, Gartner's press release emphasizes human-machine relationships.
  4. Among technologies adjacent to event processing: the Internet of Things is getting closer to the peak, and in-memory DB and context analytics are also getting past the hype peak.
  5. Big data is still at the height of the hype -- as we saw in other sources, it is now recognized as a catch-all term, and I guess it will spawn several distinct concepts in the future.
  6. Mobile phones/tablets etc. are not mentioned explicitly among the emerging technologies; I guess mobile by itself is not a technology -- it influences all other technologies (just as the Web is not a technology).

Friday, August 30, 2013

New market research on the event processing market by Markets&Markets

It seems that there is a new comprehensive market research report on the event processing market for the years 2013-2018 by MarketsandMarkets. I don't have the report itself (it is quite expensive), but the site gives some details. According to the report, MarketsandMarkets forecasts that the "CEP market" will grow from $764.5 million in 2013 to $3,322.0 million in 2018. I wonder what these figures represent; it seems this is beyond the cumulative sales of event processing platforms.

They also classify the market according to the following verticals: 

BFSI: algorithmic trading, electronic transaction monitoring, dynamic pretrade analytics, data enrichment, fraud detection, governance, risk and compliance (GRC); 

Transportation and logistics: asset management, predictive scheduling, and toll system management;

Healthcare: self-service proactive monitoring and alerting, and governance, risk and compliance (GRC);

Telecommunication: mobile billboards, revenue assurance, network infrastructure monitoring and predictive CDR assessment; 

Retail: inventory optimization, shoplifting detection and real-time marketing and customer engagement; 

Energy and utilities: oil and gas operation management and nuclear crisis and smart grid energy management; 

Manufacturing: shop floor automation and operational failure detection, infrastructure management and supply chain optimization; 

Government, defense and aerospace: Intelligence and Security, emergency response services and geo-fencing and geospatial analysis; 

Others: includes education and research

I hope to get more insight into this research.

Friday, August 23, 2013

On concept computing - take one

We think in concepts. We study concepts, we reason about concepts.
Now we also have "concept computing"; the term was coined by Mills Davis. It does not yet appear in Wikipedia, but it is an interesting and useful idea. Mills Davis recently uploaded his AAAI keynote talk to Slideshare, and the slides below are taken from there. The work we are doing now is in a way the projection of this idea onto the event-driven world. I'll write about it in the future. Meanwhile -- this presentation is recommended.

Tuesday, August 20, 2013

Big data analytics will never replace creative thought

The claim expressed in the title of this posting is the title of a piece in "Data Quality News" by Richard Jones. It claims that the "data craze" - the conception that data mining alone is sufficient to make decisions in all areas - is a misconception in some areas. Jones provides two examples: marketing, where statistical reasoning gives great value but deals with the small details, while human creative thinking deals with the big picture, which data mining alone cannot capture; and healthcare, where again data mining can be of great value, but interaction with the patient and personal examination by a physician are vital.
I guess that research into AI should also deal with how to create artificial creative thinking. As I've written before, Noam Chomsky has criticized the AI community for making statistical reasoning its mainstream and deserting the strive for a "solid model of the universe". I guess that after some disillusionment with the "data craze", the industry will settle on giving data mining its rightful place as a supporting technology.

More on this - later.

Thursday, August 15, 2013

On machine learning as means for decision velocity

Chris Taylor has written in the HBR Blog a piece that advocates the idea that machine learning should be used to handle the main issue of big data - decision velocity.  I have written recently on decision latency; according to some opinions, real-time analytics will be the next generation of what big data is about.
Chris' thesis is that the amount of data is increasing substantially with the Internet of Things, so one cannot make a decision manually by viewing all relevant data, and there will also not be enough data scientists to look at it.   Machine learning, which is goal oriented rather than hypothesis-asserting oriented, will take this role.     I agree that machine learning will play a role in the solution, but here are some comments about the details:

Currently machine learning is an off-line technology, is case sensitive, and cannot be the sole source for decisions.

It is an off-line technology: systems have to be trained, and typically they look at historical data in perspective and learn trends and patterns using statistical reasoning methods.  There are cases of continuous learning, which again is done mostly off-line but is incrementally updated on-line.    When a pattern has been learned, it needs to be detected in real time on streaming data, and here a technology like event processing is quite useful, since what it does is indeed detect predefined patterns on streaming data.  These predefined patterns can be obtained by machine learning.    The main challenge will be online learning: when the patterns need to change, how fast can this be done with learning techniques?  There are some attempts at real-time machine learning (see the presentation about Tumra as an example), but it is not a mature technology yet.
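To make this division of labor concrete, here is a minimal sketch of the on-line half: a detector that watches streaming events for a pattern whose parameters (the condition, the count, the window size) would come from off-line learning. The names and the "three drops within a minute" pattern are invented for illustration; this is not any particular product's API.

```python
from collections import deque

class PatternDetector:
    """Detect a learned pattern ('count' matching events within a sliding
    time window) on a stream of events, one event at a time."""
    def __init__(self, predicate, count, window_seconds):
        self.predicate = predicate   # per-event condition, e.g. learned off-line
        self.count = count           # how many matches complete the pattern
        self.window = window_seconds
        self.matches = deque()       # timestamps of recent matching events

    def on_event(self, timestamp, event):
        """Feed one streaming event; return True when the pattern completes."""
        if not self.predicate(event):
            return False
        self.matches.append(timestamp)
        # evict matches that fell out of the sliding time window
        while self.matches and timestamp - self.matches[0] > self.window:
            self.matches.popleft()
        return len(self.matches) >= self.count

# hypothetical learned pattern: three negative price moves within 60 seconds
detector = PatternDetector(lambda e: e["delta"] < 0, count=3, window_seconds=60)
stream = [(0, {"delta": -1}), (10, {"delta": 2}),
          (20, {"delta": -2}), (45, {"delta": -3})]
alerts = [t for t, e in stream if detector.on_event(t, e)]  # -> [45]
```

The learning side would periodically replace the predicate, count, or window; the detection side keeps running on the live stream.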

Case sensitive means that there is no one-size-fits-all solution for machine learning; for each case the models have to be established in a very specific way.  Thus, the shortage of data scientists will be replaced by a shortage of statisticians: there are not enough skills around to build all these systems, so the state of the art needs to be improved to make the machine learning process itself more automated.

Last but not least - I have written before that making decisions merely based on history is like driving a car by looking at the rear-view mirror.  Conclusions from historical knowledge should be combined with human knowledge and experience, sometimes over incomplete or uncertain information.  Thus, besides the patterns discovered by machine learning, a human expert may also insert additional patterns that should be considered, or modify the patterns that machine learning introduced.

Tuesday, August 13, 2013

On event-driven, request-driven, stateful and stateless

This slide is taken from our DEBS 2013 tutorial, explaining the differences in thinking between the traditional request-driven way and the event-driven way.   It shows the differences by answering three questions.     This goes back to the differences between business rules and event processing, an old topic on which I first wrote around 6 years ago!    One of the claims I've heard several times is that the distinction between them is that business rules are stateless and event processing is stateful.     I think that the main difference is that business rules are request driven - a rule is activated on request and provides a response - while event-driven logic is driven by event occurrence, as shown in the slide above.

While it is true that there is a correlation between event-driven/request-driven and stateful/stateless, these are really orthogonal issues.

Event-driven logic can be stateless.  If we only wish to filter an event and trigger some action, this can be stateless (most filters are indeed stateless), but it has all the characteristics of event-driven processing, including the fact that if the event is filtered out, no response is given.   

On the other hand, request-driven logic may be stateful; there are many instances of session-oriented and other stateful request-response protocols.    One can also implement a stateful rule engine in a request-response way, where the invocation of a rule is based on results of previous rules that are retained by the system.  

Bottom line:  stateful vs. stateless is not equivalent to event-driven vs. request-driven.   
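The orthogonality can be sketched in a few lines of code (illustrative names, not any particular product's API): a stateless event-driven filter alongside a stateful request-driven service.

```python
# Stateless, event-driven: each event is judged on its own; if it is
# filtered out, no response of any kind is produced.
def on_event(event, threshold=100):
    if event["value"] > threshold:
        return {"action": "alert", "event": event}
    return None  # filtered out - the caller gets nothing, by design

# Stateful, request-driven: the service only acts when asked, but retains
# state across requests, like a session-oriented protocol.
class CounterService:
    def __init__(self):
        self.count = 0

    def handle_request(self, increment):
        self.count += increment   # state retained between requests
        return self.count         # a response is always returned

assert on_event({"value": 50}) is None   # event-driven, yet stateless
svc = CounterService()
assert svc.handle_request(2) == 2        # request-driven, yet stateful
assert svc.handle_request(3) == 5
```

The first function has no memory at all, and the second never acts unless invoked; each combination of the two dimensions is equally possible.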

Tuesday, August 6, 2013

On the role of chief evangelist

I thought this was an interesting title and wondered what is behind it. Today Theo Priestley published a Blog post with his interpretation of the term.  According to Priestley, the three main points of this role are (in my own interpretation): telling the story, connecting the dots, and forming influential opinions. 

There have been some posts before on technology evangelism - for example, the one from which I copied the above picture - saying that "Evangelist is born to learn, speak, share, sell and inspire the masses towards the technology or product they are passionate about".

I always believed that it is important for a person to work on things they are passionate about. Early in my career I took a management course in which the instructor said: "if you are not getting up every morning being enthusiastic about what you are doing - you are not in the right place, do something else!".  
I realized that this assertion was true for me, and I went to do something else (study for a Ph.D.) - true story!

Some people have used the term about me, claiming that I am evangelizing "event processing" (although, unlike the official evangelists, nobody ever paid me to do it). While I find it difficult to identify with a term associated with religion, I think that I have indeed preached event processing in various aspects for years, in different ways (of course, I am not the only person doing it).  

Actually, in the last slide of the tutorial that Jeff Adkins and I delivered at DEBS 2013, we put the following statement:

BTW - Jeff has on his business card the title "connecting the dots", which is part of the evangelist role according to Priestley.    I guess that this is different from the role of a chief evangelist at a software vendor, who looks across the vendor's portfolio.  I generally believe that people with bright eyes are happier and work better - and that people who can inspire others to brighten their eyes are a great asset.

Saturday, August 3, 2013

New name and face-lifting to David Luckham's site: complex event processing & real time intelligence

In this picture, taken seven years ago, you can see me (somewhat heavier than I am today) with David Luckham (in the middle) and Roy Schulte (on the right-hand side).  I talked with David Luckham recently and found out that he is still maintaining his website, and is now also giving it a face-lift.   On the website he explains why he has changed the name.   According to Luckham, the name "complex event processing" is strongly identified with event processing platforms - dedicated software to do event processing - but this is a tiny fraction of the market; the bigger market is event processing embedded inside other software.  This observation has been made before by various people (including myself), so I believe he is right. Luckham calls the bigger game "real-time intelligence"; some people call it "real-time analytics", but there is agreement that it is part of the big data game (and other games) and that event processing is its backbone.   It will be interesting to see whether such branding will catch on, and how exactly "real-time intelligence/analytics" will be defined -- I'll write more about it.  

Saturday, July 27, 2013

Taking the complex out of complex event processing

The quote of this week is taken from an article in InformationAge about operational intelligence. 
The article explains what operational intelligence means, and you can read it to see if you find anything new.
The point of this post is a quote by Ivan Casanova from TIBCO:  
"We should all be focused on taking the 'complex' out of complex event processing" 
This quote is in the context of explaining TIBCO's acquisition of Streambase.    I don't know Mr. Casanova personally, but what I learned from his statement is that he believes that, going forward, the programming model and tools represented by Streambase are a better fit and less complex to use than what TIBCO has done before, where it extended a RETE-based business rules system to handle stateful event processing cases while retaining the rule-based programming model.    Streambase uses an "event flow" model that is a variation of the event processing network.    Without getting into analysis of specific products (a restriction I have taken upon myself in this Blog), I would say that overall I believe in the EPN model (which is of the family of data flow models) as a conceptual model for event processing, and in visual working environments (better than textual ones) for design and programming.   This reduces the complexity for IT developers, which I think is a very important trend.   The ultimate reduction of complexity requires one more step - event processing modeling at the business user level, with automatic translation to an implementation language.  
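The data-flow flavor of an event processing network can be sketched in plain code: a chain of agents, each consuming the output of the previous one. The agent names, event fields, and threshold below are invented for illustration, not taken from Streambase or any other product.

```python
# A tiny event processing network as a data flow: filter -> enrich -> detect.
def filter_agent(events):
    """Pass through only the events of interest."""
    for e in events:
        if e["type"] == "trade":
            yield e

def enrich_agent(events):
    """Attach a derived attribute to each event."""
    for e in events:
        yield {**e, "notional": e["price"] * e["qty"]}

def detect_agent(events, limit=1000):
    """Emit a derived event when a condition holds."""
    for e in events:
        if e["notional"] > limit:
            yield {"alert": "large-trade", "source": e}

raw = [{"type": "trade", "price": 10, "qty": 200},
       {"type": "quote", "price": 10, "qty": 1},
       {"type": "trade", "price": 5, "qty": 50}]
alerts = list(detect_agent(enrich_agent(filter_agent(raw))))
# only the first trade (notional 2000) exceeds the limit
```

A visual environment would present the same chain as boxes and arrows; the point is that the topology, not the code inside each box, carries much of the meaning.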
Bottom line: I agree with the statement in the quote -- actually this is my main area of interest nowadays. 

Sunday, July 21, 2013

On continuous compliance

The Waters special report sponsored by Apama, published recently, ranked "risk and compliance" as the number one application of event processing in financial institutions.   Today TIBCO also published on its Blog, under the "event processing" category, its view on compliance, entitled "The Y in comply".   It expresses the view that people comply better when they understand the rationale of compliance; the example given is compliance with a regulation in the food industry requiring an employee to wash hands before putting on gloves.  
I agree that people comply better when they think the regulation makes sense; however, my own subjective impression is that in the current culture, compliance has become a goal of its own, whose sole reason, as somebody described it, is "to get the regulator off my back".   Thus employees are required to do things for which they see no reasonable explanation, and IT systems check that they are doing them. Here, event processing is used since in various cases auditing is event-driven, time sensitive, and often involves calculations over time windows, and in some cases online auditing on the fly is required. The business value sometimes is just pleasing regulators, but this is often a great motivation for management to take it very seriously...   Anyway, technologies are used for strange reasons sometimes...
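Sticking with the hand-washing example, an on-the-fly compliance check over a time window might look like the following sketch (the event schema, the 10-minute window, and the rule itself are invented for illustration):

```python
# On-the-fly audit rule: a "gloves_on" event must be preceded by a
# "wash_hands" event by the same employee within a 10-minute window.
WINDOW = 10 * 60  # seconds
last_wash = {}    # employee id -> time of most recent hand wash

def audit(event):
    """Return a violation record, or None when the event is compliant."""
    if event["type"] == "wash_hands":
        last_wash[event["employee"]] = event["time"]
        return None
    if event["type"] == "gloves_on":
        washed = last_wash.get(event["employee"])
        if washed is None or event["time"] - washed > WINDOW:
            return {"violation": "gloves_without_wash",
                    "employee": event["employee"]}
    return None

events = [{"type": "wash_hands", "employee": "a", "time": 0},
          {"type": "gloves_on", "employee": "a", "time": 120},
          {"type": "gloves_on", "employee": "b", "time": 130}]
violations = [v for v in (audit(e) for e in events) if v]
# employee "a" is compliant; employee "b" never washed -> one violation
```

The time-window logic is what makes this an event processing case rather than a plain database check: the rule cares about the order and timing of events, not just their existence.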

Wednesday, July 17, 2013

Call for papers: special issue of ACM Transaction on Internet Technology on "event recognition"

ACM Transactions on Internet Technology issued a call for papers for a special issue on "event recognition".  In the CFP they explain that "event recognition" means "event pattern matching".  This is an opportunity to report on interesting research work in that area, with relatively short review and publication cycles.

Monday, July 15, 2013

On the OODA loop and the 4D

Richard Veryard asked, in a comment on my post about the recent tutorial given by Jeff Adkins and myself, whether our 4D scheme (which was introduced by Jeff) is related to the famous OODA loop.  I have mentioned the OODA loop in the past, in connection with the need to act faster than the speed of thinking.  Boyd devised it originally as a strategy for air combat; later he also made claims about the generality of this method.    The 4D is certainly from the same family, and the four stages are indeed similar.   Interestingly, Boyd had event-driven thinking.    
The OODA loop was aimed at describing event-driven decisions by a human: the human has to observe that an event happened, perform mental self-orientation to analyze the meaning and implications of the event, decide what to do, and act accordingly, with a feedback loop to see whether the observation has changed.

The 4D describes a computational process, where the event is detected (not necessarily directly observed), a situation is derived (by computational means, not by a mental process), and then a decision is taken (autonomic or manual) and an action is performed.   

The mapping is not 1-1:    
detect is always mapped to observe;
derive can be mapped to observe, as the detected situation is a derived event, and sometimes to orient, as it may derive a conclusion;
decide can be mapped to the combination of orient and decide in the OODA loop;
do seems to be always mapped to act. 
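The computational flavor of the 4D loop can be sketched as a pipeline of four stages (all names, thresholds, and the overheating scenario below are hypothetical, invented only to illustrate the shape of the loop):

```python
# A minimal sketch of the 4D loop: detect -> derive -> decide -> do.
def detect(raw):
    """Turn a raw observation into an event (detection, not observation)."""
    return {"event": "temperature_reading", "value": raw} if raw is not None else None

def derive(event):
    """Derive a situation computationally; the situation is a derived event."""
    if event and event["value"] > 40:
        return {"situation": "overheating", "cause": event}
    return None

def decide(situation):
    """Take a decision - here autonomic, but it could be routed to a human."""
    return {"action": "shut_down"} if situation else None

def do(decision):
    """Perform the action decided upon."""
    return f"executed: {decision['action']}" if decision else "no action"

result = do(decide(derive(detect(55))))   # -> "executed: shut_down"
```

Note how "derive" is the stage with no clean OODA counterpart: it is partly observation (it produces a derived event) and partly orientation (it draws a conclusion), which matches the non-1-1 mapping above.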

More thoughts about the 4D and related stuff - later.