Showing posts with label event processing products. Show all posts

Sunday, December 14, 2014

CEP Market players - end of 2014 - from Paul Vincent


Paul Vincent published the new instance of his series on the genealogy of event processing players, as seen in the picture above.   Note that it also includes streaming platforms like Storm, which is not an event processing tool per se, but a platform on which event processing functionality can be programmed.  Such platforms are indeed the most notable shift from previous versions.

Friday, June 14, 2013

More on the acquisition of Apama by Software AG


One of the interesting questions about the acquisition of Apama by Software AG is what Software AG's strategy in the event processing area will be going forward, given that it has already acquired an event processing technology from RTM, named "WebMethods Business Events".
An article in COMPUTERWORLD attempts to shed light on this issue, citing Stefan Ried from Forrester: "Apama and WebMethods Business Events complement each other; While the former RTM is really lightweight and can be embedded in many Software AG products to provide basic event communication capabilities, the Apama product is for those customers who like a dedicated business event management platform."

According to this, there are two major usage patterns.  The first is event processing as components embedded inside other products; I have written before about the component approach to event processing, and indeed not every product needs all the event processing capabilities.  On the other hand, a full-fledged event processing application requires an event processing platform, optimized for performance metrics.




Thursday, April 18, 2013

Progress Apama announces a version which compiles to native machine code

Progress Software announced today the release of a new version that compiles the Apama EPL into native machine code, claiming to improve the performance of the previous version by 2000%.   They don't mention what they actually measure.   The big data era renews the investment in scalable event processing solutions with various kinds of optimizations.   We may start to see specialized event processing hardware.
I think that it will be useful to establish a set of benchmarks, since some works have shown huge differences in performance between types of event processing applications, for example: those doing mainly filtering, those doing mainly aggregations, and those doing pattern matching.  It will be good to have a set of benchmarks that fit different types of applications, and a method to map application characteristics to a specific benchmark, to avoid the phenomenon of vendors citing numbers that cannot be compared.  More later.
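To illustrate why a single benchmark number is meaningless, here is a minimal Python sketch (my own illustration, not any existing benchmark suite) of the three workload classes mentioned above; each touches different amounts of state and comparison work, so their costs scale very differently:

```python
# Three toy event processing kernels, one per workload class.

def filter_events(events, threshold):
    """Filtering: stateless, per-event predicate check."""
    return [e for e in events if e["value"] > threshold]

def aggregate_events(events, window=3):
    """Aggregation: sliding-window sums, keeps bounded state."""
    values = [e["value"] for e in events]
    return [sum(values[i:i + window]) for i in range(len(values) - window + 1)]

def match_pattern(events):
    """Pattern matching: detect strictly increasing runs of 3 events."""
    matches = []
    for i in range(len(events) - 2):
        a, b, c = (events[j]["value"] for j in range(i, i + 3))
        if a < b < c:
            matches.append((i, i + 2))
    return matches

events = [{"value": v} for v in [1, 5, 2, 3, 4, 0]]
print(filter_events(events, 2))    # → [{'value': 5}, {'value': 3}, {'value': 4}]
print(aggregate_events(events))    # → [8, 10, 9, 7]
print(match_pattern(events))       # → [(2, 4)]
```

An application dominated by the first kernel and one dominated by the third would produce throughput numbers that simply cannot be compared, which is the point above.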

Thursday, January 3, 2013

S4 vs. Storm

Yesterday I wrote about a talk given by a person from Yahoo! Labs.  Another piece from Yahoo! Labs is a comparison between S4 (the author participated in the development of S4) and Storm, which was acquired by Twitter.    Interesting!   Note that S4 and Storm are not the only players in the area of distributed stream processing; there are others, like IBM InfoSphere Streams, so a more comprehensive survey should not be limited to these two.

Saturday, October 27, 2012

StreamEPS from SGT - an open source event processing from Ghana




I was recently approached by a company residing in Accra, Ghana, called SoftGene Technologies, which has developed an open source event processing product called StreamEPS.   Looking closely at the description of the supported functionality, one can see that this is an implementation that follows the EPIA book.  I'll write in a separate posting about the impact of the book in terms of follow-up works; it is quite interesting...
SoftGene Technologies describes itself as a "Research-lead private company".  I like the definition, since I believe that much of the useful software is research-led.

This also completes the continent coverage of people developing event processing software.
While most such software is developed in Europe and North America, there is now also event processing software developed in Asia (Sri Lanka, Japan and Israel, that I know of), Australia, and Brazil.

If there is event processing related software developed in additional countries -- let me know and I'll survey it in this Blog.






Wednesday, October 10, 2012

SAS announcement on event processing


SAS announced today that a new "SAS DataFlux Event Stream Processing Engine" will be available in December.  It is described as: "the new software is a form of complex event processing (CEP) technology...incorporates relational, procedural and pattern-matching analysis of structured and unstructured data".     Welcome to the event processing club.  This seems to be an indication that the analytics guys see the value of adding event processing to their portfolio; I guess that the "limited appeal" of event processing has somewhat changed in the last couple of years to justify it.  Anyway, I welcome SAS to the club, and hope that they will also become an active part of the event processing community.


Sunday, February 12, 2012

Crash course to build simple EP application using Esper


A crash course, claimed to take less than an hour, entitled "A simple introduction to complex event processing" has been posted.  It is done by example, and the example is indeed very simple: finding a "decreasing" or "increasing" pattern over two consecutive events and setting the color to green or red accordingly.   The main emphasis is on the setup: how to obtain, define and use events, and how to configure the engine (thread pools, listeners, etc.).  However, there is not much about what event processing can actually do; this is probably the next lesson.
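The pattern itself can be sketched outside Esper as well; here is a minimal Python version of the same idea (in Esper one would write it as an EPL statement over pairs of events; the function and field layout below are my own illustration, not Esper's API):

```python
# Classify each pair of consecutive values as "green" (increasing),
# "red" (decreasing), or "unchanged".

def classify_ticks(prices):
    """Yield (prev, curr, color) for each consecutive pair of events."""
    out = []
    for prev, curr in zip(prices, prices[1:]):
        if curr > prev:
            out.append((prev, curr, "green"))
        elif curr < prev:
            out.append((prev, curr, "red"))
        else:
            out.append((prev, curr, "unchanged"))
    return out

print(classify_ticks([10.0, 10.5, 10.2]))
# → [(10.0, 10.5, 'green'), (10.5, 10.2, 'red')]
```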


  Esper is contrasted with commercial products since its open source model allows developers to play with it, use it for toy examples, and for daily usage that is not necessarily a commercial application of a big enterprise; in our days of enterprise computing this approach certainly has a role to play.  It should be noted that Esper is not the only open source offering in this area, and that some of the commercial products offer a free development version (no access to the source code, but enabling developers to use the product for these purposes for free).


Anyway -- if you wish to learn Esper, it is a good start. 

Saturday, December 10, 2011

The genealogy of event processing players - December 2011 edition



Paul Vincent, who has taken it upon himself to be the record keeper, posted a revised version of his famous map of market players, with some additions. Over the last year I have encountered several more in this area, so if you are a player and want to be on the map, notify Paul and get into the picture.

Sunday, November 13, 2011

Continuous event processing in Quartet FS


Continuing to survey additional products related to the event processing area, I came across Quartet FS; the illustration is taken from Quartet FS' webpage.   Quartet FS advertises its product as an "aggregation engine"; from the description it seems to be some incarnation of an active database, where the OLAP cube is constantly updated.  This variation is useful for some financial services applications.      I guess that we'll discover more event related products coming from different areas within different frameworks (in this case, OLAP/BI).

Monday, September 12, 2011

SpatialRules


Sometimes I get notifications that reveal players in the event processing space that I was not aware of; the one I've discovered today is SpatialRules by ObjectFX.   From the description on the website it is difficult to understand the exact capabilities of this product, which is described as CEP for geospatial data. It seems that it supports tracking events that relate to spatial objects -- entering, exiting, being inside/outside/close to areas -- along with some spatiotemporal capabilities.    It seems that this product is not new, but I must admit it is new to me; maybe it is recognized in the GIS community.    I've written before about spatiotemporal event processing capabilities in event processing products such as Microsoft StreamInsight and Oracle CEP.   ObjectFX seems to come from the other direction: instead of extending event processing to have spatiotemporal capabilities, it extends a spatial platform (ObjectFX has other spatially oriented products) to event processing.  It will be interesting to see whether the two approaches meet.
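The "entering/exiting an area" capability can be sketched very simply; here is a hedged Python illustration of geofencing (my own minimal formulation, not ObjectFX's actual product logic): derive enter/exit events from a stream of position updates relative to a rectangular area.

```python
# Turn raw position updates into derived enter/exit events
# relative to a rectangular area (x1, y1, x2, y2).

def inside(point, area):
    (x, y), (x1, y1, x2, y2) = point, area
    return x1 <= x <= x2 and y1 <= y <= y2

def geofence_events(positions, area):
    """Emit ('enter', pos) / ('exit', pos) on boundary crossings."""
    events, was_inside = [], False
    for pos in positions:
        now_inside = inside(pos, area)
        if now_inside and not was_inside:
            events.append(("enter", pos))
        elif was_inside and not now_inside:
            events.append(("exit", pos))
        was_inside = now_inside
    return events

track = [(0, 0), (5, 5), (6, 6), (20, 20)]
print(geofence_events(track, (4, 4, 10, 10)))
# → [('enter', (5, 5)), ('exit', (20, 20))]
```

The same idea extends to "close to" by testing distance to the area boundary instead of strict containment.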

Wednesday, August 3, 2011

On spatiotemporal event processing


In the EPIA book we referred to the spatial dimension of context, and also to spatial patterns of events; there is also a paper on this topic that I wrote last year with Nir Zolotorevsky.   There are also combinations of the spatial and temporal perspectives, both in the sense of composite contexts, and also spatiotemporal patterns
(such as: going north).


It seems that spatiotemporal event processing is becoming popular.  Alex Alves from Oracle presented a paper about it in the industrial track of DEBS'11, and also recently wrote a short posting about it in his Blog;
it seems that Oracle is putting support for spatiotemporal features into its product.


I also came across a description of spatiotemporal processing within Microsoft StreamInsight.


The current support for spatiotemporal capabilities is quite elementary, and various extensions are possible (a student project for the next semester?). It will be interesting to see more about applications that utilize the spatiotemporal capabilities, and their functional requirements.

Friday, July 22, 2011

Another implementation of the "Fast Flower Delivery"


In the EPIA book, we had a running example used for demonstrating all the constructs in the book; the example described a scenario called "Fast Flower Delivery".    During the book's writing we approached the event processing community and issued a call for implementations; six implementations were ready during the writing: Aleri (currently Sybase), Apama (Progress), Esper, ETALIS, ruleCore and StreamBase.    It seems that more implementations are being devised.    I was asked for permission to use the "Fast Flower Delivery" scenario as the running example in an upcoming book teaching the use of one of the products; I will write about that when the book is out.


Recently, an implementation of this scenario in IBM WebSphere Business Events (WBE) was posted on IBM developerWorks as a tutorial to teach the use of that product.
It seems to be becoming the "Hello World" of event processing.

Thursday, February 24, 2011

SAP BusinessObjects Event Insight was launched



The news of this week, which I got in multiple copies from multiple sources, is the launch of a product called SAP BusinessObjects Event Insight. The SAP website that describes this product exhibits the picture I have copied here, probably a business person (judging by the dress) who gets some event insights by phone, and looks happy about these insights.    The product is probably a descendant of Aleri, which purchased Coral8 and later sold its assets to Sybase, which in turn was purchased by SAP.  It is interesting to note that it is branded as "BusinessObjects", which is another SAP acquisition, of a French BI company.  This shows that SAP positions event processing as part of the BI suite, which follows one of the current trends.

When we started the event processing community meetings in 2006, one of the participants said that event processing would hit the mainstream of computing when all four big software companies -- IBM, Oracle, Microsoft and SAP -- have products in this area.   With this product launch, SAP joins the three others who are already there, along with some of the medium size software companies, Progress and TIBCO.     While all the "big dogs" are there, I think that there is also a place in the ecosystem for small companies and startups that will go for either niche requirements or domain specific products.  It is also conceivable that small companies will create disruptive technologies that will take us to further generations. Meanwhile, greetings to those behind the project; I hope to be able to learn more about it.

Saturday, June 12, 2010

Some subjective reminiscences on Amit

Catching up on emails and the community's blogs after returning home, I discovered that Paul Vincent has added Amit to the genealogy of products that he maintains. There is also going to be a paper presented at DEBS 2010 about experience with Amit, written by some of our team members (I did not participate directly in this paper, since it describes work on some use cases done in the period that I spent outside the Haifa Research Lab).

I wanted, for a long time, to write something about Amit, maybe this is the right opportunity.

What is Amit?

Amit was the name of a research project that resulted in what is today called an "event processing engine". It is based on ECA rules (though it had originally been specified as an extension to SQL, as was done in active databases). The original specification and design were done by Asaf Adi (who also single-handedly coded the first version) and myself in 1998. The original language has survived, with some additions over the years.
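For readers unfamiliar with the ECA model mentioned above, here is a minimal sketch of an event-condition-action rule engine (the rule structure and names are my own illustration, not Amit's actual language):

```python
# A tiny ECA engine: each rule has an event part (which event type it
# listens to), a condition (a predicate over the event), and an action
# (what to do when the condition holds).

class EcaRule:
    def __init__(self, event_type, condition, action):
        self.event_type = event_type
        self.condition = condition
        self.action = action

def run(rules, events):
    fired = []
    for event in events:
        for rule in rules:
            # Event part: does this rule listen to this event type?
            if event["type"] != rule.event_type:
                continue
            # Condition part: evaluate the predicate over the event.
            if rule.condition(event):
                # Action part: produce a derived event / side effect.
                fired.append(rule.action(event))
    return fired

rules = [EcaRule("withdrawal",
                 lambda e: e["amount"] > 1000,
                 lambda e: ("alert", e["amount"]))]
events = [{"type": "withdrawal", "amount": 1500},
          {"type": "deposit", "amount": 2000}]
print(run(rules, events))  # → [('alert', 1500)]
```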

What was it used for?

Amit was used as a component in some IBM products over the years, most notably WebSphere Message Broker, but also WebSphere Sensor Events and some others; it also served in some customer engagements as a stand-alone, for both IBM internal and external customers. The applications developed with Amit spanned various industries. It was also OEM-ed to two external software vendors and served as part of their solutions.

Where is the name Amit coming from?

The official name was "Active Middleware Technology", and at some point somebody in the UK decided that the acronym should be written as AMiT, thus in some places you can see it spelled that way. However, the acronym came later -- the original name was indeed Amit, and it was chosen because Amit is a Hebrew name, and I chose Hebrew names for all my research projects (Pardes, Adi and Arad are some of the other names). This particular one is also an Indian name, and one person named Amit, who was born in India, told me that the meaning of the name in Hindi is "to aspire beyond limitations"; actually, looking up Amit in Wikipedia, one can find that the Hindi meaning of the name is: infinite, immeasurable or boundless. I liked the "aspiring beyond limitations", since when we started we did not know where it would lead, but we had a feeling that this was something big, a feeling that the immediate environment did not share (hence the limitations). Later, I coined the "Active Middleware Technology" acronym, and the plan was to develop a middleware with many components. We started with a first component called the situation manager, and somehow this was the only component developed, as we went deeper into this one; thus the situation manager component became a synonym for Amit. Again -- history develops in strange ways.

What is the impact and legacy of Amit?

With the acquisition of AptSoft in early 2008, the IBM executives decided to retire Amit; it is still used as a legacy in some places, but has not been developed for several years now. However, the impact of Amit has been and still is noticeable. IBM recognized the impact last year through its internal awards process, acknowledging that the activity around Amit, and some other related activities, was the direct reason that IBM decided to enter the EP market.
Looking at technical impact: since we published detailed descriptions of Amit in journal and conference papers, it was read by many people in the community, and we have identified ideas in various products (even names of operators) inspired by Amit; I'll not point at any specific product, though. I think that some of the ideas we worked on (which have been developed further in the post-Amit period), like contexts and policies, laid foundations for the next generation.

As the paper about experience with Amit states, the experience with Amit also taught us where event processing platforms should evolve to, and triggered the post-Amit work on EPN/EPA based event processing that we have been working on in the last couple of years.

This is a very brief account on Amit, I might write some follow-up to this one telling more.

Sunday, February 7, 2010

Event processing - stand-alone, part of a bigger game, or both?


Following my previous posting, somebody told me that I was vague about whether I think that event processing as a stand-alone technology is good or bad. Well, when I was a student I took an "image processing" course, and we worked on graphical screens with 256 gray levels; since then I have realized that even in a "black and white" picture there are a lot of gray areas. Thus, I cannot classify it as "good" or "bad", but I'll provide some observations:

  1. The event processing area started with "stand alone" engines; this was natural, since the area was started by start-ups, and not by big companies as part of other frameworks.
  2. There is a gradual shift in the market from stand alone event processing solutions to event processing capabilities embedded in larger frameworks; as bigger companies got into the picture, this trend has intensified.
  3. "Stand alone" products may have to implement functions that "embedded" products can reuse from the original framework, such as: routing, transformation, filtering...
  4. Unlike some software components that may need tight integration, event processing works in loose coupling relative to other components -- sending and receiving events -- and this supports the possibility of having stand-alone EP.
  5. However, there are no interoperability standards, which requires providing adapters for each producer and consumer; this makes stand-alone EP more difficult relative to a single framework, and the level of difficulty is a function of the quantity and diversity of producers and consumers. An enterprise integration framework may already include a variety of adapters that the embedded EP can get "for free".
  6. Event processing is in many cases part of a bigger application, and in this case there is a benefit to having a single programming model for the bigger application, rather than using different programming models/languages/user interfaces for the various parts of the system; this also goes against stand-alone EP. In cases where the system is pure EP, this consideration may not be valid.
  7. Stand-alone EP may support heterogeneous components -- e.g. work with a DBMS from one vendor, a messaging system from another vendor, and connect to a BPM system from a third vendor -- while embedded EP is typically homogeneous, since it all comes from one vendor. This may be true, though today there are a lot of cross-adapters among various components that enable a framework to support components (say, a DBMS) from other vendors, especially where there are standard interfaces.
Is there a bottom line here? I guess that the gray area is that there is some segment of the market in which stand-alone EP can live, but I also think that the trend of moving from stand-alone to embedded will continue.

Thursday, August 27, 2009

On downloadables









Among all the topics that I mentioned in my previous "catching up" posting, I chose to talk about downloadables. The Forrester report mentioned "free downloadable" as a criterion, and one of the authors defended this criterion in a short article. Paul Vincent wrote, in his impressions from the summer school he participated in, that some of the academic people like to use open source, so they can play with the implementation code.

There are two issues here: whether a downloadable is a criterion in product evaluation, and whether for academic purposes one should use open source or a commercial product's downloadable.

As for the first question: the Forrester report stated that event processing is moving from being a niche technology to being part of mainstream computing. An enterprise can choose to use a specific product because it is being sold through some solution/application; in this case the line of business owning the application may select the application and get the product behind it as a by-product -- this is exactly niche-technology behavior. Getting it as part of mainstream computing means that the product is used in the enterprise architecture, and is endorsed by the IT side of the enterprise. I have spent 8 years of my life in an IT shop of a big enterprise, and from this angle the world seems somewhat different: decisions (if not constrained by force majeure) are determined after the IT developers have studied the products from all angles. Big enterprises don't typically have problems getting any software for free evaluation; however, where a free downloadable may help is for people who want to explore products on their own initiative, whether they are developers in organizations that have not (yet) decided to do a formal evaluation, or students, who are the future generation of developers. Investment in students is considered a good investment, and thus many vendors provide free software for academic purposes. As I have written in the past, the publisher of the EPIA book we are writing is very keen on providing downloadable software for the reader. Since we told him that there are multiple approaches and we don't want to restrict the reader to a single approach, we approached several vendors, who agreed to: a) provide a free downloadable; b) provide a solution of (at least a subset of) the "Fast Flower Delivery" example that accompanies the book.
Currently several vendors have already agreed to work with us on this: Aleri (representing the stream programming style), ruleCore (representing the ECA rules programming style), and Apama (representing the script programming style). We also got agreement from Esper (which is already open source) to provide the solution of the case study in Esper. I hope that other vendors will follow. I am sure that the readers will benefit from this exercise (we'll put all solutions in an appendix to the book, and point at the soft copy of each solution, downloadable from the book's website).

Now, to the question of downloadable vs. open source in teaching. I give students two types of programming assignments (depending on the course): one is to write some application using a product, the other is to write a subset of an event processing engine.

For the first type of assignment, both a product downloadable and open source can be used. In the database area, I guess that the majority of the basic database courses use MySQL; however, in the event processing area we don't have a standard language yet, and typically products are more advanced than open source.

For the second type -- writing a subset of an event processing engine -- one approach is to take an open source as a basis and extend it (common in operating systems courses). My personal preference is to let students build such a system from scratch. The reason is that, again, due to the lack of a standard, the use of open source would restrict the student to the approach taken by that open source; alas, the approach in which I teach event processing is somewhat different. I also think that students have to think about how to implement concepts rather than ruminate on others' food. More on teaching considerations later, as I am preparing to teach an event processing course twice in the two semesters, for two different populations: in the Information Systems Engineering program at the Technion, where I'll ask the students to use products, and in the Computer Science program, where I'll ask the students to write a mini-product.

A comment: I have been moderating comments since a spam attack in the past; recently I have been getting, every day, a comment to this Blog in a character set which may be Chinese or Japanese.
I wanted to respond "I did not understand what you wanted from me" (in Hebrew), but as this Blog is handled in English, I will not post comments in other languages; I don't even know whether they are spam or real comments.
I also don't post comments from people who don't identify themselves.

Last but not least, a reminder: the EPTS 5th event processing symposium is getting closer. If you wish to participate and have not registered yet, please do it on the conference site. The reason we need this registration ASAP is that there are quite a few requests to register from non-EPTS members, and since we limit the number of participants, we want to make sure that we are within these limits. BTW, the program on the site is not up-to-date; the updated program will hopefully be published next week.

Tuesday, July 7, 2009

Live from DEBS 2009 II - our paper presentation

Stratification is one of the terms that computer scientists borrowed from geology. In 2007, Ayelet Biger, my former M.Sc. student, did her thesis on complex event processing scalability by partition, which looked at semantic partitioning of an EPN graph into strata, where in each stratum all agents are independent and can run safely in parallel. Today we reported on a 2008 research project that took the stratification idea and developed a system to assign agents to machines in a distributed environment. Geetika Lakshmanan delivered the talk about this project today. I have posted it on SlideShare. Enjoy!
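The core stratification idea can be sketched in a few lines of Python; this is my own minimal formulation (not the thesis' algorithm): partition the agents of an EPN DAG into levels so that agents within a level have no dependencies on each other and can run in parallel.

```python
# Partition an EPN DAG into strata: each stratum contains only agents
# whose producers are all in earlier strata, so agents within a stratum
# are mutually independent.

def stratify(agents, edges):
    """agents: iterable of names; edges: (producer, consumer) pairs."""
    preds = {a: set() for a in agents}
    for src, dst in edges:
        preds[dst].add(src)
    strata, placed = [], set()
    while len(placed) < len(preds):
        # Next stratum: all unplaced agents whose predecessors are placed.
        stratum = sorted(a for a in preds
                         if a not in placed and preds[a] <= placed)
        if not stratum:
            raise ValueError("cycle detected; the EPN must be a DAG")
        strata.append(stratum)
        placed.update(stratum)
    return strata

epn_edges = [("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]
print(stratify("ABCD", epn_edges))
# → [['A', 'B'], ['C'], ['D']]
```

Assigning each stratum's agents to different machines is then a scheduling problem on top of this partition, which is what the reported project dealt with.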

This project is an interesting example of the life-cycle of a project:
  • It started as an academic thesis;
  • It flowed into a research project within the IBM Haifa Research Lab;
  • After showing promising results in the lab, it evolved into a more "down to earth" project that deals with the assignment of agents to machines and threads within the IBM product WBE-XS (WebSphere Business Events - Extreme Scale), which enables event processing in a grid environment. This project has to take into account products and their implementations, deal with product instrumentation, and other stuff that pure research projects do not deal with.

While starting an idea in academia and then going to a start-up has its magic, the work in IBM Research enables taking stuff from academic projects all the way to impact through products, and using my hat as an adjunct professor at the Technion, this is possible. However, as noted before, getting things pushed in a big company is not necessarily easy.

Sunday, June 28, 2009

On hard coding event processing functionality

Today, again, I did not celebrate my birthday; I am not in the habit of celebrating a reminder of the fact that I am getting older. Interestingly, I got today many more (relative to recent years) birthday greetings via various communication channels: an e-card through the Internet, phone calls, email and even real-time messaging. The reason I got many more greetings relative to previous years can be attributed to the fact that I made many friends in the last year, but the more realistic reason is that certain social networks send people reminders about the birthdays of other people in the network, which makes the knowledge about the birthday more accessible. Well, none of my family remembered my birthday; here, at least, no change from previous years.

One of the many meetings I had last week was a teleconference with some IBM customers (I shall not expose their identity); my role in IBM is not in sales or sales support, but I am called from time to time to participate in meetings with customers, at the request of the people handling such meetings. The major issue that they wanted me to discuss was the benefit of using middleware software for event processing (in the large sense) vs. hard coding it into the application code itself, as this customer is used to doing.

Indeed, this is not a clear cut issue; there are two cases in which hard coding the event processing functionality makes sense: either the functionality is very simple, and thus it is not cost-effective to purchase, learn and deploy a specific product, or the functionality is too specific and not covered by products, and furthermore does not represent a ubiquitous requirement that would make it cost-effective for vendors to support.

There are also some cases in which it is reasonable to write some functions in assembly language (even I did some of this in the past), but typically most people prefer to code in a higher level language.

Likewise, many applications satisfy neither of these conditions, and for these applications it is cost-effective to use generic software. The reason is that there are various common functions (e.g. filtering, routing, enrichment, transformation, pattern matching) that recur. Hard coding them means re-inventing wheels over and over again, instead of reusing existing implementations as "services" and enjoying other people's work that is upgraded and optimized over time. This is similar to the reasoning for using other generic products: messaging, workflows, databases, adapters, development environments and others.
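To make the "reusable generic functions" argument concrete, here is a toy Python pipeline showing filtering, enrichment, and transformation as generic stages rather than logic hard-coded into one application (the stage and field names are my own illustration):

```python
# Generic, reusable event processing stages composed into a pipeline.

def filter_stage(events, predicate):
    """Filtering: keep only events satisfying the predicate."""
    return [e for e in events if predicate(e)]

def enrich_stage(events, lookup):
    """Enrichment: attach reference data keyed by customer id."""
    return [{**e, "name": lookup.get(e["customer"], "unknown")}
            for e in events]

def transform_stage(events):
    """Transformation: reshape events into the consumer's format."""
    return [(e["name"], e["amount"]) for e in events]

events = [{"customer": 1, "amount": 250},
          {"customer": 2, "amount": 40}]
lookup = {1: "Alice", 2: "Bob"}
big = filter_stage(events, lambda e: e["amount"] >= 100)
print(transform_stage(enrich_stage(big, lookup)))
# → [('Alice', 250)]
```

The same stages can serve any application that needs them, which is exactly the wheel one re-invents when hard coding.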

The reaction of this particular customer was interesting: "what you are saying makes much sense; currently we are not used to thinking in terms of separating the event processing from the rest of the application logic, and we need to digest the idea". So, there will be a follow-up meeting to continue the discussion. I have seen this kind of reaction before; I think that a challenge for event processing vendors may be the competition with potential customers who do not understand the benefits over hard coding. But this was also the case for other technologies that succeeded in crossing this bridge. The growing number of event processing customers indicates that this idea is getting traction.

I think that this is also a topic for a community effort that may be pursued by EPTS. More on this topic later.

Friday, June 26, 2009

On criteria for evaluation of event processing products

Yesterday I spent much of the evening in the Nofim elementary school; my youngest daughter, Daphna, is graduating from elementary school, and they had their grand celebration, with a lot of speeches, and performances by the children in singing, dancing and playing. It was nice, but somewhat long (the speeches part), and they held it outside on the hottest day of the year (so far).





Another experience: I needed to replace the door lock cylinder in my former apartment, since she is going to rent it out now. I know that Americans tend to do everything (including oil changes in cars) by themselves. When I joined IBM, I was surprised to discover that I am now a "technical person"; actually my current IBM title is IBM Senior Technical Staff Member. I had never thought about software development as a technical activity; in my mind, a technical person is a person who knows how to hold a screwdriver properly, a skill I never possessed. So I looked on the Internet and called a person whose profession is to do that; we set a meeting at 6:00PM, and I went out in the middle of a meeting to go there. At 6:10PM I called him; he said: the person is on his way, he will be there within 10-15 minutes. I waited until 6:30, and my patience started to run out; I called again, and the answer was: he is almost there, give him a couple of minutes. I called again at 6:40 and got the same answer. My last call was at 6:48, saying: I am cancelling this order, and will try to find someone who, when he says 6:00, really means it. Today I called another guy from the Internet, and again set a meeting with him at 6:00PM. This time I called him before leaving home, and he said: I'll be there in 30 minutes; and indeed, he was there 25 minutes later, and changed the lock. I told him that I was happy to see that there are some professional people who arrive on time, and he said that he, as a customer, also hates service people who are late... I have always thought of punctuality as a value; unfortunately, I am (almost) the only one in my family who has a sense of time.
Anyway, today I got a spreadsheet that an analyst made to evaluate various event processing products. The good thing is that people are trying to devise such criteria; however, looking at the criteria there, my observation is that the people who devised this list came up with a long laundry list of criteria, many of which seem to me irrelevant for many of the applications I know, while others that I would have expected to be there are left out, or treated in a shallow way. This brings me back to the difficulty of devising performance benchmarks in this area.

My observation is that the event processing area is not monolithic; people use event processing for different reasons, and have different functional and non-functional requirements and priorities. Trying to think in a monolithic way may yield a result that is not valid for anybody. I think that there should be deeper research in this area, providing criteria for classes of applications; this classification is not easy, as the diversity of applications that each individual organization or vendor sees is limited. This is one of the things that requires a community effort, and I hope that the EPTS use case working group will be able to provide it, so I am looking forward to hearing their tutorial at DEBS 2009. This also relates to our goal of establishing an event processing manifesto, which will be the main theme of the event processing Dagstuhl seminar in May 2010. Stay tuned for more.

Friday, August 29, 2008

On research and practice in event processing




Triggered by a question from Hans Glide on a previous posting, today's topic is the relationship between research and practice in event processing. I'll not go into the ancient history of the event processing area and its ancestors, such as simulation, active databases, etc., but start from the mid-to-late 1990s, when the idea of creating generic languages, tools, and engines for event processing emerged. This area emerged in the research community: David Luckham and his team at Stanford did the Rapide project; Mani Chandy and his team at Caltech did the Infospheres project; John Bates was a faculty member at Cambridge University, and Apama was a continuation of his academic work; my own contribution was establishing the AMIT project in the IBM Haifa Research Lab, which is also part of the research community (kind of...). On the "stream processing" front there were various academic projects, such as the STREAM project at Stanford and the Aurora project at Brown/MIT/Brandeis; these are just samples, and there were more. The interesting observation is that the research projects were there before the commercial implementations; furthermore, many of the commercial implementations were descendants of academic projects. Examples: iSpheres was a descendant of Infospheres, Apama was a descendant of John Bates' work, StreamBase was a descendant of Aurora, Coral8 was a descendant of the Stanford STREAM project, and there are probably more. However, when commercial products are introduced, the world changes, and there is a danger of a disconnect between the research community and the commercial world: products have a life of their own and are developed in various directions, while people in the research community in many cases continue, by inertia, to work on topics that may not be consistent with the problems that the vendors and customers see.
While wild research is essential to breakthroughs, reality provides a lot of research topics that were not anticipated in the lab, and there is a need for synchronization in order to obtain relevant research.

The Dagstuhl seminar in May 2007, where people from academia and industry met for five days and discussed this issue, was one step. My friend Rainer von Ammon organizes periodic meetings on these issues, and a European project may spin off from these meetings. We shall also discuss this topic in the EPTS symposium; we have more than 20 members who are part of the research community, and many of them will also participate in the meeting.


Bottom line: the life cycle is --


1. Ideas start in the research community.

2. At some point the commercial world catches up.

3. Parallel directions - research continues, commercial products evolve in their own way.

4. Synchronization - exchange of knowledge, with ideas flowing in both directions; this needs guidance.


More - later.