Tuesday, April 13, 2010
On the virtualization of event processing functions
There is some discussion about scale-up and scale-out as measures of system scalability, as indicated in a recent blog post by Brenda Michelson. I would like to look at the programming model aspects of this discussion. Parallel computing is becoming an increasingly important means of achieving scalability, driven by hardware developments and by the energy-consumption barriers that limit the scalability of a single processor. In event processing, both parallel and distributed computing will play an important role, as we see large, geographically distributed event processing networks.
The main issue in terms of programming model is that manually combining parallel and distributed programming is very difficult, since many considerations come into play. The solution relies on the notion of virtualization: event processing applications should be programmed at a conceptual level, providing the application logic and flow together with policies that define non-functional requirements, since different applications may prioritize different metrics. Then, given a particular distributed configuration, which may also include multi-core machines, the conceptual model should be compiled directly into an efficient implementation that satisfies the objectives set by the policies. This is not easy, but it has already been done in limited domains; the challenge is to make it work across multiple platforms. This is part of the grand challenge of "event processing anywhere" that I'll describe at more length in subsequent posts. Achieving both scale-up and scale-out in event processing requires intelligence in the automatic creation of the implementation, and the ability to fully virtualize all functional and non-functional requirements. More - later.
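To make the idea of virtualization concrete, here is a minimal sketch in Python. All names here are hypothetical illustrations, not any real product's API: an application is specified as platform-independent logic plus a non-functional policy, and a toy "compiler" chooses the execution mapping (threaded fan-out versus sequential) from the policy rather than from the application code.

```python
# Toy sketch of policy-driven virtualization (all names hypothetical):
# the conceptual agent carries only application logic; the mapping to
# an implementation is selected from a non-functional policy.
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor
from typing import Callable

@dataclass
class Policy:
    objective: str  # e.g. "throughput" or "latency"

@dataclass
class ConceptualAgent:
    name: str
    logic: Callable  # platform-independent application logic

def compile_agent(agent, policy, events):
    """Choose an execution strategy from the policy, not from the code."""
    if policy.objective == "throughput":
        # scale-up mapping: fan the same logic out across worker threads
        with ThreadPoolExecutor(max_workers=4) as pool:
            return list(pool.map(agent.logic, events))
    # default mapping: simple sequential execution on one core
    return [agent.logic(e) for e in events]

# The same conceptual agent runs unchanged under either mapping.
enrich = ConceptualAgent("enrich", lambda e: {**e, "seen": True})
out = compile_agent(enrich, Policy("throughput"), [{"id": i} for i in range(8)])
print(len(out))  # 8 enriched events, whichever mapping was chosen
```

The point of the sketch is only the separation of concerns: the functional part (`enrich`) never mentions threads or machines, so the same conceptual model could, in principle, be retargeted to a distributed configuration by a smarter compiler.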