This is a blog describing some thoughts on issues related to event processing and on my current role. It is written by Opher Etzion and reflects the author's own opinions.

Tuesday, April 13, 2010

On the virtualization of event processing functions

There is some discussion about scale-up and scale-out as measures of system scalability, as indicated in a recent blog post by Brenda Michelson; I would like to refer to the programming model aspects of it. Parallel computing is increasingly becoming a means of scalability, due to hardware developments and the barriers to scaling a single processor that stem from energy consumption. In event processing, both parallel and distributed computing will play an important role, as we see large, geographically distributed event processing networks.

The main issue in terms of programming model is that manually programming a combination of parallel and distributed computing is very difficult, since many considerations come into play. The solution relies on the notion of virtualization: event processing applications should be programmed at a conceptual level, providing the application logic and flow, but also policies that define nonfunctional requirements, since different applications may consider different metrics important. Then, given a certain distributed configuration, which may also consist of multi-core machines, the conceptual model should be compiled directly into an efficient implementation based on the objectives set by the policies. This is not easy, but it has already been done in limited domains; the challenge is to make it work across multiple platforms. This is part of the grand challenge of "event processing anywhere" that I'll describe at more length in subsequent posts. Achieving both scale-up and scale-out in event processing requires intelligence in the automatic creation of the implementation, and the ability to fully virtualize all functional and nonfunctional requirements. More - later.
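To make this concrete, here is a minimal, hypothetical sketch in Java of what such a conceptual-level definition might look like: the application logic of an event processing agent is declared once, together with a nonfunctional policy, and a stub "compiler" derives an implementation strategy from the policy alone. All names here (Agent, Policy, the objective values, the logic string) are illustrative and do not come from any real product.

// Hypothetical sketch: application logic and nonfunctional policy are
// declared at a conceptual level; a stub compiler derives the
// implementation strategy from the policy, not from the logic.
public class ConceptualEpnSketch {

    // Nonfunctional objectives an application may declare as important.
    enum Objective { MINIMIZE_LATENCY, MAXIMIZE_THROUGHPUT }

    record Policy(Objective objective, int maxLatencyMillis) {}

    // A conceptual event processing agent: logic only, with no placement,
    // threading, or partitioning decisions baked in.
    record Agent(String name, String inputStream, String logic, Policy policy) {}

    // Stub for the automatic compilation step discussed in the post:
    // the policy drives the choice of scale-up/scale-out strategy.
    static String compile(Agent agent) {
        return switch (agent.policy().objective()) {
            case MINIMIZE_LATENCY ->
                agent.name() + " -> hash-partition by group key, one partition per core, "
                + "target latency " + agent.policy().maxLatencyMillis() + " ms";
            case MAXIMIZE_THROUGHPUT ->
                agent.name() + " -> micro-batch pipeline over a shared queue";
        };
    }

    public static void main(String[] args) {
        Agent fraudCheck = new Agent(
            "FraudCheck",
            "CardTransactions",
            "count(tx) where tx.amount > 1000 within 60s group by tx.cardId",
            new Policy(Objective.MINIMIZE_LATENCY, 50));
        System.out.println(compile(fraudCheck));
    }
}

The point of the sketch is that the same Agent definition could be handed to different compilers for different platforms; only the compilation step, not the application, knows about cores, queues, and partitions.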
4 comments:
Hi Opher,
Would the meta-language not have different parallelization/distribution characteristics depending on which implementation it is bound to? Meaning that the logic might still have to be written with a particular implementation in mind.
Hans
Hello Hans.
The idea is to have the mapping from the virtual to the real done in an automatic way, taking into consideration the specific constraints of each target implementation. However, it is still an open issue whether we'll succeed in achieving fully automatic compilation; I guess that some platform-specific directives will need to be added, but this is still under investigation. See also my next posting on this topic.
cheers,
Opher
Implementation-specific directives are one route, but the current state of the art usually requires one to customize the logic itself to the execution platform. Of course, this is currently something that's done in the logic of the execution platform and not in a higher-level language.
The wisdom is actually in the link between the platform-independent and platform-specific models. We still need to create a specific translation from the PIM to each of the platforms, since they have different syntax and semantics; however, if we get this automatic translation right, then we'll not need to adjust each application (a small sketch of this idea appears below).
cheers,
Opher
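As a hypothetical sketch of the PIM-to-PSM idea discussed in this exchange: a single platform-independent pattern is translated, by one function per target platform, into two platform-specific dialects. Both target syntaxes below are invented for illustration (loosely in the style of stream query languages; neither is the actual syntax of a real product).

// Hypothetical sketch: one platform-independent pattern (the PIM) is
// translated into two invented platform-specific dialects (the PSMs).
public class PimToPsmSketch {

    // The platform-independent model: a stream, a filter, a time window.
    record Pattern(String stream, String filter, int windowSeconds) {}

    // One translation per platform encapsulates that platform's syntax
    // and semantics; the pattern definition itself stays untouched.
    static String toPlatformA(Pattern p) {
        return "SELECT * FROM " + p.stream() + ".window(" + p.windowSeconds()
             + " sec) WHERE " + p.filter();
    }

    static String toPlatformB(Pattern p) {
        return "from " + p.stream() + "[" + p.filter() + "]#time("
             + p.windowSeconds() + " sec) select *";
    }

    public static void main(String[] args) {
        Pattern highValue = new Pattern("CardTransactions", "amount > 1000", 60);
        System.out.println(toPlatformA(highValue));
        System.out.println(toPlatformB(highValue));
    }
}

If the translation layer is done right, adapting an application to a new platform means writing one new translation function, not rewriting the application's logic.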