ACM keeps sending me a hard copy of the "Communications of the ACM", in addition to sending email whenever the electronic copy becomes available. Yesterday I browsed through the June 2011 issue (it takes more than a month to get delivered), and found a paper entitled "10 Rules for Scalable Performance in 'Simple Operation' Datastores" by Mike Stonebraker and Rick Cattell. The 10 rules are summarized in the illustration below.
This is a mix of various types of advice: from "use high availability and automatic recovery", to "don't try to build ACID consistency yourself", through "don't be afraid to use high-level languages", and even "use open source". The domain of this paper is - as declared in the title - "simple operation" data stores; the question is what we can learn about scalability in event processing -- which is somewhat different -- focused neither on data stores, nor on simple operations. Also, scalability in event processing has several dimensions, not just scalability in the number of events; in fact, in the DEBS 2011 tutorial we mapped all the scalability dimensions in event processing.
I guess that a shared-nothing architecture is always a good practice; the use of high-level languages and the utilization of main memory are also good practices. Recovery is a matter of application requirements: for some applications recoverability is vital, for others it is not really necessary. As for the use of open source, it again depends on the context. In summary -- some of the rules are rather well-known best practices, some are subjective, and some are context dependent.
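To make the shared-nothing point concrete, here is a minimal sketch of how it might look in an event processing setting: events are hash-partitioned by key, and each partition keeps its own in-memory state, so no locks or shared storage are needed across partitions. All names here (`Partition`, `route`) are hypothetical, for illustration only; they are not taken from the paper.

```python
class Partition:
    """One independent processing node: owns its in-memory state exclusively."""
    def __init__(self):
        self.counts = {}  # touched by this partition only, never shared

    def process(self, event):
        key, _payload = event
        self.counts[key] = self.counts.get(key, 0) + 1

def route(event, partitions):
    """Hash the event key to pick exactly one owning partition."""
    key, _payload = event
    return partitions[hash(key) % len(partitions)]

partitions = [Partition() for _ in range(4)]
events = [("sensor-a", 1), ("sensor-b", 2), ("sensor-a", 3)]
for ev in events:
    route(ev, partitions).process(ev)

# Events for a given key always land on the same partition, so per-key
# state stays local and partitions scale out without coordination.
total = sum(p.counts.get("sensor-a", 0) for p in partitions)
```

The design choice is that all state for a key lives on exactly one node; the trade-off, of course, is that queries spanning many keys must fan out across partitions.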