Friday, November 16, 2007

SOA World 2007 Observations

I just came back from SOA World 2007 in the beautiful city of San Francisco. What a nice city; I always love visiting this charming town. But anyway, the reason for my trip was to attend SOA World 2007, so I had no time to enjoy this lovely place.

I see some really important shifts in people’s perception and adoption of SOA infrastructure. Here is my view:

  1. Many people somehow still perceive SOA as being largely about Web Services, which I found rather surprising.
  2. The focus of SOA governance revolves primarily around deployment, policy management, and version control. Very few really focus on monitoring, especially performance and transactional monitoring, which is key to assuring the quality of service of SOA-enabled applications. Most SOA governance tools are still all about Web Services. What about more advanced SOA deployments with ESBs, brokers, and non-Web-Service-based environments? That is a much bigger and more complex problem to deal with.
  3. It is now widely accepted that the SOA-based paradigm is not suitable for all types of problems, which is good. Previously, somehow, many believed that SOA was going to solve many if not all problems. I think many are disillusioned; so many projects have failed and are still continuing to fail. At the end of the day, SOA may be just one of the practices in the architect's "toolbox" for solving specific sets of problems. While I agree that most failures are not attributable to the SOA concept itself, I think the bigger issue is around people, processes, best practices and expectations.

It is interesting to see a new term like "governance" replacing a good old term like "management". In fact, what is the difference between SOA governance and SOA management? I don’t see any difference. So we have a new slew of concepts and terms, which really add very little over and above the good old terminology.

During one of the presentations I saw the heading "SOA is bigger than SOA" -- very amusing. Not sure what the author meant by that, but somehow it grabbed my attention :)

Sunday, October 28, 2007

Time Synchronization and Event Processing

When processing events, especially when dealing with event occurrence and sequence, time becomes an important factor. Given events A and B, their times of occurrence can be tricky to compare, especially when they are generated from multiple sources. Time would have to be synchronized across both sources in order to compare time(A) and time(B) and determine whether time(A) < time(B), which is to say that event A occurred before event B.

But even if we manage to synchronize time, say to the millisecond or to some microseconds, what happens if both events occur within, or close to, the resolution of the time synchronization? In that case the events have effectively occurred "simultaneously".

It would also seem that the CEP processor would have to be aware of the synchronized time resolution in order to judge whether two events qualify as A < B or A <= B. A <= B would be true if the difference in occurrence times is less than or equal to the resolution of the time synchronization.
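
To make the comparison rule concrete, here is a minimal sketch (my own illustration, with made-up names, not tied to any particular CEP engine) of how a comparator could fold the synchronization resolution into the ordering decision:

    public final class EventOrder {

        public enum Order { BEFORE, AFTER, SIMULTANEOUS }

        // resolutionMillis is the assumed accuracy of clock synchronization
        // between the two event sources (e.g. a few milliseconds).
        public static Order compare(long timeA, long timeB, long resolutionMillis) {
            long delta = timeA - timeB;
            if (Math.abs(delta) <= resolutionMillis) {
                // The difference falls within the synchronization resolution,
                // so A and B cannot be ordered reliably: treat them as simultaneous.
                return Order.SIMULTANEOUS;
            }
            return delta < 0 ? Order.BEFORE : Order.AFTER;
        }

        public static void main(String[] args) {
            // 2 ms apart with a 5 ms resolution: effectively simultaneous.
            System.out.println(compare(1000002L, 1000000L, 5));
            // 20 ms apart: A clearly occurred after B.
            System.out.println(compare(1000020L, 1000000L, 5));
        }
    }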

Another approach is to treat the time of event occurrence as the time of reception by the CEP processor, so that all timestamps are based on the CEP engine's clock and no synchronization is required. However, this method is highly sensitive to event discovery and delivery latencies, which is a problem in most instances.

Saturday, October 27, 2007

Can CEP be used to analyze transaction flow?

While Complex Event Processing (CEP) is a general-purpose technology for processing events in real time, I am wondering how it can be used to analyze transaction/message flow. The basic premise of transaction tracking within SOA and message-driven architectures is to identify the flow of messages by observing message exchanges between systems and applications. In such an environment messages have to be related, correlated and analyzed to determine the beginning and the end of the flow and the timing of each exchange, as well as to discover hidden relationships. Transaction boundaries are determined by observing units of work and relating them based on message exchanges. Some time ago I published a method of correlating transactions, which describes the basic mechanics.

So far I don't see how this can be done using a CEP-based approach (using rule-based (EPL) CEP principles). One can create a rule that observes messages (a.k.a. events); however, I don't see a way to derive relationships that can later be used to classify incoming messages or events.

It seems to me that CEP would require a relationship engine of some sort, one that can derive, store and query relationships that the CEP engine can then use when processing events.

For example, say we observe events A, B and C. There may be a relationship between these events. We can say events A and B are related (A->B) if A's and B's payloads contain a certain key (for example an order number or customer id). Let E(x) denote event E with payload x. If we observe A(x), B(x) and C(x), we can derive that A->B and B->C. If the relation is transitive, we can derive that A->C as well.

So it would be helpful to have a relationship service within CEP engines, where one can declare a relationship and then at runtime determine whether events A and B are related and, if they are, what types of relations qualify.
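
To illustrate what such a relationship service might look like, here is a minimal sketch; the class name, the correlation key and the events are all made up for this example, and a real service would of course need multiple relation types, persistence and transitivity rules:

    import java.util.*;

    public final class RelationshipService {

        private final String correlationKey;                       // e.g. "orderId"
        private final Map<String, List<String>> byKeyValue = new HashMap<>();

        // Declare the relationship: two events are related if their payloads
        // carry the same value for this key (order number, customer id, ...).
        public RelationshipService(String correlationKey) {
            this.correlationKey = correlationKey;
        }

        // Observe event 'name' with the given payload and return the names of
        // all previously observed events it is related to.
        public List<String> observe(String name, Map<String, String> payload) {
            String value = payload.get(correlationKey);
            if (value == null) return Collections.emptyList();     // not correlatable
            List<String> related = byKeyValue.computeIfAbsent(value, v -> new ArrayList<>());
            List<String> priorRelated = new ArrayList<>(related);  // A->E for every prior A(x)
            related.add(name);
            return priorRelated;
        }

        public static void main(String[] args) {
            RelationshipService rel = new RelationshipService("orderId");
            rel.observe("A", Map.of("orderId", "42"));
            System.out.println(rel.observe("B", Map.of("orderId", "42"))); // [A]    => A->B
            System.out.println(rel.observe("C", Map.of("orderId", "42"))); // [A, B] => A->C, B->C
        }
    }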

Tuesday, October 9, 2007

What are the key performance attributes of CEP engines?

CEP engines are typical implementations of the classic producer/consumer paradigm and can therefore be measured by their ability to produce and consume events. Some of the metrics we can use:
  • Rate of complex rules per second -- number of rules that can be processed per second
  • Rate of instructions per second -- since each complex rule may consist of more primitive instructions, knowing the rate of instruction execution per second may be useful.
  • Publishing rate per second -- peak rate at which events can be published to the engine
  • Consumption rate per second -- peak rate at which events can be consumed by event listeners, a.k.a. sinks.
  • Event processing latency (ms) -- time it takes for an event to be processed after it is published
  • Event delivery latency (ms) -- time it takes to deliver an event after it is processed by the event processor or chain of event processors.
  • Outstanding event queue size -- number of events waiting to be processed; an important measure that tells the user how far the engine has fallen behind.
The sum of the processing and delivery latencies produces the total latency to be expected by the end user. This latency can then be compared to the required quality of service or SLA for a given process to determine whether the process can yield useful results as specified by the SLA.
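
As a trivial illustration of that last point (all names here are hypothetical), the total end-user latency is simply the sum of the two latencies, which can then be checked against the SLA threshold:

    public final class LatencyCheck {

        // Total latency seen by the end user = processing latency + delivery latency.
        static long totalLatencyMs(long processingMs, long deliveryMs) {
            return processingMs + deliveryMs;
        }

        static boolean meetsSla(long processingMs, long deliveryMs, long slaMs) {
            return totalLatencyMs(processingMs, deliveryMs) <= slaMs;
        }

        public static void main(String[] args) {
            // Example: 40 ms to process plus 15 ms to deliver against a 100 ms SLA.
            System.out.println(totalLatencyMs(40, 15));  // 55
            System.out.println(meetsSla(40, 15, 100));   // true
        }
    }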

What interests me is not only the metrics, but also the behavior in situations where the rate of production exceeds the rate of consumption for a significant period of time. In this case, the influx of incoming events has to be buffered somewhere before it can be processed by the engine. This, of course, cannot go on indefinitely without significant performance degradation as well as an increase in the overall processing latency.

There are several strategies that can be used separately or in combination:
  • Buffering -- the simplest technique, where events are buffered for both consumers and producers to accommodate peaks. Eventually the buffers get exhausted, and the production and consumption rates must equalize by either reducing the rate of production or increasing the rate of consumption.
  • Increasing the number of consumers -- this can drive the consumption rate up. However, this technique suffers from the plateau effect: after a certain number of consumers the rate of consumption stalls and starts to decrease.
  • Dynamic throttle -- this is where the rates of production and consumption are throttled. The easiest place to throttle is at the event production phase, where events are actually dropped or event production is slowed via deliberate and controlled action. In this situation the latency is passed back to the event producers (see the sketch after this list for buffering combined with a simple drop-on-overflow throttle).
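
Here is a minimal sketch of the first and third strategies combined; the class name and the policy details are made up for the example and not tied to any particular engine:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public final class BufferedEventChannel {

        private final BlockingQueue<String> buffer;    // bounded buffer absorbs peaks
        private long dropped;                          // how many events were throttled away

        public BufferedEventChannel(int capacity) {
            this.buffer = new ArrayBlockingQueue<>(capacity);
        }

        // Producer side: wait briefly for space, then drop the event if the
        // buffer is still full -- a deliberate, controlled throttling action
        // that pushes the latency (or loss) back onto the producer.
        public boolean publish(String event) throws InterruptedException {
            boolean accepted = buffer.offer(event, 10, TimeUnit.MILLISECONDS);
            if (!accepted) dropped++;
            return accepted;
        }

        // Consumer side: take the next event, blocking while the buffer is empty.
        public String consume() throws InterruptedException {
            return buffer.take();
        }

        public int outstanding()   { return buffer.size(); }  // outstanding event queue size
        public long droppedCount() { return dropped; }
    }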

Friday, September 7, 2007

The Great Battle of the Networks

"The Great Battle of the Networks" is the what is happening in today IT environment. Businesses that have best most efficient networks, better intercommunications and integration among various subsystems will prevail. Todays IT networks can be compared to biologicals nervous systems.

So what is happening today? Organizations are building more complex, more efficient networks. Technologies such as virtualization, application integration, grid computing, and network and performance monitoring are making these networks faster and more agile.

While businesses are investing in their IT infrastructure to improve their business, cut costs and maintain competitiveness, I am wondering if there is a more hidden by-product of this growth -- a steady evolution of the intelligent network, a web of self-organizing, self-healing, agile networks.

As with biological organisms, growth in neural complexity led to the evolution of intelligence. While every neuron is a fairly simple cell, their collective behavior produces astounding results -- a higher order of function.

The World Wide Web already exhibits some features of intelligent, self-organizing systems. You may say that "we the humans" are the ones doing the organizing, but in my view the "who or what" does not matter; what matters is the final outcome.

Just like biological organisms, networks allow businesses to adapt to an ever-changing environment. We may be looking at the early rise of the "intelligent net". What might be interesting is that such intelligence may be beyond our senses, just as each and every neuron is unaware of the higher organization it is part of.