Monday, March 8, 2010

Business Transaction Management vs. Business Transaction Performance

BTM, or Business Transaction Management, vs. Business Transaction Performance -- two terms that aim to describe the current state of affairs in what Gartner calls Transaction Profiling. Ever since I came across the term BTM, I have questioned whether it actually reflects what vendors do in this space. The word "management" implies a bi-directional relationship between the manager and the entity being managed. In the world of Application Performance Management, "management" implies "measure, monitor, administer, control, plan, automate, improve". If anything, BTM should be redefined as Business Transaction Performance Management, or BTPM. Transaction Profiling (Gartner's definition), while more accurate, implies a specific implementation of how performance measurement is actually accomplished -- "profiling". One can envision measuring transaction performance without doing any profiling at all. Profiling is an implementation construct and as such should be avoided when naming a broad discipline such as this one. In fact BTM, as defined, is really a derivative of Business Process Management rather than an Application Performance Management discipline.

The term BTM actually confuses the marketplace. What part of "management" is actually being done by the vendors in this space? Most if not all vendors in this space measure performance and report on it. Any proactive involvement in the transaction lifecycle itself is minimal or impractical in most cases. How practical is it to define application and business logic within a transaction "management" tool? And even if it were feasible, wouldn't it be better to do this in the BPM orchestration layer? Managing the transaction lifecycle is already defined by the Business Process Management discipline and as such belongs in the BPM space. Today's transactions are orchestrated, and therefore managed, by widely known BPM tools from IBM, Microsoft, Oracle and others. So either BTM is part of BPM rather than APM -- in which case, do we really need another term to describe the same thing? -- or BTM is simply all about performance, and "management" should be dropped from the acronym.

No matter what we call things, it is important to understand what these things actually are in reality. BTM, no matter what vendors say, focuses on performance and measurement. Any active involvement in the transaction lifecycle, while possible, is in many cases impractical and in most cases not desirable, for many reasons. So BTM is really about performance, and in my view BTP (Business Transaction Performance) or BTPM (Business Transaction Performance Management) are more appropriate terms. Keeping terms honest is important and benefits the end user. Why? Because we are already awash in so many terms, abbreviations, acronyms, technologies, products and vendors with "supernatural abilities". What we need is simplicity and clarity rather than ambiguity and complexity.

Thursday, October 29, 2009

Technology Overload

I am completely convinced that just as we have produced too many cars, too many houses, too much credit and, sadly, too many dollars, we have also produced too much technology: too many software products, packages and solutions. The result is that organizations are not only confused but unable to absorb the technology and products they already own. Over the past decade enterprises acquired too many products, a large portion of which have become shelfware. So what is the response of the corporate CIO? Vendor/product committees, tool consolidation, vendor consolidation and other tactics to keep new vendors and technologies away and make do with what they already own.

Enterprise solutions are so complex, and vendor messaging so confusing and ambiguous, that oftentimes you need Gartner or some other research agency to decode what is what. The number of new terms and abbreviations is just staggering. The best way to deal with complexity is ... simplicity. I like the KISS approach -- Keep It Simple, Stupid (or, if you prefer, Stupid and Simple). But unfortunately that is not what is happening.

Monday, April 21, 2008

On events, non-events and metrics

I would like to talk about events, non-events and metrics (a.k.a. facts). Facts are elements of truth, usually expressed as a name=value pair. Some examples of factual information: current_temperature=30F or cpu_usage=30% -- this, of course, assumes that the measurement instrument being used is accurate. When monitoring applications, systems or business services, facts are the key performance indicators that reflect the state, availability and/or performance of a given service, system or subsystem.
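
To make this concrete, here is a minimal sketch in Java of a fact as a name=value pair; the class and field names are my own, purely for illustration:

    // A fact: an element of truth expressed as a name=value pair,
    // stamped with the time at which it was observed.
    public final class Fact {
        private final String name;      // e.g. "cpu_usage"
        private final Object value;     // e.g. 30 (percent)
        private final long observedAt;  // when the measurement was taken

        public Fact(String name, Object value) {
            this.name = name;
            this.value = value;
            this.observedAt = System.currentTimeMillis();
        }

        public String name()       { return name; }
        public Object value()      { return value; }
        public long   observedAt() { return observedAt; }

        @Override
        public String toString() {
            return name + "=" + value;  // e.g. cpu_usage=30
        }
    }

For example, new Fact("cpu_usage", 30) prints as cpu_usage=30 -- exactly the name=value form described above.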


So what are events, and how are they different from facts? An event is a change in the state of one or more facts. A “High CPU usage” event simply means that CPU usage has exceeded a certain threshold defined by the observer. Events, then, are just the vehicles by which changes in facts are carried from the source to the observer. Therefore most events, if not all, have the following common attributes: {source, timestamp, variable1, variable2, ..., cause=other_event_list}. The timestamp is simply the time associated with the change of a fact's state or attribute. Example: temperature changed from 20F to 30F. One can design an event generator that creates add, remove and change events every time a fact is added, removed or changed. These events in turn can feed into a CEP or EP engine for processing.
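
A minimal sketch of such an event generator in Java -- the names and the Consumer-based sink are hypothetical stand-ins for a real CEP/EP feed:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    // An event: a change in the state of a fact, carried from source to observer.
    class FactEvent {
        enum Type { ADD, REMOVE, CHANGE }
        final Type type;
        final String source;    // who observed the change
        final long timestamp;   // when the fact changed state
        final String name;      // the fact that changed
        final Object oldValue;  // state before the change (null for ADD)
        final Object newValue;  // state after the change (null for REMOVE)

        FactEvent(Type type, String source, String name, Object oldValue, Object newValue) {
            this.type = type;
            this.source = source;
            this.timestamp = System.currentTimeMillis();
            this.name = name;
            this.oldValue = oldValue;
            this.newValue = newValue;
        }
    }

    // Generates ADD/REMOVE/CHANGE events whenever a fact is added, removed
    // or changed, and hands them to a downstream consumer (e.g. a CEP engine).
    class FactEventGenerator {
        private final Map<String, Object> facts = new HashMap<>();
        private final String source;
        private final Consumer<FactEvent> sink;

        FactEventGenerator(String source, Consumer<FactEvent> sink) {
            this.source = source;
            this.sink = sink;
        }

        void set(String name, Object value) {
            Object old = facts.put(name, value);
            if (old == null) {
                sink.accept(new FactEvent(FactEvent.Type.ADD, source, name, null, value));
            } else if (!old.equals(value)) {
                sink.accept(new FactEvent(FactEvent.Type.CHANGE, source, name, old, value));
            } // no event if the fact did not actually change
        }

        void remove(String name) {
            Object old = facts.remove(name);
            if (old != null) {
                sink.accept(new FactEvent(FactEvent.Type.REMOVE, source, name, old, null));
            }
        }
    }

For example, calling generator.set("temperature", 20) followed by generator.set("temperature", 30) would produce an ADD event and then a CHANGE event carrying the old and new values -- exactly the kind of stream a CEP or EP engine would consume.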

It is also worth noting that detecting non-events should always be done in the context of time (for example, non-occurrence within the last 5 minutes or 24 hours). When the time interval expires, it is easy to check for the occurrence of certain events and evaluate the remaining CEP expression.
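
As a sketch of this idea -- hypothetical names, using only a standard scheduled executor -- one could arm a timer for the window and check at expiry whether the expected event was seen:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Detects the NON-occurrence of an expected event within a time window.
    class NonEventDetector {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final AtomicBoolean seen = new AtomicBoolean(false);

        // Call this when the expected event actually arrives.
        void onEvent() {
            seen.set(true);
        }

        // Arm the detector: if onEvent() is not called within the window,
        // the action fires and the surrounding CEP expression can be evaluated.
        void armWindow(long window, TimeUnit unit, Runnable onNonEvent) {
            scheduler.schedule(() -> {
                if (!seen.getAndSet(false)) {  // reset for the next window
                    onNonEvent.run();
                }
            }, window, unit);
        }
    }

    // Usage: detector.armWindow(5, TimeUnit.MINUTES,
    //     () -> System.out.println("heartbeat missing for the last 5 min"));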

Friday, November 16, 2007

SOA World 2007 Observations

I just came back from SOA World 2007 in the beautiful city of San Francisco. What a nice city; I always love visiting this charming town. But anyway, the reason for my trip was to attend SOA World 2007, so I had no time to enjoy this lovely place.

I see some really important shifts in people’s perception and adoption of SOA infrastructure. Here is my view:

  1. Many people somehow still perceive SOA as being largely about Web Services, which I found rather surprising.
  2. The focus around SOA governance revolves primarily around deployment, policy management, and version control. Very few really focus on monitoring, especially performance and transactional monitoring, which is key for assuring the quality of service of SOA-enabled applications. Most SOA governance tools are still all about Web Services. What about more advanced SOA deployments with ESBs, brokers, and non-Web-Service-based environments? That is a much bigger and indeed more complex problem to deal with.
  3. It is now widely accepted that the SOA-based paradigm is not suitable for all types of problems, which is good. Previously, many somehow believed that SOA was going to solve many if not all problems. I think many are disillusioned; so many projects have failed and are still failing. At the end of the day, SOA may be just one of the practices in the architect's "toolbox" for solving specific sets of problems. While I agree that most failures are not attributable to the SOA concept itself, I think the bigger issue is around people, processes, best practices and expectations.

It is interesting to see a new term like "governance" replacing a good old term like "management". In fact, what is the difference between SOA governance and SOA management? I don’t see any difference. So we have a new slew of concepts and terms, which really add very little over and above the good old terminology.

During one of the presentations I saw the heading "SOA is bigger than SOA" -- very amusing. I am not sure what the author meant by that, but somehow it grabbed my attention :)

Sunday, October 28, 2007

Time Synchronization and Event Processing

When processing events, especially when dealing with event occurrence and sequence, time becomes an important factor. Given events A and B, their times of occurrence can be tricky to compare, especially when the events are generated by multiple sources. Time would have to be synchronized across both sources in order to compare time(A) and time(B) and determine whether time(A) < time(B), which is to say that event A occurred before event B.

But even if we managed to synchronize time, say to the millisecond or to within a few microseconds, what happens if both events occur within, or close to, the resolution of the time synchronization -- which is to say, the events occurred "simultaneously"?

It would also seem that the CEP processor would have to be aware of the synchronized time resolution in order to judge whether two events qualify as A < B or A <= B. A <= B would hold if the difference in occurrence times is less than or equal to the resolution of the time synchronization.
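
Expressed as a sketch in Java (assuming the synchronization resolution is known, here in milliseconds; the names are illustrative):

    // Possible outcomes when ordering two events across sources.
    enum Order { BEFORE, SIMULTANEOUS, AFTER }

    final class EventOrdering {
        // Compares two event timestamps given the resolution (in ms)
        // of the time synchronization between their sources.
        static Order compare(long timeA, long timeB, long resolutionMs) {
            // Within the synchronization resolution the order is undecidable:
            // the events must be treated as having occurred "simultaneously".
            if (Math.abs(timeA - timeB) <= resolutionMs) return Order.SIMULTANEOUS;
            return timeA < timeB ? Order.BEFORE : Order.AFTER;
        }
    }

With a resolution of 1 ms, compare(1000, 1001, 1) returns SIMULTANEOUS: A <= B can be asserted, but A < B cannot.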

Another approach is to treat the time of an event's occurrence as the time of its reception by the CEP processor, where all timestamps are based on the CEP processor's clock and no synchronization is required. This method, however, is highly sensitive to event discovery and delivery latencies, which is a problem in most instances.