IT Trends Impacts (Part 2)
What cannot be avoided
Maintain stability of existing IT Services
Managing changes and their releases without destabilizing existing IT Service operations is a priority, and will remain one. The benefit of faster change implementation for business units can easily be wasted by the accumulated collateral damage it generates.
To measure this point, evaluate the financial and brand loss due to Service unavailability, and compare it with the potential benefit of the initial decision to lower precautions. Generally, the scales are out of proportion: the financial impact of instability is 10 to 20 times greater than the benefits linked to project/change optimization. Comparing the nature of the impacts makes this clear:
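The comparison above can be sketched as a simple expected-cost calculation. All figures and function names below are hypothetical, chosen only to illustrate the kind of arithmetic involved; real evaluations would use the organization's own outage-cost and time-to-market estimates.

```python
# Illustrative sketch (hypothetical figures): compare the expected loss from
# Service instability with the benefit of skipping change precautions.

def expected_instability_loss(outage_probability, outage_cost):
    """Expected financial and brand loss if the change destabilizes the Service."""
    return outage_probability * outage_cost

def change_speed_benefit(weeks_saved, weekly_opportunity_gain):
    """Benefit of delivering the change earlier (time-to-market)."""
    return weeks_saved * weekly_opportunity_gain

# Hypothetical example: a 20% outage risk on a Service whose unavailability
# costs 500,000, versus 2 weeks saved at 25,000 of opportunity gain per week.
loss = expected_instability_loss(0.20, 500_000)    # 100,000
benefit = change_speed_benefit(2, 25_000)          # 50,000

# A ratio above 1 means the precautions are worth more than the time saved.
print(f"Risk/benefit ratio: {loss / benefit:.1f}")
```

Even with these modest assumptions the expected loss outweighs the time-to-market gain, which is the point the text makes about out-of-proportion scales.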
► The first addresses the completion time of the project, which can have the following impacts (in case of failure):
► Project cost increase (IT and business resource costs)
► Opportunity cost linked to a missed “time-to-market”. It generally affects one business (less frequently, several)
► The other addresses the stability of existing IT Services:
► Incurs an operating cost on one or several existing Services, possibly affecting the objectives of one or several businesses
► Can possibly affect corporate image and lead to permanent loss of customers
► Leads to additional cost for the management of generated incidents
Only product launches or offers that are strategic for the business justify taking risks. Skipping precautions is thus justified only for changes on non-critical IT Services, or on those whose impacts are very limited. For all others, the constraints linked to Change Management and Release and Deployment Management are justified.
All this imperatively requires evaluating changes jointly with the businesses. ITIL describes two main opportunities for involving businesses in a change impact evaluation:
► Upstream, when a change is being approved and the CAB’s mission consists of evaluating the change’s impact.
► Further downstream, during the various test stages (change test, release approval), where businesses participate in evaluating test results (called “performance evaluation”; see Evaluation Management in the ITIL V3 reference books).
The importance of the human factor
Even if new architectures and technologies facilitate operations linked to change, the importance of the human factor should not be underestimated. It lies at the heart of risk analysis and the associated decisions. Technology alone cannot anticipate and avoid all dangers. We should take advantage of its contributions whenever possible, bearing in mind that it cannot be a complete substitute for human intelligence.
For example, the following change tasks should never rely on purely automatic processes:
► Change impact evaluations, because the correlations made by tooling are limited to explicitly configured content, whereas human intelligence can correlate links and detect risks beyond the tooling’s know-how
► Planning stages, which must take incompatibility risks into account; only experienced staff can anticipate them
► Stakeholder information and coordination, which are mechanically limited to existing lists. People can make wider correlations, integrate recent facts or decisions and, as a result, involve the right persons
► Test result analyses and evaluations. Only humans are in a position to distinguish significant results from the rest, and to identify the real trends and conclusions
► Go / no go before deployment, because a human view is indispensable before any decision; on critical scopes, no risk should be taken based solely on automatic conclusions provided by tooling
However, evaluation assistance tasks that produce quantified elements, and repetitive activities that correspond to perfectly identified cases, can more easily be fully automated. But they remain tasks that assist humans (not the reverse).
Maintain built IT Systems quality
Embed quality as early as the build stage
New application architectures, also called “software packages”, based on functional or business logic (e.g. SOA), make evolutions or the implementation of fixes more independent from one module to another. “Highly available” infrastructure architectures (e.g. fully redundant equipment, “load balancing”, etc.) make it possible to carry out maintenance or change operations without interrupting the IT Service.
Complete processes must be enforced, first for compliance control (with norms covering application architecture, infrastructure and data), then for tests covering the nature of the guarantees given to customers (and formalized in SLAs). Just as in aircraft manufacturing, operational excellence and test comprehensiveness are the standard. Of course, requirement levels vary with the criticality of the IT Service for the business, and control processes must be adapted to the impacts. Here too, the approach consists in comparing the cost of the risk (of Service unavailability) with the savings made on formalization and norms control.
Also, from a financial point of view, what industrialists (from the automobile or electronics industries) teach us is to “do things right the first time”. The efforts needed later to recover or solve incidents due to non-conformities are strongly reduced, and become a major source of savings, if you rely on a process that guarantees a proper implementation of norms and standards and imposes adequate quality control.
Agility does not rhyme with rush
Rushing, and underestimating the need to integrate operational rules or controls, is not a real time saver for the business. Just as for Total Cost of Ownership, agility should be measured over the whole IT Service lifecycle (or at least a significant operational period), and not only on the criterion of earliest availability. Downtime of the business production tool (linked to vulnerabilities resulting from the IT Service design) is wasted time that has to be deducted from the time gained at delivery. In the end, the end-users lose.
Unfortunately, the split between the objectives of Design (and business IT) departments and those of actual operators does not make this awareness easy.
Coordinate one's partners
Resorting to partners and subcontractors is unavoidable, and implicitly makes change control more complex. The IT department becomes a kind of “extended company”: the IT Service delivered to customers results from several internal and external contributions, and the IT department does not control all the operations visible to end-users.
This implies managing these partners with great rigor and managing the performance of their contributions. The Change Management process brings coordination, because it clarifies the roles and responsibilities in activities and deliverables, as well as the commitments of the various external stakeholders. The issue is to align UCs (Underpinning Contracts: the operational commitments of external stakeholders) efficiently with SLAs (the commitments towards customers).
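The UC/SLA alignment described above amounts to checking that each partner's operational commitment is at least as strict as the SLA it underpins. The sketch below illustrates this with hypothetical partner names, metrics and figures; a real check would cover the organization's actual SLA targets.

```python
# Hypothetical sketch: verify that each Underpinning Contract (UC) target is at
# least as strict as the customer SLA it supports. All names and figures are
# illustrative, not taken from any real contract.

SLA = {"availability_pct": 99.5, "restore_hours": 4}

# Operational commitments of two (hypothetical) external partners.
UCS = {
    "hosting_partner": {"availability_pct": 99.9, "restore_hours": 2},
    "network_partner": {"availability_pct": 99.0, "restore_hours": 3},
}

def misaligned_ucs(sla, ucs):
    """Return (partner, metric) pairs where the UC is weaker than the SLA."""
    issues = []
    for partner, uc in ucs.items():
        if uc["availability_pct"] < sla["availability_pct"]:
            issues.append((partner, "availability"))
        if uc["restore_hours"] > sla["restore_hours"]:
            issues.append((partner, "restore time"))
    return issues

print(misaligned_ucs(SLA, UCS))  # → [('network_partner', 'availability')]
```

Any pair reported here signals a commitment towards the customer that no partner contract actually backs — exactly the alignment gap the process must close.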
Given these findings, the reader can legitimately wonder why components or services that are part of business-critical IT Services are outsourced at all. Should they not be managed entirely internally by the IT department?
An organization has no interest in doing so; even if it wished to, it could not reach the cost and performance levels that specialists reach thanks to their size and experience (economies of scale, level of industrialization, etc.).
First of all, the decision to outsource is linked to a global organization strategy, integrated into the debate about the targeted “Service Model”. It specifies a certain number of architecture choices, technical standards and support types, and implicitly the outsourcing or internalizing of competences. All of these choices aim at supporting the service levels the organization commits to offer its customers; they are fully part of its Service strategy. As a result, they must implicitly be aligned with SLAs. Most often they are common to several IT Services, in which case it becomes difficult to introduce peculiarities that are not economically justified by a single IT Service.
The choice of outsourcing should not depend on what the organization can or cannot do, but rather on what it decides to do on its own or not. Processes such as Change Management and Release and Deployment Management aim at guaranteeing that the IT Service supplied is in line with expectations, and that the interfaces with selected partners meet a specific requirement level.
Avoid reinventing the wheel
Very often, the search for large design freedom is equated with agility. Yet this ability to adapt quickly and precisely to a customer’s needs also leads to duplicated efforts, which is both costly and useless. Instead of seeking the benefits linked to re-use, initiative and spontaneity (not to say rush) tend to prevail, neglecting a good exploitation of the experience gained. Once developed and tested, modules or components corresponding to frequently used functionalities save time and money, and limit the potential failures linked to new developments (deeper changes).
One of the drifts observed with Agile methods is the failure to re-use previously developed modules. This drift, more linked to stakeholders’ maturity than to the method itself, reduces the organization’s capacity to maintain its cost and service levels.
Change Management plays an essential role in ensuring that established norms and standards are implemented, whenever it makes sense. Change Management must imperatively evaluate the actual cost of the risk against its apparent benefit (whose sole asset is often immediacy).
The cost of diversity
Failing to control, through Change Management, the proper implementation of common norms and standards potentially leads to diversity in developments and technologies. This risk multiplies competence areas. Costs increase directly, due to the larger number of resources necessary to maintain regular upgrades, support this diversity and guarantee it.
But remain agile
The fact that process approaches hardly attract volunteers in build teams is also due to the lack of discernment and flexibility of their “sponsors”. ITIL fanaticism is just as dangerous as fanaticism in any other area. The consequences are a lack of understanding of its contribution to daily operations and a total lack of buy-in from stakeholders.
This book mentions several examples of process flexibility, which show how important it is that principles and rules be implemented in a relevant and useful way. Wishing to set up a single process and to manage all changes in the same way does not match actual requirements. Neglecting to adapt to volumes, or to focus efforts on what brings value to the business (generally concentrated on critical IT Services), turns control activities into “bottlenecks” and definitively deteriorates the operational stakeholders’ perception of the process.
One should not hesitate to remove inappropriate, even useless, activities and to make process implementation easier (without losing sight of the objective of guaranteeing service levels), relying on the recommendations mentioned in previous chapters (adaptation of control levels, criticality, change standardization, task automation) and on approaches such as Lean Management.
Smoothly structured planning
Wanting to structure release planning in order to limit a totally unpredictable “on the fly” mode of operation is a worthy objective. However, it must not turn into heavy machinery, unable to adapt to business requirements in certain technical cases (such as a poorly anticipated migration becoming necessary).
At this stage, good recipes are not obvious, because they can hardly be transferred from one environment to another:
► Identify totally autonomous infrastructure or application sets, whose evolutions can take place totally independently*. For these sets, it is appropriate to set up specific Release Policies
► For each Release Policy, define the highest release frequency possible, reaching the update flexibility required by the business while maintaining the minimum stability necessary. To be clear, a single organization can rely on 3 or 4 release policies, each one corresponding to an evolution cycle (or frequency) specific to a business’s requirements
► This frequency is based on the opportunity cost** of the absence of agility (linked to an insufficient frequency), compared with the cost of the unavailability suffered (impact of a potential interruption). The optimal frequency can be daily, weekly or monthly, according to the IT Services and their business stakes. Unfortunately, there is no single rule valid for all organizations
* With new technologies linked to virtualization and Cloud approaches for infrastructure supply, independence between technical bases and application software becomes increasingly real.
** Opportunity cost: the shortfall for the business when the necessary flexibility is absent.
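The frequency trade-off described in the bullets above can be sketched as a minimization over candidate cycles. The cycle names and monthly cost figures below are purely hypothetical, chosen only to show the shape of the reasoning: opportunity cost falls as releases become more frequent, while the expected unavailability cost rises.

```python
# Illustrative sketch (hypothetical figures): pick the release frequency that
# minimizes opportunity cost (lack of agility) plus expected unavailability cost.

# Candidate release cycles with hypothetical monthly costs:
# - opportunity_cost falls as releases become more frequent (more agility)
# - outage_cost rises with frequency (each release carries interruption risk)
CANDIDATES = {
    "daily":   {"opportunity_cost": 5_000,  "outage_cost": 40_000},
    "weekly":  {"opportunity_cost": 15_000, "outage_cost": 12_000},
    "monthly": {"opportunity_cost": 60_000, "outage_cost": 4_000},
}

def best_frequency(candidates):
    """Return the cycle whose total (opportunity + outage) cost is lowest."""
    return min(candidates, key=lambda c: sum(candidates[c].values()))

print(best_frequency(CANDIDATES))  # → weekly (27,000 vs 45,000 and 64,000)
```

With different cost profiles per IT Service the optimum shifts, which is exactly why the text concludes there is no single rule valid for all organizations.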
Copyright © 2013 - PRACT Publishing