The modern business is highly dependent on IT. When systems go down, the disruption can be widely felt, and even lead to tangible damage to the business or its brand. Against this background, it doesn't make sense to gamble with systems availability. So why do so many take risks?
Systems failures occur frequently and impact the business in multiple ways
When more than 1,200 IT professionals were asked about the frequency with which IT systems failures impact their businesses, more than half (57%) alluded to disruptions occurring on at least a monthly basis. The end result is a direct hit on business productivity, increased IT overhead, and knock-on effects as delays impact processes, schedules and plans. Beyond this general disruption, one in five organisations suffers brand damage or tangible financial loss on at least a quarterly basis.
Application availability hotspots differ by organisation size
Larger enterprises are more inclined to identify core business applications as an availability hotspot, as highly integrated in-house developed systems and heavily customised software packages create a complex landscape with many potential points of failure. Small and medium-sized organisations call out horizontal applications such as email as being particularly troublesome from an availability perspective, as a result of rapid growth in demand and underinvestment in platforms.
Lack of resiliency planning often leads organisations to gamble on availability
Much of the exposure leading to high failure rates comes about because system availability is only considered towards the end of the project lifecycle. This often results in having to choose the lesser of two evils: either slipping delivery times to retrofit resiliency measures, or taking the gamble and putting the system live with vulnerabilities. Even if the will is there to do the right thing, the money may not be, as the cost of implementing resiliency will not have been budgeted for.
Dealing with the challenges requires a balanced approach
Whether it's poor planning or simply a lack of appreciation of the need to invest, in most organisations a significant gap exists between the resiliency measures the business requires and those that are actually in place. Issues range from the fundamental, such as inadequate controls during the application lifecycle leading to software that isn't "operations ready", to simple things like the absence of failover solutions for key applications or the lack of effective monitoring to pre-empt potential failures. While the research suggests that addressing such issues individually will pay back significantly, the real aim has to be incorporating resilience and availability into all aspects of IT.
But don't try to boil the ocean: start with the simple stuff
An obvious step to take, if you have not already done so, is to involve IT operations staff early in the project lifecycle. This will highlight resiliency requirements and allow dependencies and conflicts with the existing infrastructure to be understood up front so plans and budgets can be set appropriately. Addressing some of the hotspots identified above is also a good move. Simply stabilising an email or collaboration system, for example, will be a step in the right direction, freeing up resources and getting the business to appreciate the value of uptime, which is a great foundation to lay for the future.