In part 1 we considered the importance of knowing the details of your company's network, and in part 2 we discussed the need to keep data backups onsite in case of an emergency, along with the acronym D2D2C (disk-to-disk-to-cloud) and the 3-2-1 rule for maximum protection.

When it comes to system outages or downtime, unless you have widespread system automation, the biggest effect of an outage will obviously be on the end user. It's therefore important to factor user experience into any data recovery or backup plan. Keep track of the general problems users are having, from latency to inadequate response times during normal usage: these can point to problem areas and lead to bigger failures down the line if left unaddressed.

Questions to consider:

- What are the most common end-user computing problems we have?
- How frequently do they occur?
- What is the cost of leaving them unaddressed?
- At what layer of the computing network does the problem originate? (If you can't answer this last question, keeping an accurate tally will help your service provider identify it.)

The difference between a cloud hosting provider and a provider with a service layer is that, in the event of an outage, you may be on your own with traditional purchase models, or recovery will at least take longer. With a service provider, however, you usually receive access to updates, monitoring, and reporting. Stay in close contact with your service provider so these issues are addressed before there's a full-blown outage.

The best data recovery plan is to prevent an outage in the first place! Aim for minimal disruption to the end user during the planning and design phases of data recovery, and avoid peak usage times just as you would with routine maintenance and updates. Cloud hosting services usually offer global access to mission-critical applications across devices.
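The tally those questions call for can be as simple as a categorized counter of user reports. Here is a minimal Python sketch; the `IncidentLog` class and the category and layer names are illustrative assumptions, not part of any specific monitoring product:

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class IncidentLog:
    """Hypothetical running tally of end-user computing problems."""
    incidents: list = field(default_factory=list)

    def record(self, category: str, layer: str) -> None:
        # One user report: what the problem was, and the suspected
        # network layer it originated from.
        self.incidents.append((category, layer))

    def frequency(self) -> Counter:
        # Answers "what are our most common problems, and how often?"
        return Counter(cat for cat, _ in self.incidents)

    def by_layer(self) -> Counter:
        # Tally by originating layer -- useful to hand to a service
        # provider when you can't pinpoint the layer yourself.
        return Counter(layer for _, layer in self.incidents)


log = IncidentLog()
log.record("latency", "network")
log.record("latency", "network")
log.record("slow response", "application")

print(log.frequency().most_common())  # -> [('latency', 2), ('slow response', 1)]
print(log.by_layer())
```

Even a spreadsheet with the same three columns (date, category, layer) serves the purpose; what matters is that the tally is kept consistently so patterns surface before they become outages.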
A disruption in service throws employees' work rhythms off, reducing productivity and increasing frustration. A properly formulated strategy gives users access to alternative resources within a specified time, usually defined in a service-level agreement. Keep an accurate tally of your end-user computing problems to ensure your protection and to help the recovery process, especially if you were unable to address an issue as it arose.

In part 4 we'll discuss the importance of controlling cloud traffic.

This article originally appeared on True North ITG and has been republished with permission.

Author: Jay Leonard