Businesses stand to make huge efficiency gains through the use of cloud computing. But many businesses are rightly concerned that they’re not extracting every last drop of potential from what is possibly a very significant restructuring investment. In this article, we’ll draw on cloud computing theory to suggest areas those businesses can review to make sure they’re getting the most out of the cloud.
Cloud computing is becoming ubiquitous and synonymous with remote service provision. As far back as the 1950s, scientists – including Herb Grosch – were predicting the current state of play, with clients accessing centralized mainframes. In fact, Grosch’s predictions – that the entire global community would access data stored in 15 data centers – form a great deal of the foundation behind IBM’s Smarter Planet vision.
There are lessons from the past that we can and should use to shape our cloud provision and strategy in the present. In this article, we’ll examine how the four characteristics of cloud computing laid out in Douglas Parkhill’s 1966 book The Challenge of the Computer Utility can inform our approach to refining the services we offer today. The book won the McKinsey Foundation award for distinguished contributions to management literature, and belongs on any cloud administrator’s or network designer’s reading list.
- Elastic provision. Elastic provision mostly refers to how limited resources – such as computational capacity, storage capacity and network infrastructure capacity – are managed in response to user demand. It’s a key component of any cloud hosting setup as it dictates what kinds of resources each user can access at any given time.
Clearly the challenge for elastic provision lies in two main areas: limited resources and responsiveness to changes in user demand. Any cloud infrastructure will see variances in user demand over time; certain times of the day, for example, are naturally ‘peak’ periods. This becomes a more complex issue if the cloud services offered are diverse, or if typical usage scenarios differ wildly from user to user.
The key to successfully meeting this challenge is to channel users’ requirements into clear ‘bands’ or established ‘scenarios’. This is a matter of design, and can be met by ensuring that the cloud architecture includes specific portals through which users can access certain pre-established sets of resources. For example, does a developer on your system require the same user access privileges as a subscriber? The answer is almost certainly no. Beyond privileges, does a developer require the same computational allocation as a regular subscriber? Again, it’s unlikely. By apportioning resources appropriately, limited capacity can be stretched to meet a variety of different requirements without enormous administrative overhead. This is known in the industry as ‘cloud management’, a service that is becoming increasingly popular with companies large and small across the world.
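The banding idea above can be sketched in a few lines of code. This is a minimal illustration only, assuming hypothetical role names and quota figures – real cloud platforms express the same concept through quota and flavor configuration.

```python
# A minimal sketch of 'banding' users into pre-established resource tiers.
# The role names and quota figures are hypothetical, for illustration only.

RESOURCE_BANDS = {
    # role: vCPUs, RAM (GB), storage (GB)
    "subscriber": {"vcpus": 1, "ram_gb": 2, "storage_gb": 10},
    "developer":  {"vcpus": 4, "ram_gb": 16, "storage_gb": 100},
    "admin":      {"vcpus": 8, "ram_gb": 32, "storage_gb": 500},
}

def quota_for(role: str) -> dict:
    """Return the resource quota for a role, defaulting to the most restrictive band."""
    return RESOURCE_BANDS.get(role, RESOURCE_BANDS["subscriber"])

print(quota_for("developer")["vcpus"])  # 4
```

Because every user falls into one of a small number of bands, the administrator reasons about a handful of scenarios rather than thousands of individual allocations.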
- Online. It’s taken as given that cloud service provision happens online. However, part of this key design concept is planning for downtime and redundancy. How can you minimize downtime? And how will you back up your servers? Certain modern open-source tools – such as OpenStack (an IaaS, or Infrastructure as a Service, platform) and Netflix’s Asgard (a deployment-management console for AWS) – include ways of minimizing the impact of both backup and restore procedures.
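One simple way to bound the cost of a backup regime is a rotation policy: keep only the N most recent backups and delete the rest. The sketch below is a generic illustration of that policy, not the API of any particular platform; the backup names and timestamps are invented.

```python
# Sketch of a simple backup rotation policy: keep only the N newest backups.
# Names and timestamps are illustrative, not tied to any real platform.

def rotate_backups(backups, keep=3):
    """Given (name, timestamp) pairs, return the backups to delete, oldest first."""
    ordered = sorted(backups, key=lambda b: b[1])  # oldest -> newest
    return ordered[:-keep] if len(ordered) > keep else []

backups = [("nightly-1", 1), ("nightly-2", 2), ("nightly-3", 3), ("nightly-4", 4)]
print(rotate_backups(backups, keep=3))  # [('nightly-1', 1)]
```

A fixed rotation count keeps storage consumption – and the length of each backup window – predictable, which is exactly the kind of planning the ‘online’ characteristic demands.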
- Illusion of infinite supply. Users must never feel that they are accessing a computationally limited service, so any cloud service needs to stay snappy. Part of this will come from good approaches to elastic provision – which ensure that there are always plentiful resources for any given usage scenario. Additionally, as much as is reasonable, computation should be pushed to the client. Users understand the limits of their own local machines, so shifting work client-side preserves the illusion of infinite supply on the server side.
- Provided as a utility. Finally, users must be able to access your cloud service as a utility, not as an end unto itself. This has important ramifications for how you use user data, for example. Some services tread the line between using user data for corporate gain – such as targeted advertising for revenue maximization – and using it to improve the interaction experience – such as, in quite a different sense, targeted advertising to increase user engagement. A few fail to recognize the distinction between providing a cloud service for corporate gain and providing one for user gain. It’s a generally accepted principle that this relationship is one-way: user gain will typically lead to corporate gain, but the reverse does not necessarily hold.
Douglas Parkhill was well ahead of his time, and should be recognized as a trailblazer. These four simple principles can guide all stages of a cloud service project, from initial design through architectural implementation to administration, and will help ensure that you maximize your returns from the move to cloud computing. If you have any thoughts on additional principles, please share them in the comments.