Recently we put together a startup guide to small business computing to provide a comprehensive overview of the concepts small businesses should be aware of when constructing a solid IT infrastructure. This always leads to a discussion about virtualisation, which has become one of the more popular IT trends for businesses and IT firms around the world, especially those with large IT infrastructures. The benefits of a virtualised system have been widely publicised: cost savings, a reduced carbon footprint, and greater freedom in the hardware and software your company can use. Consolidating many servers into one (or at least fewer) yields numerous benefits, which has led many companies to make the leap to virtualisation. At Akita, we have overseen countless transitions from multiple-server systems to a virtualised solution. This transition is a monumental task, however, and for those thinking of adopting this kind of infrastructure, you need to be aware of how best to approach the procedure.

When you know it's time

Business owners and small business IT managers will usually see early warning signs of an ageing setup beginning to fail. It's at this point that they should consider consulting with a professional to find a solution that will replace their ailing system with a virtualised one. Ideally, the transition is planned out over the course of several months, with each individual server moved across gradually to address users' concerns about the change.

From there, a strategy is worked out and the system requirements are drawn up. For instance, even in an SMB we'd expect the following servers to often exist: file/print server, Microsoft Exchange email, database server(s), BlackBerry Enterprise Server, bespoke software servers and virtual desktop servers – so there's plenty of scope for a growing business to have a lot of physical machines. A phased approach is usually best.

One of the main things to get right is the communication to your staff. If it's a larger organisation occupying different offices, floors or territories, then there's a lot more scope for users to adjust poorly to the change. Most changes take place in quieter periods of the year, over weekends. However, very occasionally clients plan changes in busier periods. These are just some of the considerations we must weigh up before work begins.

How to maintain an efficient transition

As with everything revolving around data, efficiency is important, as file sizes and the sheer amount of data stored by companies are forever increasing. We have to consider whether we are storing data on-site or off-site, where data transfer limitations will apply. Almost all data centres are on the regional fibre backbone, so in theory limitations rarely exist. More remote areas, especially in the US, may have to consider this further to assess time and/or data cost savings.
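To put those transfer limitations in perspective, here is a rough back-of-the-envelope calculation – a minimal sketch in Python, where the data sizes, link speeds and 80% efficiency figure are illustrative assumptions rather than figures from any particular migration:

```python
# Rough transfer-time estimate for moving VM data off-site.
def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_gb over a link_mbps line at the given efficiency."""
    data_megabits = data_gb * 8 * 1000       # GB -> megabits (decimal units)
    effective_mbps = link_mbps * efficiency  # allow for protocol overhead
    return data_megabits / effective_mbps / 3600

# e.g. 2 TB of VM images over a 100 Mbps leased line vs a 20 Mbps link
print(f"100 Mbps: {transfer_hours(2000, 100):.1f} hours")  # ~55.6 hours
print(f" 20 Mbps: {transfer_hours(2000, 20):.1f} hours")   # ~277.8 hours
```

Even on a decent leased line, a full off-site move can take days rather than hours, which is one reason quieter periods and weekends are favoured.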

As mentioned earlier, because virtual machines are essentially stored as large files, they can be managed and backed up with ease. There's a huge cost efficiency here that can be passed on directly to the consumer. Once a system is virtualised – be it on-premises or inside a data centre – it can easily be transferred to another location, meaning business owners are not tied to equipment, office location or service provider.
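As a minimal illustration of what "managed as large files" means in practice, the sketch below copies VM disk images to a backup location. It assumes a Hyper-V-style layout where each guest lives in .vhdx image files, and the paths are hypothetical; in a real backup you would quiesce or snapshot the guests first:

```python
import shutil
from pathlib import Path

VM_STORE = Path(r"D:\VirtualMachines")  # hypothetical host storage path
BACKUP = Path(r"E:\vm-backups")         # hypothetical backup destination

# Each VM is just a set of files, so a backup is an ordinary file copy.
for image in VM_STORE.rglob("*.vhdx"):
    target = BACKUP / image.relative_to(VM_STORE)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(image, target)  # preserves timestamps alongside contents
    print(f"backed up {image.name}")
```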

Some bottlenecks to be aware of are internet speeds, ageing hardware owned by businesses, and customers with the old mentality of 'the data is mine, so I'll keep it on-site' or 'on-site data is safe because I don't trust these service providers'.

Clearly, IT companies know how to specify servers correctly for their purpose, but in the case of a lower-budget operation or a DIY attempt, the server may not be up to spec. Hard drive and RAID controller speeds are the major specification issues, alongside the more typical processor clock and RAM speeds.

Servers running virtual clients are more demanding than their single-role counterparts and require fast hard disk drives (HDDs) and large RAID arrays. Most commercial IT firms will, as standard, use disks spinning at 15,000 RPM – two to three times the rotational speed of typical consumer HDDs. Servers employ RAID technology to spread their payload across multiple physical disks, increasing redundancy and also improving overall throughput (because reads and writes are spread across several disks rather than queuing on one). All machines hosting virtual servers should be running at least RAID 5, as it allows for continuity through a single disk failure. RAID 6 tolerates two concurrent disk failures – a scenario that is rare but not impossible – and in high-availability infrastructures RAID 10 and beyond are used. In RAID setups, all disks are hot-swappable via easy-to-access panels in the server fascias. When a dead drive is replaced, the RAID controller rebuilds the array, reconstructing the missing data from the remaining disks to restore redundancy.
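The trade-off between these RAID levels is easy to see with a quick capacity calculation. A minimal sketch, assuming equal-sized disks and the standard RAID definitions; the six-disk, 2 TB example is purely illustrative:

```python
# Usable capacity per RAID level, assuming all disks are the same size.
def usable_tb(level: str, disks: int, disk_tb: float) -> float:
    if level == "RAID5":   # one disk's worth of parity
        return (disks - 1) * disk_tb
    if level == "RAID6":   # two disks' worth of parity
        return (disks - 2) * disk_tb
    if level == "RAID10":  # mirrored pairs halve the capacity
        return disks / 2 * disk_tb
    raise ValueError(f"unknown level: {level}")

# e.g. six 2 TB disks
for level in ("RAID5", "RAID6", "RAID10"):
    print(f"{level}: {usable_tb(level, 6, 2.0):.0f} TB usable")
# RAID5: 10 TB (survives 1 failure), RAID6: 8 TB (survives 2),
# RAID10: 6 TB (survives 1 failure per mirrored pair)
```

In other words, the extra resilience of RAID 6 or RAID 10 is paid for in usable capacity, which is why the choice depends on how critical uptime is.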

Compatibility

In rarer cases, some companies still use ancient bespoke software that only runs on 32-bit machines. This is an issue for several reasons.

Many of these programs do not run on 64-bit machines, meaning virtual 32-bit machines need to remain running well past the date they should have been retired. 32-bit virtual servers can still be run alongside a set of 64-bit counterparts, but it will add to the bill, as it requires an additional set of resources.

If older software does require the continuation of a 32-bit OS, then it's likely there will still be performance issues. This is because a 32-bit machine can only address 4GB of (server) RAM, so many concurrent users can grind the system to a halt.
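That 4GB ceiling falls straight out of the pointer width: a 32-bit address can only distinguish 2^32 locations, so only 2^32 bytes of memory can be referenced. A quick check:

```python
# A 32-bit address can reference at most 2**32 distinct bytes.
addressable_bytes = 2 ** 32
print(addressable_bytes)                     # 4294967296
print(addressable_bytes / 1024 ** 3, "GiB")  # 4.0 GiB - the familiar 4GB cap
```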

This software could be around ten years old, and it's likely the hardware it's running on is of a similar age, so it's definitely still worth getting that 32-bit machine up and running on the new hardware – both in case the old machine fails, and so that it can be backed up and managed along with the other machines.