Virtual Servers – Best Practices for Your Business
Posted by Timothy Platt on Mar 20, 2017
Today, we’ll talk a bit about best practices for virtualizing your servers. “Virtual” is an abstract concept, so we’ll provide a layman’s explanation and then focus on the benefits. This explanation is targeted at business owners and decision makers, not technical personnel.
First, why would we do this? Virtual servers are highly recommended over purely physical servers because they offer reliability, flexibility, and scalability that is difficult to match in a purely physical environment. They also offer clustering and failover options that aren’t feasible otherwise. Finally, they are much easier to manage, which means less time spent fussing over servers. But before the benefits, a little history…
The Drawbacks of Physical Servers
Before the widespread availability of virtual server technology, nearly every server was purely physical. If you needed a file server, domain controller, or web server, you simply purchased a rack-mounted server, installed an operating system (OS), and installed your application. This all worked fine – until there was a problem. Some shortcomings of this approach:
- Not Enough Resources – What if you underestimated the amount of CPU your application needed? You’d have to upgrade or add a CPU – a time-consuming outage window, not to mention the need to order, install, and configure the new part. Ditto for RAM or disk space.
- Too Many Resources – What if you vastly overestimated the resources you needed – CPU, RAM, or disk? Then you spent a lot of money, and those resources sat wasted. Even worse, you might have one server with too much RAM and another with too little, and no easy way to “share” between them.
- Peak Usage – Related to the above two points – what if your application saw heavy use from 9am to 12pm every day, but not much the rest of the time? You had to purchase an expensive server capable of handling the peak, and the rest of the day those resources sat idle and unused.
- Hardware Failures – Any physical machine is prone to hardware failures. An overheating CPU, a dead motherboard, and other similar issues will stop a server dead in its tracks. Rack mount servers often have redundant disks, power supplies, and network connections, but it’s not reasonable to truly have every component redundant inside a single box… which leads us to “clusters”.
- Cluster Complexity – For true High Availability (HA), a “failover cluster” is created. A cluster is a collection of independent servers (at least two) that cooperate – if one fails, the other takes over automatically. This is great for uptime, but these clusters are difficult to configure and manage properly. Further, it’s inefficient to build a separate cluster for every one of your applications. It’s also expensive – imagine buying everything times two.
- Hardware Dependencies – If your server failed and had to be restored from backups, it had to be restored to the same, or an equivalent, machine. Otherwise the network drivers, motherboard drivers, and many other dependencies would fail. That’s manageable if you keep plenty of spares on hand – but do you really want spare servers lying around? It also greatly complicates disaster recovery and means more time spent restoring service.
- Administrative Overhead – For all the reasons above, system administrators were hesitant to make changes and apply updates – especially because of those hardware dependencies. And because housing more physical equipment is costly, each server ended up running multiple applications. The result is a spaghetti-like mess of dependencies, not to mention downtime for multiple services when a single failure occurs.
We can eliminate every one of these disadvantages by leveraging server virtualization.
What is Virtualization?
In a nutshell, virtualization lets you disassociate the server from the hardware it’s running on. Virtualization uses special software known as a hypervisor to present a virtual, generic hardware image to the server operating system (OS). The OS (and the applications installed on top of it) runs happily, believing it is running directly on real hardware. But it is not. What’s more, you can host multiple virtual servers on one piece of physical hardware, and they can share that expensive RAM, CPU, and disk. Lastly, each virtual server ends up being just a big file on a hard disk somewhere. So you get all the benefits (and ease) of working with files – you can snapshot them, clone them, and back them up and restore them easily. Popular server virtualization technologies include VMware, Hyper-V, and Xen.
Virtualization applies on the desktop as well – Mac users can use Parallels, VMware Fusion, or VirtualBox to run a Windows desktop on their Mac. This is just one example; there are many others. We’re going to stay focused on servers here.
What does a typical virtual server cluster consist of? At a minimum, two rack-mounted servers, which provide the CPU and RAM. Both servers run VMware or another hypervisor. You’ll also need some high-performance, shared disk storage (a SAN or NAS) that can be accessed equally by either server. It’s possible to skip the shared storage, but then you lose many of the benefits of virtualization. We’ll explain more below.
Virtualization to the Rescue
OK, so virtualization is the answer. But just how does the VM cluster eliminate those disadvantages? We’ll address each one in turn:
- Not Enough Resources – You can install multiple virtual servers on one physical server, so all that RAM and CPU can be shared – and dynamically re-allocated on the fly – to whichever server needs it. This is great for upping the utilization ratio of the hardware – less time sitting idle for all that expensive gear.
- Too Many Resources & Peak Usage – For the same reasons, these won’t be problems either. All that pooled RAM and CPU goes to whichever server needs it, whenever it needs it.
- Hardware Failures – We’ve got two server nodes in a cooperating cluster, and if there is a motherboard failure in one server, the hypervisor on the other machine will detect it and take over – seamlessly. But didn’t we say clusters are complicated?
- Cluster Complexity – Hypervisor clusters are much simpler to configure, manage, and maintain than classical HA clusters. This capability is so central to the benefits of virtualization that the vendors have refined it into a nearly turnkey feature. It also lets you cluster services that wouldn’t be clusterable otherwise. And you set up the cluster once – every application running on it benefits.
- Hardware Dependencies – Your virtualized servers use an artificial, generic hardware image. Because that image is consistently provided by the hypervisor, it doesn’t matter what physical hardware is actually underneath. Need to restore to a different sort of physical server? No problem. This makes disaster recovery a breeze.
- Administrative Overhead – Because everything becomes so much easier to manage and change, administrators can confidently make the changes and improvements they need to, comfortable that they can restore service rapidly if needed. There isn’t an administrator alive who doesn’t appreciate the ability to take snapshots, or to restore to an arbitrary hypervisor host.
We didn’t really address flexibility in the list above, so we’ll do that now. With virtualization, you can bring up a new server in minutes. Compare that to the days or weeks needed to order, ship, install, and configure new hardware. When it’s that easy, you avoid piling multiple applications onto one server – the spaghetti mess of intertwined dependencies never gets a foothold. Additionally, you can resize machines with a simple restart. More or less RAM or CPU? Piece of cake.
A Few Closing Thoughts…
OK, just two more comments. With your VM cluster, you’re going to need bigger servers than you would typically order – lots of CPU cores and lots of RAM. This is because you may have the equivalent of 15 or 20 physical servers running across those 2 hypervisor nodes. They’ll be more expensive, but there will be far fewer of them – and fewer depreciating capital assets is a good thing for any business. Secondly, that shared storage (SAN or NAS) has got to be high speed and ultra-reliable. If that storage is slow, all 20 servers are slow. If that storage is down, all 20 servers are down. Obviously, two higher-powered servers and high-performance, ultra-reliable storage are going to cost more. But the virtualization advantages in reliability, flexibility, and performance make it worth it.
Get Help from the Server Support Experts
We hope this information has been helpful. Your situation and unique requirements deserve a specific assessment – and remember, we’re here to help. If you’ve got a server-related challenge, whether onsite or in the cloud, reach out to us – we’d love to help.
IT Support by Virtual Operations
Virtual Operations provides IT support for small businesses in the Orlando and Central Florida area. Our managed IT services offering provides the expertise and quality care your small business needs. Please contact us today to find out how we can help with your computer support and network support needs.