Setting up ESXi for use in a Ghetto REC Environment

Go and talk to a salesman or friend about server virtualization. They are going to tell you that you can’t virtualize your server environment without lots of expensive equipment and licenses. You’ll be told that you’ll absolutely need a brawny, cutting-edge server. People you know who have “done it before” will also fill your head with a bunch of terms and obfuscate the process. You’ll be overloaded with tech-talk and acronyms that always seem to contain the letter “V,” such as vMotion, VSAN, or P2V. The SAN advocates will want to sell you a really big and expensive storage appliance with a fancy name that, in your mind, holds about as much data as a single drive you can purchase at Best Buy for around $90.

Being a mission-oriented manager, you’ll shake your head and not tread further into the thought of virtualizing your datacenter or computer room. Being a successful cooperative manager in a conservative industry, you will go by your rule of thumb: if it doesn’t make life simpler, easier, or better, don’t do it – no matter how cool it seems to be. Enjoy the simple things in life and accomplish your mission. But I’ve found that the more one bolts on to an environment, the more inherently complex it is going to be. The fact is: your life is not simple. The more servers you add, the more complex your life becomes.

You probably already manage many PC-based applications, each installed on its own server. Your application vendors will give you hardware and software specifications for each application: Hunt Command Center, the Aclara (TWACS) Oracle database and the new OC interface, Exchange, NISC’s iVUE (Linux), Itron, Milsoft, Partner Staking, C3iLex, ACS or another SCADA system, file servers, terminal servers, translation servers (DNP3, workstation, or SOAP servers), domain/AD controllers, Multispeak middleware boxes, network management, and firewall logging databases.

Taking into consideration the MTBF across several mission-critical servers running in a cooperative environment, it is not unusual to come in to work after a long weekend and find one server running in a degraded state. Even though modern servers allow you to “RAID” memory (yes, that’s right: you can hot-swap a defective RAM module), hot-swap hard drives, and install PCI-e network interface cards without bringing a server down, it is rare that the OS does not require a reboot of some sort, regardless of the tolerance of the hardware. It’s just the nature of the numbers. Buy more lottery tickets and your chances of winning increase; buy more hardware and your chances of failure increase. There has got to be a simpler way that still maintains some resiliency.

Using virtualization technology, just about all of your machines can be consolidated onto a single server with redundant components. Having virtualized applications and appliances across several industries using the methodology and process I will describe, I can say with confidence that cheap virtualization can address the needs of any business – let alone an electric cooperative. In my experience with virtualized PC environments, a few points stand out to me:

1) Cost effective
2) Resiliency
3) Disaster recovery

Cost Effective

At its simplest level, virtualization lets you run two or more computers, with two or more operating environments, on one piece of hardware – cost-effectively. For example, you can have a Linux machine and a Windows machine running concurrently on the same system. Likewise, you could have 20 Windows servers running on a single piece of computer hardware.
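
If you want to see that consolidation for yourself, here is a minimal sketch using VMware’s free pyvmomi Python SDK that connects to a single ESXi host and lists every guest it is running. The host name and credentials are hypothetical placeholders – substitute your own.

    # A minimal sketch, assuming pyvmomi is installed (pip install pyvmomi).
    # The host name and credentials are hypothetical placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
    si = SmartConnect(host="esxi.coop.local", user="root", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk the inventory and print every virtual machine on this one physical box
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name:30} {vm.runtime.powerState}")
    view.Destroy()
    Disconnect(si)

Run it against a host with twenty guests and you get twenty lines of output from one physical box – that is the cost argument in a nutshell.
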
Modern server hardware is built to eliminate single points of failure (e.g. RAID, multiple power supplies, backup memory, hot-swappable devices, etc.). By leveraging that resilient hardware for all of your supported applications, you’ll find that the traditional one-application-per-server way of provisioning hardware is as archaic as a one-speed transmission. Being cost effective also has a useful side effect, which brings me to my next point…

Resiliency

By decoupling applications from specific hardware characteristics of the systems they use to perform computational tasks, virtualization simplifies hardware upgrades by allowing administrators to capture the state of a virtual machine and transport that state from an old to a new host system.  In a virtual environment, your applications are no longer tied to physical devices and drivers that can only operate on specific hardware.  Moving a virtual machine from one physical system to another is seamless and does not require loading of device drivers to support the new system!
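
To give you an idea of how little the new host needs to know about the old one, here is a rough sketch – again using the free pyvmomi SDK, with hypothetical host, credential, datastore, and VM names – that re-registers a VM whose folder has been copied onto the new host’s datastore. One caveat: on some versions the free ESXi license makes the API read-only, in which case you would do the exact same thing in a few clicks from the vSphere Client or the host console.

    # A minimal sketch, assuming pyvmomi; the host, credentials, and paths are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
    si = SmartConnect(host="new-esxi.coop.local", user="root", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    datacenter = content.rootFolder.childEntity[0]   # a standalone ESXi host exposes one datacenter
    compute = datacenter.hostFolder.childEntity[0]   # ...and one compute resource
    host, pool = compute.host[0], compute.resourcePool

    # Point at the .vmx file in the VM folder you copied onto the new host's datastore
    vmx_path = "[datastore1] scada01/scada01.vmx"
    WaitForTask(datacenter.vmFolder.RegisterVM_Task(path=vmx_path, asTemplate=False,
                                                    pool=pool, host=host))
    Disconnect(si)

No driver hunt, no reinstall – the guest has no idea it changed hardware.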

A vendor might be trying to sell you a resilient software module that lets you run your SCADA system (or whatever) in a fault-tolerant configuration. It only costs $75,000 – what a deal! Before buying it, check whether virtualizing the server application will accomplish the same thing as their $75k module. Of course, a poorly written software application that requires the constant restarting of system processes will not address your uptime problems. In other words, virtualization will fix the potential hardware issues, but not the bugs in the vendor’s software application.

Disaster Recovery

Because the hardware is decoupled from the guest operating systems, virtualization allows for ease of backup and recovery.  Regression testing is easy.  Recovery is a snap.  Backup does not require downtime.

Regression testing provides assurance that no new errors were introduced in the process of fixing another problem. Remember that last Milsoft update? Yeah, I remember. It messed up my database and I had to recover from backup, which amounted to a lot of downtime. If your software were virtualized, you could copy the production system to a test environment that contains all of the components: the domain controller, PartnerSoft staking, WindMilMap/ESRI, and whatever else you need to test. Go through your regression checklist to confirm that the Milsoft update does what they said it would, without breaking anything else, before pushing it to production.
Just to be safe, take a snapshot of the production environment before doing that upgrade. If anything goes wrong, you can revert to the last snapshot and continue like nothing ever happened!
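
Here is what that safety net can look like with the same free pyvmomi SDK – just a sketch, with a hypothetical host, credentials, and VM name, and the same caveat that a free-license host may force you to use the vSphere Client’s snapshot menu instead of the API.

    # A minimal sketch, assuming pyvmomi; the host, credentials, and VM name are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only: skips certificate checks
    si = SmartConnect(host="esxi.coop.local", user="root", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the production VM by name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "milsoft-prod")
    view.Destroy()

    # Snapshot the disks before the upgrade (quiescing requires VMware Tools in the guest)
    WaitForTask(vm.CreateSnapshot_Task(name="pre-upgrade", description="before Milsoft update",
                                       memory=False, quiesce=True))

    # If the upgrade goes sideways, roll back and carry on like nothing happened:
    # WaitForTask(vm.RevertToCurrentSnapshot_Task())

    Disconnect(si)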

Disaster recovery and backups are as easy as putting a backup hard drive into a new system and booting it up. Backup done!

Let’s do it!

Just so you don’t think I’m all talk and no action, I’m going to show you step-by-step how to set this up in your cooperative’s server room.  If you have a bunch of physical servers and you need to do a hardware upgrade soon, why not buy one big server and try this out on a bench?  It’s fun and only takes a little bit of your time.  You might be surprised at how easy it is.

Also, the point of this is to show you how to do it in a Ghetto REC Environment. This means that, aside from your time, it will not cost you a penny to try it out. In fact, even if you don’t have big hardware to consolidate onto, it doesn’t hurt to virtualize your servers with only one guest per host. Think of how easy it is going to be to move them to a new home in the future.
