VoIP Call Recording with Remote Port Mirroring

  The following are only my opinions and should not be taken as gospel from the Book of Alexander Graham Bell. Without getting into the business, philosophical, risk management, or business management reasons for recording calls, I’ll simply address the issues that I focus on when accomplishing this technical task.

It can be said that all call recording systems accomplish the same thing: they record calls.  Like all phone systems, they accomplish a similar end result using very similar methods.  They all have a way of ‘tapping’ into the phone line (virtual or physical), indexing the call record, storing and/or archiving the call, and offering some sort of playback/reporting interface.  In my opinion, the only significant difference between one call recording solution and another is the touch-and-feel of the interface or the method(s) by which a line can be tapped.

There are two common interface locations that an installer will place a recorder: Trunk or Station.

With a Trunk recorder, all in/outbound lines that come in from the phone company are recorded.  This is accomplished by tapping in (bridging) where the PRI, T-1, or POTS lines come into the PBX.

The pros to this interface:

– All customer/external calls will be recorded
– Very little configuration
– Call indexing is simplified
– Easy to understand
– Easy to troubleshoot

The cons to tapping at this interface:

– Internal calls (extension to extension in the same PBX) will not be recorded
– Recordings are not easily bypassed should a call not need to be recorded
– Usually trunk recorders are expensive (depending on the physical interface used for the tap)

With a Station recording solution, calls are recorded per station (extension).  You simply specify which stations you want to record.

The pros to tapping into the station:

– Only the stations that need to be recorded are recorded
– Internal and external calls are recorded
– Station recording solutions are common on many modern voicemail systems (you already have an option to record to voicemail on your Mitel 3300; it needs to be purchased and configured per station in the COS form)

The cons:

– Confusing and sometimes complex configuration when compared to any trunk tap
– High potential for duplicate recordings and information overload (one recorded ext calls another recorded ext = 2 recordings)
– Difficult to troubleshoot

I am going to focus on station-based recording solutions.  Trunk-based recorders are really easy to set up; although they have a few drawbacks, they are well known and easy to understand for both the installer and the users of the system.  Station recording systems have a potential for confusion due to a lack of perspective. For instance, after researching for a recording on a station that isn’t being recorded, the user will scream, “So, you’re telling me that we spent all that money on that system and it didn’t even record my important call?!”  Another task I’m going to focus on is recording VoIP (virtual taps) as opposed to traditional phones (physical taps).

In configuring a station-based recording system, your network must be capable of mirroring ports.  A port “mirror” is a feature that most managed Ethernet switches are capable of.  I have a bunch of HP ProCurve switches on my network.  Each station that I want to record must be identified at the physical port on my switch, and its traffic must be mirrored (virtually tapped) to the recording Ethernet port.  This is accomplished with the following commands on an HP ProCurve switch:

# This is the port to be recorded

interface C17
name “Mediatrix”

# This line assigns port C17 to mirror 1 and monitors all traffic (in- and outbound)

monitor all both mirror 1 no-tag-added

# This is the port doing the recording

vlan 101
name “Sniffers”
untagged B1
no ip address

# this sets up the mirror instance on the switch

mirror 1 port B1

I put the monitor port in its own VLAN called Sniffers so that only traffic-to-be-monitored exists on that VLAN segment.  While this isn’t strictly necessary, it does prevent the recorder interface from being overloaded with a bunch of traffic not applicable to call recording.

So now I can plug my vendor’s recorder in on port B1 of my switch and it will start recording SIP based VoIP traffic auto-magically.  My vendor will now take my money and go spend it on that F150 he’s been eyeing – yeah… that’s right. That nice one with the vanity mirror.

There are some caveats that you should be aware of:  If you want to record VoIP and DTMF signaling, be sure to read up on RFC 2833.  You might have to call your vendor to make necessary changes to your equipment so you can actually hear the DTMF digits dialed in your recording.  Many SIP implementations decode and send DTMF digits out-of-band using RTP so all you’ll hear is a bunch of button clicking noises instead of the expected DTMF.  It also messes up IVRs – which is what I record exclusively.
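If you are curious what those out-of-band digits actually look like on the wire, RFC 2833 carries DTMF as a 4-byte “telephone-event” payload inside the RTP stream instead of as audible tones. Here is a rough sketch of decoding one such payload; the function name and dictionary layout are my own illustration, not part of any recorder:

```python
import struct

# RFC 2833 telephone-event payload layout (4 bytes):
#   byte 0:   event code (0-9 = digits, 10 = '*', 11 = '#')
#   byte 1:   E bit (end of event), R bit (reserved), 6-bit volume
#   bytes 2-3: duration in RTP timestamp units (big-endian)
EVENT_CODES = {10: '*', 11: '#'}

def decode_telephone_event(payload: bytes) -> dict:
    event, flags, duration = struct.unpack('!BBH', payload)
    digit = EVENT_CODES.get(event, str(event) if event <= 9 else '?')
    return {
        'digit': digit,
        'end': bool(flags & 0x80),   # E bit: this packet ends the event
        'volume': flags & 0x3F,
        'duration': duration,
    }

# Example: digit '5', end-of-event set, volume 10, duration 800
print(decode_telephone_event(bytes([5, 0x8A, 0x03, 0x20])))
```

The point is that a recorder only tapping the audio path never “hears” these events, which is why the recording ends up with clicks instead of tones.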

Be sure you turn off encryption on your PBX.  It sounds pretty funny when the recordings are encrypted…

So, now we have a system recording calls and we need to retrieve the recordings. Indexing is where many people have issues with their recording systems.  Some systems don’t index anything but a date/timestamp of when the recording took place.  Others record the extension, IP address, MAC address, SIP offer/invitation header, dialed number, whether the call was in- or outbound, and all sorts of nifty information.  In my opinion, you should address each of these index criteria based on your requirements.  It is not common that a vendor will set all of these up for you; all they will care about is whether the call is being recorded, and they will walk away leaving you to figure out how to find the calls you are looking for in your new digital file cabinet.

As mentioned in my previous post, I simply use the date/timestamp to research and retrieve my recordings.  Why do I do this?  Because a timestamp is an inherent index that comes with -every- call recording system, it is cheap and only requires synchronization of a couple of system clocks; synching clocks is something that should be done anyway.  Also, before even considering a call recording system, the business should already be recording the call records.  Call records are usually called SMDR or CDR and should be indexed in an easily accessible database with a nice research interface.  The users should already be familiar with this record system and have one installed before they even try recording calls anyway; it’s a “walk before you run” thing, again, in my opinion.

I get a lot more useful information looking at CDR in a comparable timeframe than I ever do listening to recordings anyway; plus, CDR gives me traffic reports and other useful info that I can never get when listening to call recordings.

The major drawback of the date/timestamp retrieval/indexing method is that it requires extra mouse clicks.  I have to research calls on the SMDR, get the times and dates of those interesting calls, and then cross-reference those calls with the date/timestamps of the recordings.  While it isn’t as integrated as a system with all the extra fields attached to the recording, it has worked very well for me in the past and present.  Another drawback: if a recorded caller answers a call, transfers the call to another station that isn’t being recorded, and then the call is transferred back to another recorded caller, the SMDR or CDR might not account for the timestamp of the transfer. This scenario will require additional research to retrieve the call recording (a lot of hunt-and-peck and cross-referencing).  I’m sure there are other hypotheticals that I haven’t thought of yet, but I’m not positive that a fancy call-indexer could address every one of those either.
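The cross-referencing step can even be scripted. This is a minimal sketch with made-up CDR fields (start time and dialed number) that pairs each CDR row with any recording whose start time falls within a tolerance window; the function name and field layout are hypothetical:

```python
from datetime import datetime, timedelta

def match_recordings(cdr_rows, recording_times, tolerance_sec=30):
    """Pair each CDR entry with recordings whose start time falls
    within +/- tolerance of the CDR start time."""
    tol = timedelta(seconds=tolerance_sec)
    matches = {}
    for start, dialed in cdr_rows:
        matches[(start, dialed)] = [r for r in recording_times
                                    if abs(r - start) <= tol]
    return matches

# One CDR row and two recording timestamps; only the first is close enough.
cdr = [(datetime(2012, 7, 3, 13, 30, 30), '388')]
recs = [datetime(2012, 7, 3, 13, 30, 32), datetime(2012, 7, 3, 14, 0, 0)]
print(match_recordings(cdr, recs))
```

Even a crude script like this turns most of the hunt-and-peck into a single lookup, as long as the PBX and recorder clocks stay synchronized.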

Playback is usually WMP on a web interface.  This is pretty standard.  There are bells and whistles available to mark a call, color it a different shade of grey, or download it – like the vanity mirror in my vendor’s new F150 pickup truck, these are all selling points that I might be enticed to focus on… or not.  Again, one might not like the taste of one system over another. Either way, they all pretty much do the same thing.

I have installed the following commercial call recording systems in the past:  ASC, TeleRex, and Onvisource.  I’ve used the following freebies: Wireshark, Ethereal (yes – the Ethernet sniffers can record calls too!), and Oreka.

I use Oreka now because it is easy to set up, it has a nifty web interface and, more importantly, it is free.  While it doesn’t do a lot of fancy stuff, it gets the job done and the price is right.  The amount of time I spent configuring it was well worth it compared to the $60k price tag of the ASC.

I’m sure there are a lot of opinions and input on these types of systems out there – take this as just another opinion from a guy named “ian”.

Quality can only be engineered into a product. So either way you choose to go, make sure you understand and design around valid business requirements before you start kicking the tires on a call recording system (or any system you choose to install at your cooperative for that matter)!

Hope this helps…


Remote Port Mirroring and intelligent mirroring for Oreka (or other tasks)

In the past I configured a lot of physical port mirroring.  It was most useful when configuring call recording for VoIP systems, but it can be used for other tasks – like sniffing out a network problem.  It’s easy to configure for call recording because usually there is one call control system or media gateway, such as a Mitel or ShoreTel system, that you have to monitor.  Configuring for this task was simple: make a list of ports that you want to monitor, configure a mirror port, and put the sniffer on that port.

Conventional port mirroring is possible on just about all modern switches nowadays.  The limitation of traditional port mirroring is that it is limited to traffic local to the switch it’s configured on.  To capture a traffic flow from a non-local switch port, the mirror has to sit on the trunk that connects to the other switch, which means the packet capture has to do all the filtering.  On a trunk port running gigabit speeds, this makes the equipment work a lot harder than it really needs to.  So, to keep the traffic down on the several switches in your campus LAN, you would need a network analyzer and port mirror on each one of your switches.
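To put rough numbers on why filtering at the switch matters: a single G.711 call is on the order of 87 kbps per direction once RTP/UDP/IP/Ethernet overhead is added (that per-call figure is my rule-of-thumb assumption, not from this post), so a filtered VoIP mirror is a tiny fraction of the gigabit trunk the analyzer would otherwise have to chew through:

```python
# Rough sizing: filtered VoIP mirror traffic vs. mirroring a whole trunk.
# ~87 kbps per direction for G.711 with packet overhead (rule of thumb).
KBPS_PER_CALL_DIRECTION = 87
TRUNK_MBPS = 1000  # gigabit trunk

def mirror_load_mbps(concurrent_calls: int) -> float:
    # both directions of every call get mirrored
    return concurrent_calls * 2 * KBPS_PER_CALL_DIRECTION / 1000

calls = 24  # e.g. a full T-1's worth of simultaneous calls
load = mirror_load_mbps(calls)
print(f"{load:.1f} Mbps mirrored, {100 * load / TRUNK_MBPS:.2f}% of the trunk")
```

Even a fully loaded T-1 of calls mirrors at a few Mbps, while an unfiltered gigabit trunk mirror forces the capture box to sift through everything.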

Previously, I only recorded IVR traffic.  IVR system health falls solely on the Information Technology department and should be something that we maintain quality control of.  I view each automation port as an individual employee which must be managed accordingly.  They may show up to work on time, answer/service more calls than any single employee in the company, and work correctly 99.9% of the time, but when there are problems, I am the one accountable.

I had a problem last weekend. The media gateway decided to take a break for about 30 hours.  Now, I had configured for this contingency: interflow forward to the dispatcher on duty if an IVR port rings more than 4 times.  This weekend, the dispatcher was unaware of the technical issue and just answered the calls.  Unfortunately, there was no alarm for this glitch.  Reviewing the logs, it appeared that the SIP gateway decided not to answer calls for 30 hours.  We are now assuming that the gateway reset itself, because calls started coming back in with no problems.

I want to record the dispatcher.  It would also be nice to record other phones on the fly without having to mirror physical ports all throughout our campus.  So, I decided to rebuild my Oreka setup to run under 32-bit Windows XP on a virtual machine and monitor only traffic from specific VoIP MAC addresses.

Configuring remote mirroring on HP ProCurve

I use ProCurve switches in my campus LAN.  I found that they allow me to redirect specific data flows to a different physical destination switch.  This allows my Oreka VoIP recording VM to physically reside in the datacenter even though the monitored traffic is on a physically different port.

At a high level, my goals are simple: mirror all traffic to and from interesting MAC addresses on a remote switch (PhoneRM) to the switch port that my Oreka VM is connected to (CPUrmHP).  Interesting MAC addresses are things that I identify, such as the dispatcher’s ShoreTel phone and the Mediatrix media gateway that services the NISC analog IVR.  The port doing the monitoring must be a physical port, so I’ll need a free port to act as the mirror on my switch.  I saw that port C9 in the computer room was open.  This will be my “destination” mirror port.

Configure Destination switch

Personally, I prefer configuring all of my switches from the command line interface.  It gives me a more global view of what is happening on my switch rather than navigating through menus and tabs on a web interface. In order for me to configure the remote mirror port, I ran the following command in global config mode:

CPUrmHP(config)# mirror endpoint ip 20302 port C9

What this command does is set up a mirror port endpoint.  On this switch, I set up a service to expect traffic from the remote switch on UDP port 20302, which will be mirrored out port C9.  I must set this switch up before the remote switch is configured; otherwise, the remote switch would have nowhere to send the mirrored traffic.  The command syntax is:

CPUrmHP(config)# mirror endpoint ip <src-ip-add> <src-udp-port> <dst-ip-add> port <port#>

The command syntax was a little confusing to me because the destination IP address is the same switch that you are typing the command on – the destination switch.

Configure mirror on remote switch(es)

I logged into my PhoneRM switch and entered the following command in global config mode:

PhoneRM(config)# mirror 1 name “VoIP traffic” remote ip 20302

What this command does is set up a mirror process that forwards interesting traffic to the destination port (C9) in the computer room.  If you have multiple switches that you need to mirror traffic from, all that is required is that the switches have IP connectivity to each other.  Use this command on other switches to mirror traffic to your central monitoring location.  The command syntax is:

PhoneRM(config)# mirror <1-4> [name <name>] remote ip <src-ip-add> <src-udp-port> <dst-ip-add>

Configure interesting traffic on remote switches

Now that the mirror process is set up to forward traffic to the destination switch port C9, we can define interesting traffic to monitor.  There are two ways that this can be done: by MAC address or by physical port.  If you have a device that occupies a port on your remote switch and you want to see all of the traffic on that port, you should mirror the port.  If you are interested in traffic coming to and from a MAC address, you should mirror based on the MAC address.  For the purposes of VoIP call recording, I chose to mirror traffic to and from a MAC address.  It seemed the easiest because I don’t have to find out which port each phone is plugged into.  Also, I can just input the same monitor command in my switches should the phone move from switch to switch (if the VoIP phone is a wireless device, for example).

I configured interesting traffic coming from my VoIP devices’ MAC addresses using the following command in global config mode:

PhoneRM(config)# monitor mac 00014900eeffee both mirror 1

The command syntax is:

PhoneRM(config)# monitor mac <MAC-ADDR> <src | dst | both> mirror <1-4 | NAME-STR>

To monitor a port, you’ll use the interface command and specify what traffic you want to monitor, traffic direction and the mirror process number.  The command syntax is:

PhoneRM(config)# interface <port/trunk/mesh> monitor all [in | out | both] mirror <1-4>
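Since phones usually report their MACs colon-separated and the switch wants them plain or hyphenated, here is a little helper that builds the monitor command for me. The function name is my own, and the MAC in the example is made up; I am assuming ProCurve’s hyphenated xxxxxx-xxxxxx form:

```python
def procurve_monitor_cmd(mac: str, mirror: int = 1, direction: str = 'both') -> str:
    """Build a ProCurve 'monitor mac' command from a colon/hyphen/dot MAC."""
    digits = mac.replace(':', '').replace('-', '').replace('.', '').lower()
    if len(digits) != 12 or any(c not in '0123456789abcdef' for c in digits):
        raise ValueError(f'not a MAC address: {mac}')
    return f'monitor mac {digits[:6]}-{digits[6:]} {direction} mirror {mirror}'

print(procurve_monitor_cmd('00:14:90:0E:EF:FE'))
# -> monitor mac 001490-0eeffe both mirror 1
```

Validating the MAC before pasting it into the switch saves a round of head-scratching when a typo silently monitors nothing.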

Setup the VM

Now that my mirror process is configured to send VoIP traffic to physical port C9 in the computer room, I need to set up my Oreka virtual machine.  I’m running ESXi 5 in this environment.

I installed a VM with 2 network cards.  One of the NICs is going to be connected to the already-configured corporate LAN for network access and the other is going to be connected to the mirror port for VoIP mirror traffic.  First, I have to configure a free physical NIC port from the VM host to connect to port C9 on my CPUrmHP switch.

On the host I configured a new virtual switch by clicking on the Add Networking wizard in the vSphere Configuration tab.  I named mine vSwitch2:

Note that the security setting on this switch allows promiscuous mode, MAC address changes, and forged transmits.

Note the VLANid is set to all (4095)

Then I connected the second network card to the “Sniffers” network on vSwitch2 and plugged vmnic2 into port C9 of my HP Procurve switch.

Install OS and Configure

Next, I installed Windows XP Professional, downloaded all the updates using our WSUS, and installed VMware Tools.  I then removed all MS Networking components and assigned a static bogus IP address to the vNIC connected to the Sniffer network; this prevents silly Microsoft traffic from being transmitted on this adapter.

Test with Wireshark

I downloaded Wireshark, started a capture on the Sniffer port, and made a test call to generate traffic.  Everything looked good:

Installing Oreka on Windows XP

Since Wireshark already installed the pcap driver, installing Oreka was a snap.  First, I downloaded the latest orkaudio-1.2 installer and installed Oreka as a Windows service.

I didn’t want Oreka to monitor the LAN interface.  I wanted it to monitor only the Sniffer network on port C9.  Also, I wasn’t planning on using the web front end.  The Oreka installation is actually 3 separate software packages: OrkAudio, OrkTrack, and OrkWeb.  OrkAudio captures the VoIP network traffic and makes files out of it.  Optionally, OrkAudio updates OrkTrack: a database application that keeps track of the audio files.  OrkWeb interfaces with the database and presents a semi-OK front end application to listen to your recordings using a web interface.

For my purposes, I don’t want or need a web front-end.  This makes my configuration very simple: sniff the VoIP traffic mirrored from the switches and turn it into audio files with a date-time stamp, caller, direction, and callee.  My configuration is limited to OrkAudio only.

All configuration files are located in c:\Program Files\OrkAudio.  The first one I need to configure is config.xml.

I opened it up with Notepad and made the following changes:

  • <StorageAudioFormat>gsm</StorageAudioFormat>
  • <DeleteNativeFile>yes</DeleteNativeFile>
  • <TrackerHostname></TrackerHostname>
    (I don’t need a Tracker – blank this tag out)
  • <TrackerTcpPort></TrackerTcpPort>
  • <!-- <CapturePortFilters>LiveMonitoring</CapturePortFilters> -->
    I don’t need Live Monitoring, so I commented this line out.
  • <TapeProcessors>BatchProcessing, TapeFileNaming, Reporting</TapeProcessors>
    I do want custom file names, so I added the TapeFileNaming option.
  • <TapePathNaming></TapePathNaming>
    I’m fine with the default path, where it makes a directory tree down to the hour.
  • <TapeFileNaming>[year],[month],[day],_,[hour],[min],[sec],_,[localparty],_,[shortdirection],_,[remoteparty]</TapeFileNaming>
    My WAV files will be written just the way I want them: 20120703_133032_%2B1928[censored]_O_388.wav meaning this recording was taken on July 3, 2012 at 13:30:32, when +1928[censored] called extension 388.
    More file naming options are referenced in the Oreka User’s Manual.
  • <!-- <TapeDurationMinimumSec>5</TapeDurationMinimumSec> -->
    I commented out this line, which would otherwise discard calls with a duration of less than 5 seconds.  Why? Because there are strange instances where the SIP call record is put on a blank recording that contains the SIP INVITE message; the actual audio is recorded in another file with the callee or caller labeled as Unavailable.  I have a ticket in with ShoreTel about these calls.  Until then, I keep all calls no matter their duration.
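With that naming template, pulling the index fields back out of a file name is a one-liner with a regex. A sketch, assuming the shortdirection field is always I or O; the phone number here is made up to stand in for the censored one:

```python
import re

# Matches the TapeFileNaming template above:
# [year][month][day]_[hour][min][sec]_[localparty]_[shortdirection]_[remoteparty]
NAME_RE = re.compile(
    r'(?P<ts>\d{8}_\d{6})_(?P<local>[^_]+)_(?P<dir>[IO])_(?P<remote>[^_]+)\.wav$')

def parse_recording_name(filename: str) -> dict:
    m = NAME_RE.search(filename)
    if not m:
        raise ValueError(f'unexpected file name: {filename}')
    d = m.groupdict()
    d['direction'] = 'outbound' if d.pop('dir') == 'O' else 'inbound'
    return d

print(parse_recording_name('20120703_133032_%2B19285551234_O_388.wav'))
```

A parser like this is the glue between the WAV files on disk and whatever SMDR/CDR research you are doing on the side.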

Now, the VoIpPlugin Devices tag needs to be customized for your configuration.  You’ll need to open the file orkaudio.log with Notepad and look at the INFO log entries after the “Initializing VOIP plugin” event.  You’ll see a listing of available pcap devices.  Copy the NIC that is configured on your Sniffer vSwitch and paste it into the config.xml file under the <Devices> tag.

Start --> Run services.msc and restart the OrkAudio service.  Check the OrkAudio.log file for any initialization errors.

Make a test call

By default, the recordings are stored in a directory tree: c:\oreka\audio\[year]\[month]\[day]\[hour]\[filename].wav.  While a call is being recorded, I noticed files being written with the MCF extension.  This stands for multimedia container format and contains the raw audio data as sniffed off the wire while the call is being made.  After the MCF file is completed, Oreka converts it (if possible) to a wav/gsm file (see config.xml) so that you can use Media Player to listen to the recording.  If there are MCF files that are not converted to wav, one of three things has happened: you have the DeleteNativeFile option set to no in config.xml, the service stopped prior to the conversion taking place, or the phone system you are recording is using a codec that Oreka doesn’t know how to decode.
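A quick way to spot stuck conversions is to walk the audio tree looking for .mcf files that never got a companion .wav. A sketch, assuming the default directory layout above (the function name is mine):

```python
import os

def orphaned_mcf(root):
    """Return .mcf files with no .wav sibling -- likely failed conversions."""
    orphans = []
    for dirpath, _dirs, files in os.walk(root):
        present = set(files)
        for f in files:
            if f.lower().endswith('.mcf'):
                base = f[:-4]  # strip '.mcf'
                if base + '.wav' not in present:
                    orphans.append(os.path.join(dirpath, f))
    return sorted(orphans)

print(orphaned_mcf(r'c:\oreka\audio'))
```

Running something like this nightly gives early warning of a stopped service or an unsupported codec, instead of discovering it weeks later when someone asks for a recording.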

One biggie for me is that Oreka cannot decode the G.729a or G.729 codecs, nor can it decode some of the clear-channel, high-quality ShoreTel codecs.  To disable these codecs on your ShoreTel system, remove them from the Codecs List screen in your ShoreWare Director interface.  The only ones I left enabled are BV16 and PCMU8000.


IOPs, Bandwidth, Throughput…

Storage Contention

I was given a chart from a vendor showing the raw throughput of a disk array they suggested I purchase. The chart compared that array against a list of others in terms of throughput.  In an attempt to objectify this storage contention idea that keeps getting thrown in my face, I looked up the definitions of IOPS, throughput, and bandwidth and their relationships in a storage environment.  Here’s what I learned:

  • IOPS = I/Os per second, roughly 1 / (average latency in ms + average seek time in ms); drive RPM (7.2k, 10k, 15k) largely determines these, and block size factors in as well
  • Throughput = the amount of data actually moved per second (IOPS × block size)
  • Bandwidth = the speed of the storage interconnect (3Gb, 6Gb, 12Gb SAS)
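Putting the IOPS formula above into numbers makes the point concrete. The latency and seek figures here are my own ballpark assumptions for a 15k RPM class drive, not vendor specs:

```python
def disk_iops(avg_latency_ms: float, avg_seek_ms: float) -> float:
    """IOPS = 1 / (average latency + average seek), per the formula above."""
    return 1000.0 / (avg_latency_ms + avg_seek_ms)

def throughput_mb_s(iops: float, block_size_kb: float) -> float:
    """Throughput actually delivered = IOPS x block size."""
    return iops * block_size_kb / 1024

iops = disk_iops(avg_latency_ms=2.0, avg_seek_ms=3.5)   # ~15k RPM class drive
print(round(iops))                         # -> 182
print(round(throughput_mb_s(iops, 8), 1))  # 8 KB random blocks -> 1.4 (MB/s)
```

Notice that even a fast spindle doing small random I/O delivers only a couple of MB/s, nowhere near the interconnect’s headline number, which is exactly the horsepower-versus-torque point.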

Being somewhat of a gear head, I see these terms correlating closely to the automotive concepts of horsepower, torque, and vehicle function.  We Americans are known to purchase horsepower off-the-lot; however, we drive torque when choosing our vehicles. Likewise, many computer guys purchase the headline numbers – 10k vs. 15k RPM SAS drives – when our applications actually “drive” IOPS.  It is rare that a user’s speed perception and end-user experience are based solely on throughput. A high-horsepower vehicle may under-perform at the green light when compared to a lower-horsepower vehicle with more low-end torque. So, why is it that salespeople and application vendors focus solely on drive RPM and storage network bandwidth?


Upgrading to ESXi 4.1?

What’s New in VMware vSphere 4.1 for Small and Midsize Businesses

I just watched this video from VMware regarding virtualization, vMotion, and what’s important for small/medium businesses (SMB). There were a couple of interesting questions asked in the web conference:

For us small people, will you ever change the cost structure for vCenter? (i.e., two license options: 6 proc <Foundation> or Unlimited). For a small company, we have 4 two-proc servers and cannot justify the cost of an unlimited vCenter for adding just one more server.

Agreed: the vCenter license cost seems to be out of line with the rest of the licensing especially for smaller implementations.
We’re investigating this – appreciate the feedback.

Is it necessary to have a SAN to take advantage of all the features of vSphere?
It’s necessary to have shared storage – this could be NAS, SAN, etc.

Is this a trial version that has a trial period, such that it’s better to go to essentials plus kit?
No, this is not a trial version. This is a free version of vSphere with a perpetual license. vSphere Hypervisor cannot be centrally managed from vCenter, only locally via the vSphere Client.

For Storage vMotion, is it possible to vMotion a VM across data centers in 4.1?
It is possible, but it is not very easy. You need the right storage architecture.

One thing that I’m finding consistent as time passes and as users become more familiar with VMware: they are asking more targeted questions regarding their storage infrastructure.  Reading these questions, I assume that administrators are demanding more and expecting to pay a lot less for their storage as well.  Just wait…  Something’s around the corner when it comes to addressing storage requirements, and I don’t think it’s going to be an expensive piece of hardware.

Desktop Virtualization

I had a phone call with a friend of mine at another cooperative regarding virtual desktops.  He read a magazine article about the drug that some are now using to deploy their desktops. He asked if I had experienced the hallucinogenic and euphoric effects that he read it would provide.  I told him I wasn’t interested in it – at all.  When he bluntly asked me “why not?”, I felt an obligation to answer him:

Desktop virtualization makes no sense to deploy in my environment.  In the case of server virtualization, several computers in the data center were using more hardware than their applications demanded.  CPU utilization was low, processor core time was limited to the number of threads an application is programmed to use, there was duplication everywhere, and the user experience requirement was practically non-existent. Virtualization makes perfect sense to me in a server environment.
Looking at user desktops, I see virtualization as a solution in need of a problem in my cooperative. Actually, I see it as more of a bane than a cure.  Why? I can purchase a dual-core 2.8GHz desktop with 2GB of RAM, dual-headed video, sound, and a Windows OS pre-installed for the same amount of money as a dumb terminal device, and the dumb terminal’s capabilities are far less than what I get from the desktop.  The flexibility of the desktop meets and exceeds the requirement for just about any user, and for the same price I don’t feel like I’m painting myself into a corner like I would with a terminal.  With that desktop PC, I have processing power designed to meet the dynamic needs of each user, which are very different from the relatively fixed demands and requirements of a server.  Also, just because virtualization works well in the datacenter does not mean it will work equally well on the desktop.  These are very different worlds.

Given the decision to buy a house or rent an efficiency apartment, I found that the house might cost just as much as the apartment.  I later found that the purchased house will absolutely cost less than the apartment – both short and long term.  Given these facts, what choice in housing do you think I should make?

There’s no amount of rationalizing that one can apply to make an apartment sound appealing to me given these facts.  Granted, one does not have to maintain an apartment as much as a house just like desktop virtualization vs a desktop PC – but at what cost to the users and the business?  In the end, you’re paying a lot more for a lot less. This doesn’t make a lot of sense to me.

I’m not saying that it isn’t a solution for some.  I can just about guarantee you that it’s not a solution for me – yet…

TechAdvantage 2011 – Storage Virtualization


There were several off-line comments and questions that I have been getting from cooperatives regarding storage requirements for virtualization. I mentioned in the presentation that we did not deploy with a storage area network (SAN).  This seems to confuse many as to why or how we were able to do that.  The purpose of this post is to address the storage options that you can have in a virtual server environment and give you the pros and cons of setting them up.

Why Storage is Important

The importance of storage in a VMware environment is paramount. I would rate the necessity for storage higher than the processing speed of the CPUs on the host.  Why? Here is the reality:

TechAdvantage 2011 – Installing ESXi 4.x


There are several how-tos out there on installing ESXi.  There are even ones specific to your particular hardware if you search enough.  Instead of reinventing the wheel here, I’m going to share the checklist that I put together during our disaster recovery exercise.  Following my internal documentation, I will include some links and videos that I thought would be helpful.


TechAdvantage 2011 – Virtualization Questions

Questions asked before and after the presentation are posted here.

There’s been a big push in the Midwest for Google Gmail to take over in-house hosting of our e-mail.  My Red Hat server is 7 years old, handles all our e-mail, and should be replaced.  You mentioned Postini: should I outsource my e-mail entirely?

In my opinion, the fact alone that your server is 7 years old is not a compelling reason to move to Gmail.  There are usually 3 reasons people choose to upgrade: a technology, security, or functionality mandate.  All of these are really driven by risk and cost.  Cost does not always have to equate to dollars, but it is good to do so if you have to explain it to someone other than yourself.  The way I rationalize it is that if the cost of holding what you have exceeds the cost of the technology upgrade, then you need to upgrade.

In your case, hardware “getting old” would fall under a technology mandate.  The hardware needs to be replaced because it is seven years old, and you can purchase a less problematic, brand-new box for less than it costs to maintain the current one.  Because your e-mail is doing everything that you want it to, this would not fall under a functionality-mandated upgrade.  E-mail is really a simple and common service for an IT department.

Again, this is only my opinion, but moving to Gmail without defining a business need would be like throwing the baby out with the 7-year-old bathwater.  This is not a good practice in most cultures.  With virtualization, the bathwater can always be fresh: no reconfiguration necessary, everything stays the same, and it essentially eliminates the technology-mandated reasons for upgrading – such as a dated server.

Didn’t you guys do it backwards?  Should you have purchased a SAN first?

TechAdvantage 2011 – Virtualization

I just uploaded all of the video to YouTube today.  Here are the videos if you care to view them:

Part I

Part II

Part III

Stay tuned…

I am finishing a post on how to virtualize for free.  It will be a step-by-step guide on how we accomplished this task.  Also, I will be posting some more information based on some of the questions that Ben and I were asked before and after this presentation.

(Edit: trouble with transcoding video.  A replacement video is being uploaded)