
Shoretel Panic Buttons

The safety guy called me last week and asked about panic buttons for the personnel at the front desk. After listening for several minutes as he described how they "used to do it" with lights, buzzers, SCADA, and pagers, he finally asked for my input. "I can configure a button on their phones to alert a group of people." I've done this in the past with Mitel systems; it can't be that hard to do the same thing with Shoretel.

I figured that if I set up a shared call appearance on a key for all of the users designated to respond to panic calls, all I would need to do is set up a speed dial for the panic button.

Bridged Call Appearance

First, I configured a BCA. After logging into ShoreWare Director, click Call Control –> Bridged Call Appearances. I named the BCA "Emergency" and assigned it an extension:

[Image: Bridged Call Appearance configuration in ShoreWare Director]

Configure Panic Responder Phones

After setting up the BCA, I started mapping a button on each responder's phone. Click Users –> Individual Users, select the user, then Personal Options –> Program IP Phone Buttons, and configure the following screen for each user:

[Image: Program IP Phone Buttons screen for a panic responder]

I put an ‘SOS’ label so that it would show on their phone’s display.

Configure Panic Button Phones

For the people needing panic buttons, I simply programmed a speed dial on an IP Phone Button:

[Image: Speed-dial button programmed with the Emergency BCA extension]

Notify The Users

From: Ian Fleming
Sent: Wednesday, February 06, 2013 8:56 AM
To: People who need to know
Cc: IT Department
Subject: Panic Button

Panic buttons have been configured at the Lakeside office.

You have been designated as a responder.

When the ‘SOS’ button illuminates and rings your phone, the display will read:

For 184
Emergency
[NAME]
{extension}

Example:

[Image: Example phone display during a panic call]

If you see this on your phone, Gracie pressed her panic button.


VoIP Call Recording with Remote Port Mirroring

  The following are only my opinions and should not be taken as gospel from the Book of Alexander Graham Bell. Without getting into the business, philosophical, risk management, or business management reasons for recording calls, I’ll simply address the issues that I focus on when accomplishing this technical task.

It can be said that all call recording systems accomplish the same thing: they record calls. Like all phone systems, they reach a similar end result using very similar methods. They all have a way of 'tapping' into the phone line (virtual or physical), indexing the call record, storing and/or archiving the call, and offering some sort of playback/reporting interface. In my opinion, the only significant difference between one call recording solution and another is the touch-and-feel of the interface or the method(s) by which a line can be tapped.

There are two common interface locations where an installer will place a recorder: Trunk or Station.

With a Trunk recorder, all in/outbound lines that come in from the phone company are recorded.  This is accomplished by tapping in (bridging) where the PRI, T-1, or POTS lines come into the PBX.

The pros to this interface:

– All customer/external calls will be recorded
– Very little configuration
– Call indexing is simplified
– Easy to understand
– Easy to troubleshoot

The cons to tapping at this interface:

– Internal calls (extension to extension in the same PBX) will not be recorded
– Recordings are not easily bypassed should a call not need to be recorded
– Usually trunk recorders are expensive (depending on the physical interface used for the tap)

With a Station recording solution, calls are recorded per station (extension).  You simply specify which stations you want to record.

The pros to tapping into the station:

– Only the stations that are required to be recorded are recorded
– Internal and external calls are recorded
– Station recording solutions are common on many modern voicemail systems (you already have an option to record to voicemail on your Mitel 3300; it needs to be purchased and configured per station in the COS form)

The cons:

– Confusing and sometimes complex configuration when compared to any trunk tap
– High potential for duplicate recordings and information overload (one recorded ext calls another recorded ext = 2 recordings)
– Difficult to troubleshoot

I am going to focus on station based recording solutions. Trunk based recorders are really easy to set up; although they have a few drawbacks, they are well known and easy to understand for both the installer and the users of the system. Station recording systems have a potential for confusion due to a lack of perspective. For instance, after researching a recording on a station that isn't being recorded, the user will scream, "So, you're telling me that we spent all that money on that system and it didn't even record my important call?!" I'm also going to focus on recording VoIP (virtual taps) as opposed to traditional phones (physical taps).

To configure a station based recording system, your network must be capable of mirroring ports. A port "mirror" is a feature most managed Ethernet switches support. I have a bunch of HP ProCurve switches on my network. Each station that I want to record must be identified at its physical port on my switch, and that traffic must be mirrored (virtually tapped) to the recording Ethernet port. This is accomplished with the following commands on an HP ProCurve switch:

# This is the port to be recorded

interface C17
name "Mediatrix"

# this line assigns port C17 to mirror one, monitor all traffic (in- and out-bound)

monitor all both mirror 1 no-tag-added
exit

# This is the port doing the recording

vlan 101
name "Sniffers"
untagged B1
no ip address
exit

# this sets up the mirror instance on the switch

mirror 1 port B1

I put the recording port in its own VLAN called Sniffers so that the only traffic on that segment is the traffic to be monitored. While this isn't completely necessary, it does prevent the recorder interface from being flooded with a bunch of traffic not applicable to call recording.

So now I can plug my vendor’s recorder in on port B1 of my switch and it will start recording SIP based VoIP traffic auto-magically.  My vendor will now take my money and go spend it on that F150 he’s been eyeing – yeah… that’s right. That nice one with the vanity mirror.

There are some caveats that you should be aware of. If you want to record VoIP and DTMF signaling, be sure to read up on RFC 2833. You might have to call your vendor to make the necessary changes to your equipment so you can actually hear the DTMF digits dialed in your recording. Many SIP implementations send DTMF digits out-of-band as RTP telephone-events (RFC 2833), so all you'll hear is a bunch of button-clicking noises instead of the expected DTMF tones. It also messes up IVRs – which is what I record exclusively.
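
To make the out-of-band part concrete, here is a minimal sketch (my own illustration, not part of any recorder or vendor tooling) of decoding a 4-byte RFC 2833 telephone-event payload pulled from an RTP packet; the digit travels as a small structured event rather than as audible tones:

import struct

# RFC 2833 telephone-event payload: event (8 bits), E/R/volume (8 bits), duration (16 bits)
DIGITS = "0123456789*#ABCD"

def decode_telephone_event(payload):
    """Decode a 4-byte RFC 2833 telephone-event payload."""
    event, flags, duration = struct.unpack("!BBH", payload[:4])
    end_of_event = bool(flags & 0x80)   # E bit: set on the last packet of the digit
    volume = flags & 0x3F               # attenuation in dBm0
    digit = DIGITS[event] if event < len(DIGITS) else "event %d" % event
    return digit, end_of_event, volume, duration

# Example payload: digit '5', end bit set, volume 10, duration 800 timestamp units
print(decode_telephone_event(bytes([0x05, 0x8A, 0x03, 0x20])))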

Be sure you turn off encryption on your PBX.  It sounds pretty funny when the recordings are encrypted…

So now we have a system recording calls and we need to retrieve the recordings. Indexing is where many people have issues with their recording systems. Some systems don't index anything but a date/timestamp of when the recording took place. Others record the extension, IP address, MAC address, SIP offer/invitation header, dialed number, whether the call was inbound or outbound, and all sorts of nifty information. In my opinion, you should weigh each of these index criteria against your requirements. It is not common for a vendor to set all of these up for you; they will care only about whether the call is being recorded, then walk away and leave you to figure out how to find the calls you are looking for in your new digital file cabinet.

As mentioned in my previous post, I simply use the date/timestamp to research and retrieve my recordings.  Why do I do this?  Because a timestamp is an inherent index that comes with -every- call recording system, it is cheap and only requires synchronization of a couple of system clocks; synching clocks is something that should be done anyway.  Also, before even considering a call recording system, the business should already be recording the call records.  Call records are usually called SMDR or CDR and should be indexed in an easily accessible database with a nice research interface.  The users should already be familiar with this record system and have one installed before they even try recording calls anyway; it’s a “walk before you run” thing, again, in my opinion.

I get a lot more useful information looking at CDR in a comparable timeframe than I ever do listening to recordings anyway; plus, CDR gives me traffic reports and other useful info that I can never get when listening to call recordings.

The major drawback of the date-timestamp retrieval/indexing method is that it requires extra mouse clicks. I have to research calls in the SMDR, get the times and dates of the interesting calls, and then cross-reference those calls with the date-timestamps of the recordings. While it isn't as integrated as a system with all the extra fields attached to the recording, it has worked very well for me in the past and present. Another drawback: if a recorded caller answers a call, transfers it to another station that isn't being recorded, and the call is then transferred back to another recorded caller, the SMDR or CDR might not account for the timestamp of the transfer. That scenario requires additional research to retrieve the recording (a lot of hunt-and-peck and cross-referencing). I'm sure there are other hypotheticals I haven't thought of yet, but I'm not positive that a fancy call-indexer could address every one of those either.
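
The cross-referencing itself lends itself to a small script. Here's a rough sketch (my own, not a feature of any recorder; the recordings root and the search window are assumptions you would adjust): given a timestamp pulled from the SMDR/CDR, list the recording files whose modification time falls within a few minutes of it.

from datetime import datetime, timedelta
from pathlib import Path

def find_recordings(cdr_time, root=r"c:\recordings", window_minutes=5):
    """List recording files whose modification time is within +/- window of a CDR timestamp."""
    lo = cdr_time - timedelta(minutes=window_minutes)
    hi = cdr_time + timedelta(minutes=window_minutes)
    matches = []
    for path in Path(root).rglob("*.wav"):
        mtime = datetime.fromtimestamp(path.stat().st_mtime)
        if lo <= mtime <= hi:
            matches.append((mtime, path))
    return sorted(matches)

# Example: a call the SMDR says started at 13:30 on July 3, 2012
for mtime, path in find_recordings(datetime(2012, 7, 3, 13, 30)):
    print(mtime, path)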

Playback is usually Windows Media Player embedded in a web interface. This is pretty standard. There are bells and whistles available to mark a call, color it a different shade of grey, or download it – like the vanity mirror in my vendor's new F150 pickup truck, these are all selling points that I might be enticed to focus on… or not. Again, one might not like the taste of one system over another. Either way, they all pretty much do the same thing.

I have installed the following commercial call recording systems in the past: ASC, TeleRex, and Onvisource. I've used the following freebies: Wireshark and Ethereal (yes – the Ethernet sniffers can record calls too!) and Oreka.

I use Oreka now because it is easy to set up, it has a nifty web interface and, more importantly, it is free. While it doesn't do a lot of fancy stuff, it gets the job done and the price is right. The amount of time I spent configuring it was well worth it compared to the $60k price tag of the ASC.

I’m sure there are a lot of opinions and input on these types of systems out there – take this as just another opinion from a guy named “ian”.

Quality can only be engineered into a product. So either way you choose to go, make sure you understand and design around valid business requirements before you start kicking the tires on a call recording system (or any system you choose to install at your cooperative for that matter)!

Hope this helps…

-ian

Remote Port Mirroring and intelligent mirroring for Oreka (or other tasks)

In the past I configured a lot of physical port mirroring.  It was most useful when configuring call recording for VoIP systems but it can be used for other tasks – like sniffing out a network problem.  It’s easy to configure for call recording because usually there is one call control system or media gateway such as a Mitel or Shoretel system that you have to monitor.  Configuring for this task was simple: make a list of ports that you want to monitor, configure a mirror port, and put the sniffer on that port.

Conventional port mirroring is possible on just about all modern switches nowadays. The limitation of traditional port mirroring is that it only sees traffic local to the switch it's configured on. To capture a traffic flow from a port on another switch, you have to mirror the trunk that connects the two switches, which means the packet capture has to do all of the filtering. On a trunk port running at gigabit speeds, this makes the equipment work a lot harder than it really needs to. So, to keep the traffic down on the several switches in your campus LAN, you would need a network analyzer and port mirror on each one of your switches.

Previously, I only recorded IVR traffic. IVR system health falls solely on the Information Technology department and is something we should maintain quality control over. I view each automation port as an individual employee that must be managed accordingly. They may show up to work on time, answer and service more calls than any single employee in the company, and work correctly 99.9% of the time, but when there are problems, I am the one held accountable.

I had a problem last weekend: the media gateway decided to take a break for about 30 hours. Now, I had configured for this contingency: interflow forward to the dispatcher on duty if an IVR port rings more than four times. That weekend, the dispatcher was unaware of any technical issue and just answered the calls. Unfortunately, there was no alarm for this glitch. Reviewing the logs, it appeared that the SIP gateway simply decided not to answer calls for 30 hours. We are now assuming that the gateway reset itself, because calls started coming back in with no problems.

I want to record the dispatcher. It would also be nice to record other phones on the fly without having to mirror physical ports all throughout our campus. So, I decided to rebuild my Oreka setup to run under 32-bit Windows XP on a virtual machine and monitor only traffic from specific VoIP MAC addresses.

Configuring remote mirroring on HP Procurve

I use Procurve switches in my campus LAN.  I found that they allow me to redirect specific data flows to a different physical destination switch.  This will allow my Oreka VoIP recording VM to physically reside in the datacenter even though the monitored traffic is on a physically different port.

At a high-level, my goals are simple:  mirror all traffic to and from interesting MAC addresses from a remote switch (PhoneRM 192.168.1.100) to the switch port that my Oreka VM is connected to (CPUrmHP 192.168.1.200).  Interesting MAC addresses are things that I identify such as the dispatcher’s ShoreTel phone and the Mediatrix media gateway that services the NISC analog IVR.  The port doing the monitoring must be a physical port.  So, I’ll need a free port to act as the mirror on my switch.  I saw that port C9 in the computer room was open.  This will be my “destination” mirror port.

Configure Destination switch

Personally, I prefer configuring all of my switches from the command line interface.  It gives me a more global view of what is happening on my switch rather than navigating through menus and tabs on a web interface. In order for me to configure the remote mirror port, I ran the following command in global config mode:

CPUrmHP(config)# mirror endpoint ip 192.168.1.100 20302 192.168.1.200 port C9

This command sets up a mirror endpoint. On this switch, I configured a service that expects traffic from the remote switch 192.168.1.100 on UDP port 20302 and mirrors it out port C9. This switch must be set up first, before the remote switch is configured; otherwise the remote switch would have nowhere to send the mirrored traffic. The command syntax is:

CPUrmHP(config)# mirror endpoint ip <src-ip-add> <src-udp-port> <dst-ip-add> port <port#>

The command syntax was a little confusing to me because the destination IP address is the address of the switch you are typing the command on, the destination switch.

Configure mirror on remote switch(es)

I logged into my PhoneRM switch and entered the following command in global config mode:

PhoneRM(config)# mirror 1 name "VoIP traffic" remote ip 192.168.1.100 20302 192.168.1.200

This command sets up a mirror process that forwards interesting traffic to the destination port (C9) in the computer room. If you have multiple switches that you need to mirror traffic from, all that is required is that the switches have IP connectivity to each other; use this same command on the other switches to mirror traffic to your central monitoring location. The command syntax is:

PhoneRM(config)# mirror <1-4> [name <name>] remote ip <src-ip-add> <src-udp-port> <dst-ip-add>

Configure interesting traffic on remote switches

Now that the mirror process is set up to forward traffic to the destination switch port C9, we can define the interesting traffic to monitor. There are two ways this can be done: by MAC address or by physical port. If you have a device that occupies a port on your remote switch and you want to see all of the traffic on that port, mirror the port. If you are interested in traffic to and from a particular MAC address, mirror based on the MAC address. For VoIP call recording, I chose to mirror traffic to and from a MAC address. It seemed the easiest because I don't have to find out which port each phone is plugged into. Also, I can input the same monitor command on other switches should the phone move from switch to switch (if the VoIP phone is a wireless device, for example).

I configured interesting traffic coming from my VoIP devices' MAC addresses using the following command in global config mode:

PhoneRM(config)# monitor mac 00014900eeffee both mirror 1

monitor mac <MAC-ADDR> <src | dst | both> mirror <1-4 | NAME-STR>

To monitor a port, you’ll use the interface command and specify what traffic you want to monitor, traffic direction and the mirror process number.  The command syntax is:

PhoneRM(config)# interface <port/trunk/mesh> monitor all [in | out | both] mirror <1-4>

Setup the VM

Now that my mirror process is configured to send VoIP traffic to physical port C9 in the computer room, I need to setup my Oreka virtual machine.  I’m running ESXi 5 in this environment.

I installed a VM with two network cards. One NIC will be connected to the already-configured corporate LAN for network access, and the other will be connected to the mirror port for VoIP mirror traffic. First, I have to configure a free physical NIC on the VM host to connect to port C9 on my CPUrmHP switch.

On the host I configured a new virtual switch by clicking on the Add Networking wizard in the vSphere Configuration tab.  I named mine vSwitch2:

Note that the security settings on this vSwitch allow promiscuous mode, MAC address changes, and forged transmits.

Note that the VLAN ID is set to All (4095).

Then I connected the second network card to the “Sniffers” network on vSwitch2 and plugged vmnic2 into port C9 of my HP Procurve switch.

Install OS and Configure

Next, I installed Windows XP Professional, downloaded all the updates using our WSUS,  and installed VMWare Tools.  I then removed all MS Networking and assigned a static bogus IP address to the vNIC connected to the Sniffer network; this prevents silly Microsoft traffic from being transmitted on this adapter.

Test with Wireshark

I downloaded Wireshark, started a capture on the Sniffer port, and made a test call to generate traffic.  Everything looked good:

Installing Oreka on Windows XP

Since Wireshark had already installed the pcap driver, installing Oreka was a snap. First, I downloaded the latest orkaudio-1.2 installer from http://oreka.sourceforge.net/download/windows and installed Oreka as a Windows service.

I didn't want Oreka to monitor the LAN interface; I wanted it to monitor only the Sniffer network fed by port C9. Also, I wasn't planning on using the web front end. The Oreka installation is actually three separate software packages: OrkAudio, OrkTrack, and OrkWeb. OrkAudio captures the VoIP network traffic and turns it into audio files. Optionally, OrkAudio updates OrkTrack, a database application that keeps track of the audio files. OrkWeb interfaces with the database and presents a semi-OK front-end application for listening to your recordings through a web interface.

For my purposes, I don't want or need a web front end. This makes my configuration very simple: sniff the VoIP traffic mirrored from the switches and turn it into audio files with a date-time stamp, caller, direction, and callee. My configuration is limited to OrkAudio only.

All configuration files are located in c:\Program Files\OrkAudio.  The first one I need to configure is config.xml

I opened it up with Notepad and made the following changes:

  • <StorageAudioFormat>gsm</StorageAudioFormat>
  • <DeleteNativeFile>yes</DeleteNativeFile>
  • <TrackerHostname></TrackerHostname>
    (I don’t need a Tracker – blank this tag out)
  • <TrackerTcpPort></TrackerTcpPort>
  • <!– <CapturePortFilters>LiveMonitoring</CapturePortFilters> –>
    I don't need Live Monitoring, so I commented this line out.
  • <TapeProcessors>BatchProcessing, TapeFileNaming, Reporting</TapeProcessors>
    I do want custom file names, so I added the TapeFileNaming option.
  • <TapePathNaming></TapePathNaming>
    I'm fine with the default path, where it makes a directory tree down to the hour.
  • <TapeFileNaming>[year],[month],[day],_,[hour],[min],[sec],_,[localparty],_,[shortdirection],_,[remoteparty]</TapeFileNaming>
    My WAV files will be written just the way I want them: 20120703_133032_%2B1928[censored]_O_388.wav, meaning this recording was taken on July 3, 2012 at 13:30:32, when +1928[censored] called extension 388. (A quick parsing sketch follows this list.)
    More file naming options are referenced in the Oreka User’s Manual.
  • <!– <TapeDurationMinimumSec>5</TapeDurationMinimumSec> –>
    I commented out this line, which would disregard calls with a duration of less than 5 seconds. Why? Because there are strange instances where the SIP call record is attached to a blank recording that contains the SIP INVITE message; the actual audio is recorded in another file with the callee or caller labeled as Unavailable. I have a ticket open with Shoretel about these calls. Until then, I keep all calls no matter their duration.
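
Since the filename now carries the timestamp, parties, and direction, it is easy to split back into fields when hunting for a call. Here is a minimal parsing sketch (my own helper, not part of Oreka; the example number is a placeholder) for the naming pattern above:

from datetime import datetime
from urllib.parse import unquote

def parse_recording_name(filename):
    """Split a file named like 20120703_133032_%2B1928xxxxxxx_O_388.wav into its fields."""
    stem = filename.rsplit(".", 1)[0]
    date_part, time_part, local, direction, remote = stem.split("_", 4)
    return {
        "start": datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S"),
        "localparty": unquote(local),   # %2B decodes back to '+'
        "shortdirection": direction,    # the direction flag Oreka writes (e.g. 'O' above)
        "remoteparty": remote,
    }

print(parse_recording_name("20120703_133032_%2B1928xxxxxxx_O_388.wav"))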

Now the VoIpPlugin Devices tag needs to be customized for your configuration. Open the file orkaudio.log with Notepad and look at the INFO log entries after the "Initializing VOIP plugin" event. You'll see a listing of available pcap devices. Copy the NIC that is connected to your Sniffer vSwitch and paste it into the config.xml file under the <Devices> tag.

Start –> Run services.msc and restart the OrkAudio service. Check the OrkAudio.log file for any initialization errors.

Make a test call

By default, the recordings are stored in a directory tree: c:\oreka\audio\[year]\[month]\[day]\[hour]\[filename].wav. While a call is being recorded, I noticed files being written with the MCF extension. This stands for multimedia container format; it holds the raw audio data as sniffed off the wire while the call is in progress. After the MCF file is complete, Oreka converts it (if possible) to a wav/gsm file (see config.xml) so that you can use Media Player to listen to the recording. If there are MCF files that were not converted to wav, one of three things has happened: you have DeleteNativeFile set to no in config.xml, the service stopped before the conversion took place, or the phone system you are recording is using a codec that Oreka doesn't know how to decode.
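
A quick way to spot the problem children is to look for MCF files that never got a converted sibling. A small sketch (my own, assuming the default c:\oreka\audio tree above and that converted files keep the same base name):

from pathlib import Path

def orphan_mcf_files(root=r"c:\oreka\audio"):
    """Return MCF files that have no converted .wav or .gsm file alongside them."""
    orphans = []
    for mcf in Path(root).rglob("*.mcf"):
        converted = [mcf.with_suffix(ext) for ext in (".wav", ".gsm")]
        if not any(p.exists() for p in converted):
            orphans.append(mcf)
    return orphans

for path in orphan_mcf_files():
    print("not converted:", path)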

One biggie for me is that Oreka cannot decode the G.729a or G.729 codecs, nor can it decode some of the clear-channel, high-quality Shoretel codecs. To disable these codecs on your Shoretel system, remove them from the Codecs List screen in your ShoreWare Director interface. The only ones I left enabled are BV16 and PCMU8000.


Document Retention vs. Disaster Recovery

Subject: Record Retention and Disaster Recovery & EPHI & PHI

Hello Ian and Ben,

HR would like to meet with the two of you and discuss [the] record retention program and the safeguards in place to restore data.

We would also like to discuss EPHI & PHI policies to ensure [we stay] in compliance.  It would be helpful to gain a better understanding of IT’s processes and procedures concerning data back-up and the access to sensitive personnel information.

Would you please suggest a day and time when we can meet for about half an hour?

Thanks.



Monitoring CPU Runaway Processes

Summary

My event log recorded cpu_runaway processes from SNMP on our virtualized Exchange server, Leka, this week.  Usually we correlate a cpu_runaway with a scheduled task such as a backup or system maintenance process.  The cause of these was elusive.

The cpu_runaway probe is a Dude function that looks for overall utilization over 25% for 30 seconds. I chose 25% as the threshold because our VMs have quad processors; one-quarter overall utilization means a single process could be in a runaway state. I researched the times the CPU runaways occurred and couldn't find anything abnormal. I started to set up Performance Monitor to record utilization to a log file, but I really wanted better resolution when the issue was actually happening.
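
One way to get that resolution is a small polling script that logs the top CPU consumers whenever total utilization crosses the same 25% threshold the probe uses. A rough sketch (assumes the third-party psutil package; the threshold and sample interval are my choices to match the probe, not anything built into The Dude):

from datetime import datetime

import psutil  # third-party: pip install psutil

THRESHOLD = 25.0   # percent, to match the cpu_runaway probe
INTERVAL = 5       # seconds per sample

while True:
    total = psutil.cpu_percent(interval=INTERVAL)
    if total >= THRESHOLD:
        # per-process cpu_percent is measured since the previous iteration
        procs = sorted(psutil.process_iter(["pid", "name", "cpu_percent"]),
                       key=lambda p: p.info["cpu_percent"] or 0.0, reverse=True)
        print(datetime.now(), "total CPU %.1f%%" % total)
        for proc in procs[:5]:
            print("   ", proc.info["pid"], proc.info["name"], proc.info["cpu_percent"])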

the probe

The event shows CPU Runaway. What now?



Script Monitoring

Script Monitoring

In Offsite Disaster Backups, I outlined three methods I use to move backup data to a backup machine with removable hard drives. Essentially, this backup system replaced our tapes almost entirely. The one thing I didn't like about it was the way each of the systems being backed up was logged. I used three methods: NTBackup, Robocopy, and Veeam.

  • The native NTBackup utility copied all of its log files to a directory and rotated the log file names. This was difficult to manage in its native form, so I put together a workaround script to copy the log files over to the backup media. The trouble with this workaround is that I had to manually go and clear the logs off the backup drive. This is not a significant problem, but it is a hassle.
  • The Robocopy scripts with the /MIR switch are pretty straightforward.  The log file path can be determined in the script which I usually put alongside the mirrored file structure on the backup disk.
  • Veeam e-mails me when it is done copying to the media.  The ghettovcb.sh script also logs to a file and sends e-mails to me saying that a backup is commencing and when it’s finished.

The real concern I have is coordinating all of these events and reading separate, individual log files scattered around the disk. Should something go wrong, how would I know without looking at the logs? Research HO!
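
One direction I'm leaning (a rough sketch only; the paths and keywords below are placeholders, not my actual log locations): sweep the known log directories once a night, flag anything that looks like a failure, and produce a single summary instead of three scattered logs.

import re
from pathlib import Path

# Placeholder locations for the NTBackup, Robocopy, and ghettovcb logs
LOG_DIRS = [r"e:\backup\ntbackup-logs", r"e:\backup\robocopy-logs"]
BAD = re.compile(r"error|fail|denied", re.IGNORECASE)

def summarize_logs(dirs=LOG_DIRS):
    """Return {log file: [suspicious lines]} across all backup logs."""
    problems = {}
    for d in dirs:
        for log in Path(d).glob("*.log"):
            # NTBackup writes Unicode logs; adjust the encoding as needed
            text = log.read_text(errors="ignore")
            hits = [line.strip() for line in text.splitlines() if BAD.search(line)]
            if hits:
                problems[str(log)] = hits
    return problems

for log, lines in summarize_logs().items():
    print(log)
    for line in lines[:10]:
        print("   ", line)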



IOPs, Bandwidth, Throughput…

Storage Contention

I was given a chart from a vendor showing the raw throughput of a disk array they suggested I purchase. The chart compared that array against a list of others in terms of throughput. In an attempt to objectify this storage contention idea that keeps getting thrown in my face, I looked up the definitions of IOPS, throughput, and bandwidth and their relationships in a storage environment. Here's what I learned:

  • IOPS = I/Os per second, roughly 1 / (average latency + average seek time, in seconds); it also depends on block size (a quick worked example follows this list)
  • Throughput = how much data the drive can actually move, largely a function of drive speed (7.2k, 10k, 15k RPM drives)
  • Bandwidth = the speed of the storage network or interface (3Gb, 6Gb, 12Gb SAS)
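
To put rough numbers on the IOPS formula (rule-of-thumb figures of my own, not from the vendor's chart): a 15k RPM drive averages about 2 ms of rotational latency and roughly 3.5 ms of seek time, while a 7.2k RPM drive is closer to 4.2 ms and 8.5 ms.

def rough_iops(rpm, avg_seek_ms):
    """Rule-of-thumb IOPS for a single spindle: 1 / (avg rotational latency + avg seek time)."""
    rotational_latency_ms = (60000.0 / rpm) / 2   # half a revolution, in milliseconds
    return 1000.0 / (rotational_latency_ms + avg_seek_ms)

print(round(rough_iops(15000, 3.5)))   # roughly 180 IOPS for a 15k SAS drive
print(round(rough_iops(7200, 8.5)))    # roughly 80 IOPS for a 7.2k nearline drive

Neither result looks anything like the 3Gb or 6Gb the interface is rated for, which is exactly the point of the analogy below.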

Being somewhat of a gear head, I see these terms correlating closely to the automotive concepts of horsepower, torque, and vehicle function. We Americans are known to purchase horsepower off the lot; however, we drive torque when choosing our vehicles. Likewise, many computer guys purchase throughput (10k vs. 15k SAS drives) when our applications actually "drive" IOPS. It is rare that a user's speed perception and end-user experience are based solely on throughput. A high-horsepower vehicle may under-perform at the green light when compared to a lower-horsepower vehicle with more low-end torque. So, why is it that salespeople and application vendors focus solely on drive speed (throughput) and storage network bandwidth?



Happenings – April Fools Week

Over the past couple of days I've been fighting text messages from Postini telling me that one of the Internet connections we use to transfer e-mail was flapping for about 10-90 seconds every night. It would happen almost like clockwork: exactly 12:38AM every morning. Thinking it was trouble with our ISP, I notified them of the issue. They didn't seem concerned but scheduled a software update and reboot on our equipment anyway.
The Monday following April 1st, at 8:36AM, the Exchange server lost connection with the rest of the network. Going into vSphere, Ben found the system unresponsive. Just as he was about to hit the virtual power button, the screen unfroze. Perusing the event log was uneventful. The only thing we could find of significance was a SCSI event about delayed writes to the disk. I was really worried about disk contention on the VM host, but this system has been running trouble-free for about a year, and the database has not grown either.
Regardless, I took the database offline and ran ESEUTIL to defrag it. Then I defragged the virtual disk. Viewing the logs after the Information Store came back online, I found event 9580:

“Virus scanning is enabled but diagnostic logging for ‘Virus Scanning’ category is turned off. To see diagnostic events related to virus scanning, increase logging level for ‘Virus Scanning’ category using Exchange System Manager.”

So, into the ESM I went to turn on virus debug logging. Now I’m getting the events delivered to The Dude and the event viewer. Come to find out, the McAfee GroupShield product was not working properly. We’re now on the phone with McAfee tech support to figure out what went wrong.