Interop Cisco Unified Call Manager with Microsoft Teams

A common topic of conversation with our Cisco Call Manager (“CUCM”) customers has been whether or not Microsoft Teams can act as a softphone for CUCM.

The answer is … sort of.

Let’s look at the process of getting this working. First and foremost, at the time of this writing, calling from CUCM to MS Teams directly is not supported by Cisco – so don’t expect to call TAC if you have problems. Best we can tell, Microsoft doesn’t seem to care what 3rd-party PBX you’re using as long as you’re using a supported Session Border Controller (SBC).

What Cisco does support is using their SBC – the Cisco Unified Border Element (“CUBE”) – as an intermediary between a PSTN provider (ex: a SIP carrier) and MS Teams (Microsoft refers to this as “Direct Routing”).

The CUBE/Microsoft configuration is documented here.

The document is quite a long read and somewhat difficult to re-type, but it generally works as directed. The supported CUBE models are an ISR4K or CSR1K running IOS XE 17.2.1r or IOS XE 17.3.2 (we built our installation on IOS XE 17.2.1r).

Getting Started


As mentioned, Cisco won’t support it, but there’s nothing stopping you from substituting CUCM for the “PSTN provider”. Our goal is to reproduce something as close to a “softphone experience” for CUCM as possible, using MS Teams as the “softphone”.
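
On the CUBE side, that substitution is just a matter of pointing dial-peers at CUCM instead of at the carrier. As a rough sketch (the tag, pattern, address, and codec here are hypothetical – the Teams-facing dial-peers come straight from the Microsoft document):

dial-peer voice 300 voip
 description CUCM-facing peer (takes the place of the PSTN carrier)
 destination-pattern 2...
 session protocol sipv2
 session target ipv4:10.10.10.10
 dtmf-relay rtp-nte
 codec g711ulaw
 no vad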


A few items that aren’t particularly obvious from the document are worth calling out:

  • Be sure to follow the certificate installation very carefully. While we’ve been deploying SIP TLS and SRTP for years, those deployments usually involved self-signed certs or pre-shared keys. We had our security CCIEs handle the cert installations, and even they had to do it a couple of times over to get it just as the document indicates. The document’s process is correct; just be mindful.
  • Copying the config out of the PDF is problematic. This may seem obvious, but it’s possible to break the SRTP negotiation in a way that is impossible to diagnose from debugs: parts of the SRTP negotiation are obfuscated in the debugs for security reasons, and you literally can’t see your errors.
  • This config is complicated enough that you’ll end up needing to do some debugging – the odds of getting it right on the first shot are slim. Where to start? OPTIONS pings.

Microsoft requires the CUBE to initiate the OPTIONS pings. If you’re configured correctly, Microsoft will respond, and your dial-peers will come up:

sbc#show dial-peer voice summary
<output omitted>

200   voip  up   up             map:200            1  syst dns:sip.pstnhub.micr          active     NA

201    voip  up   up             map:200            2  syst dns:sip2.pstnhub.mic          active     NA

202    voip  up   up             map:200            3  syst dns:sip3.pstnhub.mic          active     NA
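
For reference, the keepalive that brings these peers up is enabled per dial-peer through a keepalive profile. A minimal sketch – the profile tag and timer values here are illustrative, not lifted from the Microsoft document:

voice class sip-options-keepalive 200
 transport tcp tls
 up-interval 30
 retry 5
!
dial-peer voice 200 voip
 voice-class sip options-keepalive profile 200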

That’s great if you get that far, but virtually anything broken will prevent it. Keep in mind it’s not particularly simple to debug your own outbound OPTIONS pings: “debug ccsip messages” doesn’t show them, and moreover, these are SIP TLS encrypted packets, so a Wireshark capture is of limited help too. The troubleshooting is therefore a little hacky – you basically send OPTIONS pings and hope you get a response. If you do get a response, Microsoft will start sending OPTIONS pings back to your CUBE from that point on.

They look like this:

OPTIONS sip:<withheld>.com:5061;transport=tls SIP/2.0
FROM: <sip:sip-du-a-us.pstnhub.microsoft.com:5061>;tag=402f0e5b-097b-4b9c-b74e-e741c66a1d70
TO: <sip:<withheld>.com>
CSEQ: 1 OPTIONS
CALL-ID: 45441787-5ca6-4f06-81f1-7ae844bfb2e0
MAX-FORWARDS: 70
VIA: SIP/2.0/TLS 52.114.148.0:5061;branch=z9hG4bKab92aa27
CONTACT: <sip:sip-du-a-us.pstnhub.microsoft.com:5061>
CONTENT-LENGTH: 0
USER-AGENT: Microsoft.PSTNHub.SIPProxy v.2021.2.16.6 i.USWE2.4
ALLOW: INVITE,ACK,OPTIONS,CANCEL,BYE,NOTIFY

To reiterate: Microsoft will not send CUBE an OPTIONS ping until CUBE initiates OPTIONS pings.

Callout: Microsoft is very picky about the SBC model, particularly if you want to call Microsoft for help. Don’t expect to use anything that’s not on the supported list, and make sure your packets identify the SBC as the model you’re actually using.

I’m not going to dive much more into the SIP debugging process. The document is accurate and this all works, but you must have it spot-on. If you don’t have it spot-on, don’t expect debugs to be of much use – more often than not, things “just break” with little debug information.

The Challenge


This is a good spot to discuss the elephant in the room. While customers regularly ask us to use MS Teams as a softphone, it’s important to understand that MS Teams is its own phone system. It’s not an extension of CUCM. Teams retains its own extensions and dial plan; CUCM has its own extensions and dial plan. You’re effectively tying two phone systems together toward a common goal, rather than adopting MS Teams as a CUCM client. The headache should be fairly obvious: for each user maintained in CUCM, a duplicate needs to be created in Teams; for every user removal, a duplicate deletion needs to take place; and so on. It’s not particularly simple to maintain, and it’s very easy for the two to get out of sync.

SIDENOTE: What are others doing?
Most SBC integrations between MS Teams and CUCM rely on the SBC forking inbound calls to both systems simultaneously. The first one to answer (be it a desk phone on CUCM or a softphone on MS Teams) “gets” the call, and the SBC cancels the leg to the system that didn’t pick up. We find this answer inferior because it provides no single point of truth – either phone system may or may not have a log of the call (no CDR) or cradle-to-grave ownership of it (no option to provide alternate call treatment such as forwarding on no-answer or sending to a call handler). Our goal is instead to have CUCM own the call cradle-to-grave and treat MS Teams as a softphone solution.

The challenge we faced from here is how to make MS Teams as “softphone-like” as possible. A few items to cover included:

  • Different extensions on different devices are an unnecessary inconvenience for calling users.
  • Call Forward No Answer adds a delay when calls are never answered at the first device, and offers little to no control over where voice messages are left.
  • Requiring users to set Call Forward All when working remotely is only a workaround – not a good practice that we should ever consider a permanent solution to a business problem.

We want a call to a single extension to ring a device registered to CUCM and a device registered to MS Teams every time, and to give users the choice of which device(s) to use and when.

However, when an extension is dialed, CUCM needs to either ring an internal phone, OR route a call outbound – not both.

The Solution


Single Number Reach: Single Number Reach (SNR) is a feature built into CUCM that allows us to “split” a call. When a call rings a CUCM extension with SNR enabled, the call rings that extension and splits to ring a remote destination at the same time. The first device to accept the call takes it, and a cancellation message is sent to the other destination.

We can adjust timers to control when a call starts and stops ringing the remote destination, preventing scenarios such as a dead cell phone instantly “answering” a call because it rolls straight to carrier voicemail.

We can force the remote destination to stop ringing before it forwards on ring-no-answer to its own voicemail, so we control which mailbox calls are left in.

We can also use Class of Control to isolate the Route Pattern to MS Teams in its own Partition that is only reachable by the Remote Destination profile. This means the MS Teams extension can be the same as the user’s CUCM extension, so both destinations can ring on the same number without the worry of overlapping numbers.
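
As an illustration of that Class of Control layout (all names and the pattern are hypothetical):

Partition PT-Teams:    contains only the Route Pattern (e.g. 2XXX) pointing at the SIP trunk toward CUBE/Teams
CSS CSS-Reroute:       contains PT-Teams; assigned as the Rerouting CSS on the Remote Destination profile
Phone line CSS:        does not contain PT-Teams, so a phone dialing 2XXX matches the local DN instead

Because only the Rerouting CSS can see PT-Teams, the overlapping Teams route is invisible to everything else in the cluster.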

Advantages over splitting on the SBC

Because the routing/split happens on CUCM at the user level rather than on CUBE based on dial-peers, we see a few distinct advantages.

  1. Configuration is easier, and can be limited to only the users/numbers that need calls split, without additional headache.
  2. The feature can be enabled for certain days/times.
  3. The feature can be enabled/disabled by end users.
  4. Reporting, CDR, and billing remain accurate for any application reporting on calls handled by CUCM.

The configuration

  • First, we identify the user we want to configure for Single Number Reach, access their End User profile, and enable Mobility.
  • Then we configure a Remote Destination profile for the user. This is what ties together the user information, the DN whose ringing triggers the split, and the remote destinations to split the call to.

Key configuration is the User ID of the user and the Rerouting CSS, which is the CSS used to reach the remote destination.

  • Then we select “Add a new DN”, and select the DN and Partition of the line that we want to trigger the split when it rings.
  • When done, we go back to the Remote Destination profile configuration and select “Add a New Remote Destination”. Here is where we configure the MS Teams extension that we want to ring when the user’s internal extension rings.

Key configuration is the Destination – the user’s DN on MS Teams – and enabling “Enable Unified Mobility features” and “Enable Single Number Reach”. Additionally, you can enable “Enable Move to Mobile”, which allows a user to press the “Mobility” button while on a call on their internal device and transfer the call to their MS Teams extension.

At the bottom of the screen are the timers that control when the remote destination should start and stop ringing. These can be adjusted as needed to make sure calls don’t instantly get answered on the remote end if that device immediately sends calls to voicemail, or to make the remote destination stop ringing before the call reaches the remote destination’s voicemail (or after, if we WANT voicemails left at the remote end).

Below this configuration we have the option to configure time of day routing for Single Number Reach if desired.

After we click save, we will be returned to the Remote Destination configuration page. In the top left, be sure to click “Line Association” next to the DN you want to trigger this Remote Destination, and click save.

And that’s it for the core configuration. At this point, when the desired extension rings, the call will be split and also ring the remote destination that was configured.

Optionally, we can modify the SoftKey template on the user’s phone to give them additional control, for example:

  • If we add the “Mobility” Soft Key when the phone is “Connected”, the user can press the “Mobility” button to send the call to the remote destination without a manual transfer. Note that this only works if the “Enable Move to Mobile” box is checked in the Remote Destination Profile configuration as noted above.
  • If we add the “Mobility” Soft Key when the phone is “OnHook”, the user can press the “Mobility” button to manually enable/disable Single Number Reach from their own device. When this is executed from the phone, it toggles the “Enable Single Number Reach” checkbox in the Remote Destination Profile.

In conclusion: it’s possible to get “close to softphone” functionality from MS Teams by using it with CUCM. This is accomplished by using Direct Routing between MS Teams and CUBE, with CUCM in place of the PSTN carrier, and then using SNR on CUCM. However, the drawback is having to maintain two separate PBXes simultaneously.

Co-authored by Jeff Kronlage and Nick Finch

Thinking Outside the Box with Cisco DNA Center

What other applications does DNA have?

Cisco’s DNA Center appliance is generally talked about in the context of SD-Access (SDA), but SDA is a complex technology that involves significant planning and re-architecture to deploy.  DNA Center is not just SDA, though – it has multiple features that can be used on day 1 to cut down on administrative tasks and reduce the likelihood of errors or omissions.  From conversations with our customers, the most asked-for capability is software image management and automatic deployment, and that is something DNA Center handles extremely well compared to many other solutions out there.

Wait…I can manage software updates with DNA?

Managing software on network devices can be a substantial time burden, especially in businesses with significant compliance requirements mandating regular software updates.  Add to this the increasing size of network device images – pretty much all the major switch and router vendors’ products now have image sizes from hundreds of megabytes up to several gigabytes – and software management can take up a significant chunk of an IT department’s time.  One of our customers is interested in DNA Center for this specific purpose: with 500+ switches, automating software deployment saves several weeks of engineer time over the course of a year.

That may leave you asking…

So, what devices can I manage? 

DNA Center can manage software for any current production Cisco router, switch, or wireless controller.  Additionally, some previous-generation hardware is also supported.  Of this hardware, the Catalyst 2960-X and 2960-XR switches as well as the Catalyst 3650/3850 switches are the most commonly used with DNA Center.  Now let’s talk about how DNA Center does this.

Neat! Now, tell me how to do it. 

First, be sure that every device you want to manage is imported into DNA Center.  Once that’s done, the image repository screen will automatically populate itself with available software image versions by device type.


From here, select the device family to see details.  Once you’ve decided on the version you want to use, click on the star icon, and DNAC will mark that as the golden image (aka the image you want to deploy).  If not already present on the appliance, the image will also be downloaded.

Next, go to Provision > Network Devices > Inventory to start the update process.  From here, select the devices you want to update, then click on Actions > Software Image > Update Image.  You’ll be given the option to either distribute the new images immediately or to wait until a specific time to start the process.  Different upgrade jobs can be configured for different device groups as well.

Here, I’ve set DNAC to distribute images on Saturday the 19th at 1pm local time for all my sites.  This process is just the file copy, so no changes are made to the devices at this time.  The file copy process is tolerant of slow WAN connections – we’ve tested it in our lab and found it’ll happily work even over a 64k connection (though it’ll take quite a while) – but poor-quality connections will cause it to fail.  Finally, once the image is copied to the target devices, a hash check is performed to ensure the image hasn’t been corrupted.

The next step is to activate the image.  Activation here means ‘install the image and reboot the device’.

Like the distribution process, DNAC can either install immediately or wait until a scheduled time.  Note that for IOS XE devices, this process does a full install of the image rather than just copying the .bin file over.  Once the software activation is complete, the devices will show their status in the inventory screen.  As you can see, DNA Center’s software image management capability can save substantial time when updating software, as well as ensuring that no devices fail to receive updates through error or omission.

Prepared by: Chris Crotteau

Application Hosting and Guest Shell on Catalyst 9000

Two of the lesser-known yet extremely useful features present in the Catalyst 9000 and many other route/switch products in Cisco’s lineup are Guest Shell and Application Hosting. Both of these features rely on Cisco’s use of Linux as an underpinning of their various network OSes and on the use of x86 CPUs in the devices themselves. As the Catalyst 9000 switches are the most common and accessible, we’ll focus on this platform for now.

Guest Shell

Guest shell allows the switch operator access to two alternate CLI environments – a Bash shell and a Python 3 interpreter. From these environments, scripts can be written and executed. Further, IOS features like EEM can call Python or Bash scripts, and conversely, these scripts can call into the IOS CLI or the NETCONF API to allow for a significant boost in automation capability.
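
As a quick taste of the workflow, here’s a minimal sketch on a Cat9k (IOx must be enabled before guest shell can start; the “cli” module is the Cisco-provided Python bridge available inside guest shell):

Switch(config)# iox
Switch(config)# end
Switch# guestshell enable
Switch# guestshell run python3
>>> import cli                                  # Cisco-provided module inside guest shell
>>> print(cli.cli('show ip interface brief'))   # run an IOS command from Python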

Application Hosting

Application hosting is the next step beyond guest shell – the ability to install and run 3rd-party Linux software packages directly on the device. Application hosting does differ a bit in implementation based on platform, with ISR routers, Cat9k switches, NX-OS switches, and IOS XR routers all somewhat different. However, the result is the same – one can run 3rd-party Linux applications directly on networking hardware. On the Catalyst 9000 series switches, application hosting uses Docker containers exclusively.

One concern regarding this is the ability of an application to behave in such a way as to impair the functionality of the networking device. Cisco has put hard limits on what CPU and memory resources can be used for application hosting (which varies by platform), so that misbehaving applications cannot consume enough resources to compromise the device’s functionality.
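
On the Cat9k, the resources allocated to application hosting can be checked directly from the CLI (the exact output varies by platform):

Switch# show app-hosting resource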

Hold on – what’s a container?

One question that comes up regarding this is what a container is, and how it compares to server virtualization as done by VMware/KVM/Hyper-V/etc. While a detailed treatment of virtualization and containers is out of scope here, we can go over the basic details.

In a more traditional virtualization environment, the key bit of technology is a hypervisor – a dedicated OS whose job is to abstract the host’s hardware and provide the plumbing needed to run multiple OSes on a single server. On the other hand, a system built to run containers runs only one OS – what we segment is the user space of the OS. This allows applications to share the same host and OS, but without the ability for apps in one container to interact with another except through the network.

Containers lack some flexibility compared to a full VM, but they do have a number of strong points. The primary benefit of a container is its reduced resource requirements. In a fully virtualized environment, the use of a hypervisor exacts a performance penalty: the hypervisor must translate client OS calls to the host hardware, manage contention for hardware resources, and handle networking for the clients. Additionally, there is the overhead of running the client OSes themselves. With a container, since there is no hypervisor in place, only one OS running, and segmentation done within the OS user space, application performance is often very close to bare metal.

Getting Started with iPerf3

For this post, we will be using iPerf3 as our example software. iPerf3 is a very common tool used to measure network performance – it supports a number of options regarding traffic type and total throughput that are useful in identifying network performance issues. Being able to use a switch as an always-on iPerf server provides a standard location to run performance tests against and eliminates the need to stand up a laptop or other device for the purpose.


Before we get started, there are a few prerequisites to be aware of. Here’s what to know:

  1. Application hosting is not supported on the Cat9200 series switches. All other Cat9k switches do support application hosting.
  2. A dedicated SSD is needed to store and run the applications. On the Cat9300, the SSD is installed in a dedicated USB slot on the back of the stack master switch, while on the Cat9400 and 9600, an internal SSD is needed on each supervisor.
  3. You must be running IOS XE 16.12.1 or later for application hosting to be available.

Next, the container will need network access. For this, we have two options – bridge the container to the management port, or to the front panel ports. In the former case, the container will need an IP address in the same subnet as the management interface and will only be accessible from the physical management port. For the front panel ports, a container can be bridged to a single VLAN, or set up as a trunk for access to multiple VLANs. In this case, the container would not be accessible from the management port.

Once we have decided on network access, if the application you want to run is already available in container form, just download it to the switch SSD and begin configuring the switch. Otherwise, you’ll need a Linux system to build the container. For this, use Docker to pull the container and save it as a .tar file like so:

LinuxPC$ docker pull mlabbe/iperf3
LinuxPC$ docker save mlabbe/iperf3 > iperf3.tar

Now, onto configuration!

First, create an app instance in the switch CLI:
Switch(config)# app-hosting appid iperf3

Next, choose whether to use the management port or the front panel ports, and give the container an IP address and its gateway. This example uses the management port:

Switch(config-app-hosting)# app-vnic management guest-interface 0
Switch(config-app-hosting-mgmt)# guest-ipaddress 192.168.0.200 netmask 255.255.255.0
Switch(config-app-hosting-mgmt)# exit
Switch(config-app-hosting)# app-default-gateway 192.168.0.1 guest-interface 0
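
Alternatively, for front panel port access, the container attaches through the AppGigabitEthernet interface instead. A sketch in config form, with an illustrative VLAN and addressing:

interface AppGigabitEthernet1/0/1
 switchport mode trunk
!
app-hosting appid iperf3
 app-vnic AppGigabitEthernet trunk
  vlan 10 guest-interface 0
   guest-ipaddress 192.168.10.200 netmask 255.255.255.0
 app-default-gateway 192.168.10.1 guest-interface 0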

Finally, we’ll want to pass some parameters to iperf3 – specifically to run as a server, which ports to listen on, and to always restart unless manually stopped.
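
A minimal sketch of that step, assuming the container honors standard Docker run options (the flags are illustrative; 5201 is iPerf3’s default port, and the image is assumed to start in server mode):

Switch(config)# app-hosting appid iperf3
Switch(config-app-hosting)# app-resource docker
Switch(config-app-hosting-docker)# run-opts 1 "-p 5201:5201/tcp -p 5201:5201/udp --restart unless-stopped"

With the parameters in place, install, activate, and start the app: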

Switch# app-hosting install appid iperf3 package usbflash1:iperf3.tar
Switch# app-hosting activate appid iperf3
Switch# app-hosting start appid iperf3

Now iPerf3 is up and running as a server on the switch!
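
From any machine with iperf3 installed, a quick test against the switch might look like this (using the management address from the example above):

LinuxPC$ iperf3 -c 192.168.0.200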

This is just one small example of how application hosting can be used to build additional functionality into the switch. The capabilities of this feature are really limited only by system resources.

By: Chris Crotteau