Thinking Outside the Box with Cisco DNA Center

What other applications does DNA have?

Cisco’s DNA Center appliance is generally discussed in the context of SD-Access (SDA), but SDA is a complex technology that involves significant planning and re-architecture to deploy.  DNA Center is not just SDA, though – it has multiple features that can be used on day 1 to cut down on administrative tasks and reduce the likelihood of errors or omissions.  From conversations with our customers, the most requested capability is software image management and automated deployment, and that is something DNA Center handles extremely well compared to many other solutions on the market.

Wait…I can manage software updates with DNA?

Managing software on network devices can be a substantial time burden, especially in businesses with compliance requirements that mandate regular software updates.  Add to this the increasing size of network device images – pretty much all the major switch and router vendors’ products now ship images ranging from hundreds of megabytes to several gigabytes – and software management can take up a significant chunk of an IT department’s time.  One of our customers is interested in DNA Center for this specific purpose: with 500+ switches, automating software deployment saves several weeks of engineer time over the course of a year.

That may leave you asking…

So, what devices can I manage? 

DNA Center can manage software for any current production Cisco router, switch, or wireless controller, and some previous-generation hardware is supported as well.  Of that hardware, the Catalyst 2960-X and 2960-XR switches as well as the Catalyst 3650/3850 switches are the most commonly used with DNA Center. Now let’s talk about how DNA Center does this.

Neat! Now, tell me how to do it. 

First, be sure that every device you want to manage is imported into DNA Center.  Once that’s done, the image repository screen will automatically populate itself with available software image versions by device type.

From here, select the device family to see details.  Once you’ve decided on the version you want to use, click the star icon, and DNAC will mark it as the golden image (i.e., the image you want to deploy).  If the image isn’t already present on the appliance, it will also be downloaded.

Next, go to Provision > Network Devices > Inventory to start the update process.  From here, select the devices you want to update, then click on Actions > Software Image > Update Image.  You’ll be given the option to either distribute the new images immediately or to wait until a specific time to start the process.  Different upgrade jobs can be configured for different device groups as well.

Here, I’ve set DNAC to distribute images on Saturday the 19th at 1pm local time for all my sites.  This step is just the file copy, so no changes are made to the devices at this time.  The copy process is tolerant of slow WAN connections – we’ve tested it in our lab and found it will happily work even over a 64k link (though it will take quite a while) – but poor-quality connectivity will cause it to fail.  Finally, once the image is copied to a target device, a hash check is performed to ensure the image hasn’t been corrupted in transit.
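DNA Center runs that hash check for you, but the same kind of check can be done by hand on a device with IOS’s built-in verify command.  A minimal example – the filename here is just a placeholder for whatever image was distributed:

Switch# verify /md5 flash:cat9k_iosxe.17.03.04.SPA.bin

The resulting MD5 can then be compared against the checksum published on Cisco’s software download page.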

The next step is to activate the image.  Activation here means ‘install the image and reboot the device’.

Like the distribution process, DNAC can either install immediately or wait until a scheduled time.  Note that for IOS XE devices, this process performs a full install-mode installation of the image rather than just copying the .bin file over and booting it.  Once the software activation is complete, the devices will show their updated status in the inventory screen. As you can see, DNA Center’s software image management capability can save substantial time when updating software and ensure that no devices miss an update through error or omission.
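For reference, the activation step DNA Center performs on an IOS XE device corresponds roughly to running the install workflow by hand; a sketch of the manual version (the filename is again a placeholder):

Switch# install add file flash:cat9k_iosxe.17.03.04.SPA.bin activate commit

After the device reloads, show install summary on the device (or the inventory screen in DNAC) confirms which version is active and committed.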

Prepared by: Chris Crotteau

Application Hosting and Guest Shell on Catalyst 9000

Two of the lesser-known yet extremely useful features present in the Catalyst 9000 and many other route/switch products in Cisco’s lineup are Guest Shell and Application Hosting. Both features rely on Cisco’s use of Linux as the underpinning of its various network OSes and on the x86 CPUs in these devices. As the Catalyst 9000 switches are the most common and accessible, we’ll focus on that platform for now.

Guest Shell

Guest Shell gives the switch operator access to two alternate CLI environments – a Bash shell and a Python 3 interpreter. From these environments, scripts can be written and executed. Further, IOS features like the Embedded Event Manager (EEM) can call Python or Bash scripts, and conversely, these scripts can call into the IOS CLI or the NETCONF API, providing a significant boost in automation capability.
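As a quick taste, here’s what calling back into IOS from the Guest Shell Python interpreter looks like, assuming IOX and Guest Shell have already been enabled (iox in global configuration, then guestshell enable from exec mode); the interface and description used here are just examples:

Switch# guestshell run python3
>>> import cli
>>> print(cli.cli('show clock'))
>>> cli.configure(['interface GigabitEthernet1/0/10', 'description set from guest shell'])
>>> exit()

The same cli module can be imported by standalone scripts stored in the guest shell, which is what makes the EEM-to-Python hand-off so useful.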

Application Hosting

Application hosting is the next step beyond Guest Shell – the ability to install and run third-party Linux software packages directly on the device. The implementation differs a bit by platform, with ISR routers, Cat9k switches, NX-OS switches, and IOS XR routers each doing it somewhat differently, but the result is the same: third-party Linux applications running directly on networking hardware. On the Catalyst 9000 series switches, application hosting uses Docker containers exclusively.

One obvious concern is that an application could behave in a way that impairs the functionality of the networking device itself. To prevent this, Cisco has put hard limits on the CPU and memory resources available to application hosting (the limits vary by platform), so a misbehaving application cannot consume enough resources to compromise the device’s functionality.
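You can see how much headroom a given switch exposes, and optionally cap an individual application below the platform limits. A brief sketch – the numbers are illustrative rather than recommendations, and the resource profile is shown as it would appear in the running configuration:

Switch# show app-hosting resource

app-hosting appid iperf3
 app-resource profile custom
  cpu 1000
  memory 256

Here cpu is measured in CPU units and memory in megabytes; the platform defaults are fine for most small applications.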

Hold on – what’s a container?

One question that comes up regarding this is what a container is, and how it compares to server virtualization as done by VMware/KVM/Hyper-V/etc. While a detailed treatment of virtualization and containers is out of scope here, we can go over the basics.

In a more traditional virtualization environment, the key piece of technology is the hypervisor – a dedicated OS whose job is to abstract the host’s hardware and provide the plumbing needed to run multiple OSes on a single server. A system built to run containers, on the other hand, runs only one OS – what gets segmented is the OS’s user space. This allows applications to share the same host and OS, but without the ability for apps in one container to interact with another except over the network.

Containers lack some flexibility compared to a full VM, but they have a number of strong points, the primary one being reduced resource requirements. In a fully virtualized environment, the use of a hypervisor exacts a performance penalty: the hypervisor must translate guest OS calls to the host hardware, manage contention for hardware resources, and handle networking for the guests, and on top of that there is the overhead of running the guest OSes themselves. With a container, since there is no hypervisor, only one OS running, and segmentation done within the OS user space, application performance is often very close to bare metal.

Getting Started with iPerf3

For this post, we will be using iPerf3 as our example software. iPerf3 is a very common tool used to measure network performance – it supports a number of options regarding traffic type and total throughput that are useful in identifying network performance issues. Being able to use a switch as an always-on iPerf server provides a standard location to run performance tests against and eliminates the need to stand up a laptop or other device as an iPerf server.


Before we get started, there are a few prerequisites to be aware of (a quick way to check them on a switch follows the list):

  1. Application hosting is not supported on the Catalyst 9200 series switches; all other Cat9k models do support it.
  2. A dedicated SSD is needed to store and run the applications. On the Cat9300, the SSD is installed in a dedicated USB slot on the back of the stack master switch, while on the Cat9400 and 9600, an internal SSD is needed on each supervisor.
  3. You must be running IOS XE 16.12.1 or later for application hosting to be available.
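Here’s a sketch of how you might confirm those prerequisites on a Cat9300 and enable the IOx framework that application hosting relies on – usbflash1: is where the rear-panel SSD typically shows up, and output is omitted:

Switch# show version | include IOS XE
Switch# dir usbflash1:
Switch# configure terminal
Switch(config)# iox
Switch(config)# end
Switch# show iox-service

The iox command enables the hosting framework itself; show iox-service should report the IOx services as running before you go any further.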

Next, the container will need network access. For this, we have two options – bridge the container to the management port, or to the front panel ports. In the former case, the container will need an IP address in the same subnet as the management interface and will only be accessible from the physical management port. For the front panel ports, a container can be bridged to a single VLAN or set up as a trunk for access to multiple VLANs; in this case, the container would not be accessible from the management port.

Once you’ve decided on network access, if the application you want to run is already available in container form, just download it to the switch SSD and begin configuring the switch. Otherwise, you’ll need a Linux system with Docker to pull the image and save it as a .tar file, like so:

LinuxPC$ docker pull mlabbe/iperf3
LinuxPC$ docker save mlabbe/iperf3 > iperf3.tar

Now, onto configuration!

First, create an app instance in the switch CLI:
Switch(config)# app-hosting appid iperf3

Next, choose whether to use the management port or the front panel ports, and give the container an IP address and its gateway. This example uses the management port:

Switch(config-app-hosting)# app-vnic management guest-interface 0
Switch(config-app-hosting-mgmt)# guest-ipaddress 192.168.0.200 netmask 255.255.255.0
Switch(config-app-hosting-mgmt)# exit
Switch(config-app-hosting)# app-default-gateway 192.168.0.1 guest-interface 0
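If you’d rather bridge to the front panel ports instead, the container attaches through an AppGigabitEthernet interface and a VLAN of your choosing. Here’s a rough sketch of what that looks like in the running configuration – VLAN 10 and the addresses are just examples:

interface AppGigabitEthernet1/0/1
 switchport mode trunk
!
app-hosting appid iperf3
 app-vnic AppGigabitEthernet trunk
  vlan 10 guest-interface 0
   guest-ipaddress 10.1.10.50 netmask 255.255.255.0
 app-default-gateway 10.1.10.1 guest-interface 0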

Finally, we’ll want to set some runtime options for iperf3 – most importantly, having the container restart automatically unless it’s manually stopped – and then bring the application up.
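Runtime behavior like the restart policy is passed through as Docker run options under the app’s configuration. A minimal sketch – whether you also need options for server mode or listening ports depends on how the container image is built, so treat these as placeholders:

Switch(config)# app-hosting appid iperf3
Switch(config-app-hosting)# app-resource docker
Switch(config-app-hosting-docker)# run-opts 1 "--restart=unless-stopped"

With the network and run options in place, install, activate, and start the application from exec mode: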

Switch# app-hosting install appid iperf3 package usbflash1:iperf3.tar
Switch# app-hosting activate appid iperf3
Switch# app-hosting start appid iperf3

Now iPerf3 is up and running as a server on the switch!
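To confirm the app is healthy and actually measure something, check its state on the switch and then point an iPerf3 client at the container’s address (192.168.0.200 matches the management-port example above):

Switch# show app-hosting list
LinuxPC$ iperf3 -c 192.168.0.200

show app-hosting list should report the app as RUNNING, and the client run will print the measured throughput to the switch.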

This is just one small example of how application hosting can be used to build additional functionality into the switch. The capabilities of this feature are really limited only by system resources.

By: Chris Crotteau