Application Hosting and Guest Shell on Catalyst 9000

Two of the lesser-known yet extremely useful features present in the Catalyst 9000 and many other routing and switching products in Cisco’s lineup are Guest Shell and Application Hosting. Both features build on Cisco’s use of Linux as the foundation of its various network OSes, as well as its use of x86 CPUs in these devices. Since the Catalyst 9000 switches are the most common and accessible, we’ll focus on that platform for now.

Guest Shell

Guest Shell allows the switch operator access to two alternate CLI environments – a Bash shell and a Python 3 interpreter. From these environments, scripts can be written and executed. Further, IOS features like EEM can call Python or Bash scripts, and conversely, those scripts can call into the IOS CLI or the NETCONF API, providing a significant boost in automation capability.
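
As a quick illustration of how the pieces fit together – a sketch only, since prompts and module details vary a bit by IOS XE release – Guest Shell is enabled from the IOS XE CLI (it relies on the IOx service), and the Python 3 environment includes a cli module that scripts can use to run show commands and push configuration. The interface name below is just a placeholder; guestshell run bash drops you into Bash instead of Python:

Switch(config)# iox
Switch(config)# end
Switch# guestshell enable
Switch# guestshell run python3
>>> from cli import cli, configure
>>> # run a show command from inside Guest Shell
>>> print(cli('show version | include uptime'))
>>> # push a small configuration change back into IOS XE
>>> configure(['interface GigabitEthernet1/0/1', 'description set from Guest Shell'])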

Application Hosting

Application hosting is the next step beyond Guest Shell – the ability to install and run third-party Linux software packages directly on the device. The implementation differs a bit by platform, with ISR routers, Cat9k switches, NX-OS switches, and IOS XR routers all handling it somewhat differently. However, the result is the same – you can run third-party Linux applications directly on networking hardware. On the Catalyst 9000 series switches, application hosting uses Docker containers exclusively.

One concern with this is the possibility of an application behaving in a way that impairs the functionality of the networking device. Cisco has put hard limits on the CPU and memory resources that can be used for application hosting (the limits vary by platform), so that misbehaving applications cannot consume enough resources to compromise the device’s functionality.
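
You can see what the platform makes available and, if needed, request specific resources for an application. The commands below are a sketch only – the application name is a placeholder, the prompts are approximate, and the custom-profile options and units available depend on the platform and IOS XE release:

Switch# show app-hosting resource
Switch(config)# app-hosting appid myapp
Switch(config-app-hosting)# ! "myapp" is a placeholder name; cpu is in CPU units, memory in MB
Switch(config-app-hosting)# app-resource profile custom
Switch(config-app-hosting-resource-profile-custom)# cpu 1000
Switch(config-app-hosting-resource-profile-custom)# memory 512
Switch(config-app-hosting-resource-profile-custom)# vcpu 1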

Hold on – what’s a container?

One question that comes up regarding this is what a container actually is, and how it compares to server virtualization as done by VMware, KVM, Hyper-V, and the like. While a detailed treatment of virtualization and containers is out of scope here, we can go over the basics.

In a more traditional virtualization environment, the key piece of technology is the hypervisor – a dedicated OS whose job is to abstract the host’s hardware and provide the plumbing needed to run multiple OSes on a single server. A system built to run containers, on the other hand, runs only one OS – what gets segmented is the OS’s user space. This allows applications to share the same host and OS, but apps in one container cannot interact with apps in another except through the network.

Containers lack some flexibility compared to a full VM, but they have a number of strong points, the primary one being reduced resource requirements. In a fully virtualized environment, the use of a hypervisor exacts a performance penalty: the hypervisor must translate guest OS calls to the host hardware, manage contention for hardware resources, and handle networking for the guests, and on top of that there is the overhead of running the guest OSes themselves. With a container, since there is no hypervisor, only one OS running, and segmentation is done within the OS user space, application performance is often very close to bare metal.

Getting Started with iPerf3

For this post, we will be using iPerf3 as our example software. iPerf3 is a very common tool used to measure network performance – it supports a number of options regarding traffic type and total throughput that are useful in identifying network performance issues. Being able to use a switch as an always-on iPerf server provides a standard location to run performance tests against and eliminates the need to stand up a laptop or other device as an iPerf server.


Before we get started, there are a few prerequisites to be aware of:

  1. Application hosting is not supported on the Cat9200 series switches. All other Cat9k switches do support application hosting.
  2. A dedicated SSD is needed to store and run the applications. On the Cat9300, the SSD is installed in a dedicated USB slot on the back of the stack master switch, while on the Cat9400 and 9600, an internal SSD is needed on each supervisor.
  3. You must be running IOS XE 16.12.1 or later for application hosting to be available.
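
A quick way to check the software version and confirm the switch can see the SSD is sketched below. On a Cat9300, the back-panel SSD typically shows up as usbflash1:; the filesystem name and output format differ on other platforms:

Switch# show version | include IOS XE
Switch# dir usbflash1: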

Next, the container will need network access. For this, we have two options – bridge the container to the management port, or to the front-panel ports. In the former case, the container will need an IP address in the same subnet as the management interface and will only be reachable through the physical management port. With the front-panel ports, a container can be bridged to a single VLAN or set up as a trunk for access to multiple VLANs; in that case, the container would not be reachable from the management port.

Once you have decided on network access, if the application you want to run is already available as a container image, just download it to the switch SSD and begin configuring the switch. Otherwise, you’ll need a Linux system with Docker to pull (or build) the image and save it as a .tar file like so:

LinuxPC$ docker pull mlabbe/iperf3
LinuxPC$ docker save mlabbe/iperf3 > iperf3.tar
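
Once the image is saved, the .tar file needs to be copied onto the switch SSD. Any of the usual file transfer methods will do; as one example (a sketch – the TFTP server address here is a placeholder):

Switch# copy tftp://192.168.0.50/iperf3.tar usbflash1: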

Now, onto configuration!
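
One step worth calling out, because it is easy to miss: the IOx service must be enabled before any of the app-hosting configuration below will take effect, just as it was for Guest Shell. A minimal sketch of enabling it and confirming the services are up:

Switch(config)# iox
Switch(config)# end
Switch# show iox-service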

First, create an app instance in the switch CLI:

Switch(config)# app-hosting appid iperf3

Next, choose whether to use the management port or the front panel ports, and give the container an IP address and its gateway. This example uses the management port:

Switch(config-app-hosting)# app-vnic management guest-interface 0
Switch(config-app-hosting-mgmt)# guest-ipaddress 192.168.0.200 netmask 255.255.255.0
Switch(config-app-hosting-mgmt)# exit
Switch(config-app-hosting)# app-default-gateway 192.168.0.1 guest-interface 0
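
If you would rather use the front-panel ports, the container attaches through the switch’s AppGigabitEthernet interface instead of the management port. The following is only a rough sketch – the interface number, VLAN, addressing, and exact prompts are placeholders and vary by platform and release; a default gateway is then configured the same way as in the management example above:

Switch(config)# interface AppGigabitEthernet 1/0/1
Switch(config-if)# ! interface number, VLAN, and IP addressing below are placeholders
Switch(config-if)# switchport mode trunk
Switch(config-if)# exit
Switch(config)# app-hosting appid iperf3
Switch(config-app-hosting)# app-vnic AppGigabitEthernet trunk
Switch(config-app-hosting-trunk)# vlan 10 guest-interface 0
Switch(config-app-hosting-vlan)# guest-ipaddress 10.1.10.200 netmask 255.255.255.0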

Finally, we’ll want to pass some parameters to iPerf3: specifically, to run as a server, which port to listen on, and to restart automatically unless manually stopped.
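
One way to do this is with Docker run options under the app-hosting configuration. The sketch below is an example only – the arguments are assumptions that presume the image runs iperf3 as its entrypoint (so extra arguments are passed straight to it), and support for these options depends on your IOS XE release:

Switch(config)# app-hosting appid iperf3
Switch(config-app-hosting)# app-resource docker
Switch(config-app-hosting-docker)# ! run options below are assumptions for this particular image
Switch(config-app-hosting-docker)# run-opts 1 "--restart=unless-stopped"
Switch(config-app-hosting-docker)# run-opts 2 "-s -p 5201"
Switch(config-app-hosting-docker)# prepend-pkg-opts
Switch(config-app-hosting-docker)# end

With the parameters in place, install the container package from the SSD, then activate and start the application from the exec prompt: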

Switch# app-hosting install appid iperf3 package usbflash1:iperf3.tar
Switch# app-hosting activate appid iperf3
Switch# app-hosting start appid iperf3

Now iPerf3 is up and running as a server on the switch!
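
To verify, check the application’s state on the switch, then run a quick throughput test from any machine with the iperf3 client installed (the server address here matches the management-port example above):

Switch# show app-hosting list

LinuxPC$ iperf3 -c 192.168.0.200 -p 5201 -t 10

The iperf3 app should show a state of RUNNING, and the client should report throughput to the switch-hosted server.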

This is just one small example of how application hosting can be used to build additional functionality into the switch. The capabilities of this feature are really limited only by system resources.

By: Chris Crotteau