Umbrella SIG – Your Field Guide to Protecting the Remote User

The Problem

Here’s the situation – you have a group of users who work a hybrid schedule.  One of the users, while working at home, picks up malware from a compromised website.  That malware-infected PC gets taken back to the office and put on the corporate network, where the malware can then try to spread to other devices.  How do we protect against this kind of situation?

Historically, tools to manage remote device security on the endpoint itself were limited in their security capabilities or had undesirable effects on user experience.  Alternatively, remote users would be required to VPN into the corporate office at boot time, with all traffic sent over the VPN for processing by the on-premises security stack.  The first issue is that with many remote users, the corporate firewall and Internet connections come under considerable load, perhaps enough to demand a larger firewall and faster Internet circuits.  Second, there is the question of what happens when the endpoint is off the VPN – malware and user misbehavior are still possible in that case.  Setting up endpoints for always-on VPN (the VPN tunnel establishes itself pre-logon, usually with certificate-based authentication) is a possibility, but always-on VPN is notoriously complex and finicky to deploy, making it a poor solution for most organizations.

Advanced Threats

Another demand on security is the need for SSL decryption to detect threats.  Modern malware also uses SSL encryption and, as such, is much harder to detect without decrypting and inspecting that traffic.  SSL decryption on an on-premises firewall has a substantial performance impact, often cutting firewall throughput in half or more – this makes it impractical for the traditional full-tunnel VPN option.

A New Defense

These days, we have better tools.  A whole class of services known as SASE (secure access service edge) has arisen in response to the need to secure a distributed workforce.  However, SASE services differ quite a bit in how they’re implemented.  Most services require some form of always-on VPN connection to the SASE service, with all the complexity that entails, to avoid the problem of what happens when the user isn’t connected to the service.

Our preferred SASE solution for remote workers is Umbrella SIG, as it addresses all the common problems discussed above.  SIG works a bit differently from other solutions, as it doesn’t rely on VPN connectivity.  Instead, traffic destined for the Internet gets proxied through the Umbrella cloud, where SSL decryption occurs, followed by inspection and filtering for malware and content restrictions.

Agile and Effective: Umbrella SIG

So, what can SIG do from a security perspective? Here are the key capabilities:

  • File download scanning – files that match known malware signatures are blocked.
  • File type control – disallow users from downloading risky file types. For example, no normal user needs to download a .dll file to their laptop.
  • Detailed content filtering – block not just domains, but specific URLs within a domain. This is especially important with large, sprawling sites like Reddit that contain both legitimate and risky content.
  • Data Leak Prevention (DLP) – scan Internet-bound traffic for well-known or user-defined data patterns that match controlled data such as credit card numbers or intellectual property.
  • SaaS Tenant Controls – keep users contained to the organization’s tenant for services like Microsoft 365.
  • CASB – Discover ‘shadow IT’ SaaS application usage and block access to those apps as needed.
  • Logging and Reporting – Get a clear picture of remote users’ Internet activities and what security events each user has caused.

SIG Simplifies Deploying SSL Decryption

Deploying SIG is much easier than most other SASE services – it has a streamlined deployment due to how it operates. As mentioned before, there is no need for always-on VPN or other invasive endpoint configurations that can break easily. Core features can also be implemented quickly – where there is complexity, it’s inherent to the features themselves and not to setting up prerequisites. For example, SSL decryption will always need a certificate installed on the endpoint to avoid breaking the Internet, and choosing what to decrypt or not often needs input from other parts of the organization to ensure users’ private information isn’t compromised (think banking or medical information here – material that can create liability for the organization if inspected).

Your Umbrella SIG Deployment Guide

Here’s a guide to getting SIG up and running – this assumes that you currently have a fully functional Umbrella DNS deployment and want to expand that to SIG. I don’t go into the more complex features here, so consider this a starting point, not an authoritative guide.


Enable the Secure Web Gateway (SWG) capability. Note that this requires the new Cisco Secure Client (aka AnyConnect 5.0 or later) to work. SWG can be enabled globally or on a per-identity basis.

Install the Umbrella root certificate on the PCs/Macs to be protected. Alternatively, if you already have a local CA, your own certificate can be used in place of the Umbrella cert. Without this, enabling SSL decryption will completely break the Internet for the unfortunate users.

Next, define a list of categories, applications, and URLs to bypass SSL decryption. Some sites will need to be bypassed for functionality purposes (ex: Windows Update), while others will need to be bypassed for non-technical reasons (banking, health, or other sites that can leak protected information).

Now it’s time to set up a SWG policy. First, we create a destination list (or several, as needs dictate). Destination lists are how content filtering policies are applied. If you have a pre-built list of domains and URLs to filter, they can be imported into a destination list via .csv file.

Next, we configure the Web policy.

First, configure the global settings for the ruleset. This is where SSL decryption, file controls, logging, and general security settings are configured. Let’s start with SSL Decryption.

Enable the feature and associate a selective decryption list with it. Next, configure the other global settings as appropriate.

Following this, configure a ruleset for content filtering using the destination lists we created earlier. The individual rules follow the same mode of operation as an ACL – once a match occurs, traffic is either blocked or forwarded, so be careful of rule order to avoid shadowing. Be sure to add identities to each rule and ruleset for them to be applied!

Finally….

Test SWG settings before rolling it out to the entire organization. Note that once SWG is enabled globally, any endpoint with Secure Client and the Umbrella agent component will then forward all traffic to the Umbrella cloud to be proxied. No policy is applied unless identities are added to rulesets, so impact will be minimal until appropriate rulesets and identities are configured.

Interop Cisco Unified Call Manager with Microsoft Teams

A common topic of conversation with our Cisco Call Manager (“CUCM”) customers has been whether or not Microsoft Teams can act as a softphone for CUCM.

The answer is … sort of.

Let’s look at the process of getting this working. First and foremost, at the time of this writing, calling from CUCM to MS Teams directly is not supported by Cisco – so don’t expect to call TAC if you have problems. Best we can tell, Microsoft doesn’t seem to care what 3rd-party PBX you’re using as long as you’re using a supported Session Border Controller (SBC).

What Cisco does support is using their SBC – the Cisco Unified Border Element (“CUBE”) – as an intermediary between a PSTN provider (ex: a SIP carrier) and MS Teams (Microsoft refers to this as “Direct Routing”).

The CUBE/Microsoft configuration is documented here.

Aside from being quite a long read and somewhat difficult to re-type, it does generally work as directed. The supported CUBE platforms are an ISR4K or CSR1K running IOS XE 17.2.1r or 17.3.2 (we built our installation on 17.2.1r).

Getting Started


As mentioned, Cisco won’t support it, but there’s nothing stopping you from substituting CUCM for the “PSTN provider”. Our goal is to reproduce as close to a “softphone experience” for CUCM as possible, using MS Teams as the “softphone”.


There are a few items of note that aren’t particularly obvious from the document that are worth calling out:

  • Be sure to follow the certificate installation very carefully. While we’ve been deploying SIP TLS and SRTP for years, it has usually involved self-signed certs or pre-shared keys. We had our security CCIEs handle the cert installations, and even they had to do it a couple of times over to get it just as the document indicates. The document’s process is correct – just be mindful.
  • Copying the config out of the PDF is problematic. This may seem obvious, but it’s possible to break the SRTP negotiation in such a way that it is impossible to fix from debugs: parts of the SRTP negotiation are obfuscated in the debugs for security reasons, and you literally can’t see your errors.
  • This config is complicated enough that you’ll end up needing to do some debugging – the odds of getting it right on the first shot are slim. Where to start? The OPTIONS ping.

Microsoft requires the CUBE to initiate the OPTIONS pings. If you’re configured correctly, Microsoft will respond, and your dial-peers will come up:

sbc#show dial-peer voice summary
<output omitted>

200   voip  up   up             map:200            1  syst dns:sip.pstnhub.micr          active     NA

201    voip  up   up             map:200            2  syst dns:sip2.pstnhub.mic          active     NA

202    voip  up   up             map:200            3  syst dns:sip3.pstnhub.mic          active     NA

That’s great if you get that far, but having virtually anything broken will prevent it. Keep in mind it’s not particularly simple to debug your own outbound OPTIONS pings, as “debug ccsip messages” doesn’t show them, and moreover, these are SIP TLS encrypted packets, so a Wireshark capture is of limited help too. The troubleshooting is therefore a little hacky: you basically have to send OPTIONS pings and hope you get a response – and if you do get a response, Microsoft will start sending your CUBE OPTIONS pings back at that point.

They look like this:

OPTIONS sip:<withheld>.com:5061;transport=tls SIP/2.0
FROM: <sip:sip-du-a-us.pstnhub.microsoft.com:5061>;tag=402f0e5b-097b-4b9c-b74e-e741c66a1d70
TO: <sip:<withheld>.com>
CSEQ: 1 OPTIONS
CALL-ID: 45441787-5ca6-4f06-81f1-7ae844bfb2e0
MAX-FORWARDS: 70
VIA: SIP/2.0/TLS 52.114.148.0:5061;branch=z9hG4bKab92aa27
CONTACT: <sip:sip-du-a-us.pstnhub.microsoft.com:5061>
CONTENT-LENGTH: 0
USER-AGENT: Microsoft.PSTNHub.SIPProxy v.2021.2.16.6 i.USWE2.4
ALLOW: INVITE,ACK,OPTIONS,CANCEL,BYE,NOTIFY

To reiterate that, Microsoft will not send CUBE an OPTIONS ping until CUBE initiates OPTIONS pings.
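
For reference, here’s a minimal sketch of the keepalive side of the dial-peer configuration. The tag numbers and description are placeholders of my own choosing, and the full certificate, SRTP, and sip-profile configuration from the Cisco document is still required – this only shows where the OPTIONS ping gets enabled:

voice class sip-options-keepalive 200
 description Keepalive towards MS Teams Direct Routing
 transport tcp tls
!
dial-peer voice 200 voip
 description Direct Routing - sip.pstnhub.microsoft.com
 session protocol sipv2
 session target dns:sip.pstnhub.microsoft.com:5061
 session transport tcp tls
 voice-class sip options-keepalive profile 200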

Callout: Microsoft is very picky about the SBC model, particularly if you want to call Microsoft for help. Don’t expect to use anything that’s not on the supported list. Make sure your packets identify as the appropriate model you’re using.

I’m not going to dive much more into the SIP debugging process. The document is accurate, this all works, but you must have it spot-on. If you don’t have it spot-on, don’t expect debugs to be of much use. Things “just break” with little debug information, more often than not.

The Challenge


This is a good spot to discuss the elephant in the room. While customers regularly ask us to use MS Teams as a softphone, it’s important to understand that MS Teams is its own phone system. It’s not an extension of CUCM. Teams retains its own extensions and dial plan; CUCM has its own extensions and dial plan. You’re effectively tying two phone systems together towards a common goal, rather than adopting MS Teams as a CUCM client. The headache should be fairly obvious: for each user that’s maintained in CUCM, a duplicate one needs to be created in Teams. For every user removal, a duplicate deletion needs to take place, and so on. It’s not particularly simple to maintain, and it’s very easy for the two to get out of sync.

SIDENOTE: What are others doing?
Most SBC integrations between MS Teams and CUCM rely on the SBC forking inbound calls to both systems simultaneously. The first one to answer the call (be it a desk phone on CUCM, or a soft phone on MS Teams) “gets” the call, and the SBC terminates the call to the system that didn’t pick up. We find this answer inferior because it provides no single point of truth – either phone system may or may not have a log of the call (no CDR), or cradle-to-grave ownership of the call (no option to provide alternate call treatment such as forwarding on no-answer or sending to a call handler). Our goal is instead to have CUCM own the call cradle-to-grave, and treating MS Teams as a softphone solution.

The challenge we faced from here is how to make MS Teams as “softphone-like” as possible. A few items to cover included:

  • Different extensions on different devices are an unnecessary inconvenience for calling users.
  • Call Forward No Answer adds a delay when calls are never answered at the first device, and offers little to no control over where voice messages are left.
  • Requiring users to set Call Forward All when working remotely is only a workaround – not a good practice that we should ever consider a permanent solution to a business problem.

We want a call to a single extension to ring a device registered to CUCM and MS Teams every time and give the users the choice of which device(s) to utilize and when.

However, when an extension is dialed, CUCM needs to either ring an internal phone, OR route a call outbound – not both.

The Solution


Single Number Reach: Single Number Reach (SNR) is a feature built into CUCM that allows us to “split” a call. When a call rings a CUCM extension with SNR enabled, the call will ring the extension and split to ring a remote destination at the same time. When this occurs, the first device to accept the call takes the call, and a cancellation message is sent to the other destination.

We can adjust timers to control when a call starts and stops ringing the remote destination, so we can prevent scenarios such as a dead cell phone instantly “answering” a call because it goes straight to its voicemail.

We can force the remote destination to stop ringing before it issues a ring-no-answer and sends the call to voicemail, so we can control which mailbox calls are left in.

We can also use Class of Control to isolate the Route Pattern to MS Teams in its own Partition that is only reachable by the Remote Destination profile. This means that the MS Teams extensions can be the same extension as the user’s CUCM extension so both destinations can ring on the same number without the worry of overlapping numbers.

Advantages over splitting on the SBC

Because the routing/split happens on CUCM at the user level rather than on the CUBE based on dial-peer, we see a few distinct advantages.

  1. Configuration is easier and can be limited to only the users/numbers that need calls split, without additional headache.
  2. The feature can be enabled for certain days/times.
  3. The feature can be enabled/disabled by end users.
  4. Reporting, CDR, and billing remain accurate for any application reporting on calls handled by CUCM.

The configuration

  • First, we identify the user for whom we want to configure Single Number Reach, access their End User profile, and enable Mobility on it.
  • Then we configure a Remote Destination profile for the user. This is what ties together the user information, the DN whose ringing triggers the split, and the remote destinations to split the call to.

The key configuration items are the user’s User ID and the Rerouting CSS, which is the CSS used to reach the remote destination.

  • Then we select “Add a new DN”, and select the DN and Partition of the line that we want to trigger the split when it rings.
  • When done, we go back to the Remote Destination profile configuration and select “Add a New Remote Destination”. Here is where we configure the MS Teams extension that we want to ring when the user’s internal extension rings.

The key configuration items are the Destination (the DN on MS Teams) and the “Enable Unified Mobility features” and “Enable Single Number Reach” checkboxes. Additionally, you can enable “Enable Move to Mobile”, which allows a user to press the “Mobility” button while on a call on their internal device and transfer the call to their MS Teams extension.

At the bottom of the screenshot we can see timers that adjust when the remote destination should start and stop ringing. These can be tuned as needed to make sure that calls don’t instantly get answered on the remote end if that device immediately sends calls to voicemail, or to make the remote destination stop ringing before the call reaches the remote destination’s voicemail (or after, if we WANT voicemails left at the remote end).

Below this configuration we have the option to configure time of day routing for Single Number Reach if desired.

After we click save, we will be returned to the Remote Destination configuration page. In the top left, be sure to click “Line Association” next to the DN you want to trigger this Remote Destination, and click save.

And that’s it for the core configuration. At this point, when the desired extension rings, the call will be split and also ring the remote destination that was configured.

Optionally, we can modify the SoftKey template on the user’s phone to give them additional control, for example:

  • If we add the “Mobility” Soft Key when the phone is “Connected”, the user can press the “Mobility” button to send the call to the remote destination without a manual transfer. Note that this only works if the “Enable Move to Mobile” box is checked in the Remote Destination Profile configuration as noted above.
  • If we add the “Mobility” Soft Key when the phone is “OnHook”, the user can press the “Mobility” button to manually enable/disable Single Number Reach from their own device. When this is executed from the phone, it toggles the “Enable Single Number Reach” checkbox in the Remote Destination Profile.

In conclusion: it’s possible to get close to softphone functionality from MS Teams by using it with CUCM. This is accomplished by using Direct Routing between MS Teams and CUCM, and then using SNR on CUCM. However, the drawback is having to maintain two separate PBXes simultaneously.

Co-authored by Jeff Kronlage and Nick Finch

Navigating RESTCONF for Cisco Network Engineers


In both my personal education and in work projects, there’s been a slow but steady move into network automation. This document is written from the perspective of a network engineer, and approaches the topic as a move from the CLI to a true programmatic interface in an efficient manner.

What you can expect to gain from reading:

  • The ‘cliff notes’ version of RESTCONF
  • The ‘cliff notes’ version of YANG
  • The ‘cliff notes’ version of the pyang tool
  • Basic use of Postman
  • A quick & dirty way to implement working RESTCONF on a Cisco device
  • An elegant way to implement RESTCONF on a Cisco device

What you should not expect:

  • Any Python (or any other programming language) education. There are countless trainings for Python elsewhere on the web.
  • A deep dive of REST. This article assumes the reader has familiarity already.
  • Much detail on NETCONF. While a lot of the information here overlaps with NETCONF (RESTCONF’s origin technology), I chose to focus on RESTCONF due to almost all APIs being REST-based now.
  • A thorough explanation of YANG. While researching this article, I read some unbelievably good deep-dives of YANG, but this article is about shifting from CLI to RESTCONF, and only a mid-level understanding of YANG is needed.

Some things you’ll need

  • Postman
  • A Linux machine or VM

With that said, what’s NETCONF?

Although just recently gaining traction, NETCONF has actually been around quite a long time – the RFC was published in 2006. NETCONF is an XML-based interface to configure and monitor network devices. One of the primary drivers for NETCONF is to augment SNMP. SNMP’s original use case was meant to be both read and write, but the “write” element never gained wide adoption – primarily because of the difficulty in navigating MIBs to figure out how to trigger the appropriate outcome. NETCONF typically works over an SSH session to TCP port 830. NETCONF can be informally thought of as SNMPv4.

What is YANG?

Building off the idea of SNMP, if MIBs are the index for SNMP, then YANG is the index for NETCONF. That oversimplifies YANG, however, which is a very deep topic indeed. YANG is a hierarchical language, built in a tree format, that defines in a readable format the generalized models required to configure a network. Understanding YANG at a high level is necessary to use NETCONF.

Interesting note: YANG stands for “Yet Another Next Generation”. Strange name if you don’t know the origin. The competing technology was SNMP-based. SNMP uses SMI as its back-end data structure, and before YANG was created, SMI Next Generation (SMIng) was being created. Reference RFC 3780: https://tools.ietf.org/html/rfc3780. When Yet Another format was created, it was called YANG.

So, what’s RESTCONF?

RESTCONF swaps the SSH session that NETCONF uses for a REST-based API. The YANG models used are identical between NETCONF and RESTCONF. An easy way to think of RESTCONF is as a web API on top of the long-standing NETCONF framework. Additionally, RESTCONF expands on NETCONF’s XML interface by optionally offering JSON as a data format (XML can still be used as well). I personally enjoy using RESTCONF because I’m already familiar with REST APIs and therefore the interface is very familiar.

What else is different between NETCONF and RESTCONF?

NETCONF technically has a few more functional benefits than RESTCONF. The most obvious is that streaming telemetry (example: polling the CPU utilization every X seconds) requires a session to stay open. That’s possible with an SSH session, but with REST, every command is transactional and there is no session to keep that kind of data flowing. There are a few other benefits which are beyond the scope of this document.

So why would I want to use either of these?

The main use case is fairly obvious. If you are managing hundreds of devices, the amount of time it takes to make decision-based changes (if X happens, then do Y) is prohibitively long when manually SSHing into every device, determining what needs to be changed, and then making the change. A well-written script and an API can do in minutes what a human would take hours to perform, and at the cost of zero man-hours.

Another, more advanced use case is infrastructure as code. This is the idea that intent should define the network configuration, which is then deployed via software. This is beyond the scope of this document.

But… I’m already doing automation with expect scripts (or similar) CLI-based automation…

That certainly can be done, but think of using NETCONF/RESTCONF as the “next level”. The CLI was written for humans to interpret. Imagine the output from “show ip bgp neighbor” – easy for you to read as a human, but try to parse that with automation. It can be done, but it’s very clunky. Or, imagine trying to dynamically configure an extended access-list with CLI commands, with a computer making the decisions. It works, but it’s clunky. The ideas behind NETCONF/RESTCONF + YANG are to take those same tasks and make them more computer readable/writable, instead of human readable/writable.

YANG in just a little more detail

We’re going to come at these topics in little bits, and the next step requires understanding YANG just a little bit, so that we can give some simple RESTCONF examples.

Some quick intro knowledge: there are several different creators of YANG models. The first – and from my understanding, the original – is the IETF. The IETF’s goals are idealistic: create a series of models that work with all manufacturers of network equipment. You could re-use the same code against Cisco, Juniper, Arista, etc., and end up with the same outcome on all of them. Sounds great, right? The problem becomes apparent the more you work with programmatic models: vendors just “do things differently”, and even though networking is generally standard, the way things are handled inside a router differs completely from vendor to vendor.  An obvious example is that you’ll never see an EIGRP or PfR IETF YANG model.

CALLOUT: Another vendor-neutral model is from Openconfig. It has similar goals to the IETF models but is backed by a group of manufacturers instead of the IETF:
https://www.openconfig.net/projects/models/

Next are the native models. As illustrated above, no matter how good an industry standard model is, it’s not going to cover anything vendor-specific (and many things that aren’t vendor-specific). I’ve not looked at any other vendor besides Cisco, but the Cisco native models are very extensive, complex, and can basically perform any router task you’d like. Side note – it’s my understanding that the vendor-neutral models are translated into the Cisco native models before processing, but I have no specific way of showing this.

Let’s get some basic samples going

I’ve never cared for reading learning material that doesn’t let you get your hands dirty until all the “learning” is done. So, before I go on any longer, let’s get this thing rolling.

You’re going to need a sample IOS-XE device. I strongly recommend a CSR1K, as physical routers exhibit some different behavior. I’m using v17.2.1, for reference. I’ll explain more on that different behavior later in the article.

You’re also going to need Postman: https://www.postman.com/

Why Postman? While it does far more than I’m going to write about here, it takes the “code writing” complexity out of testing an API. Writing code (presumably Python) adds a layer of complexity in dealing with data formats and logic. As mentioned at the beginning of the article, this isn’t about teaching how to program, it’s about teaching practical RESTCONF. Postman allows you to interact with a REST API without writing any code.

Assuming you have those things running, let’s make RESTCONF do something.

Prepping your router is very straightforward.

First, since we’ll be using TLS, you need an encryption key:  
csr1k#crypto key generate rsa

Then you’ll need to enable the secure HTTP server and setup local authentication:
csr1k#conf t

Enter configuration commands, one per line. End with CNTL/Z.

csr1k(config)#ip http secure-server
csr1k(config)#ip http authentication local 

After that enable RESTCONF:

csr1k(config)#restconf

You’ll also need a local user that’s privilege 15:
csr1k(config)#username cisco priv 15 secret cisco123

Now, let’s load up Postman and see if we can’t get restconf to do something.

After you’ve downloaded and signed into Postman, you should get a page that looks something like mine

[Screenshot: the Postman landing page, showing “Start something new” and a “Create a Request” option.]


Go ahead and click “Create a request.”

The next page will look like this. Be sure to select the GET field as you see below.

This is an image of the "start request" page in Postman. It shows the "GET" function selected with an address bar.  Below the bar are headers that read from left to right as follows. Params, Authorization. Headers, Body, Pre-request Script, Tests, Settings

In the URL field, enter https://your-ip-address/restconf, replacing ‘your-ip-address’ with your router’s address.

Next, click on Authorization, change the type to “Basic Auth”, and put the username and password you created into the Username and Password blank.

[Screenshot: the Authorization tab with the Type drop-down set to Basic Auth and username/password fields on the right.]

Press the Send button in the upper-right

[Screenshot: the same page, highlighting the Send button in the upper-right corner.]

If you configured the router correctly, the response field should look like this:

[Screenshot: the response Body tab showing the XML result of the GET request.]

NOTE: Nothing too useful here other than it tells us that RESTCONF is working. Note the output is in XML. If you prefer to get it back in JSON, make the changes in the following steps.

Click on the Headers tab:

[Screenshot: the Headers tab in Postman with seven default headers, all selected.]

Once here, uncheck the default “Accept” header:

Create a new Accept header at the bottom specifying application/yang-data+json:

Alternatively, you can manually specify application/yang-data+xml, but that appears to be the default

Press “Send” again, and the output should now return in JSON:

I’ll proceed with using JSON from here on out of personal preference.
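
If you’d rather work from a shell than Postman, the same request can be issued with curl (a sketch – substitute your own address and credentials; -k skips certificate validation for the router’s self-signed cert):

curl -k -u cisco:cisco123 -H "Accept: application/yang-data+json" https://your-ip-address/restconf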

Expanding upon the idea

Now that we’ve confirmed that RESTCONF is running on the router and shown how to change to JSON output, let’s do a few more simple interactions to show what we’re trying to accomplish here.

I want to specifically call out that my next examples are on a CSR1K. I have found GET behavior – on both the IETF and Cisco native models – to differ considerably between virtual and physical platforms. So, if you want to replicate my results, be sure you’re on the CSR1K. Again, I’m using v17.2.1. I’ll show more on this later.

First, perform a GET on: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1

Since I’ve preconfigured my GigabitEthernet1, we get back some configuration details:
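
The returned body looks something like this (a trimmed sketch – your interface type, addressing, and any extra leaves will differ):

{
    "ietf-interfaces:interface": {
        "name": "GigabitEthernet1",
        "type": "iana-if-type:ethernetCsmacd",
        "enabled": false,
        "ietf-ip:ipv4": {
            "address": [
                {
                    "ip": "10.200.200.102",
                    "netmask": "255.255.255.0"
                }
            ]
        }
    }
}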


This is fairly easily read by a human – we’re looking at an interface, it’s shut down (enabled: false), and it has an IPv4 address configured.

Let’s break down what we asked for in the GET: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1

  • 10.200.200.100 = The router’s address (hostname or IP)
  • /restconf/data/ = This path will be specified for RESTCONF config data. (differs for RPCs, more below)
  • /ietf-interfaces = We’re using the ietf-interfaces YANG module (more on YANG modules below)
  • :interfaces = Specifying the “interfaces” container inside /ietf-interfaces (more on containers below)
  • /interface = Specifying the list “interface”
  • =GigabitEthernet1 = For the list “interface”, the key is the string “name”, and the name is GigabitEthernet1

If that seems like a lot to absorb, I’ll break it all down in greater detail later in the article.

Thus far we’ve focused on using GET; now let’s change the IP address using PUT.

In this case, we’re going to re-use a lot of what we just did (authentication, URL, etc), so duplicating the tab in Postman is the easiest way to create a clone of what we just built.

Right-click on your current tab and press “Duplicate Tab”:

On the new tab, change your GET to a PUT:

As I had mentioned, this isn’t meant to serve as a REST tutorial, but while GET retrieves data and POST creates new data, PUT is used for modifying existing data.

We’ll also need to go and modify the headers so that we’re sending JSON.

Uncheck the default Content-Type:

At the bottom of headers, as we did above for “Accept”, create a new Content-Type of application/yang-data+json:

To start preparing to send JSON to the CSR, click on “Body” and select “raw”:

Copy the output from your earlier GET of GigabitEthernet1. Building off this example, I’ve grabbed the JSON contents of it and modified one field – the IP address from .102 in the fourth octet to .103. I’ve also enabled the interface. This is what I pasted into the Body field:
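
Reconstructed per that description, the pasted body is simply the GET output with those two fields changed (a sketch):

{
    "ietf-interfaces:interface": {
        "name": "GigabitEthernet1",
        "type": "iana-if-type:ethernetCsmacd",
        "enabled": true,
        "ietf-ip:ipv4": {
            "address": [
                {
                    "ip": "10.200.200.103",
                    "netmask": "255.255.255.0"
                }
            ]
        }
    }
}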

Press Send again, and you should get:


Status: 204 No Content on the right side of the results is a success. Generally speaking, anything in the 200 range is a success, just like any other REST API over HTTP.

You can check your work by running the GET from your prior tab again, or you can just log in to the router and look:


You’ll notice the IP has changed, and the interface is no longer shutdown.

Let’s also go ahead and create some data. Clearly you can’t create a physical interface, but you can certainly make a logical one. Let’s craft a new Loopback.


Duplicate your tab again. Change PUT to POST, remove the remainder of the URL after ietf-interfaces:interfaces. In the body, change the name to Loopback and a number of your choosing, change type to softwareLoopback, change the IP address to something that doesn’t overlap with other interfaces, and (optionally) change your netmask to a /32.
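
Put together, the body ends up looking something like this (a sketch – the loopback number and address here are arbitrary choices of mine):

{
    "ietf-interfaces:interface": {
        "name": "Loopback1001",
        "type": "iana-if-type:softwareLoopback",
        "enabled": true,
        "ietf-ip:ipv4": {
            "address": [
                {
                    "ip": "10.255.255.1",
                    "netmask": "255.255.255.255"
                }
            ]
        }
    }
}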

Press Send.


You should see Status: 201 Created in the lower-right corner. All changes are illustrated below:

I think this example speaks for itself, outside of why we trimmed the URL. We can’t POST to a list entry (an interface, in this case) that doesn’t exist yet – POST targets the parent container and creates the new entry beneath it. I’ll show more examples on this as we proceed.

The last HTTP verb to demonstrate would be DELETE. Let’s wipe out that Loopback we just created. Duplicate your tab again.

Change the POST to DELETE. Add the list back in at the end of our URL: https://your-ip-address/restconf/data/ietf-interfaces:interfaces/interface=Loopback1001


You should get a status 204 No Content upon success.

Something to note: The body is irrelevant in this type of request. Since we duplicated the tab, we inherited the body from the POST, and we could leave it there, or you can erase it. It doesn’t matter.

Additionally: The debugs on the router are near useless. When I first started on this topic, I was hoping for a translation of RESTCONF into CLI to show what was actually going on behind the scenes, but no such luck.

Debugs are turned on with: csr1k#debug restconf level debug

The output from creating a Loopback looks like this (I have trimmed it slightly for brevity and privacy):

%DMI-5-AUTH_PASSED: R0/0: dmiauthd: User 'cisco' authenticated successfully from <myIP> and was authorized for rest over http. External groups: PRIV15
%SYS-5-CONFIG_P: Configured programmatically by process iosp_vty_100001_dmi_syncfd_fd_179 from console as NETCONF on vty63
%DMI-5-CONFIG_I: R0/0: dmiauthd: Configured from NETCONF/RESTCONF by cisco, transaction-id 189

So basically, the debug shows that I logged in using an API and made a change… but no real details.

Now you’ve seen the basics on retrieving data, changing data, creating data, and deleting data. This is the easy part. Next, the real challenge begins in trying to figure out how to craft the body without having internet examples.

So – isn’t there some documentation? Well…

Well, there is none.

This hasn’t changed in the last five years. For writing code around RESTCONF, you’re on your own. Instead of documentation, you need to develop strategies for working out how to craft the body. There are two strategies that I’ve used: one lacks finesse but is very fast, and the other is more likely what the YANG developers intended, but takes some patience and a deeper understanding of YANG.

Let’s start with the fast method.

Important Note: For some preliminary understanding, it’s not possible to configure the router in its entirety with the IETF or Openconfig models. However, the Cisco native models have a representation of all standard configuration. So we’re going to swap away from the IETF example above and over to the Cisco native models.

When I first started working with RESTCONF, I found myself looking for the equivalence of snmpwalk for RESTCONF. The question I asked myself is “How do I index this thing?”

My natural tendency was to perform a GET at the highest URL “level”:

That’d be a GET to https://your-ip-address/restconf/data/Cisco-IOS-XE-native:native

Think of this as the RESTCONF version of “show running-config”

For the example above, I’m swapping back to a physical router to show an oddity.

The physical router returns “204 No Content” and no body at all. This threw me off for quite a while until, on a lark, I tried it on a CSR1K:

As you can see, it works fine on a CSR, but not on an ISR – I would love an explanation if anyone knows why this is. I couldn’t find any information on it. Note, I did try multiple ISRs.

For brevity, I can’t show the entire config here, so I’ve just shown another relevant snippet below:

As an example, let’s create a banner on the CSR:
csr1k#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
csr1k(config)#banner exec 1 Restconf Banner 1

I deliberately picked ‘banner’ as it’s towards the top of the config, and makes the example easier in screenshots.


We can derive from this that setting the banner would be a matter of doing a PUT to: https://10.200.200.100/restconf/data/Cisco-IOS-XE-native:native/banner/exec

Getting the JSON down just takes some practice, but the body looks like this:


{
    "exec": {
        "banner": "1 NEW Restconf Banner 1"
    }
}

And here’s how it’s crafted in Postman:


I’ve already pressed “Send”, and you’ll note the 204 No Content (which, like above, is a success).

And the proof can be seen from the CLI or from another GET:
csr1k(config)#do sh run | s banner exec
banner exec ^C NEW Restconf Banner ^C

As I mentioned, this is quick, dirty, and inelegant. Having to build all your config to understand how to address it in the API just isn’t a clean method. The elegant way is to become familiar enough with the YANG files to be able to interpret them as a form of self-documentation.

You Mentioned a More Elegant Method?

In order to go further with this, we need the YANG files. Since we’re also going to be using a tool that only works in Linux, you’ll need yourself a Linux box or VM from here on in.

All the YANG models are available for download from GitHub. One of the cool things about this is that even the vendor native models are on GitHub, so you get all the relevant YANG files in one shot!

jeff@linuxlab:~$ git clone https://github.com/YangModels/yang.git
Cloning into 'yang'...
remote: Enumerating objects: 252, done.
remote: Counting objects: 100% (252/252), done.
remote: Compressing objects: 100% (200/200), done.
remote: Total 36526 (delta 98), reused 163 (delta 50), pack-reused 36274
Receiving objects: 100% (36526/36526), 76.68 MiB | 4.18 MiB/s, done.
Resolving deltas: 100% (27870/27870), done.
Updating files: 100% (40193/40193), done.

For illustration purposes, I’m going to swap back to the IETF models for now, as they’re not as daunting to read as the Cisco native ones.

jeff@linuxlab:~$ cd yang/vendor/cisco/xe/1721
jeff@linuxlab:~/yang/vendor/cisco/xe/1721$

Of Note: While I’m demoing on XE, there are XR and NX-OS models in the same folder structure

Taking a look at the IETF files in this folder:

Referencing our prior example above: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1

Let’s take a look in ietf-interfaces and try to gain some basic understanding. As a reminder from the top of the blog, I am not intending to teach YANG thoroughly, but to give enough understanding that you can take the information and work with RESTCONF efficiently.

Pop open ietf-interfaces.yang in your favorite text editor:
jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ vi ietf-interfaces.yang

ietf-interfaces.yang is one of the smallest “major” YANG files, but it’s still 725 lines long. It is considerably more readable than SNMP MIBs, but it’s a lot to digest. I struggled to find a way to illustrate this without bloating the blog… and didn’t come up with anything. So seriously, pop these files open and take a look. As a reminder, this is a simple file, and the primary Cisco native YANG file dwarfs the IETF one in size. We’ll come back to the solution to this shortly.

As I mentioned above, the files are laid out in a tree. I’m going to pick out key bits of the file to reference how this works.

Let’s start by trying to figure out the URL we used earlier: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1

You’ll note the first line in the file defines the module name: module ietf-interfaces {

 Scrolling down a bit, we’ll find the interfaces container:

  container interfaces {
    description
      "Interface configuration parameters.";

Followed immediately by the interface list. Note the key of “name” below:

       list interface {
         key "name";
   	
            leaf name {
               type string;

This gives us all the building blocks of the URL below.

https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1

As mentioned, https://hostname/restconf/data begins every RESTCONF data URL on IOS-XE.

The important bits are after that: ietf-interfaces:interfaces/interface=GigabitEthernet1.

Let’s pause and talk about data types for a moment

These are definitions to be familiar with for the purpose of this article. Note, this is not exhaustive, it’s just the bits needed to get through the common RESTCONF use cases.

Container: Contains other node types, including other containers. This is basically just a logical grouping.
List: Contains a sequence of list entries, each uniquely identified by one or more leafs. The unique identifier is the key, defined in the list.
Leaf: Contains a single value (leafs are the end of the tree).
Leaf-list: Contains a sequence of leaf nodes.

Comparing this back to our earlier example:

I’ll show this in a better visual when we get to demoing pyang. This probably doesn’t seem too complicated just yet, but if you’re looking closely, there were a lot more IETF files.

Here’s a first major point of understanding: The files are not standalone. They work as a group.

Reference back to our first IETF example:


Note the presence of IPv4 address information in here.

Go back to the text edit of the ietf-interfaces.yang file and search for “ipv4”:

You won’t find it. I can assure you we’re viewing the right top-level file in ietf-interfaces.yang, but there’s no mention of IP addressing. This is where YANG gets trickier to decipher.

The YANG model we’re looking for is actually in ietf-ip.yang.

Let’s take a look inside the ietf-ip.yang:

augment "/if:interfaces/if:interface" {
   description
     "Parameters for configuring IP on interfaces.
      If an interface is not capable of running IP, the server
      must not allow the client to configure these parameters.";
   container ipv4 {

So the container for ipv4 is in a separate file from ietf-interfaces, even though it augments it. The potentially confusing matter here is that the augmenting file (ietf-ip.yang) refers back to the augmented file (ietf-interfaces.yang).

Let’s demonstrate

Run this GET in Postman: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1/ipv4/address

This is the same URL we’ve been using for our example, but with /ipv4/address at the end.

You’ll get this more-specific subset of the body:
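
It comes back looking something like this (a sketch, matching the address we set earlier):

{
    "ietf-ip:address": [
        {
            "ip": "10.200.200.103",
            "netmask": "255.255.255.0"
        }
    ]
}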

With ietf-ip.yang augmenting ietf-interfaces.yang, the URL above breaks down visually as follows:

Getting hard to visualize? Hopefully you’re following along in the actual files. The IETF files are some of the easiest to interpret as plain text, yet even they show how complex this can be to read in raw form.

Introducing pyang

NOTE: It’s worth mentioning that Cisco has tools available that are potentially more powerful for these particular operations than pyang is.

These are:
Yang-Explorer: https://github.com/CiscoDevNet/yang-explorer
Yang-Suite: https://github.com/CiscoDevNet/yangsuite

Yang Explorer is end-of-support – it was Flash-based. I have not tried installing it. Yang Suite is brand new, as in it launched while I was typing this document. I attended the kick-off. It looks rather impressive, and according to the webinar I attended, it apparently sorts out the confusion around augments. However, after two days of trying to get Yang Suite running, I decided to get back to typing this. Ultimately, if you have the time to figure it out, Yang Suite is potentially a better tool for this operation than pyang.

With that covered, back to pyang.

As I mentioned above, pyang only runs in Linux, so back to your Linux box!

Installation varies slightly from Linux distro to distro, but the basics are simple:
jeff@linuxlab:~$ pip install pyang

pyang does more than I’m going to cover here, but what we basically want it for is to summarize YANG files in tree format (as well as help with augments…)

Our initial usage of pyang will be:
pyang -f tree <file1.yang> <file2.yang> … <fileX.yang>

Run this against ietf-interfaces.yang

Note, the output was trimmed for brevity

Now we can easily conceptualize the YANG module in a tree:
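
Here’s roughly what that trimmed output looks like (generated from the 17.2.1 copy of ietf-interfaces.yang; yours may vary slightly):

module: ietf-interfaces
  +--rw interfaces
  |  +--rw interface* [name]
  |     +--rw name                        string
  |     +--rw description?                string
  |     +--rw type                        identityref
  |     +--rw enabled?                    boolean
  |     +--rw link-up-down-trap-enable?   enumeration {if-mib}?
  +--ro interfaces-state
     +--ro interface* [name]
        +--ro name    string
        ...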

That sure simplifies reading a large YANG file, but it doesn’t get us the IP address information that we noted above is missing. This requires a little bit of interpretative work.

I have already pointed it out, but it’s pretty obvious from the file structure that IP address information would be inside ietf-ip.yang. Now we just need to see them both in the same tree. Note I’ve asked pyang to create a tree for both ietf-interfaces.yang and ietf-ip.yang simultaneously.
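
Here’s the command and a trimmed view of the combined tree (the ip-prefixed nodes come from ietf-ip.yang; leaf names are as I read them in the 17.2.1 files):

jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ pyang -f tree ietf-interfaces.yang ietf-ip.yang

module: ietf-interfaces
  +--rw interfaces
  |  +--rw interface* [name]
  |     +--rw name          string
  |     +--rw type          identityref
  |     +--rw enabled?      boolean
  |     +--rw ip:ipv4!
  |     |  +--rw ip:enabled?      boolean
  |     |  +--rw ip:mtu?          uint16
  |     |  +--rw ip:address* [ip]
  |     |     +--rw ip:ip          inet:ipv4-address-no-zone
  |     |     ...
  |     ...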

One benefit is that pyang is smart enough to process the augment in ietf-ip and insert it into the correct spot in the ietf-interfaces tree. Compare this to the prior pyang output, which didn’t have the ipv4 tree information in it.

Now it’s much easier to figure out the needed URL: https://10.200.200.100/restconf/data/ietf-interfaces:interfaces/interface=GigabitEthernet1/ipv4/address

That’s an easy way to show some simple usage. Where pyang (or a similar tool) is absolutely needed is when it comes to the Cisco native YANG data. For reference, all the Cisco-supported IETF YANG files combined are less than 14,000 lines. However, on 17.2.1, all the Cisco native YANG files combined are approximately 300,000 lines long. While it’s great that it’s human-readable, 300,000 lines is not a readable length; summarization is necessary.

Let’s take a quick look at the Cisco-IOS-XE-native.yang file with pyang: jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ pyang -f tree Cisco-IOS-XE-native.yang

This looks great at first glance, but if you run the same command in your lab, you’ll find that the tree index alone for just Cisco-IOS-XE-native.yang is 34,709 lines long (just shy of three times the size of all the plaintext data from the IETF files combined!). As noted above, this doesn’t include any of the other augmenting files, which are absolutely necessary for most functions.

We need to narrow this down further before we start adding in more files.

This is where the tree-depth argument comes in handy: jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ pyang -f tree Cisco-IOS-XE-native.yang --tree-depth=2

Tree-depth limits how deep the tree is displayed. When you’re searching for a starting point in building RESTCONF, it’s not necessary to have all the various containers, lists, and leaves displayed – just a high-level view of where to begin is what you’re after. A tree depth of 2 is a little small to be useful, but it made for a better screenshot.

Much like the IETF YANG files, there are quite a lot of additional Cisco YANG files augmenting the Cisco-IOS-XE-native module – on IOS-XE 17.2.1, there are 306 of them! Let’s start by trying to find BGP.

The logical place to start would be to see if it’s included natively (no pun intended) inside the main module. We’ll want to start piping the output to a file to make this manageable.

jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ pyang -f tree Cisco-IOS-XE-native.yang --tree-depth=3 > native.out

jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ vi native.out

Search for “bgp”


Well, we got a hit, but that’s probably not what we’re after for configuring BGP routing.

Let’s take a look at the other Cisco native YANG files in the directory, filtering for the word “bgp” in the file names:


Five files – sure beats sorting through 306 of them.

The correct file is fairly obvious: Cisco-IOS-XE-bgp.yang.

Let’s add it in to our pyang tree:

jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ pyang -f tree Cisco-IOS-XE-native.yang Cisco-IOS-XE-bgp.yang --tree-depth=3 > native.out
jeff@linuxlab:~/yang/vendor/cisco/xe/1721$ vi native.out

Searching for “bgp” produces several hits, but having a working knowledge of networking, and a basic understanding of YANG, makes the correct one obvious:

This requires scrolling up a bit to figure out the tree leading up to router, and frankly, you should be pulling the files out to notepad++ or a similar tool to make following a large tree easier.

The full path is:

Cisco-IOS-XE-native [module]
   native [container]
      router [container]
         bgp [list]

So, if I’m crafting a URL for this, I would use: https://10.200.200.100/restconf/data/native/router/bgp

Note the small trick there: Cisco-IOS-XE-native:native can be abbreviated as just “native”.

Let’s say our goal is to turn up the BGP process and add a neighbor. We still need to know more than what we have, because ideally, we should be able to build the full PUT or POST straight off the YANG data and our own pre-existing network know-how. What we want is a deeper view of the tree starting at that one location.

Introducing --tree-path: pyang -f tree Cisco-IOS-XE-native.yang Cisco-IOS-XE-bgp.yang --tree-path /native/router/bgp --tree-depth=5

Inspecting the outcome from the data, we can find the next key elements:

The id is clearly marked as being the BGP AS number.

Further down the output, we find how to create neighbors:
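
As a sketch – the node names here are my reading of the 17.2.1 Cisco-IOS-XE-bgp model, so verify them against your own pyang output – a POST to https://10.200.200.100/restconf/data/native/router with a body along these lines creates the BGP process and the first neighbor:

{
    "Cisco-IOS-XE-bgp:bgp": {
        "id": 100,
        "neighbor": [
            {
                "id": "4.4.4.4",
                "remote-as": 101
            }
        ]
    }
}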

Note the “201 Created”. Double checking our work at the command line:

csr1k#sh run | s router bgp
router bgp 100
 bgp log-neighbor-changes
 neighbor 4.4.4.4 remote-as 101
 neighbor 5.5.5.5 remote-as 102

Some final thoughts…

What’s up with lists?

I referred to lists throughout the document without really covering why they exist. The BGP example is a good use case. Each BGP neighbor, and all the config associated with it, is an entry in a list. An element in a list usually isn’t a 1:1 match with a single line of IOS configuration.

Take for example creating users on the router:

    <username>
        <name>admin1</name>
        <privilege>15</privilege>
        <secret>
            <encryption>9</encryption>
            <secret>(omitted)</secret>
        </secret>
    </username>
    <username>
        <name>admin2</name>
        <privilege>15</privilege>
        <secret>
            <encryption>9</encryption>
            <secret>(omitted)</secret>
        </secret>
    </username>

That’s two elements in a list “username”. The key to the list is “name”, which must be unique, so that it can be independently referenced, modified, or deleted.

Each element equals one line of configuration in IOS:

csr1k#sh run | s username
username admin1 privilege 15 secret 9 <omitted>
username admin2 privilege 15 secret 9 <omitted>

The BGP example is also a good one, where a single list entry can create more than one line of IOS configuration. Let’s say on neighbor 5.5.5.5 we also wanted to enable ebgp-multihop.

The POST would’ve looked like this:
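
A sketch of that body, posted against the existing https://10.200.200.100/restconf/data/native/router/bgp=100 entry (again, leaf names per my reading of the native BGP model – check them with pyang before relying on them):

{
    "Cisco-IOS-XE-bgp:neighbor": {
        "id": "5.5.5.5",
        "remote-as": 102,
        "ebgp-multihop": {
            "max-hop": 255
        }
    }
}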

Now, in the user example, one list entry = one line of IOS config. However, in this example, one list entry = multiple lines of config:

csr1k(config)#do sh run | s router bgp
router bgp 100
 bgp log-neighbor-changes
 neighbor 4.4.4.4 remote-as 101
 neighbor 5.5.5.5 remote-as 102
 neighbor 5.5.5.5 ebgp-multihop 255

This takes a little practice to wrap your head around, but it’s really not too bad. Going back to my original statement that the CLI was built for humans and APIs are built for code, it really makes a lot of sense.

Read-Only vs Read-Write

This blog has focused entirely on read-write configuration. There are actually quite a lot of read-only data models, also specified in YANG, that can be referenced via RESTCONF. Think about a BGP neighbor’s state, or an interface error count – things you might previously have monitored with SNMP. All the samples I’ve pasted above have had “rw” next to them for read/write, as my focus was on creating configuration, but there’s a whole side of this just for programmatically monitoring statuses.

For a quick example:
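
Here’s the read-only half of the ietf-interfaces tree as pyang renders it (trimmed):

  +--ro interfaces-state
     +--ro interface* [name]
        +--ro name           string
        +--ro oper-status    enumeration
        +--ro speed?         yang:gauge64
        +--ro statistics
           +--ro in-octets?    yang:counter64
           +--ro in-errors?    yang:counter32
           +--ro out-octets?   yang:counter64
           +--ro out-errors?   yang:counter32
        ...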


Note the entire bottom half of the tree is “ro” instead of “rw”.

If you’re looking inside the YANG file itself, this is denoted differently:

container interfaces-state {
    config false;
    description
      "Data nodes for the operational state of interfaces.";

“config false” is what denotes read-only.

Remote Procedure Calls (RPC)

If you’ve tested SNMP writes, you’ve probably seen the example of why never to leave unguarded “write” SNMP access on: you can actually write a value to reboot the router. That’s an example of an SNMP-triggered RPC. NETCONF and RESTCONF have their own rich set of RPCs.

A brief introduction can be had by performing a GET on https://your-router-ip/restconf/operations (RPC operations live underneath /restconf/operations instead of /restconf/data):

For further detail, examine Cisco-IOS-XE-rpc.yang with either a text editor or pyang.

For simplicity’s sake, let’s just demonstrate rebooting the router:
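
As a sketch (the RPC name and input leaf come from my reading of Cisco-IOS-XE-rpc.yang on 17.2.1 – verify them with pyang or a text editor before trusting them), the call is a POST against the operations URL rather than the data URL, with a small JSON body:

POST https://10.200.200.100/restconf/operations/Cisco-IOS-XE-rpc:reload

{
    "Cisco-IOS-XE-rpc:input": {
        "reason": "RESTCONF reload demo"
    }
}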

In closing, with the increasing use of network automation, it’s important to familiarize yourself with RESTCONF and YANG. As shown in this article, you can use the RESTCONF protocol to simplify and manage network configurations and operational features. I’ve always been a believer in working smarter, not harder. While this article was written as a high-level overview, there are a myriad of resources for taking a deeper dive into YANG, the pyang tool, and how to implement RESTCONF on Cisco devices if you want a closer look at these great tools.

Thinking Outside the box with Cisco DNA Center

What other applications does DNA have?

Cisco’s DNA Center appliance is generally talked about in the context of SD-Access (SDA), but SDA is a complex technology that involves significant planning and re-architecture to deploy.  DNA Center is not just SDA, though – it has multiple features that can be used on day 1 that can cut down on administrative tasks and reduce the likelihood of errors or omissions.  From conversations with our customers, the most asked-for capability is software image management and automatic deployment, and that is something that DNA Center handles extremely well compared to many other solutions out there.

Wait…I can manage software updates with DNA?

Managing software on network devices can be a substantial time burden, especially in businesses that carry a heavy compliance load and require regular software updates.  Add to this the increasing size of network device images – pretty much all the major switch and router vendors’ products now have image sizes from hundreds of megabytes up to several gigabytes – and software management can now take up a significant chunk of an IT department’s time.  One of our customers is interested in DNA Center for this specific purpose – with 500+ switches, being able to automate software deployment saves several weeks of engineer time over the course of a year.

That may leave you asking…

So, what devices can I manage? 

DNA Center can manage software for any current production Cisco router, switch, or wireless controller.  Additionally, some previous-generation hardware is also supported.  Of this hardware, the Catalyst 2960-X and 2960-XR switches as well as the Catalyst 3650/3850 switches are the most commonly used with DNA Center.  Now let’s talk about how DNA Center does this.

Neat! Now, tell me how to do it. 

First, be sure that every device you want to manage is imported into DNA Center.  Once that’s done, the image repository screen will automatically populate itself with available software image versions by device type.

Here’s an example:

From here, select the device family to see details.  Once you’ve decided on the version you want to use, click on the star icon, and DNAC will mark that as the golden image (aka the image you want to deploy).  If not already present on the appliance, the image will also be downloaded.

Next, go to Provision > Network Devices > Inventory to start the update process.  From here, select the devices you want to update, then click on Actions > Software Image > Update Image.  You’ll be given the option to either distribute the new images immediately or to wait until a specific time to start the process.  Different upgrade jobs can be configured for different device groups as well.

Here, I’ve set DNAC to distribute images on Saturday the 19th at 1pm local time for all my sites.  This step is just the file copy, so no changes are made to the devices at this time.  The file copy process is tolerant of slow WAN connections – we’ve tested it in our lab and found that it’ll happily work even over a 64k link (though it’ll take quite a while) – but poor-quality connectivity will cause it to fail.  Finally, once the image is copied to the target devices, a hash check is performed to ensure the image hasn’t been corrupted.

The next step is to activate the image.  Activation here means ‘install the image and reboot the device’.

Like the distribution process, DNAC can either install immediately or wait until a scheduled time.  Note that for IOS XE devices, this process performs a full install of the image rather than just copying the .bin file over.  Once the software activation is complete, the devices will show their status in the inventory screen. As you can see, DNA Center’s software image management capability can save substantial time when updating software, as well as ensure that no devices miss updates through error or omission.

Prepared by: Chris Crotteau

Application Hosting and Guest Shell on Catalyst 9000

Two of the lesser-known yet extremely useful features present in the Catalyst 9000 and many other route/switch products in Cisco’s lineup are Guest Shell and Application Hosting. Both features rely on Cisco’s use of Linux as the underpinning of its various network OSes, as well as the x86 CPUs in these devices. As the Catalyst 9000 switches are the most common and accessible, we’ll focus on that platform for now.

Guest Shell

Guest shell allows the switch operator access to two alternate CLI environments – a Bash shell and a Python 3 interpreter. From these environments, scripts can be written and executed. Further, IOS features like EEM can call Python or Bash scripts, and conversely, these scripts can call into the IOS CLI or the NETCONF API to allow for a significant boost in automation capability.
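As a quick illustration – assuming IOX has been enabled on the switch and a recent IOS XE release (16.12 or later) where guest shell ships with Python 3 – calling into the IOS CLI from Python is as simple as using the built-in cli module. The interface used in the configure() call below is just an example:

Switch(config)# iox
Switch(config)# end
Switch# guestshell enable
Switch# guestshell run python3
>>> from cli import cli, configure
>>> print(cli('show version | include uptime'))
>>> configure(['interface GigabitEthernet1/0/10', 'description set-from-guestshell'])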

Application Hosting

Application hosting is the next step beyond guest shell – the ability to install and run 3rd-party Linux software packages directly on the device. Application hosting does differ a bit in implementation based on platform, with ISR routers, Cat9k switches, NX-OS switches, and IOS XR routers all somewhat different. However, the result is the same – one can run 3rd-party Linux applications directly on networking hardware. On the Catalyst 9000 series switches, application hosting uses Docker containers exclusively.

One concern regarding this is the possibility that an application could behave in a way that impairs the functionality of the networking device. Cisco has put hard limits on the CPU and memory resources that application hosting can use (limits that vary by platform), so that misbehaving applications cannot consume enough resources to compromise the device’s functionality.

Hold on – what’s a container?

One question that comes up regarding this is what a container is, and how it compares to server virtualization as done by VMware/KVM/Hyper-V/etc. While a detailed treatment of virtualization and containers is out of scope here, we can go over the basics. In a more traditional virtualization environment, the key piece of technology is a hypervisor – a dedicated OS whose job is to abstract the host’s hardware and provide the plumbing needed to run multiple OSes on a single server. A system built to run containers, on the other hand, runs only one OS – what we segment is the user space of that OS. This allows applications to share the same host and OS, but without the ability for apps in one container to interact with another except through the network.

Containers lack some flexibility compared to a full VM, but they have a number of strong points. The primary benefit of a container is its reduced resource requirements compared to a full VM. In a fully virtualized environment, the use of a hypervisor exacts a performance penalty. This comes from all the work the hypervisor must do – translating guest OS calls to the host hardware, managing contention for hardware resources, and handling networking for the guests – plus the overhead of running the guest OSes themselves. With a container, since there is no hypervisor, only one OS running, and segmentation done within the OS user space, application performance is often very close to bare metal.

Getting Started with iPerf3

For this post, we will be using iPerf3 as our example software. iPerf3 is a very common tool used to measure network performance – it supports a number of options regarding traffic type and total throughput that are useful in identifying network performance issues. Being able to use a switch as an always-on iPerf server provides a standard location to run performance tests against and eliminates the need to stand up a laptop or other device as an iPerf server.


Before we get started, there are a few prerequisites to be aware of. Here’s what to know:

  1. Application hosting is not supported on the Cat9200 series switches. All other Cat9k switches do support application hosting.
  2. A dedicated SSD is needed to store and run the applications. On the Cat9300, the SSD is installed in a dedicated USB slot on the back of the stack master switch, while on the Cat9400 and 9600, an internal SSD is needed on each supervisor.
  3. You must be running IOS XE 16.12.1 or later for application hosting to be available.

Next, the container will need network access. For this, we have two options – bridge the container to the management port or to the front panel ports. In the former case, the container will need an IP address in the same subnet as the management interface and will only be reachable through the physical management port. For the front panel ports, a container can be bridged to a single VLAN or set up as a trunk for access to multiple VLANs; in this case, the container would not be reachable from the management port.

Once you have decided on network access, if the application you want to run is already available in container form, just download it to the switch SSD and begin configuring the switch. Otherwise, you’ll need a Linux system to build the container. For this, use Docker to pull the container image and save it as a .tar file like so:

LinuxPC$ docker pull mlabbe/iperf3
LinuxPC$ docker save mlabbe/iperf3 > iperf3.tar

Now, onto configuration!

First, create an app instance in the switch CLI:
Switch(config)# app-hosting appid iperf3

Next, choose whether to use the management port or the front panel ports, and give the container an IP address and its gateway. This example uses the management port:

Switch(config-app-hosting)# app-vnic management guest-interface 0
Switch(config-app-hosting-mgmt)# guest-ipaddress 192.168.0.200 netmask 255.255.255.0
Switch(config-app-hosting-mgmt)# exit
Switch(config-app-hosting)# app-default-gateway 192.168.0.1 guest-interface 0

Finally, we’ll want to pass some parameters to iperf3 – specifically, to run as a server, which ports to listen on, and to restart automatically unless manually stopped.
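Here’s a rough sketch of how those parameters could be supplied through the application’s Docker run options. The specific run-opts strings shown (restart policy, entrypoint arguments, and port) are illustrative assumptions – verify them against the container image’s documentation and your IOS XE release:

Switch(config)# app-hosting appid iperf3
Switch(config-app-hosting)# app-resource docker
Switch(config-app-hosting-docker)# run-opts 1 "--restart=unless-stopped"
Switch(config-app-hosting-docker)# run-opts 2 "--entrypoint 'iperf3 -s -p 5201'"
Switch(config-app-hosting-docker)# end

With the parameters in place, install, activate, and start the application from privileged EXEC mode: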

Switch# app-hosting install appid iperf3 package usbflash1:iperf3.tar
Switch# app-hosting activate appid iperf3
Switch# app-hosting start appid iperf3

Now iPerf3 is up and running as a server on the switch!
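To confirm, you can check the application’s state on the switch and then run a quick test from any PC with the iperf3 client installed (192.168.0.200 being the container address from the management-port example above):

Switch# show app-hosting list
LinuxPC$ iperf3 -c 192.168.0.200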

This is just one small example of how application hosting can be used to build additional functionality into the switch. The capabilities of this feature are really limited only by system resources.

By: Chris Crotteau

Automation With RoomKit

Recently, a customer came to us with an interesting problem with their Room Kits… 

The Challenge

Our client has many Cisco Room Kit installations, ranging from large training areas running the Room Kit Pro to smaller conference rooms and huddle spaces. Most multi-display video systems have two or more same-sized televisions, generally right next to one another: one screen shows the presentation, the other shows the active remote speaker. The smaller spaces, however, didn’t have enough wall real estate to support two large televisions the way the Room Kit Pro training areas did. 


Instead, a large primary display paired with a smaller secondary display was chosen:

When the displays are the same size and right next to one another, it doesn’t usually matter which one shows the presentation and which shows the speaker.

This looked good in concept; however, the challenge came in reserving the larger display for the most prominent media. When there was no presentation to show, it made sense to have the speaker on the larger screen, but once the speaker pulled up a presentation, the larger screen needed to show the presentation content. 

As illustrated in the photo, the active speaker (screen greyed for customer anonymity) remained on the larger display, and the presentation showed on the smaller display. Short of moving the HDMI cables by hand every time a presentation started, there was no built-in way in RoomOS to change this.  

Moreover, the layout needed to revert when a presentation stopped. For example:

Circumstance: Speaker Talking, No presentation

Desired Outcome: Larger TV shows Speaker; Smaller TV Blank

Circumstance: Speaker Talking, with presentation

Desired Outcome: Smaller TV shows Speaker; Larger TV shows presentation

Circumstance: Speaker Talking, was presenting, but stopped presenting

Desired Outcome: Speaker swaps back to larger TV; smaller TV becomes blank

Crossconnect’s challenge was to develop a way to ensure the presentation always ended up on the larger display with minimal user intervention. This presented a variety of complexities, since our client needed RoomOS to do this regardless of whether the presentation was shown via Webex, HDMI input, or Cisco Proximity (https://proximity.cisco.com/). Another complexity we faced was “How would a Room Kit even know how big a display is?”


How did we accomplish this?

Presented with the complex requirements of our client, we knew some custom work was needed. Over the course of two weeks, we developed our own RoomOS script that:

– Tracked the native resolution of the TV as a way to determine TV size

– Ensured that if only one input (the speaker) was present, it ended up on the larger TV

– Triggered logic when a presentation started to ensure the presentation always ended up on the larger TV.

– Triggered logic when a presentation stopped to ensure the speaker moved back to the larger TV.

– Accomplished all of this without any user input; it “just works”. For users who want manual control, Crossconnect built a button that allows manual display swapping, but this is purely optional

– Was extensible to systems with three or more displays for future use.

We had an “aha” moment on how to keep the presentation on the larger screen while working with the ‘Call’ and ‘Video Output Connector’ objects within the codec’s xAPI interface. We used these objects to gather information about the current monitors and to asynchronously monitor the device for the following: 

(1) Call status

(2) Call properties (is this a video call?)

(3) Content Sharing

This allowed us to determine which screen received the presentation at any given status change during the call, while monitoring for changes in real time. Initially, we wrote a program that walked the available xAPI endpoints both inside and outside of calls to determine which endpoints needed to be referenced for accuracy. 

The first portion of the program runs when the codec starts up. This initially finds a few key characteristics of the output connectors and determines which one will be the main display monitor.

Here is what that looks like: 

// Runs at codec startup: inventories the video output connectors and picks
// the highest-resolution screen as the main display.
// videoOutputDevices, resolutions, screenIDs, largestScreen, and screenChoice
// are module-level variables declared elsewhere in the macro.
function getResolution(data) {
  for (var set in data) {
    if (data[set].hasOwnProperty("Resolution")) {
      videoOutputDevices.push(data[set]);
      var pushMe = data[set].Resolution.Width + "x" + data[set].Resolution.Height;
      resolutions.push(pushMe);
      screenIDs.push(data[set].id);
    }
  }
  console.log("Video resolutions have been found: " + resolutions.join(", "));

  // Compare widths numerically to find the largest screen.
  for (var i = 0; i < resolutions.length; i++) {
    var width = parseInt(resolutions[i].split("x")[0], 10);
    var largestWidth = parseInt(largestScreen.split("x")[0], 10) || 0;
    if (width > largestWidth) {
      largestScreen = resolutions[i];
    }
    else if (resolutions[i] == largestScreen) {
      console.log("Multiple screens have been found: system will choose the first one found.");
    }
  }

  // Map the winning resolution back to its connector ID.
  for (var i = 0; i < resolutions.length; i++) {
    if (resolutions[i] == largestScreen) {
      screenChoice = screenIDs[i];
      break;
    }
  }
  console.log("Largest screen determined to be screen ID: " + screenChoice + " (" + largestScreen + ")");
  setDisplayUnimportant(screenChoice); // helper defined elsewhere in the macro
}



After this has been determined, the macro simply listens for either:

A) a change in output connectors, at which point it recalculates the metrics mentioned above, or 

B) a call starting, at which point the real meat of the program does its job. 

When a call connects, the macro checks whether the call has video capabilities and, if it does, begins the smart screen presentation features. 

import xapi from 'xapi'; // standard module import for RoomOS macros

// Returns true if the active call carries an incoming main video channel.
async function getVideoCall() {
  let isVideo = false;
  // status.get('Call') returns an array of active calls; make sure it is non-empty.
  const calls = await xapi.status.get('Call');
  if (calls && calls.length > 0) {
    const data = await xapi.status.get('MediaChannels Call');
    for (let index = 0; index < data[0].Channel.length; index++) {
      const channel = data[0].Channel[index];
      if (channel.Direction == 'Incoming' &&
          channel.Type == 'Video' &&
          channel.Video.ChannelRole == 'Main' &&
          channel.Video.Protocol != 'Off') {
        isVideo = true;
      }
    }
  }
  return isVideo;
}

It initially sets the larger screen to be the main display monitor and the smaller screen(s) as the secondary monitor(s). 

As soon as the call listener recognizes that a screen share has been sent locally (through HDMI or Proximity) or remotely (from any SIP-dialed device), it switches the screens so that the larger monitor shows the screen share and the other screens show the call audience. Once the screen share has ended, the larger screen takes back the main portion of the view layout the end user has chosen for the meeting. 
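To illustrate the general pattern – this is a simplified sketch using the public jsxapi interface rather than the client’s production macro, and the connector numbers and roles are placeholders – swapping monitor roles on presentation start and stop looks roughly like this:

import xapi from 'xapi';

// Placeholder connector IDs: 1 = larger display, 2 = smaller display.
const LARGE = 1, SMALL = 2;

// MonitorRole controls whether a connector shows the main video or the presentation.
async function setRoles(largeRole, smallRole) {
  await xapi.config.set(`Video Output Connector ${LARGE} MonitorRole`, largeRole);
  await xapi.config.set(`Video Output Connector ${SMALL} MonitorRole`, smallRole);
}

// When a share starts, send the presentation ('Second' role) to the large display.
xapi.event.on('PresentationStarted', () => setRoles('Second', 'First'));

// When the share stops, move the speaker ('First' role) back to the large display.
xapi.event.on('PresentationStopped', () => setRoles('First', 'Second'));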

We also provided a button to manually swap the screens if the end user wants a layout different from the one chosen programmatically. The macro automatically returns to the smart monitor behavior once the presentation has concluded.  

Once we had the concept working, it was time to show our client. They were absolutely thrilled with how this project turned out, and during the proof-of-concept meeting they stated their intention to expand the implementation. This is just one example of how Crossconnect approaches each client’s needs as a unique situation while creatively attacking the challenge. If you’re facing technology challenges without a clear resolution, reach out and let our team of engineers help you game-plan a solution that works for your company. 

Jeff Kronlage, CCIE #46110 – CEO of Crossconnect