
Zero Trust in the Datacenter – Protecting Your Servers from Your Users

For the first part of our explorations of the zero trust philosophy, we’re going to look at the datacenter. 


It’s All in the Flow

When we look at the datacenter, we have two types of traffic flows, each of which needs to be examined from a security perspective.  First is user to machine security.  Protecting one's datacenter resources from users has always been a necessity, but the types of threats, and what we consider a user, have changed a lot over the years.  Second is machine to machine security.  This area of datacenter security is much newer and has historically been challenging and expensive to implement.  We'll focus on user to machine security for now – machine to machine security will be discussed in a future post.

Note: What we discuss here can easily be applied to servers located on-prem, co-located, or even in the public cloud. 

On to User to Machine Security

The primary type of user to machine security is what's commonly referred to as north-south security, where the focus is on Internet users.  Exposing necessary resources to the Internet is a requirement for obvious reasons, but Internet-based threats are omnipresent and can be considerably sophisticated.  It may seem obvious, but it bears repeating: any security policy should grant only the necessary access privileges.  For Internet-facing users, this is usually easy – modern websites/applications usually just require allowing HTTP/HTTPS traffic in on ports 80 and 443.  Legacy applications can complicate this, though, so always work with the application team to understand everything the application needs to function, and build access policies appropriate to your environment.
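As a sketch of that least-privilege, default-deny principle, here's what the core of such a policy looks like in Python.  The port list and function name are illustrative, not taken from any particular firewall product:

```python
# Default-deny sketch: only flows explicitly listed are permitted;
# everything else is dropped.  (Hypothetical example for illustration.)
ALLOWED_FLOWS = {
    # (protocol, destination port) pairs exposed to Internet users
    ("tcp", 80),    # HTTP
    ("tcp", 443),   # HTTPS
}

def permit(protocol: str, dst_port: int) -> bool:
    """Return True only if the flow matches an explicit allow rule."""
    return (protocol.lower(), dst_port) in ALLOWED_FLOWS
```

Anything not matching an allow rule, such as SSH on port 22 from the Internet, falls through to the implicit deny.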

Beyond simple access controls, it’s important to also consider what the users are inputting into the application. 

For example, putting malicious content in an HTTP POST request is a very common way of trying to get the application to give up information, grant inappropriate access, or otherwise misbehave in a way that benefits the attacker.  How to abuse application inputs is firmly out of scope for this post – whole books have been written on exploiting things at the application level.  Addressing this kind of abuse is also more complicated.  Port and protocol filtering is really a go/no-go type of rule, while inspecting inputs is much harder due to the open-ended nature of user input.  There are application best practices for sanitizing user inputs, but especially with proprietary applications, it's not always possible to apply them.  It's also better to stop malicious traffic before it ever touches the application.  For this, we most commonly use a web application firewall (WAF).

What is a WAF?

With a WAF, we can look directly at the payload of interesting packets and filter based on their contents.  For example, a field on a website that's meant for name input shouldn't ever have SQL syntax appearing in it.  On the WAF, a rule is created (using lots of regexes!) to identify anything like this and block it.  WAFs and similar application-specific security tools are, unfortunately, a Very Hard Thing to implement.  The nature of a WAF means that HTTPS traffic needs to be decrypted, which presents challenges in not breaking TLS.  Once that's been dealt with, building appropriate rules requires close collaboration between the security and application teams to ensure that the WAF is blocking everything it should be, and that rules are updated as applications change or new threats emerge.  The infamous log4j vulnerability is one that a good WAF rule can easily block, but building that rule requires good Java knowledge and an understanding of how the vulnerability is exploited.  To see what a sample WAF rule blocking log4j exploit attempts looks like, F5 has a ready-to-go iRule available.
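To make the idea concrete, here's a toy sketch of the kind of regex-based rules a WAF applies to a form field.  The patterns are deliberately simplified and nowhere near production-grade coverage – real rule sets like the ones mentioned above run to hundreds of patterns:

```python
import re

# Hypothetical WAF-style rules; names and coverage are illustrative only.
RULES = {
    # SQL keywords have no business appearing in a "name" field
    "sql_injection": re.compile(r"(?i)\b(select|union|insert|drop|or\s+1=1)\b"),
    # log4j/JNDI lookup strings like ${jndi:ldap://...}
    "log4shell": re.compile(r"(?i)\$\{\s*jndi\s*:"),
}

def inspect_field(value: str) -> list:
    """Return the names of any rules the input triggers (empty list = clean)."""
    return [name for name, pattern in RULES.items() if pattern.search(value)]
```

A request whose fields trigger any rule would be blocked before it ever reaches the application server.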

Other User to Machine Security Considerations

The next part of user to machine security is a somewhat newer topic – protecting your datacenter resources from your own users.  This is also referred to as internal segmentation.  Historically, trying to firewall your users off from your datacenter was difficult, expensive, and of limited value.  Times have changed, and we're at a point where your users can be as dangerous to the business as an Internet-based threat.  Studies of where attacks gain their initial foothold show that 90% or more of attacks begin with a user opening a malicious email or running a bad executable.  Once running, the malware will crawl the network looking for vulnerabilities, gain a foothold in the datacenter, and then go to work exfiltrating data, encrypting it, or otherwise disrupting business operations.  With a traditional setup where internal users are considered trustworthy, their traffic is treated as good by default and isn't firewalled.  Given the statistic above, that's not a good stance to take.

Callout: Internal segmentation of some variety should be considered a need to have in a modern security-first network design. 

Implementing Internal Segmentation

The complexity of this task varies widely.  On the simple end of the spectrum, putting some basic security ACLs in place at the border device between the DC and the users is surprisingly effective for the effort it takes.  Most organizations should be looking at a firewall for this purpose, though.  Modern firewalls have many more options for inspecting traffic, detecting threats, and alerting IT staff when something is found.  Some features are actually easier to implement on an internal segmentation firewall, too.  SSL inspection is one of the best examples.  Cracking open TLS traffic is a notoriously resource-intensive task, and if done for Internet-bound traffic, it can easily overwhelm a firewall, break websites, or even cause HR issues when certain types of user communications are inspected (banking and health information are two major no-nos for deep packet inspection).  SSL inspection between users and the datacenter has none of these concerns.  Traffic volumes between users and the DC are usually well-known and don't change much, so firewalls can be sized to avoid performance issues.
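A hypothetical sketch of what the "basic security ACLs at the border" approach boils down to – addresses, subnets, and services below are made up for illustration:

```python
import ipaddress

# Hypothetical internal-segmentation rules: user subnets get only the
# specific datacenter services they need; everything else is denied.
USER_NET = ipaddress.ip_network("10.10.0.0/16")  # assumed user LAN

DC_RULES = [
    # (DC destination, protocol, port) the users may reach
    (ipaddress.ip_network("10.20.5.10/32"), "tcp", 443),  # ERP web front end
    (ipaddress.ip_network("10.20.8.0/24"), "tcp", 445),   # file servers
]

def user_to_dc_permitted(src: str, dst: str, protocol: str, port: int) -> bool:
    """Default-deny check for traffic crossing the user/DC border."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src_ip not in USER_NET:
        return False  # this rule set only covers the user zone
    return any(dst_ip in net and protocol == proto and port == p
               for net, proto, p in DC_RULES)
```

A firewall adds inspection, threat detection, and alerting on top of this basic match logic, but the permit/deny skeleton is the same.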

For user information concerns, interaction with internal resources and data is pretty much open season for whatever security you want to implement – there’s nothing in, say, an employee’s interactions with the organization’s ERP system that would be out of bounds to inspect, log, and audit.

Some Final Considerations

One final thing to consider is how to treat access for 3rd party contractors and how to properly categorize their traffic.  The prevailing wisdom these days is to treat 3rd party contractors as equivalent to Internet-facing users, as several high-profile intrusions were launched via a 3rd party with direct access to sensitive resources.  It's a little more involved, though – simply allowing ports 80 and 443 through to a list of servers isn't enough.  3rd party contractors may need specialized access, or a larger number of permit rules than either an employee or an Internet user, so close coordination with the 3rd party is required to keep access requirements to a minimum.  Due to the complexity of building network-based security policies suitable for contractors, another option we're seeing adopted more lately is deploying jump servers with enhanced access management software, such as what Bomgar or SecureLink provide.  This type of software gives IT staff additional capabilities, such as full session monitoring, access notification, and the ability to allow logins only at specific times or only when explicitly approved by the IT staff.  With this level of control on the jump server, other security policies can be built in a much more general way.  Since all users access resources via the jump server, access control rules only have to permit a small number of hosts from known subnets, and detailed application-based or port/protocol rules can often be omitted or simplified.
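The time-window and explicit-approval controls described above can be sketched as a simple policy check.  This is a toy model of the concept, not how Bomgar or SecureLink actually implement it:

```python
from datetime import datetime, time

def session_allowed(now: datetime,
                    window: tuple = (time(8, 0), time(18, 0)),
                    approved: bool = False) -> bool:
    """Permit a contractor session only inside the approved time window
    and only when IT staff have explicitly approved it (default-deny)."""
    start, end = window
    return approved and start <= now.time() <= end
```

Both conditions must hold, so an attacker who compromises a contractor's credentials outside business hours, or without a pending approval, still can't open a session through the jump server.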