Machine to Machine Security

Zero Trust in the Datacenter

How Machine to Machine security differs from User to Machine

Now that we’ve looked at user to machine security in the datacenter, it’s time to look at machine to machine security (also known as east-west security).  Its goals are quite different from those of user to machine security, and they will also depend on the types of applications and uses your datacenter will have.  For most datacenter environments, the primary goal of machine to machine security is to provide a last line of defense in case an intruder has managed to gain a foothold in your organization’s infrastructure.  Being a last line of defense is not a reason to ignore machine to machine security! 

Remember: One of the core parts of the zero trust philosophy is to expect that intrusions will happen or may even be happening right now. 

Without machine to machine security, an intruder who gains access to or control over a server has free rein to move laterally to other, more important machines that may contain valuable data – and to do so undetected (or at least until everything’s encrypted and you’re being asked for Bitcoin). Once we add machine to machine security, moving laterally within the datacenter becomes a much bigger challenge: working around security controls while avoiding detection takes time and skill, buying you enough time to detect the intrusion before it succeeds.  At a lower level, the goal of machine to machine security is to ensure that servers (whether bare metal or virtualized) only ever communicate with other specific servers, and only using the ports and protocols needed for application functionality.  Anything out of the ordinary should be logged, and certain traffic types should raise an alert of some variety when detected. 

An example of this would be two servers that only communicate via HTTPS – only port 443 should be allowed, and if one server attempts to open an SSH session to the other, the security team should be emailed immediately – bad things are afoot if that’s happening.
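As a sketch of that rule (all hostnames, ports, and actions here are hypothetical, chosen only to illustrate the policy), the allow/deny/alert decision could be modeled like this:

```python
# Hypothetical allow-list for a pair of servers that should only talk HTTPS.
# Any flow not explicitly allowed is denied; SSH attempts raise an alert.

ALLOWED_FLOWS = {
    # (source, destination): set of allowed TCP ports
    ("web-01", "app-01"): {443},
    ("app-01", "web-01"): {443},
}

ALERT_PORTS = {22}  # SSH between these hosts means something is wrong


def evaluate_flow(src: str, dst: str, port: int) -> str:
    """Return 'allow', 'deny', or 'alert' for a proposed connection."""
    if port in ALERT_PORTS:
        return "alert"  # e.g. email/page the security team
    if port in ALLOWED_FLOWS.get((src, dst), set()):
        return "allow"
    return "deny"  # everything that falls through gets logged


print(evaluate_flow("web-01", "app-01", 443))  # allow
print(evaluate_flow("web-01", "app-01", 22))   # alert
```

A real firewall expresses this as rules rather than code, but the ordering matters the same way: the alert condition is checked before the allow-list, so a suspicious port is never silently permitted.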

Chris Crotteau

How do we implement Machine to Machine security?

With the importance of machine to machine security now clear, it’s time to discuss how it can be implemented.  Before any hardware or software purchases are made, planning and design work is key.  Machine to machine security is complex – no way around it – and careful planning is essential to a successful deployment.  The first step is to build a data flow diagram: map out which machines should talk to which other machines and which ports should be allowed.  This will be the primary document used to build the security policy, so do not neglect it.  Next, determine as best as possible what the east-west throughput needs are.  Security throughput is expensive, and in the context of datacenter traffic flows it is potentially a substantial bottleneck. There are a couple of ways to effectively provide machine to machine security, but to start with, there’s one way this shouldn’t be done, and that’s with traditional security ACLs. 
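One lightweight way to capture that data flow map is as structured data rather than just a drawing, so it can later drive rule generation. The tiers, ports, and protocols below are hypothetical placeholders; a real map would come from the application owners:

```python
# A minimal machine-readable data flow map. Each entry documents one
# permitted flow: who talks to whom, on what port and protocol.
DATA_FLOWS = [
    {"src": "web-tier", "dst": "app-tier", "port": 8443, "proto": "tcp"},
    {"src": "app-tier", "dst": "db-tier", "port": 5432, "proto": "tcp"},
    {"src": "app-tier", "dst": "dns", "port": 53, "proto": "udp"},
]


def flows_to(dst: str):
    """List every documented flow terminating at a given tier."""
    return [f for f in DATA_FLOWS if f["dst"] == dst]


for f in flows_to("db-tier"):
    print(f'{f["src"]} -> {f["dst"]} {f["proto"]}/{f["port"]}')
```

Keeping the map in a reviewable format like this also makes it easy to diff when an application changes, which is exactly when security policies tend to drift.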

Note: While an ACL is a simple way to better secure things in a user to machine context, ACLs for machine to machine security are unwieldy and hard to manage.  This leads either to hard-to-troubleshoot connectivity problems or to user error accidentally leaving open things that shouldn’t be. 

The preferred tools are either physical or virtual firewalls – central management reduces the possibility of user error, and advanced security features mean that more effective filtering, logging, and alerting are available.  The simpler way to do east-west security is to segment based on server group.  In this kind of setup, like servers can communicate with each other directly but must traverse a firewall to communicate with other types of servers.  For example, a database server cluster can freely communicate with its other cluster members, but communications with a web server have to pass through a firewall.  This kind of segmentation tends to be easy to maintain and has minimal performance impact (assuming the firewall is sized correctly), since filtering is done on a limited scale and can be centralized in a single pair of security appliances.  No host involvement is needed and device count is low, so small teams can effectively manage datacenter segmentation this way.
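The group-based model boils down to one decision: same group means direct communication, different groups means the flow must match an explicit inter-group rule. A minimal sketch, with hypothetical group names and rules:

```python
# Sketch of group-based segmentation: members of the same group talk
# freely; cross-group traffic must match an explicit firewall rule.
SERVER_GROUPS = {
    "db-01": "database",
    "db-02": "database",
    "web-01": "web",
    "web-02": "web",
}

# Inter-group rules enforced at the central firewall pair:
# (source group, destination group) -> allowed ports.
FIREWALL_RULES = {("web", "database"): {5432}}


def is_permitted(src: str, dst: str, port: int) -> bool:
    src_g, dst_g = SERVER_GROUPS[src], SERVER_GROUPS[dst]
    if src_g == dst_g:  # intra-group: no firewall in the path
        return True
    return port in FIREWALL_RULES.get((src_g, dst_g), set())


print(is_permitted("db-01", "db-02", 3306))   # True  (same group)
print(is_permitted("web-01", "db-01", 5432))  # True  (explicit rule)
print(is_permitted("web-01", "db-01", 22))    # False
```

Note the trade-off the code makes visible: anything inside a group is implicitly trusted, which is exactly the gap microsegmentation closes.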

Complex datacenters and Machine to Machine security

For those organizations that provide multitenant services, or that have the budget and staff to handle a very complex datacenter environment, the best security is provided through microsegmentation – instead of segmenting based on server group and only firewalling communications between groups, microsegmentation firewalls every server from every other server.  The benefits are obvious – with every server’s traffic being inspected, the ability to detect and quarantine a compromised server before damage is done is much more robust.  This does come at the expense of significant deployment and maintenance complexity, though.  On the deployment side, microsegmentation can’t be done effectively with a pair of firewalls in the services leaf. Instead, a host-based system like VMware NSX, per-host deployments of a virtual firewall like the Palo Alto VM-Series, or a microsegmentation-oriented networking system like Cisco’s ACI will be needed to ensure security throughput scales with host count.  On the management side, some form of automated rule creation is necessary – each deployed VM has its own security rules, and manually adding those rules to each firewall instance every time a new VM is deployed is not practical.  This means building, testing, and maintaining a scripting infrastructure alongside everything else needed for proper microsegmentation. 
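The automated rule creation piece is the part small scripts handle well. As a sketch (the tags, templates, and rule format are hypothetical – a real system would push these through the firewall manager’s API), per-VM rules can be generated from a tag assigned at deploy time:

```python
# Sketch of automated per-VM rule generation for microsegmentation.
# Each workload tag maps to a template of the flows that role needs.
RULE_TEMPLATES = {
    "web": [{"dst_tag": "app", "port": 8443}],
    "app": [{"dst_tag": "db", "port": 5432}],
    "db": [],  # databases initiate nothing outbound in this sketch
}


def rules_for_vm(vm_name: str, tag: str):
    """Generate the allow rules a newly deployed VM needs from its tag."""
    return [
        {"src": vm_name, "dst_tag": t["dst_tag"], "port": t["port"], "action": "allow"}
        for t in RULE_TEMPLATES[tag]
    ]


# Every new VM gets its rules at deploy time; no manual firewall edits.
print(rules_for_vm("web-07", "web"))
```

The key property is that humans maintain a handful of role templates instead of per-VM rules, so the rule count scales with roles rather than with VM count.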

Combining the Machine to Machine strategies

Another approach to machine to machine security is a hybrid of both models – high-risk servers like internet-facing web servers are subject to microsegmentation, while other types of applications only have filtering between server groups.  This tames the potentially extreme complexity of a full microsegmentation environment for applications that pose less of a security risk, while keeping the security advantages for the servers most likely to end up compromised.
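The hybrid decision itself is simple enough to sketch: classify each workload by risk tier, then pick the enforcement model from the tier. The workload names and tiers below are hypothetical:

```python
# Sketch of the hybrid model: high-risk workloads get microsegmentation,
# everything else gets group-based filtering.
RISK_TIERS = {
    "internet-web": "high",   # internet-facing: most likely to be compromised
    "internal-app": "normal",
    "batch-report": "normal",
}


def segmentation_mode(workload: str) -> str:
    """Pick the enforcement model for a workload based on its risk tier."""
    return "microsegmentation" if RISK_TIERS[workload] == "high" else "group-based"


for w in RISK_TIERS:
    print(w, "->", segmentation_mode(w))
```

In practice the classification is the hard part – it should come out of the threat-modeling exercise, not be assigned ad hoc per server.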

Zero Trust Philosophy Index

This series on security philosophy will explore the areas of security that need to be addressed in order to make your plan a reality and discuss specific areas of focus on how to apply a zero trust mindset.

  • Datacenter
  • Route/switch
    • Features that ensure integrity of network operations
  • Wireless
    • Capabilities for detection and mitigation of RF based attacks
  • Endpoint
    • Network Access Control (NAC) – Ensuring that endpoint activity is controlled and that security threats are detected and mitigated before an exploit can occur.
  • Network Security
    • Ensuring that all traffic through the network is controlled and monitored for malicious activity
  • Cloud Security
    • Ensuring that user to cloud access is controlled and that cloud resources are appropriately provisioned and accounted for.
  • The Human Factor
    • Ensuring that end users are aware of security issues and responsible for their security choices.

What is a Zero Trust Philosophy?

Zero Trust: It’s a philosophy, not a plan

The Mindset

One of the current trends in IT security that gets a lot of press and discussion is the idea of zero trust.  Zero trust, however, is really a philosophy, not a plan of action.  Specifically, zero trust is the philosophy that all IT resources, whether internal or external, should be treated as untrusted or even potentially compromised.  While this philosophy is simple, applying it to a live environment can be anything but!  Adopting a zero trust mindset requires a holistic approach to security and good cooperation between all stakeholders in the organization.  It even extends beyond the technology infrastructure to employees and even organizational policies themselves. 

It’s important to keep in mind that the threat landscape is always changing – what may have been good practice five years ago may not be so today. This is what drove us at Crossconnect to develop a series of posts laying out how to adopt a zero trust philosophy in your organization.

Getting Started

This series will explore various aspects of technology infrastructure with an eye towards how things are built when done so with a zero trust mindset.  Before we get into those details, it’s always best to take some time to think about the big picture questions – many of the areas of security that we’ll talk about will have options that range from ‘very simple’ to ‘year-long project.’ Being able to figure out where effort needs to be made will go a long way towards creating an effective security infrastructure for your organization. 

Foundations of Zero Trust Philosophy

The first step in planning is to think about the capabilities of your organization and the threats you’re likely to face.  Many threats are industry- or organization-specific, but some are universal.  First, of course, is ransomware – probably the biggest general threat most organizations will face.  Fraud and theft also rank high among general threats.  Sometimes the organization’s data itself is the target – there are plenty of groups out there who want confidential information for any number of reasons, even just to leak it to the public.  And finally, bad actors may compromise your organization in order to gain access to a 3rd party’s network via yours – think contractors and other service providers.

Next, look at your internal capabilities.  Security is, unfortunately, time-consuming to manage.  As such, it gets hard to manage some solutions with limited staff and budget.  Consider what your organization is capable of managing and monitoring when looking at security products and services – a simple but well-managed system is going to be more effective than a very capable, yet complex and maintenance-intensive system that gets neglected.

Once you have a good understanding of your threats and capabilities, it’s time to build a plan.  The core of a security plan is to look at the applications in use, the files and information that need to be accessed, and the infrastructure they run on, then figure out who and what needs access.  The goal is to limit every device, user, and application to only the resources it needs. 
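That least-privilege mapping can start as something as simple as a table of which application may touch which resource; everything not listed is denied. All names below are hypothetical placeholders:

```python
# Sketch of the core planning exercise: map each application to the
# resources it needs, then grant nothing beyond that.
ACCESS_MAP = {
    "payroll-app": {"hr-database", "file-share-hr"},
    "intranet": {"web-content-store"},
}


def check_access(app: str, resource: str) -> bool:
    """Least privilege: allow only what the plan explicitly grants."""
    return resource in ACCESS_MAP.get(app, set())


print(check_access("payroll-app", "hr-database"))  # True
print(check_access("intranet", "hr-database"))     # False
```

The default-deny behavior of the lookup (an unknown application gets an empty set) is the zero trust posture in miniature: access exists only where the plan says it does.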
