If you’re keeping up with new services from AWS, you’ve probably heard about the new security monitoring tool: GuardDuty. You’ve probably also noticed the whole new family of Elastic Load Balancers (v2), which includes Network Load Balancers (NLB). Deploying these two new services together may generate some unexpected results - and here’s why.
GuardDuty is a new security monitoring service from AWS. It analyses CloudTrail logs, VPC flow logs and DNS logs from your VPC (if you’re using the AWS-provided DNS in your VPC) and generates “findings” when suspicious traffic reaches or is generated from your network, or when your users act out of character. Side note: GuardDuty will work even if you don’t enable CloudTrail or VPC flow logs - this data is still gathered by AWS behind the scenes, and GuardDuty can access it without you having to pay for (and therefore being able to see) those logs.
GuardDuty is region-specific, so you have to enable it in each region separately - AWS recommends enabling it in all regions, so it can monitor unusual activity everywhere (for example, an instance being launched in a region that’s never been used before). It will then analyse the logs from that region and populate its findings within the AWS Console. You can also use CloudWatch Events rules to perform an action when a new finding appears - send an email, execute a Lambda function, etc.
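To make the CloudWatch Events integration concrete, here’s a minimal sketch of a Lambda handler that such a rule could invoke. The event shape (`detail` containing the finding, with `type` and `severity` fields) follows the GuardDuty finding format; the severity threshold of 4.0 (GuardDuty’s “medium”) and the idea of only escalating above it are my assumptions, not anything GuardDuty mandates.

```python
import json

def lambda_handler(event, context):
    """Sketch of a Lambda triggered by a CloudWatch Events rule matching
    GuardDuty findings. Assumes the finding sits under event["detail"]
    with "type" and "severity" fields, per the GuardDuty finding format."""
    finding = event.get("detail", {})
    finding_type = finding.get("type", "UNKNOWN")
    severity = finding.get("severity", 0)

    # Escalate medium/high severity only (4.0+ on GuardDuty's 0.1-8.9 scale).
    # The threshold is an assumption - tune it for your environment.
    if severity >= 4.0:
        # A real handler might publish to SNS here; we just log and return.
        print(json.dumps({"escalate": True, "type": finding_type}))
        return {"escalated": True, "type": finding_type}
    return {"escalated": False, "type": finding_type}
```

From here you could fan out to SNS, a ticketing system, or an auto-remediation step - the handler above only shows the filtering logic.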
Network Load Balancers
Network Load Balancers are one of the two new ELBv2 options from AWS (the other being Application Load Balancers). The “old” Classic Load Balancers could act on two layers of protocols - TCP and HTTP/HTTPS. They were good at their job, but because they handled two separate types of load balancing, they couldn’t offer features specific to only one of them. That’s where ELBv2 comes in - Application Load Balancers take care of HTTP/HTTPS, while Network Load Balancers deal with TCP. Thanks to that, ALBs can offer features like URL-based routing, while NLBs can have static IPs and be much faster. For a full comparison of features, see the AWS website.
One of the cool features of NLBs is that they preserve the original IP address of the caller - so you don’t have to use Proxy Protocol or X-Forwarded-For headers to find out who’s accessing your content. This is likely done via direct routing on the NLB (though AWS doesn’t say so officially). It means that the source IP address of the traffic coming from your NLB into your VPC will remain the original public IP - for all intents and purposes it will look like an external IP address is accessing resources within your VPC. That’s how it will appear in VPC flow logs and to network ACLs and security groups.
Because of that, you will need to allow external IPs to access the EC2 instances behind the NLB, in both network ACLs and security groups. If you’re using the standard public-private division of your subnets, that means changing the NACLs on the private subnets to allow external traffic in and out. While this may freak out your security team, the resources within those subnets are still safe - assuming you configured the subnets correctly. If those instances have no public IP addresses, and routing to the internet is done via NAT (and not an Internet Gateway), they cannot be accessed from the internet. Even so, it opens the door to some unintended, problematic changes, so I’d personally recommend creating a separate set of subnets for those EC2 instances and leaving the rest of your private resources behind properly locked-down NACLs.
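The “still safe” reasoning above boils down to two conditions, which can be sketched as a small helper. This is purely illustrative - the function name and the idea of feeding it a subnet’s default-route target are mine, not an AWS API:

```python
def internet_reachable(has_public_ip, route_table):
    """Illustrative check for the reasoning above: an instance is only
    directly reachable from the internet if it has a public IP *and* its
    subnet's default route points at an Internet Gateway (igw-*) rather
    than a NAT gateway (nat-*). route_table maps destination CIDRs to
    target IDs, mimicking a route table's entries."""
    default_target = route_table.get("0.0.0.0/0", "")
    return has_public_ip and default_target.startswith("igw-")
```

An instance with no public IP behind a NAT gateway fails both conditions, which is why the opened-up NACL alone doesn’t expose it.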
Reading the above descriptions, you may have an inkling as to what the problem here may be. If you enable GuardDuty and use NLBs, the traffic coming through the NLB will look like it’s coming in directly from the internet (even though it’s not - the NLB is just “spoofing” the IP). But while you know that, there’s no way for GuardDuty to tell from the VPC flow logs. So it treats that traffic as if it were coming directly from the internet into your EC2 instances and generates its findings accordingly!
This will cause GuardDuty to generate some scary-looking false positives:
EC2 instance has an unprotected port which is being probed by a known malicious host
This is the one you may see quite often in this scenario. It means that an IP address known to be malicious is accessing your NLB - and the NLB is routing it to the traffic port on your EC2 instances. Since you essentially had to open that port to all traffic (see above), of course it looks like someone is poking directly at an unprotected port on your instance!
Double-check that the port listed in the finding is the one you’re expecting the NLB to route traffic to - if that’s the case, this finding is a straightforward false positive.
EC2 instance i-xxx is communicating with IP address x.x.x.x on the Tor Anonymizing Proxy network
As long as “Resource role” is listed as “Target” (i.e. the connection is initiated from Tor and not from your instance), it’s generally nothing to worry about. Someone is simply using Tor to access whatever you’re hosting behind the NLB.
From GuardDuty’s perspective, it looks like someone is accessing your instances directly from Tor, which (if true) could be a cause for concern - but since you know the traffic is going through the NLB, it’s not really a problem. Again, double-check the port listed - if it’s your NLB traffic port, you can dismiss this finding as a false positive.
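The two checks just described - resource role and local port - can be combined into one triage helper. The field names (`service.resourceRole`, `service.action.networkConnectionAction.localPortDetails.port`) follow the GuardDuty finding JSON for network-connection findings, but again, verify them against a real finding before relying on this:

```python
def is_nlb_traffic(finding, nlb_target_ports):
    """Sketch of triaging a network-connection finding: it's likely just
    NLB-forwarded traffic when the instance is the TARGET (the remote
    host initiated the connection) and the local port is one the NLB
    forwards to. Field paths assumed from the GuardDuty finding format."""
    service = finding.get("service", {})
    role = service.get("resourceRole", "")
    port = (service.get("action", {})
                   .get("networkConnectionAction", {})
                   .get("localPortDetails", {})
                   .get("port"))
    return role == "TARGET" and port in nlb_target_ports
```

If the role is “ACTOR” instead - your instance initiated the connection to Tor - the finding is *not* explained by the NLB and warrants real investigation.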
EC2 instance i-xxx is communicating with a disallowed IP address x.x.x.x on the list abc
You’ll only see this one if you uploaded a custom “threat list” into GuardDuty, with IP addresses of hosts you know to be malicious (or which you otherwise don’t want to deal with). It’s a variant of the first finding - a host that GuardDuty considers malicious is accessing your resources. Check that “Resource role” is listed as “Target” and the port is one of the traffic ports used by your NLB - if so, the traffic is being routed through the NLB and not into your EC2 instance directly (which shouldn’t have a public IP address if you configured your subnets correctly!). If you have a (short) list of IP addresses to which you don’t want to serve requests, you can add those to the NACL of the subnet your instances are sitting in. This will block them from reaching your instances and should make this finding go away.
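A sketch of building those deny rules, as parameter dicts you could pass to boto3’s `ec2.create_network_acl_entry()` (one call per dict, adding `NetworkAclId`). The starting rule number of 50 is an assumption - the only requirement is that the deny rules get *lower* rule numbers than your broad allow rule, since NACL rules are evaluated in ascending order and the first match wins:

```python
def nacl_deny_entries(blocked_ips, starting_rule_number=50):
    """Build parameter dicts denying inbound traffic from each blocked
    IP. Intended for ec2.create_network_acl_entry(); NetworkAclId is
    left to the caller. starting_rule_number=50 is a hypothetical choice
    that keeps deny rules ahead of a typical rule-100 allow."""
    entries = []
    for i, ip in enumerate(blocked_ips):
        entries.append({
            "RuleNumber": starting_rule_number + i,
            "Protocol": "-1",      # -1 means all protocols
            "RuleAction": "deny",
            "Egress": False,       # inbound rule
            "CidrBlock": f"{ip}/32",
        })
    return entries
```

Keep in mind the list really does need to be short - NACLs have a low default limit on the number of rules per ACL, so this approach doesn’t scale to large threat lists.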