If you run workloads in Amazon EKS, you might have noticed a peculiar behavior: when apps in EKS pods make outbound connections to other servers, the target servers “see” the traffic coming from the EKS node’s IP, not the pod’s IP. This article explains why this behavior exists & how to change it if required.
Assuming you’re using the VPC CNI, this behavior occurs only if your EKS nodes are in private subnets & the target servers are in VPCs connected to the EKS VPC via VPC peering, Transit Gateway, Direct Connect, etc.
When EKS pods communicate with servers in the same VPC, the target servers will “see” the pod IP, not the node IP.
And when EKS pods in private subnets with NAT gateways communicate with public servers on the internet, the target servers see the traffic coming from the NAT gateway, as is the case for any non-EKS private AWS workload.
By default, each pod in your cluster is assigned a private IP from its VPC’s CIDR. Pods in the same VPC communicate with each other using these private IPs. When a pod communicates with any IP that isn’t within its VPC, the VPC CNI plugin translates the pod’s IP to the primary IP of its node’s primary ENI. This is why a server running in a VPC connected to the EKS VPC sees the node IP, not the pod IP.
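The VPC CNI controls this behavior through the AWS_VPC_K8S_CNI_EXTERNALSNAT environment variable on the aws-node DaemonSet. Setting it to true is what’s known as enabling “external SNAT”: the CNI stops translating pod IPs, so connected VPCs see the real pod IP. A minimal sketch, assuming you have kubectl access to the cluster:

```shell
# Enable external SNAT, i.e. disable the VPC CNI's source NAT,
# so pods keep their own IPs when talking to peered/connected VPCs.
# Note: return traffic must be routable back to pod IPs
# (e.g. via VPC peering or transit gateway routes).
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_EXTERNALSNAT=true

# Verify the setting took effect:
kubectl describe daemonset aws-node -n kube-system | grep EXTERNALSNAT
```

To revert to the default source-NAT behavior, set the variable back to false.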
The only exception to this “source NAT” behavior is a pod whose spec contains hostNetwork: true; its IP isn’t translated to a different address. This is the case for the EKS-managed kube-proxy & VPC CNI plugins: their pods’ IPs are the same as their node’s primary IP, so no translation is needed.
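You can see this exception for yourself with a throwaway host-network pod. A sketch, assuming kubectl access (the pod name & image are illustrative):

```shell
# Create a pod that shares the node's network namespace.
# Its pod IP will equal the node IP, so no source NAT applies.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-demo
spec:
  hostNetwork: true
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
EOF

# Compare the pod IP & node IP columns -- they should match:
kubectl get pod hostnet-demo -o wide
```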
Servers on the public internet always see your NAT gateway’s public IP. They’re unaffected by external SNAT.
A quick way to test this is to use a webhook from webhook.site instead of spinning up a server. curl the webhook from an EKS pod before & after enabling external SNAT; in both cases, the webhook will see the NAT gateway IP.
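The test can be run without touching any existing workloads by launching a one-off curl pod. A sketch, assuming kubectl access; the webhook ID below is a placeholder for your own unique URL from webhook.site:

```shell
# Replace with your own unique URL from webhook.site.
WEBHOOK_URL="https://webhook.site/<your-webhook-id>"

# Run curl from inside the cluster; webhook.site's request log
# will show the source IP it saw -- the NAT gateway's public IP,
# regardless of the external SNAT setting.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s "$WEBHOOK_URL"
```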
This article demonstrates how disabling the VPC CNI’s source NAT behavior can be useful when identifying the exact source of a network request originating from an EKS cluster matters within a company’s private network.
For example, consider apps in EKS pods authenticating against your self-hosted Active Directory servers, accessing files in central FTP/NFS servers, or using a centrally-hosted API gateway like Kong.
About the Author ✍🏻
Harish KM is a Principal DevOps Engineer at QloudX & a top-ranked AWS Ambassador since 2020. 👨🏻💻
With over a decade of industry experience as everything from a full-stack engineer to a cloud architect, Harish has built many world-class solutions for clients around the world! 👷🏻♂️
With over 20 certifications in cloud (AWS, Azure, GCP), containers (Kubernetes, Docker) & DevOps (Terraform, Ansible, Jenkins), Harish is an expert in a multitude of technologies. 📚
These days, his focus is on the fascinating world of DevOps & how it can transform the way we do things! 🚀