
AWS bills are sneaky. You log in, see the EC2, S3, and Lambda line items, but networking charges often lurk below them, adding chunks of spend you didn’t expect.
The best way to surface AWS networking waste is to slice your Cost and Usage Report (CUR) by “DataTransfer-” usage types, tag your resources, and monitor key paths like NAT and cross-AZ traffic. It’s a simple process, but it reveals the stuff most people miss.
Here’s your breakdown of AWS networking costs: what’s changed, what’s still costing you, and exactly how to optimize.


💰 Updated Pricing Snapshot
Here are the current price points and cost zones that trip people up. (Prices ~as of Sep 2025; your region might differ.)

Where People Lose Money & What To Do About It

Here are patterns I regularly uncover when I dig into CURs, plus the fixes that give the biggest returns.
1. Cross‑AZ chatter no one notices
If two services frequently talk across AZs (metrics, logs, cache sync, etc.), every GB is billed twice: $0.01/GB out of the source AZ plus $0.01/GB into the destination. Companies with microservices architectures feel this especially.
Fix: Co‑locate services in the same AZ if possible. Use cross‑AZ only when availability absolutely demands it. Or batch movement rather than constant sync.
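A quick back-of-envelope helper makes the double charge concrete (the $0.01/GB per-direction rate is the standard US East intra-region price; verify yours):

```python
# Cross-AZ traffic is billed $0.01/GB in EACH direction (us-east-1 list price).
CROSS_AZ_RATE_PER_GB = 0.01  # per direction; check your region

def cross_az_monthly_cost(gb_per_day: float, days: int = 30) -> float:
    """Monthly cost of shipping gb_per_day across AZs, billed in + out."""
    return gb_per_day * days * CROSS_AZ_RATE_PER_GB * 2

print(cross_az_monthly_cost(50))  # 50 GB/day of cache sync ≈ $30/month
```

Even a modest 50 GB/day of chatter quietly adds up to ~$30/month per service pair, which is why co-locating chatty services in one AZ pays off.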
2. Heavy NAT Gateway traffic
A lot of “just outgoing API calls” and data/update pulls get routed through NAT. That means you pay per GB processed plus the NAT hourly charge, and it scales faster than you think.
Fix: Use VPC endpoints for AWS services; cache or minimize external API calls; replace the NAT Gateway with self-managed NAT instances if traffic is light to moderate.
3. Misrouted traffic via public paths
Public load balancers or EC2 instances with public IPs often cause traffic to leave AWS’s internal network via the Internet Gateway (IGW) and come back in. That costs full egress (and inbound, where applicable).
Fix: Use internal load balancers for traffic within VPCs; use private IPs; eliminate unneeded public IPs; define proper routing so that internal traffic stays internal.
4. Not using free/low‑cost tiers and endpoints
Many teams pay for access to S3 or DynamoDB via NAT or internet gateways when they could use free gateway endpoints instead. PrivateLink for APIs might cost a little, but often less than the wasted egress.
Fix: Always evaluate whether you can swap to an endpoint. As usage scales, even small savings multiply.
5. Direct Connect under‑utilized or avoided out of fear
Lots of folks avoid Direct Connect because they think setup/port cost is too high, or because SLAs/time to provision seem scary. But when traffic is steady and high, it often pays off faster than expected.
Fix: Run your numbers. Use these formulas:
Cost_DX = (Port_Cost_per_Hour × Hours_in_Month) + (Egress_GB × DX_Rate_per_GB)
Cost_VPN_Egress = Egress_GB × Internet_Rate_per_GB (+ any VPN fixed costs)
Compare the two. If DX cost < VPN + internet egress cost at your usage volume, it’s time to switch.
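Here’s that comparison as a small Python sketch, using the US East list prices quoted elsewhere in this post ($0.30/hour for a 1 Gbps port, $0.02/GB DX egress, $0.09/GB internet egress) as defaults:

```python
def dx_monthly_cost(egress_gb, port_cost_per_hour=0.30, hours_in_month=730,
                    dx_rate_per_gb=0.02):
    """Direct Connect: fixed port-hours plus per-GB DX egress."""
    return port_cost_per_hour * hours_in_month + egress_gb * dx_rate_per_gb

def vpn_egress_cost(egress_gb, internet_rate_per_gb=0.09, vpn_fixed=0.0):
    """VPN/internet path: per-GB internet egress plus any fixed VPN fees."""
    return egress_gb * internet_rate_per_gb + vpn_fixed

def dx_pays_off(egress_gb):
    """True when Direct Connect beats the internet path at this volume."""
    return dx_monthly_cost(egress_gb) < vpn_egress_cost(egress_gb)

# At 10 TB/month: DX ≈ $219 + $200 = $419, vs ~$900 over the internet.
print(dx_pays_off(10_000), dx_pays_off(1_000))
```

At 10 TB/month DX wins comfortably; at 1 TB/month the fixed port charge still dominates, so the internet path remains cheaper.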
🔧 Breaking Down NAT Gateway Costs
Let’s get into the numbers. NAT Gateway pricing has two main components:
- Hourly Charge: $0.045 per NAT Gateway per hour (or part of an hour) it’s provisioned and available.
- Data Processing Charge: $0.045 per GB of data processed through the NAT Gateway.
These rates are for US East (N. Virginia) — check your region for variations. (Pricing as of September 2025; AWS rates are subject to change, so verify on the AWS VPC pricing page.)
For a single NAT Gateway running 24/7 in a 30-day month (about 730 hours), the hourly cost alone is:
$0.045 × 730 = $32.85
Add data processing: if you process 1,000 GB in a month, that’s another $45.
Total: $77.85
But wait — there’s more. If you’re using NAT Gateway for high availability, you’ll need one per Availability Zone (AZ). For three AZs, that’s:
3 × $32.85 = $98.55 (plus data processing across all)
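The arithmetic above folds into one small helper (rates assumed for US East, September 2025):

```python
NAT_HOURLY = 0.045   # $/NAT Gateway-hour (us-east-1; check your region)
NAT_PER_GB = 0.045   # $/GB processed through the gateway

def nat_monthly_cost(processed_gb, num_gateways=1, hours_in_month=730):
    """Hourly charge per provisioned gateway plus processing on every GB."""
    return NAT_HOURLY * hours_in_month * num_gateways + NAT_PER_GB * processed_gb

# One gateway + 1,000 GB ≈ $77.85; three idle HA gateways ≈ $98.55 before data.
print(nat_monthly_cost(1000), nat_monthly_cost(0, num_gateways=3))
```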
And don’t forget: data transfer out (DTO) to the internet is charged separately:
- First 10 TB/month: $0.09/GB
- Next 40 TB: $0.085/GB
- And so on.
Even traffic to AWS services like S3 via NAT incurs the processing fee — even if it’s in the same region. That’s where smarter routing helps.
💡 Alternatives to NAT Gateway for Cost Savings
Before jumping ship, optimize your NAT setup:
A. Consolidate NAT Gateways
If HA isn’t critical, use fewer NATs. But for production, one per AZ is recommended.
B. Monitor with VPC Flow Logs
Find your high-traffic sources (e.g., yum/apt updates, external log shipping) and route them smarter.
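For instance, a minimal top-talkers pass over flow log lines, assuming the default 14-field record format (the field layout matches the standard default format; the sample records themselves are fabricated):

```python
from collections import Counter

# Default VPC Flow Log format: version account-id interface-id srcaddr dstaddr
# srcport dstport protocol packets bytes start end action log-status
def top_talkers(flow_log_lines, n=5):
    """Sum bytes by destination address to find the heaviest NAT-bound flows."""
    by_dst = Counter()
    for line in flow_log_lines:
        fields = line.split()
        if len(fields) != 14 or fields[9] == "-":  # skip NODATA/skipped records
            continue
        by_dst[fields[4]] += int(fields[9])        # dstaddr, bytes
    return by_dst.most_common(n)

sample = [
    "2 123456789012 eni-0a1 10.0.1.5 52.216.1.10 443 49152 6 10 84000 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1 10.0.1.5 52.216.1.10 443 49153 6 5 42000 1600000000 1600000060 ACCEPT OK",
    "2 123456789012 eni-0a1 10.0.1.5 93.184.216.34 80 49154 6 2 1500 1600000000 1600000060 ACCEPT OK",
]
print(top_talkers(sample))  # the S3-looking destination dominates the bytes
```

If the top destinations turn out to be AWS service endpoints, that’s your cue to reach for VPC endpoints instead of NAT.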
C. VPC Endpoints
For traffic to AWS services like S3 or DynamoDB, use VPC endpoints:
- Gateway endpoints (S3/DynamoDB): Free — no hourly or data processing charges.
- Interface endpoints (other services): Small hourly fee (~$0.01 per AZ per hour) plus $0.01 per GB processed.
Savings example: If 80% of your NAT traffic is to S3, switching to a gateway endpoint could save roughly $0.045/GB × 80% of your traffic, since gateway endpoints carry no processing fee.
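Under that assumption (gateway endpoints are free, NAT processing is $0.045/GB), the monthly saving is a one-liner:

```python
NAT_PER_GB = 0.045  # us-east-1 NAT Gateway processing fee

def gateway_endpoint_savings(nat_gb_per_month, s3_fraction=0.8):
    """Dollars saved by moving S3-bound GB off NAT onto a free gateway endpoint."""
    return nat_gb_per_month * s3_fraction * NAT_PER_GB

# 1,000 GB/month through NAT with 80% S3-bound ≈ $36/month saved
print(gateway_endpoint_savings(1000))
```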
D. NAT Instances
For low-to-medium traffic, run your own NAT on an EC2 instance (e.g., t3.micro).
- Cost: ~$0.0104/hour for the EC2 instance alone (t3.micro, on-demand, US East)
- No separate processing fee
Savings: Up to 70–90% for <100 GB/day
But: You manage patching, scaling, and failover. (For high availability, add Auto Scaling Groups and Elastic Load Balancing, which can add ~$20/month in overhead.)
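To see where the 70–90% figure comes from, here’s a rough comparison, assuming a t3.micro at ~$0.0104/hour on-demand in US East and ignoring data transfer charges that hit both options equally:

```python
def nat_gateway_cost(gb_per_month, hours=730):
    """Managed NAT Gateway: hourly charge plus $0.045/GB processing."""
    return 0.045 * hours + 0.045 * gb_per_month

def nat_instance_cost(hours=730, instance_hourly=0.0104):
    """Self-managed NAT instance: EC2-hours only, no per-GB processing fee.
    Standard data-transfer charges apply to both paths, so they cancel out."""
    return instance_hourly * hours

def savings_pct(gb_per_month):
    gw = nat_gateway_cost(gb_per_month)
    return (gw - nat_instance_cost()) / gw * 100

print(savings_pct(0), savings_pct(1000))  # savings grow with traffic volume
```

Even with zero traffic the instance saves ~77% on the fixed hourly cost; the more GB you push, the closer savings get to the full processing fee.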
🌐 Scaling to Direct Connect: Hybrid Networking Costs
As your setup grows — especially with on-premises integration — NAT might not cut it. Enter AWS Direct Connect: a dedicated fiber connection from your data center to AWS.
When to Use
- High-volume traffic (>1 TB/month)
- Compliance needs
- Low-latency, hybrid apps
Pricing Breakdown (US Regions)
- Port Charges: $0.30/hour for 1 Gbps = ~$219/month
- Data Transfer Out: $0.02/GB to contiguous US
- Data Transfer In: Free
Example: 10 TB out/month:
$219 (port) + $200 (DTO) = $419
Via internet: ~$900+ in DTO alone
Hosted vs. Dedicated
- Hosted (via partners) for smaller capacities (e.g., $0.33/hour for 1 Gbps)
- Dedicated for 1–100 Gbps
Note: Partner cross-connect fees can add $500+/month depending on provider.
Setup Basics
- Create a virtual interface (VIF)
- Configure BGP peering
- Integrate with your VPC using private VIFs via a Virtual Private Gateway (VGW) or Transit Gateway
Gradual Migration Strategy
You don’t have to flip the switch overnight. Start by routing non-critical or bulk transfer traffic over Direct Connect. Keep NAT for internet-bound egress.
This hybrid approach lets you benchmark performance and cost without risking your production stack.
🛠️ Tools & Visibility: Because You Can’t Fix What You Don’t See
Visibility matters. That’s where being able to slice your CUR, tag your resources, and monitor “DataTransfer‑“ metrics comes in. If you use something like Cost Explorer + CUR + alerts, you’ll catch:
- Unexpected egress spikes
- New NAT/Gateway usage
- LB configuration changes increasing cost
Set alerts when egress for a service or AZ crosses a threshold. Tag everything (LBs, EC2, NAT Gateways, etc.) so you can attribute and act.
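As a sketch of that CUR slicing, here’s a pass over report rows that sums “DataTransfer” line items per team tag and flags anything over a threshold. The column names (lineItem/UsageType, lineItem/UnblendedCost, resourceTags/user:team) follow the real CUR schema, but the sample rows and the $50 threshold are made up:

```python
import csv
import io

def data_transfer_by_tag(cur_csv_text, threshold=50.0):
    """Sum 'DataTransfer' line-item costs per team tag; flag teams over threshold."""
    totals = {}
    for row in csv.DictReader(io.StringIO(cur_csv_text)):
        if "DataTransfer" not in row["lineItem/UsageType"]:
            continue  # keep only data-transfer usage types
        team = row.get("resourceTags/user:team") or "untagged"
        totals[team] = totals.get(team, 0.0) + float(row["lineItem/UnblendedCost"])
    alerts = sorted(t for t, cost in totals.items() if cost > threshold)
    return totals, alerts

sample = """lineItem/UsageType,lineItem/UnblendedCost,resourceTags/user:team
USE1-DataTransfer-Out-Bytes,62.10,payments
USE1-DataTransfer-Regional-Bytes,12.40,payments
BoxUsage:t3.micro,8.00,payments
USE1-DataTransfer-Out-Bytes,4.75,search
"""
totals, alerts = data_transfer_by_tag(sample)  # payments crosses the $50 bar
```

In practice you’d run the same grouping via Athena over the CUR in S3, but the logic is identical: filter on usage type, group by tag, alert on the threshold.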
🧭 Parting Thoughts
- Visibility is step one — know exactly where your bytes are flowing.
- Fix what’s cheap first — VPC endpoints, internal routing, NAT optimizations often give big wins with little effort.
- Consider Direct Connect when your usage demands it — don’t fear the fixed cost if you have steady, predictable data volumes.
- Review every 3‑6 months — network usage patterns shift (new services, traffic patterns, etc.), so what was optimal before may leak cost later.