The Hidden Tax: Egress Fees, Reserved Instances, and Other Cloud Surprises
> This is part of a series of articles exploring Cloud Economics: the costs and impacts of cloud decisions, told from the perspective of a technology industry veteran. If you want to start at the beginning or see all the related essays, check out the series page.
People joke about the cloud being the Hotel California. What they mean is: it's genuinely great until you try to leave. By then, you've gotten so comfortable with the service and the features that the $90K egress fee feels almost reasonable. Leaving means losing something you've come to depend on. That's what gets lost in the analogy: the cloud's lobby really _is_ beautiful. The rooms really _are_ comfortable. That's what makes the checkout fees so cruel.
You start with a simple architecture. Compute, storage, maybe a database. Then one month your bill is $2,000 higher than expected. The next month, it's $3,000 higher. By month six, you're at triple the forecast and your CFO is asking why the infrastructure budget looks like a hockey stick.
You hunt for the culprit. It's rarely the compute you planned for. It's almost always something buried in the footnotes of your invoice. A line item you didn't know existed until it became the most expensive thing on the bill.
The cloud provider's playbook is elegant: offer low entry prices to get your data in, then make the economics of leaving, or even just operating efficiently, punishingly complex. This isn't unique to one provider. All three major clouds (AWS, Azure, and Google Cloud) play the same game, though the details vary by geography and service.
Take egress fees. This is the bait-and-switch at the heart of the cloud business model. Uploading data to S3, Blob Storage, or Cloud Storage is free. Come on in! Load all your petabytes! But downloading that data? That's where the meter starts running. AWS charges **$0.09 per gigabyte** for outbound data transfer. Azure is nearly identical at **$0.087 per GB**. Google Cloud is slightly cheaper at **$0.08 per GB**, but all three are in the same ballpark. That sounds like rounding error until you do the math. Moving 50TB out of any of them costs between $4,000 and $4,500. Moving a petabyte costs $80,000 to $90,000, depending on the provider. Just for the privilege of retrieving your own files.
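To make the math concrete, here's a back-of-the-envelope calculator using the list prices above. Real invoices tier down at volume, so treat these as upper bounds:

```python
# Egress cost at the list prices quoted above.
# Real bills apply tiered volume discounts, so these are upper bounds.
RATES_PER_GB = {"aws": 0.09, "azure": 0.087, "gcp": 0.08}

def egress_cost(terabytes: float, provider: str) -> float:
    """USD to move `terabytes` of data out of the given provider."""
    return terabytes * 1_000 * RATES_PER_GB[provider]

for tb in (50, 1_000):  # 50 TB, then a full petabyte
    for cloud in RATES_PER_GB:
        print(f"{tb:>5} TB out of {cloud:<5}: ${egress_cost(tb, cloud):>9,.0f}")
```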
This is intentional. It's called "lock-in by economics." You don't need legal contracts to keep customers when leaving costs a ransom payment. Data gravity is real, and Gartner estimates that egress fees represent 10-15% of total cloud spend for data-heavy organizations across all providers. For some trading platforms or media companies, it's closer to 25-35%. Most teams only discover this when they try to migrate or consolidate, at which point the bill arrives like a speeding ticket you didn't know you were earning.
Then there's Reserved Instance (RI) pricing. The pitch is seductive: "Commit to this instance type for three years and save up to 72%!" Who wouldn't take that deal? AWS, Azure, and GCP all offer similar discount structures for committed capacity, but the mechanics differ slightly. AWS Reserved Instances lock you into specific instance families. Azure Reserved VM Instances are similarly rigid. GCP Committed Use Discounts offer a bit more flexibility, with automatic sustained-use discounts layered on top.
The catch is that you're locking your infrastructure choices in amber. Technology moves fast. The m5.large instance you reserved in 2022 looked archaic by 2025, but you're still paying for it. AWS keeps shipping new silicon, Graviton for general compute and Trainium for ML, that offers better performance for less money, but your reservation is tied to the old hardware. You can't change regions. You can't change operating systems. You often can't even change tenancy models without penalties. The same dynamic applies across all three clouds: commitments are powerful discounts, but they're also anchors.
The result is a portfolio of "savings" that quietly becomes a liability. Organizations frequently end up paying for capacity they don't need or can't use because the workload evolved but the contract didn't. Savings Plans and Committed Use Discounts offer more flexibility: you commit to an hourly spend rather than specific hardware, but the savings percentages are lower. It's a gamble: bet on your infrastructure staying static for 36 months, or pay a premium for the flexibility to change your mind.
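The break-even logic is worth sketching. A commitment at discount d beats on-demand only while utilization stays above 1 - d. The rates below are hypothetical placeholders, not vendor quotes:

```python
# When does a committed-capacity discount actually save money?
# Hypothetical rates; real RI/Savings Plan pricing varies by instance
# family, region, term, and payment option.
ON_DEMAND_HOURLY = 0.096   # illustrative on-demand rate, $/hr
COMMIT_DISCOUNT = 0.60     # illustrative 3-year commitment discount

committed_hourly = ON_DEMAND_HOURLY * (1 - COMMIT_DISCOUNT)  # paid used or not

def savings_vs_on_demand(utilization: float) -> float:
    """Fractional savings, given the share of reserved hours actually used."""
    return 1 - committed_hourly / (ON_DEMAND_HOURLY * utilization)

for u in (1.0, 0.7, 0.5, 0.4, 0.3):
    print(f"utilization {u:4.0%}: {savings_vs_on_demand(u):+6.1%} vs on-demand")
# The commitment wins only while utilization stays above 1 - discount (40% here).
```

At 40% utilization the "savings" hit zero; below that, the reservation costs more than paying on-demand for what you actually use.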
For teams trying to be clever, there are Spot Instances on AWS, Spot VMs on Azure, and Spot (formerly preemptible) VMs on GCP. This is spare capacity sold at a 50-90% discount, and it sounds like a steal. The gotcha is the eviction notice: all three providers can terminate your instance with roughly two minutes' warning when a higher-paying customer needs the capacity. That's fine for batch jobs or stateless web servers. It's catastrophic for stateful databases or long-running processes that can't handle interruption. Yet I still see teams running production databases on Spot pricing to save a few bucks, then acting shocked when the instance vanishes during a traffic spike.
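If you do run on Spot, the two-minute warning is at least machine-readable. On AWS, a scheduled interruption shows up in the instance metadata service. A minimal watcher, assuming it runs on a Spot instance with IMDSv2 and nothing beyond the standard library:

```python
import json
import time
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_token() -> str:
    # IMDSv2: fetch a short-lived session token before reading metadata.
    req = urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

def interruption_notice() -> dict | None:
    # This endpoint returns 404 until an interruption is scheduled.
    req = urllib.request.Request(
        f"{IMDS}/meta-data/spot/instance-action",
        headers={"X-aws-ec2-metadata-token": imds_token()},
    )
    try:
        return json.loads(urllib.request.urlopen(req, timeout=2).read())
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

while True:
    notice = interruption_notice()
    if notice:
        # Roughly two minutes to drain: checkpoint, deregister, exit cleanly.
        print(f"Interruption scheduled for {notice['time']}: draining now")
        break
    time.sleep(5)
```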
Storage has its own traps across all providers. S3 Glacier, Azure Archive, and Google Cloud's Coldline and Archive tiers all look incredibly cheap on a per-gigabyte basis. Teams dutifully move archive data there to save money. Then the auditors show up, or a data scientist wants to run a historical analysis, and they trigger a retrieval. Expedited retrieval from Glacier runs **$0.03 per GB**, more than 7x the monthly storage cost. Pull the archive back monthly and you've wiped out all your savings. It's the "cheap" storage that becomes expensive the moment you actually need to use it.
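The break-even is worth computing before you archive anything. A minimal sketch, assuming approximate list prices (hot-tier storage near $0.023/GB-month, archive near $0.004/GB-month; verify current rates for your region and tier):

```python
# How many retrievals per year erase the archive-tier discount?
# Approximate list prices; verify current rates for your region and tier.
HOT = 0.023        # $/GB-month, hot-tier object storage (approx.)
ARCHIVE = 0.004    # $/GB-month, Glacier-class archive tier (approx.)
RETRIEVAL = 0.03   # $/GB per retrieval (expedited-class, approx.)

annual_savings_per_gb = (HOT - ARCHIVE) * 12    # $0.228 per GB-year
break_even = annual_savings_per_gb / RETRIEVAL  # ~7.6 retrievals/year

print(f"Archive tier saves ${annual_savings_per_gb:.3f} per GB-year")
print(f"Discount is gone after {break_even:.1f} retrievals per year")
# Pull the data back monthly (12x/year) and 'cheap' storage beats hot on price no more.
```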
And then there's the "death by a thousand cuts" API pricing. S3 charges $0.005 per 1,000 PUT requests and $0.0004 per 1,000 GETs. Per call, it's negligible. But bad code scales really well in the cloud. I've seen a misconfigured backup script that called `ListBucket` 100 times per file instead of once per directory; it generated millions of API calls in a single night. A script running 1,000 GET requests per second issues roughly 31.5 billion calls a year, about $12,600 in request charges alone, and if those are PUTs it's closer to $158,000. Nobody budgets for API calls, but they show up on the bill all the same. One AWS customer saw their bill spike from $63 to $834 in a single hour just from API requests. Azure and GCP have similar vulnerability patterns.
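The fix for that `ListBucket` anecdote is structural, not heroic: enumerate a prefix once with a paginator and reuse the manifest. A minimal boto3 sketch; the bucket and prefix names are placeholders:

```python
# List a prefix once and reuse the result, instead of re-listing per file.
# Assumes boto3 credentials are configured; names below are placeholders.
import boto3

s3 = boto3.client("s3")

def list_keys(bucket: str, prefix: str) -> list[str]:
    """One paginated listing per prefix: about one LIST call per 1,000 objects."""
    keys: list[str] = []
    for page in s3.get_paginator("list_objects_v2").paginate(
        Bucket=bucket, Prefix=prefix
    ):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

# One listing for the whole backup run, not one per file.
manifest = list_keys("my-backup-bucket", "2025/11/")
for key in manifest:
    ...  # back up each object using the cached manifest
```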
Disaster recovery (DR) is another area where reality collides with the spreadsheet on all three platforms. Every architect wants a multi-region active-active setup. It sounds robust. It's also wildly expensive. Replicating data across regions costs money ($0.02/GB on most providers). NAT gateways charge per gigabyte processed ($0.045/GB, plus an hourly fee). Load balancers charge by the hour. A true multi-region DR strategy can easily double your infrastructure costs, regardless of provider. The hard choice isn't technical; it's financial. Do you pay the premium for resilience, or do you accept that "US-East-1 is down" means you are too?
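A crude monthly model shows why the spreadsheet balks. It reuses the unit prices above; the base-stack cost and traffic volumes are illustrative assumptions, not benchmarks:

```python
# Crude monthly model of active-active DR overhead.
# Unit prices from above; base-stack cost and volumes are assumptions.
REPLICATION_PER_GB = 0.02   # cross-region replication, $/GB
NAT_PER_GB = 0.045          # NAT gateway data processing, $/GB
LB_HOURLY = 0.025           # per load balancer, $/hr (approximate)
HOURS_PER_MONTH = 730

def dr_overhead(base_stack: float, repl_gb: float, nat_gb: float, lbs: int) -> float:
    second_region = base_stack  # active-active runs a full duplicate stack
    transfer = repl_gb * REPLICATION_PER_GB + nat_gb * NAT_PER_GB
    lb = lbs * LB_HOURLY * HOURS_PER_MONTH
    return second_region + transfer + lb

# $40K/month stack, 20 TB replicated, 5 TB through NAT, 4 extra load balancers:
print(f"DR adds ${dr_overhead(40_000, 20_000, 5_000, 4):,.0f}/month")
```

On a $40K/month stack, the duplicate region plus transfer overhead lands near $40,700 a month: slightly more than doubling, just as the rule of thumb warns.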
Here's where it gets interesting: the real opportunity isn't choosing one cloud. It's architecting _across_ clouds with intention.
Surveys put multi-cloud adoption above 76% of enterprises, and the cost optimization opportunity is significant, but it requires a different mindset. Multi-cloud isn't a fool's errand if you approach it strategically. The key is workload placement. Not all workloads cost the same across all clouds: GCP's baseline compute is roughly 25% cheaper than AWS for comparable instances, but AWS might be better for your database workload, and Azure excels in certain enterprise scenarios thanks to hybrid licensing advantages. The real win comes from matching workloads to providers based on actual cost-per-outcome, not just price-per-hour.
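Cost-per-outcome is a one-line calculation that most teams never run. Every number in this sketch is a hypothetical placeholder; the point is to benchmark your own workload on each candidate before trusting the hourly price:

```python
# Compare candidate instances by cost-per-outcome, not price-per-hour.
# Every figure below is a hypothetical placeholder, not a vendor quote.
def cost_per_million_requests(hourly_price: float, sustained_rps: float) -> float:
    return hourly_price / (sustained_rps * 3_600) * 1_000_000

candidates = {
    "cloud_a_8vcpu": (0.40, 2_400),  # ($/hr, measured req/s on YOUR workload)
    "cloud_b_8vcpu": (0.30, 1_500),  # cheaper per hour, slower on this workload
}
for name, (price, rps) in candidates.items():
    print(f"{name}: ${cost_per_million_requests(price, rps):.3f} per 1M requests")
# Here the 'cheaper' hourly instance costs about 20% more per request served.
```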
You can also leverage competition. When you have workloads spread across multiple platforms, you have genuine negotiating leverage. Providers know that over-pricing one service might push you to migrate workloads elsewhere. That leverage is worth real money in volume discounts, sometimes 15-25% beyond standard rates.
But multi-cloud cost optimization requires discipline. It starts with visibility. Advanced observability platforms like **Dynatrace, Datadog, or even Splunk** are critical here. They don't just monitor uptime; they map dependencies to costs. With Dynatrace's Carbon Impact or its Grail data lakehouse, you can actually see which specific microservices are driving your bill and, more importantly, whether that spend correlates to business value. If you can't trace a dollar of infrastructure to a dollar of revenue, you're just guessing.
For specialized cost management, tools like **nOps** and **CloudEagle** have emerged as highly rated solutions (4.7+ stars on G2) for taming this chaos. nOps excels at automated commitment management, effectively playing the RI market for you so you aren't stuck with shelfware. CloudEagle has carved out a niche in SaaS and procurement optimization, helping teams rationalize the dozens of other tools that clog the budget.
But mostly, you need a monthly hygiene check. Look for the zombies (a minimal hunt script follows the list):
* **Untagged resources** (who owns this?)
* **Idle compute instances** (the dev server nobody turned off)
* **Unattached storage volumes** (orphaned from dead workloads)
* **Over-committed capacity** (paying for RIs/CUDs you aren't using)
* **Cross-region or cross-cloud data transfer** (misaligned workload placement)
* **Unused managed services** (database instances for canceled projects)
* **Inefficient data retrieval** (archive storage accessed too frequently)
* **Unoptimized load balancer routing** (using expensive instances for simple traffic)
* **API call runaway** (misconfigured monitoring or batch scripts)
* **Duplicate tools and services** (paying for three logging solutions when one works everywhere)
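Here's what the hunt can look like on AWS with boto3 (Azure and GCP expose equivalent inventory APIs). It's deliberately read-only: it reports candidates, and deleting anything stays a human decision:

```python
# A minimal zombie hunt on AWS with boto3.
# Read-only: it reports candidates; cleanup stays a human decision.
import boto3

ec2 = boto3.client("ec2")

# Unattached EBS volumes: status 'available' means nothing is using them.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for v in volumes:
    print(f"orphaned volume {v['VolumeId']}: {v['Size']} GiB")

# Untagged instances: nobody can answer 'who owns this?'
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        if not inst.get("Tags"):
            print(f"untagged instance {inst['InstanceId']} "
                  f"({inst['State']['Name']})")
```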
Any one of these can cost thousands per month. Together, they explain why your cloud bill runs 30% higher than your forecast. Organizations that implement strict FinOps practices, like unified tagging, budgeting, anomaly detection, and cross-cloud tooling, often see cost reductions of 20-30% in year one just by eliminating this waste.
The payoff isn't just financial. Better cost hygiene forces better architecture decisions. When you're tracking egress costs, you naturally design fewer unnecessary data transfers. When you're monitoring commitment utilization, you right-size your reservations. When you're comparing costs across clouds, you make better workload placement decisions.
Cloud providers have perfected the art of the introductory rate, except the intro period is measured in milliseconds. But the question isn't whether you're overspending. It's whether you're willing to build the discipline, tooling, and organizational practices to spend _intentionally_. The difference between chaos and optimization isn't technology. It's architecture. And that's something every organization can choose.
#### Sources
Gartner: Egress Fees as % of Cloud Bill
Dynatrace: Cloud Cost Optimization
Hykell: Egress Costs Analysis
nOps: G2 Reviews & Features
CloudEagle: G2 Reviews & Features
New Horizons: Multi-Cloud Cost Management
Growin: Multi-Cloud Cost Optimization 2026
Reddit: API Cost Spike Example
AWS S3 Pricing