KyleG
@kylegeisler.bsky.social
An Austin-based technologist blending strategy and deep technical expertise. Comfortable in boardrooms or terminals. When offline: drone racing, mountain biking. Innovation lives at the crossroads of expertise, curiosity, and risk.
Ever thought a deepfake could land a job? 🤖💼 Just saw a CEO grapple with this when a fake candidate applied! Are we ready for AI in HR? The ethical chaos behind those slick resumes is wild. I'm not a deepfake, but I do have impostor syndrome. Can we trust what we see?
February 2, 2026 at 7:51 PM
Reposted by KyleG
FinOps is the adult conversation your cloud strategy needs. Think of it as couples therapy for IT and Finance. For years, they've been talking past each other.
The Discipline Your Cloud Strategy Is Missing
<p>Cloud infrastructure has a nickname in the industry: Hotel California. You can check out anytime you like, but you can never leave. Most organizations discover this the hard way, usually around the time they realize they've also given their kids unlimited access to the minibar and pay-per-view. Engineering orders what it needs. Finance never sees it coming. By checkout, the bill is devastating.</p><p>That's not hyperbole. It's the default state of most cloud deployments.</p><p>Most organizations moved to the cloud in 2018 because "that's what you do." The first year looked fine. Costs were manageable. Then Year Two hit. The bill climbed 30%. Year Three? Another 40%. By Year Four, the CFO was asking why the infrastructure budget tripled while headcount stayed flat, and nobody in the room had an answer that didn't sound like "we stopped looking."</p><p>That's the gap FinOps fills. Financial Operations brings together finance, technology, and business teams to master cloud unit economics. Think of it as DevOps for your cloud bill: where DevOps automated deployment pipelines and broke down the silos between Development and Operations, FinOps does the same thing for cloud spending. It forces finance, engineering, and product to speak the same language and share accountability for costs they used to treat as someone else's problem.</p><blockquote>The FinOps Foundation breaks it into three pillars: <strong>Inform</strong>, <strong>Optimize</strong>, and <strong>Operate</strong>. Inform means visibility. You can't fix what you can't see, and most cloud environments are financial black boxes. Optimize means efficiency. Once you know where the money's going, you kill the waste. Operate means governance. You build <strong>systems</strong> that keep costs from spiraling again the moment you look away.</blockquote><p>Simple in theory. 
Brutal in practice.</p><h2 id="why-organizations-fail-at-cloud-cost-management">Why Organizations Fail at Cloud Cost Management</h2><p>Here's why most organizations fail: cloud spending is decentralized by design. Every team can spin up resources. Every developer can provision a database. Every product manager can greenlight a new service. The cloud's promise was agility, and agility means removing gatekeepers. But when you remove gatekeepers, you also remove accountability. Finance sees a $2 million monthly AWS bill with thousands of line items and no clear owner. Engineering sees infrastructure as an operational concern, not a budget concern. Nobody owns the problem, so the problem grows.</p><p>Add shadow IT to the mix and things get worse. Shadow IT is the uncontrolled proliferation of cloud resources that live outside normal processes and IT's typical sphere of control while nobody's watching. A data scientist spins up a GPU instance for a weekend experiment and forgets to shut it down. A product team provisions a test environment in GCP because AWS was "too slow to approve." Marketing buys a SaaS tool that integrates with three other cloud services, none of which IT knows about. Research from 2024 shows that 30-40% of IT spending in large enterprises is shadow IT. That's not a rounding error. That's an entire parallel infrastructure running outside your governance model.</p><p>The security angle makes it worse. 21% of cloud files contain sensitive data that may not be under proper governance. When teams bypass IT to move faster, they also bypass the security controls that keep your data from leaking. Shadow IT isn't just expensive; it's a compliance nightmare waiting to detonate.</p><h2 id="how-to-build-accountability">How to Build Accountability</h2><p>So how do you fix it?</p><p>Start with ownership. Decentralized spending requires decentralized accountability. Every team that can spin up infrastructure needs to own the cost of what they're running. 
That sounds obvious, but most organizations treat cloud costs like a shared utility bill. When nobody owns a specific number, nobody feels pressure to reduce it. The fix is tagging. Tag every resource with the team, project, and cost center that owns it. Make cost visibility part of your deployment workflow. If a developer can't tag it, they can't deploy it.</p><p>Then build dashboards that make the costs impossible to ignore. AWS Cost Explorer gives you raw data, but raw data doesn't drive behavior change. You need dashboards that show each team their monthly burn rate, their month-over-month trend, and how they compare to similar teams. Humans are competitive. When engineering Team A sees that Team B is running identical workloads for 40% less, they'll start asking why.</p><p>Set up anomaly detection and alerts. A runaway script that calls S3 a thousand times per second can burn through $125,000 a year in API fees before anyone notices. Anomaly detection catches it in hours, not quarters. AWS Cost Anomaly Detection uses machine learning to flag unusual spending patterns. It's not perfect, but it's better than learning about a $50,000 surprise three weeks after the bill closes.</p><p>Regular optimization reviews are the boring part that actually works. Monthly meetings where engineering, finance, and product review spending together. Not finger-pointing sessions. Collaborative reviews where you ask "why did this service cost 50% more this month?" and actually dig into the answer. Sometimes it's legitimate growth. Sometimes it's an idle RDS instance someone forgot about. You won't know until you look.</p><h2 id="the-cultural-problem-nobody-wants-to-admit">The Cultural Problem Nobody Wants to Admit</h2><p>But the hardest part isn't technical. It's cultural.</p><p>FinOps requires engineers to care about cost the way they care about performance. That's a tough sell. Engineers optimize for speed, reliability, and elegance. Cost is someone else's job. 
Except in the cloud, architectural decisions <em>are</em> cost decisions. Choosing a larger instance type costs more. Storing data in S3 Standard instead of Intelligent Tiering costs more. Running workloads in us-east-1 instead of us-west-2 can cost more depending on your egress patterns. Every technical choice has a financial consequence, and if engineers don't see that connection, they'll make expensive decisions without realizing it.</p><p>The fix isn't more training. It's incentives. According to the 2022 State of FinOps Report, 30% of organizations cite "getting engineers to take action" as their biggest FinOps challenge. You can't solve that with a Slack message telling people to "be mindful of costs." You solve it by making cloud cost efficiency part of their KPIs. Measure it. Track it. Reward teams that improve it. When cost optimization shows up in performance reviews, it stops being optional.</p><p>Talk to engineers in their language. They don't care that the CFO is unhappy about the AWS bill. They do care that their deployment is wasting 60% of provisioned capacity because nobody rightsized the instances. Frame cost optimization as an engineering problem, not a budget problem. Show them the metrics. Let them solve it.</p><h2 id="the-tools-necessary-but-not-sufficient">The Tools: Necessary But Not Sufficient</h2><p>The tools help, but they're not magic. AWS Cost Explorer gives you visibility. CloudZero ties costs to features and products, so you can see which parts of your application are expensive. Kubecost does the same thing for Kubernetes, breaking down costs by namespace, pod, and deployment. ProsperOps automates commitment management so you're not manually juggling Reserved Instances. Newer tools use AI to predict usage patterns and recommend optimizations before costs spike.</p><p>But tools without culture just give you expensive dashboards that nobody looks at. 
Flexera's 2025 report found that 84% of organizations struggle to manage cloud spend, and cloud budgets are exceeding limits by 17% on average. The problem isn't a lack of tooling. It's a lack of discipline.</p><h2 id="the-real-payoff">The Real Payoff</h2><p>FinOps is the adult conversation your cloud strategy needs. Think of it as couples therapy for IT and Finance. For years, they've been talking past each other. Finance wants predictability and control. IT wants flexibility and speed. Cloud gave IT what it wanted and left Finance holding a bill they can't explain. FinOps forces both sides to sit down, look at the data, and make decisions together.</p><p>The organizations that get this right see real results. Adoption of FinOps teams grew 8 percentage points year over year, and wasted cloud spend is trending downward as a result. The global cloud FinOps market was $13.4 billion in 2024 and is projected to hit $32.5 billion by 2033. Companies aren't investing billions in this because it's trendy. They're investing because it works.</p><p>But FinOps isn't a project. You don't "implement FinOps" and declare victory. It's an ongoing practice. Cloud costs shift with usage. New services launch. Teams change. The optimization you did last quarter stops being optimal this quarter. FinOps only works if you commit to doing it continuously, not as a one-time cleanup when the CFO panics.</p><h2 id="the-bottom-line">The Bottom Line</h2><p>The real question isn't whether you need FinOps. You do. Every organization running meaningful cloud workloads needs it. The question is whether you're willing to make the cultural shift it requires. You can buy all the tools you want, but if engineers don't own costs, if finance doesn't understand workloads, and if nobody's empowered to make trade-offs between speed and spend, you're just rearranging deck chairs while the bill keeps climbing.</p><p>Know your workloads. Know your costs. Know who owns what. 
If you do those three things, FinOps stops being a discipline you're "implementing" and starts being how you operate. And that's when the cloud bill finally starts making sense.</p>
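The article's rule of thumb — "if a developer can't tag it, they can't deploy it" — can be sketched as a simple pre-deploy gate. A minimal sketch in Python; the tag schema (`team`, `project`, `cost-center`) and the resource-dict format are assumptions for illustration, not any particular CI tool's API:

```python
# Minimal pre-deploy tagging gate: every resource in a deployment plan
# must carry the tags that make cost attribution possible.
# The required-tag set and resource format are illustrative assumptions.

REQUIRED_TAGS = {"team", "project", "cost-center"}

def missing_tags(resource: dict) -> set:
    """Return the required tag keys absent from a resource definition."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

def validate_plan(resources: list) -> list:
    """Collect human-readable violations; an empty list means deploy may proceed."""
    violations = []
    for r in resources:
        absent = missing_tags(r)
        if absent:
            violations.append(f"{r['id']}: missing {sorted(absent)}")
    return violations

plan = [
    {"id": "rds-analytics", "tags": {"team": "data", "project": "warehouse", "cost-center": "cc-41"}},
    {"id": "ec2-scratch", "tags": {"team": "data"}},  # weekend experiment, never tagged fully
]

problems = validate_plan(plan)
if problems:
    print("Deploy blocked:")
    for p in problems:
        print(" ", p)
```

Wired into a CI pipeline, a check like this makes cost ownership a precondition of shipping rather than a cleanup chore after the bill arrives.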
netwit.io
January 27, 2026 at 4:34 PM
Reposted by KyleG
The Hidden Tax: Egress Fees, Reserved Instances, and Other Cloud Surprises
> This is part of a series of articles exploring Cloud Economics: the costs and impacts of cloud decisions, told from the perspective of a technology industry veteran. If you want to start at the beginning or see all the related essays, check out the series page.

People joke about the cloud being the Hotel California. What they mean is: it's genuinely great until you try to leave. By then, you've gotten so comfortable with the service and the features that the $90K egress fee feels almost reasonable. Leaving means losing something you've come to depend on. That's what gets lost in the analogy: the cloud's lobby really _is_ beautiful. The rooms really _are_ comfortable. That's what makes the checkout fees so cruel.

You start with a simple architecture. Compute, storage, maybe a database. Then one month your bill is $2,000 higher than expected. The next month, it's $3,000 higher. By month six, you're triple the forecast and your CFO is asking why the infrastructure budget looks like a hockey stick graph. You hunt for the culprit. It's rarely the compute you planned for. It's almost always something buried in the footnotes of your invoice. A line item you didn't know existed until it became the most expensive thing on the bill.

The cloud provider's playbook is elegant: offer low entry prices to get your data in, then make the economics of leaving, or even just operating efficiently, punishingly complex. This isn't unique to one provider. All three major clouds (AWS, Azure, and Google Cloud) play the same game, though the details vary by geography and service.

Take egress fees. This is the bait-and-switch at the heart of the cloud business model. Uploading data to S3, Blob Storage, or Cloud Storage is free. Come on in! Load all your petabytes! But downloading that data? That's where the meter starts running. AWS charges **$0.09 per gigabyte** for outbound data transfer. Azure is nearly identical at **$0.087 per GB**. 
Google Cloud is slightly cheaper at **$0.08 per GB**, but all three are in the same ballpark. That sounds like rounding error until you do the math. Moving 50TB out of any of them costs between $4,000 and $4,500. Moving a petabyte across the board costs $80,000-$90,000. Just for the privilege of retrieving your own files.

This is intentional. It's called "lock-in by economics." You don't need legal contracts to keep customers when leaving costs a ransom payment. Data gravity is real, and Gartner estimates that egress fees represent 10-15% of total cloud spend for data-heavy organizations across all providers. For some trading platforms or media companies, it's closer to 25-35%. Most teams only discover this when they try to migrate or consolidate, at which point the bill arrives like a speeding ticket you didn't know you were earning.

Then there's Reserved Instance (RI) pricing. The pitch is seductive: "Commit to this instance type for three years and save up to 72%!" Who wouldn't take that deal? AWS, Azure, and GCP all offer similar discount structures for committed capacity, but the mechanics differ slightly. AWS Reserved Instances lock you into specific instance families. Azure Reserved VM Instances are similarly rigid. GCP's committed use discounts offer a bit more flexibility, with automatic sustained-use discounts on top.

The catch is that you're locking your infrastructure choices in amber. Technology moves fast. The m5.large instance you reserved in 2022 looked archaic by 2025, but you're still paying for it. AWS releases new instance families like Graviton and Trainium that offer better performance for less money, but your reservation is tied to the old hardware. You can't change regions. You can't change operating systems. You often can't even change tenancy models without penalties. The same dynamic applies across all three clouds: commitments are powerful discounts, but they're also anchors. 
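The egress math above is easy to sanity-check. A minimal sketch using the article's quoted per-GB rates (decimal units, 1 TB = 1,000 GB) rather than a live price sheet:

```python
# Egress cost at the article's quoted rates; these are the article's
# figures, not current published pricing.
EGRESS_PER_GB = {"aws": 0.09, "azure": 0.087, "gcp": 0.08}

def egress_cost(terabytes: float, provider: str) -> float:
    """Outbound transfer cost in dollars, using decimal TB (1 TB = 1,000 GB)."""
    return terabytes * 1_000 * EGRESS_PER_GB[provider]

# Moving 50 TB: roughly $4,000-$4,500 depending on provider.
print(round(egress_cost(50, "gcp")))     # 4000
print(round(egress_cost(50, "aws")))     # 4500
# Moving a petabyte (1,000 TB): $80,000-$90,000.
print(round(egress_cost(1_000, "aws")))  # 90000
```

Swap in your own transfer volumes to estimate what an exit or consolidation would actually cost before you commit to it.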
The result is a portfolio of "savings" that no longer saves anything. Organizations frequently end up paying for capacity they don't need or can't use because their workload evolved but their contract didn't. Savings Plans and Committed Use Discounts offer more flexibility. You commit to spend per hour or per compute dollar rather than to specific hardware, but the savings percentages are lower. It's a gamble: bet on your infrastructure staying static for 36 months, or pay a premium for the flexibility to change your mind.

For teams trying to be clever, there are Spot Instances on AWS, Spot VMs on Azure, and preemptible VMs on GCP. This spare capacity, sold at a 50-90% discount, sounds like a steal. The gotcha is the eviction notice. All three can terminate your instance with two minutes' warning if a higher-paying customer needs the capacity. It's great for batch jobs or stateless web servers. It's catastrophic for stateful databases or long-running processes that can't handle interruption. Yet I still see teams trying to run production databases on Spot pricing models to save a few bucks, only to act shocked when the instance vanishes during a traffic spike.

Storage has its own traps across all providers. S3 Glacier, Azure Archive, and Google Cloud's Coldline and Archive storage tiers all look incredibly cheap on a per-gigabyte basis. Teams dutifully move archive data there to save money. Then the auditors show up, or a data scientist wants to run a historical analysis, and they trigger a retrieval. Glacier retrieval costs **$0.03 per GB**, more than 7x the monthly storage cost. If you store data for a year and retrieve it once, you've likely wiped out all your savings. It's the "cheap" storage that becomes expensive the moment you actually need to use it.

And then there's the "death by a thousand cuts" API pricing. S3 charges $0.0004 per PUT request and $0.000004 per GET. Per call, it's negligible. But bad code scales really well in the cloud. 
I've seen a misconfigured backup script that called `ListBucket` 100 times per file instead of once per directory. It generated millions of API calls in a single night. A script running 1,000 GET requests per second runs up a bill of **$125,000 a year** just for API chatter. Nobody budgets for API calls, but they show up on the bill all the same. One AWS customer saw their bill spike from $63 to $834 in a single hour just from API requests. Azure and GCP have similar vulnerability patterns.

Disaster recovery (DR) is another area where reality collides with the spreadsheet across all three platforms. Every architect wants a multi-region active-active setup. It sounds robust. It's also wildly expensive. Replicating data across regions costs money ($0.02/GB on most providers). NAT gateways charge for outbound traffic ($0.045/GB). Load balancers charge by the hour. A true multi-region DR strategy can easily double your infrastructure costs regardless of which provider you choose. The hard choice isn't technical; it's financial. Do you pay the premium for resilience, or do you accept that "US-East-1 is down" means you are too?

Here's where it gets interesting: the real opportunity isn't choosing one cloud. It's architecting _across_ clouds with intention. Over 76% of enterprises have adopted a multi-cloud strategy, and the cost optimization opportunity is significant, but it requires a different mindset. Multi-cloud isn't a fool's errand if you approach it strategically.

The key is workload placement. Not all workloads cost the same across all clouds. GCP's baseline compute is roughly 25% cheaper than AWS for comparable instances, but AWS might be better for your database workload. Azure excels in certain enterprise scenarios with hybrid licensing advantages. The real win comes from matching workloads to providers based on actual cost-per-outcome, not just price-per-hour. You can also leverage competition. 
When you have workloads spread across multiple platforms, you have genuine negotiating leverage. Providers know that over-pricing on one service might push you to migrate workloads elsewhere. That leverage is worth real money in volume discounts, sometimes 15-25% beyond standard rates.

But multi-cloud cost optimization requires discipline. It starts with visibility. Advanced observability platforms like **Dynatrace, Datadog, or even Splunk** are critical here. They don't just monitor uptime; they map dependencies to costs. By using Dynatrace's Carbon Impact or Grail data lakehouse, you can actually see which specific microservices are driving your bill and, more importantly, whether that spend correlates to business value. If you can't trace a dollar of infrastructure to a dollar of revenue, you're just guessing.

For specialized cost management, tools like **nOps** and **CloudEagle** have emerged as highly rated solutions (4.7+ stars on G2) for taming this chaos. nOps excels at automated commitment management, effectively playing the RI market for you so you aren't stuck with shelfware. CloudEagle has carved out a niche in SaaS and procurement optimization, helping teams rationalize the dozens of other tools that clog the budget.

But mostly, you need a monthly hygiene check. Look for the zombies:

* **Untagged resources** (who owns this?)
* **Idle compute instances** (the dev server nobody turned off)
* **Unattached storage volumes** (orphaned from dead workloads)
* **Over-committed capacity** (paying for RIs/CUDs you aren't using)
* **Cross-region or cross-cloud data transfer** (misaligned workload placement)
* **Unused managed services** (database instances for canceled projects)
* **Inefficient data retrieval** (archive storage accessed too frequently)
* **Unoptimized load balancer routing** (using expensive instances for simple traffic)
* **API call runaway** (misconfigured monitoring or batch scripts)
* **Duplicate tools and services** (paying for three logging solutions when one works everywhere)

Any one of these can cost thousands per month. Together, they explain why your cloud bill is 30% higher than your forecast. Organizations that implement strict FinOps practices, like unified tagging, budgeting, anomaly detection, and cross-cloud tooling, often see cost reductions of 20-30% in Year 1 just by eliminating this waste.

The payoff isn't just financial. Better cost hygiene forces better architecture decisions. When you're tracking egress costs, you naturally design fewer unnecessary data transfers. When you're monitoring commitment utilization, you right-size your reservations. When you're comparing costs across clouds, you make better workload placement decisions.

Cloud providers have perfected the art of the introductory rate, except the intro period is measured in milliseconds. But the question isn't whether you're overspending. It's whether you're willing to build the discipline, tooling, and organizational practices to spend _intentionally_. The difference between chaos and optimization isn't technology. It's architecture. And that's something every organization can choose. 
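The "death by a thousand cuts" figure above follows directly from the quoted per-request rate. A quick sketch that treats the article's $0.000004-per-GET price as a given assumption rather than a current price sheet:

```python
# Annualized cost of a chatty script at the article's quoted per-GET rate.
GET_PRICE = 0.000004          # dollars per GET, as quoted in the article
REQUESTS_PER_SECOND = 1_000   # the runaway script's call rate
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

annual_requests = REQUESTS_PER_SECOND * SECONDS_PER_YEAR
annual_cost = annual_requests * GET_PRICE
print(f"${annual_cost:,.0f} per year")  # roughly the article's ~$125K figure
```

The same two-line arithmetic works for any per-unit fee on the bill: multiply a small rate by a machine-scale volume and the "negligible" line item stops being negligible.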
#### Sources

- Gartner: Egress Fees as % of Cloud Bill
- Dynatrace: Cloud Cost Optimization
- Hykell: Egress Costs Analysis
- nOps: G2 Reviews & Features
- CloudEagle: G2 Reviews & Features
- New Horizons: Multi-Cloud Cost Management
- Growin: Multi-Cloud Cost Optimization 2026
- Reddit: API Cost Spike Example
- AWS S3 Pricing
netwit.io
January 6, 2026 at 7:48 PM
The difference between pattern-matching and actual intelligence? Everything. Stop treating algorithms like oracles. They're tools, not wizards.
netwit.io/the-intellig...

A new publication project, netwit.io.
The Intelligence Illusion: Is Your AI Assistant Just a Very Expensive Autocomplete?
The artificial intelligence we have today is neither artificial nor intelligent—it's a powerful tool for statistical pattern matching that's been remarkably overhyped and frequently misunderstood.
netwit.io
October 28, 2025 at 4:02 PM