Anyone can find an idle EC2 instance.
Surfacing the kind of waste that hides inside guardrail assessments, intermediate data-pipeline tables, cache write/read ratios, and cross-region inference routing — that takes depth. And depth takes a research team that goes looking where nobody else does.
In the last 90 days, ours did. 56 new detections. Four cloud and data platforms. Roughly one new detection every 36 hours. Every one of them now live for every PointFive customer.
Here's what shipped.
The Numbers
Between February 4 and April 27, 2026, our research team released 56 new detections to production — spanning AWS, Azure, GCP, and Snowflake. That works out to roughly one new detection every 36 hours.
Aggregated across our customer base, these new detections are surfacing ~$2M in monthly addressable savings — nearly $24M annualized. That's new opportunity surfaced solely by detections that didn't exist 90 days ago: before the remediation flywheel turns a single finding into a fix, before the next quarter's drop, before any of the existing 350+ detections in the catalog get re-run.
Where the $24M is coming from:
- AWS cloud-native (ElastiCache, DynamoDB, OpenSearch, RDS, SQS, EBS, Lambda) — ~53% of the total
- AI/ML (Bedrock, SageMaker, Azure OpenAI, Cognitive Services) — ~33% of the total, ~$8M annualized in AI alone
- Snowflake — ~6%, with the largest single Snowflake detection running across tens of thousands of tables
- Azure cloud-native (Site Recovery, Storage, App Configuration) — ~6%
- GCP — early footprint, room to grow
Per customer? On average, that works out to hundreds of thousands of dollars in newly identified annual savings every quarter — from research that didn't exist 90 days ago. And it compounds with every drop.
The pace matters. The spread matters. But what tells the real story is the category of waste these detections expose — most of it is the kind that traditional cost-visibility tools don't even look at.
Three Moments That Define the Quarter
1. Snowflake support went live — PaaS is now in scope
On February 4, we shipped our first-ever Snowflake detection. By the end of the quarter, seven Snowflake detections were in production — covering compute warehouse efficiency, storage tier selection, ingestion pipeline costs, table-level lineage, and clustering effectiveness.
This is bigger than seven detections. It's a category expansion. PointFive now optimizes both IaaS and PaaS — and we're already saving customers tens of thousands per integration. More data-platform coverage is on the roadmap.
2. AI went from zero to seventeen — and ~$8M annualized
Three months ago, our AI detection count was zero. Today it's 17 — spanning Bedrock, SageMaker, and Azure OpenAI. Together they're surfacing ~$663K in monthly addressable savings, or ~$8M annualized. Bedrock is the dominant contributor today, with SageMaker and Azure OpenAI rounding out the catalog.
These detections analyze the layers traditional FinOps tools never touch:
- Inference profile efficiency vs. workload complexity
- Prompt cache write/read economics
- Cross-region inference routing
- Guardrail assessment overhead vs. inference protected
- Custom-model storage with no invocation activity
- Endpoint utilization on GPU vs. CPU instances
- Notebook idle behavior
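One of the layers above, prompt-cache write/read economics, comes down to a simple break-even: each cached write costs a premium over a normal input token, and each cache read costs a discount, so caching only pays off when a prefix is re-read often enough. A purely illustrative sketch, with placeholder multipliers (not actual provider rates) expressed relative to the base input-token price:

```python
# Illustrative break-even for prompt caching. Multipliers are
# placeholders relative to the base input-token price, not real rates.

def min_reads_per_write(write_multiplier=1.25, read_multiplier=0.10):
    # Writing a cached prefix costs (write_multiplier - 1.0) extra;
    # each subsequent read saves (1.0 - read_multiplier) versus paying
    # full price. Caching breaks even when reads per write exceed the
    # ratio of premium to per-read savings.
    return (write_multiplier - 1.0) / (1.0 - read_multiplier)

threshold = min_reads_per_write()
observed_ratio = 0.15  # hypothetical reads per write from usage logs
print(observed_ratio < threshold)  # True: this cache is losing money
```

When the observed read/write ratio sits below that threshold, the workload is paying the write premium without earning back the read discount.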
AI is the fastest-growing line item in cloud spend, and the inefficiency is hidden in places that don't show up on a billing dashboard. Seventeen detections in 90 days is how you keep up.
3. The "Epic Drop" — 32 detections in a single release
In one mid-quarter release, we shipped 32 detections at once — the largest single drop in the history of the platform. It included the entire Bedrock, SageMaker, and ElastiCache categories, six new SQS detections, the first DynamoDB optimizations following AWS's late-2025 on-demand pricing change, the launch of Azure Site Recovery coverage, and three Snowflake additions.
It's the kind of release cadence that's hard to imagine if you're used to a "vendor announces one thing per quarter" rhythm. This is what purpose-built research delivers.
The Depth Most Tools Miss
Here are four examples of detections we shipped this quarter — described by what they catch, not how we catch them.
Excessive Bedrock guardrail overhead. A guardrail's per-text-unit assessment cost can quietly exceed the inference cost it's protecting. We surface the cases where over-application has flipped the economics.
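To make the economics concrete (this is an illustrative sketch only, not our detection logic, and every price below is a placeholder rather than an actual AWS rate): guardrails bill per text unit assessed, while inference bills per token, so a short prompt guarded by several policies can cost more to screen than to answer.

```python
# Illustrative only: compares a hypothetical guardrail assessment cost
# against the inference cost it protects. Prices are placeholders.

def guardrail_overhead_ratio(text_units, price_per_text_unit,
                             input_tokens, output_tokens,
                             price_in_per_1k, price_out_per_1k):
    guardrail_cost = text_units * price_per_text_unit
    inference_cost = (input_tokens / 1000) * price_in_per_1k \
                   + (output_tokens / 1000) * price_out_per_1k
    return guardrail_cost / inference_cost

# A short, heavily guarded request can flip the economics:
ratio = guardrail_overhead_ratio(
    text_units=4, price_per_text_unit=0.00075,
    input_tokens=200, output_tokens=150,
    price_in_per_1k=0.003, price_out_per_1k=0.015,
)
print(f"guardrail cost is {ratio:.0%} of inference cost")
```

A ratio above 100% is exactly the over-application case described above: the assessment costs more than the inference it protects.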
Snowflake intermediate-table lineage detection. Some permanent Snowflake tables sit in the middle of a pipeline — written exclusively from upstream sources, fully regenerable, but still paying for Fail-safe and extended Time Travel. We trace lineage to find them automatically.
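The shape of that lineage check can be sketched as follows. This is a toy illustration, not PointFive's actual detection: table names and metadata fields are hypothetical, and the real analysis traces Snowflake's access history rather than a hand-built graph.

```python
# Toy sketch: flag permanent tables that sit mid-pipeline (written only
# from other tracked tables, hence regenerable) yet still pay for
# Fail-safe and extended Time Travel. All names/fields are hypothetical.

def transient_candidates(tables, lineage):
    """tables:  {name: {"kind": "PERMANENT"|"TRANSIENT"}}
       lineage: {name: set of upstream table names that write it}"""
    out = []
    for name, meta in tables.items():
        upstream = lineage.get(name, set())
        # Regenerable: has upstream writers, all of them tracked tables.
        regenerable = bool(upstream) and upstream <= tables.keys()
        if meta["kind"] == "PERMANENT" and regenerable:
            out.append(name)
    return out

tables = {
    "raw_events":    {"kind": "PERMANENT"},  # source, not regenerable
    "stg_events":    {"kind": "PERMANENT"},  # mid-pipeline: flag it
    "mart_sessions": {"kind": "TRANSIENT"},  # already optimized
}
lineage = {"stg_events": {"raw_events"}, "mart_sessions": {"stg_events"}}
print(transient_candidates(tables, lineage))  # ['stg_events']
```

Converting a flagged table to TRANSIENT drops the Fail-safe charge entirely and caps Time Travel retention, which is where the savings come from.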
Azure OpenAI content safety filter rejections. When Responsible AI policies reject a request, you're still billed for input token processing. High rejection rates produce billed input with no useful output — wasted spend that doesn't show up anywhere obvious.
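The waste is simple arithmetic once you have the rejection rate. A minimal sketch with placeholder volumes and a placeholder per-token rate (not an actual Azure OpenAI price):

```python
# Illustrative arithmetic: monthly spend on input tokens for requests
# rejected by content filters. Volumes and rates are placeholders.

def rejected_input_spend(requests, rejection_rate, avg_input_tokens,
                         price_per_1k_input):
    rejected = requests * rejection_rate
    return rejected * avg_input_tokens / 1000 * price_per_1k_input

# e.g. 5M requests/month, 4% rejected, 1,200 input tokens each:
waste = rejected_input_spend(5_000_000, 0.04, 1200, 0.0025)
print(f"${waste:,.0f}/month billed with no usable output")
```

Because the spend is folded into ordinary input-token charges, it never appears as a distinct line item, which is why a rejection-rate view is needed to see it at all.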
Surplus CPU credit charges across T-family auto-scaling fleets. Burstable instance economics quietly invert at the fleet level. A single T-family ASG can spend more on CPU credit overages than an equivalent non-burstable fleet — only visible when you aggregate across the group.
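The fleet-level inversion looks like this in miniature. An illustrative comparison only: the hourly rates, surplus hours, and credit price below are placeholders standing in for real EC2 pricing, and the instance classes are examples.

```python
# Illustrative fleet-level comparison: a burstable (T-family) ASG with
# surplus CPU-credit overages vs an equivalent fixed-performance fleet.
# All rates are placeholders, not actual EC2 pricing.

HOURS = 730  # approximate hours in a month

def t_fleet_monthly_cost(instances, hourly_rate, surplus_vcpu_hours,
                         credit_price_per_vcpu_hour):
    base = instances * hourly_rate * HOURS
    overage = surplus_vcpu_hours * credit_price_per_vcpu_hour
    return base + overage

t_cost = t_fleet_monthly_cost(
    instances=20, hourly_rate=0.0416,   # burstable-class placeholder
    surplus_vcpu_hours=18_000,          # aggregated across the whole ASG
    credit_price_per_vcpu_hour=0.05,
)
m_cost = 20 * 0.0960 * HOURS            # fixed-performance placeholder
print(t_cost > m_cost)  # the "cheaper" burstable fleet costs more
```

Per instance, each overage looks negligible; it is only the sum across the auto-scaling group that reveals the burstable fleet has become the more expensive option.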
You won't find these on a cost-visibility dashboard. You won't find them in a generic FinOps tool. They live in the gap between "what your bill shows" and "what your infrastructure actually does." That gap is where DeepWaste lives.
What Makes This Cadence Possible
Three things, in this order:
- A research team that thinks like attackers. Our researchers come from cybersecurity backgrounds. Finding cost waste and finding security misconfigurations are the same kind of problem: you have to know where to look, you have to model the system more deeply than the operator does, and you have to assume the obvious things have already been checked. That mindset is why our detections find what others miss.
- A platform built for depth, not just breadth. Every PointFive detection is grounded in our cost-modeling layer — the same layer that lets us reason about table-level Snowflake economics, prompt-cache write/read ratios, and cross-region Bedrock routing in the same data model. Without that foundation, depth at this rate would be impossible.
- Customer signal driving the roadmap. Every detection in this drop came from one of three places: a real customer POC where we found something the existing toolset didn't catch, a research backlog priority targeted at coverage gaps, or a competitive bake-off where we wanted to close a known gap. The team treats customer environments as primary research material — not as test data.
Quarterly From Here
This is the first of what will become a recurring quarterly franchise: the PointFive Detection Drop. Every quarter, we'll publish what shipped, what categories expanded, and what the team is targeting next.
The next drop is already in motion.
If you want to see what these detections find in your environment, request a demo or see the product. Every detection in this post is live for every PointFive customer today.
PointFive runs deep — because the waste does.