PointFive Labs

Where the depth comes from.

PointFive Labs is the in-house research organization that designs every detection shipped on our platform. While most cost tools update a static rule set once or twice a year, Labs operates on a different rhythm: dozens of new detections every quarter, a roadmap driven by what we find in real customer environments, and depth that only purpose-built research can produce.

56 — New detections shipped, last 90 days

~$24M — Newly addressable annual savings, last quarter

~$8M — Of that, AI savings alone

~36 hrs — Average time between new detection releases

On average, our customers see hundreds of thousands of dollars in newly identified annual savings every quarter — from research that didn’t exist 90 days ago. And it compounds with every drop.

How Labs Works

Three things make this cadence possible.

Detection depth at this velocity is not the result of a bigger team. It is the result of a different operating model.

Cybersecurity DNA

Our researchers come from cybersecurity backgrounds. Finding hidden waste and finding hidden misconfigurations are the same kind of problem — you have to model the system more deeply than the operator does, and you have to assume the obvious has already been checked. We bring an adversarial mindset to a domain that has historically settled for thresholds and reports.

Customer Environments as Primary Research Material

Every detection in production traces back to something the team observed in a real customer footprint. We do not generate detections from a feature backlog. We find them in the gap between what billing data shows and what infrastructure is actually doing — and we ship them with the cost model required to quantify the impact.

Depth at Velocity

Most cost-optimization platforms ship one or two new detections per quarter. PointFive Labs ships dozens. We can do this because the underlying cost model is shared across every provider and service we cover — so a new insight in one domain compounds across the rest of the catalog instead of starting from scratch.

Coverage

Four research domains. One cost model.

Most tools specialize in one. We model all four in a shared framework, which is why a new insight in AI inference can extend the catalog for data warehouses the same week.

Infrastructure (IaaS)

Compute, storage, networking, and managed services across every major cloud — analyzed at the configuration level, not just the bill.

  • AWS — EC2, EBS, EFS, S3, Lambda, OpenSearch
  • Azure — Storage, Site Recovery, App Configuration
  • GCP — Dataflow, BigQuery, Compute

Platform Services (PaaS)

Managed services that hide cost behind operational abstractions — where over-provisioning and behavioral inefficiencies routinely escape traditional tooling.

  • Caching — ElastiCache provisioned and serverless
  • Messaging — SQS standard and FIFO patterns
  • Database — DynamoDB, RDS Multi-AZ behavior

Data Platforms

Modern data warehouses are the fastest-growing line item after AI. We optimize them at the table, warehouse, and pipeline level — including lineage-aware detection.

  • Snowflake — virtual warehouses, table lineage, ingestion
  • Databricks — coverage in active expansion
  • BigQuery — coverage in active expansion

Production AI

Inference behavior, model selection, prompt caching, guardrail overhead, and GPU utilization. The AI category went from zero to seventeen detections in a single quarter — surfacing ~$8M in newly addressable annual savings on its own.

  • AWS — Bedrock inference, SageMaker endpoints, custom models
  • Azure — OpenAI deployments, PTU economics, content-safety overhead
  • Routing, caching, guardrails, model selection, GPU utilization

The Depth Most Tools Miss

Anyone can find an idle EC2 instance.

Here are four examples of detections we shipped recently — described by what they catch, not how we catch them.

Excessive Bedrock Guardrail Overhead

When a guardrail's per-text-unit assessment cost quietly exceeds the inference cost it is protecting, the economics flip. We surface those cases. Most tools never look at guardrail spend in isolation.
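The flip is simple arithmetic. A minimal sketch of the per-request comparison, assuming placeholder rates (the per-text-unit and per-token prices below are illustrative, not actual Bedrock list prices):

```python
# Illustrative check for guardrail overhead on a single request.
# All prices are assumed placeholders, not real Bedrock rates.

TEXT_UNIT_CHARS = 1000  # guardrail assessments are billed per text unit


def guardrail_vs_inference(prompt_chars, input_tokens, output_tokens,
                           price_per_text_unit, price_per_1k_in, price_per_1k_out):
    """Return guardrail cost, inference cost, and whether the
    guardrail costs more than the inference it protects."""
    text_units = -(-prompt_chars // TEXT_UNIT_CHARS)  # ceiling division
    guardrail = text_units * price_per_text_unit
    inference = (input_tokens / 1000) * price_per_1k_in + \
                (output_tokens / 1000) * price_per_1k_out
    return guardrail, inference, guardrail > inference


# A short prompt against a cheap model: the guardrail dominates.
g, i, flipped = guardrail_vs_inference(
    prompt_chars=800, input_tokens=200, output_tokens=50,
    price_per_text_unit=0.00075,  # assumed
    price_per_1k_in=0.0005,       # assumed
    price_per_1k_out=0.0015)      # assumed
print(flipped)  # True: $0.00075 of guardrail vs $0.000175 of inference
```

The detection works at exactly this seam: once the per-request guardrail cost exceeds the per-request inference cost, every additional call makes the economics worse.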

Snowflake Intermediate-Table Lineage Detection

Some permanent Snowflake tables sit in the middle of a pipeline — written exclusively from upstream sources, fully regenerable, but still paying for Fail-safe and extended Time Travel. We trace lineage to find them automatically.
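The shape of that lineage check can be sketched in a few lines. Everything here is a hypothetical input — a real implementation would derive the edges from Snowflake query history rather than a hard-coded list:

```python
def regenerable_intermediates(edges, permanent_tables):
    """edges: (source, target) pairs from table-level lineage.
    An intermediate is written by upstream queries AND read by
    downstream ones. Permanent intermediates pay Fail-safe and
    extended Time Travel for data that is fully regenerable."""
    written = {target for _, target in edges}
    read = {source for source, _ in edges}
    return sorted((written & read) & set(permanent_tables))


# Hypothetical three-stage pipeline: raw -> staging -> fact.
edges = [("raw_events", "stg_events"), ("stg_events", "fct_sessions")]
print(regenerable_intermediates(edges, {"stg_events", "fct_sessions"}))
# ['stg_events'] — the staging table sits mid-pipeline; the fact table is terminal
```

The fact table is written but never read downstream, so it survives the filter; only the regenerable middle of the pipeline is flagged.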

Azure OpenAI Content Safety Filter Rejections

When Responsible AI policies reject a request, you are still billed for input token processing. High rejection rates produce billed input with no useful output — a category of waste that does not appear on any standard report.
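Quantifying that waste is a straightforward pass over a usage log. A sketch under assumptions — the request log and the per-token price below are invented for illustration:

```python
def rejected_input_waste(requests, price_per_1k_input_tokens):
    """requests: (input_tokens, was_rejected) pairs from a usage log.
    Rejected requests still bill input processing but return no
    completion, so their input tokens are pure waste."""
    rejected_tokens = sum(tok for tok, rejected in requests if rejected)
    waste = rejected_tokens / 1000 * price_per_1k_input_tokens
    rejection_rate = sum(1 for _, r in requests if r) / len(requests)
    return waste, rejection_rate


# Hypothetical hour of traffic: 3 of 10 requests blocked by content filters.
log = [(1200, False)] * 7 + [(1000, True)] * 3
waste, rate = rejected_input_waste(log, price_per_1k_input_tokens=0.01)  # assumed price
print(rate, waste)  # 0.3 rejection rate, $0.03 of billed input with no output
```

At a 30% rejection rate, nearly a third of input spend produces nothing — and none of it shows up as an error on a cost dashboard.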

Surplus CPU Credit Charges Across T-Family Auto-Scaling Fleets

Burstable instance economics quietly invert at the fleet level. A single T-family ASG can spend more on CPU credit overages than an equivalent non-burstable fleet — only visible when you aggregate across the group.
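The fleet-level comparison can be sketched as a single aggregation. Instance counts, hourly rates, and credit charges below are assumptions for illustration, not real AWS prices:

```python
def burstable_fleet_flipped(n, hours, burst_hourly, surplus_credit_spend,
                            fixed_hourly):
    """Compare a T-family auto-scaling group (on-demand rate plus
    aggregated surplus CPU-credit charges) against an equivalent
    non-burstable fleet. True means the burstable fleet costs more."""
    burst_total = n * hours * burst_hourly + sum(surplus_credit_spend)
    fixed_total = n * hours * fixed_hourly
    return burst_total > fixed_total


# Assumed 20-instance ASG over one month: $15 of surplus credit
# charges per instance tips the fleet past the non-burstable option.
print(burstable_fleet_flipped(
    n=20, hours=730,
    burst_hourly=0.0416,           # assumed burstable rate
    surplus_credit_spend=[15] * 20,
    fixed_hourly=0.0554))          # assumed non-burstable rate
# True
```

Per instance, the overage looks like noise; only the fleet-level sum reveals that the "cheaper" instance family is the more expensive choice.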

You will not find these on a cost-visibility dashboard. They live in the gap between what your bill shows and what your infrastructure actually does. That gap is where DeepWaste lives — and where Labs spends its time.

Recurring Franchise

The Detection Drop. Every Quarter.

Every quarter, Labs publishes a detection drop — a public summary of what shipped, what categories expanded, and what the team is targeting next. It is a window into how the platform is evolving, at the cadence the platform actually evolves.

Every detection in every drop is live for every PointFive customer the day it ships. There is no upgrade path. There is no premium tier. The depth is the product.

Read the latest drop

Q1 2026 Drop · Highlights

  • AI category: 0 → 17 detections, ~$8M annualized

    Bedrock, SageMaker, Azure OpenAI in a single quarter

  • Snowflake support: seven detections from day one

    PaaS now in scope alongside IaaS

  • 32 detections in a single release

    Largest single drop in platform history

  • ~$24M newly addressable annual savings

    Across the customer base, last 90 days alone

Why It Matters

Most cost tools have a backlog. Labs has a research engine.

Dimension | Most cost-optimization tools | PointFive Labs
Release cadence | 1–2 new detections per quarter | 50+ new detections per quarter
Research origin | Engineering tickets and feature requests | Real customer environments + dedicated research backlog
Detection depth | Idle resource flags and budget thresholds | Inference-level, lineage-level, and behavioral analysis
Coverage growth | One platform, one service at a time | IaaS + PaaS + Data + AI in one cost model
Mindset | Compliance and threshold checks | Adversarial — looking for the patterns nobody else has named yet

See what Labs has found in environments like yours.

Every detection on this page is live in the platform today. Connect your environment and see what surfaces.