Nimbus

RESEARCH

Benchmarks, without the hand-waving.

We publish what Nimbus does and doesn't do. If another vendor claims 8× reply rates without showing their method, they're lying. Here's our data.

BENCHMARK — 2025 H2

Reply rates on AI-qualified outbound

  21.4%   A-grade prospects (Nimbus)
   8.7%   B-grade prospects (Nimbus)
   4.3%   Industry average (manual)

Sample: 14,280 first-touch cold emails sent by Nimbus customers in Q3–Q4 2025. The control group is manually sourced outbound from the same customers in the same period.

We do not count auto-replies, out-of-office messages, or negative replies as replies. A reply is a human response that asks a question or requests a call.
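For the curious, here is a minimal sketch of that counting rule in Python. The Email fields and disposition labels are our own illustration, not Nimbus internals; the point is that only human questions and call requests enter the numerator.

    # Illustrative sketch of the counting rule above, not Nimbus source code.
    from dataclasses import dataclass

    @dataclass
    class Email:
        grade: str        # prospect grade at send time: "A", "B", ...
        disposition: str  # "question", "call_request", "auto_reply",
                          # "out_of_office", "negative", or "no_reply"

    # Only human responses that ask a question or request a call count.
    COUNTED = {"question", "call_request"}

    def reply_rate(emails: list[Email], grade: str) -> float:
        """Replies / sends among first-touch emails to one prospect grade."""
        sent = [e for e in emails if e.grade == grade]
        replies = [e for e in sent if e.disposition in COUNTED]
        return len(replies) / len(sent) if sent else 0.0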

ACCURACY

Scoring accuracy, independently graded

We ask real sales leaders to grade a random sample of 200 Nimbus-scored prospects, then compare their grades to the AI's.

  Exact match        73%   AI and human assigned the same letter grade.
  Within one grade   94%   AI was off by at most one letter (A↔B, B↔C, etc.).
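As a sketch of the arithmetic, both numbers fall out of a simple comparison over the 200 sampled (AI grade, human grade) pairs. The A–D grade ordering below is an assumption for illustration, not the actual Nimbus grade scale.

    # Illustrative sketch of the agreement metrics above, not Nimbus source code.
    GRADE_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3}  # assumed grade scale

    def agreement(pairs: list[tuple[str, str]]) -> tuple[float, float]:
        """pairs holds (ai_grade, human_grade) for each sampled prospect.
        Returns (exact-match rate, within-one-grade rate)."""
        n = len(pairs)
        exact = sum(ai == human for ai, human in pairs)
        within_one = sum(
            abs(GRADE_ORDER[ai] - GRADE_ORDER[human]) <= 1 for ai, human in pairs
        )
        return exact / n, within_one / n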

We never report 100% or 99% accuracy. Any vendor that does is either measuring on its own training data or lying.

TRANSPARENCY

What we deliberately don't do

  • We don't auto-send. Every message is approved by the human whose inbox it leaves.
  • We don't scrape LinkedIn directly. We use a compliance-approved data partner that respects robots.txt and rate limits.
  • We don't personalize with private details. Nothing we reference is behind a login — everything the AI uses is public signal.
  • We don't train our AI on your messages or prospect data. Your data stays yours.