For any niche we run a Pipeline campaign on, somewhere between 30% and 60% of verified-yes companies come back from Apollo with no contact attached. The company exists. We've checked the homepage. We've watched a Sonnet pass call it in-niche on the strength of three specific phrases on its website. Apollo just doesn't have an email for anyone there.
That's the gap. After best-practice Apollo enrichment, a meaningful slice of the addressable market sits in a file called _no_contacts.csv and goes nowhere unless something else picks it up.
This piece is the playbook for the second leg of enrichment, where Any Mail Finder takes the domains Apollo couldn't enrich and returns verified decision-maker emails on a pricing model generous enough that it reads like nobody at AMF has run the numbers. It pairs naturally with the credit-efficient Apollo flow but stands alone if you've already got a clean domain list and just need contacts against it.
AMF's billing model, and how to exploit it
AMF charges only on verified results. I'd argue that sentence alone is the whole reason this works. If AMF returns email_status: valid, you're charged 2 credits on the decision-maker endpoint or 1 credit on the find-person endpoint. Everything else (risky, not_found, blacklisted) returns the email or status for free. Zero credits.
Re-running the same call within thirty days is also free. AMF caches by exact payload, so a flaky row a day later costs nothing. There's no need for a local cache. The platform's own server-side dedupe already handles it.
The Apollo flow runs on the opposite logic. Apollo bills per match attempt, so every credit-efficient move comes down to filtering aggressively before enrichment touches the list. AMF flips that. The cost-control question becomes whether the input list is in-niche at all, because AMF's billing absorbs the unmatched rows for free.
In practice that gives a per-campaign cost shape that holds up. On a recent UK MSP run I worked on, 612 in-niche rows entered AMF. AMF returned 312 valid leads, 18 role accounts, 74 risky, and 208 no-contacts. The credit count came in at 624. That's two credits per valid lead, billed on the valid ones only. The 208 no-contacts paid nothing and the 74 risky paid nothing. On a comparable Apollo enrichment of the same kind of row, every match attempt would have been billed regardless.
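The billing shape above reduces to a one-line cost model. A minimal sketch (function name and the flat status list are mine, for illustration; the pricing rule is as described, 2 credits per valid on decision-maker, everything else free):

```python
def amf_credits_charged(statuses, credits_per_valid=2):
    """Only email_status == 'valid' bills; risky / not_found / blacklisted are free."""
    return sum(credits_per_valid for s in statuses if s == "valid")

# Rough shape of the UK MSP run (role-account rows omitted for simplicity)
run = ["valid"] * 312 + ["risky"] * 74 + ["not_found"] * 208
print(amf_credits_charged(run))  # 624
```

The 74 risky and 208 no-contact rows contribute nothing to the sum, which is the whole point.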
There's a second-order benefit. On a re-run a week later (because the operator wants to chase new homepages, or a category was missed the first time), the rows AMF already enriched return the cached response and charge zero. Iteration is free until the 30-day window rolls.
Two endpoints, one decision
AMF's API surfaces two paid endpoints we use. The decision-maker endpoint takes a domain plus one or more category enums and returns one decision-maker per call (name, title, LinkedIn, email) at 2 credits per valid match. The find-person endpoint takes a domain plus a name and returns the email at 1 credit per valid match.
Which one runs depends entirely on what's in the input CSV. The flow auto-detects from the columns. If the file has first_name and last_name (or full_name) plus a domain column, find-person wins. If it has only a domain column, decision-maker wins.
The branch matters because the inputs come from different upstream sources. Apollo's _no_contacts.csv ships with a domain column and nothing about people, so it routes to decision-maker. An Airscale or Snov.io export usually has both names and domains, so it routes to find-person at half the cost per valid lead. A hand-built domain list goes to decision-maker.
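The column-detection rule above can be sketched as a small routing function. This is a sketch under the conventions in the text (lowercase `domain`, `first_name`/`last_name`/`full_name` column names); the function name is mine:

```python
def pick_endpoint(fieldnames):
    """Route an input CSV to an AMF endpoint based on its columns."""
    cols = {c.strip().lower() for c in fieldnames}
    if "domain" not in cols:
        raise ValueError("input needs a domain column")
    has_name = "full_name" in cols or {"first_name", "last_name"} <= cols
    # find-person: 1 credit per valid; decision-maker: 2 credits per valid
    return "find-person" if has_name else "decision-maker"

print(pick_endpoint(["domain"]))                             # decision-maker
print(pick_endpoint(["first_name", "last_name", "domain"]))  # find-person
```

Note there's deliberately no fallback branch here: a find-person miss stays a miss, matching the no-silent-retry policy described below.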
There's no fallback chaining from one endpoint to the other. If find-person fails on a row (the named person doesn't have an email AMF can find), we don't quietly retry as decision-maker. The row goes to no-contacts and the operator can re-run that subset manually if it's worth it. The reason is honesty about cost. A silent fallback would double the per-row credit ceiling and the operator wouldn't see it on the original cost estimate.
From what I've seen, decision-maker runs on roughly seven out of every ten campaigns. Apollo enriches the people side reliably enough that the find-person path mostly turns up on Airscale and Snov inputs.
The 10 decision-maker categories
The decision-maker endpoint accepts a closed enum of ten categories: ceo, engineering, finance, hr, it, logistics, marketing, operations, buyer, sales. There is no free-text role parameter. If a buyer in your niche is the Head of Talent Acquisition, I'd map them to hr. A CISO maps to it. Custom personas configured in AMF's web UI exist as display-only metadata and are not exposed via the API. We've verified that against the v5.1 documentation. The UI personas do not bridge.
For most niches the right starting set is two categories, with a couple of variations by sector. Recruitment agencies and commercial finance brokers both map to [ceo, sales]. Managed IT services adds the IT director, so [ceo, sales, it]. Corporate training firms split across [ceo, hr, operations] because L&D buyers usually sit under HR. The order matters. AMF processes the array in priority order, and the first listed enum is the one it tries hardest to match.
Each persona is a separate API call. To target two personas at one company, the flow makes two calls with two different category enums, dedupes the results by email, and counts credits separately on each valid match. So a 2-persona run on 612 in-niche companies has a maximum credit ceiling of 2,448 (two calls per company at 2 credits per valid). The realistic number is round about 70% of that, since not every company has both a CEO and a sales head AMF can find.
Above three personas the maths usually breaks. I've watched AMF return the same person twice for related categories on small companies, and the dedup-by-email pass eats the extra spend without adding leads.
Niche verification before any AMF credit moves
Apollo arrives pre-filtered, because the upstream flow runs FireCrawl plus Sonnet against every company before Apollo enrichment fires. The list AMF receives from _no_contacts.csv is already in-niche. Airscale and Snov.io exports are not.
I've seen this go wrong on raw Airscale lists more than once. The "UK MSPs" list looks right at first glance. The category filter on Airscale's UI returned 1,547 rows. Half of them were MSPs, the rest were SaaS vendors selling tools to MSPs, IT recruiters, telecom resellers, datacentre operators, and a couple of training providers. If AMF runs on that list directly, every credit spent on a row that isn't an MSP is wasted. The lead won't convert in the campaign, and re-routing it afterwards is more work than not enriching it in the first place.
The preflight asks one question. Is this CSV niche-verified? If yes, AMF runs on the input as-is. If no, the same FireCrawl plus Sonnet flow that Apollo runs goes first, and AMF only sees the verified-yes subset. The deep walkthrough on the verification layer covers how that flow works (proxy: auto for Cloudflare, batched Sonnet calls, the prompt template). The mechanics in short: scrape every company's homepage, batch the markdown into chunks of thirty, ask Sonnet to verify against an explicit niche definition with include and exclude signals, then split into yes / no / unverified.
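The batching and the three-way split are the mechanical parts of that flow. A sketch with the scrape and Sonnet call omitted (the `verdict` field name and helper names are my assumptions, not the flow's actual schema):

```python
def batch(pages, size=30):
    """Chunk homepage markdown into batches of thirty, one Sonnet call each."""
    return [pages[i:i + size] for i in range(0, len(pages), size)]

def split_by_verdict(rows):
    """Split verified rows into the yes / no / unverified buckets."""
    buckets = {"yes": [], "no": [], "unverified": []}
    for row in rows:
        # Rows the scrape or model pass never reached default to unverified
        buckets[row.get("verdict", "unverified")].append(row)
    return buckets

print(len(batch(list(range(1547)))))  # 52 batches
```

Only the `yes` bucket ever reaches AMF; `unverified` is held for the manual pass rather than burned on enrichment.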
The numbers on the UK MSP run looked like this. Airscale export: 1,547 rows. After verification: 612 yes, 810 no, 125 unverified. AMF saw 612 rows, not 1,547. The 810 out-of-niche rows didn't cost a single AMF credit. The 125 unverified (FireCrawl scrape failed or markdown was a Cloudflare challenge page) were held for a manual pass and never entered enrichment.
Verification cost on FireCrawl Cloud was around 1 credit per company on basic mode and 6 if the retry pass triggered. On 1,547 rows the spend came in below £15, against a hypothetical AMF over-spend of round about 2,000 credits if we'd just enriched the lot.
The four output buckets
After enrichment finishes, every row lands in one of four CSVs. All four share the run slug as a basename, so provenance travels with each file once it's detached from the run directory.
Valid leads ships as <run-slug>.csv. These are rows where AMF returned email_status: valid and the local part isn't a generic role account. The schema mirrors the Apollo enrichment output exactly, so the operator can stack them in Instantly or BulkMailChecker without rewriting the column shape. This is the file that gets handed off to sending.
Role accounts ships as <run-slug>_role_accounts.csv. These are valid emails where the local part matches the role-account list (info, contact, sales, admin, hello, team, hr, support, office, enquiries, marketing). Keep them for reference. Don't send to them. Cold outreach to a generic mailbox lands at someone whose job is to forward sales pitches to junk, and complaint rates on these addresses are higher than on personal addresses.
Risky and catch-all ships as <run-slug>_risky.csv. AMF returns risky when the SMTP probe couldn't conclusively verify the mailbox. The most common cause is a catch-all domain, where every address resolves regardless of whether the person exists. Re-verify these through BulkMailChecker before sending. From what I've seen, about half of the risky bucket comes back valid on a second pass; the other half bounces and we drop them.
No contacts ships as <run-slug>_no_contacts.csv. Rows where every AMF call returned not_found, blacklisted, missing domain, or HTTP 404. The schema mirrors Apollo's no-contacts shape so the file can roll into the next pass (LinkedIn, manual research) without re-mapping columns. The convention matters because the operator hands these off to whoever runs the long tail.
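The bucketing logic above fits in one function. A sketch (the function name is mine; the role-account local parts and status routing are as listed in the four bucket descriptions):

```python
ROLE_LOCAL_PARTS = {"info", "contact", "sales", "admin", "hello", "team",
                    "hr", "support", "office", "enquiries", "marketing"}

def bucket_for(row):
    """Assign an enriched row to one of the four output CSVs."""
    email = (row.get("email") or "").lower()
    if row.get("email_status") == "valid":
        local = email.split("@", 1)[0]
        return "role_accounts" if local in ROLE_LOCAL_PARTS else "valid"
    if row.get("email_status") == "risky":
        return "risky"
    return "no_contacts"  # not_found, blacklisted, missing domain, HTTP 404

print(bucket_for({"email_status": "valid", "email": "info@acme.co.uk"}))  # role_accounts
```

Everything that isn't conclusively valid or risky falls through to no-contacts, which keeps that file a faithful input for the next pass.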
The gotchas worth knowing
The Authorization header on AMF's API takes the raw key. Not Bearer <key>. AMF's own API documentation example uses Bearer in places, which is plain wrong, and sending Bearer <key> returns a 401 that reads identically to a missing key. I built the first version of this against the Bearer-key assumption and lost a full morning chasing what I thought was a key issue before I noticed.
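To make the trap concrete, here's the header shape that works, as a sketch (the helper name and placeholder key are mine; the raw-key requirement is the behaviour described above):

```python
def amf_headers(api_key):
    """AMF takes the raw key. 'Bearer <key>' returns the same opaque 401."""
    return {"Authorization": api_key, "Content-Type": "application/json"}

# Usage with any HTTP client, e.g.:
#   requests.post(endpoint_url, json=payload, headers=amf_headers(API_KEY))
print(amf_headers("amf_example_key")["Authorization"])  # amf_example_key
```

Because the 401 for a Bearer-prefixed key is indistinguishable from a missing key, this is worth asserting in a smoke test rather than discovering in production.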
Real-world bounce on AMF "valid" results sits between 9% and 15%, despite AMF marketing 97% deliverability on its homepage. Dropcontact's 2025 benchmark on a 20,000-contact panel ranked AMF eleventh out of fifteen tools tested and recorded a 15.8% hard-bounce rate plus a 9.5% domain-error rate on returned emails. That's a 25.3% total unusable rate. Their verdict was direct: well below acceptable standards. We re-verify every AMF "valid" through BulkMailChecker before pushing to Instantly. The BMC pass costs around £0.005 per email and trims the bounce rate to about 2%. On a 312-lead campaign that's £1.56 of spend to protect sender reputation across 80 to 150 dedicated mailboxes.
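The re-verification economics are worth a sanity check per campaign. A trivial sketch (function name is mine; the £0.005 per-email rate is the figure quoted above):

```python
def bmc_cost_gbp(lead_count, per_email_gbp=0.005):
    """Cost of the BulkMailChecker re-verification pass for a lead file."""
    return round(lead_count * per_email_gbp, 2)

print(bmc_cost_gbp(312))  # 1.56
```

At that price the pass is effectively free insurance against a double-digit bounce rate hitting every mailbox in the sending pool.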
AMF's UI custom personas don't reach the API. If your account has a "Head of Customer Success" persona configured in the web dashboard, the API will reject any call that references it. Map to the closest of the ten enums (operations for Customer Success, hr for Talent Acquisition, buyer for Procurement, it for CISO) and surface the lossy mapping to the user. Letting it run silently builds up a quiet pile of wasted calls.
5xx responses on a paid call don't auto-retry. AMF's 30-day repeat-free policy means a manual re-run a few minutes later is free, so the small risk of a double-charge edge case isn't worth a fallback.
The lead list after Apollo plus AMF
Between the Apollo flow and the AMF pass described here, the lead list is round about as complete as third-party data lets it be. What's left is the long tail. Senior buyers whose email isn't on public databases, companies with Cloudflare-blocked homepages, people whose LinkedIn URLs sit in old posts and not in profile metadata. That tail rewards manual research or a cold-LinkedIn-first cadence, and on a typical UK B2B run it's 5% to 10% of the original list.
For the cold outbound engagements we run for clients, the Apollo plus AMF combination is the standing infrastructure. Apollo handles the rows where it has the contact. AMF picks up the verified-yes companies Apollo couldn't enrich. Both sit behind a niche verification gate that stops third-party data noise from leaking into the campaign before a single send goes out.
The verification gate is the layer that decides which companies enter enrichment in the first place, and the one the entire lead-quality story rests on. The deep walkthrough on how it's built sits in the FireCrawl piece.


