Lead generation scraping API. Bring your own targets.
Apollo and ZoomInfo sell contacts at $0.10–$0.50 each. ScrapingAnt is the API your own prospecting tool runs on — the same AI extraction returns name / title / company / email JSON from any directory page. Credits-only model: build a 10K-contact list for roughly $25 in Startup-plan credits.
Infrastructure, not a contact database · you bring URLs, we extract
# Pass a schema — get a contact JSON back from any directory page.
$ curl -G 'https://api.scrapingant.com/v2/extract' \
    --data-urlencode 'x-api-key=YOUR_KEY' \
    --data-urlencode 'url=https://example-directory.com/companies/acme' \
    --data-urlencode 'extract_properties=name,title,company,email,phone,website'
# → { "name": "...", "title": "...", "company": "...", ... }

# Iterate your target URL list, build a contact table.
import requests, csv

# directory_urls: your own target list; KEY: your API key
with open("contacts.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=["name", "title", "company", "email"])
    w.writeheader()
    for url in directory_urls:
        r = requests.get("https://api.scrapingant.com/v2/extract", params={
            "x-api-key": KEY, "url": url,
            "extract_properties": "name,title,company,email",
        })
        w.writerow(r.json())

# Google Maps business profiles → structured contact JSON.
$ curl -G 'https://api.scrapingant.com/v2/extract' \
    --data-urlencode 'x-api-key=YOUR_KEY' \
    --data-urlencode 'url=https://www.google.com/maps/place/...' \
    --data-urlencode 'extract_properties=business_name,phone,address,website,hours' \
    --data-urlencode 'browser=true'

Why prospecting builders pick us.
Infrastructure, not a contact database. Built for indie B2B SaaS / sales-tools builders, not SDR end-users.
Parser-free contact JSON
AI extraction handles every directory. Pass name,title,company,email — get JSON.
/v2/extract → 20×+ cheaper per contact
25 credits ≈ $0.0025 on the Startup tier ($49 / 500K credits) vs Apollo/ZoomInfo per-contact pricing.
See pricing →

Country-targeted exits
Residential proxies across 100+ countries — geo-locked directories work transparently.
Residential pool →

One schema. Every directory.
Per-source scraper templates are a maintenance treadmill — when a directory ships new HTML, your selectors break. ScrapingAnt's /v2/extract endpoint takes a property list — name,title,company,email,phone,website — and returns clean JSON from any directory page. The AI extraction layer adapts when source HTML changes; you keep shipping.
- Works on trade associations, conference pages, company team pages, public filings
- Custom schemas: add department, seniority, profile_url, anything
- Falls back to /v2/general for raw HTML when you want your own parser
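The schema-driven call pattern can be sketched as a pair of query-string builders. The endpoint paths and the extract_properties parameter come from this page; the helper names and the YOUR_KEY placeholder are illustrative:

```python
from urllib.parse import urlencode

API = "https://api.scrapingant.com/v2"

def extract_url(page_url: str, properties: list[str], api_key: str = "YOUR_KEY") -> str:
    """AI extraction: /v2/extract request URL for a custom property schema."""
    return f"{API}/extract?" + urlencode({
        "x-api-key": api_key,
        "url": page_url,
        "extract_properties": ",".join(properties),
    })

def raw_html_url(page_url: str, api_key: str = "YOUR_KEY") -> str:
    """Fallback: /v2/general returns raw HTML for your own parser."""
    return f"{API}/general?" + urlencode({"x-api-key": api_key, "url": page_url})

# Custom schema with the extra fields from the bullets above:
req = extract_url("https://example.org/team",
                  ["name", "title", "department", "seniority", "profile_url"])
```

Swapping /v2/extract for /v2/general on the same URL list is the whole migration path between AI extraction and your own parser.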
Built for builders. Not SDR end-users.
If your buyer-persona is "sales rep who wants a curated contact list" — go to Apollo. If your buyer-persona is "indie SaaS founder building a vertical-specific prospecting tool" — that's us. We hand you the data infrastructure; you wrap the curation, dedup, freshness model, and UI. Credits-only model means you control the cost-per-record economics: 10K records for roughly $25 in credits on the Startup plan ($49 / 500K credits, ~25 credits per record).
- Bring your own URL list — we don't store, dedupe, or resell contacts
- Your freshness model: scan daily, weekly, monthly — your decision, your bill
- Customers are responsible for use under GDPR / CCPA / CAN-SPAM
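Since curation and dedup stay on your side of the line, here is a minimal sketch of what that layer might look like, keyed on normalized email. The key choice and merge policy are entirely your product decisions; this function is illustrative, not part of the API:

```python
def dedupe_contacts(rows: list[dict]) -> list[dict]:
    """Keep the first record per normalized email; drop records without one.
    Illustrative only -- your dedup key and merge policy are product decisions."""
    seen, out = set(), []
    for row in rows:
        email = (row.get("email") or "").strip().lower()
        if not email or email in seen:
            continue
        seen.add(email)
        out.append(row)
    return out
```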
Residential proxies for the rate-limit-walled directories.
Aggressive B2B directories rate-limit by IP. Datacenter ranges get caught fast; residential exits look like real users. Pass proxy_type=residential on the same call and target by country: proxy_country=DE for German directories, proxy_country=GB for UK chambers, etc.
- 2M+ residential IPs across 100+ countries — one credit pool covers them all
- Sticky sessions for directories that bind contact reveal to login-state
- Failed requests cost 0 credits — rate-limit retries don't inflate the bill
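A small Python sketch of the country-targeted call. The proxy_type and proxy_country parameter names are the ones quoted above; the helper functions themselves are hypothetical:

```python
from urllib.parse import urlencode

def residential_params(url: str, country: str, api_key: str = "YOUR_KEY") -> dict:
    """Query params for a country-targeted residential exit."""
    return {
        "x-api-key": api_key,
        "url": url,
        "proxy_type": "residential",
        "proxy_country": country,  # "DE" for German directories, "GB" for UK chambers
    }

def request_url(page_url: str, country: str) -> str:
    """Full GET URL; failed requests cost 0 credits, so retrying is free."""
    return ("https://api.scrapingant.com/v2/general?"
            + urlencode(residential_params(page_url, country)))
```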
Six lead-gen workloads teams build.
Same API, same credit pool — different ways of slicing the contact data underneath.
Industry-association directories
Trade association membership lists, chamber-of-commerce rosters, professional bodies that publish member contact pages.
Conference attendee lists
Public speaker bios, exhibitor lists, sponsor pages — the contact data conferences post freely on their sites.
Google Maps business profiles
Local-business listings with phone, address, website, hours. Geo-targeted by city or radius for sales-pipeline coverage.
Public regulatory filings
SEC EDGAR officer disclosures, UK Companies House director records, SAS-issued business identifiers — fully public, fully fair game.
B2B company websites
Team pages, about-us pages, contact pages on prospect company sites. The cleanest source for verified job-title and email data.
Job-board hiring-manager signal
Track who is hiring at target companies — recent postings, hiring-manager pages, role-specific contact buttons.
Pricing
Industry-leading pricing that scales with your business.
| Plans | Enthusiast<br>100K credits / mo<br>$19/mo | ★ Most Popular<br>Startup<br>500K credits / mo<br>$49/mo | Business<br>3M credits / mo<br>$249/mo | Business Pro<br>8M credits / mo<br>$599/mo | Custom<br>10M+ credits / mo<br>$699+/mo |
|---|---|---|---|---|---|
| Monthly API credits | 100,000 | 500,000 | 3,000,000 | 8,000,000 | 10M+ |
| Support channel | Priority email | Priority email | Priority email | Priority + dedicated | |
| Integration help | Docs only | Custom code snippets | Debug sessions | Priority debug sessions | Full enterprise onboarding |
| Expert assistance | — | ✓ | ✓ | ✓ | ✓ |
| Custom proxy pools | — | — | ✓ | ✓ | ✓ |
| Custom anti-bot avoidances | — | — | ✓ | ✓ | ✓ |
| Dedicated account manager | — | — | ✓ | ✓ | ✓ |
| | Start Free | Start Free → | Start Free | Start Free | Talk to Sales |
What teams are saying.
From solo developers shipping side projects to enterprise pipelines at Fortune 500s.
★★★★★ 5.0 on Capterra →

★★★★★ “Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”
★★★★★“Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”
★★★★★“This product helps me to scale and extend my business. The setup is easy and support is really good.”
What is a lead generation scraping API?
A lead generation scraping API is the data layer underneath a prospecting tool. It fetches public B2B directory pages, company sites, business profile listings, and regulatory filings, returning either raw HTML or structured contact JSON (name, title, company, email, phone). Your prospecting tool builds on top — the dashboard, the alerting, the CRM integration. ScrapingAnt's /v2/extract endpoint covers the extraction layer; /v2/general returns raw HTML when you want your own parser.
How is this different from Apollo or ZoomInfo?
Apollo and ZoomInfo are finished contact databases — they curate, dedupe, and resell B2B contacts at $0.10–$0.50 per contact. ScrapingAnt is the infrastructure for building that yourself: you bring the target list, we handle the fetching and extraction. Teams choose us when they want to scrape industry-specific directories Apollo doesn't cover, or when their workflow is "I have 10K specific company URLs, fetch contact data from each" — that pattern doesn't fit a curated database.
Which directories does it work with?
Any HTTP-accessible directory page. Trade associations and chambers of commerce. Public conference attendee lists. Google Maps business profiles. SEC EDGAR, UK Companies House, and other regulatory filings. Company team pages and contact pages. Because the API is URL-driven, you bring the target list and we handle the fetch + extraction layer for every URL on it.
How much would 10,000 contacts cost?
Each /v2/extract call with an AI schema is around 25 credits. 10,000 contacts ≈ 250K credits — that's the Startup plan ($49/month, 500K credits) with headroom. The same plan covers JS rendering, Markdown conversion, residential proxies, and other workloads on the same key.
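The arithmetic behind that answer, as a sketch. The 25-credits-per-call figure is the approximation quoted above, not a guaranteed rate:

```python
CREDITS_PER_EXTRACT = 25                       # approximate AI-schema cost per call
STARTUP_CREDITS, STARTUP_PRICE = 500_000, 49   # Startup plan: 500K credits, $49/mo

def credits_needed(contacts: int) -> int:
    """Total credits for a contact list at ~25 credits per /v2/extract call."""
    return contacts * CREDITS_PER_EXTRACT

def cost_per_contact() -> float:
    """Effective $/contact on the Startup plan at 25 credits/call."""
    return STARTUP_PRICE / STARTUP_CREDITS * CREDITS_PER_EXTRACT

# 10,000 contacts -> 250,000 credits: half the Startup plan's monthly pool.
```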
Do you guarantee contact accuracy or freshness?
No. We return whatever the page contains at fetch time. If a contact left the company yesterday, the page might still list them. Contact-database vendors (Apollo, ZoomInfo) invest heavily in verification and freshness — that's their value-add. Our value is being the underlying fetch + extraction layer for teams who want to control the source, freshness model, and quality bar themselves.
Is scraping contact data legal?
Scraping publicly accessible web pages has favorable US precedent (hiQ Labs v. LinkedIn, Ninth Circuit). What you do with the scraped contact data is where the rules live — GDPR for EU contacts, CCPA for California, CAN-SPAM for cold email outreach. Customers are responsible for their use of scraped contact data under applicable laws. We provide the infrastructure; downstream compliance is your call.
Can I scrape job postings to find hiring signals?
Yes. Public job-board postings, company-site careers pages, and conference speaker pages all surface in /v2/extract. Common use cases: tracking competitor hiring patterns, finding hiring managers for target accounts, building lead lists for recruiting tools. Pass extract_properties=job_title,company,department,location,posted_date for a typical schema.
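Once postings are extracted with a schema like that, a hiring signal can be as simple as counting open roles per company. A toy sketch; the threshold is an arbitrary illustration:

```python
from collections import Counter

def hiring_signal(postings: list[dict], min_open_roles: int = 3) -> list[str]:
    """Companies with at least `min_open_roles` open postings -- a crude
    hiring-intensity signal over rows from /v2/extract."""
    counts = Counter(p["company"] for p in postings if p.get("company"))
    return sorted(c for c, n in counts.items() if n >= min_open_roles)
```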
What about residential proxies for rate-limited directories?
Pass proxy_type=residential on the same call. 2M+ residential IPs across 100+ countries — country-targeted exits handle directories that block international IPs. Most teams start on datacenter and selectively upgrade individual sources to residential when they 403.
Building a vertical prospecting product?
Custom volume pricing, dedicated residential pools for stubborn directories, migration help from in-house scrapers, or a one-shot seed dataset — drop us a line and a real human gets back within a few hours.
“Our clients are pleasantly surprised by the response speed of our team.”