Brand Monitoring API. Mentions, impersonation, counterfeits — as real users see them.
The data infrastructure under your own brand-protection product. Residential IPs across 100+ countries surface mentions, counterfeit listings, and impersonation accounts exactly as users in each market see them. Structured JSON for your alerting pipeline — one credit pool, one API key.
Residential IPs · 100+ countries · failed requests cost 0
# Pull brand mentions off a SERP page as a JSON list — one call.
$ curl -G 'https://api.scrapingant.com/v2/extract' \
    --data-urlencode 'x-api-key=YOUR_KEY' \
    --data-urlencode 'url=https://www.google.com/search?q=your-brand' \
    --data-urlencode 'proxy_country=US' \
    --data-urlencode 'extract_properties=mentions(list: title, snippet, url, date)'
# → { "mentions": [{ "title": "…", "snippet": "…", "url": "…", "date": "…" }, …] }

# Fan out across markets; write one JSONL row per mention.
from concurrent.futures import ThreadPoolExecutor
import requests, json

MARKETS = ["US", "DE", "BR", "JP", "IN"]

def fetch(country):
    r = requests.get("https://api.scrapingant.com/v2/extract", params={
        "x-api-key": "YOUR_KEY",
        "url": "https://www.google.com/search?q=your-brand",
        "proxy_country": country,
        "extract_properties": "mentions(list: title, snippet, url, date)",
    })
    return country, r.json().get("mentions", [])

with ThreadPoolExecutor(max_workers=5) as ex:
    for country, rows in ex.map(fetch, MARKETS):
        for row in rows:
            row["market"] = country
            print(json.dumps(row))  # JSONL for downstream alerting

// One call per market, Promise.all for concurrency.
import fetch from 'node-fetch';

const KEY = 'YOUR_KEY';
const MARKETS = ['US', 'DE', 'BR', 'JP', 'IN'];

const rows = await Promise.all(MARKETS.map(async (country) => {
  const res = await fetch(
    'https://api.scrapingant.com/v2/extract?' +
      new URLSearchParams({
        'x-api-key': KEY,
        url: 'https://www.google.com/search?q=your-brand',
        proxy_country: country,
        extract_properties: 'mentions(list: title, snippet, url, date)',
      })
  );
  const data = await res.json();
  return { country, mentions: data.mentions ?? [] };
}));
// rows = [{ country: 'US', mentions: [...] }, ...]

Why brand monitors build on us.
Residential IPs across 100+ countries, country-accurate SERPs, structured mention rows — all on one credit pool.
Real residential, real-user view
Datacenter IPs trip cloaking; counterfeit listings hide. Residential rotation surfaces what a buyer in DE / BR / JP actually sees.
How residential rotation works →

Country-accurate SERP capture
proxy_country=DE, JP, US — Google returns the local result set. Rank-track your brand in every market.
Three sources, one credit pool
News via /v2/markdown, forums + social via /v2/extract, SERPs via /v2/general. One key bills all three.
Real users see the real page. So should your monitor.
Marketplaces and social platforms increasingly cloak listings against known datacenter ranges — corporate IPs see a sanitised page, the buyer sees the counterfeit. Residential IP rotation across 100+ countries closes that gap: each request looks like a real ISP-issued connection from the target market, so the page you crawl is the page a buyer crawls. Flip proxy_type=residential on the SKUs that 403; the rest of the API stays identical.
- 2M+ real residential IPs — country, state, city targeting from one username parameter
- Sticky sessions for multi-page profile walks — same exit IP across the whole crawl
- Failed fetches cost 0 — anti-bot retries don't bloat the bill
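A minimal sketch of the flip described above, building the request URL with and without the residential fallback. The endpoint and the proxy_type / proxy_country parameters are from this page; the helper itself is illustrative:

```python
from urllib.parse import urlencode

ENDPOINT = "https://api.scrapingant.com/v2/general"

def request_url(page_url, api_key, country="DE", residential=False):
    """Build a /v2/general request URL; flip residential=True for SKUs that 403."""
    params = {"x-api-key": api_key, "url": page_url, "proxy_country": country}
    if residential:
        # Residential exit IP: the page renders as a local buyer sees it.
        params["proxy_type"] = "residential"
    return ENDPOINT + "?" + urlencode(params)
```

Since failed fetches cost 0, a reasonable policy is to try the default route first and re-issue with `residential=True` only on a 403.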
Brand SERPs from every market. Rank-track at scale.
SERPs reorder by country. A clean #1 ranking in the US can sit beneath a competitor's paid ad in Germany or a counterfeit page in Japan — and the only way to see it is to query from each market. proxy_country=DE hits Google.de from a German egress IP; JP, BR, IN for their regional Googles. Pair with the Google search API for parsed organic + paid + map-pack rows.
- Organic + paid + featured snippets in one capture — see competitor ad bids on your trademark
- City-level targeting for regional brand-rank diff (DE-Berlin vs DE-Munich)
- Per-hour sweeps land on the Startup plan ($49 / 500K credits) — hundreds of markets daily
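Once the parsed SERP rows come back, rank-tracking per market is a small diff. A sketch, assuming each parsed organic row carries a url field (the row shape here is our assumption, not the API's documented schema):

```python
def brand_rank(rows, brand_domain):
    """1-based position of the first row whose URL is on brand_domain, else None.
    `rows` is assumed to be the parsed organic result list, in ranked order."""
    for i, row in enumerate(rows, start=1):
        if brand_domain in row.get("url", ""):
            return i
    return None

def rank_by_market(serps, brand_domain):
    """Diff ranks across markets to spot where an ad or counterfeit outranks you."""
    return {country: brand_rank(rows, brand_domain) for country, rows in serps.items()}
```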
News, social, SERPs — one mentions table.
Most brand-monitor stacks juggle three vendors: an article reader for news, a social-API broker for platforms, a SERP API for search. ScrapingAnt covers all three from one credit pool. /v2/markdown returns clean article body — drop straight into your sentiment + topic classifier. /v2/extract with a free-form mentions(list: title, snippet, url, date) schema returns parser-free JSON rows for the mentions table. /v2/general with proxy_country captures SERPs across every market.
- One key, one billing line — kills the three-vendor reconciliation problem
- Parser-free JSON via extract_properties — no per-site selector maintenance
- Cross-link to review scraping for product-level sentiment alongside brand mentions
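The three-endpoint split above can be wired up as one tiny router — news to /v2/markdown, forums and social to /v2/extract, SERPs to /v2/general, all on one key. A hedged sketch (the routing helper is ours; the endpoints and parameters are from this page):

```python
from urllib.parse import urlencode

BASE = "https://api.scrapingant.com"

# One endpoint per source type; billing is a single credit pool either way.
ROUTES = {
    "news": "/v2/markdown",     # clean article body for sentiment pipelines
    "social": "/v2/extract",    # parser-free JSON rows for the mentions table
    "serp": "/v2/general",      # raw SERP capture from the target market
}

def route(source_type, page_url, api_key, country="US"):
    params = {"x-api-key": api_key, "url": page_url, "proxy_country": country}
    if source_type == "social":
        # Free-form schema -> structured mention rows, no per-site selectors.
        params["extract_properties"] = "mentions(list: title, snippet, url, date)"
    return BASE + ROUTES[source_type] + "?" + urlencode(params)
```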
Six brand-protection workloads teams build on top.
Same API, same credit pool — different ways of slicing brand surveillance underneath.
News-mention tracking
Crawl press sites, blogs, and trade publications across markets. /v2/markdown strips chrome and ads, returns clean article body for sentiment + summary pipelines.
Social-platform brand surveillance
Pull public profile pages and post listings from social platforms via browser=true + residential IPs. See your brand mentions exactly as real users see them in each market.
Counterfeit marketplace listings
Crawl marketplace SKU pages from the buyer's country; flag listings using your trademarks with /v2/extract against your authorised seller list.
Impersonation account detection
Sweep social-profile listings and lookalike-domain SERPs from each country for usernames mirroring your brand. Diff against your authorised-handle table.
Trademark abuse in SERPs
Capture paid + organic SERPs in each market for your brand keywords. Spot competitors bidding on your trademarks or counterfeit pages outranking your domain.
Adverse-press / crisis alerting
Per-hour sweeps of news SERPs and major publications during incidents. Markdown output feeds straight into your sentiment + topic-cluster classifier.
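The counterfeit-listing and impersonation workloads above share one core step: diffing scraped rows against an authorised list. A minimal sketch (the field name and matching rule are illustrative — your product defines both):

```python
def flag_unauthorised(rows, authorised, key="seller"):
    """Return rows whose `key` field is not in the authorised set (case-insensitive)."""
    allowed = {a.lower() for a in authorised}
    return [r for r in rows if r.get(key, "").lower() not in allowed]
```

The same helper covers impersonation sweeps by switching key to a handle or domain field and passing your authorised-handle table.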
Pricing
Industry-leading pricing that scales with your business.
| Plans | Enthusiast | Startup (★ Most Popular) | Business | Business Pro | Custom |
|---|---|---|---|---|---|
| Price | $19/mo | $49/mo | $249/mo | $599/mo | $699+/mo |
| Monthly API credits | 100,000 | 500,000 | 3,000,000 | 8,000,000 | 10M+ |
| Support channel | Priority email | Priority email | Priority email | Priority + dedicated | |
| Integration help | Docs only | Custom code snippets | Debug sessions | Priority debug sessions | Full enterprise onboarding |
| Expert assistance | — | ✓ | ✓ | ✓ | ✓ |
| Custom proxy pools | — | — | ✓ | ✓ | ✓ |
| Custom anti-bot avoidances | — | — | ✓ | ✓ | ✓ |
| Dedicated account manager | — | — | ✓ | ✓ | ✓ |
| | Start Free | Start Free → | Start Free | Start Free | Talk to Sales |
What teams are saying.
From solo developers shipping side projects to enterprise pipelines at Fortune 500s.
★★★★★ 5.0 on Capterra →

★★★★★ “Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”
★★★★★ “Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”
★★★★★ “This product helps me to scale and extend my business. The setup is easy and support is really good.”
What is a brand monitoring API?
A brand monitoring API is a managed endpoint that takes URLs (news pages, SERPs, marketplace listings, social profiles) and returns the data brand-protection products build on: clean article body, structured mention rows, captured SERPs from each country, headless-Chrome screenshots. ScrapingAnt's /v2/markdown returns LLM-clean article body, /v2/extract returns parser-free JSON for mention rows, and residential IPs across 100+ countries let you see what real users in each market see.
How is this different from Mention.com, Brandwatch, or Talkwalker?
Those are finished brand-monitoring platforms — they ship a dashboard, a workflow engine, an alerting layer, and a license model. ScrapingAnt is the data infrastructure beneath a brand-protection product. You bring the target sites, the storage, the dashboard; we hand back clean responses with predictable per-URL economics. Teams pick us when they want to embed brand data into their own platform, run scheduled sweeps across hundreds of clients, or monitor regions the boxed tools index poorly.
Does it work for social platforms?
Yes — pass browser=true for the headless-Chrome render and proxy_type=residential for an IP profile that platforms accept. Your team owns the per-platform positioning (public-data scope, authentication choices, ToS interpretation); we deliver the fetched HTML or extracted JSON. Sticky sessions via session=foo keep multi-page traversals on the same exit IP — important for platforms that fingerprint by IP velocity.
Can I monitor a country's SERPs daily?
Yes. A SERP capture via /v2/general with proxy_country=DE is about 5 credits including residential routing. 100 brand keywords × 10 markets × 1/day ≈ 30K captures ≈ 150K credits/month — fits inside the Startup plan ($49, 500K credits). Hourly sweeps multiply that budget by 24, so size the plan accordingly. Failed fetches cost zero.
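A sizing helper for that sweep budget, assuming the ~5-credits-per-capture figure above (an estimate, not a price-sheet guarantee):

```python
CREDITS_PER_SERP = 5  # approximate cost per /v2/general capture with residential routing

def monthly_serp_credits(keywords, markets, sweeps_per_day, days=30):
    """Credit budget for a recurring SERP sweep across markets."""
    return keywords * markets * sweeps_per_day * days * CREDITS_PER_SERP
```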
Does it deduplicate mentions?
No — we return what each URL contains. Deduplication lives in your downstream pipeline because the rule depends on your product (URL-only, URL+title hash, title-similarity fuzzy match, semantic embedding). Each /v2/extract response gives you a clean row with title, snippet, url, and date; pipe those into your existing dedupe layer.
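One of the dedupe rules mentioned above — URL plus normalised-title hash — fits in a few lines downstream of the /v2/extract rows. A sketch, assuming rows shaped like the mentions schema on this page:

```python
import hashlib

def dedupe(rows, seen=None):
    """Drop rows already recorded in `seen` (URL + normalised-title hash)."""
    seen = set() if seen is None else seen
    out = []
    for r in rows:
        key = hashlib.sha1(
            (r["url"] + "|" + r["title"].strip().lower()).encode()
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out
```

Persist the `seen` set between sweeps and only net-new mentions reach your alerting layer.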
Markdown or structured JSON for mention extraction?
Two endpoints, two shapes. /v2/markdown returns the full article body as clean Markdown — feed into your sentiment, topic-cluster, or summariser model. /v2/extract returns parser-free JSON given a free-form schema like mentions(list: title, snippet, url, date) — direct insert into your mentions table. Most teams use both: Markdown for full-text, extract for headline-level surveillance.
Sticky sessions for logged-in monitoring?
Yes — append a session=<id> parameter and the same exit IP is used across requests bearing that id. Useful when your monitor walks a paginated mention feed behind a login wall, or when you need stable cookies across a multi-page profile crawl. Combine with proxy_country=XX to keep the session pinned to one geography.
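A sketch of a sticky-session page walk — every request in the batch carries the same session id and country, so the whole traversal exits from one IP (the session and proxy_country parameters are from this page; the helper is illustrative):

```python
from urllib.parse import urlencode

ENDPOINT = "https://api.scrapingant.com/v2/general"

def session_urls(page_urls, api_key, session_id, country="US"):
    """One request URL per page, all pinned to the same exit IP via session=<id>."""
    return [
        ENDPOINT + "?" + urlencode({
            "x-api-key": api_key,
            "url": u,
            "proxy_country": country,  # keep the session in one geography
            "session": session_id,     # same id -> same exit IP across pages
        })
        for u in page_urls
    ]
```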
Building a brand-protection product?
Custom volume pricing for multi-thousand-keyword sweeps, dedicated residential pools per market, mention-extract schema design help, or migration help from in-house monitor stacks — drop us a line and a real human gets back within a few hours.
“Our clients are pleasantly surprised by the response speed of our team.”