MCP server for web scraping.
Give your AI agent live access to the open web. The hosted ScrapingAnt MCP server exposes three web scraping tools — HTML, Markdown, plain text — to Claude, Cursor, Windsurf, VS Code, Claude Code, and Cline. One command to install, headless Chrome and anti-bot built in.
10,000 free credits · failed requests cost 0 credits · works with any MCP client
# One command. No Chromium download. No profile setup.
```shell
$ claude mcp add scrapingant \
    --transport http \
    https://api.scrapingant.com/mcp \
    -H "x-api-key: YOUR_API_KEY"
```

```json
// .cursor/mcp.json (or .windsurf/mcp.json)
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "transport": "streamableHttp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
```

```json
// claude_desktop_config.json
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "transport": "streamableHttp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
```

```shell
# In Claude / Cursor / Windsurf chat:
> Read https://docs.example.com/api/auth and explain the OAuth flow

# Agent picks up the right tool automatically:
# scrapingant.get_web_page_markdown(url)
# scrapingant.get_web_page_html(url)
# scrapingant.get_web_page_text(url)
```

What it actually unlocks.
Three tools, infinite uses. Here's what teams hand off to their agents.
Live web during the loop
Agents stop guessing about pages they've never seen — they fetch them mid-thought.
See workflows →

Three formats per call
HTML for parsing. Markdown for context. Plain text for summarisation.
Tool details →

Anti-bot built in
The same Chrome cluster, rotating proxies, and CAPTCHA avoidance that back /v2/general.
Three MCP web scraping tools. Pick the format your agent needs.
Each tool takes a URL and optional browser, proxy_type, and proxy_country parameters. The agent picks the right one automatically based on the prompt — “explain this article” routes to LLM-ready Markdown; “extract every link” routes to HTML; “count word frequency” routes to plain text. Need typed JSON instead of raw page content? Pair this with the AI data scraper.
- Markdown stripped of nav, ads, scripts — token-efficient context
- HTML preserved when you need DOM access in the agent
- Plain text for cheap summarisation passes
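The routing described above can be sketched in a few lines. This is an illustrative approximation of prompt-to-tool selection, not ScrapingAnt's actual logic; the keyword lists are assumptions:

```python
# Illustrative sketch: how an agent might route a prompt to one of the
# three ScrapingAnt MCP tools. The keyword heuristics are assumptions.

def pick_tool(prompt: str) -> str:
    p = prompt.lower()
    # DOM-level asks (links, selectors, attributes) want raw HTML
    if any(k in p for k in ("every link", "selector", "dom", "attribute")):
        return "get_web_page_html"
    # Cheap text-statistics passes want plain text
    if any(k in p for k in ("word frequency", "word count")):
        return "get_web_page_text"
    # Default: LLM-ready Markdown for reading and explaining
    return "get_web_page_markdown"

print(pick_tool("Read the docs page and explain the OAuth flow"))
# → get_web_page_markdown
```

In practice the model makes this choice from the tool descriptions the server advertises, so you rarely need to name a tool explicitly in the prompt.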
Live web access for AI agents, part of the loop.
The ScrapingAnt MCP server isn't a side feature — it changes what the agent can credibly claim to know. Coding assistants quote the latest framework docs instead of guessing. Research agents pull live pages mid-thought. Background agents check competitor pricing on a schedule. Support bots cite your help center. The pattern is always the same: prompt → tool call → cleaned content → answer.
- No more “I don't have access to that page” from your agent
- Same key works in chat threads, autonomous agents, and CI scripts
- Schedule recurring fetches by wrapping the tool in a cron / agent loop
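A recurring fetch can be as little as the tool call wrapped in a loop that diffs against the last run. A minimal sketch; the fetch_markdown helper is hypothetical and stands in for a get_web_page_markdown call made through your MCP client:

```python
import hashlib
import time

def fetch_markdown(url: str) -> str:
    # Hypothetical helper: in a real agent this would issue a
    # get_web_page_markdown tool call through your MCP client.
    raise NotImplementedError

def digest(content: str) -> str:
    # Normalise whitespace so cosmetic reflows don't count as changes
    return hashlib.sha256(" ".join(content.split()).encode()).hexdigest()

def watch(url: str, interval_s: int = 3600) -> None:
    last = None
    while True:
        current = digest(fetch_markdown(url))
        if last is not None and current != last:
            print(f"{url} changed since last run")  # or post to Slack, etc.
        last = current
        time.sleep(interval_s)
```

The same shape works as a cron job: run once, persist the digest to disk, and compare on the next invocation.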
Same cluster. Same uptime.
Every MCP call rides the same headless Chrome cluster, rotating residential and datacenter proxies, CAPTCHA avoidance, TLS fingerprinting, and automatic retries that back the JavaScript rendering API. The MCP server is just a thinner transport on top — your agent gets the same anti-bot reliability the rest of the ScrapingAnt API delivers.
- Real headless Chrome — handles SPAs, lazy-loaded grids, dynamic content
- Switch to residential proxies via the proxy_type parameter — same call
- Country-pin requests with proxy_country when geo accuracy matters
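On the wire, those options are ordinary tool-call arguments. A sketch of the JSON-RPC payload an MCP client sends for a country-pinned residential fetch (the URL and values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_web_page_markdown",
    "arguments": {
      "url": "https://example.com/pricing",
      "proxy_type": "residential",
      "proxy_country": "DE"
    }
  }
}
```

Your MCP client builds this request for you; you never write it by hand.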
Where teams plug it in.
Six concrete workflows where MCP web access changes the game.
Research agents
Pull docs, blog posts, and press releases on demand — agent summarises, compares, or cites in-line during the same chat.
Talk to us →

Coding assistants
Read framework docs, GitHub READMEs, or release notes mid-task — no more "I don't have access to that" answers.
Talk to us →

Competitive intel
Check pricing pages, feature lists, and changelogs on a schedule. Agent flags diffs since the last run.
Talk to us →

RAG corpora
Build retrieval indexes from named domains. Markdown chunks tokenize cleanly and embed predictably for vector stores.
Talk to us →

Brand & review monitoring
Watch reviews, news, and social mentions. Agent summarises the new ones and posts to Slack overnight.
Talk to us →

Customer support bots
Pull help-center articles, status pages, or external docs into your bot's context — answer with cited sources.
Talk to us →

Pricing
Industry-leading pricing that scales with your business.
| Plans | Enthusiast · 100K credits / mo · $19/mo | Startup (★ Most Popular) · 500K credits / mo · $49/mo | Business · 3M credits / mo · $249/mo | Business Pro · 8M credits / mo · $599/mo | Custom · 10M+ credits / mo · $699+/mo |
|---|---|---|---|---|---|
| Monthly API credits | 100,000 | 500,000 | 3,000,000 | 8,000,000 | 10M+ |
| Support channel | Priority email | Priority email | Priority email | Priority + dedicated | |
| Integration help | Docs only | Custom code snippets | Debug sessions | Priority debug sessions | Full enterprise onboarding |
| Expert assistance | — | ✓ | ✓ | ✓ | ✓ |
| Custom proxy pools | — | — | ✓ | ✓ | ✓ |
| Custom anti-bot avoidances | — | — | ✓ | ✓ | ✓ |
| Dedicated account manager | — | — | ✓ | ✓ | ✓ |
| | Start Free | Start Free | Start Free | Start Free | Talk to Sales |
What teams are saying.
From solo developers shipping side projects to enterprise pipelines at Fortune 500s.
★★★★★ 5.0 on Capterra →

★★★★★ “Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”
★★★★★ “Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”
★★★★★ “This product helps me to scale and extend my business. The setup is easy and support is really good.”
Frequently asked questions.
Still curious? Get in touch with our team — we usually reply within hours.
What is the ScrapingAnt MCP server?
The ScrapingAnt MCP server is a hosted Model Context Protocol endpoint that gives any MCP-aware AI client — Claude Desktop, Cursor, Windsurf, Claude Code, VS Code with the MCP extension, and Cline — three web scraping tools: get_web_page_html, get_web_page_markdown, and get_web_page_text. Add the URL plus your API key once and the agent can fetch live web pages mid-conversation, with our JavaScript rendering and proxy stack underneath every call.
What is MCP and why does it matter for web scraping?
Model Context Protocol is the spec AI clients (Claude, Cursor, Windsurf, VS Code, Cline, etc.) use to call external tools. ScrapingAnt exposes itself as an MCP server so the agent picks up our scraping tools the moment you add the config — no SDK to install, no glue code to write. Web scraping over MCP means agents can read pages they couldn't see before, including JavaScript-heavy SPAs that web_fetch-style tools render as empty shells.
Which tools does the MCP server expose?
Three. get_web_page_html returns raw HTML for parsing or DOM access. get_web_page_markdown returns clean LLM-ready Markdown — token-efficient context. get_web_page_text returns plain text for cheap summarisation. Each takes a URL plus optional browser, proxy_type, and proxy_country parameters.
Does the MCP server handle JavaScript-rendered pages?
Yes — by default. Every MCP call routes through real headless Chrome, so SPAs, lazy-loaded content, and React / Vue / Next.js pages return populated DOM. Set browser=false if you want raw HTML without rendering (saves credits on simple pages). Same engine that powers our JavaScript rendering API.
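Disabling rendering is likewise just a tool argument. A sketch of the tools/call params an MCP client would send (the URL is illustrative):

```json
{
  "name": "get_web_page_html",
  "arguments": {
    "url": "https://example.com/static-page",
    "browser": false
  }
}
```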
What about Cloudflare and other anti-bot walls?
Handled. Same anti-bot stack as the rest of ScrapingAnt — rotating proxies, TLS fingerprinting, CAPTCHA avoidance — runs underneath every MCP call. Switch proxy_type to residential to route through our residential proxy pool for tougher targets without changing your client config.
Can I run the MCP server for production agents, not just dev?
Yes. The server runs on our cloud cluster and scales with your call volume. Same uptime, same SLA, and same proxy fleet as /v2/general. The MCP transport is just a thinner layer on top — production-ready out of the box.
How is the MCP server billed?
Each MCP tool call maps to one HTTP request and uses API credits — the exact rate depends on your browser and proxy_type settings. Failed requests cost 0 credits. Every account gets 10,000 free credits per month with no card required, so you can wire the MCP server into a real agent loop before paying anything.
Does it work with VS Code, GitHub Copilot, and Claude Code?
Yes — through the MCP extension. Add the same JSON config to VS Code settings.json under the MCP servers key, restart, and the tools show up in Copilot Chat agents. For Claude Code, run claude mcp add scrapingant. For Cursor / Windsurf, drop the JSON into .cursor/mcp.json. Setup docs for every client →
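For VS Code specifically, a workspace-level config can live in .vscode/mcp.json. A sketch following recent VS Code releases (key names may differ in older versions; check the setup docs):

```json
{
  "servers": {
    "scrapingant": {
      "type": "http",
      "url": "https://api.scrapingant.com/mcp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
```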
How is the MCP server different from the AI data scraper?
Different shapes for different jobs. The MCP server returns page content — HTML / Markdown / text — which the agent then reasons over inside the LLM. The AI data scraper (/v2/extract) returns typed JSON — you describe the fields, get structured data back. Use the MCP server when you want the agent in the loop; use the extractor when you want a clean dataset.
Building an agent at scale?
High-volume MCP traffic, dedicated capacity, custom tools beyond the default three, or a one-shot research dataset — drop us a line and a real human gets back within a few hours.
“Our clients are pleasantly surprised by the response speed of our team.”