NEW ScrapingAnt MCP for Claude Code, Cursor & Windsurf — try it free →
★★★★★ 5.0 on Capterra

MCP server for web scraping.

Give your AI agent live access to the open web. The hosted ScrapingAnt MCP server exposes three web scraping tools — HTML, Markdown, plain text — to Claude, Cursor, Windsurf, VS Code, Claude Code, and Cline. One command to install, with headless Chrome and anti-bot handling built in.

10,000 free credits · failed requests cost 0 · works with any MCP client

# One command. No Chromium download. No profile setup.
$ claude mcp add scrapingant \
    --transport http \
    https://api.scrapingant.com/mcp \
    -H "x-api-key: YOUR_API_KEY"
// .cursor/mcp.json (or .windsurf/mcp.json)
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "transport": "streamableHttp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
// claude_desktop_config.json
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "transport": "streamableHttp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
# In Claude / Cursor / Windsurf chat:
> Read https://docs.example.com/api/auth and explain the OAuth flow

# Agent picks up the right tool automatically:
#   scrapingant.get_web_page_markdown(url)
#   scrapingant.get_web_page_html(url)
#   scrapingant.get_web_page_text(url)
[Diagram: your agent prompt → tool call (get_web_page_…) from Claude / Cursor / … → ScrapingAnt MCP → cloud cluster (render, proxy, clean) → JSON response in 3 formats: html, markdown, text. No install, no Chromium, just an MCP URL.]
  • get_web_page_html: raw HTML for parsing, custom selectors, or feeding to your own processor
  • get_web_page_markdown: clean LLM-ready Markdown with no boilerplate, ~9× fewer tokens than raw HTML
  • get_web_page_text: plain text only, for summarisation, classification, or just-the-words tasks
Three tools, three formats

Three MCP web scraping tools. Pick the format your agent needs.

Each tool takes a URL and optional browser, proxy_type, and proxy_country parameters. The agent picks the right one automatically based on the prompt — “explain this article” routes to LLM-ready Markdown; “extract every link” routes to HTML; “count word frequency” routes to plain text. Need typed JSON instead of raw page content? Pair this with the AI data scraper.

  • Markdown stripped of nav, ads, scripts — token-efficient context
  • HTML preserved when you need DOM access in the agent
  • Plain text for cheap summarisation passes
MCP tool docs →
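Under the hood, each of these tool calls is a JSON-RPC request over HTTP. Here is a minimal Python sketch of the request body an MCP client sends, assuming the standard MCP tools/call shape; the URL and proxy values are illustrative, not defaults:

```python
import json

def build_tool_call(tool: str, url: str, **opts) -> dict:
    """Return a JSON-RPC 2.0 tools/call request body.

    Optional keys (e.g. proxy_type, proxy_country, browser) ride along
    as tool arguments next to the URL.
    """
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": tool, "arguments": {"url": url, **opts}},
    }

req = build_tool_call(
    "get_web_page_markdown",
    "https://docs.example.com/api/auth",
    proxy_type="residential",  # optional: route through residential proxies
    proxy_country="US",        # optional: pin the exit country
)
print(json.dumps(req, indent=2))
```

Your MCP client builds and sends this for you; the sketch just shows where the optional parameters land.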
  • Coding assistant: "use the latest Vercel deploy docs" → get_web_page_markdown
  • Research agent: "summarise pricing on these 12 URLs" → get_web_page_text ×12
  • Brand monitor: "any new reviews since yesterday?" → get_web_page_markdown
  • Competitive intel: "did competitor X change pricing?" → get_web_page_html + diff
  • RAG indexer: "ingest the whole docs subdomain" → get_web_page_markdown ×N
  • Support bot: "what does the status page say?" → get_web_page_text
The pattern: prompt → tool call → cleaned content → answer.
What it's actually for

Live web access for AI agents, part of the loop.

The ScrapingAnt MCP server isn't a side feature — it changes what the agent can credibly claim to know. Coding assistants quote the latest framework docs instead of guessing. Research agents pull live pages mid-thought. Background agents check competitor pricing on a schedule. Support bots cite your help center. The pattern is always the same: prompt → tool call → cleaned content → answer.

  • No more “I don't have access to that page” from your agent
  • Same key works in chat threads, autonomous agents, and CI scripts
  • Schedule recurring fetches by wrapping the tool in a cron / agent loop
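One low-tech way to schedule those recurring fetches is a plain crontab entry. This is a sketch, not an official recipe: it assumes Claude Code's non-interactive `-p` print mode and that the ScrapingAnt MCP server has already been added; the URL and log path are placeholders.

```shell
# Every day at 08:00, have a Claude Code agent (ScrapingAnt MCP already
# configured) fetch a page and append its findings to a log.
0 8 * * * claude -p "Fetch https://competitor.example.com/pricing and summarise any changes" >> "$HOME/price-watch.log" 2>&1
```

Any scheduler works the same way; the MCP server itself is stateless, so each run is just another tool call.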
[Diagram: shared with /v2/general: cloud Chrome, rotating proxies, CAPTCHA avoidance, TLS fingerprinting, Cloudflare handling, auto-retries, with the MCP transport (JSON over HTTP) on top.]
Built on the cluster

Same cluster. Same uptime.

Every MCP call rides the same headless Chrome cluster, rotating residential and datacenter proxies, CAPTCHA avoidance, TLS fingerprinting, and automatic retries that back the JavaScript rendering API. The MCP server is just a thinner transport on top — your agent gets the same anti-bot reliability the rest of the ScrapingAnt API delivers.

  • Real headless Chrome — handles SPAs, lazy-loaded grids, dynamic content
  • Switch to residential proxies via proxy_type parameter — same call
  • Country-pin requests with proxy_country when geo accuracy matters
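For illustration, here is what the arguments object of a tools/call request could look like with both proxy parameters set; the target URL and country are examples, not defaults:

```json
{
  "name": "get_web_page_html",
  "arguments": {
    "url": "https://shop.example.com/pricing",
    "proxy_type": "residential",
    "proxy_country": "DE"
  }
}
```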
Pricing

Industry-leading pricing that scales with your business.

Compare plans side by side. Every tier includes 10,000 free credits to start.
Plans

Enthusiast · 100K credits / mo · $19/mo
  • Support: email
  • Integration help: docs only

Startup (★ Most Popular) · 500K credits / mo · $49/mo
  • Support: priority email
  • Integration help: custom code snippets
  • Expert assistance included

Business · 3M credits / mo · $249/mo
  • Support: priority email
  • Integration help: debug sessions
  • Expert assistance, custom proxy pools, custom anti-bot avoidances, and a dedicated account manager included

Business Pro · 8M credits / mo · $599/mo
  • Support: priority email
  • Integration help: priority debug sessions
  • Expert assistance, custom proxy pools, custom anti-bot avoidances, and a dedicated account manager included

Custom · 10M+ credits / mo · $699+/mo (talk to sales)
  • Support: priority + dedicated
  • Integration help: full enterprise onboarding
  • Expert assistance, custom proxy pools, custom anti-bot avoidances, and a dedicated account manager included
Hit your limit mid-month?
Restart your plan instantly — no waiting for the next billing cycle. Credits refresh the moment you pay, so scraping never has to stop.
10,000 free credits every month
No credit card required
Pay only for successful scrapes — failed requests cost 0
Customers

What teams are saying.

From solo developers shipping side projects to enterprise pipelines at Fortune 500s.

★★★★★ 5.0 on Capterra →
★★★★★

“Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”

Illia K.
Android Software Developer
★★★★★

“Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”

Andrii M.
Senior Software Engineer
★★★★★

“This product helps me to scale and extend my business. The setup is easy and support is really good.”

Dmytro T.
Senior Software Engineer
FAQ

Frequently asked questions.

Still curious? Get in touch with our team — we usually reply within hours.

What is the ScrapingAnt MCP server?

The ScrapingAnt MCP server is a hosted Model Context Protocol endpoint that gives any MCP-aware AI client — Claude Desktop, Cursor, Windsurf, Claude Code, VS Code with the MCP extension, and Cline — three web scraping tools: get_web_page_html, get_web_page_markdown, and get_web_page_text. Add the URL plus your API key once and the agent can fetch live web pages mid-conversation, with our JavaScript rendering and proxy stack underneath every call.

What is MCP and why does it matter for web scraping?

Model Context Protocol is the spec AI clients (Claude, Cursor, Windsurf, VS Code, Cline, etc.) use to call external tools. ScrapingAnt exposes itself as an MCP server so the agent picks up our scraping tools the moment you add the config — no SDK to install, no glue code to write. Web scraping over MCP means agents can read pages they couldn't see before, including JavaScript-heavy SPAs that web_fetch-style tools render as empty shells.

Which tools does the MCP server expose?

Three. get_web_page_html returns raw HTML for parsing or DOM access. get_web_page_markdown returns clean LLM-ready Markdown — token-efficient context. get_web_page_text returns plain text for cheap summarisation. Each takes a URL plus optional browser, proxy_type, and proxy_country parameters.

Does the MCP server handle JavaScript-rendered pages?

Yes — by default. Every MCP call routes through real headless Chrome, so SPAs, lazy-loaded content, and React / Vue / Next.js pages return populated DOM. Set browser=false if you want raw HTML without rendering (saves credits on simple pages). Same engine that powers our JavaScript rendering API.
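Illustratively, the tool arguments for a render-free fetch might look like this; the shape is assumed from the parameters described above, and the URL is a placeholder:

```json
{
  "name": "get_web_page_markdown",
  "arguments": {
    "url": "https://example.com/plain-static-page",
    "browser": false
  }
}
```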

What about Cloudflare and other anti-bot walls?

Handled. Same anti-bot stack as the rest of ScrapingAnt — rotating proxies, TLS fingerprinting, CAPTCHA avoidance — runs underneath every MCP call. Switch proxy_type to residential to route through our residential proxy pool for tougher targets without changing your client config.

Can I run the MCP server for production agents, not just dev?

Yes. The server runs on our cloud cluster and scales with your call volume. Same uptime, same SLA, and same proxy fleet as /v2/general. The MCP transport is just a thinner layer on top — production-ready out of the box.

How is the MCP server billed?

Each MCP tool call maps to one HTTP request and uses API credits — the exact rate depends on your browser and proxy_type settings. Failed requests cost 0 credits. Every account gets 10,000 free credits per month with no card required, so you can wire the MCP server into a real agent loop before paying anything.

Does it work with VS Code, GitHub Copilot, and Claude Code?

Yes — through the MCP extension. Add the same JSON config to VS Code settings.json under the MCP servers key, restart, and the tools show up in Copilot Chat agents. For Claude Code, run claude mcp add scrapingant. For Cursor / Windsurf, drop the JSON into .cursor/mcp.json. Setup docs for every client →
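As a sketch, a workspace-level VS Code config could look like the following; the exact file location and key names vary across VS Code versions, so treat this as an assumption and verify against your client's MCP documentation:

```json
// .vscode/mcp.json (key names may differ by VS Code version)
{
  "servers": {
    "scrapingant": {
      "type": "http",
      "url": "https://api.scrapingant.com/mcp",
      "headers": { "x-api-key": "YOUR_API_KEY" }
    }
  }
}
```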

How is the MCP server different from the AI data scraper?

Different shapes for different jobs. The MCP server returns page content — HTML / Markdown / text — which the agent then reasons over inside the LLM. The AI data scraper (/v2/extract) returns typed JSON — you describe the fields, get structured data back. Use the MCP server when you want the agent in the loop; use the extractor when you want a clean dataset.

Talk to us

Building an agent at scale?

High-volume MCP traffic, dedicated capacity, custom tools beyond the default three, or a one-shot research dataset — drop us a line and a real human gets back within a few hours.

“Our clients are pleasantly surprised by the response speed of our team.”

Oleg Kulyk
Founder, ScrapingAnt

A real human replies within a few hours · we don't share your email


Ready to scrape the web?

10,000 free credits every month. No credit card. Pay only for successful requests.

Sign up in under 30 seconds — no card, no commitment.