NEW ScrapingAnt MCP for Claude Code, Cursor & Windsurf — try it free →
★★★★★ 5.0 on Capterra

Claude Code can't see JavaScript. ScrapingAnt can.

The ScrapingAnt MCP server for Claude Code adds three web scraping tools — HTML, Markdown, plain text — to your CLI in about 30 seconds. Real headless Chrome, rotating proxies, and anti-bot handling built in. Installed with a single claude mcp add command. Free 10K credits/month.

10,000 free credits · failed requests cost 0 · no Chromium on disk

# One command. No Chromium download. No profile setup.
$ claude mcp add scrapingant \
    --transport http \
    https://api.scrapingant.com/mcp \
    -H "x-api-key: YOUR_API_KEY"

 scrapingant added · 3 tools registered · ready to use
# In your terminal:
> claude

# Then in chat:
> Read https://react-dashboard.example.com
  and summarise the key metrics

# Claude Code picks up the right tool automatically:
#   scrapingant.get_web_page_markdown(url)
# And gets back fully-rendered Markdown, not <div id="root"/>
# Without ScrapingAnt — plain web_fetch:
> Read https://react-dashboard.example.com
  and summarise the key metrics

✗ I tried fetching the page but it returned
  only a JavaScript shell:

  <div id="root"></div>
  <script src="/static/js/bundle.js">

  I cannot read the rendered dashboard content.
# With ScrapingAnt MCP:
> Read https://react-dashboard.example.com
  and summarise the key metrics

✓ The dashboard shows:
  · Monthly Active Users: 45,231 (+12%)
  · Revenue: $128,450 (+8% MoM)
  · Churn rate: 2.3% (-0.4%)
  · NPS: 72

  Top feature by usage: live filters (38%).
The problem with web_fetch

Claude Code stops guessing at HTML.

Claude Code's built-in web_fetch is a plain HTTP request — no browser, no JavaScript execution. Anything rendered client-side comes back as <div id="root">. The ScrapingAnt MCP server routes through real headless Chrome (the same engine behind our JavaScript rendering API), runs the JS, waits for the page to settle, and returns the DOM Claude can actually read.

  • SPAs, dashboards, lazy-loaded content — all return populated
  • Claude Code keeps using web_fetch for static pages and APIs
  • The agent picks the right tool from the prompt — no flag flipping
  • get_web_page_html — raw HTML for parsing, custom selectors, or feeding to your own processor
  • get_web_page_markdown — clean LLM-ready Markdown: no boilerplate, ~9× fewer tokens than raw HTML
  • get_web_page_text — plain text only, for summarisation, classification, or just-the-words tasks
Three tools, three formats

Three MCP web scraping tools in your CLI.

Each tool takes a URL and optional browser, proxy_type, and proxy_country parameters. Claude Code picks the right one from the prompt — “explain this article” routes to LLM-ready Markdown; “extract every link” routes to HTML; “count word frequency” routes to plain text. Need typed JSON instead of raw page content? Stack with the AI data scraper.

  • Markdown for context, HTML for DOM access, text for cheap summaries
  • Show up in claude mcp list and Claude Code's tool picker
  • Same tools in Cursor, Windsurf, Cline, Claude Desktop, VS Code Copilot
MCP tool docs →
  • DOCS LOOKUP · "use the Astro 6 routing docs" → get_web_page_markdown
  • DEBUG LIVE SPA · "why is the chart empty on staging?" → get_web_page_html
  • CHANGELOG DIFF · "compare these 4 release notes" → get_web_page_markdown ×4
  • VENDOR PRICING · "did Stripe change ACH fees this week?" → get_web_page_html · diff
  • RAG INGEST · "ingest the whole docs subdomain" → get_web_page_markdown ×N
  • STATUS / INCIDENTS · "what does Cloudflare status say right now?" → get_web_page_text

The pattern: prompt → tool call → cleaned content → answer.
In a real Claude Code session

Live web access, part of the loop.

The MCP tools aren't a side feature — they change what Claude Code can credibly answer mid-task. It quotes today's framework docs instead of training-data drift. It reads the rendered DOM of a SPA you're debugging. It diffs vendor pricing pages on demand. It ingests a docs subdomain into your RAG index — all from the same chat or scripted run.

  • No more “I don't have access to that page” from Claude Code
  • Same key works in interactive sessions, claude --print, and CI
  • Schedule recurring fetches by wrapping the call in a cron / agent loop
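The cron wrap in the last bullet can be as small as one crontab line. A sketch, assuming the scrapingant server was already added with claude mcp add and your Anthropic key is configured; the URL and log path are placeholders:

```shell
# Hypothetical crontab entry: hourly non-interactive run via claude --print.
# status.example.com and the log path are illustrative, not real endpoints.
0 * * * * claude --print "Read https://status.example.com and summarise any open incidents" >> "$HOME/status-summary.log" 2>&1
```

Because the MCP server is hosted, the same entry works on any machine with Claude Code installed — no local Chromium to provision.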
Shared with /v2/general: cloud Chrome · rotating proxies · CAPTCHA-free · TLS fingerprinting · Cloudflare handling · auto-retries, plus the MCP transport on top (JSON over HTTP).
Built on the cluster

Same cluster. Same uptime.

Every Claude Code MCP call rides the same headless Chrome cluster, rotating residential and datacenter proxies, CAPTCHA avoidance, TLS fingerprinting, and automatic retries that back the JavaScript rendering API. The MCP server is just a thinner transport on top — Claude Code gets the same anti-bot reliability the rest of the ScrapingAnt API delivers. Same plumbing as the generic MCP server for web scraping, surfaced for the Claude Code workflow.

  • 50K+ datacenter IPs, 3M+ residential — handles anti-bot out of the box
  • Switch to residential proxies via proxy_type — same call
  • Country-pin requests with proxy_country when geo accuracy matters
  • Failed requests cost zero credits — never pay for a broken page
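Under the hood, each tool call is a standard MCP tools/call request, so you can also hit the endpoint without Claude Code. A minimal curl sketch — the JSON-RPC envelope follows the MCP spec and the parameter names listed above, but treat the exact body shape as an assumption; the URL and country values are illustrative:

```shell
# Build a JSON-RPC "tools/call" body for the Markdown tool, including the
# optional proxy parameters described above. All values are placeholders.
BODY=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_web_page_markdown",
    "arguments": {
      "url": "https://react-dashboard.example.com",
      "proxy_type": "residential",
      "proxy_country": "DE"
    }
  }
}
EOF
)

# Only fire the request if an API key is configured in the environment.
if [ -n "${SCRAPINGANT_API_KEY:-}" ]; then
  curl -s https://api.scrapingant.com/mcp \
    -H "content-type: application/json" \
    -H "x-api-key: $SCRAPINGANT_API_KEY" \
    -d "$BODY"
fi
```

Switching tools is just a different "name"; switching to a datacenter proxy or another country is just a different argument — no config change on the Claude Code side.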
Pricing

Industry-leading pricing that scales with your business.

Compare plans side by side. Every tier includes 10,000 free credits to start.
Plan           Credits / mo   Price      Support channel         Integration help
Enthusiast     100,000        $19/mo     Email                   Docs only
Startup ★      500,000        $49/mo     Priority email          Custom code snippets
Business       3,000,000      $249/mo    Priority email          Debug sessions
Business Pro   8,000,000      $599/mo    Priority email          Priority debug sessions
Custom         10M+           $699+/mo   Priority + dedicated    Full enterprise onboarding

★ Most Popular

Also included:
  • Expert assistance: Startup and above
  • Custom proxy pools: Business and above
  • Custom anti-bot avoidances: Business and above
  • Dedicated account manager: Business and above
Hit your limit mid-month?
Restart your plan instantly — no waiting for the next billing cycle. Credits refresh the moment you pay, so scraping never has to stop.
10,000 free credits every month
No credit card required
Pay only for successful scrapes — failed requests cost 0
Customers

What teams are saying.

From solo developers shipping side projects to enterprise pipelines at Fortune 500s.

★★★★★ 5.0 on Capterra →
★★★★★

“Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”

Illia K.
Android Software Developer
★★★★★

“Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”

Andrii M.
Senior Software Engineer
★★★★★

“This product helps me to scale and extend my business. The setup is easy and support is really good.”

Dmytro T.
Senior Software Engineer
FAQ

Frequently asked questions.

Still curious? Get in touch with our team — we usually reply within hours.

What is the ScrapingAnt MCP server for Claude Code?

It's a hosted Model Context Protocol endpoint that adds three web scraping tools to Claude Code — get_web_page_html, get_web_page_markdown, and get_web_page_text. Run one claude mcp add command and Claude Code can fetch live web pages mid-conversation, render JavaScript with real headless Chrome, and rotate through 3M+ proxies — all without local Chromium or extra glue code. It's the Claude Code-flavoured install of our generic MCP server for web scraping.

Does this replace Claude Code's built-in web_fetch?

It complements it. web_fetch is a simple HTTP request — fine for static pages, raw robots.txt, or hitting an API. The MCP tools route through real headless Chrome (the same engine behind our JavaScript rendering API), so they handle SPAs, JS-rendered dashboards, lazy-loaded grids, and anti-bot walls. Claude Code picks the right one based on the prompt.

Which Claude Code commands does it work with?

Any of them. The MCP tools show up in claude, claude --print, scripted runs, agent tasks — anywhere Claude Code can call a tool. List installed servers with claude mcp list; remove with claude mcp remove scrapingant.

What tools does the Claude Code MCP server expose?

Three. get_web_page_html returns raw HTML for parsing or DOM access. get_web_page_markdown returns clean LLM-ready Markdown — token-efficient context. get_web_page_text returns plain text for cheap summarisation. Each takes a URL plus optional browser, proxy_type, and proxy_country parameters.

How are credits charged from a Claude Code session?

Each tool call is one HTTP request. JS rendering costs 10 credits, raw fetch costs 1, residential proxy adds a multiplier. Failed requests cost 0. The free plan is 10,000 credits/month — that's ~1,000 JS-rendered pages or ~10,000 static fetches without entering a card.
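The arithmetic above as a quick sanity check, using only the costs stated in this answer (the residential multiplier is not included here, since its value isn't given):

```shell
# Credit math from the answer above: 10 credits per JS-rendered page,
# 1 credit per raw fetch, 10,000 free credits per month.
FREE=10000
echo "$(( FREE / 10 )) JS-rendered pages"   # 1000
echo "$(( FREE / 1 )) static fetches"       # 10000
```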

Does it work with Cursor, Windsurf, Cline, VS Code Copilot?

Yes — the same MCP URL works in any MCP-compatible client. Drop the JSON config (URL + x-api-key header) into .cursor/mcp.json, claude_desktop_config.json, or VS Code's MCP settings. Setup docs →
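For Cursor, the drop-in config from this answer looks roughly like the sketch below. The key names follow the common MCP client convention for remote HTTP servers, so treat the exact shape as an assumption and confirm against the setup docs:

```json
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "headers": {
        "x-api-key": "YOUR_API_KEY"
      }
    }
  }
}
```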

Cloudflare, login walls, geo-blocked content — does it handle them?

Public pages only — no login walls. The same anti-bot stack as the rest of ScrapingAnt — rotating residential and datacenter proxies, TLS fingerprinting, CAPTCHA avoidance — runs underneath every MCP call, which covers most Cloudflare-protected pages. For tougher or geo-blocked targets, switch proxy_type to residential (and pin a country with proxy_country) without changing your Claude Code config.

Can I use it in agent automations or CI scripts?

Yes. The MCP server is hosted, so a Claude Code agent loop or a CI job calling claude --print hits the same endpoint with the same key. No local Chromium, no per-machine setup.

How is this different from the AI data scraper?

Different shapes for different jobs. The MCP tools return page content — HTML / Markdown / text — which Claude Code reasons over inside the model. The AI data scraper (/v2/extract) returns typed JSON keyed to a plain-English schema you describe. Use the MCP tools when you want Claude Code to read and reason; use the extractor when you want a clean dataset back.

Talk to us

Need a custom plan?

High-volume pricing, residential pool tuning, dedicated infrastructure, custom scrapers — drop us a line and a real human gets back within a few hours.

“Our clients are pleasantly surprised by the response speed of our team.”

Oleg Kulyk
Founder, ScrapingAnt

A real human replies within a few hours · we don't share your email


Ready to scrape the web?

10,000 free credits every month. No credit card. Pay only for successful requests.

Sign up in under 30 seconds — no card, no commitment.