Claude Code can't see JavaScript. ScrapingAnt can.
The ScrapingAnt MCP server for Claude Code adds three web scraping tools — HTML, Markdown, plain text — to your CLI in 30 seconds. Real headless Chrome, rotating proxies, anti-bot built in. Installed with a single claude mcp add. Free 10K credits/month.
10,000 free credits · failed requests cost 0 · no Chromium on disk
# One command. No Chromium download. No profile setup.
$ claude mcp add scrapingant \
--transport http \
https://api.scrapingant.com/mcp \
-H "x-api-key: YOUR_API_KEY"
✓ scrapingant added · 3 tools registered · ready to use

# In your terminal:
> claude
# Then in chat:
> Read https://react-dashboard.example.com
and summarise the key metrics
# Claude Code picks up the right tool automatically:
# scrapingant.get_web_page_markdown(url)
# And gets back fully-rendered Markdown, not <div id="root"/>

# With built-in web_fetch:
> Read https://react-dashboard.example.com
and summarise the key metrics
✗ I tried fetching the page but it returned
only a JavaScript shell:
<div id="root"></div>
<script src="/static/js/bundle.js">
I cannot read the rendered dashboard content.

# With the ScrapingAnt MCP server:
> Read https://react-dashboard.example.com
and summarise the key metrics
✓ The dashboard shows:
· Monthly Active Users: 45,231 (+12%)
· Revenue: $128,450 (+8% MoM)
· Churn rate: 2.3% (-0.4%)
· NPS: 72
Top feature by usage: live filters (38%).

What Claude Code can do now.
Three new tools in your CLI — without leaving the terminal, without installing Chromium.
Scrape any JS-rendered site
React, Vue, Next.js, dashboards, gated SPAs — Claude Code reads the rendered DOM, not the empty shell.
See the diff →
LLM-ready Markdown
Clean Markdown stripped of nav, ads, scripts — ~9× fewer tokens than raw HTML for the same content.
Tool details →
Anti-bot built in
Same Chrome cluster, rotating proxies, and CAPTCHA avoidance that back /v2/general.
Claude Code stops guessing at HTML.
Claude Code's built-in web_fetch is a plain HTTP request — no browser, no JavaScript execution. Anything rendered client-side comes back as <div id="root">. The ScrapingAnt MCP server routes through real headless Chrome (the same engine behind our JavaScript rendering API), runs the JS, waits for the page to settle, and returns the DOM Claude can actually read.
- SPAs, dashboards, lazy-loaded content — all return populated
- Claude Code keeps using web_fetch for static pages and APIs
- The agent picks the right tool from the prompt — no flag flipping
Three MCP web scraping tools in your CLI.
Each tool takes a URL and optional browser, proxy_type, and proxy_country parameters. Claude Code picks the right one from the prompt — “explain this article” routes to LLM-ready Markdown; “extract every link” routes to HTML; “count word frequency” routes to plain text. Need typed JSON instead of raw page content? Stack with the AI data scraper.
- Markdown for context, HTML for DOM access, text for cheap summaries
- Show up in claude mcp list and Claude Code's tool picker
- Same tools in Cursor, Windsurf, Cline, Claude Desktop, VS Code Copilot
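A sketch of that routing in a chat session, with placeholder URLs (the tool-call display in your terminal may look slightly different):

> Explain this article: https://blog.example.com/post
# → scrapingant.get_web_page_markdown(url)

> Extract every link from https://docs.example.com
# → scrapingant.get_web_page_html(url)

> Count word frequency on https://news.example.com/story
# → scrapingant.get_web_page_text(url)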
Live web access, part of the loop.
The MCP tools aren't a side feature — they change what Claude Code can credibly answer mid-task. It quotes today's framework docs instead of training-data drift. It reads the rendered DOM of a SPA you're debugging. It diffs vendor pricing pages on demand. It ingests a docs subdomain into your RAG index — all from the same chat or scripted run.
- No more “I don't have access to that page” from Claude Code
- Same key works in interactive sessions, claude --print, and CI
- Schedule recurring fetches by wrapping the call in a cron / agent loop
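A minimal sketch of that cron pattern. The schedule, prompt, and log path are placeholders; it assumes claude and the scrapingant server are already configured on the machine:

# Hypothetical crontab entry: every morning at 06:00, fetch a vendor
# pricing page through the MCP tools and append the summary to a log.
0 6 * * * claude --print "Read https://vendor.example.com/pricing and summarise any price changes" >> ~/pricing-log.md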
Same cluster. Same uptime.
Every Claude Code MCP call rides the same headless Chrome cluster, rotating residential and datacenter proxies, CAPTCHA avoidance, TLS fingerprinting, and automatic retries that back the JavaScript rendering API. The MCP server is just a thinner transport on top — Claude Code gets the same anti-bot reliability the rest of the ScrapingAnt API delivers. Same plumbing as the generic MCP server for web scraping, surfaced for the Claude Code workflow.
- 50K+ datacenter IPs, 3M+ residential — handles anti-bot out of the box
- Switch to residential proxies via proxy_type — same call
- Country-pin requests with proxy_country when geo accuracy matters
- Failed requests cost zero credits — never pay for a broken page
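And the proxy switch from the same chat, sketched with an assumed ISO country code ("DE" here is illustrative):

> Read https://shop.example.de/preise with a residential proxy pinned to Germany
# → scrapingant.get_web_page_markdown(url, proxy_type="residential", proxy_country="DE")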
Where Claude Code users plug it in.
Six concrete patterns from teams running it day-to-day.
Read framework docs mid-task
Claude Code pulls the live Next.js, Astro, or Stripe docs while writing the code that uses them — answers reflect the current API, not training-data drift.
Talk to us →
Debug a live SPA you wrote
Point Claude Code at your staging dashboard and ask why a chart looks off. It reads the rendered DOM, not just the empty shell.
Talk to us →
Compare changelogs before upgrading
Drop in 4 GitHub release URLs. Claude Code reads each one, diffs the breaking changes, and writes the migration plan into your repo.
Talk to us →
Build internal CLI tools
Wire scrapingant into a Claude Code project — it becomes a script that pulls vendor pricing, status pages, or dashboards on a cron.
Talk to us →
Scrape research for a write-up
Ask Claude Code to read 10 blog posts, extract the relevant claims, and draft a comparison doc — all from inside one chat session.
Talk to us →
Reverse-engineer a competitor flow
Have Claude Code walk a public sign-up page, describe the form, and sketch a backend that would support the same flow — DOM, not screenshots.
Talk to us →

Pricing
Industry-leading pricing that scales with your business.
| Plans | Enthusiast | Startup ★ Most Popular | Business | Business Pro | Custom |
|---|---|---|---|---|---|
| Price | $19/mo | $49/mo | $249/mo | $599/mo | $699+/mo |
| Monthly API credits | 100,000 | 500,000 | 3,000,000 | 8,000,000 | 10M+ |
| Support channel | Priority email | Priority email | Priority email | Priority + dedicated | |
| Integration help | Docs only | Custom code snippets | Debug sessions | Priority debug sessions | Full enterprise onboarding |
| Expert assistance | — | ✓ | ✓ | ✓ | ✓ |
| Custom proxy pools | — | — | ✓ | ✓ | ✓ |
| Custom anti-bot avoidances | — | — | ✓ | ✓ | ✓ |
| Dedicated account manager | — | — | ✓ | ✓ | ✓ |
| | Start Free | Start Free | Start Free | Start Free | Talk to Sales |
What teams are saying.
From solo developers shipping side projects to enterprise pipelines at Fortune 500s.
★★★★★ 5.0 on Capterra →

★★★★★ “Onboarding and API integration was smooth and clear. Everything works great. The support was excellent.”
★★★★★ “Great communication with co-founders helped me to get the job done. Great proxy diversity and good price.”
★★★★★ “This product helps me to scale and extend my business. The setup is easy and support is really good.”
Frequently asked questions.
Still curious? Get in touch with our team — we usually reply within hours.
What is the ScrapingAnt MCP server for Claude Code?
It's a hosted Model Context Protocol endpoint that adds three web scraping tools to Claude Code — get_web_page_html, get_web_page_markdown, and get_web_page_text. Run one claude mcp add command and Claude Code can fetch live web pages mid-conversation, render JavaScript with real headless Chrome, and rotate through 3M+ proxies — all without local Chromium or extra glue code. It's the Claude Code-flavoured install of our generic MCP server for web scraping.
Does this replace Claude Code's built-in web_fetch?
It complements it. web_fetch is a simple HTTP request — fine for static pages, raw robots.txt, or hitting an API. The MCP tools route through real headless Chrome (the same engine behind our JavaScript rendering API), so they handle SPAs, JS-rendered dashboards, lazy-loaded grids, and anti-bot walls. Claude Code picks the right one based on the prompt.
Which Claude Code commands does it work with?
Any of them. The MCP tools show up in claude, claude --print, scripted runs, agent tasks — anywhere Claude Code can call a tool. List installed servers with claude mcp list; remove with claude mcp remove scrapingant.
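For reference, the lifecycle commands from a shell (output omitted, as it varies by Claude Code version):

$ claude mcp list                 # shows scrapingant alongside any other installed servers
$ claude mcp remove scrapingant   # uninstalls the server; your API key stays valid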
What tools does the Claude Code MCP server expose?
Three. get_web_page_html returns raw HTML for parsing or DOM access. get_web_page_markdown returns clean LLM-ready Markdown — token-efficient context. get_web_page_text returns plain text for cheap summarisation. Each takes a URL plus optional browser, proxy_type, and proxy_country parameters.
How are credits charged from a Claude Code session?
Each tool call is one HTTP request. JS rendering costs 10 credits, raw fetch costs 1, residential proxy adds a multiplier. Failed requests cost 0. The free plan is 10,000 credits/month — that's ~1,000 JS-rendered pages or ~10,000 static fetches without entering a card.
Does it work with Cursor, Windsurf, Cline, VS Code Copilot?
Yes — the same MCP URL works in any MCP-compatible client. Drop the JSON config (URL + x-api-key header) into .cursor/mcp.json, claude_desktop_config.json, or VS Code's MCP settings. Setup docs →
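One plausible shape for that config, written from a shell. The mcpServers / url / headers field names follow common client conventions; check the setup docs for the exact schema your client expects:

$ cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "scrapingant": {
      "url": "https://api.scrapingant.com/mcp",
      "headers": { "x-api-key": "YOUR_API_KEY" }
    }
  }
}
EOF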
Cloudflare, login walls, geo-blocked content — does it handle them?
For public web only. Same anti-bot stack as the rest of ScrapingAnt — rotating residential and datacenter proxies, TLS fingerprinting, CAPTCHA avoidance — runs underneath every MCP call. Switch proxy_type to residential to route through our residential proxy pool for tougher targets without changing your Claude Code config.
Can I use it in agent automations or CI scripts?
Yes. The MCP server is hosted, so a Claude Code agent loop or a CI job calling claude --print hits the same endpoint with the same key. No local Chromium, no per-machine setup.
How is this different from the AI data scraper?
Different shapes for different jobs. The MCP tools return page content — HTML / Markdown / text — which Claude Code reasons over inside the model. The AI data scraper (/v2/extract) returns typed JSON keyed to a plain-English schema you describe. Use the MCP tools when you want Claude Code to read and reason; use the extractor when you want a clean dataset back.
Need a custom plan?
High-volume pricing, residential pool tuning, dedicated infrastructure, custom scrapers — drop us a line and a real human gets back within a few hours.
“Our clients are pleasantly surprised by the response speed of our team.”