Claude Code Can't Scrape JavaScript Sites. Here's the Fix.

· 5 min read
Oleg Kulyk

You ask Claude Code to fetch a website and summarize it. Instead of useful content, you get back an empty <div> and a bunch of script tags. If you've used Claude Code for web scraping for more than a week, you've hit this wall. Here's why it happens and how to fix it in 30 seconds.

The Problem

Here's a typical scenario. You're working in Claude Code and ask it to pull content from a React-based documentation site or a Next.js app:

❯ Fetch https://app.example.com/dashboard and summarize what's on the page

"The page appears to be a JavaScript application. The HTML contains:

<div id="root"></div>
<script src="/static/js/main.a1b2c3.js"></script>
<noscript>You need to enable JavaScript to run this app.</noscript>

I cannot access the rendered content as it requires JavaScript execution."

This happens with any modern SPA — React apps, Vue dashboards, Angular portals, Next.js pages with client-side data fetching. The content you want lives in JavaScript, not in the raw HTML.

Why This Happens

Claude Code's built-in web_fetch tool does a plain HTTP GET request. It grabs the raw HTML the server sends back before any JavaScript executes. Modern web apps ship a nearly empty HTML shell and render everything client-side. No browser means no JavaScript execution means no content.

This isn't a limitation of Claude itself: given rendered content, it can parse and summarize it just fine. It just needs a tool that actually renders the page first.
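You can spot an unrendered SPA shell programmatically. Here is a minimal sketch of the heuristic: almost no visible text, plus script tags doing all the work. The 50-character threshold is an arbitrary assumption for illustration, not a documented rule.

```python
from html.parser import HTMLParser

class ShellDetector(HTMLParser):
    """Collects visible text and counts script tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.script_count = 0
        self.visible_text = ""
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.script_count += 1
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if not self._in_script:
            self.visible_text += data

def looks_like_spa_shell(html: str) -> bool:
    """Heuristic: scripts present but almost no visible text => unrendered SPA."""
    d = ShellDetector()
    d.feed(html)
    return d.script_count > 0 and len(d.visible_text.strip()) < 50

# The kind of HTML a plain GET sees on a client-rendered app:
shell = '<div id="root"></div><script src="/static/js/main.a1b2c3.js"></script>'
print(looks_like_spa_shell(shell))  # True
```

This is exactly what `web_fetch` hands back: the shell detector returns `True` because all the content lives behind the script tag.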

The Fix: ScrapingAnt MCP Server

ScrapingAnt's MCP server gives Claude Code access to a cloud-based headless Chrome cluster with rotating proxies and CAPTCHA solving. One command to install:

claude mcp add scrapingant --transport http https://api.scrapingant.com/mcp -H "x-api-key: <YOUR-API-KEY>"

That's it. Claude Code now has three new tools:

  • get_web_page_html — full rendered HTML after JavaScript execution
  • get_web_page_markdown — clean Markdown output, perfect for LLM context
  • get_web_page_text — plain text extraction
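Under the hood, these tools wrap ScrapingAnt's REST API, which you can also call directly. The sketch below only constructs the request URL; the endpoint and parameter names follow ScrapingAnt's v2 API as publicly documented, but verify them against the current docs before relying on them.

```python
from urllib.parse import urlencode

# ScrapingAnt's v2 REST endpoint (assumption: check the current API docs).
API_BASE = "https://api.scrapingant.com/v2/general"

def build_request_url(target_url: str, api_key: str, render_js: bool = True) -> str:
    """Construct the GET URL for fetching a page through ScrapingAnt."""
    params = {
        "url": target_url,
        "x-api-key": api_key,
        # "true" routes the request through a headless Chrome instance
        "browser": str(render_js).lower(),
    }
    return f"{API_BASE}?{urlencode(params)}"

url = build_request_url("https://app.example.com/dashboard", "<YOUR-API-KEY>")
print(url)
```

Sending a GET to that URL returns the post-render HTML, which is what the `get_web_page_html` tool surfaces to Claude.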

Now the same request works:

❯ Fetch https://app.example.com/dashboard and summarize what's on the page

"This is an analytics dashboard showing:
- Monthly revenue: $128,450 (+8% MoM)
- Active users: 45,231
- Top product: Wireless Headphones Pro ($349)
- Conversion rate: 3.2% (up from 2.8% last month)

The dashboard also includes charts for daily active users
and a geographic breakdown of sales by region."

Claude automatically picks the right tool. You don't need to specify which one — just ask for what you want in plain English.

Three Things You Can Do With It

1. Pull docs into context without copy-pasting

Need to understand a library's API? Instead of manually copying documentation into the chat, ask Claude Code to fetch it directly. The markdown tool returns clean, formatted text — no nav bars, no footers, no cookie banners. Just the content.

❯ Fetch the React Query documentation page for useQuery and explain
how staleTime differs from gcTime

[Claude fetches the docs page via ScrapingAnt, gets clean markdown]

"Based on the current React Query docs:
- staleTime: How long data is considered fresh (default: 0)
- gcTime: How long inactive data stays in cache (default: 5 min)

The key difference: staleTime prevents refetches while data is fresh.
gcTime controls when cached data gets garbage collected entirely..."

This is especially useful for RAG workflows and AI agent pipelines where you need to feed live web content into your LLM context.
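For RAG pipelines, the markdown output still needs to be split into context-sized pieces. A minimal sketch of heading-aligned chunking (the 2,000-character default is an assumption; tune it to your embedding model):

```python
import re

def chunk_markdown(md: str, max_chars: int = 2000) -> list[str]:
    """Split fetched markdown into heading-aligned chunks small enough
    to embed or drop into an LLM context window."""
    # Split at zero-width positions just before each markdown heading.
    sections = re.split(r"(?m)^(?=#{1,6} )", md)
    chunks, current = [], ""
    for section in sections:
        if current and len(current) + len(section) > max_chars:
            chunks.append(current)
            current = ""
        current += section
    if current:
        chunks.append(current)
    return chunks

doc = "# useQuery\nIntro text.\n## staleTime\nFresh window.\n## gcTime\nCache lifetime.\n"
for c in chunk_markdown(doc, max_chars=40):
    print(repr(c))
```

Splitting on headings rather than fixed offsets keeps each chunk topically coherent, which tends to improve retrieval quality.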

2. Research competitor sites

Working on a pricing page? Ask Claude Code to scrape three competitors and compare. It renders the JavaScript, extracts the actual content, and gives you structured analysis without leaving your terminal.

❯ Scrape the pricing pages of Firecrawl, Apify, and Browserless.
Compare their free tier limits in a table.

[Claude fetches all three pages, renders JS, extracts pricing]

| Provider | Free Credits | JS Rendering | Rate Limit |
|-------------|-------------|--------------|------------|
| Firecrawl | 500/month | Included | 10 req/min |
| Apify | $5 credit | Extra cost | 30 req/min |
| Browserless | 1000/month | Included | 5 req/min |

3. Extract structured data

Need product listings, job postings, or contact info from a page? Describe what you want in plain English. ScrapingAnt renders the page, Claude parses the result.

❯ Go to https://news.ycombinator.com and extract the top 5 posts
as JSON with title, url, points, and comment count

[Claude fetches rendered page, parses the content]

[
{
"title": "Show HN: I built a distributed SQLite",
"url": "https://example.com/distributed-sqlite",
"points": 342,
"comments": 128
},
...
]

No CSS selectors. No XPath. No maintenance when the site redesigns. You describe what you want, Claude figures out the extraction.
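Since an LLM does the extraction, it's worth sanity-checking the returned records before storing them. A minimal sketch, using the field names from the example above (the schema is whatever you asked for, not anything ScrapingAnt enforces):

```python
def validate_posts(posts, required=("title", "url", "points", "comments")):
    """Return a list of problems found in LLM-extracted records;
    an empty list means every record passed the checks."""
    problems = []
    for i, post in enumerate(posts):
        missing = [k for k in required if k not in post]
        if missing:
            problems.append(f"post {i}: missing {missing}")
        elif not isinstance(post["points"], int):
            problems.append(f"post {i}: points is not an integer")
    return problems

sample = [{"title": "Show HN: I built a distributed SQLite",
           "url": "https://example.com/distributed-sqlite",
           "points": 342, "comments": 128}]
print(validate_posts(sample))  # []
```

A cheap guard like this catches the most common failure mode of prompt-based extraction: a record that silently drops a field or returns a number as a string.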

Pricing

The free plan gives you 10,000 API credits per month. JS rendering costs 10 credits per request, static pages cost 1. That's roughly 1,000 JavaScript-rendered pages or 10,000 static pages per month — free, no credit card required.

You only pay for successful requests. Failed scrapes don't cost credits.

For most developers using Claude Code for research and prototyping, the free tier is more than enough. If you need more, paid plans start at $19/month.

Getting Started

  1. Get a free API key — sign up here (no credit card)
  2. Install the MCP server — run this in your terminal:
    claude mcp add scrapingant --transport http https://api.scrapingant.com/mcp -H "x-api-key: YOUR_API_KEY"
  3. Start scraping — ask Claude Code to fetch any page. It will automatically use ScrapingAnt for JavaScript-heavy sites.

Full documentation: ScrapingAnt MCP Server Docs


Claude Code is powerful but blind to JavaScript. ScrapingAnt gives it eyes. Unlike running a local Playwright MCP server that eats your machine's resources and gets blocked by bot detection, ScrapingAnt handles the infrastructure — thousands of proxies, CAPTCHA solving, and browser rendering — so you can focus on what matters: getting the data you need.
