Three MCP Tools for Web Content Extraction
- get_web_page_html
- get_web_page_markdown
- get_web_page_text
What Will Web Access Cost?
How credits are calculated:
- SERP scraping: 10 credits per search engine query
- Page fetch (with JS): 10 credits per page
- Page fetch (no JS): 1 credit per page
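As a quick sanity check, the per-request rates above can be combined into a simple estimator. This is an illustrative sketch only; the constants are taken from the list above and should be confirmed against current pricing:

```python
# Credit rates as listed on this page (assumed current; verify before billing estimates).
CREDITS_PER_SERP_QUERY = 10
CREDITS_PER_PAGE_JS = 10
CREDITS_PER_PAGE_STATIC = 1

def estimate_credits(serp_queries: int = 0, js_pages: int = 0, static_pages: int = 0) -> int:
    """Return the total credits consumed by a mixed batch of requests."""
    return (serp_queries * CREDITS_PER_SERP_QUERY
            + js_pages * CREDITS_PER_PAGE_JS
            + static_pages * CREDITS_PER_PAGE_STATIC)

# Example: 5 searches, 20 JS-rendered fetches, 100 static fetches
print(estimate_credits(serp_queries=5, js_pages=20, static_pages=100))  # → 350
```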
Transparent Credit-Based Pricing
Frequently Asked Questions
How is this different from Tavily?
ScrapingAnt MCP gives you direct web access rather than a pre-processed search API. With Tavily, you send a query and get curated results. With ScrapingAnt, your agent performs its own search via SERP scraping, chooses which pages to fetch, and gets the raw content. This gives you full control over source selection, with no hidden reranking algorithms: you build your own pipeline instead of relying on a black box.
Can I use this for RAG pipelines?
Absolutely. The get_web_page_markdown tool is designed exactly for this. It returns clean, LLM-ready Markdown that you can chunk and embed directly into your vector database. You control what gets indexed: fetch documentation, blog posts, news articles, or any other web content your RAG system needs.
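For example, once get_web_page_markdown has returned a page, a minimal chunking pass might look like the sketch below. The helper is hypothetical (not part of ScrapingAnt); production pipelines usually split on headings and count tokens rather than characters:

```python
def chunk_markdown(markdown: str, max_chars: int = 1000) -> list[str]:
    """Naive chunker: split on blank lines, then pack paragraphs into
    chunks of at most max_chars characters each."""
    paragraphs = [p.strip() for p in markdown.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for p in paragraphs:
        # Start a new chunk if appending this paragraph would exceed the limit.
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks

doc = "# Title\n\nFirst paragraph.\n\nSecond paragraph."
print(chunk_markdown(doc, max_chars=20))  # three small chunks, one per paragraph
```

Each chunk can then be embedded and upserted into your vector store as usual.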
How do I search the web with ScrapingAnt MCP?
You scrape Google, DuckDuckGo, or Bing search results directly. Fetch the SERP page as HTML or Markdown, parse the results, and your agent can choose which links to follow. This gives you transparent search results without any vendor-side filtering or reranking. Your agent sees exactly what a human searcher would see.
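As a sketch of the parsing step: if you fetch a SERP page as Markdown, result links can be pulled out with a simple pattern. This assumes results surface as standard Markdown links, which varies by engine and page layout:

```python
import re

# Matches Markdown links of the form [title](http...).
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_links(markdown: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs for every Markdown link on the page."""
    return LINK_RE.findall(markdown)

serp = "1. [Example Domain](https://example.com/) - an illustrative result"
print(extract_links(serp))  # [('Example Domain', 'https://example.com/')]
```

Your agent can then rank or filter the extracted URLs itself and fetch only the pages it actually needs.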
What about rate limits and anti-bot protection?
ScrapingAnt handles this for you. Our infrastructure includes 50K+ datacenter IPs and 2M+ residential IPs with automatic rotation. Browser rendering handles JavaScript-heavy sites. For sites with aggressive anti-bot protection, use residential proxies with geo-targeting. Most sites work out of the box with default settings.
How do credits work for different request types?
SERP scraping (Google, Bing) costs 10 credits per search. Page fetches with JavaScript rendering cost 10 credits per page. Static page fetches (no JS) cost just 1 credit per page. Residential proxies consume additional credits. The calculator above helps you estimate costs based on your specific usage patterns.
Which AI tools support MCP?
Our MCP server works with Claude Desktop, Cursor, VS Code (with GitHub Copilot), Claude Code (CLI), Cline, and Windsurf. Any tool that supports the Model Context Protocol standard can use our server. See our MCP Server page for detailed setup instructions for each tool.
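For reference, MCP clients such as Claude Desktop register servers through a JSON config file. The command, package name, and environment variable below are placeholders, not confirmed values; copy the exact snippet from our MCP Server page for your tool:

```json
{
  "mcpServers": {
    "scrapingant": {
      "command": "npx",
      "args": ["-y", "scrapingant-mcp-server"],
      "env": { "SCRAPINGANT_API_KEY": "<your-api-key>" }
    }
  }
}
```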
Can I use this outside of MCP (direct API)?
Yes! ScrapingAnt's core Web Scraping API works the same way and can be called from any programming language. Use Python, JavaScript, Go, or any HTTP client. The MCP server is just one interface to the same powerful scraping infrastructure. See our main documentation for API usage.
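A minimal Python sketch of a direct API call is below. The endpoint path and parameter names (`url`, `x-api-key`, `browser`) follow ScrapingAnt's v2 general-scraping API as we understand it, but verify them against the main documentation before relying on this:

```python
from urllib.parse import urlencode

# Assumed v2 endpoint; confirm against the current API docs.
API_BASE = "https://api.scrapingant.com/v2/general"

def build_request_url(target_url: str, api_key: str, render_js: bool = False) -> str:
    """Compose the GET URL for scraping target_url."""
    params = {
        "url": target_url,
        "x-api-key": api_key,
        # Toggles browser (JS) rendering; parameter name is an assumption.
        "browser": "true" if render_js else "false",
    }
    return f"{API_BASE}?{urlencode(params)}"

print(build_request_url("https://example.com", "YOUR_KEY"))
# Fetch with any HTTP client, e.g. urllib.request.urlopen(url).read()
```

Because it is plain HTTP, the same request works from JavaScript, Go, curl, or any other client.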
"Our clients are pleasantly surprised by the response speed of our team."
ScrapingAnt Founder