One API for all your web scraping needs. Choose the optimal scraping provider for a given website.
Get $5 free credits to start in 60 seconds. No credit card required.
See which scrapers and proxy types work for a URL before you write any code.
Get started in minutes.
Sign up for a free account.
Credits work with any scraper or proxy.
Create an API key and start making requests.
Consistent schema and optimized cost.
Receive the same JSON response regardless of which provider fulfilled it.
Use multiple scraping providers and libraries through a single integration.
We route to the best provider for each domain and automatically retry with the next best option if it fails.
We attempt requests via the cheapest provider first, escalating to premium options only when necessary.
One request is all it takes.
#!/usr/bin/env bash
curl -X POST https://www.scraperouter.com/api/v1/scrape/ \
  -H "Authorization: Api-Key {your_api_key}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "scraper": "auto"
  }'
import requests

response = requests.post(
    "https://www.scraperouter.com/api/v1/scrape/",
    headers={"Authorization": "Api-Key {your_api_key}"},
    json={
        "url": "https://example.com",
        "scraper": "auto",
    },
)
print(response.json())
const response = await fetch("https://www.scraperouter.com/api/v1/scrape/", {
  method: "POST",
  headers: {
    "Authorization": "Api-Key {your_api_key}",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com",
    scraper: "auto",
  }),
});

const data = await response.json();
console.log(data);
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status_code": 200,
  "url": "https://example.com",
  "content": "<!doctype html>...",
  "headers": {
    "content-type": "text/html; charset=UTF-8"
  },
  "scraper": "apiritif/requests:2.32"
}
Want to learn more? Read the documentation.
Pay-as-you-go. See detailed per-request pricing for each scraper.
It puts a single API between your code and the complexity of scraping routes. You send one request to /api/v1/scrape/, ScrapeRouter runs the route you choose (or chooses one for you), and you get back a single normalized response shape.
Use plain requests when the target is simple and already works. ScrapeRouter starts to matter when sites change, block datacenter traffic, require browser rendering, or when browser-first defaults become too expensive at production volume.
In auto mode, ScrapeRouter tries configured scraper and proxy combinations in order and stops when one succeeds. The point is to keep route escalation out of your app while keeping the response schema stable.
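The try-in-order behavior can be sketched as a simple loop. This is illustrative only: the real routing runs inside ScrapeRouter, and the route names below are hypothetical placeholders, not documented values.

```python
def scrape_with_escalation(url, routes, attempt):
    """Try each (scraper, proxy) route in order; stop at the first success.

    `attempt` stands in for one scrape attempt and returns the page
    content on success, or None on failure.
    """
    for scraper, proxy in routes:
        content = attempt(url, scraper, proxy)
        if content is not None:  # success: no further escalation
            return {"url": url, "scraper": scraper, "proxy": proxy, "content": content}
    raise RuntimeError("all configured routes failed")

# Cheapest route first, premium last (hypothetical route names).
ROUTES = [
    ("requests", "none"),         # plain HTTP, no proxy
    ("requests", "datacenter"),   # add a datacenter proxy
    ("browser", "residential"),   # full browser + premium proxy
]
```

The response dict keeps the same shape no matter which route succeeded, which is the stable-schema point the answer above describes.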
Yes. If you already know the route you want, you can send an explicit scraper and proxy configuration. That is useful when you want tighter control over behavior and cost, or when a free check already showed you a workable starting route.
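A minimal sketch of building an explicit-route request body. The "scraper" field appears in the auto-mode examples above; the "proxy" field and both values are assumptions introduced here for illustration, so confirm the exact names against the documentation.

```python
import json

def build_explicit_route_payload(url: str, scraper: str, proxy: str) -> dict:
    # "scraper" matches the quickstart examples; "proxy" and the values
    # passed below are hypothetical placeholders, not documented names.
    return {"url": url, "scraper": scraper, "proxy": proxy}

payload = build_explicit_route_payload(
    "https://example.com", "requests", "datacenter"
)
body = json.dumps(payload)
# POST `body` to /api/v1/scrape/ with your Api-Key header,
# exactly as in the auto-mode examples.
```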
It is the fastest way to answer one practical question before integration: how should this URL be scraped? The report helps you see what route looks workable, what looks blocked, and what to try next in the API.
No. ScrapeRouter is a developer tool for technical teams. It helps you run and route scraping requests through one integration, but it is not a custom scraper agency or managed data collection service.
ScrapeRouter is pay-as-you-go and credit-based. Cost depends on the route used for each request, so simple targets do not carry browser and premium proxy costs they do not need. Paid check reports and API usage stay inside the same billing model.
That is the main reason ScrapeRouter exists. The route can change underneath while your downstream integration keeps the same request and response contract, so you are not rewriting the rest of your pipeline every time a site changes its defenses.