The Unified Interface
For Scraping

One API for all your web scraping needs. Choose the optimal scraping provider for a given website.

Get $5 free credits to start in 60 seconds. No credit card required.

Run a free URL check

See which scrapers and proxy types work for a URL before you write any code.

How It Works

Get started in minutes.

Sign up

Create an account to get started.

Get free credits

Credits work with any scraper or proxy.

Get your API key

Create an API key and start making requests.

Scrape

Consistent schema and optimized cost.

Developer First Features

Unified Schema

Receive the same JSON response regardless of which provider fulfilled it.

Many Scrapers

Use multiple scraping providers and libraries through a single integration.

Smart Routing

We route to the best provider for each domain and automatically retry with the next best option if it fails.

Cost Optimization

Attempt requests via the cheapest provider first, escalating to premium only if necessary.

Quick Start

One request is all it takes.

cURL:

#!/usr/bin/env bash
curl -X POST https://www.scraperouter.com/api/v1/scrape/ \
  -H "Authorization: Api-Key {your_api_key}" \
  -H "Content-Type: application/json" \
  -d '{
  "url": "https://example.com",
  "scraper": "auto"
}'

Python:

import requests

response = requests.post(
    "https://www.scraperouter.com/api/v1/scrape/",
    headers={"Authorization": "Api-Key {your_api_key}"},
    json={
        "url": "https://example.com",
        "scraper": "auto",
    },
)
print(response.json())

JavaScript:

const response = await fetch("https://www.scraperouter.com/api/v1/scrape/", {
  method: "POST",
  headers: {
    "Authorization": "Api-Key {your_api_key}",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com",
    scraper: "auto",
  }),
});

const data = await response.json();
console.log(data);

Example response:

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "status_code": 200,
  "url": "https://example.com",
  "content": "<!doctype html>...",
  "headers": {
    "content-type": "text/html; charset=UTF-8"
  },
  "scraper": "apiritif/requests:2.32"
}
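Because the response shape stays the same no matter which provider ran the request, downstream code can parse it once. A minimal sketch of consuming the fields shown above — the `summarize` helper is illustrative, not an official client:

```python
# Sketch of consuming the unified response shape. Field names are
# taken from the example response above; the helper is hypothetical.
sample = {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "status_code": 200,
    "url": "https://example.com",
    "content": "<!doctype html>...",
    "headers": {"content-type": "text/html; charset=UTF-8"},
    "scraper": "apiritif/requests:2.32",
}

def summarize(resp: dict) -> str:
    # The same keys apply regardless of which provider fulfilled it.
    return f"{resp['status_code']} {resp['url']} via {resp['scraper']}"

print(summarize(sample))
# → 200 https://example.com via apiritif/requests:2.32
```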

Want to learn more? Read the documentation

Simple Pricing

$0
/ month

Pay-as-you-go. See detailed per-request pricing for each scraper.

  • No minimums
  • No subscriptions
  • One consolidated bill
  • Real-time cost tracking
Sign up

FAQ

What does ScrapeRouter actually do?

It puts one API between your code and the complexity of scraping routes. You send one request to /api/v1/scrape/, ScrapeRouter runs the route you choose or chooses one for you, and you get one normalized response shape back.

When should I use it instead of plain requests?

Use plain requests when the target is simple and already works. ScrapeRouter starts to matter when sites change, block datacenter traffic, need browser rendering, or when browser-first defaults get too expensive at production volume.

What does auto mode mean?

In auto mode, ScrapeRouter tries configured scraper and proxy combinations in order and stops when one succeeds. The point is to keep route escalation out of your app while keeping the response schema stable.
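Conceptually, auto mode behaves like an ordered fallback loop. The sketch below illustrates the idea only — the route names and the `attempt` function are hypothetical stand-ins, not ScrapeRouter internals:

```python
# Illustrative sketch of ordered fallback. Route names and attempt()
# are hypothetical; ScrapeRouter's real routing is server-side.
ROUTES = ["cheap-datacenter", "residential-proxy", "headless-browser"]

def attempt(route: str, url: str) -> bool:
    # Stand-in for a real scrape attempt; pretend only the
    # headless browser works for this target.
    return route == "headless-browser"

def scrape_auto(url: str) -> str:
    tried = []
    for route in ROUTES:
        tried.append(route)
        if attempt(route, url):
            return route  # stop at the first route that succeeds
    raise RuntimeError(f"all routes failed: {tried}")

print(scrape_auto("https://example.com"))
# → headless-browser
```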

Can I still force a specific scraper or proxy?

Yes. If you already know the route you want, you can send an explicit scraper and proxy configuration. That is useful when you want tighter control over behavior and cost, or when a free check already showed you a workable starting route.
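Forcing a route could look like the sketch below. The `scraper` field appears in the quick start with value "auto"; the concrete value used here and the `proxy` field are assumptions for illustration, so check the documentation for the exact accepted names:

```python
# Hypothetical explicit route. "scraper" is shown in the quick start
# with value "auto"; the value below and the "proxy" field are
# assumptions for illustration only.
payload = {
    "url": "https://example.com",
    "scraper": "requests",   # assumed value; "auto" is the documented default
    "proxy": "datacenter",   # assumed field name
}

# You would send this with the same call as in the quick start, e.g.:
# requests.post("https://www.scraperouter.com/api/v1/scrape/",
#               headers={"Authorization": "Api-Key {your_api_key}"},
#               json=payload)
print(payload["scraper"], payload["proxy"])
# → requests datacenter
```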

What is the free URL check for?

It is the fastest way to answer one practical question before integration: how should this URL be scraped? The report shows you which routes look workable, which look blocked, and what to try next in the API.

Is this a done-for-you scraping service?

No. ScrapeRouter is a developer tool for technical teams. It helps you run and route scraping requests through one integration, but it is not a custom scraper agency or managed data collection service.

How does pricing work?

ScrapeRouter is pay-as-you-go and credit-based. Cost depends on the route used for the request, so simpler targets do not carry browser and premium proxy costs they do not need. Paid check reports and API usage stay inside the same billing model.

What happens if the target changes later?

That is the main reason ScrapeRouter exists. The route can change underneath while your downstream integration keeps the same request and response contract, so you are not rewriting the rest of your pipeline every time a site changes its defenses.