Examples
A scraping API usually gives you two ways in: raw HTTP or an SDK.
```python
from scraperouter import ScrapeRouter

client = ScrapeRouter(api_key="sr_live_xxx")

result = client.scrape(
    url="https://example.com",
    render_js=True,
    country="us"
)

print(result.status_code)
print(result.html[:200])
```
Without an SDK, you do the same thing more manually:
```python
import requests

response = requests.post(
    "https://api.scraperouter.com/v1/scrape",
    headers={"Authorization": "Bearer sr_live_xxx"},
    json={
        "url": "https://example.com",
        "render_js": True,
        "country": "us"
    },
    timeout=60
)

print(response.status_code)
print(response.json()["html"][:200])
```
The second version is fine. The problem is that in production, you usually end up repeating auth, timeout handling, retries, error mapping, pagination helpers, and other small request-shaping details all over your codebase.
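What that repetition usually collapses into is a thin in-house wrapper. Here is a minimal sketch of one, reusing the endpoint and fields from the raw-HTTP example above; the retry counts and backoff values are illustrative, and the `session` parameter is there so the wrapper can be exercised without a live network call.

```python
import time
import requests

API_URL = "https://api.scraperouter.com/v1/scrape"  # endpoint from the raw-HTTP example


def scrape(url, api_key, render_js=False, country=None,
           retries=3, timeout=60, session=None):
    """One place for auth, timeouts, and retries instead of repeating them per call.

    `session` is injectable so tests can pass a stub instead of hitting the network.
    """
    http = session or requests.Session()
    payload = {"url": url, "render_js": render_js}
    if country:
        payload["country"] = country

    last_error = None
    for attempt in range(retries):
        try:
            response = http.post(
                API_URL,
                headers={"Authorization": f"Bearer {api_key}"},
                json=payload,
                timeout=timeout,
            )
            if response.status_code == 429:  # rate limited: back off, then retry
                time.sleep(2 ** attempt)
                continue
            response.raise_for_status()
            return response.json()
        except requests.RequestException as exc:
            last_error = exc
            time.sleep(2 ** attempt)
    raise RuntimeError(f"scrape failed after {retries} attempts") from last_error
```

Once every job calls this one function, changing a timeout or a retry policy is a one-line edit instead of a codebase-wide hunt.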
Practical tips
- Use the SDK if it saves boilerplate in a language your team already uses. Don't add one more dependency just because a homepage says "developer friendly."
- Check what the SDK actually does: auth handling, retries, timeouts, pagination, typed responses, async support.
- Read the raw API docs anyway. When something breaks in production, you usually end up debugging the underlying HTTP request.
- Watch for version drift: SDKs sometimes lag behind the API, especially for newer parameters.
- If you're doing scraping at any real volume, test how the SDK exposes failures: rate limits, target blocks, browser timeouts, malformed responses.
- For internal tooling, an SDK can keep things cleaner. For very custom pipelines, raw HTTP is sometimes simpler and easier to control.
- With ScrapeRouter, the useful part of an SDK is not "less code" by itself. It's having one client interface while the routing logic, provider differences, and failover stay behind the API instead of leaking into your app.
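To make the failure-testing point concrete: whatever client you use, it helps to map raw responses into a small set of failure modes your pipeline understands. The exception classes and status-code mapping below are a hypothetical sketch, not part of any real SDK.

```python
class ScrapeError(Exception):
    """Base class for scrape failures (hypothetical; not from a real SDK)."""

class RateLimited(ScrapeError):
    """Provider or target rate limit hit; retry with backoff."""

class TargetBlocked(ScrapeError):
    """Target site refused or blocked the request."""

class RenderTimeout(ScrapeError):
    """Headless browser failed to render in time."""


def classify_failure(status_code, body):
    """Map an HTTP response to one of the failure modes above.

    The status codes and the "error" field are illustrative assumptions.
    """
    if status_code == 429:
        return RateLimited("rate limit hit")
    if status_code == 403:
        return TargetBlocked("target blocked the request")
    if status_code == 504 or body.get("error") == "browser_timeout":
        return RenderTimeout("browser timed out")
    return ScrapeError(f"unexpected response: {status_code}")
```

A good SDK does this classification for you; if it only surfaces a generic exception with a string message, you end up writing this layer yourself anyway.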
Use cases
- Faster integration: get a scraper working without hand-writing request objects and auth on day one.
- Team consistency: give everyone the same client methods instead of five slightly different homemade wrappers.
- Typed app code: in Python, TypeScript, or Go, SDKs can make responses and options less error-prone.
- Operational cleanup: centralize retries, timeouts, headers, and error handling instead of duplicating them across jobs.
- Multi-provider abstraction: if you're using a router layer like ScrapeRouter, an SDK can keep your application code stable even when the provider underneath changes.
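The last point can be sketched as an interface boundary: application code depends on one `scrape(url)` shape, and the provider behind it can change without touching the callers. All names here are illustrative.

```python
from typing import Protocol


class Scraper(Protocol):
    """The one interface application code depends on (illustrative)."""
    def scrape(self, url: str) -> str: ...


class ProviderA:
    def scrape(self, url: str) -> str:
        return f"<html>fetched {url} via provider A</html>"


class ProviderB:
    def scrape(self, url: str) -> str:
        return f"<html>fetched {url} via provider B</html>"


def run_job(client: Scraper, urls: list[str]) -> list[str]:
    # Application code never names a provider; swapping ProviderA for
    # ProviderB (or putting a router in front of both) changes nothing here.
    return [client.scrape(u) for u in urls]
```

This is the stability the bullet describes: the routing and provider differences live behind the interface, so a provider change is a wiring change, not an application change.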