Examples
A simple API call usually looks like this:
curl "https://api.example.com/products/123" \
  -H "Authorization: Bearer YOUR_TOKEN"
And the response is typically something structured and predictable:
{
  "id": 123,
  "name": "Running Shoes",
  "price": 79.99,
  "in_stock": true
}
In Python:
import requests

response = requests.get(
    "https://api.example.com/products/123",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
In scraping, people often wish a site had an API because it's cleaner than parsing fragile HTML. When there is no usable API, tools like ScrapeRouter give you an API for the messy part: fetching pages, handling proxies, browsers, retries, and blocks.
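To make that concrete, here is a minimal sketch of what calling a scraping API looks like. The endpoint and parameter names below are hypothetical (check your provider's docs for the real ones); the point is that fetching a page through such a service is just another ordinary HTTP API call.

```python
import requests

# Hypothetical scraping-API endpoint and parameters -- the real names
# depend on the provider you use.
API_URL = "https://api.scraperouter.example/fetch"
params = {
    "url": "https://shop.example.com/products/123",  # the page you actually want
    "render": "true",  # ask the service to render with a headless browser
}

# Build the request without sending it, just to show its shape: the
# target page is passed as a query parameter like any other API input.
prepared = requests.Request("GET", API_URL, params=params).prepare()
print(prepared.url)
```

From the caller's side, proxies, browsers, retries, and blocks all disappear behind that one endpoint.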
Practical tips
- Prefer APIs over HTML scraping when a real API exists: less brittle, cheaper to maintain, easier to parse.
- Read the docs, but verify behavior: plenty of APIs have edge cases, bad pagination, odd rate limits, or fields that quietly disappear.
- Handle failures like they are normal: timeouts, 429s, 5xxs, auth expiry.
- Version matters: if an API has versions like /v1/ and /v2/, pin to one instead of assuming it won't change.
- Watch rate limits early: what works in testing often falls over in production.
- Log response status and latency so you can tell the difference between your bug and their outage.
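The last tip is cheap to implement. Here is one way to sketch it: a small wrapper (the function name and log format are my own, not from any particular library) that records status code and latency for every call, so a slow or failing dependency shows up in your logs instead of in a guessing session.

```python
import logging
import time

import requests

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api-client")


def timed_get(url, **kwargs):
    """GET a URL, logging status code and latency for later debugging."""
    kwargs.setdefault("timeout", 30)
    start = time.monotonic()
    response = requests.get(url, **kwargs)
    elapsed_ms = (time.monotonic() - start) * 1000
    # One line per call: enough to tell "their outage" from "your bug".
    logger.info("GET %s -> %s in %.0f ms", url, response.status_code, elapsed_ms)
    return response
```

In production you would point this at structured logging or metrics, but even a plain log line answers the first question an incident raises: was the API slow, failing, or fine?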
A basic pattern:
import requests
import time

url = "https://api.example.com/products"
headers = {"Authorization": "Bearer YOUR_TOKEN"}

data = None
for attempt in range(3):
    response = requests.get(url, headers=headers, timeout=30)
    if response.status_code == 429:
        # Rate limited: back off exponentially (1s, 2s, 4s) and retry.
        time.sleep(2 ** attempt)
        continue
    response.raise_for_status()
    data = response.json()
    break
else:
    # All three attempts were rate limited; don't fail silently.
    raise RuntimeError("rate limited on every attempt")
If you're scraping at scale, the same rule applies: the first request is easy, the 100,000th request is where the real API design and operational pain start to matter.
Use cases
- Internal services: one backend service asks another for user data, pricing, inventory, or job status.
- Third-party integrations: payments, maps, email delivery, CRM sync, analytics.
- Data access: pulling product catalogs, market data, or platform records in a structured way.
- Web scraping infrastructure: instead of managing proxies and browsers yourself, you call a scraping API and get the page or extracted data back.
- Automation: scripts and jobs that need a reliable way to trigger actions in another system.
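For the automation case, the "trigger an action" call is usually just a POST. The endpoint and payload below are hypothetical placeholders, not any real system's API; the sketch only builds the request (without sending it) to show its shape.

```python
import requests

# Hypothetical job-trigger endpoint and payload -- substitute the real
# system's API and token.
request = requests.Request(
    "POST",
    "https://api.example.com/jobs",
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    json={"task": "refresh-catalog"},
).prepare()

print(request.method, request.url)
```

The same authentication, retry, and logging habits from the tips above apply to these write-style calls, with one extra caveat: retrying a POST can trigger the action twice, so check whether the API supports idempotency keys.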
For scraping teams, an API is often the line between a quick demo and something you can actually run every day without babysitting.