scraperouter/scrapling-stealthyfetcher:0.4
This scraper is based on an open-source project. We do not profit from it; you pay only for computation and proxy costs.
*Datacenter proxy transfer is included in the request cost. Additional transfer: Residential +$1.80/GB, Mobile +$4.00/GB.
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `url` | string | required | - | Target URL to scrape |
| `scraper` | string | required | - | Scraper identifier |
| `method` | string | optional | GET | HTTP method (GET or POST) |
| `headers` | object | optional | - | HTTP headers dict |
| `cookies` | array[object] | optional | - | List of cookie dicts |
| `data` | any | optional | - | Request body data for POST requests |
| `proxy` | string \| object | optional | datacenter | Proxy type (`datacenter`/`residential`/`mobile`), a proxy URL, or a config object |
| `scraper_options` | object | optional | - | Scraper-specific options (see below) |
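Putting the request fields together, a POST request body might look like the following sketch. The cookie-dict shape (`name`/`value` keys) is an assumption based on common conventions, not something the table above specifies:

```python
# Sketch of a complete request payload using the fields above.
# The cookie-dict keys ("name"/"value") are an assumption, not
# confirmed by the field table.
payload = {
    "url": "https://example.com/login",
    "scraper": "scraperouter/scrapling-stealthyfetcher:0.4",
    "method": "POST",
    "headers": {"Accept-Language": "en-US"},
    "cookies": [{"name": "session", "value": "abc123"}],
    "data": {"user": "demo"},
    "proxy": "residential",
}
```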
The `scraper_options` object accepts the following fields:

| Field | Type | Required | Default |
|---|---|---|---|
| `proxy` | string \| object | optional | - |
| `timeout` | integer \| number | optional | - |
| `addons` | array[any] | optional | - |
| `allow_webgl` | boolean | optional | True |
| `block_images` | boolean | optional | False |
| `block_webrtc` | boolean | optional | False |
| `blocked_domains` | array[any] | optional | - |
| `retries` | integer | optional | - |
| `retry_delay` | integer \| number | optional | - |
| `cookies` | array[any] | optional | - |
| `disable_ads` | boolean | optional | False |
| `disable_resources` | boolean | optional | False |
| `extra_headers` | object | optional | - |
| `geoip` | boolean | optional | False |
| `google_search` | boolean | optional | True |
| `headless` | boolean | optional | True |
| `humanize` | boolean \| number | optional | True |
| `mock_human` | boolean \| number | optional | - |
| `init_script` | string | optional | - |
| `load_dom` | boolean | optional | True |
| `page_actions` | array[any] | optional | - |
| `network_idle` | boolean | optional | False |
| `solve_cloudflare` | boolean | optional | False |
| `wait` | integer \| number | optional | 0 |
| `wait_selector` | string | optional | - |
| `screenshot` | boolean | optional | False |
| `network_requests` | boolean | optional | False |
| `cdp_url` | string | optional | - |
| `real_chrome` | boolean | optional | False |
| `hide_canvas` | boolean | optional | False |
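For instance, an options object that blocks images, attempts Cloudflare solving, and waits for a selector could be assembled like this. This is a sketch built only from the flags above; the selector value is hypothetical, and any omitted key keeps its table default:

```python
# Example scraper_options assembled from the flags in the table above.
scraper_options = {
    "headless": True,                # default True; run the browser headless
    "block_images": True,            # skip image downloads to cut transfer
    "solve_cloudflare": True,        # attempt to pass Cloudflare challenges
    "wait_selector": "div#content",  # hypothetical selector to wait for
    "timeout": 60,
}
```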
The API returns a unified response envelope. The `content` field contains the raw page HTML.
| Field | Type | Description |
|---|---|---|
| `id` | string (uuid) | Unique request identifier |
| `status_code` | integer | HTTP status code |
| `url` | string | Final response URL |
| `headers` | object | Response HTTP headers |
| `content` | string | Page content (HTML) |
| `scraper` | string | Scraper identifier used |
Internally, all scrapers normalize their output to a universal format before the API response is built.
| Field | Type |
|---|---|
| `id` | string |
| `status_code` | integer |
| `final_url` | string |
| `headers` | object |
| `content` | string |
| `cookies` | array \| object |
| `errors` | array[any] |
| `screenshot_url` | string |
| `scraper_data` | object |
| `scraperouter` | ScrapeRouterResponseData |
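As a sketch, the universal format maps onto a Python `TypedDict` like this. Field names and types follow the table above; `scraperouter` and `cookies` are typed loosely because `ScrapeRouterResponseData` is not defined in this document and `cookies` may be an array or an object:

```python
from typing import Any, TypedDict

class UniversalFormat(TypedDict, total=False):
    """Field names and types mirror the universal-format table above."""
    id: str
    status_code: int
    final_url: str
    headers: dict[str, str]
    content: str
    cookies: Any          # array or object
    errors: list[Any]
    screenshot_url: str
    scraper_data: dict[str, Any]
    scraperouter: Any     # ScrapeRouterResponseData (not defined here)

record: UniversalFormat = {
    "id": "abc",
    "status_code": 200,
    "final_url": "https://example.com",
}
```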
The native response from this scraper includes additional fields beyond the universal format.
| Field | Type | Default |
|---|---|---|
| `status` | integer | - |
| `reason` | string | - |
| `url` | string | - |
| `cookies` | array \| object | - |
| `headers` | object | - |
| `request_headers` | object | - |
| `history` | array[string] | - |
| `body` | string | - |
| `encoding` | string | - |
| `har_path` | string | - |
| `screenshot_paths` | array[any] | - |
| `error` | string | - |
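If the native fields are surfaced through the universal `scraper_data` field — an inference from the tables above, not something this document states — reading the redirect history might look like:

```python
# Hypothetical envelope: native fields nested under "scraper_data".
envelope = {
    "status_code": 200,
    "scraper_data": {
        "status": 200,
        "reason": "OK",
        "history": ["http://example.com/", "https://example.com/"],
        "encoding": "utf-8",
    },
}

native = envelope.get("scraper_data", {})
redirects = native.get("history", [])  # array[string] per the table above
```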
A minimal request with curl:

```bash
curl -X POST https://www.scraperouter.com/api/v1/scrape/ \
  -H "Authorization: Api-Key YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "scraper": "scraperouter/scrapling-stealthyfetcher:0.4"}'
```
The same request from Python:

```python
import requests

response = requests.post(
    "https://www.scraperouter.com/api/v1/scrape/",
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json={
        "url": "https://example.com",
        "scraper": "scraperouter/scrapling-stealthyfetcher:0.4",
    },
)
data = response.json()
print(data["content"][:500])
```
A fuller request selecting a residential proxy and passing scraper options (note that in Python the boolean literals are `True`/`False`, not JSON's `true`/`false`):

```python
import requests

response = requests.post(
    "https://www.scraperouter.com/api/v1/scrape/",
    headers={"Authorization": "Api-Key YOUR_API_KEY"},
    json={
        "url": "https://example.com",
        "scraper": "scraperouter/scrapling-stealthyfetcher:0.4",
        "method": "GET",
        "proxy": "residential",
        "scraper_options": {
            "timeout": 30,
            "allow_webgl": True,
            "google_search": True,
        },
    },
)
data = response.json()
print(f"Status: {data['status_code']}")
print(f"URL: {data['url']}")
print(data["content"][:500])
```
Example response:

```json
{
  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status_code": 200,
  "url": "https://example.com",
  "headers": {
    "content-type": "text/html; charset=utf-8"
  },
  "content": "<!DOCTYPE html><html>...</html>",
  "scraper": "scraperouter/scrapling-stealthyfetcher:0.4"
}
```
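Since the envelope always carries `status_code`, a small helper can gate further parsing. This is a sketch; the sample dict mirrors the example response above:

```python
def is_success(envelope: dict) -> bool:
    """True when the scrape envelope reports an HTTP 2xx status."""
    code = envelope.get("status_code", 0)
    return 200 <= code < 300

sample = {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "status_code": 200,
    "content": "<!DOCTYPE html><html>...</html>",
}
```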