What the check tests
It compares scraper and proxy combinations against the same URL so you can see which route actually clears.
Free feasibility check
Paste a URL to see what works, what gets blocked, and the cheapest route before you build.
Check scrapeability, compare working routes, and see what signals are likely to block plain HTTP.
Failed routes, challenge pages, and incomplete responses show where plain HTTP stops working.
WAF, CAPTCHA, CDN, and antibot detections help explain why a route fails and when to escalate.
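As a rough illustration of how those failure signals can be read (this is a generic heuristic, not the checker's actual detection logic; the status codes and marker strings are assumptions), a plain-HTTP response might be screened like so:

```python
# Heuristic screen for blocked, challenge, or incomplete responses after a
# plain-HTTP fetch. The markers below are common anti-bot fingerprints; they
# are illustrative, not the feasibility checker's real detection list.
CHALLENGE_MARKERS = ("captcha", "cf-challenge", "attention required", "access denied")

def classify_response(status_code: int, body: str) -> str:
    """Label a response as ok, blocked, challenge, or incomplete (rough heuristic)."""
    if status_code in (403, 429):
        return "blocked"          # hard refusal or rate limit
    lowered = body.lower()
    if any(marker in lowered for marker in CHALLENGE_MARKERS):
        return "challenge"        # page served, but it is an interstitial
    if status_code == 200 and len(body) < 500:
        return "incomplete"       # suspiciously thin 200, likely JS-rendered
    return "ok"
```

A route whose responses keep classifying as "challenge" or "incomplete" is exactly the case where plain HTTP has stopped working.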
The report is built to hand you a starting payload for /api/v1/scrape/ instead of just saying yes or no.
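A starting payload might look like the sketch below. The field names (`scraper`, `proxy_tier`, the `options` keys) and their values are assumptions for illustration only; take the actual names and the chosen route from the report itself.

```python
import json

# Hypothetical starting payload for /api/v1/scrape/, shaped the way a
# feasibility report might suggest. Field names are illustrative assumptions,
# not the documented schema.
payload = {
    "url": "https://example.com/product/42",
    "scraper": "http",           # cheapest route the report found working
    "proxy_tier": "datacenter",  # escalate to residential only if needed
    "options": {"timeout": 30},
}

print(json.dumps(payload, indent=2))
```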
Validate scrapeability before you choose a scraper, proxy tier, or cost model.
Compare the route that broke against alternatives instead of guessing whether the problem is HTTP, proxy, or browser execution.
Pick the cheapest working route first, then escalate only if the report shows you need to.
The end state is not “interesting report.” It is a route you can send to the API with less guesswork.
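The "cheapest working route first" rule reduces to a simple selection over the report's results. The route names and per-request costs below are made up for illustration:

```python
# Pick the cheapest route that the feasibility report marked as working.
# Route names and relative costs are illustrative, not real pricing.
report = [
    {"route": "http, no proxy",           "works": False, "cost": 0},
    {"route": "http + datacenter proxy",  "works": True,  "cost": 1},
    {"route": "http + residential proxy", "works": True,  "cost": 5},
    {"route": "browser + residential",    "works": True,  "cost": 20},
]

working = [r for r in report if r["works"]]
cheapest = min(working, key=lambda r: r["cost"])
print(cheapest["route"])  # send this route to the API first
```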
Clear answers for anyone evaluating scrapeability.
What does the check show?
It shows which scraper and proxy combinations can fetch the URL, which ones get blocked, what protection signals appear, and what the likely cost looks like before you integrate.
What should I look for in the results?
Look for failed combinations, repeated 403 or challenge responses, CAPTCHA or WAF detections, and cases where only a browser-capable route succeeds.
What do the protection signals mean?
They indicate which protections are in front of the page. That helps you decide whether plain HTTP is enough or whether you need browser rendering, different proxy tiers, or a more expensive route.
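One way to read those signals is as an escalation ladder. The mapping below is an illustrative rule of thumb, not this checker's own decision logic, and the route names are assumptions:

```python
# Map detected protection signals to the cheapest route likely to clear them.
# This mapping is a rule of thumb for illustration, not the tool's logic.
def suggest_route(signals: set) -> str:
    if "captcha" in signals:
        return "browser + residential proxy"  # needs real browser execution
    if "waf" in signals or "antibot" in signals:
        return "http + residential proxy"     # better IP reputation may suffice
    if "cdn" in signals:
        return "http + datacenter proxy"      # often passable with plain HTTP
    return "http, no proxy"                   # no protections detected
```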
Can I hand the result straight to the API?
Yes. The report is designed to hand off directly into `/api/v1/scrape/` with the chosen scraper, proxy tier, and any scraper-specific options.