Before you build anything, answer one question clearly: is this target scrapeable with the route you can afford?
That question is broader than “does a GET request return HTML?” You want to know:
- which route succeeds
- which route gets blocked
- whether the page needs browser execution
- whether WAF, CAPTCHA, or anti-bot tooling is in front of it
- what the likely per-request cost looks like
Start with the cheapest route
Try plain HTTP first. If a low-cost route succeeds and returns the data you need, stop there.
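The cheapest-route check can be sketched in a few lines. This is a minimal illustration using only the standard library; the success heuristic (200 status plus an HTML content type) is an assumption, and real checks may also inspect the body for the data you actually need.

```python
from urllib.request import Request, urlopen

def looks_usable(status_code: int, content_type: str) -> bool:
    # Plain HTTP "succeeded" if we got a 200 with an HTML body.
    return status_code == 200 and "text/html" in content_type.lower()

def try_plain_http(url: str) -> bool:
    # A realistic User-Agent avoids the most trivial blocks.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req, timeout=10) as resp:
        return looks_usable(resp.status, resp.headers.get("Content-Type", ""))
```

If `try_plain_http` returns True, there is usually no reason to pay for proxies or a headless browser.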
Watch for protection signals
403s, challenge pages, browser checks, repeated redirects, and CAPTCHA detections usually mean you need a different proxy tier, a different scraper, or both.
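Those signals are easy to classify mechanically. The marker strings below are illustrative assumptions; real anti-bot and challenge pages vary by vendor, so treat this as a sketch of the classification step, not a complete detector.

```python
from typing import Optional

# Hypothetical marker strings; tune these for the vendors you actually see.
CHALLENGE_MARKERS = (
    "captcha",
    "just a moment",
    "checking your browser",
    "access denied",
)

def protection_signal(status_code: int, body: str, redirects: int = 0) -> Optional[str]:
    """Classify a response into a blocking signal, or None when it looks clean."""
    if status_code in (403, 429):
        return "blocked-status"
    lowered = body.lower()
    if any(marker in lowered for marker in CHALLENGE_MARKERS):
        return "challenge-page"
    if redirects > 5:
        return "redirect-loop"
    return None
```

Any non-None result is the cue to change route rather than retry the same one.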
Escalate only when the report says you need to
Residential routing resolves most blocks on its own. Reserve browser automation for the step after that, when every cheaper route has failed.
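One way to enforce that discipline is a fixed escalation ladder: move up exactly one tier when a route fails. The tier names here are assumptions standing in for whatever proxy and browser stack you use.

```python
from typing import Optional

# Hypothetical tiers, cheapest first; substitute your own stack.
ROUTES = ["plain-http", "datacenter-proxy", "residential-proxy", "browser"]

def next_route(failed_route: str) -> Optional[str]:
    """Escalate one tier at a time; never jump straight to a browser."""
    i = ROUTES.index(failed_route)
    return ROUTES[i + 1] if i + 1 < len(ROUTES) else None
```

When `next_route` returns None, the ladder is exhausted and the target is effectively unscrapeable at your budget.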
Use the report as a handoff
The best preflight check does not just say “yes” or “no.” It gives you the exact route to start with in /api/v1/scrape/.
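In practice the handoff is just a mapping from report fields to request parameters. The field names below (`recommended_route`, `needs_browser`, `proxy_tier`, `render_js`) are assumptions for illustration; adapt them to the actual schema of your report and of /api/v1/scrape/.

```python
def report_to_request(report: dict) -> dict:
    """Turn a preflight report into starting parameters for /api/v1/scrape/.

    Field names here are hypothetical; map them onto your scraper's schema.
    """
    return {
        "url": report["url"],
        "proxy_tier": report.get("recommended_route", "plain-http"),
        "render_js": report.get("needs_browser", False),
    }
```

The point is that nothing is decided at scrape time: the report already chose the route, and the scrape call just replays it.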
Run the free check to test your own URL before you build.