Examples
A browser app running on https://app.example.com tries to fetch data from https://target-site.com:
fetch("https://target-site.com/api/data")
  .then(r => r.json())
  .then(console.log)
  .catch(console.error)
If target-site.com does not return the right CORS headers, the browser blocks your code from reading the response, even though the server technically answered.
A typical response header that allows it might look like this:
Access-Control-Allow-Origin: https://app.example.com
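The check the browser performs is simple string comparison against the page's origin. As a rough sketch (this is an illustration of the rule, not a real browser API; `origin_allowed` is a hypothetical helper):

```python
# Sketch of the browser-side check: compare the page's origin against the
# Access-Control-Allow-Origin value that came back in the response.
def origin_allowed(page_origin, allow_origin_header):
    if allow_origin_header is None:
        return False  # header missing: the response is blocked from script
    if allow_origin_header == "*":
        return True   # wildcard (not permitted for credentialed requests)
    return allow_origin_header == page_origin  # exact match, no patterns

print(origin_allowed("https://app.example.com", "https://app.example.com"))   # True
print(origin_allowed("https://other.example.com", "https://app.example.com")) # False
```

Note there is no wildcard matching of subdomains: `https://app.example.com` and `https://other.example.com` are simply different strings, so different origins.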
For scraping, this is why code that fails in the browser often works fine once moved server-side:
import requests

# Same request, made server-side: no browser involved, so no CORS enforcement
r = requests.get("https://target-site.com/api/data")
print(r.status_code)
print(r.text[:200])
That request never goes through a browser, so nothing enforces CORS and it is usually not the blocker there.
Practical tips
- Do not confuse CORS with blocking: a CORS error is a browser-enforced rule, not proof that the target site is rejecting your scraper.
- If you're scraping from frontend JavaScript: expect CORS problems, preflight requests, and inconsistent behavior across endpoints.
- If you control the target API: return the right headers, for example Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers.
- If you do not control the target site: move the request to your backend or a scraping API instead of trying to hack around browser restrictions.
- Watch for preflight: some requests trigger an OPTIONS request before the real one, usually because of custom headers, credentials, or non-simple methods.
- For scraping in production: keeping requests server-side is usually the sane path. It avoids CORS entirely and gives you proper control over retries, proxies, headers, and session handling.
- With ScrapeRouter: this is one of those problems you usually sidestep by not scraping directly from the browser in the first place.
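The preflight rules above can be sketched as a small predicate. This follows the "simple request" conditions from the CORS rules (safelisted methods and headers); `triggers_preflight` is a hypothetical helper for illustration, not a library function, and it skips some finer value restrictions the spec adds:

```python
# Methods and headers a browser will send cross-origin WITHOUT a preflight.
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def triggers_preflight(method, headers):
    """Return True if the browser would send an OPTIONS preflight first."""
    if method.upper() not in SIMPLE_METHODS:
        return True  # e.g. PUT, DELETE, PATCH
    for name, value in headers.items():
        if name.lower() not in SIMPLE_HEADERS:
            return True  # custom header, e.g. Authorization or X-Api-Key
        if name.lower() == "content-type":
            if value.split(";")[0].strip().lower() not in SIMPLE_CONTENT_TYPES:
                return True  # application/json is NOT a simple content type
    return False

print(triggers_preflight("GET", {}))                                     # False
print(triggers_preflight("POST", {"Content-Type": "application/json"}))  # True
print(triggers_preflight("GET", {"Authorization": "Bearer x"}))          # True
```

This is why a plain GET works cross-origin but the same endpoint "breaks" as soon as you add an API-key header or send JSON: the request stops being simple and the browser inserts an OPTIONS round trip the server must also answer correctly.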
Use cases
- Frontend app calling its own API: normal CORS setup, predictable if you control both sides.
- Browser-based scraper or extension: CORS becomes a real constraint, especially when hitting third-party APIs or website endpoints directly.
- Server-side scraper: CORS usually does not matter, but people still waste time debugging it because the same request failed earlier in browser code.
- Hybrid scraping stack: browser for rendering, backend for extraction and routing. This is usually where teams stop fighting CORS and start solving the actual problem.
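The "move it to your backend" pattern can be sketched in a few lines: the browser calls your own server, and your server fetches the third-party endpoint. This is a minimal stdlib sketch, not production code; the `/proxy/data` path and the target URL are illustrative assumptions:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative third-party endpoint (assumption, reusing the example above).
TARGET = "https://target-site.com/api/data"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/proxy/data":
            self.send_error(404)
            return
        # Server-to-server hop: the browser's CORS rules do not apply here.
        with urllib.request.urlopen(TARGET, timeout=10) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Your backend only needs to allow YOUR frontend, which you control.
        self.send_header("Access-Control-Allow-Origin", "https://app.example.com")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

The frontend now fetches `https://your-backend/proxy/data`, CORS is a one-line header you own, and retries, proxies, and session handling all live server-side where they belong.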