Glossary

CORS

CORS, short for Cross-Origin Resource Sharing, is a browser security mechanism that controls whether JavaScript running on one origin can read responses from requests made to another origin. It matters a lot if you're scraping from frontend code, but it is not enforced for server-side scrapers, which is why people often hit it in the browser and then overcomplicate the fix.

Examples

A browser app running on https://app.example.com tries to fetch data from https://target-site.com:

fetch("https://target-site.com/api/data")
  .then(r => r.json())
  .then(console.log)
  .catch(console.error)

If target-site.com does not return the right CORS headers, the browser blocks the response even if the server technically answered.

A typical response header that allows it might look like this:

Access-Control-Allow-Origin: https://app.example.com
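One common way a server produces that header is to echo back an allowlisted Origin. A minimal sketch, where the allowlist contents and the function name are illustrative:

```python
# Hypothetical allowlist of origins this server trusts.
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Return CORS response headers for a given Origin, or {} if denied."""
    if request_origin in ALLOWED_ORIGINS:
        # Echo back the specific origin rather than "*": browsers reject
        # a "*" value when the request carries credentials.
        return {
            "Access-Control-Allow-Origin": request_origin,
            # Tell caches not to reuse this response across other origins.
            "Vary": "Origin",
        }
    return {}
```

An unlisted origin simply gets no CORS headers back, and the browser then refuses to expose the response to the page.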

For scraping, this is why code that fails in the browser often works fine once moved server-side:

import requests

r = requests.get("https://target-site.com/api/data")
print(r.status_code)
print(r.text[:200])

That request never passes through a browser, so CORS is not enforced on it and is usually not the blocker there.
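In practice, "move the request to your backend" often means a small same-origin proxy that the frontend calls instead of the target site. A standard-library sketch, assuming a hypothetical /proxy/data path and the same example target URL:

```python
# Sketch of a tiny same-origin proxy; the /proxy/data path is illustrative.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://target-site.com/api/data"

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/proxy/data":
            self.send_error(404)
            return
        # Server-to-server request: no browser involved, so no CORS check.
        with urllib.request.urlopen(UPSTREAM, timeout=10) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("127.0.0.1", 8000), ProxyHandler).serve_forever()
```

The frontend then calls its own origin (fetch("/proxy/data")), so no CORS headers are needed from target-site.com at all.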

Practical tips

  • Do not confuse CORS with blocking: CORS is a browser rule, not proof that the target site is rejecting your scraper.
  • If you're scraping from frontend JavaScript: expect CORS problems, preflight requests, and inconsistent behavior across endpoints.
  • If you control the target API: return the right headers, e.g. Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers.
  • If you do not control the target site: move the request to your backend or a scraping API instead of trying to hack around browser restrictions.
  • Watch for preflight: some requests trigger an OPTIONS request before the real one, usually because of custom headers, non-simple methods, or non-simple content types such as application/json.
  • For scraping in production: keeping requests server-side is usually the sane path. It avoids CORS entirely and gives you proper control over retries, proxies, headers, and session handling.
  • With ScrapeRouter: this is one of those problems you usually sidestep by not scraping directly from the browser in the first place.
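The preflight rule of thumb above can be sketched as a predicate. This is a simplification of the Fetch spec's "simple request" definition, and the function name is illustrative:

```python
# Simplified preflight check: browsers skip the OPTIONS preflight only for
# "simple" requests (per the Fetch spec, reduced here to the common cases).
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, headers):
    """headers: dict mapping request header names to values."""
    if method.upper() not in SIMPLE_METHODS:
        return True  # e.g. PUT, DELETE, PATCH
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True  # custom header, e.g. Authorization or X-Api-Key
        if name.lower() == "content-type":
            media_type = value.split(";")[0].strip().lower()
            if media_type not in SIMPLE_CONTENT_TYPES:
                return True  # e.g. application/json
    return False
```

This is why a plain GET sails through while a POST with a JSON body or an Authorization header suddenly produces an extra OPTIONS request in the network tab.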

Use cases

  • Frontend app calling its own API: normal CORS setup, predictable if you control both sides.
  • Browser-based scraper or extension: CORS becomes a real constraint, especially when hitting third-party APIs or website endpoints directly.
  • Server-side scraper: CORS usually does not matter, but people still waste time debugging it because the same request failed earlier in browser code.
  • Hybrid scraping stack: browser for rendering, backend for extraction and routing. This is usually where teams stop fighting CORS and start solving the actual problem.

Related terms

  • Same-Origin Policy
  • Preflight Request
  • Proxy
  • Headless Browser
  • Web Scraping API
  • Rate Limiting