Glossary

Rotating Proxies

Rotating proxies are proxy networks that change the IP address used for outgoing requests, either on every request or on a defined schedule. In scraping, they help reduce bans, rate limits, and captchas, but they do not magically fix bad request patterns, broken sessions, or sloppy scraper behavior.

Examples

A simple rotation setup changes IPs between requests so a target site does not see everything coming from one address.

import requests

# Placeholder credentials and endpoint; substitute your provider's values.
proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}

# Every request goes through the same gateway; a rotating provider
# assigns a different exit IP behind it, so each response from
# httpbin.org/ip can show a different address.
for url in [
    "https://httpbin.org/ip",
    "https://httpbin.org/ip",
    "https://httpbin.org/ip",
]:
    r = requests.get(url, proxies=proxies, timeout=30)
    print(r.text)
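If your provider does not rotate for you, you can rotate client-side by cycling through a pool of proxy endpoints yourself. The sketch below assumes a hypothetical pool of three gateways (the hostnames and credentials are placeholders, not a real provider's format):

```python
import itertools

# Hypothetical proxy endpoints; swap in your provider's real URLs.
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]

# cycle() loops over the pool forever, wrapping back to the start.
_rotation = itertools.cycle(PROXY_POOL)

def next_proxies():
    """Return a requests-style proxies dict, advancing the rotation."""
    proxy = next(_rotation)
    return {"http": proxy, "https": proxy}

# Usage with requests (network call elided in this sketch):
# r = requests.get(url, proxies=next_proxies(), timeout=30)
```

Round-robin is the simplest policy; real setups often weight the pool by recent success rate or drop endpoints that start failing.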

With many providers, the proxy endpoint stays the same and the provider rotates the exit IP behind it.

curl -x http://user:pass@proxy.example.com:8000 https://httpbin.org/ip

If you need a sticky session instead of a fresh IP on every request, some providers let you embed a session key in the proxy username: reusing the key keeps the same exit IP, and changing it rotates.

curl -x "http://user-session123:pass@proxy.example.com:8000" https://target-site.com

Practical tips

  • Do not rotate blindly: if a site expects a login session, cart, or multi-step flow, changing IPs on every request can break the session faster than it avoids blocking.
  • Match proxy type to the target: datacenter proxies are cheaper and faster, residential proxies get blocked less often, and mobile proxies are expensive and mostly reserved for the hardest targets.
  • Rotation is only one layer: you still need sane concurrency, realistic headers, cookie handling, and retry logic.
  • Watch the failure pattern: 403s, 429s, captcha pages, and sudden latency spikes usually tell you more than the provider dashboard does.
  • Cost gets stupid fast: residential rotation works, but a noisy scraper or excessive retries will run up the bandwidth bill quickly.
  • Use sticky sessions when needed: for login flows, pagination, account dashboards, and anything stateful.
  • Use fresh rotation when needed: for broad search result collection, public pages, and high-volume fetches where session continuity does not matter.
  • If you are using ScrapeRouter: this is one of the things you usually do not want to manage yourself. The whole point is avoiding hand-built provider logic, failover rules, and constant proxy tuning.
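The retry-logic tip above can be sketched concretely. The sketch below separates the two decisions that matter: which status codes suggest the exit IP is burned (worth rotating), and how long to wait before retrying. The status set and backoff parameters are illustrative assumptions, not provider rules:

```python
import random

# Statuses that often mean "this IP is burned" rather than "this page
# is gone": assumed set, tune it for your targets.
RETRYABLE = {403, 407, 429, 503}

def should_rotate(status_code):
    """Rotate to a new exit IP only for IP-level blocks, not content errors."""
    return status_code in RETRYABLE

def backoff(attempt, base=1.0, cap=30.0):
    """Exponential backoff with jitter, so retries do not stampede."""
    return min(cap, base * (2 ** attempt)) * random.uniform(0.5, 1.0)

# Sketch of the fetch loop (network call elided; next_proxies() is a
# hypothetical helper that returns a fresh proxies dict each call):
# for attempt in range(5):
#     r = requests.get(url, proxies=next_proxies(), timeout=30)
#     if not should_rotate(r.status_code):
#         break
#     time.sleep(backoff(attempt))
```

Keeping a 404 out of the retryable set matters: a missing page will never succeed from a different IP, and retrying it only burns bandwidth.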

Use cases

  • Large-scale public page scraping: search results, product listings, directory pages, job listings.
  • Avoiding per-IP rate limits: when a site starts slowing down or blocking after too many requests from one address.
  • Geo-targeted collection: using proxies from specific countries or cities to see localized content.
  • Captcha reduction: not elimination, reduction. Rotation helps, but bad browser fingerprints or aggressive request patterns still trigger challenges.
  • Provider failover setups: teams rotate not just IPs, but proxy sources, because one network eventually gets burned on some targets.
  • Session-aware scraping: using sticky IPs for a period, then rotating when the session is done.
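The session-aware pattern above can be sketched as a small helper: one session key per logical session (login flow, paginated crawl), reused for every request in that session, then discarded. The `user-session<key>` username convention mirrors the curl example earlier, but the exact format varies by provider, so check your provider's docs; host and credentials here are placeholders:

```python
import uuid

def sticky_proxy(session_id, user="user", password="pass",
                 host="proxy.example.com", port=8000):
    """Build a sticky-session proxy URL by embedding the session key in
    the username. Reusing the key keeps the same exit IP; a new key
    gets a new one. The exact username format is provider-specific."""
    return f"http://{user}-session{session_id}:{password}@{host}:{port}"

def new_session_id():
    """Fresh key = fresh exit IP. Short random hex is enough here."""
    return uuid.uuid4().hex[:8]

# One sticky IP for a stateful flow, then rotate for the next one:
# sid = new_session_id()
# proxies = {"http": sticky_proxy(sid), "https": sticky_proxy(sid)}
# ... login, paginate, finish ...
# sid = new_session_id()  # next session, next exit IP
```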

Related terms

Residential Proxies, Datacenter Proxies, IP Ban, Rate Limiting, CAPTCHA, Session Persistence, Proxy Pool, Web Scraping API