Examples
A basic scraping setup with a forward proxy looks like this:
- Your scraper sends a request to the proxy
- The proxy sends that request to the target site
- The target site responds to the proxy
- The proxy returns the response to your scraper
Using a forward proxy with requests in Python:
import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}
resp = requests.get(
    "https://httpbin.org/ip",
    proxies=proxies,
    timeout=30,
)
print(resp.text)
Same idea with curl:
curl -x http://user:pass@proxy.example.com:8000 https://httpbin.org/ip
If you're using ScrapeRouter, you generally don't wire up raw forward proxies yourself. You call one endpoint and ScrapeRouter handles proxy selection, rotation, and fallback behind the scenes:
curl -X POST https://www.scraperouter.com/api/v1/scrape/ \
  -H "Authorization: Api-Key $api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://httpbin.org/ip"
  }'
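The same call can be made from Python with requests. A minimal sketch using the endpoint and payload from the curl example above; the request is only prepared here, not sent, so you can inspect the headers and body without an API key (the key value is a placeholder):

```python
import requests

API_KEY = "your-api-key"  # placeholder, not a real key

# Build the same request as the curl example. Preparing it (rather
# than sending it) shows exactly what would go over the wire.
req = requests.Request(
    "POST",
    "https://www.scraperouter.com/api/v1/scrape/",
    headers={"Authorization": f"Api-Key {API_KEY}"},
    json={"url": "https://httpbin.org/ip"},
)
prepared = req.prepare()

print(prepared.headers["Content-Type"])  # set automatically for json=
print(prepared.body)

# To actually send it:
# resp = requests.Session().send(prepared, timeout=30)
# print(resp.json())
```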
Practical tips
- A forward proxy hides your origin IP, not your whole fingerprint: sites still look at TLS behavior, headers, cookies, browser signals, request rate, and session consistency
- Datacenter proxies are cheaper but easier to block: residential and mobile proxies cost more, but they survive longer on harder targets
- HTTP and HTTPS support matters: a proxy that works for basic HTTP requests can still fail on CONNECT tunnels, TLS-heavy sites, or browser traffic
- Session handling matters more than people think: if login, cart state, or anti-bot checks depend on one IP staying consistent, random rotation will break your flow
- One proxy provider is a dependency, not a strategy: when routes degrade, blocks spike, or a region goes bad, you need fallback options or a router layer
- Measure success rate, latency, and cost together: the cheapest proxy pool gets expensive fast if retries explode and engineers have to babysit it
- Test against the actual target, not just IP-check tools: this exercises the real site:
import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:8000",
    "https": "http://user:pass@proxy.example.com:8000",
}
resp = requests.get("https://target-site.example.com/products", proxies=proxies, timeout=30)
print(resp.status_code)
print(resp.text[:200])
and this only tells you the proxy exists:
curl -x http://user:pass@proxy.example.com:8000 https://httpbin.org/ip
- Expect forward proxies to fail in production: dead IPs, burned subnets, bad geolocation, slow exits, auth errors, and provider outages are normal, not edge cases
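The session-consistency and fallback points above can be sketched as a small router. This is an illustrative sketch, not a library API: the proxy URLs are placeholders, and the request function is injected so the routing logic runs without a network:

```python
import hashlib

class ProxyRouter:
    """Pick proxies with sticky sessions and fall back on failure.

    Sketch only: a real router would also track health, latency,
    and cost per pool, and revive proxies after a cooldown.
    """

    def __init__(self, proxies):
        self.proxies = list(proxies)
        self.dead = set()

    def pick(self, session_id=None):
        """Return a live proxy; the same session_id maps to the same proxy."""
        live = [p for p in self.proxies if p not in self.dead]
        if not live:
            raise RuntimeError("no live proxies left")
        if session_id is None:
            return live[0]
        # Sticky: hash the session id onto the live pool so login,
        # cart state, and anti-bot checks see a consistent IP.
        idx = int(hashlib.sha256(session_id.encode()).hexdigest(), 16) % len(live)
        return live[idx]

    def fetch(self, url, do_request, session_id=None, retries=3):
        """Try up to `retries` proxies, marking failed ones dead."""
        last_err = None
        for _ in range(retries):
            proxy = self.pick(session_id)
            try:
                return do_request(url, proxy)
            except Exception as err:
                self.dead.add(proxy)
                last_err = err
        raise RuntimeError(f"all retries failed: {last_err}")


# Usage with a fake request function (no network needed):
pool = ["http://proxy-a:8000", "http://proxy-b:8000", "http://proxy-c:8000"]
router = ProxyRouter(pool)

def flaky(url, proxy):
    if proxy == "http://proxy-a:8000":
        raise ConnectionError("burned subnet")
    return f"200 via {proxy}"

print(router.fetch("https://example.com", flaky))
```

The design choice worth noting is the injected `do_request`: because the router never imports an HTTP client, the fallback behavior can be unit-tested with a fake, and the same logic works for requests, curl subprocesses, or a headless browser.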
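Measuring success rate, latency, and cost together can be as simple as aggregating per-request records. A minimal sketch with made-up numbers, to show why a cheap pool with a low success rate costs more per delivered page than the sticker price suggests:

```python
def pool_stats(results):
    """Summarize per-request records of (ok, latency_s, cost_usd).

    Effective cost per successful page counts every attempt,
    including the retries that failed.
    """
    successes = [r for r in results if r[0]]
    return {
        "success_rate": len(successes) / len(results),
        "avg_latency_s": round(sum(r[1] for r in successes) / len(successes), 2),
        "cost_per_success_usd": round(sum(r[2] for r in results) / len(successes), 4),
    }


# Hypothetical numbers: a cheap pool that fails half the time
# vs a pricier pool that almost always succeeds.
cheap = [(True, 1.2, 0.001)] * 50 + [(False, 5.0, 0.001)] * 50
premium = [(True, 0.8, 0.004)] * 95 + [(False, 3.0, 0.004)] * 5

print(pool_stats(cheap))    # per-success cost is double the sticker price
print(pool_stats(premium))
```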
Use cases
- Web scraping: rotate outbound IPs, spread request load, and access location-specific content
- Geo-targeted testing: verify pricing, search results, or localized pages from specific countries or cities
- Access control for internal teams or bots: send outbound traffic through a known egress point instead of exposing many origin IPs
- Rate-limit mitigation: avoid hammering a target from one IP, though this only works if the rest of the request pattern is sane
- Centralized outbound routing: apply auth, logging, filtering, or traffic policy in one layer before requests leave your infrastructure
- Production scraping stacks: pair forward proxies with retries, fingerprinting, session management, and routing logic so the system keeps working when one proxy source starts failing
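The rate-limit point above hinges on the request pattern staying sane, which usually means pacing requests per target host. A minimal token-bucket sketch; the rate and capacity values are illustrative, not recommendations, and the clock is injected so the example runs deterministically without sleeping:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Fake clock for a deterministic demo (no real sleeping).
t = [0.0]
bucket = TokenBucket(rate=2, capacity=2, clock=lambda: t[0])

print(bucket.allow())  # True  - burst token 1
print(bucket.allow())  # True  - burst token 2
print(bucket.allow())  # False - bucket empty
t[0] = 0.5             # half a second later: one token refilled
print(bucket.allow())  # True
```

In a scraper, you would keep one bucket per target host, check `allow()` before each request, and sleep briefly when it returns False.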