Glossary

ISP

In scraping, ISP usually means an ISP proxy: an IP address announced by an internet service provider but hosted on server hardware. It sits between datacenter and residential proxies, which is why teams reach for it when datacenter IPs get blocked too easily but residential proxies are too slow, too expensive, or too messy to manage.

Examples

An ISP proxy is usually what you try when plain datacenter IPs work in testing but fall apart once volume goes up.

  • Datacenter proxy: cheap, fast, blocked more often
  • Residential proxy: harder to block, slower, usually more expensive
  • ISP proxy: server speed with a reputation that often looks closer to residential
# Example request sent through an ISP proxy
curl -x http://user:pass@isp-proxy.example:8000 https://httpbin.org/ip

# The same request sent from Python with the requests library
import requests

proxies = {
    "http": "http://user:pass@isp-proxy.example:8000",
    "https": "http://user:pass@isp-proxy.example:8000",
}

r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
print(r.json())

In practice, teams often route targets like this:

  • low-friction pages: datacenter
  • medium-friction pages: ISP
  • high-friction or very sensitive targets: residential or browser-based scraping
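That routing policy can be sketched as a small lookup table. The tier labels and pool names here are illustrative, not any provider's real API:

```python
# Hypothetical routing table: friction tier -> proxy pool name.
# Tier labels and pool names are illustrative placeholders.
ROUTES = {
    "low": "datacenter",
    "medium": "isp",
    "high": "residential",
}

def pick_proxy_pool(friction: str) -> str:
    """Return the proxy pool for a target's friction tier.

    Unknown tiers fall back to the most trusted (residential) pool.
    """
    return ROUTES.get(friction, "residential")
```

The point is less the lookup itself than keeping the policy in one place, so re-tiering a target is a one-line change instead of a hunt through scraper code.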

Practical tips

  • Don't treat ISP proxies as magic. They still get blocked, they still burn, and they still need rotation and retry logic.
  • Use them when the target is sensitive to obvious datacenter ranges but doesn't justify full residential cost.
  • Watch the economics:
      • datacenter is usually cheapest
      • ISP is often the middle ground
      • residential gets expensive fast at scale
  • Test by target, not by marketing label. One site may accept ISP IPs just fine, another may flag them immediately.
  • Measure the things that actually matter:
      • success rate
      • challenge rate
      • cost per successful page
      • latency
      • how often you need to swap providers
  • If you're already juggling multiple proxy types across targets, that's usually where a routing layer starts making sense. ScrapeRouter exists for exactly that kind of mess: picking the right path per request instead of hardwiring one provider and hoping it keeps working.
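The metrics above are cheap to track per proxy pool. A minimal sketch, assuming you log request counts, successes, challenges (CAPTCHAs, 403s, and the like), and total spend per pool:

```python
from dataclasses import dataclass

@dataclass
class ProxyPoolStats:
    """Per-pool counters; field names are illustrative, not a real library."""
    requests: int      # total requests sent through this pool
    successes: int     # requests that returned usable pages
    challenges: int    # CAPTCHAs, JS challenges, 403s, etc.
    total_cost: float  # bandwidth + per-request fees, in your currency

    @property
    def success_rate(self) -> float:
        return self.successes / self.requests if self.requests else 0.0

    @property
    def challenge_rate(self) -> float:
        return self.challenges / self.requests if self.requests else 0.0

    @property
    def cost_per_success(self) -> float:
        # The number that actually decides which pool wins for a target.
        return self.total_cost / self.successes if self.successes else float("inf")
```

Comparing pools on cost per successful page, rather than sticker price per GB, is usually what reveals whether the ISP middle ground is really cheaper for a given target.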

Use cases

  • Retail scraping: product pages where datacenter IPs start getting throttled once concurrency increases
  • SERP collection: workloads that need better trust than datacenter, without paying residential rates for every request
  • Account-light flows: public pages with moderate bot protection, where residential is overkill but raw server IPs die quickly
  • Fallback layer: when a target mostly works on datacenter and only certain routes, geos, or volumes need something less obvious
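The fallback pattern can be sketched as an escalation loop: try the cheapest tier first and move up only when the response looks like a block. Proxy URLs, tier names, and the block-status set below are assumptions for illustration, not a known-good configuration:

```python
import requests

# Illustrative proxy endpoints -- swap in real hosts and credentials.
PROXY_TIERS = [
    ("datacenter", "http://user:pass@dc-proxy.example:8000"),
    ("isp", "http://user:pass@isp-proxy.example:8000"),
]

# Status codes treated as "blocked, escalate to the next tier".
BLOCK_STATUSES = {403, 429}

def should_escalate(status_code: int) -> bool:
    """Decide whether a response looks like a block rather than a real answer."""
    return status_code in BLOCK_STATUSES

def fetch_with_fallback(url: str, timeout: float = 30.0):
    """Try tiers in cost order; escalate on block or network error."""
    last_error = None
    for tier, proxy in PROXY_TIERS:
        try:
            r = requests.get(
                url, proxies={"http": proxy, "https": proxy}, timeout=timeout
            )
            if not should_escalate(r.status_code):
                return tier, r  # which tier succeeded, plus the response
            last_error = f"{tier} blocked ({r.status_code})"
        except requests.RequestException as exc:
            last_error = f"{tier} failed ({exc})"
    raise RuntimeError(f"all proxy tiers exhausted: {last_error}")
```

Returning the winning tier alongside the response is deliberate: logging it per target is exactly the data you need to decide when a route should be promoted or demoted.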

Related terms

  • Datacenter Proxy
  • Residential Proxy
  • Proxy Rotation
  • IP Reputation
  • Rate Limiting
  • Web Scraping API