Automated crawlers such as WebCopy can scan websites substantially faster than a human using a browser. This can overload smaller servers, or prompt server administrators to put automated blocks in place for clients that pull too much data at once. WebCopy includes basic speed limit settings that users can enable to comply with the rules of remote hosts.

Disabling limits

To disable all limits and let the crawler run at maximum speed

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Do not use limits option

Limiting to a specific number of requests per second

To allow a maximum number of URLs to be processed per second (see the illustrative sketch after these steps)

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Limit to requests per second option
  3. Enter the maximum number of requests WebCopy is allowed to perform in the Maximum requests per second field
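
As a rough illustration of what this setting does, the sketch below shows one common way a crawler can cap requests within a time window. This is conceptual Python only, not WebCopy's internal code; the RateLimiter class and all of its names are invented for the example.

    # Illustrative only: not WebCopy's implementation. Shows the general
    # technique of capping requests within a sliding time window.
    import time

    class RateLimiter:
        """Allow at most max_requests calls per window seconds."""

        def __init__(self, max_requests, window=1.0):
            self.max_requests = max_requests
            self.window = window
            self.timestamps = []  # start times of recent requests

        def wait(self):
            now = time.monotonic()
            # Drop request times that have aged out of the window.
            self.timestamps = [t for t in self.timestamps
                               if now - t < self.window]
            if len(self.timestamps) >= self.max_requests:
                # Sleep until the oldest request leaves the window.
                time.sleep(self.window - (now - self.timestamps[0]))
            self.timestamps.append(time.monotonic())

    # Example: fetch URLs at no more than 2 requests per second.
    limiter = RateLimiter(max_requests=2, window=1.0)
    for url in ("https://example.com/a", "https://example.com/b"):
        limiter.wait()
        # ... perform the HTTP request for url here ...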

Limiting to a specific number of requests per minute

To allow a maximum number of URLs to be processed per minute

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Limit to requests per minute option
  3. Enter the maximum number of requests WebCopy is allowed to perform in the Maximum requests per minute field
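
In terms of the sketch above, the per-minute option is the same mechanism with a sixty-second window, for example RateLimiter(max_requests=30, window=60.0). Note that a per-minute cap permits short bursts of requests that a per-second cap would smooth out.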

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features
