Automated crawlers such as WebCopy can scan websites substantially faster than a human using a browser. This can result in small servers being overloaded, or in server administrators putting automated blocks in place for clients that pull too much data at once. WebCopy includes some basic limit settings that you can enable to comply with the rules of remote hosts.

Disabling limits

To disable all limits and let the crawler run at maximum speed

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Do not use limits option

Limiting to specific requests per second

To limit the number of URLs that can be processed per second

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Limit to requests per second option
  3. Enter the maximum number of requests WebCopy is allowed to perform in the Maximum requests per second field
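
Conceptually, a requests-per-second limit enforces a minimum gap between consecutive requests. The following Python sketch illustrates the general idea of client-side throttling; it is an illustrative assumption, not WebCopy's internal implementation, and the RequestThrottle name is hypothetical.

    import time

    class RequestThrottle:
        # Enforces a minimum gap between consecutive requests, which caps
        # the average request rate at max_per_second.
        def __init__(self, max_per_second):
            self.interval = 1.0 / max_per_second  # minimum gap, in seconds
            self.last_request = float("-inf")     # time of the previous request

        def wait(self):
            # Sleep until at least `interval` seconds have passed since the
            # previous request, then record the time of this one.
            remaining = self.interval - (time.monotonic() - self.last_request)
            if remaining > 0:
                time.sleep(remaining)
            self.last_request = time.monotonic()

    throttle = RequestThrottle(max_per_second=2)
    for url in ("http://example.com/a", "http://example.com/b"):
        throttle.wait()
        print("fetching", url)  # the actual HTTP request would go here

With max_per_second set to 2, each wait() call guarantees at least half a second between fetches, which is the behaviour a Maximum requests per second value of 2 aims for.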

Limiting to specific requests per minute

To limit the number of URLs that can be processed per minute

  1. From the Project Properties dialogue, select the Speed Limits category
  2. Select the Limit to requests per minute option
  3. Enter the maximum number of requests WebCopy is allowed to perform in the Maximum requests per minute field
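
A per-minute limit is useful when the desired rate is lower than one request per second. For example, a limit of 20 requests per minute averages one request every three seconds, a rate the per-second option cannot express.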

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features
