Crawling can be left to scan as much of a website as it can access, or you can limit it to crawl only to a certain depth.

Note

Scan depth checks apply only to the main domain being crawled.

How does WebCopy determine depth?

WebCopy determines the depth of a URL from the number of path components it contains, excluding the document name if one is present.

URL                                          Depth
http://www.example.com/                      0
http://www.example.com/index.html            0
http://www.example.com/products/             1
http://www.example.com/products/index.html   1
http://www.example.com/products/webcopy      2
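The rule above can be sketched in Python. This is only an illustration of the counting scheme, not WebCopy's actual implementation; in particular, the assumption that a document name is recognised by the presence of a file extension is a guess on my part.

```python
from urllib.parse import urlparse

def crawl_depth(url):
    """Approximate the depth rule: count path segments, excluding a
    trailing document name (assumed here to be any final segment
    containing a '.' extension)."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    # Drop the final segment if it looks like a document name
    if segments and "." in segments[-1]:
        segments = segments[:-1]
    return len(segments)

# Matches the table above:
# crawl_depth("http://www.example.com/")                    -> 0
# crawl_depth("http://www.example.com/products/index.html") -> 1
# crawl_depth("http://www.example.com/products/webcopy")    -> 2
```

A URL such as `http://www.example.com/products/webcopy` counts as depth 2 because `webcopy` has no extension and is therefore treated as a path component rather than a document name.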

Configuring a scan depth

  1. From the Project Properties dialog, select the General category
  2. Check the Limit crawl depth option
  3. Enter the maximum depth that WebCopy will scan

Important

Scan depth is measured from the base domain, not the starting address.

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features