Crawling can be left to scan as much of a website as it can access, or it can be limited to crawl only to a certain depth.

Scan depth checks only apply to the main domain being crawled.

How does WebCopy determine depth?

WebCopy determines the depth of a URL by counting the path components it is made up of, excluding the document name where one can be identified. The following table shows some examples.

URL                                           Depth
http://www.example.com/                       0
http://www.example.com/index.html             0
http://www.example.com/products/              1
http://www.example.com/products/index.html    1
http://www.example.com/products/webcopy       2
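
As a minimal illustration of this rule, the following Python sketch reproduces the depths shown in the table above. It is an approximation rather than WebCopy's actual implementation; in particular, treating any final path segment that contains a dot as a document name is an assumption made for this example.

  from urllib.parse import urlparse

  def crawl_depth(url):
      # Count the path components of a URL, excluding a trailing
      # document name (assumed here to be any final segment that
      # contains a dot, e.g. "index.html").
      segments = [s for s in urlparse(url).path.split("/") if s]
      if segments and "." in segments[-1]:
          segments = segments[:-1]
      return len(segments)

  assert crawl_depth("http://www.example.com/") == 0
  assert crawl_depth("http://www.example.com/index.html") == 0
  assert crawl_depth("http://www.example.com/products/") == 1
  assert crawl_depth("http://www.example.com/products/index.html") == 1
  assert crawl_depth("http://www.example.com/products/webcopy") == 2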

Configuring a scan depth

  1. From the Project Properties dialogue, select the General category
  2. Check the Limit crawl depth option
  3. Enter the maximum level that WebCopy will scan

Scan depth is taken from the base domain, not the starting address. For example, with a limit of 1, http://www.example.com/products/webcopy (depth 2) is excluded even if the crawl starts at http://www.example.com/products/ (depth 1).

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features
