By default, WebCopy will only scan the primary host you specify, for example http://example.com.

If you need to copy non-HTML resources from other domains (e.g. a CDN), this is normally handled automatically by the Download all resources option. However, if you want to crawl HTML that isn't located on a sub-domain or sibling domain of the primary host, you can configure WebCopy to download HTML from additional domains.

Important

Some project settings are ignored when crawling additional domains, for example crawling above the root URL.

Configuring additional domains

  1. From the Project Properties dialogue, select the Additional Hosts category.
  2. Enter each additional host you want to crawl, one host per line. Do not enter protocol or path information; include only the domain name. You can use regular expressions if required (see the example after this list).
  3. Click OK to save your changes. The next time you crawl this website, any URLs belonging to the hosts you specified will no longer be skipped; they will be crawled as though they were part of the primary project URL.
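For example, to also crawl HTML hosted on two hypothetical additional domains, cdn.example.com and images.example.net, the Additional Hosts list might contain the following entries (with the dots escaped, as described below):

  cdn\.example\.com
  images\.example\.net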

If your expression includes any of the ^, [, ., $, {, *, (, \, +, ), |, ?, <, > characters and you want them to be processed as plain text, you need to "escape" the character by preceding it with a backslash. For example, if your expression was application/epub+zip, it would need to be written as application/epub\+zip; otherwise the + character would have a special meaning and no matches would be made. Similarly, if the expression was example.com, it should be written as example\.com, as . means "any character", which could lead to unexpected matches.
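As a minimal sketch of these escaping rules (shown here using Python's re module; WebCopy's own matching engine is assumed to follow the same standard regex conventions):

  import re

  # Unescaped "." matches any character, so this pattern also
  # matches an unrelated host name.
  print(bool(re.search("example.com", "examplexcom")))        # True (unexpected)

  # Escaping the dot restricts it to a literal ".".
  print(bool(re.search(r"example\.com", "examplexcom")))      # False
  print(bool(re.search(r"example\.com", "cdn.example.com")))  # True

  # Unescaped "+" means "one or more of the preceding character",
  # so the pattern fails to match the literal text.
  print(bool(re.search("application/epub+zip", "application/epub+zip")))   # False
  print(bool(re.search(r"application/epub\+zip", "application/epub+zip"))) # True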

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features
