By default, WebCopy scans only the primary host you specify, for example http://example.com. You can instruct WebCopy to include other hosts with completely different domain names, for example if the site you are copying makes use of a CDN.

Some project settings are ignored when crawling additional hosts, for example crawling above the root URL.

Consider using the Download all resources option for scenarios where non-HTML content is located on secondary servers. The Crawl Mode option can be used to include sub-domains or sibling domains of the root host.
Configuring additional hosts
- From the Project Properties dialog, select the Additional Hosts category
- Enter each additional host you want to crawl, one host per line. Do not enter protocol or path information; include only the domain name. You can use regular expressions if required (see the example entries after these steps).
- Click OK to save your changes. When you next crawl this website, any URLs belonging to the hosts you specify will no longer be skipped, but will be crawled as though they were part of the primary project URL.
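As an illustration, an Additional Hosts list might contain entries such as the following. The domain names here are made up for this example; the first two entries each match a single host, while the last uses a regular expression to match a family of numbered CDN hosts.

```
cdn.example.com
assets.example.net
static[0-9]+\.examplecdn\.net
```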
If your expression includes any of the characters that have a special meaning in regular expressions, such as `.`, `+`, `*`, `?`, `|`, `(`, `)`, `[`, `]`, `{`, `}`, `^`, `$` or `\`, and you want them to be processed as plain text, you need to "escape" the character by preceding it with a backslash. For example, if your expression was `application/epub+zip`, this would need to be written as `application/epub\+zip`; otherwise the `+` character would have a special meaning and no matches would be made. Similarly, if the expression was `example.com`, this should be written as `example\.com`, as `.` means "any character", which could lead to unexpected matches.
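To see the effect of escaping for yourself, the following minimal sketch uses Python's `re` module purely to demonstrate standard regular expression behaviour; WebCopy evaluates its expressions internally, so this is an illustration rather than WebCopy's own code.

```python
import re

# Unescaped, "+" means "one or more of the preceding character",
# so this pattern does not match the literal text.
print(re.fullmatch(r"application/epub+zip", "application/epub+zip"))
# -> None

# Escaping the "+" makes it match a literal plus sign.
print(re.fullmatch(r"application/epub\+zip", "application/epub+zip"))
# -> <re.Match ...>

# Unescaped, "." matches any character, so this also matches
# unintended values such as "exampleXcom".
print(re.fullmatch(r"example.com", "exampleXcom"))
# -> <re.Match ...>

# Escaping the "." restricts the match to a literal dot.
print(re.fullmatch(r"example\.com", "exampleXcom"))
# -> None
```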