This functionality is currently under review and may be removed in a future version of WebCopy. If you currently use this feature, we would be grateful if you could email email@example.com and explain your use case.
On some sites, one link may point to a folder while another points to that folder's default document. WebCopy would treat these as two separate entries and generate additional elements accordingly.
If you define default documents, WebCopy will try to link page-less URLs to URLs containing the default document name.
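The idea can be illustrated with a short sketch. This is not WebCopy's implementation; the URLs and default document names below are assumptions used purely for illustration.

```python
from urllib.parse import urlsplit, urlunsplit

# Example default document names; assumptions for this sketch, not WebCopy defaults.
DEFAULT_DOCUMENTS = {"index.html", "default.htm"}

def canonicalise(url: str) -> str:
    """Map a folder URL and its default-document URL to the same key."""
    parts = urlsplit(url)
    path = parts.path
    # If the path ends with a known default document, strip the file name
    # so that e.g. /docs/index.html collapses to /docs/.
    last_segment = path.rsplit("/", 1)[-1]
    if last_segment in DEFAULT_DOCUMENTS:
        path = path[: -len(last_segment)]
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

# Without default documents these would be two separate entries;
# with them defined, both resolve to the same canonical URL.
print(canonicalise("https://example.com/docs/"))
print(canonicalise("https://example.com/docs/index.html"))
```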
To configure default documents
- From the Project Properties dialog, expand the Deprecated category and select the Default Documents option.
- Enter the default document in the Default documents field.
You can specify multiple default documents by entering each document on a new line.
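For example, to treat both index.html and default.htm as default documents (common names used here for illustration; use whichever names your site actually serves), enter them on separate lines:

```
index.html
default.htm
```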