WebCopy allows you to configure which security protocols are used when communicating with sites over HTTPS.
Configuring which security protocols to use
- From the Project Properties dialogue, expand the Advanced category and select Security
- Check each protocol you wish to use
Important
Some websites can only be accessed if TLS 1.1 or 1.2 is enabled. By default, WebCopy enables all supported protocols, including the legacy SSL3 protocol. If crawling a website fails with an "An existing connection was forcibly closed by the remote host" error, this may be because a required protocol has been disabled.
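These settings are changed through the Project Properties dialogue rather than through code, but the following Python sketch illustrates in general terms what restricting the enabled protocols means for an HTTPS connection. The host name is a placeholder, and the minimum version shown is only an example of disabling older protocols.

```python
# Illustrative sketch only: WebCopy is configured via the Project Properties
# dialogue, not via code. This shows what restricting protocols means for an
# HTTPS connection in general. "example.com" is a placeholder host.
import socket
import ssl

context = ssl.create_default_context()

# Equivalent of unchecking the older protocols: allow only TLS 1.2 and later.
context.minimum_version = ssl.TLSVersion.TLSv1_2

host = "example.com"
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Negotiated protocol:", tls.version())
except (ssl.SSLError, ConnectionResetError) as ex:
    # A server that only supports a disabled protocol typically fails here,
    # often surfacing as a "connection forcibly closed" style error.
    print("Handshake failed:", ex)
```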
Ignoring SSL errors
If the SSL certificate associated with a website is invalid or untrusted, WebCopy will refuse to copy the site. You can force such sites to be copied by ignoring certificate errors.
Important
Due to the security risks posed by compromised websites, enabling this setting is not recommended.
- From the Project Properties dialogue, expand the Advanced category and select Security
- Check the Ignore certificate errors option
Note
When first attempting to copy a website, WebCopy checks whether a certificate is present and valid. If a certificate is present but not valid, and the Ignore certificate errors option is not set, a prompt is displayed allowing you to view the certificate and confirm whether copying should continue.
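Again, this option is set in the Project Properties dialogue, but the Python sketch below illustrates the general effect of ignoring certificate errors when fetching over HTTPS. The URL is a placeholder standing in for a site with an invalid or untrusted certificate.

```python
# Illustrative sketch only: this is not how WebCopy is configured, but it shows
# the general effect of ignoring certificate errors for an HTTPS request.
# "https://expired.example.com/" is a placeholder for a site whose certificate
# is invalid or untrusted.
import ssl
import urllib.request

url = "https://expired.example.com/"

# Default behaviour: the certificate is validated, and an invalid or untrusted
# certificate aborts the request.
strict = ssl.create_default_context()

# "Ignore certificate errors" equivalent: skip hostname and chain checks.
# The connection is still encrypted, but the server's identity is unverified.
relaxed = ssl.create_default_context()
relaxed.check_hostname = False
relaxed.verify_mode = ssl.CERT_NONE

for label, context in (("strict", strict), ("relaxed", relaxed)):
    try:
        with urllib.request.urlopen(url, context=context, timeout=10) as response:
            print(label, "->", response.status)
    except ssl.SSLCertVerificationError as ex:
        print(label, "-> certificate rejected:", ex.reason)
    except OSError as ex:
        # The placeholder host will not resolve; against a real site with a bad
        # certificate, only the "strict" case would fail in this way.
        print(label, "-> request failed:", ex)
```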
See Also
Configuring the Crawler
Working with local files
- Extracting inline data
- Remapping extensions
- Remapping local files
- Updating local time stamps
- Using query string parameters in local filenames
Controlling the crawl
- Content types
- Crawling multiple URLs
- Crawling outside the base URL
- Downloading all resources
- Including additional domains
- Including sub and sibling domains
- Limiting downloads by file count
- Limiting downloads by size
- Limiting scans by depth
- Limiting scans by distance
- Scanning data attributes
- Setting speed limits
- Working with Rules
JavaScript
Security
Modifying URLs
Creating a site map
Advanced
- Aborting the crawl using HTTP status codes
- Cookies
- Defining custom headers
- HEAD vs GET for preliminary requests
- HTTP Compression
- Origin reports
- Redirects
- Saving link data in a Crawler Project
- Setting the web page language
- Specifying a User Agent
- Specifying accepted content types
- Using Keep-Alive