Important
This functionality is currently under review and may be removed in a future version of WebCopy. If you currently use this feature, we would be grateful if you could email [email protected] and explain your use case for the feature.
You can configure WebCopy to route all requests through a proxy server.
Important
Proxy server settings have been flagged as deprecated because they do not currently support modern proxy protocols.
Using your system proxy server
If proxy settings have been configured in your operating system, you can instruct WebCopy to use the system-defined proxy. The sketch after the steps below illustrates what this means in practice.
- From the Project Properties dialogue, expand the Deprecated option and click Proxy
- To enable the use of a proxy server, check the Use Proxy checkbox
- In the Advanced group, check the Use system proxy server settings option
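WebCopy handles all of this internally once the option is checked, but as a rough, hypothetical illustration of what "use the system proxy" generally means, the following Python sketch shows how an HTTP client can discover and use the operating system's proxy settings. The URL is a placeholder and this is not WebCopy's own code.

```python
# Illustrative sketch only: discovering and using the system proxy settings.
import urllib.request

# getproxies() reads proxy settings from the environment
# (HTTP_PROXY / HTTPS_PROXY / NO_PROXY) and, on Windows and macOS,
# from the operating system configuration.
system_proxies = urllib.request.getproxies()
print("System proxies:", system_proxies)

# A ProxyHandler created without arguments uses those same discovered
# settings, so requests made through this opener are routed via the
# system proxy where one is defined.
opener = urllib.request.build_opener(urllib.request.ProxyHandler())
with opener.open("https://example.com/") as response:
    print(response.status, len(response.read()), "bytes")
```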
Configuring a proxy server
- From the Project Properties dialogue, expand the Deprecated option and click Proxy
- To enable the use of a proxy server, check the Use Proxy checkbox
- Enter the address of the proxy server into the Address field and set the Port.
- To bypass the proxy server for local addresses, check the Bypass proxy server for local addresses option. You can optionally enter other addresses to bypass in the Exceptions list.
- If the proxy server requires authentication, enter the login credentials in the User name, Password and, optionally, Domain fields. Alternatively, to use default credentials, check the Use default credentials option in the Advanced group. The sketch below shows how these settings map onto a standard proxy configuration.
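WebCopy applies these values for you, but as a rough sketch of how the same information (address and port, credentials, and bypass exceptions) is expressed in a conventional HTTP proxy configuration, the following Python example uses the third-party requests library with placeholder values; it is not WebCopy's implementation.

```python
# Hypothetical sketch only: placeholder proxy address, port, credentials
# and exception list, using the third-party "requests" library.
import os
import requests

# Proxy address and port, with an optional user name and password
# embedded in the URL (basic authentication).
os.environ["HTTP_PROXY"] = "http://user:pass@proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = os.environ["HTTP_PROXY"]

# Hosts contacted directly, bypassing the proxy - the same idea as the
# "Bypass proxy server for local addresses" and Exceptions options.
os.environ["NO_PROXY"] = "localhost,127.0.0.1,.intranet.example.com"

# requests reads these environment variables by default, so the request
# below is routed through the proxy unless the host matches NO_PROXY.
response = requests.get("https://example.com/")
print(response.status_code)
```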
See Also
Configuring the Crawler
Working with local files
- Extracting inline data
- Remapping extensions
- Remapping local files
- Updating local time stamps
- Using query string parameters in local filenames
Controlling the crawl
- Content types
- Crawling multiple URLs
- Crawling outside the base URL
- Downloading all resources
- Including additional domains
- Including sub and sibling domains
- Limiting downloads by file count
- Limiting downloads by size
- Limiting scans by depth
- Limiting scans by distance
- Scanning data attributes
- Setting speed limits
- Working with Rules
JavaScript
Security
- Crawling private areas
- Manually logging into a website
- TLS/SSL certificate options
- Working with Forms
- Working with Passwords
Modifying URLs
Creating a site map
Advanced
- Aborting the crawl using HTTP status codes
- Cookies
- Defining custom headers
- HEAD vs GET for preliminary requests
- HTTP Compression
- Origin reports
- Redirects
- Saving link data in a Crawler Project
- Setting the web page language
- Specifying a User Agent
- Specifying accepted content types
- Using Keep-Alive