Cyotek WebCopy Help v1.9
Sitemap
Welcome
Introducing WebCopy
What is Cyotek WebCopy?
License Agreement
System Requirements
Third Party Licenses
Getting Started
Downloading WebCopy
Scanning or downloading a website
How does WebCopy scan pages?
What are WebCopy's limitations?
WebCopy Components
32-bit vs 64-bit
User Interface
Information Panels
Results
Errors
Sitemap
Skipped
Files
Differences
Cookies
Menu Reference
Toolbar Reference
Status Bar Reference
Using the Website Links dialogue box
Using the Regular Expression Editor
Project Properties Reference
Accept Content Types
Additional Hosts
Additional URLs
Advanced
Content Types
Cookies
Custom Attributes
Custom Headers
Default Documents
Domain Aliases
Folder
Forms
General
HTTP Compression
Limits
Link Map
Local Files
Passwords
Proxy
Query Strings
Redirects
Rules
Security
Sitemap
Speed Limits
URL Normalisation
URL Transforms
User Agent
Web Browser
Web Page Language
Options Reference
Advanced
Appearance
Copy
General
Notifications
Web Page Language
Scanning or downloading a website
Quickly scanning a website
Scanning a website
Downloading a website
Post scan options
Working with Crawler Projects
Creating a project using a Wizard
Creating a new blank project
Opening an existing project
Saving the current project
Saving the current project with a new name
Configuring the crawler
Specifying the website to copy
Configuring the output location
Aborting the crawl using HTTP status codes
Configuring domain aliases
Content types
Cookies
Crawling multiple URLs
Crawling outside the base URL
Creating a site map
Defining custom headers
Downloading all resources
Extracting inline data
Fixing sites using mixed prefixes
Forcing HTTPS
HEAD vs GET for preliminary requests
HTTP Compression
Ignoring URL Case
Including additional domains
Including sub and sibling domains
Limiting downloads by file count
Limiting downloads by size
Limiting scans by depth
Limiting scans by distance
Origin reports
Query String Manipulation
Redirects
Remapping extensions
Remapping local files
Saving link data in a Crawler Project
Scanning data attributes
Setting speed limits
Setting the web page language
Specifying a User Agent
Specifying accepted content types
Specifying default documents
TLS/SSL certificate options
Transforming URLs
Updating local time stamps
Using a proxy server
Using Keep-Alive
Using query string parameters in local filenames
Working with JavaScript enabled websites
Crawling private areas
Working with Rules
What is a rule?
Adding rules
Editing rules
Deleting rules
Reordering rules
Testing rules
Executing rules
Working with Forms
Form properties
Adding forms
Editing forms
Deleting forms
Capturing forms
Testing a form
Working with Passwords
Password properties
Prompting for passwords
Adding passwords
Editing passwords
Deleting passwords
Manually logging into a website
Exporting links to CSV
Viewing reports
Troubleshooting crawl issues
Skip reasons
Customising WebCopy
Options
Automatic update checks
Automatically opening the sitemap
Backup configuration
Changing the application language
Font configuration
Notification options
Opening the previous document
Setting buffer sizes
Setting the default save folder
Showing the splash screen
Sitemap options
Changing the user interface theme
About External Tools
Managing Extensions
Managing Settings
Command line arguments
CLI vs GUI
GUI command line arguments
CLI command line arguments
Specifying Arguments
Exit Codes
Tools
Website Link Checker
GUI Client
Command Line Interface
Tutorials and examples
Sample project / demonstration website
Copying your first website
Using the website information panels
Using rules to exclude content
How to only copy images
How to log into a website
Technical resources
Project File Format
Netscape cookie file format
Support and online resources
Getting support
Reporting errors
Sending positive feedback
Sending negative feedback
Submitting feedback
Knowledge Base
Community Forums
Checking for updates
Customer feedback program
Acknowledgements