Before you can copy a web site, you need to specify the path where downloaded files will be saved. You can also optionally flatten the downloaded site hierarchy.

Specifying the save folder

  1. From the Project Properties dialog, select the Folder category
  2. In the Save folder field, enter the folder where the downloaded files will be stored

Creating sub-folders by domain

By default, WebCopy will download all web sites to C:\Downloaded Web Sites. Each new project will also have the Create folder for domain option enabled, which forces WebCopy to store downloaded material in a sub-folder based on the domain being copied. For example, if the site being copied was the WebCopy demonstration site https://demo.cyotek.com, the download location would be C:\Downloaded Web Sites\demo.cyotek.com.
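The behaviour can be pictured as joining the base save folder with the host name of the URL being copied. The sketch below is a hypothetical illustration only (the function name and logic are assumptions, not WebCopy's actual implementation); the base folder and URL come from the example above.

```python
from pathlib import Path
from urllib.parse import urlparse

def domain_folder(base: str, url: str) -> Path:
    """Illustrative sketch of the 'Create folder for domain' option:
    append the URL's host name to the base save folder."""
    host = urlparse(url).hostname  # e.g. "demo.cyotek.com"
    return Path(base) / host

# e.g. C:\Downloaded Web Sites\demo.cyotek.com
print(domain_folder(r"C:\Downloaded Web Sites", "https://demo.cyotek.com"))
```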

To disable this behaviour, uncheck the Create folder for domain option and ensure the Save folder is set to a unique path.

The default save location for new crawler projects can be configured from the Options dialog.

Flattening website folders

WebCopy will try to mirror the folder structure of the website in the local copy. However, in some cases this may not be appropriate. By setting the Flatten website folder option, all downloaded content will be placed in the same folder regardless of its location on the remote site.
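Conceptually, flattening maps each remote path to its bare file name. A natural consequence is name collisions between files from different remote folders; the sketch below is a hypothetical illustration (not WebCopy's actual implementation) that disambiguates duplicates with a numeric suffix.

```python
from pathlib import PurePosixPath

def flatten(remote_paths):
    """Map remote URL paths to flat local file names, renaming
    duplicates with a numeric suffix (e.g. index-1.html)."""
    seen = {}
    result = {}
    for remote in remote_paths:
        name = PurePosixPath(remote).name
        count = seen.get(name, 0)
        seen[name] = count + 1
        if count:
            stem, dot, ext = name.partition(".")
            name = f"{stem}-{count}{dot}{ext}"
        result[remote] = name
    return result

print(flatten(["/index.html", "/css/site.css", "/blog/index.html"]))
# {'/index.html': 'index.html', '/css/site.css': 'site.css',
#  '/blog/index.html': 'index-1.html'}
```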

Emptying the website folder

If you are rescanning an existing website, WebCopy will try to reuse existing filenames. However, if the website's URLs change frequently, or if you do not save the link meta data into the project, setting the Empty website folder before copy option will delete the contents of the save folder prior to copying the web site.

Check the Use Recycle Bin option to have WebCopy move all deleted files into the Recycle Bin. If this option is unchecked, files will be permanently erased.
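As an illustration of the permanent-erase case (Use Recycle Bin unchecked), emptying a save folder amounts to deleting its contents while keeping the folder itself. This is a hypothetical sketch, not WebCopy's code; note that moving files to the Recycle Bin instead requires a platform API (such as the Windows shell file operations), which the Python standard library does not provide.

```python
import shutil
from pathlib import Path

def empty_folder(save_folder: str) -> None:
    """Permanently delete the contents of save_folder,
    leaving the folder itself in place."""
    root = Path(save_folder)
    for entry in root.iterdir():
        if entry.is_dir() and not entry.is_symlink():
            shutil.rmtree(entry)   # remove sub-folder and everything in it
        else:
            entry.unlink()         # remove file (or symlink)
```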

Important

The WebCopy GUI application will prompt to continue if the Empty website folder before copy option is set and files are present in the destination folder. The console client will not prompt.

See Also

Configuring the Crawler

Working with local files

Controlling the crawl

JavaScript

Security

Modifying URLs

Creating a site map

Advanced

Deprecated features