A WebCopy project file stores all the settings used to crawl a given website, allowing users to set up repeatable jobs. Once a website has been scanned one or more times, the project also stores metadata describing the structure of the site.
A crawler project does not store any downloaded resources, only the information required to crawl a site.
Password and form data are stored in plain text within a WebCopy project file.
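Because credentials are not encrypted, it can be worth auditing a project file before sharing it. The sketch below is a hypothetical example, not part of WebCopy itself: it scans a file for lines containing credential-like keywords. The keyword list and the `.cwp` file name are assumptions; adjust them to match the actual element and attribute names used in your project files.

```python
import re

def find_plaintext_credentials(path, keywords=("password", "passwd", "formdata")):
    """Return (line number, line) pairs for lines in the given file that
    mention any of the credential-like keywords (case-insensitive).

    The default keyword list is a guess for illustration purposes only.
    """
    pattern = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)
    hits = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if pattern.search(line):
                hits.append((lineno, line.strip()))
    return hits

# Example usage (assumes a project file named "example.cwp" exists):
# for lineno, line in find_plaintext_credentials("example.cwp"):
#     print(f"line {lineno}: {line}")
```

Running a check like this before committing a project file to source control or sending it to a colleague helps avoid leaking stored passwords.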

Learn more about working with crawler projects from the links below.
