WebCopy requires two pieces of user-provided information before a website can be crawled: the primary address to copy, and the location where downloaded files should be stored.

The crawl process can be configured in many ways, from technical settings that control the HTTP protocol to rules that control which content is downloaded and which is ignored. The following topics detail these options.
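As an illustration of how download rules of this kind typically behave, the sketch below shows a first-match rule list deciding whether a URL should be downloaded. This is a hypothetical example in Python, not WebCopy's actual implementation; the patterns, the `should_download` function, and the default-allow behaviour are assumptions made for the illustration.

```python
import re

def should_download(url, rules):
    """Decide whether a URL is downloaded.

    Each rule is a (pattern, allow) pair; the first rule whose pattern
    matches the URL wins. If no rule matches, download by default.
    (Hypothetical logic for illustration only.)
    """
    for pattern, allow in rules:
        if re.search(pattern, url):
            return allow
    return True

# Example rule list: skip PDFs, keep pages on the main site, ignore the rest.
rules = [
    (r"\.pdf$", False),
    (r"^https://example\.com/", True),
    (r".", False),
]

print(should_download("https://example.com/index.html", rules))  # True
print(should_download("https://example.com/manual.pdf", rules))  # False
print(should_download("https://other.example.org/page", rules))  # False
```

Because the first matching rule wins, rule order matters: placing the catch-all ignore rule last lets the earlier, more specific rules take precedence.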


Web crawling is not an exact science; while the default crawl settings should work for many websites, some customisation, and some knowledge of how the website to be copied is structured and built, may be required.

To display the project properties dialog

  • From the Project menu, click Project Properties.

Configuring the Crawler

Working with local files

Controlling the crawl



Modifying URLs

Creating a site map


Deprecated features

© 2010-2021 Cyotek Ltd. All Rights Reserved.
Documentation version 1.8 (buildref #768.-), last modified 2021-03-30. Generated 2023-04-02 08:02 using Cyotek HelpWrite Professional version 6.19.1