A web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Many search engines use crawling (also called spidering) to keep their data up to date. Web crawlers are mainly used to create a copy of every visited page for later processing by a search engine, which indexes the downloaded pages to provide fast searches.
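The behavior described above (visit pages, keep a copy of each, follow links to new pages) can be sketched as a breadth-first crawl. This is a minimal illustration, not any particular engine's implementation: the `fetch` callback and the toy in-memory "web" are assumptions standing in for real HTTP requests, and link extraction uses Python's standard-library `html.parser`.

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch):
    """Breadth-first crawl. `fetch(url)` returns a page's HTML
    (a stand-in for a real HTTP GET). Returns a dict mapping each
    visited URL to its downloaded copy, which a search engine
    could later index."""
    frontier = deque([start_url])   # URLs discovered but not yet visited
    visited = {}                    # URL -> stored copy of the page
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        html = fetch(url)
        visited[url] = html         # keep a copy for later indexing
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            if link not in visited:
                frontier.append(link)
    return visited

# Toy "web": fetch() reads from an in-memory site instead of the network.
pages = {
    "/a": '<a href="/b">b</a><a href="/c">c</a>',
    "/b": '<a href="/a">a</a>',
    "/c": "no links here",
}
copies = crawl("/a", fetch=pages.get)
print(sorted(copies))  # → ['/a', '/b', '/c']
```

A production crawler would add the pieces this sketch omits: politeness delays, robots.txt handling, URL normalization, and persistent storage.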
A Japanese crawler collects web pages for research related to web search and data mining.