txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of the file, it may occasionally crawl pages that a webmaster does not wish to have crawled. Pages typically prevented from being crawled include login-specific pages
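As a rough sketch of this check, a crawler written in Python could consult the file through the standard-library robotparser module before fetching a page; the crawler name and URLs below are illustrative placeholders, not taken from any particular crawler.

from urllib import robotparser

# Fetch and parse the site's robots.txt once; a polite crawler would cache this result
# rather than re-downloading the file for every page.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether our user agent may fetch a given page before crawling it.
if rp.can_fetch("ExampleBot", "https://example.com/login"):
    print("robots.txt allows crawling this page")
else:
    print("robots.txt disallows crawling this page")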