Enables or disables the web crawler’s robots.txt support.
Can be set in: collection.cfg
This parameter enables or disables the crawler's support for robots.txt. The default behaviour of the web crawler is to check for robots.txt and honor any directives it contains.

This setting is useful if you need to legitimately crawl a website where the web crawler is blocked due to robots.txt directives and the site owner is unable to update the robots.txt file to provide Funnelback with access.

This setting should only be used as a last resort, as it disables the web crawler's adherence to the robots.txt standard.
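For example, robots.txt support can be switched off with a single entry in collection.cfg (a minimal sketch, assuming the parameter key crawler.ignore_robots_txt; the key name may differ in your Funnelback version):

  # Disable robots.txt support for this collection.
  # The default behaviour is to fetch robots.txt and honor its directives.
  crawler.ignore_robots_txt=true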
Ignoring robots.txt could also result in Funnelback being blacklisted from accessing your site by the site owner.

Using this setting can also have unwanted side effects for the web crawler (such as disabling support for sitemap.xml files) and for the sites that you are crawling. You should carefully check your web crawler logs to ensure you are not accessing or storing content that you don't wish to access, and add appropriate exclude patterns. For example, you should ensure that any search results pages and calendar feeds are explicitly added to your exclude patterns, as shown in the sketch below.
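Exclude patterns are configured as a comma-separated list in collection.cfg (a minimal sketch, assuming the exclude_patterns key; the /search and /events/calendar paths are placeholders for whatever result pages and calendar feeds exist on the site being crawled):

  # Skip URLs that robots.txt would normally have blocked, such as
  # search results pages and calendar feeds.
  exclude_patterns=/search,/events/calendar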