Web collections - controlling what information is included

Funnelback provides various controls that help to define what is included and excluded from a web crawl.

By default, Funnelback excludes many URLs because they are not relevant to a set of search results. For example, it makes no sense to index linked files such as images, CSS and JavaScript files, as they add no value to a search.

Include/exclude rules

Include/exclude rules can be used to define a set of patterns that are compared to each document’s URL. These patterns determine whether a URL is kept or rejected based on the URL itself.

A web page that must be visited in order to reach other pages must be included; otherwise, any child or linked pages may not be discovered by the web crawler. If you wish to exclude a page (such as a home page) that must be crawled through, it can instead be removed using a kill_exact.cfg or kill_pattern.cfg after the index is built.
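As an illustrative sketch, assuming kill_exact.cfg lists one exact URL per line (the URLs below are hypothetical), it might look like:

http://www.example.com/
http://www.example.com/index-page.html

Each listed URL is removed from the search index after it is built, while the page itself is still crawled so that its links are followed.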

The crawler processes each URL it encounters against the various options in the data source configuration to determine if the URL will be included (+) or excluded (-) from further processing:

Example

Assuming you had the following options:

include_patterns=/red,/green,/blue
exclude_patterns=/green/olive

Then the following URLs will be included or excluded:

URL             Success?   Comments
/orange         FAIL       fails include
/green/emerald  PASS       passes include, passes exclude
/green/olive    FAIL       passes include, fails exclude

Regular expressions in include/exclude patterns

To express more advanced include or exclude patterns you can use regular expressions for the include_patterns and exclude_patterns configuration options.

Regular expressions follow Perl 5 syntax and start with regexp: followed by a compound regular expression in which each distinct include/exclude pattern is separated by the | character.

Regex and simple include/exclude patterns cannot be mixed within a single configuration option.

An example of the more advanced regexp: form is:

exclude_patterns=regexp:search\?date=|^https:|\?OpenImageResource|/cgi-bin/|\.pdf$

which combines five alternative patterns into one overall pattern expression to match:

  1. URLs containing search?date= (for example, to exclude calendars).

  2. HTTPS URLs.

  3. Dynamic content generated by URLs containing ?OpenImageResource.

  4. Dynamic content from CGI scripts.

  5. PDF files.

Regex special characters that appear in patterns must be escaped (e.g. \? and \.):
include_patterns=regexp:\.anu\.edu\.au

Excluding URLs during a running crawl

The crawler supports exclusion of URLs during a running crawl. The crawler.monitor_url_reject_list collection.cfg parameter allows an administrator to specify additional URL patterns to exclude while the crawler is running. These URL patterns will apply from the next crawler checkpoint and should be converted to a regular exclude pattern once the crawl completes.
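As an illustrative sketch (the patterns below are hypothetical, and the value format is assumed here to mirror exclude_patterns; check the expected format for your Funnelback version), the option might be set like:

crawler.monitor_url_reject_list=/calendar/,/print-view/

Once the crawl completes, such patterns would be moved into exclude_patterns so they apply from the start of the next crawl.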

Robots.txt, robots meta tags and sitemap support

The Funnelback web crawler supports the following:

robots.txt

Funnelback honours robots.txt directives as outlined at http://www.robotstxt.org/robotstxt.html.

The Funnelback user agent can be used to provide Funnelback-specific robots.txt directives.

e.g. prevent access to /search* and /login* for all crawlers, but allow Funnelback to access everything:

User-agent: *
Disallow: /search
Disallow: /login

User-agent: Funnelback
Disallow:

Sitemap: http://www.example.com/sitemap.xml

Sitemap.xml

Funnelback supports the extraction of links (which are added to the crawl frontier) from linked sitemap.xml files (including nested and compressed sitemaps) that are specified in the robots.txt. Please note that other directives within sitemap.xml files (such as lastmod and priority) are ignored.

Funnelback does not process sitemap files by default - this must be enabled using the crawler.use_sitemap_xml configuration option.
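For example, assuming the option takes a boolean value, sitemap processing could be enabled in the data source configuration with:

crawler.use_sitemap_xml=true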

Robots meta tags

Funnelback honours robots meta tags as outlined at http://www.robotstxt.org/meta.html, as well as the nosnippet and noarchive directives.

The following directives can appear within a <meta name="robots"> tag:

follow / nofollow
index / noindex
nosnippet / noarchive

e.g.

<!-- index this page but don't follow links -->
<meta name="robots" content="index, nofollow" />
<!-- index this page but don't follow links and don't allow caching or snippets -->
<meta name="robots" content="index, nofollow, nosnippet" />

Robots directives supplied via HTTP headers are not supported.

HTML <a rel="nofollow">

Funnelback honours nofollow directives provided in the rel attribute of an HTML anchor (<a>) tag. e.g.

<!-- don't follow this link -->
<a href="mylink.html" rel="nofollow" />

Funnelback noindex/endnoindex and Googleoff/Googleon directives

Funnelback noindex tags (and their Google equivalents) are special HTML comments that can be used to mark parts of a web page as not containing content. This hides the text from the indexer: words inside a noindex region will not be included in the search index. However, any links contained within a noindex region will still be extracted and processed. e.g.

... This section is indexed ...
<!--noindex-->
... This section is not indexed ...
<!--endnoindex-->
... This section is indexed ...

Google HTML comment tags that are equivalent to the Funnelback noindex/endnoindex tags are also supported. The following are aliases of Funnelback’s native tags:

<!-- noindex --> == <!-- googleoff: index --> == <!-- googleoff: all -->
<!-- endnoindex --> == <!-- googleon: index --> == <!-- googleon: all -->

Other googleoff/googleon tags are not supported.

Noindex tags should ideally be included within a site template to exclude headers, footers and navigation. Funnelback also provides a built-in inject no-index filter that can write these noindex tags into a downloaded web page based on rules. However, this should only be used if it is not possible to modify the source pages, as changes to the source page templates can result in the filter not working correctly.
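As an illustrative sketch (the markup below is hypothetical), a site template might wrap its navigation and footer in noindex comments so that only the main content area is indexed:

<!--noindex-->
<header>
  <nav>... site navigation (not indexed, but links are still followed) ...</nav>
</header>
<!--endnoindex-->
<main>
  ... page content (indexed) ...
</main>
<!--noindex-->
<footer>... site footer (not indexed) ...</footer>
<!--endnoindex-->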

