Upgrading to Funnelback 16

If you are running a version of Funnelback earlier than 15.24, it is recommended that you first upgrade to 15.24 before attempting to upgrade to Funnelback 16.

Funnelback 16 is a major upgrade that introduces a number of fundamental and breaking changes to the way Funnelback operates. As a result, upgrading or migrating existing search implementations to v16 is a complex process involving many steps.

Before you start

Make a note of when each collection is scheduled to update, as these schedules will not be preserved once you have upgraded to v16. You can view the collection update schedules in v15.24 by selecting schedule automatic updates from the system menu of the administration interface.

Server configuration

SAML configuration

This applies to Funnelback installations that use SAML for administration interface authentication.

A breaking change was made to Funnelback’s SAML implementation for v16.

In v15 there were five service providers; in v16 there is only a single service provider: the Funnelback admin-api.

As a result, any redirect URLs and metadata URLs in a v15 implementation may need to be changed to the relevant values for the admin-api service provider.

The Groovy permission mapper will need to be updated to include a valid Funnelback 16 client ID. See the Groovy permission mapper example.

Collection upgrades

The Funnelback upgrade utility should be used to import groups of Funnelback 15 collections into Funnelback 16. The tool automates many of the steps required to upgrade a v15 collection.

This includes:

  • Conversion of collections into search packages and data sources. This includes updating collection ID references in configuration where possible.

  • Conversion of profiles into results pages.

  • Import of users and roles.

  • FreeMarker templates: templates will be upgraded, where possible, to account for data model changes. Note: complex FreeMarker and Handlebars templates may not always be upgraded correctly due to ambiguity in data model references. For example, listMetadata.image[0] should be upgraded to listMetadata.image.[0], but this may not be detected correctly.

  • Hook scripts: The upgrade utility will attempt to upgrade hook scripts to account for data model changes. In particular:

    • replacement of property inputParameterMap with inputParameters

    • replacement of property rawInputParameters with inputParameters

    • replacement of property metaData with listMetadata

    Due to the nature of Groovy syntax, the tool cannot guarantee that all aspects of a hook script will be upgraded correctly, so upgraded scripts should still be checked manually. For example, if the value of one of the properties listed above was assigned to a separate variable, the context of that property is lost, so any subsequent usage of the variable will not be upgraded.
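The variable-assignment pitfall can be sketched as follows. The property names come from the list above; the surrounding hook script code is a hypothetical illustration only:

```groovy
// Hypothetical v15 pre-process hook script fragment.
// The upgrade utility rewrites the property access on this line:
//   transaction.question.inputParameterMap  ->  transaction.question.inputParameters
def params = transaction.question.inputParameterMap

// But once the value is held in a local variable the tool loses track of it,
// so any later use of `params` is left untouched and must be reviewed by hand.
if (params["query"]) {
    transaction.question.query = params["query"]
}
```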

Update search integrations

All integrations with the search will need to be updated due to the change in collection IDs.

This includes:

  • Search boxes: search form parameters will need to be updated to use the new collection ID.

  • Auto-completion: any search forms that use auto-completion will need to be updated to use the new collection ID.

  • All API integrations: Integrations need to be updated to use the new collection ID. Until this is done the API will return an error about an invalid collection ID.

It is not always easy to update search box and auto-completion integrations quickly. It is possible to configure a collection ID mapping that maps an old collection ID to a new one so that the old IDs can continue to be used (this applies only to public search endpoints, not the API). However, this remapping should be treated as a short-term measure that gives the website owner time to update their search boxes, and it should be removed as soon as possible.
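As a sketch of the kind of change involved in a search box integration: the hostname and collection IDs below are placeholders, while the /s/search.html endpoint and the collection parameter are Funnelback’s standard public search interface.

```html
<!-- Before: v15 search box (placeholder collection ID) -->
<form action="https://search.example.com/s/search.html">
  <input type="hidden" name="collection" value="example-web">
  <input type="text" name="query">
</form>

<!-- After: only the hidden collection parameter changes, to the new v16 ID -->
<form action="https://search.example.com/s/search.html">
  <input type="hidden" name="collection" value="new-collection-id">
  <input type="text" name="query">
</form>
```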

Setting the automatic update schedule for your data sources

The collection update schedules used for v15 collection updates are not imported into v16 due to major changes both in how collections are structured and in a complete overhaul of the update scheduling system.

Data source updates must be configured before any automated updates will occur. This will need to be configured for each data source in your search.

The update system has been rewritten and adds many new scheduling options. It is now built into Funnelback and no longer relies on cron or the Windows Task Scheduler.

Scheduling data source updates outlines the configuration keys that must be set to configure an automated update for a data source.

Post upgrade steps

The upgrade utility will import the set of collections that were specified and convert them into the new search package/data source/results page structure.

At a minimum you should have a search package that includes one or more data sources and a results page.

You should perform some post-upgrade checks to ensure that the upgrade completed successfully.

  1. Check all the imported configuration for direct references to the old collection IDs. The upgrade utility should have updated all references to the old collection IDs, but this should be verified.

  2. Run test searches against the imported collection and compare the search results, looking for missing features or errors. Note that it is not unusual for the result order to change, but a similar number of results is expected.

  3. After running some searches, inspect the modern UI logs for the collection to ensure that the upgrade hasn’t broken any of the hook scripts. A broken hook script usually fails without reporting an error to the user; however, any errors will be logged, and a failure will usually terminate the hook script, preventing its functionality from being applied to the search.

  4. Analytics logs should be re-synced at the time you set the upgraded search live. This ensures that all the query logs are pulled to the new search so that analytics will be complete.
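For the first check above, a recursive text search over the imported configuration can surface leftover references. This is an illustrative sketch only: the throwaway directory stands in for your imported v16 configuration tree, and "old-intranet" is a placeholder for a real old collection ID.

```shell
# Illustrative only: the temporary directory stands in for the imported
# v16 configuration tree; "old-intranet" is a placeholder collection ID.
conf_dir=$(mktemp -d)
printf 'collection=old-intranet\n' > "$conf_dir/collection.cfg"

# Every file listed here still references the old v15 collection ID
# and needs to be fixed by hand.
grep -rl "old-intranet" "$conf_dir"

rm -rf "$conf_dir"
```

In a real upgrade, point the grep at the configuration directory of your Funnelback 16 installation instead of a temporary directory.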

Failed upgrades

If the upgrade process failed for some reason, you forgot to include some collections, or you just need to start again, you can re-run the upgrade utility with the --force command line option.

If you choose to re-run on a subset of collections, to avoid rechecking collections that were upgraded successfully, you will need to modify the command to also set the additional collection/user/role mappings to ensure that all ID references are correctly updated.