Crawler Service 
The crawler service allows crawlers to collect resources located in internal or external repositories, for example, for indexing purposes. A crawler returns the resources and the hierarchical or net-like structures of the respective repositories.
Services and applications that need repositories to be crawled (for example, the index management service) request a crawler from the crawler service.
The crawler service is a prerequisite for other services that depend on crawled content. It is a generic service that can be used by any CM service or application.
You can use parameters to influence the behavior of a particular crawler (see Crawlers and Crawler Parameters). In the configuration of the crawler service, you specify a set of crawler parameters that are used by default for index management tasks. You can also specify how many crawlers should run in parallel.
Crawler Service Parameters
Parameter | Required | Description
--- | --- | ---
Maximum Number of Parallel Running Crawlers | No | Number of crawlers that run in parallel. Restricting this number reduces the load that parallel crawlers place on the portal nodes, the database, and the back-end systems being crawled. No entry or 0 means no restriction.
Default Crawler Parameters | Yes | Specifies the set of crawler parameters that is used by default if no other set is defined. The default setting is the standard set. You can also use this set for delta crawling.
The crawler service is preconfigured and activated in the standard KM configuration. Normally, you do not need to change its configuration. To call up the configuration, choose Content Management → Global Services → Crawler Service.