To find documents stored on external Web servers when searching in the portal, you have to configure the Web repositories and then crawl and index them.
Depending on your requirements, you can use the simple Web repository or, if you need special functions, a standard Web repository.
Case A: Simple Configuration
Carry out the following steps to make the content of remote Web servers searchable with minimal configuration.
All Web addresses that you then create can be reached using the simple Web repository and can be indexed periodically.
If required, you specify a proxy server in the configuration of the simple Web repository manager. For more information, see Simple Web Repository Manager.
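The role of the proxy setting can be sketched in generic terms: a crawler that must reach external Web servers through a proxy routes every request via that proxy instead of connecting directly. The following Python sketch illustrates this; the proxy host and port are hypothetical placeholders for the values you enter in the simple Web repository manager, and no request is actually sent.

```python
import urllib.request

# Hypothetical proxy address -- substitute the host and port of your own
# proxy server, as entered in the simple Web repository manager.
PROXY = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# Route all requests through the proxy instead of connecting directly.
proxy_handler = urllib.request.ProxyHandler(PROXY)
opener = urllib.request.build_opener(proxy_handler)

# opener.open("http://remote-server.example.com/doc.html") would now be
# fetched through the proxy; this sketch does not perform the request.
```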
If you want to implement filters for crawling, you can carry out steps 5 - 7 of the extensive configuration for case B.
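Conceptually, a crawl filter decides for each Web address whether it should be fetched and indexed. A common scheme, shown as a minimal Python sketch below, is to accept a URL only if it matches an include pattern and no exclude pattern; the patterns and URLs are hypothetical examples, not values from the product.

```python
import fnmatch

# Hypothetical filter patterns -- replace with the rules you would
# configure for your own crawler.
INCLUDE = ["http://remote-server.example.com/docs/*"]
EXCLUDE = ["*/internal/*", "*.tmp"]

def url_passes_filter(url: str) -> bool:
    """Accept a URL only if it matches an include pattern
    and no exclude pattern."""
    if not any(fnmatch.fnmatch(url, pat) for pat in INCLUDE):
        return False
    return not any(fnmatch.fnmatch(url, pat) for pat in EXCLUDE)
```

For example, `url_passes_filter("http://remote-server.example.com/docs/a.html")` is accepted, while a URL under an `internal` path or on another host is rejected.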
Case B: Extensive Configuration
If you want to use functions such as form-based registration or filters when crawling the content of remote Web servers, you have to create a standard Web repository.
Carry out the following steps to make the content of a standard Web repository searchable.
You must activate the repository service properties in the configuration of the Web repositories so that the content of the Web repositories can be classified.
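The form-based registration mentioned for the standard Web repository amounts, in generic HTTP terms, to posting the fields of a logon form before crawling protected pages. The sketch below illustrates this pattern with Python's standard library; the form field names, credentials, and logon URL are all hypothetical assumptions, and the request itself is not sent.

```python
import urllib.parse
import urllib.request

# Hypothetical form fields -- the actual names depend on the logon page
# of the remote Web server you configure.
credentials = {"user": "crawler", "password": "secret"}
form_data = urllib.parse.urlencode(credentials).encode("ascii")

# Form-based registration is an HTTP POST of the encoded form fields to
# the logon URL; the crawler reuses the resulting session afterwards.
request = urllib.request.Request(
    "http://remote-server.example.com/logon",  # hypothetical logon URL
    data=form_data,
    method="POST",
)
# urllib.request.urlopen(request) would perform the logon; this sketch
# only builds the request.
```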
The following applies to both cases.
You can use the crawler monitor to monitor the crawler process. When the crawler has transmitted the results to TREX, you can monitor the indexing process using the TREX monitor. When TREX has indexed the content of the Web repository, you can search for the documents stored there.