You back up TREX indexes and queues online using Python scripts, without stopping TREX and while the TREX search remains available. You then restore the saved data offline. This is the usual way to back up TREX data. We recommend backing up the TREX indexes if the original indexing took a long time and you want to avoid reindexing should the full-text information be lost. We also recommend a backup if a large number of documents have been added to an index since the original indexing run.
Backup and restore of TREX during live operation is possible only under certain conditions. You must observe these prerequisites; otherwise inconsistencies and corrupt indexes or queues can occur, with subsequent data loss.
These scripts are located in the /usr/sap/<SAPSID>/TRX<instance_number>/exe/python_support directory.
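As a small illustration of the directory layout above, the following sketch builds the `python_support` path from an SAP system ID and instance number. The helper name and the example values (`TRX`, `00`) are assumptions for illustration only; on a real system you would substitute your own `<SAPSID>` and `<instance_number>`.

```python
import os.path

def python_support_dir(sapsid: str, instance_number: str) -> str:
    """Build the path to the TREX python_support directory.

    Follows the layout /usr/sap/<SAPSID>/TRX<instance_number>/exe/python_support
    stated in the documentation. The helper itself is illustrative and
    not part of TREX.
    """
    return os.path.join(
        "/usr/sap", sapsid, f"TRX{instance_number}", "exe", "python_support"
    )

# Hypothetical example values:
print(python_support_dir("TRX", "00"))
# /usr/sap/TRX/TRX00/exe/python_support
```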
You cannot automatically suppress all write access to TREX from within TREX itself. You can only verify by administrative means (manual monitoring) that no such write access takes place.
The application that uses TREX (for example, Knowledge Management in the Enterprise Portal) must also not perform any TREX-related write operations. For example, check that no crawler process is running and that the KM repositories are set to read-only. This restriction prevents data inconsistencies and data loss between TREX and the application using the TREX data.
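Because TREX cannot enforce these conditions itself, an operator has to record the outcome of each manual check before starting the backup. The following minimal sketch shows one way to gate the backup on those recorded outcomes; the function and check names are assumptions for illustration and are not read from TREX or KM.

```python
def write_access_checks_passed(checks: dict) -> bool:
    """Return True only if every manual pre-backup check passed.

    The check names below are illustrative placeholders for the
    manual verifications described in the text.
    """
    return all(checks.values())

# Hypothetical outcome of the manual checks:
checks = {
    "no_crawler_running": True,         # verified in KM administration
    "km_repositories_read_only": True,  # verified in KM administration
    "no_other_application_writes": True,
}
print(write_access_checks_passed(checks))  # True
```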
Data backup can be carried out consistently only when there is no active write access. Each index is backed up consistently on its own. If, from the application's perspective, multiple indexes form one logical index, those indexes may be saved in states that do not match one another.
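To illustrate the last point, the sketch below flags a logical group of indexes whose individual backups were captured too far apart in time, which is where mismatched states can arise. Everything here (function name, timestamp source, tolerance) is an assumption for illustration; TREX provides no such check.

```python
from datetime import datetime

def group_is_consistent(backup_times: dict, max_skew_seconds: float = 0) -> bool:
    """Return True if all per-index backup timestamps in a logical
    group lie within max_skew_seconds of each other.

    Each index is consistent on its own; this only flags groups whose
    members were captured at different times. Illustrative only.
    """
    times = sorted(backup_times.values())
    skew = (times[-1] - times[0]).total_seconds()
    return skew <= max_skew_seconds

# Hypothetical example: two indexes forming one logical index,
# backed up five minutes apart:
backups = {
    "index_a": datetime(2024, 1, 1, 2, 0, 0),
    "index_b": datetime(2024, 1, 1, 2, 5, 0),
}
print(group_is_consistent(backups, max_skew_seconds=60))  # False: 300 s apart
```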
We recommend starting the backup on the host on which the TREX master index server runs. This avoids unnecessary network load.
Use the importManager.py and exportManager.py Python scripts to carry out the following activities: