Administration
You can only start and stop Knowledge Management (KM) together with the portal. For more information, see Starting and Stopping.
The stand-alone engine Search and Classification (TREX) is started and stopped independently of KM and the portal. For more information, see Starting and Stopping TREX. If TREX is to be temporarily unavailable, you do not need to stop KM or the portal. However, you cannot use the search and indexing functions during this time. These functions are available again as soon as TREX is restarted.
Like the portal, Knowledge Management can be operated in a distributed system landscape. For information about load balancing, see Planning Guide – System Landscape Directory at service.sap.com/sld.
Like the portal, Knowledge Management uses the user management of the J2EE Engine. For more information, see User Management Engine.
The procedures for backing up and restoring Knowledge Management are the same as those described in the portal technical operations manual, in the Backup/Restore and Recovery section.
KM content in internal repositories and CM repositories in DB persistence mode is stored in the database along with most of the KM configuration data. To back up KM content, you have to back up the database. For information about backing up and restoring the database, see Backing up and Restoring AS Java.
As well as backing up the database, you have to back up the directory hierarchy of the /etc file system repository, which contains system and configuration data. By default, this repository is located at .../usr/sap/<SAP System ID>/SYS/global/config/cm/etc. Include this directory in the data backup and restore process for KM, using your standard backup mechanisms.
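Purely as an illustration of including this directory in a file-level backup, the following Python sketch archives the directory hierarchy of the /etc repository. The repository path (with its installation-specific prefix and <SAP System ID>) and the backup target directory are placeholders; in practice, your standard backup tooling would normally handle this step.

import tarfile
from datetime import datetime
from pathlib import Path

# Path to the /etc file system repository; the leading part of the path
# and <SAP System ID> are installation-specific (placeholder values).
ETC_REPOSITORY = Path("/usr/sap/<SAP System ID>/SYS/global/config/cm/etc")
BACKUP_DIR = Path("/backup/km")  # hypothetical backup target directory

def backup_etc_repository() -> Path:
    """Archive the /etc repository directory hierarchy for the KM backup."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"km-etc-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store paths relative to the repository root so the archive
        # can be restored to any installation directory.
        tar.add(ETC_REPOSITORY, arcname="etc")
    return archive

if __name__ == "__main__":
    print(f"Created backup archive: {backup_etc_repository()}")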
If you only want to restore individual files in your productive system (with their properties and ACLs) instead of restoring the entire database, proceed as follows:
1. Import the backed up database into a second portal installation (backup system). This can be a test system, for example.
2. In the backup system, create a WebDAV repository manager for the CM repository in question.
3. In the productive system, use the WebDAV repository manager to access the required files and copy them to the appropriate folders, thereby restoring them. (A minimal sketch of such a WebDAV copy follows the note below.)

When you restore files using WebDAV, only the properties and ACLs of the documents in question are included. Additional metadata, such as feedback, ratings, and read flags, can only be restored by restoring the entire database.
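The copy in step 3 is normally performed through the portal's WebDAV repository manager, which carries the properties and ACLs along. Purely to illustrate the WebDAV access involved, the following Python sketch copies a single file from the backup system to the productive system over HTTP. The host names, credentials, and document path are hypothetical, and note that a plain GET/PUT transfers only the file content; properties and ACLs require the KM copy function described above (or additional WebDAV PROPFIND/PROPPATCH requests).

import requests
from requests.auth import HTTPBasicAuth

# Hypothetical WebDAV endpoints: the backup system exposes the restored
# CM repository; the productive system receives the copied file.
BACKUP_URL = "https://backup-portal.example.com/km/docs/documents"
PROD_URL = "https://portal.example.com/km/docs/documents"
AUTH = HTTPBasicAuth("admin", "secret")  # replace with real credentials

def restore_file(relative_path: str) -> None:
    """Copy a single file from the backup system to the productive system."""
    # GET the file content from the backup system's WebDAV interface.
    response = requests.get(f"{BACKUP_URL}/{relative_path}", auth=AUTH)
    response.raise_for_status()
    # PUT the content to the same path on the productive system.
    put = requests.put(f"{PROD_URL}/{relative_path}",
                       data=response.content, auth=AUTH)
    put.raise_for_status()

restore_file("reports/annual-report.pdf")  # hypothetical document path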
If you are operating CM repositories in DBFS persistence mode, you must use an adapted procedure for backing up and restoring data. For more information, see Backing Up and Restoring CM Repositories in DBFS Mode.
Content on remote servers that you have included in Knowledge Management using repository managers is not normally covered by the backup and restore procedures described by SAP. Take this into account when planning your data backup and restore, and use the usual procedures of your company's IT department.

We recommend storing an overview of all connected remote servers with your project documents. You can find the required information (the address of the remote server and path details) in the Knowledge Management configuration. To call up the configuration, choose System Administration → System Configuration in the portal, and then choose Knowledge Management → Content Management → Repository Managers in the detailed navigation pane. Note the information relevant for each server (for example, the entries in the Root Directory parameter) for the CM, file system, and WebDAV repository managers, as well as for all other repository managers used in your scenario. You can then use this overview when backing up the data.
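As a simple illustration of keeping this overview in machine-readable form, the following Python sketch writes a few repository manager entries to a CSV file. The manager names, types, and addresses shown are invented examples and would be replaced with the values you noted from the configuration.

import csv

# Hypothetical overview of connected remote servers, collected manually
# from System Administration → System Configuration → Knowledge Management
# → Content Management → Repository Managers.
remote_servers = [
    # (repository manager, type, root directory / address) - example values
    ("corporate_docs", "WebDAV", "https://docs.example.com/dav"),
    ("file_share", "File System", r"\\fileserver\km_share"),
]

with open("km_remote_servers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Repository Manager", "Type", "Root Directory / Address"])
    writer.writerows(remote_servers)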
Also bear in mind the Data Backup and Restore for TREX section. Note that you must suspend all KM crawling tasks in the indexing monitor before starting a complete (offline) data backup for TREX. After restoring the data, you must restart the crawling tasks.