Plan the number of manager and controller nodes in the cluster, and follow best
practices when configuring multiple nodes.
How Many Manager and Controller Nodes?
- If you are not concerned about failure recovery or load sharing, a
single-node cluster may be enough.
- When you add nodes to the cluster, you can place them on the same host machine
as the first node or on different hosts. In a production environment, SAP
recommends that you install additional nodes on different hosts, with no more
than one manager and one controller per host. This allows you to take advantage
of the load sharing and failure recovery features offered by the clustering
architecture.
- When you add the first few nodes to a cluster,
SAP recommends that you maintain a
one-to-one ratio of managers to controllers. Once you have four manager
nodes, the benefit of adding more diminishes. In a medium-sized or large
cluster, there are typically more controller nodes than manager nodes—add
more controllers as your portfolio of projects grows.
- The number of projects a controller can support depends partly on the size of
the machine it runs on.
- If you plan to use the failover feature for failure recovery, SAP recommends
that you configure at least three managers and three controllers in your
cluster.
- In every cluster, define no more than one manager per host.
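Taken together, the sizing guidance above can be sketched as a small planning helper. This is illustrative only: the function, its parameters, and the `projects_per_controller` capacity figure are assumptions for the demo, not part of any SAP tool, and real controller capacity depends on machine size and workload.

```python
import math

def plan_nodes(num_hosts: int, num_projects: int, use_failover: bool,
               projects_per_controller: int = 5):
    """Suggest (managers, controllers) counts for a cluster.

    projects_per_controller is a hypothetical capacity figure; the real
    number depends on machine size and project workload.
    """
    # At most one manager per host; beyond four managers the benefit diminishes.
    managers = min(num_hosts, 4)
    # Start with a one-to-one ratio of managers to controllers.
    controllers = managers
    if use_failover:
        # Failover guidance: at least three managers and three controllers.
        managers = max(managers, 3)
        controllers = max(controllers, 3)
    # Add controllers as the portfolio of projects grows.
    controllers = max(controllers,
                      math.ceil(num_projects / projects_per_controller))
    return managers, controllers

print(plan_nodes(num_hosts=5, num_projects=12, use_failover=True))   # (4, 4)
print(plan_nodes(num_hosts=3, num_projects=30, use_failover=False))  # (3, 6)
```

Note that with very few hosts, the failover minimum of three managers can exceed the one-manager-per-host rule; in that case, add hosts rather than doubling up managers.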
Configuring Multiple Nodes in a Cluster
- Set a common base directory for projects so that all project log store files
are saved to a common location.
- Reference common security files. All nodes must have the same security
configuration, and every node requires a keystore, regardless of authentication
mode. All manager nodes in a cluster share common configuration files,
including keystore, LDAP, Kerberos, and RSA files. These files, located by
default in STREAMING_HOME/security, must reside in a shared location that all
nodes in the cluster can access.
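The shared-location requirement can be illustrated with a small filesystem sketch. The paths below are invented for the demo; in a real cluster the shared directory would be a network mount reachable by every host, populated from STREAMING_HOME/security.

```python
from pathlib import Path
import tempfile

root = Path(tempfile.mkdtemp())  # stands in for the cluster's filesystems

# One shared security directory holding the keystore and related files.
shared_security = root / "shared" / "security"
shared_security.mkdir(parents=True)
(shared_security / "keystore.jks").touch()  # keystore required by every node

# Each node's security directory links to the shared copy, so all nodes
# see an identical security configuration.
for node in ("manager1", "controller1"):
    node_home = root / node
    node_home.mkdir()
    (node_home / "security").symlink_to(shared_security)

print((root / "controller1" / "security" / "keystore.jks").is_file())  # True
```

The point of the sketch is simply that every node resolves to the same physical keystore, so the files cannot drift out of sync between nodes.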
- If a project needs to be able to fail over to a controller on another
machine, put its input files and output file destinations in a shared location.
If the project does not need to fail over, set controller affinities to limit
which nodes the project can run on, and store input files and output file
destinations only on those nodes.
- Set the same cache name on all manager nodes in a cluster so that every
manager can access the cache.
- When configuring the cluster through ESP Cockpit, specify a unique name for
each node. Node names should contain only numbers, letters, and underscores.
See Configuring a Node for more information.
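Since an invalid node name can cause configuration problems, a quick validity check along these lines may be useful; the helper function and its name are hypothetical, not part of ESP Cockpit.

```python
import re

# Node names may contain only numbers, letters, and underscores.
_VALID_NODE_NAME = re.compile(r"[A-Za-z0-9_]+")

def is_valid_node_name(name: str) -> bool:
    """Return True if name uses only letters, digits, and underscores."""
    return bool(_VALID_NODE_NAME.fullmatch(name))

print(is_valid_node_name("manager_1"))  # True
print(is_valid_node_name("node-2"))     # False (hyphen not allowed)
```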