Kolab Deployment on Redundant Servers
=====================================
This deployment provides high availability to a Kolab Groupware installation through redundancy of its underlying storage.
Please see the following sections for details on their semantics:
Using DRBD
----------
When used solely for redundancy, DRBD replicates the disk of one node to another node: the replica master is the active server, while the replica slave remains passive. That is to say, the replica master can access the data on the replicated volume, while the replica slave cannot.
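For illustration, a minimal DRBD 8.4 resource definition might look as follows; this is a sketch only, and the resource name, hostnames, backing device and addresses are placeholders to be adapted to your environment::

    resource kolab {
        # The DRBD block device the filesystem is created on.
        device    /dev/drbd0;
        # The backing block device on each node (placeholder).
        disk      /dev/vg0/kolab;
        meta-disk internal;

        # Hostnames must match the output of `uname -n` on each node.
        on node1.example.org {
            address 192.168.122.11:7789;
        }
        on node2.example.org {
            address 192.168.122.12:7789;
        }
    }

The current role of each node can then be verified with ``drbdadm role kolab``.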
In this scenario, the master is the active server running all Kolab services, while the slave is completely passive (it cannot write to the disk being replicated). A failover first ensures the master is no longer running, and then starts the services on the slave, which thereby becomes the new master.
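A rough sketch of a manual failover follows, assuming the ``kolab`` resource from the example above is mounted on ``/var/spool/kolab``; the service name is a placeholder for the full set of Kolab services, and in production this procedure would typically be automated by a cluster resource manager such as Pacemaker::

    # On the old master, if it is still reachable: stop the Kolab
    # services (placeholder unit name), release the volume and demote.
    systemctl stop kolabd
    umount /var/spool/kolab
    drbdadm secondary kolab

    # On the slave that is to become the new master: promote, mount
    # the replicated volume and start the services.
    drbdadm primary kolab
    mount /dev/drbd0 /var/spool/kolab
    systemctl start kolabd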
In a more complex scenario, two separate volumes could be replicated in a cross-link scenario, allowing the resources of both nodes to be used in parallel and establishing a limited form of load-balancing.
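As a sketch of such a cross-link setup (resource names, devices and addresses are assumptions), each node would be made primary for one resource and secondary for the other; note that the primary role is not expressed in the configuration file but assigned at runtime, e.g. ``drbdadm primary mail`` on node1 and ``drbdadm primary web`` on node2::

    resource mail {
        device    /dev/drbd0;
        disk      /dev/vg0/mail;
        meta-disk internal;
        on node1.example.org { address 192.168.122.11:7789; }
        on node2.example.org { address 192.168.122.12:7789; }
    }

    resource web {
        device    /dev/drbd1;
        disk      /dev/vg0/web;
        meta-disk internal;
        on node1.example.org { address 192.168.122.11:7790; }
        on node2.example.org { address 192.168.122.12:7790; }
    }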
Using DRBD in a so-called dual-primary mode requires the use of a cluster filesystem such as the Global File System (GFS, not to be confused with GlusterFS).
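In DRBD 8.4 syntax, dual-primary mode would be enabled in the resource's ``net`` section, for example as follows; this is a sketch only, and fencing and cluster integration are deliberately omitted::

    resource kolab {
        net {
            protocol C;
            allow-two-primaries yes;
        }
        # ... device, disk and on sections as before ...
    }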
Designing, installing, configuring and maintaining a DRBD cluster is beyond the scope of this document, and it is not a solution the Kolab community commonly supports.
For more assistance, please consider contacting a local IT solutions provider, `Kolab Systems AG`_, or any of its Certified Partners.