Overview of a Shared Disk Cluster

In a traditional environment, each server controls its own drive resources. When a server goes offline, whether because of a subsystem failure or for scheduled maintenance, the drive resources it controls also go offline and become unavailable.
This situation is common, but it is clearly undesirable. By implementing a high-availability, shared-disk cluster, you can keep a server's drive resources available even when subsystems within that server fail.

Figure 1 below shows a simplified view of a shared-disk cluster.
In a dual-node, shared-disk cluster environment, two identical (or very similar) servers share the same drive resources.
The shared-drive resources reside in a separate storage expansion enclosure, and cabling between each server (sometimes called a node) and the expansion enclosure provides both servers equal access to the shared drives.

When both servers are online in this 'active/active' clustered environment, the servers share the workload because each server can control and manage specific shared-disk resources.
In the event that one server fails or goes offline for scheduled maintenance, the remaining active server automatically assumes control of all shared-disk resources and keeps them available.
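
The following sketch is purely illustrative and is not part of the IBM cluster software; it models, in Python, how the shared-disk resources owned by a failed node might be reassigned to the surviving node in a dual-node, active/active cluster. All class, node, and resource names are hypothetical.

    # Hypothetical sketch (not IBM cluster software): models how shared-disk
    # resources owned by a failed node are taken over by the surviving node
    # in a dual-node, active/active cluster.

    class ClusterNode:
        def __init__(self, name):
            self.name = name
            self.online = True
            self.owned_resources = set()

    class SharedDiskCluster:
        def __init__(self, node_a, node_b):
            self.nodes = [node_a, node_b]

        def assign(self, node, resource):
            node.owned_resources.add(resource)

        def fail_node(self, failed):
            """Mark a node offline and move its shared-disk resources to the peer."""
            failed.online = False
            survivor = next(n for n in self.nodes if n is not failed)
            survivor.owned_resources |= failed.owned_resources
            failed.owned_resources.clear()
            return survivor

    # Example: each node manages half the shared drives until one fails.
    a, b = ClusterNode("ServerA"), ClusterNode("ServerB")
    cluster = SharedDiskCluster(a, b)
    cluster.assign(a, "Array-1")
    cluster.assign(b, "Array-2")
    survivor = cluster.fail_node(a)
    print(survivor.name, "now owns:", sorted(survivor.owned_resources))

Running the example shows ServerB taking ownership of both arrays once ServerA is marked offline, which mirrors the automatic takeover described above.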

Notes:

  1.  The cluster software packages were designed and tested for use with the high-availability functions provided by the IBM ServeRAID II Ultra SCSI Adapter, the IBM Netfinity ServeRAID-3H Ultra2 SCSI Adapter, or the IBM Netfinity Fibre Channel RAID Controller.
  2.  To support clustered configurations, the ServeRAID II adapter firmware, BIOS code, device drivers, and utility programs must be at version 2.40 or higher.
  3.  All dual-node, shared-disk cluster examples that appear in this reference use IBM ServeRAID II Ultra SCSI Adapters, IBM Netfinity ServeRAID-3H Ultra2 SCSI Adapters, or IBM Netfinity Fibre Channel RAID Controllers to manage the shared-disk resources.


Both servers continuously monitor each other's functional status through a network-crossover cable.
This network-crossover cable, sometimes referred to as the cluster's heartbeat, connects two IBM PCI EtherJet(TM) Adapters (one in each server) and provides the dedicated, point-to-point communication link between the servers.

Note: You must use IBM 100/10 PCI EtherJet Adapters or IBM 10/100 EtherJet PCI Adapters for the cluster's heartbeat connection.
You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
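
To illustrate the idea of the heartbeat link, the following minimal Python sketch shows one way a dedicated, point-to-point heartbeat could work: one node sends periodic UDP datagrams across the crossover link, and the peer assumes the node has failed after several missed intervals. The addresses, port, and timing values are assumptions, and this is not the mechanism the cluster software actually uses.

    # Hypothetical heartbeat sketch (not the actual cluster software): one node
    # sends periodic UDP datagrams over the dedicated crossover link; the peer
    # declares it down after several missed intervals.
    import socket
    import time

    PEER_ADDR = ("192.168.100.2", 9000)   # assumed address of the peer's heartbeat adapter
    INTERVAL = 1.0                        # seconds between heartbeats
    MISS_LIMIT = 3                        # missed intervals before declaring the peer down

    def send_heartbeats():
        """Run on one node: transmit a heartbeat datagram every INTERVAL seconds."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"heartbeat", PEER_ADDR)
            time.sleep(INTERVAL)

    def monitor_peer(listen_port=9000):
        """Run on the other node: trigger failover if MISS_LIMIT intervals pass silently."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", listen_port))
        sock.settimeout(INTERVAL * MISS_LIMIT)
        try:
            while True:
                sock.recv(64)             # any datagram counts as a live peer
        except socket.timeout:
            print("Peer silent; take over its shared-disk resources")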

 

