PC Server 330 Tower Cluster Example

Figure 4 (below) shows a low-cost, high-availability, shared-disk cluster consisting of two PC Server 330 systems and two PC Server 3518 Enterprise Expansion Enclosures.
In addition to its standard features, each PC Server 330 contains two 266 MHz Intel Pentium II microprocessors with 512 KB of level-2 cache (one microprocessor standard), 128 MB of ECC system memory (64 MB standard), two 4.51 GB hot-swap hard disk drives, one IBM 100/10 PCI EtherJet Adapter, and one IBM ServeRAID II Ultra SCSI Adapter.
(See 'Parts List for the PC Server 330 Tower Cluster Example' for a complete list of the components used in this example.)

Note: Although this example shows ServeRAID II adapters, you also could use ServeRAID-3H adapters.
 

Figure 4. PC Server 330 Tower Cluster Example 


The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers. This cable connects the IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status. The servers connect to the public network using the Ethernet controllers on the system boards. Using the public-network connection and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.


Notes:

  1.  You must use IBM 100/10 PCI EtherJet Adapters for the cluster's heartbeat connection.
  2.  You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
  3.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.
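
To illustrate the heartbeat link's role, here is a minimal Python sketch of one way a node could exchange periodic status messages over the dedicated point-to-point interface and treat its peer as failed only after several missed intervals. The addresses, port, and timing values are illustrative assumptions, not values from this configuration; actual clustering software such as Microsoft Cluster Server implements its own protocol and tuning.

  # Minimal heartbeat sketch (illustrative only).
  import socket
  import time

  PEER_ADDR = ("192.168.0.2", 5405)   # assumed address of the peer's EtherJet interface
  LOCAL_ADDR = ("192.168.0.1", 5405)  # assumed address of the local EtherJet interface
  INTERVAL = 1.0                      # seconds between heartbeats (assumed)
  MISSED_LIMIT = 3                    # missed beats before declaring the peer down (assumed)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.bind(LOCAL_ADDR)
  sock.settimeout(INTERVAL)

  missed = 0
  while missed < MISSED_LIMIT:
      sock.sendto(b"heartbeat", PEER_ADDR)   # tell the peer we are alive
      try:
          sock.recvfrom(64)                  # wait up to one interval for the peer's beat
          missed = 0
      except socket.timeout:
          missed += 1                        # one interval with no beat
      time.sleep(INTERVAL)

  print("Peer missed %d consecutive beats; check the public-network path before failing over." % MISSED_LIMIT)

Because the public-network path is monitored separately, the loss of this one link by itself would not trigger a failover.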


To maintain high availability, the two hard disk drives in each server are defined as RAID level-1 logical drives (Array A) using the single-channel ServeRAID controller on the system board.
Because these nonshared drives store the operating system and shared-disk clustering software needed during startup, these drives were defined first using the ServeRAID configuration program.
Notice that the ServeRAID adapters are installed in PCI slot 4.
When you use the integrated RAID controller to manage the startup (boot) drives, you must install the ServeRAID adapters that will manage the shared drives in PCI slot 4, 5, or 6 to avoid a PCI bus conflict during startup.
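
As a quick illustration of this slot rule, the short Python sketch below validates a hypothetical slot assignment against the constraint that shared-drive ServeRAID adapters must occupy PCI slot 4, 5, or 6 when the integrated controller manages the boot drives. The slot map is invented example data, not a configuration format used by the servers.

  # Illustrative check of the PCI slot rule described above.
  ALLOWED_SHARED_SLOTS = {4, 5, 6}

  adapters = [
      {"name": "ServeRAID (Server A, shared drives)", "slot": 4},  # hypothetical data
      {"name": "ServeRAID (Server B, shared drives)", "slot": 4},
  ]

  for adapter in adapters:
      if adapter["slot"] not in ALLOWED_SHARED_SLOTS:
          raise ValueError("%s is in slot %d; use slot 4, 5, or 6"
                           % (adapter["name"], adapter["slot"]))
  print("Slot assignments satisfy the startup rule.")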

The only difference between the hardware configuration of Server A and the hardware configuration of Server B is the SCSI ID settings for the ServeRAID adapters.
Channels 1 and 2 of the ServeRAID adapter in Server A are set to SCSI ID 7, and Channels 1 and 2 of the ServeRAID adapter in Server B are set to SCSI ID 6.
On both ServeRAID adapters, Channel 3 is available for use as a quorum-arbitration link with the Microsoft Cluster Server software, or for future expansion with the Vinca clustering software.
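
The differing IDs matter because each shared channel places both adapters on the same SCSI bus, and every device on a SCSI bus, initiators included, needs a unique ID. The short sketch below restates the example's settings as hypothetical data and checks that rule; it is not a real ServeRAID configuration format.

  # Each shared channel is one SCSI bus with two initiators; their IDs must differ.
  initiator_ids = {
      "channel 1": {"Server A": 7, "Server B": 6},
      "channel 2": {"Server A": 7, "Server B": 6},
  }

  for channel, ids in initiator_ids.items():
      assert len(set(ids.values())) == len(ids), "duplicate initiator IDs on " + channel
      print("%s: initiator IDs %s are unique" % (channel, sorted(ids.values())))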

In this example, the 3518 expansion enclosures have identical hardware configurations.
In addition to the standard features of the 3518, the enclosures each contain a power-supply upgrade option, an additional backplane, and two enhanced SCSI repeaters.
The maximum achievable hot-swap storage capacity for each enclosure is 163.8 GB using eighteen 9.1 GB drives.
However, this example shows only nine 9.1 GB drives in each enclosure, leaving space for future expansion.
To help maintain high availability, the 18 hard disk drives are defined as four RAID level-5 logical drives (arrays A, B, C, and D).
To further increase the availability of these shared drives, each ServeRAID adapter has its own hot-spare (HSP) drive.
A hot-spare drive is a disk drive that is defined for automatic use in the event of a drive failure.
If a physical drive fails and it is part of a RAID level-1 or RAID level-5 logical drive, the ServeRAID adapter will automatically start to rebuild the data on the hot-spare drive.
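
As a worked example of the capacity behind these definitions: a RAID level-5 array built from n drives stores the equivalent of n - 1 drives of data, with one drive's worth of space consumed by distributed parity, and a hot-spare drive contributes no capacity until a rebuild. The Python sketch below applies that arithmetic; the split of the 18 drives into four four-drive arrays plus two hot spares is an assumption for illustration, since the exact array sizes are not given here.

  # Usable-capacity arithmetic for the shared drives (assumed split:
  # four RAID-5 arrays of four 9.1 GB drives each, plus one hot spare per adapter).
  DRIVE_GB = 9.1

  def raid5_usable(drive_count, drive_gb=DRIVE_GB):
      # RAID-5 reserves one drive's worth of space for distributed parity.
      return (drive_count - 1) * drive_gb

  arrays = {"A": 4, "B": 4, "C": 4, "D": 4}  # drives per array (assumed)
  hot_spares = 2                             # one per ServeRAID adapter

  print("drives accounted for: %d of 18" % (sum(arrays.values()) + hot_spares))
  print("usable shared capacity: %.1f GB" % sum(raid5_usable(n) for n in arrays.values()))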

Note: ServeRAID adapters cannot share hot-spare drives. To maintain high availability and enable the automatic-rebuild feature, you must define a hot-spare drive for each ServeRAID adapter.


In both enclosures, the jumpers on the backplanes in Bank D are set for Bank D and for high addressing (SCSI IDs 8, 9, 10, 11, 12, and 13). A cable connects the Bank C and Bank D backplanes, creating one continuous SCSI bus in each enclosure.
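
Joining Banks C and D into one bus works because the jumpers place the two banks' drives in non-overlapping SCSI ID ranges, leaving IDs 6 and 7 free for the two ServeRAID initiators. The sketch below checks that layout; the Bank C range (SCSI IDs 0 through 5, low addressing) is inferred from the Bank D setting rather than stated explicitly above.

  # One continuous SCSI bus per enclosure: both banks' drives and the two
  # initiators must all carry distinct IDs.
  bank_c_ids = {0, 1, 2, 3, 4, 5}      # low addressing (inferred for Bank C)
  bank_d_ids = {8, 9, 10, 11, 12, 13}  # high addressing, as jumpered above
  initiators = {6, 7}                  # Server B and Server A adapters

  groups = [bank_c_ids, bank_d_ids, initiators]
  assert len(set().union(*groups)) == sum(len(g) for g in groups), "SCSI ID conflict"
  print("All %d IDs on the combined bus are unique." % sum(len(g) for g in groups))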

Channel 1 of the ServeRAID adapter in Server A connects to the enhanced SCSI repeater that connects to Bank C of expansion unit 1, and Channel 1 of the ServeRAID adapter in Server B connects to the enhanced SCSI repeater that connects to Bank D of expansion unit 1.
Channel 2 of the ServeRAID adapter in Server A connects to the enhanced SCSI repeater that connects to Bank C of expansion unit 2, and Channel 2 of the ServeRAID adapter in Server B connects to the enhanced SCSI repeater that connects to Bank D of expansion unit 2.
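
This cross-cabling gives each enclosure a path from both servers. A small sketch, using an invented mapping format, makes the pattern explicit and verifies that each enclosure's shared bus sees exactly one channel from each server.

  # Cabling from the description above, restated as hypothetical data:
  # (server, channel) -> (enclosure, bank reached via its SCSI repeater).
  cabling = {
      ("Server A", 1): ("expansion unit 1", "Bank C"),
      ("Server B", 1): ("expansion unit 1", "Bank D"),
      ("Server A", 2): ("expansion unit 2", "Bank C"),
      ("Server B", 2): ("expansion unit 2", "Bank D"),
  }

  for unit in ("expansion unit 1", "expansion unit 2"):
      servers = sorted(s for (s, _), (u, _) in cabling.items() if u == unit)
      assert servers == ["Server A", "Server B"], "enclosure missing a server path"
      print("%s is reachable from %s" % (unit, " and ".join(servers)))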


The enhanced SCSI repeaters contain circuits that can automatically sense the functional status of the server.
When the SCSI repeater circuitry detects that the server attached to it is failing or offline, the SCSI repeater automatically enables termination for that end of the SCSI bus.
This helps increase the availability of the shared disks and enables the serviceability of the failing or offline server.
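
The repeater behavior amounts to a simple rule: each end of the shared bus is terminated either by its attached server or, when that server drops off, by the repeater itself. The following sketch models that rule as a plain state check; the real repeaters implement it in hardware.

  # Auto-termination rule of the enhanced SCSI repeaters, modeled in software.
  def repeater_terminates(server_online):
      # While its server is healthy, the repeater leaves termination to it;
      # when the server fails or goes offline, the repeater terminates its
      # end of the bus so the surviving server can keep using the shared disks.
      return not server_online

  for online, label in ((True, "online"), (False, "offline")):
      action = "enabled" if repeater_terminates(online) else "passed through"
      print("server %s: repeater termination %s" % (label, action))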


Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible.
To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).

