PC Server 325 Rack Cluster Example 1

Figure 3 shows a low-cost, high-availability, shared-disk cluster consisting of two rack models of the PC Server 325 and one Netfinity EXP10 enclosure.
In addition to its standard features, each PC Server 325 contains one IBM ServeRAID II Ultra SCSI Adapter, one IBM 100/10 PCI EtherJet Adapter, and two 4.51 GB hard disk drives.
(See 'Parts List for the PC Server 325 Rack Cluster Example 1' for a complete list of the components used in this example.)

Note: Although this example shows ServeRAID II adapters, you could also use ServeRAID-3H adapters.
 

Figure 3. PC Server 325 Rack Cluster Example 1 

The capacity of the Netfinity Rack is 42U. Each server occupies 5U and the EXP10 enclosure occupies 3U.
You can house this 13U cluster and its support devices (such as console, keyboard, and uninterruptible power supplies) in one IBM Netfinity Rack or in an industry-standard, 19-inch rack that meets EIA-310-D standards and has a minimum depth of 71.12 cm (28 inches).
(See 'Selecting the Rack Enclosures' for more information.)

In this example, the server hardware is configured the same as in the Entry Tower Cluster example, which appears in Figure 2.
However, by using the rack-model server and the Netfinity EXP10 storage enclosure, the amount of physical space needed to house the cluster decreases significantly and the overall storage capacity increases by 18.2 GB.
(Each 3518 enclosure can support eighteen 9.10 GB hot-swap drives, and each Netfinity EXP10 enclosure can support ten 18.2 GB hot-swap drives.)
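
The 18.2 GB difference follows directly from the raw drive totals quoted above. The short Python sketch below simply restates that arithmetic:

    # Raw capacities quoted in the text.
    capacity_3518  = 18 * 9.1    # 3518 enclosure: eighteen 9.1 GB drives -> 163.8 GB
    capacity_exp10 = 10 * 18.2   # EXP10 enclosure: ten 18.2 GB drives    -> 182.0 GB

    print(f"3518:  {capacity_3518:.1f} GB")
    print(f"EXP10: {capacity_exp10:.1f} GB")
    print(f"Gain:  {capacity_exp10 - capacity_3518:.1f} GB")   # 18.2 GB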

The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers. This cable connects the IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.
The servers connect to the public network using the Ethernet controllers on the system boards.
Using the public-network connection and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.


Notes:

  1.  You must use IBM 100/10 PCI EtherJet Adapters for the cluster's heartbeat connection.
  2.  You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
  3.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.
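
The heartbeat itself is implemented by the shared-disk clustering software, not by user code; purely as a conceptual illustration of a point-to-point heartbeat over the crossover link, a minimal Python sketch is shown below. The addresses, port number, and timing values are hypothetical and are not part of the IBM configuration.

    # Conceptual heartbeat sketch (illustrative only; the real heartbeat belongs
    # to the clustering software). Addresses, port, and timings are hypothetical.
    import socket
    import time

    LOCAL = ("10.0.0.1", 5000)   # this server's address on the crossover link
    PEER  = ("10.0.0.2", 5000)   # the other server
    TIMEOUT = 3.0                # seconds of silence before the peer is suspect

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LOCAL)
    sock.settimeout(TIMEOUT)

    while True:
        sock.sendto(b"alive", PEER)           # announce our own status
        try:
            sock.recvfrom(16)                 # wait for the peer's beat
        except socket.timeout:
            print("peer missed its heartbeat window")   # failover would begin here
        time.sleep(1.0)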


To maintain high availability, the two hard disk drives in each server are defined as RAID level-1 logical drives (Array A) using Channel 2 of the ServeRAID adapters. Because these nonshared drives store the operating system and shared-disk clustering software needed during startup, these drives were defined first using the ServeRAID configuration program.

The internal SCSI cables remain attached to the CD-ROM drives, but the end connectors that were attached to the SCSI controllers on the system boards are now attached to the Channel 2 connectors on the ServeRAID adapters. The hard disk drive attached to the end connector on the internal SCSI cable in each server has its termination set to Enabled. The other hard disk drive in each server has its termination set to Disabled.

Note: The termination for the CD-ROM drive is permanently set to Disabled. You cannot enable termination on the CD-ROM drive.


The only difference between the hardware configuration of Server A and the hardware configuration of Server B is the SCSI ID settings for the ServeRAID adapters. Channels 1 and 2 of the ServeRAID adapter in Server A are set to SCSI ID 7. Channel 1 of the ServeRAID adapter in Server B is set to SCSI ID 6, because it shares the same SCSI bus as Channel 1 of the ServeRAID adapter in Server A. Channel 2 of the ServeRAID adapter in Server B connects to the nonshared drives and is set to SCSI ID 7 to avoid a conflict with the CD-ROM drive, which is set to SCSI ID 6. On both ServeRAID adapters, Channel 3 is available for use as a quorum-arbitration link with the Microsoft Cluster Server software, or for future expansion with the Vinca clustering software.
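
The channel settings described above can be restated as data; the sketch below is a minimal summary (the dictionary layout is ours, the ID values come from the text) that also checks the shared bus and the internal bus for conflicts:

    # SCSI ID assignments from the text (Channel 3 is left free on both adapters).
    scsi_ids = {
        "Server A": {"Channel 1": 7, "Channel 2": 7},   # Channel 1 shares the EXP10 bus
        "Server B": {"Channel 1": 6, "Channel 2": 7},   # ID 6 avoids clashing with Server A
    }
    CDROM_ID = 6   # the internal CD-ROM drive sits on Channel 2 in each server

    # The two Channel 1 ports sit on the same shared SCSI bus, so their IDs must differ.
    assert scsi_ids["Server A"]["Channel 1"] != scsi_ids["Server B"]["Channel 1"]

    # Channel 2 is internal to each server, so its ID only has to avoid the CD-ROM.
    for server, channels in scsi_ids.items():
        assert channels["Channel 2"] != CDROM_ID, f"{server}: conflict with CD-ROM"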

The maximum storage capacity for a Netfinity EXP10 is 182 GB, using ten 18.2 GB hot-swap drives. However, this example shows eight 9.1 GB hot-swap hard disk drives, which leaves space for future expansion. To help maintain high availability, the drives are grouped into two RAID level-5 logical drives (arrays B and C). To further increase the availability of the shared drives, each ServeRAID adapter has its own hot-spare (HSP) drive. A hot-spare drive is a disk drive that is defined for automatic use in the event of a drive failure. If a physical drive fails and it is part of a RAID level-1 or RAID level-5 logical drive, the ServeRAID adapter will automatically start to rebuild the data on the hot-spare drive.
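
The RAID level determines how much of the raw capacity is usable: a RAID level-1 pair mirrors one drive, and a RAID level-5 array gives up one drive's worth of space for parity. The sketch below applies those standard formulas; the exact split of the eight shared drives between arrays B, C, and the hot-spares is not spelled out above, so the three-drive array in the example call is an assumption.

    # Usable capacity for the RAID levels used in this example.
    def usable_gb(raid_level, drives, drive_gb):
        if raid_level == 1:                  # two-drive mirror: one drive's worth of data
            return drive_gb
        if raid_level == 5:                  # one drive's capacity is used for parity
            return (drives - 1) * drive_gb
        raise ValueError("only RAID-1 and RAID-5 appear in this configuration")

    print(usable_gb(1, 2, 4.51))   # each server's internal Array A -> 4.51 GB
    print(usable_gb(5, 3, 9.1))    # a shared RAID-5 array of three 9.1 GB drives
                                   # (three-drive arrays are an assumption) -> 18.2 GB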

Note: ServeRAID adapters cannot share hot-spare drives. To maintain high availability and enable the automatic-rebuild feature, you must define a hot-spare drive for each ServeRAID adapter.


A SCSI cable (provided with the expansion enclosure) connects the SCSI Bus 1 OUT and SCSI Bus 2 IN connectors on the rear of the enclosure, forming one continuous SCSI bus.

Using auto-sensing cables, Channel 1 of the ServeRAID adapter in Server A is connected to the SCSI Bus 1 IN connector, and Channel 1 of the ServeRAID adapter in Server B is connected to the SCSI Bus 2 OUT connector.

Note: To help increase the availability of the shared disks and enable the serviceability of a failing or offline server, you must use Netfinity EXP10 Auto-Sensing Cables, IBM Part Number 03K9352, to connect clustered servers to Netfinity EXP10 enclosures.


The EXP10 auto-sensing cables contain circuits that can automatically sense the functional status of the server. When the circuitry in an auto-sensing cable detects that the server attached to it is failing or offline, the cable circuitry automatically enables termination for that end of the SCSI bus. This helps increase the availability of the shared disks and enables the serviceability of the failing or offline server.

The SCSI ID assignments for the shared hot-swap drives are controlled by the backplanes inside the Netfinity EXP10 enclosure. The IDs alternate between low and high addresses, which can cause confusion; consider placing a label with the SCSI IDs across the front of the drive bays. In this example configuration, the SCSI ID assignments from left (bay 1) to right (bay 10) are: 0 8 1 9 2 10 3 11 4 12.
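
Writing the bay-to-ID mapping down once, for example when printing such a label, keeps the alternating pattern straight. The small sketch below simply tabulates the assignments listed above:

    # Bay-to-SCSI-ID mapping for the EXP10 shared drives, as listed in the text.
    bay_to_id = dict(zip(range(1, 11), [0, 8, 1, 9, 2, 10, 3, 11, 4, 12]))

    # Print a strip suitable for labelling the front of the drive bays.
    print("  ".join(f"bay {bay}: ID {sid}" for bay, sid in bay_to_id.items()))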

Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible. To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).

