Netfinity 7000 Tower Cluster Example

Figure 8 shows a high-availability, shared-disk cluster consisting of two Netfinity 7000 servers and four 3518 Enterprise Expansion Enclosures.
In addition to its standard features, each Netfinity 7000 contains four 200 MHz Pentium® Pro microprocessors with 1 MB of level-2 cache (one microprocessor standard), three IBM ServeRAID II Ultra SCSI Adapters, three IBM 100/10 PCI EtherJet Adapters, four 4.51 GB hot-swap hard disk drives, and three redundant power supplies (two standard).
(See 'Parts List for the Netfinity 7000 Tower Cluster Example' for a complete list of the components used in this example.)

Note: Although this example shows ServeRAID II adapters, you could also use ServeRAID-3H adapters.
(The hard disk drives marked in black in this figure are hot-spare (HSP) drives.)

Figure 8. Netfinity 7000 Tower Cluster Example 

The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers. This cable connects two IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.

  1.  You must use IBM 100/10 PCI EtherJet Adapters or IBM 10/100 EtherJet PCI Adapters for the cluster's heartbeat connection.
  2.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.
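The monitoring role of the heartbeat link can be sketched in outline. The following Python sketch is purely illustrative (it is not IBM clustering software, and the interval and threshold values are assumptions): each server notes when it last heard from its peer over the crossover cable and declares the peer failed after several consecutive missed heartbeats.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between heartbeat messages (assumed value)
MISSED_LIMIT = 3           # missed intervals before the peer is declared failed

class HeartbeatMonitor:
    """Tracks heartbeat messages received from the peer server."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_seen = now()

    def beat_received(self):
        # Called whenever a heartbeat arrives over the crossover link.
        self._last_seen = self._now()

    def peer_alive(self):
        # The peer is considered failed (a failover trigger) once
        # MISSED_LIMIT consecutive intervals pass with no heartbeat.
        return (self._now() - self._last_seen) < HEARTBEAT_INTERVAL * MISSED_LIMIT
```

Because the link is a dedicated point-to-point cable, a missed heartbeat is more likely to mean a failed peer than a failed network, which is why a hub is not supported on this connection.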

Each server also contains two more EtherJet adapters.
These adapters provide multiple connections to external networks (in this example, Public Network 1 and Public Network 2).
Using the public-network connections and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.

In both servers, the internal SCSI cable that connects to the backplane was moved from the Ultra SCSI controller on the system board to the Channel 3 connector on ServeRAID Adapter 1.
Then, using Channel 3 of ServeRAID Adapter 1, three of the hard disk drives in each server were defined as RAID level-5 logical drives (Array A).
Because these nonshared drives store the operating system and shared-disk clustering software needed during startup, these drives were defined first using the ServeRAID configuration program.
In addition, this example shows multiple ServeRAID adapters installed in each server.
When you install multiple hard-disk controllers, RAID controllers, or ServeRAID adapters in the same server, you must install the device that will manage the startup (boot) drives in a PCI slot that is scanned before subsequent hard-disk controllers or RAID adapters.
In the Netfinity 7000, the PCI slots are scanned in the following order: 1, 2, 3, 4, 5, 6.
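The scan-order rule can be expressed as a simple check (a hypothetical sketch; the adapter names are illustrative): the adapter managing the startup drives must occupy the earliest-scanned slot among all installed disk controllers.

```python
PCI_SCAN_ORDER = [1, 2, 3, 4, 5, 6]  # Netfinity 7000 slot scan sequence

def boot_adapter_ok(adapter_slots, boot_adapter):
    """Return True if the boot adapter is scanned before every other
    disk controller. `adapter_slots` maps adapter name -> PCI slot."""
    boot_rank = PCI_SCAN_ORDER.index(adapter_slots[boot_adapter])
    return all(
        boot_rank < PCI_SCAN_ORDER.index(slot)
        for name, slot in adapter_slots.items()
        if name != boot_adapter
    )
```

In this example, placing ServeRAID Adapter 1 (which owns the startup array) in slot 1 satisfies the rule, because slot 1 is scanned first.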
To further increase availability, each server contains a hot-spare (HSP) drive for the internal nonshared array.

The only difference between the hardware configuration of Server A and the hardware configuration of Server B is the SCSI ID settings for the ServeRAID adapters.
Channels 1, 2, and 3 of all three ServeRAID adapters in Server A are set to SCSI ID 7.
In Server B, Channels 1 and 2 of all three ServeRAID adapters are set to SCSI ID 6, because they share the same SCSI buses as Channels 1 and 2 of the ServeRAID adapters in Server A.
Channel 3 of ServeRAID Adapters 1 and 2 in Server B is set to SCSI ID 7 on both adapters, because those channels are not connected to any shared disks.
Channel 3 of ServeRAID Adapter 3 in each server is available for use as a quorum-arbitration link with the Microsoft Cluster Server software, or for future expansion with the Vinca clustering software.
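The SCSI ID scheme above follows one rule: any two adapter channels that share a bus must use different initiator IDs. A minimal Python sketch (hypothetical structure, encoding only the IDs stated in the text) can validate that the shared channels never collide:

```python
# Map (server, adapter, channel) -> SCSI initiator ID, as described above.
scsi_ids = {}
for adapter in (1, 2, 3):
    for ch in (1, 2, 3):
        scsi_ids[("A", adapter, ch)] = 7   # Server A: all channels at ID 7
    scsi_ids[("B", adapter, 1)] = 6        # Server B: shared Channels 1 and 2
    scsi_ids[("B", adapter, 2)] = 6        # at ID 6 to avoid colliding with A
scsi_ids[("B", 1, 3)] = 7                  # nonshared Channel 3 connectors
scsi_ids[("B", 2, 3)] = 7                  # may reuse ID 7 safely

def shared_bus_ok(adapter, channel):
    # Channels 1 and 2 of each adapter pair are cabled to the same
    # shared SCSI bus, so the two initiator IDs must differ.
    return scsi_ids[("A", adapter, channel)] != scsi_ids[("B", adapter, channel)]
```

Channel 3 of Adapters 1 and 2 in each server is not shared, so those channels can keep the default ID 7 without conflict.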

In addition to the standard features of the 3518 expansion enclosure, each enclosure contains a power-supply upgrade option, two additional backplanes, three or four enhanced SCSI repeaters, one daisy-chain cable, and eighteen 9.1 GB hot-swap hard disk drives.
The enhanced SCSI repeaters contain circuits that can automatically sense the functional status of the server.
When the SCSI repeater circuitry detects that the server attached to it is failing or offline, the SCSI repeater automatically enables termination for that end of the SCSI bus.
This helps increase the availability of the shared disks and enables the serviceability of the failing or offline server.
In this example, expansion units 1 and 2 have the same basic hardware configuration, as do expansion units 3 and 4.
However, each expansion unit has a unique disk-array configuration.

To help maintain high availability, the 72 hard disk drives in the four 3518 enclosures are defined as 14 RAID level-5 logical drives (notice the array designations above each drive).
To further increase the availability of these drives, each ServeRAID adapter has its own hot-spare drive (notice the HSP above 6 of the drives).
A hot-spare drive is a disk drive that is defined for automatic use in the event of a drive failure.
If a physical drive fails and it is part of a RAID level-1 or RAID level-5 logical drive, the ServeRAID adapter will automatically start to rebuild the data on the hot-spare drive.

Note: ServeRAID adapters cannot share hot-spare drives.
To maintain high availability and enable the automatic-rebuild feature, you must define a hot-spare drive for each ServeRAID adapter.
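The per-adapter hot-spare rule can be sketched as follows (an illustrative sketch with hypothetical names, not ServeRAID firmware logic): when a member of a RAID level-1 or level-5 array fails, the rebuild can use only a spare owned by the same adapter, never one belonging to another adapter.

```python
def pick_hot_spare(adapter_spares, failed_adapter):
    """Return the hot-spare drive used for the rebuild, or None.

    `adapter_spares` maps an adapter name to its own list of
    hot-spare drives; spares are never borrowed from another
    adapter, matching the rule that ServeRAID adapters cannot
    share hot-spare drives.
    """
    spares = adapter_spares.get(failed_adapter, [])
    if spares:
        # The adapter automatically rebuilds the failed array
        # member onto its own hot-spare drive.
        return spares.pop(0)
    return None  # no spare on this adapter: the array runs degraded
```

This is why the example dedicates one HSP drive to each of the six ServeRAID adapters rather than pooling spares across adapters.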

Both servers share 12 of the 14 arrays; however, Server A has total control of Array A in Bank E of expansion unit 1, and Server B has total control of Array A in Bank E of expansion unit 2.

Note: Installing nonshared drives in a clustered 3518 enclosure is supported only when using the most current enhanced SCSI repeaters.
The most current enhanced SCSI repeater is card part number 07L8392, which is provided in option part number 94G7585.
Earlier versions of the enhanced repeater are not supported when storing nonshared drives in a clustered 3518 storage enclosure.

In all four enclosures, the Bank D backplane jumpers are set for Bank D and for high addressing (SCSI IDs 8 to 13) and the Bank E backplane jumpers are set for Bank E and for low addressing (SCSI IDs 0 to 5).
Cables connect the Bank C and Bank D backplanes, creating one long SCSI bus in each enclosure that can support up to 12 hot-swap drives.
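The jumper settings above partition the available wide-SCSI IDs so that the drive addresses in a high-addressed bank, the drive addresses in a low-addressed bank, and the two adapter initiator IDs (6 and 7) can coexist on one bus without collision. A small sketch of that arithmetic (the layout is as stated in the text; the check itself is illustrative):

```python
HIGH_ADDRESSING = set(range(8, 14))  # high-addressed bank jumpers: SCSI IDs 8-13
LOW_ADDRESSING = set(range(0, 6))    # low-addressed bank jumpers: SCSI IDs 0-5
INITIATORS = {6, 7}                  # ServeRAID channels in Server B and Server A

# Every device on a shared SCSI bus needs a unique ID, even when two
# six-drive banks are daisy-chained onto the same bus alongside both
# server adapters: 6 + 6 drives + 2 initiators = 14 distinct IDs.
assert HIGH_ADDRESSING.isdisjoint(LOW_ADDRESSING)
assert INITIATORS.isdisjoint(HIGH_ADDRESSING | LOW_ADDRESSING)
assert len(HIGH_ADDRESSING | LOW_ADDRESSING | INITIATORS) == 14
```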

Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible.
To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).
