Netfinity 7000 M10 Rack Fibre Channel Cluster Example

Figure 10 (below) shows a robust high-availability, shared-disk cluster consisting of two rack models of the recently announced Netfinity 7000 M10, two Netfinity Fibre Channel Hubs, one Netfinity Fibre Channel RAID Controller unit with a Netfinity Optional Failsafe RAID Controller installed, and three Netfinity EXP15 expansion enclosures.

Figure 10. Netfinity 7000 M10 Rack Fibre Channel Cluster Example 

In addition to its standard features, each Netfinity 7000 M10 contains two Netfinity Fibre Channel Adapters, two IBM 10/100 PCI EtherJet Adapters, one IBM Netfinity ServeRAID-3L Ultra2 SCSI Adapter, three additional 400 MHz microprocessors, one optional 400 Watt redundant power supply, three 256 MB memory kits, and three 9.1 GB hard disk drives.
(See 'Parts List for the Netfinity 7000 M10 Cluster Example' for a complete list of the components used in this example.)


The recently announced Netfinity fibre-channel products, such as the IBM Netfinity Fibre Channel RAID Controller, support data-transfer speeds of up to 100 MB per second over cable distances of up to 10 kilometers (approximately 6.2 miles).

The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers.
This cable connects two of the IBM 10/100 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.
The other two EtherJet adapters (one in each server) are used to connect the servers to the public network.
Using the public-network connection and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.

Notes:

  1.  You must use IBM 10/100 PCI EtherJet Adapters for the cluster's heartbeat connection.
  2.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection.
     Connections through a hub are not supported.


Server A and Server B are configured identically.
To maintain high availability, the three hard disk drives in each server are connected to the single-channel ServeRAID-3L adapters, and they are configured as RAID level-5 logical drives.
In each server, the internal SCSI cable that comes attached to the SCSI controller on the system board has been moved from the system-board connector to the internal channel connector on the ServeRAID-3L adapter.
These nonshared drives store the operating system and shared-disk clustering software needed during startup.

Note: If you install multiple hard-disk controllers, RAID controllers, or ServeRAID adapters in the same server, be sure to install the device that will manage the startup (boot) drives in a PCI slot that is scanned before the slots containing the other hard-disk controllers or RAID adapters.
In the Netfinity 7000 M10, the PCI slots are scanned in the following order: PCI bus A (64-bit expansion slots 1, 2, 3, 4, and 5), PCI bus B (32-bit expansion slots 6, 7, 8, 9, and 10), and then PCI bus C (32-bit expansion slots 11 and 12).
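
As an illustration only, the following Python sketch encodes the scan order listed above and checks that the controller owning the startup drives sits in the earliest-scanned slot among the installed storage controllers. The slot placements shown are hypothetical and are not taken from this example's parts list.

    # Netfinity 7000 M10 scan order: bus A (slots 1-5), bus B (slots 6-10), bus C (slots 11-12)
    SCAN_ORDER = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

    # Hypothetical placements for illustration: slot number -> adapter
    storage_adapters = {
        4: "ServeRAID-3L (boot drives)",
        7: "Netfinity Fibre Channel Adapter 1",
        9: "Netfinity Fibre Channel Adapter 2",
    }

    def boot_controller_scanned_first(adapters, boot_slot):
        """Return True if boot_slot is scanned before every other storage-controller slot."""
        scan_pos = {slot: i for i, slot in enumerate(SCAN_ORDER)}
        return all(scan_pos[boot_slot] <= scan_pos[slot] for slot in adapters)

    print(boot_controller_scanned_first(storage_adapters, boot_slot=4))  # True for this layout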


Other items that increase the availability and reliability of this sample cluster include the additional memory, additional microprocessors, redundant power supplies, redundant fibre-channel adapters, redundant fibre-channel hubs, and the Netfinity Optional Failsafe RAID Controller.
Each server comes with 256 MB of memory and supports up to 8 GB of system memory. In this example, the three additional 256 MB memory kits bring the total system memory for each server up to 1 GB, and the additional three microprocessors enable four-way, symmetric multiprocessing.
Each server also comes with two 400 Watt hot-swap power supplies.
Adding the third 400 Watt hot-swap power supply provides power redundancy.
In addition, the Optional Failsafe RAID Controller provides redundant protection in the unlikely event of a Netfinity Fibre Channel RAID Controller failure.


This sample configuration also shows the maximum capacity of ten 18.2 GB drives in each Netfinity EXP15 enclosure, which provides a potential of 546 GB of raw storage capacity across the three enclosures.
Although you could define these drives in many different ways, this example shows two drives in enclosure 3 defined as hot-spare (HSP) drives and the 28 remaining drives grouped into six arrays.

To help maintain high availability, 24 of the drives are equally grouped into four RAID level-5 logical drives (arrays A, B, C, and D).
The remaining four drives are equally grouped into two RAID level-1 logical drives (arrays E and F).
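
As a rough worked example, the following Python sketch estimates the raw and usable shared capacity of this layout. It assumes the usual capacity formulas (RAID level-5 usable space is (n - 1) times the drive size; RAID level-1 usable space is half the member drives times the drive size) and ignores formatting overhead.

    # Rough capacity estimate for the shared arrays in this example.
    DRIVE_GB = 18.2

    raid5_arrays = 4          # arrays A, B, C, and D
    drives_per_raid5 = 6      # 24 drives split equally across four arrays
    raid1_arrays = 2          # arrays E and F
    drives_per_raid1 = 2      # 4 drives split equally across two arrays
    hot_spares = 2            # two HSP drives in enclosure 3

    raw_gb = (raid5_arrays * drives_per_raid5
              + raid1_arrays * drives_per_raid1
              + hot_spares) * DRIVE_GB
    usable_gb = (raid5_arrays * (drives_per_raid5 - 1) * DRIVE_GB
                 + raid1_arrays * (drives_per_raid1 // 2) * DRIVE_GB)

    print(f"Raw shared capacity:    {raw_gb:.1f} GB")    # 546.0 GB
    print(f"Usable shared capacity: {usable_gb:.1f} GB") # 400.4 GB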


Option Switch 1 on the rear of each EXP15 enclosure is set to the 'On' position, forming two independent SCSI buses in each enclosure.

The six channels on the Netfinity Fibre Channel RAID Controller unit are cabled to the EXP15 enclosures as follows:

   Channel 1 to the SCSI Bus 1 IN connector on EXP15 Enclosure 1
   Channel 2 to the SCSI Bus 2 IN connector on EXP15 Enclosure 1
   Channel 3 to the SCSI Bus 1 IN connector on EXP15 Enclosure 2
   Channel 4 to the SCSI Bus 2 IN connector on EXP15 Enclosure 2
   Channel 5 to the SCSI Bus 1 IN connector on EXP15 Enclosure 3
   Channel 6 to the SCSI Bus 2 IN connector on EXP15 Enclosure 3


The SCSI ID assignments for the shared hot-swap drives are controlled by the backplanes inside the Netfinity EXP15 enclosures.
When an enclosure is configured as two independent SCSI buses, the same SCSI IDs appear twice across the drive bays, which can cause confusion.
To avoid this, consider placing a label listing the SCSI IDs across the front of the drive bays.
In this example configuration, the SCSI ID assignments for each enclosure from left (bay 1) to right (bay 10) are: 0 0 1 1 2 2 3 3 4 4.
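
As a small illustration, the repeating pattern of IDs can be generated from the bay number; the formula below simply reproduces the 0 0 1 1 2 2 3 3 4 4 sequence described above and is not taken from any IBM documentation.

    # SCSI ID printed on the label for each drive bay of a split-bus EXP15 enclosure,
    # reproducing the pattern 0 0 1 1 2 2 3 3 4 4 from bay 1 (left) to bay 10 (right).
    def scsi_id(bay: int) -> int:
        """Return the SCSI ID for a given bay number (1-10)."""
        return (bay - 1) // 2

    print([scsi_id(bay) for bay in range(1, 11)])  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]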


The level of redundancy provided in this configuration starts with the dual fibre-channel adapters in each server.
By installing two fibre-channel adapters in each server and connecting them as shown to the two redundant fibre-channel hubs, both servers have equal access to the fibre-channel RAID controllers.
This increases the availability of the shared drives and eliminates any single point of failure in the path between the servers and the shared storage.
In addition, both RAID controllers have equal access to the drives in the EXP15 enclosures.
The drives are distributed across the six independent SCSI buses (two in each EXP15 enclosure) as shown in the following table.

SCSI Bus   Drive 1   Drive 2   Drive 3   Drive 4   Drive 5
Bus 1      Array A   Array B   Array C   Array D   Array E
Bus 2      Array A   Array B   Array C   Array D   Array E
Bus 3      Array A   Array B   Array C   Array D   Array F
Bus 4      Array A   Array B   Array C   Array D   Array F
Bus 5      Array A   Array B   Array C   Array D   Hot Spare
Bus 6      Array A   Array B   Array C   Array D   Hot Spare
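
The same distribution can be written out as data and sanity-checked against the array sizes described earlier. The Python sketch below is only a transcription of the table above; it confirms six drives in each RAID level-5 array, two drives in each RAID level-1 array, and two hot spares.

    # Drive distribution across the six shared SCSI buses (two per EXP15 enclosure),
    # transcribed from the table above.
    from collections import Counter

    buses = {
        1: ["A", "B", "C", "D", "E"],
        2: ["A", "B", "C", "D", "E"],
        3: ["A", "B", "C", "D", "F"],
        4: ["A", "B", "C", "D", "F"],
        5: ["A", "B", "C", "D", "HSP"],
        6: ["A", "B", "C", "D", "HSP"],
    }

    counts = Counter(member for members in buses.values() for member in members)
    print(counts)  # Counter({'A': 6, 'B': 6, 'C': 6, 'D': 6, 'E': 2, 'F': 2, 'HSP': 2})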


Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible.
To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).


The capacity of the Netfinity Rack is 42U. Each server occupies 11U, each EXP15 enclosure occupies 3U, each Netfinity Fibre Channel Hub occupies 1U, and the Netfinity Fibre Channel RAID Controller unit occupies 4U.
You can house this 37U cluster and its support devices (such as console, keyboard, and uninterruptible power supplies) in IBM Netfinity Racks or in industry-standard 19-inch racks that meet EIA-310-D standards and have a minimum depth of 71.12 cm (28 inches).
(See 'Selecting the Rack Enclosures' for more information.)
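
For reference, a quick tally using only the rack-unit figures above confirms the 37U total; the following is a minimal Python sketch of that arithmetic.

    # Rack-space tally for this example, using the unit heights listed above.
    components = {
        "Netfinity 7000 M10 server":          (2, 11),  # (quantity, height in U)
        "Netfinity EXP15 enclosure":          (3, 3),
        "Netfinity Fibre Channel Hub":        (2, 1),
        "Fibre Channel RAID Controller unit": (1, 4),
    }

    total_u = sum(qty * height for qty, height in components.values())
    print(f"Cluster height: {total_u}U of the 42U available")  # 37U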

