Netfinity 5500 M10 Rack Fibre Channel Cluster Example


Figure 7 shows a high-availability, shared-disk cluster consisting of two rack models of the Netfinity 5500 M10, one Netfinity Fibre Channel Hub, one Netfinity Fibre Channel RAID Controller, and two Netfinity EXP15 enclosures.
In addition to its standard features, each Netfinity 5500 M10 contains one Netfinity Fibre Channel Adapter, one IBM 100/10 PCI EtherJet Adapter, one additional 400 MHz microprocessor, one optional 400 Watt redundant power supply, one 512 MB memory kit, and three 9.1 GB hard disk drives.
(See 'Parts List for the Netfinity 5500 M10 Cluster Example' for a complete list of the components used in this example.)
 

Figure 7. Netfinity 5500 M10 Rack Fibre Channel Cluster Example 

The recently announced Netfinity fibre-channel products, such as the IBM Netfinity Fibre Channel RAID Controller, support data-transfer speeds of up to 100 MB per second over cable lengths of up to 10 kilometers (6 miles).


The network-crossover cable, sometimes referred to as the cluster's heartbeat, provides the dedicated, point-to-point communication link between the servers.
This cable connects the IBM 100/10 PCI EtherJet Adapters (one in each server) and enables the servers to continuously monitor each other's functional status.
The servers connect to the public network using the Ethernet controllers on the system boards.
Using the public-network connection and the dedicated heartbeat link together ensures that a single network-hardware failure will not initiate a failover situation.

Notes:

  1.  You must use IBM 100/10 PCI EtherJet Adapters or IBM 10/100 EtherJet PCI Adapters for the cluster's heartbeat connection.
  2.  You can use the integrated Ethernet controllers that come standard on some server models to connect the server to the public network; however, these integrated controllers are not certified for use as the cluster's heartbeat connection.
  3.  You must use a point-to-point, Category 5 crossover cable for the heartbeat connection. Connections through a hub are not supported.
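
To illustrate the reasoning behind the dual-link design, the following Python sketch shows how failover logic can distinguish a true server failure from a single network-hardware failure by checking the peer over both the heartbeat link and the public network. The addresses and the use of ICMP ping are hypothetical stand-ins; the actual cluster software uses its own monitoring protocol.

    import subprocess

    # Hypothetical peer addresses; the real cluster software uses its
    # own monitoring protocol rather than ICMP ping.
    HEARTBEAT_PEER = "10.0.0.2"     # peer over the crossover (heartbeat) link
    PUBLIC_PEER = "192.168.1.2"     # peer over the public network

    def reachable(address):
        """Return True if one ICMP ping to the address succeeds."""
        result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

    heartbeat_up = reachable(HEARTBEAT_PEER)
    public_up = reachable(PUBLIC_PEER)

    if not heartbeat_up and not public_up:
        print("Peer unreachable on both links: initiate failover.")
    elif heartbeat_up and public_up:
        print("Peer healthy on both links.")
    else:
        # One path still answers, so only a network component failed.
        print("Single link failure: raise an alert, do not fail over.")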


Server A and Server B are configured identically.
To maintain high availability, the three hard disk drives in each server are defined as RAID level-5 logical drives using the integrated ServeRAID controller on the system board.
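
As a quick check of the usable capacity this configuration yields: RAID level-5 consumes the equivalent of one drive for distributed parity, so an array of n identical drives provides (n - 1) times the single-drive capacity. The following Python sketch is illustrative only and is not part of any IBM tool.

    def raid5_usable_gb(drive_count, drive_size_gb):
        """Usable capacity of a RAID level-5 array: the equivalent of
        one drive is consumed by distributed parity."""
        if drive_count < 3:
            raise ValueError("RAID level-5 requires at least three drives")
        return (drive_count - 1) * drive_size_gb

    # Each server: three 9.1 GB drives in one RAID level-5 logical drive.
    print(raid5_usable_gb(3, 9.1))   # 18.2 GB usable per server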

Other items in this example that increase the availability and reliability of the servers include the additional memory, microprocessors, and power supplies.
Each server comes with 128 MB of memory and supports up to 2 GB of system memory.
In this example, the additional 512 MB memory kit brings the total system memory for each server up to 640 MB, and the additional microprocessor enables symmetric multiprocessing for each server.
Each server also comes with one 400 Watt hot-swap power supply, but supports an additional 400 Watt hot-swap supply for power redundancy.


The maximum storage capacity for each Netfinity EXP15 is 182 GB, using ten 18.2 GB hot-swap drives.
However, this example shows ten 9.1 GB hot-swap hard disk drives in each enclosure. To help maintain high availability, the drives are grouped into four RAID level-5 logical drives (arrays A, B, C, and D).
To further increase the availability of the shared drives, each enclosure has its own hot-spare (HSP) drive.
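
The enclosure capacities can be checked with the same kind of arithmetic. The sketch below assumes the hot-spare drive occupies one of the ten bays; how the remaining nine drives are divided among arrays A through D is not detailed here, so only raw totals are computed.

    BAYS = 10

    # Maximum configuration: ten 18.2 GB hot-swap drives.
    print(BAYS * 18.2)           # 182.0 GB maximum per enclosure

    # This example: ten 9.1 GB drives, one reserved as a hot spare (HSP).
    data_drives = BAYS - 1
    print(BAYS * 9.1)            # 91.0 GB raw per enclosure
    print(data_drives * 9.1)     # 81.9 GB raw in arrays A-D, before parity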

Option Switch 1, on the rear of the EXP15 enclosures, is set to the 'Off' position. This forms one continuous SCSI bus in each enclosure.

The Channel 1 connector on the Fibre Channel RAID Controller unit is connected to the SCSI Bus 1 IN connector on EXP15 Enclosure 1, and the Channel 2 connector on the controller unit is connected to the SCSI Bus 1 IN connector on EXP15 Enclosure 2.


The SCSI ID assignments for the shared hot-swap drives are controlled by the backplanes inside the Netfinity EXP15 enclosures.
When configured as one continuous SCSI bus, the SCSI IDs alternate between low and high addresses.
Because these alternating IDs can cause confusion, consider placing a label listing the SCSI IDs across the front of the drive bays.
In this example configuration, the SCSI ID assignments for each enclosure from left (bay 1) to right (bay 10) are: 0 8 1 9 2 10 3 11 4 12.
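
The pattern is simply the low IDs (0 through 4) interleaved with the high IDs (8 through 12), as this illustrative Python sketch shows:

    low_ids = range(0, 5)     # SCSI IDs 0, 1, 2, 3, 4
    high_ids = range(8, 13)   # SCSI IDs 8, 9, 10, 11, 12

    # Interleave low and high IDs across bays 1 through 10.
    scsi_ids = [i for pair in zip(low_ids, high_ids) for i in pair]
    print(scsi_ids)           # [0, 8, 1, 9, 2, 10, 3, 11, 4, 12]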


Ideally, the servers and storage enclosures are connected to different electrical circuits; however, this is rarely possible.
To help prevent the loss of data and to maintain the availability of the shared disks during a power outage or power fluctuation, always connect the servers and expansion enclosures to uninterruptible power supplies (UPS).

The capacity of the Netfinity Rack is 42U. Each server occupies 8U, each EXP15 enclosure occupies 3U, the Netfinity Fibre Channel Hub occupies 1U, and the Netfinity Fibre Channel RAID Controller unit occupies 4U.
You can house this 27U cluster and its support devices (such as console, keyboard, and uninterruptible power supplies) in IBM Netfinity Racks or in industry-standard, 19-inch racks that meet EIA-310-D standards and have minimum depths of 71.12 cm (28 inches).
(See 'Selecting the Rack Enclosures' for more information.)
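
The 27U figure can be confirmed by totaling the component heights, as in this illustrative Python sketch:

    RACK_CAPACITY_U = 42

    # (quantity, height in U) for each component in this example.
    components = {
        "Netfinity 5500 M10 server": (2, 8),
        "Netfinity EXP15 enclosure": (2, 3),
        "Netfinity Fibre Channel Hub": (1, 1),
        "Netfinity Fibre Channel RAID Controller": (1, 4),
    }

    total_u = sum(qty * height for qty, height in components.values())
    print(total_u)                     # 27
    print(RACK_CAPACITY_U - total_u)   # 15U remaining for support devices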

