Selecting Servers and Enclosures
This section describes the IBM servers, storage enclosures, and optional devices that you can
use to configure a shared-disk cluster. It also offers some tips to make the task of
configuring your cluster a little easier.
The hardware needed to configure a dual-node, high-availability, shared-disk cluster generally consists of
the following basic elements:
Together, these basic elements provide the flexibility you need to design a dual-node cluster that will meet
both your current and future needs.
Note: Some server models have integrated Ethernet controllers, which you can use to connect the servers to the public network.
This section does not provide information about all IBM PC Server and Netfinity products available for
clustering.
To obtain information about earlier PC Server models and other products that IBM has tested
in a clustered environment, visit one or more of the following World Wide Web pages:
http://www.pc.ibm.com
http://www.pc.ibm.com/support
http://www.pc.ibm.com/us/options/
http://www.pc.ibm.com/us/netfinity/
http://www.pc.ibm.com/us/netfinity/clustering.html
(1) The cluster software packages for the shared-disk cluster examples described in this reference were designed and tested for use
with the high-availability functions provided by the IBM Netfinity ServeRAID-3H Ultra2 SCSI Adapter, the IBM ServeRAID II Ultra
SCSI Adapter, and the IBM Netfinity Fibre Channel RAID Controller.
(2) The shared-disk cluster packages require that you use IBM 100/10 or 10/100 PCI EtherJet Adapters for the dedicated link between
the two servers.
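As footnote (2) notes, the two servers are joined by a dedicated Ethernet link; in clusters of this kind that link typically carries the private heartbeat traffic each node uses to detect whether its partner is still alive. The following is a minimal illustrative sketch of such a heartbeat exchange, not the actual cluster software: the address, interval, and missed-beat limit are assumed values chosen for the example.

```python
# Illustrative sketch of a heartbeat exchange over a dedicated (private) link.
# The address, interval, and missed-beat limit are assumptions for this
# example, not values used by the actual cluster software packages.
import socket
import time

# On real hardware this would be the peer server's address on the
# dedicated EtherJet link; loopback is used here so the sketch is runnable.
HEARTBEAT_ADDR = ("127.0.0.1", 19876)
INTERVAL = 0.2        # seconds between heartbeats (assumed value)
MISSED_LIMIT = 3      # consecutive missed beats before declaring the peer down

def send_heartbeat(sock, seq):
    """Send one sequenced heartbeat datagram to the peer."""
    sock.sendto(b"HB:%d" % seq, HEARTBEAT_ADDR)

def wait_for_heartbeat(sock, timeout):
    """Return the peer's sequence number, or None if no beat arrived in time."""
    sock.settimeout(timeout)
    try:
        data, _ = sock.recvfrom(64)
        return int(data.split(b":")[1])
    except socket.timeout:
        return None

def monitor(sock, beats_expected):
    """Watch for beats_expected heartbeats; return False if the peer
    misses MISSED_LIMIT beats in a row, True otherwise."""
    missed = 0
    for _ in range(beats_expected):
        if wait_for_heartbeat(sock, INTERVAL * MISSED_LIMIT) is None:
            missed += 1
            if missed >= MISSED_LIMIT:
                return False   # peer presumed down; failover would begin here
        else:
            missed = 0
    return True
```

Because the link is dedicated, heartbeat datagrams never compete with public-network traffic, so a missed beat is a stronger signal that the partner node has actually failed.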