Configuration Tips for the Netfinity 7000 M10
- The Netfinity 7000 M10 backplane supports four hot-swap drive bays.
This backplane is connected to one of the two Ultra SCSI controllers on the system board.
You can install a ServeRAID adapter to control drives in the hot-swap bays, and then move the internal SCSI cable connector from the Ultra
SCSI controller on the system board to an internal channel connector on the ServeRAID adapter.
- When you install multiple hard-disk controllers, RAID controllers, or ServeRAID adapters in the same
server, you must install the device that will manage the startup (boot) drives in a PCI slot that is
scanned before subsequent hard-disk controllers or RAID adapters.
In the Netfinity 7000 M10, the PCI slots are scanned in the following order: PCI bus A (64-bit expansion slots 1, 2, 3, 4, and 5); PCI bus B (32-bit expansion slots 6, 7, 8, 9, and 10); and then PCI bus C (expansion slots 11 and 12).
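This placement rule can be expressed as a quick check. The following Python sketch is illustrative only (it is not an IBM tool, and the helper name is hypothetical); it encodes the slot order listed above and verifies that the controller managing the boot drives sits in a slot that is scanned before any other disk controller.

    # Illustrative sketch only: encodes the PCI scan order listed above and
    # checks that the controller managing the boot drives occupies the
    # earliest-scanned slot among all installed disk controllers.

    # Slots in scan order: bus A (slots 1-5), bus B (slots 6-10), bus C (slots 11-12).
    SCAN_ORDER = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

    def boot_controller_scanned_first(boot_slot, other_controller_slots):
        """Return True if boot_slot is scanned before every other controller slot."""
        boot_pos = SCAN_ORDER.index(boot_slot)
        return all(boot_pos < SCAN_ORDER.index(slot) for slot in other_controller_slots)

    # Example: boot ServeRAID adapter in slot 2, a second disk controller in slot 7.
    print(boot_controller_scanned_first(2, [7]))   # True
    print(boot_controller_scanned_first(9, [3]))   # False: slot 3 is scanned before slot 9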
- Each ServeRAID adapter supports up to eight logical drives.
If a failover occurs, the surviving ServeRAID adapter must support its own logical drives as well as the logical drives of its counterpart in the failed server.
Therefore, the total number of shared logical drives for each pair of ServeRAID adapters must not exceed eight.
A good way to stay within this limit is to define no more than four logical drives on each ServeRAID adapter.
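The failover arithmetic above can be shown as a short worked example. The following Python sketch is illustrative only (the constants and helper name are hypothetical, not part of any ServeRAID utility); it checks that a pair of adapters keeps the combined shared logical drive count at eight or fewer.

    # Illustrative sketch only: checks the shared logical drive limit for a
    # pair of ServeRAID adapters in a two-server cluster.
    MAX_LOGICAL_DRIVES_PER_ADAPTER = 8   # each adapter supports up to 8 logical drives
    RECOMMENDED_PER_ADAPTER = 4          # leaves room to absorb the partner's drives

    def pair_within_limit(drives_on_adapter_a, drives_on_adapter_b):
        """Return True if a surviving adapter could take over all shared logical drives."""
        return drives_on_adapter_a + drives_on_adapter_b <= MAX_LOGICAL_DRIVES_PER_ADAPTER

    # Example: four logical drives on each adapter is within the limit.
    print(pair_within_limit(4, 4))   # True
    # Example: five on one adapter and four on the other exceeds it.
    print(pair_within_limit(5, 4))   # False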
- With the ServeRAID adapters, you can set the stripe-unit size to 8 K (the default), 16 K, 32 K, or 64 K.
After you set a stripe-unit size and store data on the logical drives, you cannot change the size without
destroying data in the logical drives.
Both adapters in a pair must use the same stripe-unit size.
- When the stripe-unit size is set to 8 K or 16 K, the maximum number of physical hard disk drives
in an array is 16.
- When the stripe-unit size is set to 32 K or 64 K, the maximum number of physical hard disk drives
in an array is eight.
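These stripe-unit limits can be captured in a small lookup. The following Python sketch is illustrative only (the table and helper name are hypothetical); it maps each supported stripe-unit size to the maximum number of physical drives per array stated above.

    # Illustrative sketch only: maximum physical drives per array for each
    # supported ServeRAID stripe-unit size (in KB), as listed above.
    MAX_DRIVES_BY_STRIPE_KB = {8: 16, 16: 16, 32: 8, 64: 8}

    def array_size_ok(stripe_kb, physical_drives):
        """Return True if an array of physical_drives disks is allowed at stripe_kb."""
        if stripe_kb not in MAX_DRIVES_BY_STRIPE_KB:
            raise ValueError("Supported stripe-unit sizes are 8, 16, 32, and 64 KB")
        return physical_drives <= MAX_DRIVES_BY_STRIPE_KB[stripe_kb]

    # Example: a 12-drive array is allowed at a 16 KB stripe but not at 64 KB.
    print(array_size_ok(16, 12))   # True
    print(array_size_ok(64, 12))   # False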
- You must use IBM 10/100 PCI EtherJet Adapters for the cluster's heartbeat connection.
- You must use a point-to-point, Category 5 crossover cable for the heartbeat connection.
Connections through a hub are not supported.
- When using the Vinca High Availability for NetWare program, refer to the NetWare documentation for
information about calculating the amount of system memory needed to support the number and
capacity of hard disk drives you intend to install.