Configuration Tips for the Netfinity 7000
- The Netfinity 7000 contains two backplanes, and each backplane is connected to a Wide Ultra SCSI
controller on the system board.
If you want to configure an array using drives in the hot-swap bays, you can move the SCSI cable connector
from a controller on the system board to an internal channel connector on the ServeRAID adapter.
- If you want to control all 12 of the hot-swap bays using one channel of the ServeRAID adapter or
using one of the Wide Ultra SCSI controllers on the system board, you need an IBM Netfinity Backplane
Repeater Kit, Part Number 94G7426, to connect the two backplanes.
- When you install multiple hard-disk controllers, RAID controllers, or ServeRAID adapters in the same
server, you must install the device that will manage the startup (boot) drives in a PCI slot that is
scanned before the slots containing the other hard-disk controllers or RAID adapters. In the Netfinity 7000,
the PCI slots are scanned in the following order: 1, 2, 3, 4, 5, 6.
- Each ServeRAID adapter supports up to eight logical drives.
If a failure occurs, the remaining ServeRAID adapter must support its own logical drives plus the
logical drives of its counterpart in the failing server.
Therefore, the total number of shared logical drives for each pair of ServeRAID adapters must not exceed eight.
A good way to stay within this limit is to define no more than four logical drives for each ServeRAID adapter.
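The failover arithmetic above can be sketched as a quick validation check (the function name and inputs are illustrative only, not part of any IBM tool):

```python
# Illustrative sketch: verify that a failover pair of ServeRAID adapters
# stays within the eight-logical-drive limit if either server fails.
MAX_LOGICAL_DRIVES = 8  # per ServeRAID adapter

def failover_config_ok(drives_a, drives_b):
    """Return True if either adapter could absorb the other's
    logical drives after a server failure."""
    return drives_a + drives_b <= MAX_LOGICAL_DRIVES

# Four logical drives on each adapter stays within the limit:
print(failover_config_ok(4, 4))  # True
print(failover_config_ok(5, 4))  # False
```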
- With the ServeRAID adapters, you can set the stripe-unit size to 8 K (the default), 16 K, 32 K, or 64 K.
After you set a stripe-unit size and store data on the logical drives, you cannot change the size without
destroying the data on those drives.
Both adapters in a pair must use the same stripe-unit size.
- When the stripe-unit size is set to 8 K or 16 K, the maximum number of physical hard disk drives
in an array is 16.
- When the stripe-unit size is set to 32 K or 64 K, the maximum number of physical hard disk drives
in an array is eight.
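The stripe-size limits above can be expressed as a small lookup table (the values come directly from the rules; the function itself is an illustrative sketch):

```python
# Illustrative sketch: maximum physical hard disk drives per array
# for each supported ServeRAID stripe-unit size (in K).
MAX_DRIVES_BY_STRIPE_K = {8: 16, 16: 16, 32: 8, 64: 8}

def max_array_drives(stripe_k):
    """Return the physical-drive limit for a given stripe-unit size."""
    if stripe_k not in MAX_DRIVES_BY_STRIPE_K:
        raise ValueError("Supported stripe-unit sizes: 8, 16, 32, 64 K")
    return MAX_DRIVES_BY_STRIPE_K[stripe_k]

print(max_array_drives(8))   # 16
print(max_array_drives(64))  # 8
```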
- You must use IBM 100/10 PCI EtherJet Adapters or IBM 10/100 EtherJet PCI Adapters for the cluster's heartbeat connection.
- You must use a point-to-point, Category 5 crossover cable for the heartbeat connection.
Connections through a hub are not supported.
- When using the Vinca High Availability for NetWare program, refer to the NetWare documentation for
information about calculating the amount of system memory needed to support the number and
capacity of hard disk drives you intend to install.