blog'o thnet


Tag - MPxIO


Wednesday 18 July 2012

Update the HBA firmware on Oracle-branded HBAs

Updating the emlxs driver will no longer automatically update the HBA firmware on Oracle-branded HBAs. If an HBA firmware update is required on an Oracle-branded HBA, a WARNING message will be placed in the /var/adm/messages file, such as this one:

# grep emlx /var/adm/messages
[...]
Jul 18 02:37:11 beastie emlxs: [ID 349649 kern.info] [ 1.0340]emlxs0:WARNING:1540: Firmware update required. (A manual HBA reset or link reset (using luxadm or fcadm) is required.)
Jul 18 02:37:15 beastie emlxs: [ID 349649 kern.info] [ 1.0340]emlxs1:WARNING:1540: Firmware update required. (A manual HBA reset or link reset (using luxadm or fcadm) is required.)
[...]

If found, this message indicates that the emlxs driver has determined that the firmware kernel component needs to be updated. To perform this update, execute luxadm -e forcelip on Solaris 10 (or fcadm force-lip on Solaris 11) against each emlxs instance that reports the message. As stated in the documentation:

This procedure, while disruptive, will ensure that both driver and firmware are current. The force lip will temporarily disrupt I/O on the port. The disruption and firmware upgrade takes approximately 30-60 seconds to complete as seen from the example messages below. The example shows an update is needed for emlxs instance 0 (emlxs0) and emlxs instance 1 (emlxs1), which happens to correlate to the c1 and c2 controllers in this case.

# fcinfo hba-port
HBA Port WWN: 10000000c9e43860
        OS Device Name: /dev/cfg/c1
        Manufacturer: Emulex
        Model: LPe12000-S
        Firmware Version: 1.00a12 (U3D1.00A12)
        FCode/BIOS Version: Boot:5.03a0 Fcode:3.01a1
        Serial Number: 0999BT0-1136000725
        Driver Name: emlxs
        Driver Version: 2.60k (2011.03.24.16.45)
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000000c9e43860
HBA Port WWN: 10000000c9e435fe
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LPe12000-S
        Firmware Version: 1.00a12 (U3D1.00A12)
        FCode/BIOS Version: Boot:5.03a0 Fcode:3.01a1
        Serial Number: 0999BT0-1136000724
        Driver Name: emlxs
        Driver Version: 2.60k (2011.03.24.16.45)
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000000c9e435fe

In order not to interrupt service, and because MPxIO (native multipathing I/O) is in use, each emlxs instance will be updated one after the other.

# date
Wed Jul 18 09:34:11 CEST 2012

# luxadm -e forcelip /dev/cfg/c1

# grep emlx /var/adm/messages
[...]
Jul 18 09:35:48 beastie emlxs: [ID 349649 kern.info] [ 5.0334]emlxs0: NOTICE: 710: Link down.
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [13.02C0]emlxs0: NOTICE: 200: Adapter initialization. (Firmware update needed. Updating. id=67 fw=6)
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [ 3.0ECB]emlxs0: NOTICE:1520: Firmware download. (AWC file: KERN: old=1.00a11  new=1.10a8  Update.)
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [ 3.0EEB]emlxs0: NOTICE:1520: Firmware download. (DWC file: TEST:             new=1.00a4  Update.)
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [ 3.0EFF]emlxs0: NOTICE:1520: Firmware download. (DWC file: STUB: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [ 3.0F1D]emlxs0: NOTICE:1520: Firmware download. (DWC file: SLI2: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:35:53 beastie emlxs: [ID 349649 kern.info] [ 3.0F2C]emlxs0: NOTICE:1520: Firmware download. (DWC file: SLI3: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:36:01 beastie emlxs: [ID 349649 kern.info] [ 3.0143]emlxs0: NOTICE:1521: Firmware download complete. (Status good.)
Jul 18 09:36:06 beastie emlxs: [ID 349649 kern.info] [ 5.055E]emlxs0: NOTICE: 720: Link up. (8Gb, fabric, initiator)

# date
Wed Jul 18 09:39:51 CEST 2012

# luxadm -e forcelip /dev/cfg/c2

# grep emlx /var/adm/messages
[...]
Jul 18 09:41:35 beastie emlxs: [ID 349649 kern.info] [ 5.0334]emlxs1: NOTICE: 710: Link down.
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [13.02C0]emlxs1: NOTICE: 200: Adapter initialization. (Firmware update needed. Updating. id=67 fw=6)
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [ 3.0ECB]emlxs1: NOTICE:1520: Firmware download. (AWC file: KERN: old=1.00a11  new=1.10a8  Update.)
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [ 3.0EEB]emlxs1: NOTICE:1520: Firmware download. (DWC file: TEST:             new=1.00a4  Update.)
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [ 3.0EFF]emlxs1: NOTICE:1520: Firmware download. (DWC file: STUB: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [ 3.0F1D]emlxs1: NOTICE:1520: Firmware download. (DWC file: SLI2: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:41:40 beastie emlxs: [ID 349649 kern.info] [ 3.0F2C]emlxs1: NOTICE:1520: Firmware download. (DWC file: SLI3: old=1.00a12  new=2.00a3  Update.)
Jul 18 09:41:48 beastie emlxs: [ID 349649 kern.info] [ 3.0143]emlxs1: NOTICE:1521: Firmware download complete. (Status good.)
Jul 18 09:41:53 beastie emlxs: [ID 349649 kern.info] [ 5.055E]emlxs1: NOTICE: 720: Link up. (8Gb, fabric, initiator)
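
As a side note, on Solaris 11 the same reset can be issued with fcadm rather than luxadm. This is only a sketch, using the HBA port WWN reported by fcinfo for the c1 controller above:

# fcadm force-lip 10000000c9e43860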

That's it. Lastly, the documentation says:

At this point, the firmware upgrade is complete as indicated by the Status good message above. A reboot is not strictly necessary to begin using the new firmware. But the fcinfo hba-port command may still report the old firmware version. This is only a reporting defect that does not affect firmware operation and will be corrected in a later version of fcinfo. To correct the version shown by fcinfo, a second reboot is necessary. On systems capable of DR, you can perform dynamic reconfiguration on the HBA (via cfgadm unconfigure/configure) instead of rebooting.

For my part, I tried to unconfigure/configure each emlxs instance using cfgadm without a reboot, but this did not work as expected on Solaris 10: the fcinfo utility still reports the old firmware version, seemingly until the next reboot.
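
For reference, the DR attempt looked roughly like the following sketch (controller names as reported by fcinfo; in practice, the unconfigure step may refuse to proceed while devices on the controller are busy):

# cfgadm -c unconfigure c1
# cfgadm -c configure c1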

Sunday 1 May 2011

Switching From RDAC To MPIO For DSXX00 SAN Array

Here is a simple procedure for switching SAN disks presented by an IBM DSXX00 array from the RDAC/fcparray management mode to the MPIO multipath mode.

Verification of the current single-path (RDAC) configuration:

# manage_disk_drivers
1: DS4100: currently RDAC; supported: RDAC/fcparray, MPIO
2: DS4300: currently RDAC; supported: RDAC/fcparray, MPIO
3: DS4500: currently RDAC; supported: RDAC/fcparray, MPIO
4: DS4700/DS4200: currently RDAC; supported: RDAC/fcparray, MPIO
5: DS4800: currently RDAC; supported: RDAC/fcparray, MPIO

Listing of the disks from the array:

# fget_config -vA
---dar0---

User array name = 'CUSTOMERSOFT'
dac0 ACTIVE dac5 ACTIVE

Disk     DAC   LUN Logical Drive
utm            127
hdisk7   dac5    6 beastie1_oracle
hdisk14  dac5   13 beastie2_datavg
hdisk15  dac0   14 beastie3_datavg
hdisk2   dac0    1 beastie3_rootvg
hdisk3   dac0    2 beastie4_rootvg
hdisk4   dac5    3 beastie5_rootvg
hdisk5   dac5    4 beastie2_rootvg
hdisk6   dac0    5 bakup
hdisk8   dac0    7 customer1
hdisk9   dac0    8 customer3
hdisk10  dac5    9 customer6
hdisk11  dac0   10 customer14
hdisk12  dac5   11 beastie2_db2
hdisk13  dac0   12 beastie3_scheduler
hdisk16  dac0   15 customer9
hdisk17  dac0   16 customer8

Listing of the disks as seen from the operating system:

# lsdev -Cc disk | grep DS
hdisk2  Available 00-08-02 1814     DS4700 Disk Array Device
hdisk3  Available 00-08-02 1814     DS4700 Disk Array Device
hdisk4  Available 02-00-02 1814     DS4700 Disk Array Device
hdisk5  Available 02-00-02 1814     DS4700 Disk Array Device
hdisk6  Available 00-08-02 1814     DS4700 Disk Array Device
hdisk7  Available 02-00-02 1814     DS4700 Disk Array Device
hdisk8  Available 00-08-02 1814     DS4700 Disk Array Device
hdisk9  Available 00-08-02 1814     DS4700 Disk Array Device
hdisk10 Available 02-00-02 1814     DS4700 Disk Array Device
hdisk11 Available 00-08-02 1814     DS4700 Disk Array Device
hdisk12 Available 02-00-02 1814     DS4700 Disk Array Device
hdisk13 Available 00-08-02 1814     DS4700 Disk Array Device
hdisk14 Available 02-00-02 1814     DS4700 Disk Array Device
hdisk15 Available 00-08-02 1814     DS4700 Disk Array Device
hdisk16 Available 00-08-02 1814     DS4700 Disk Array Device
hdisk17 Available 00-08-02 1814     DS4700 Disk Array Device

Switch to multipath (MPIO) management for the SAN volumes presented by the DS4700/DS4200 array:

# manage_disk_drivers -c 4
DS4700/DS4200 currently RDAC/fcparray
Change to alternate driver? [Y/N] Y
DS4700/DS4200 now managed by MPIO

It is necessary to perform a bosboot before rebooting the system in
order to incorporate this change into the boot image.

In order to change to the new driver, either a reboot or a full
unconfigure and reconfigure of all devices of the type changed
must be performed.

Reboot the system:

# bosboot -a
bosboot: Boot image is 39636 512 byte blocks.

# shutdown -Fr
[...]

Verification of the new multipath configuration:

# manage_disk_drivers
1: DS4100: currently RDAC; supported: RDAC/fcparray, MPIO
2: DS4300: currently RDAC; supported: RDAC/fcparray, MPIO
3: DS4500: currently RDAC; supported: RDAC/fcparray, MPIO
4: DS4700/DS4200: currently MPIO; supported: RDAC/fcparray, MPIO
5: DS4800: currently RDAC; supported: RDAC/fcparray, MPIO

Listing of the disks from the array:

# mpio_get_config -vA
Frame id 0:
    Storage Subsystem worldwide name: 60ab80016253400009786efca
    Controller count: 2
    Partition count: 1
    Partition 0:
    Storage Subsystem Name = 'CUSTOMERSOFT'
        hdisk      LUN #   Ownership          User Label
        hdisk0         1   A (preferred)      beastie3_rootvg
        hdisk1         2   A (preferred)      beastie4_rootvg
        hdisk2         3   B (preferred)      beastie5_rootvg
        hdisk3         4   B (preferred)      beastie2_rootvg
        hdisk4         5   A (preferred)      bakup
        hdisk5         6   B (preferred)      beastie1_oracle
        hdisk6        16   A (preferred)      customer8
        hdisk7         7   A (preferred)      customer1
        hdisk8         8   A (preferred)      customer3
        hdisk9         9   B (preferred)      customer6
        hdisk10       10   A (preferred)      customer14
        hdisk11       11   B (preferred)      beastie2_db2
        hdisk12       12   A (preferred)      beastie3_scheduler
        hdisk13       13   B (preferred)      beastie2_datavg
        hdisk14       14   A (preferred)      beastie3_datavg
        hdisk15       15   A (preferred)      customer9

Listing of the disks as seen from the operating system:

# lsdev -Cc disk | grep DS
hdisk0  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk1  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk2  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk3  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk4  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk5  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk6  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk7  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk8  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk9  Available 06-08-02 MPIO Other DS4K Array Disk
hdisk10 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk11 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk12 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk13 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk14 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk15 Available 06-08-02 MPIO Other DS4K Array Disk
hdisk16 Available 06-08-02 MPIO Other DS4K Array Disk
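
To double-check that each SAN disk is now reachable through more than one path, lspath can be queried per disk. A sketch, using one of the hdisk names from the listing above (one Enabled entry per path is expected):

# lspath -l hdisk0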

That's it.

Monday 22 September 2008

About GNU/Linux Software Mirroring And LVM

Here, the final aim was to provide data access redundancy through SAN storage hosted on remote sites across Wide Area Network (WAN) links. After some relatively long and painful attempts to mimic software mirroring as found on the HP-UX platform using Logical Volume Management (LVM), i.e. at the logical volume level, I finally gave up, deciding that this functionality will definitely not fit my needs. Why? Here are my comments.

  1. It is not possible to provide clear and manageable storage multipathing when it is important to distinguish between the multiple sites, à la the mirror-across-controllers feature found in Veritas VxVM on Sun Solaris systems, for example. As a result, managing many physical volumes along with lots of logical volumes is very complicated.
  2. There is no way to map a logical volume exactly onto a given physical volume.
  3. A disk-based log, i.e. a persistent log, is needed. Yes, one can always pass the --corelog option when the logical volume is first built and have an in-memory, i.e. non-persistent, log, but this requires the entire copies (mirrors) to be resynchronized upon reboot (a sketch is given after this list). Not really viable on multi-TB environments.
  4. A write-intensive workload on a file system living on a mirrored logical volume will suffer high latency: the overhead is important, and the time to complete mostly-write jobs grows dramatically. It is really hard to get high-level statistics; only low-level metrics seem consistent: the sd SCSI devices and the dm- device-mapper components for each path entry, but not the multipath devices themselves, which are the most interesting from the end user and SA point of view.
  5. You can't extend a mirrored logical volume, which is really a no-go per se. On that point, Red Hat support answered that this functionality may be added in a future release, and that the current state may eventually become a Request For Enhancement (RFE) if a proper business justification is provided. One must break the logical volume mirror copy, then rebuild it completely. Not realistic when the logical volume is made of a lot of physical extents across multiple physical volumes.
  6. An LVM configuration can end up totally blocked by itself, and not usable at all. The fact is, LVM uses persistent storage blocks to keep track of its own metadata. The metadata area size is set at physical volume creation time only, and can't be changed afterwards. This size is statically defined as 255 physical volume blocks by default, and can be adjusted from the LVM configuration file (a sketch is given after this list). The problem is, when this circular buffer space (stored in ASCII) fills up--such as when there are a lot of logical volumes in a mirrored environment--it is not possible to do anything more with LVM. So you can't add more logical volumes, can't add more logical volume copies... and can't delete them in order to try to reestablish a proper LVM configuration. Well, here are the answers given by Red Hat support to two key questions in this situation:
    • How should the metadata area be sized? If we need to change it from the default value, how can we determine the proper new size, and from which information?
      I am afraid but Metadata size can only be defined at the time of PV creation and there is no real formula for calculating the size in advance. By changing the default value of 255 you can get a higher space value. For general LVM setup (with less LV's and VG's) default size works fine however in cases where high number of LV's are required a custom value will be required.
    • How can we just delete all LV copies, i.e. return to the initial situation with 0 copies for every LV (only one LV per se), in order to be able to change the LVM configuration again (we can't do anything on our production server right now)?
      I discussed this situation with peer engineers and also referenced a similar previous case. From the notes of the same the workaround is to use the backup file (/etc/lvm/backup) and restore the PV's. I agree that this really not a production environment method however seems the only workaround.
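
For illustration, here is a minimal sketch of the two options mentioned in points 3 and 6 (the volume group, logical volume and device names are made up for the example): --corelog builds the mirror with a non-persistent, in-memory log, while --metadatasize is the pvcreate way of choosing the metadata area size at physical volume creation time, which can otherwise only be set through the LVM configuration file.

# lvcreate -m 1 --corelog -L 10G -n lv_data vg_san
# pvcreate --metadatasize 2m /dev/sdX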

So, the production Oracle RDBMS server is finally being evacuated to another machine. Hum... I hope to see a better enterprise experience using the mdadm package to handle software RAID, instead of LVM mirroring (RAID-1). Maybe more about that in another blog entry?

Friday 16 May 2008

Comparison: EMC PowerPath vs. GNU/Linux dm-multipath

I will present some notes about the use of two multipath solutions on Red Hat systems: EMC PowerPath and GNU/Linux dm-multipath. While reading these notes, keep in mind that they are based on tests done when the pressure to put new systems into production was very high, so lack of time resulted in less complete tests than expected. These tests were done more than a year ago, before the release of RHEL4 Update 5 and some RHBAs related to both the LVM and dm-multipath technologies.

Keep in mind that without purchasing an appropriate EMC license, PowerPath can only be used in failover (active-passive) mode. Multiple active path access is not supported in this case: no round-robin and no I/O load balancing, for example.

EMC PowerPath

Advantages

  1. Not specific to the SAN Host Bus Adapter (HBA).
  2. Support for multiple and heterogeneous SAN storage providers.
  3. Support for most UNIX and Unix-like platforms.
  4. Even without a valid license, it still works in degraded (failover) mode.
  5. Is not sensitive to SCSI LUN renumbering: the multiple sd devices (different paths to a given device) are adapted accordingly to the multipath definition of the emcpower device.
  6. Easily provides the ID of the SAN storage (see the sketch after this list).
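
As an illustration of the last two points, the mapping between an emcpower pseudo-device, its underlying sd paths and the array it belongs to can be displayed with powermt (the device name is assumed for the example):

# powermt display dev=emcpowera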

Drawbacks

  1. Not integrated with the operating system (which generally has its own solution).
  2. The need to force an RPM re-installation in case of a kernel upgrade on RHEL systems (due to the fact that the kernel modules are stored in a path containing the exact major and minor versions of the installed (booted) kernel).
  3. Non-automatic update procedure.

GNU/Linux device-mapper-multipath

Advantages

  1. Not specific to the SAN Host Bus Adapter (HBA).
  2. Support for multiple and heterogeneous SAN storage providers.
  3. Well integrated with the operating system.
  4. Automatic update using RHN (you must be a licensed and registered user in this case).
  5. No additional license cost.

Drawbacks

  1. Only available on GNU/Linux systems.
  2. Configuration (files and keywords) is very tedious and difficult (see the sketch after this list).
  3. Without the use of LVM (Logical Volume Management), it does not have the ability to follow SCSI LUN renumbering! Even with LVM, be sure not to have blacklisted the newly discovered SCSI devices (sd).
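
To give an idea of point 2, here is a minimal /etc/multipath.conf sketch. This is a rough example only: section and keyword names vary between device-mapper-multipath releases, and the blacklisted device, WWID and alias are made up.

defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^sda$"
}
multipaths {
        multipath {
                wwid    360060160a0b1c2d3e4f5a6b7c8d9e0f1
                alias   oradata01
        }
}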


Saturday 9 February 2008

Deleting SCSI Device Paths For A Multipath SAN LUN

When releasing a multipath device under RHEL4, the different SCSI devices corresponding to the different paths must be cleared properly before effectively removing the SAN LUN. If the LUN was deleted before cleaning up the paths at the OS level, it is still possible to remove them afterwards. In the following example, it is assumed that the LVM clean-up was already done, and that the LUN is managed by EMC PowerPath.

  1. First, get and verify the SCSI devices corresponding to the multipath LUN:
    # grep "I/O error on device" /var/log/messages | tail -2
    Feb  4 00:20:47 beastie kernel: Buffer I/O error on device sdo, \
     logical block 12960479
    Feb  4 00:20:47 beastie kernel: Buffer I/O error on device sdp, \
     logical block 12960479
    # powermt display dev=sdo
    Bad dev value sdo, or not under Powerpath control.
    # powermt display dev=sdp
    Bad dev value sdp, or not under Powerpath control.
    
  2. Then, get the appropriate scsi#:channel#:id#:lun# information:
    # find /sys/devices -name "*block" -print | \
     xargs \ls -l | awk -F\/ '$NF ~ /sdo$/ || $NF ~ /sdp$/ \
     {print "HBA: "$7"\tscsi#:channel#:id#:lun#: "$9}'
    HBA: host0      scsi#:channel#:id#:lun#: 0:0:0:9
    HBA: host0      scsi#:channel#:id#:lun#: 0:0:1:9
    
  3. When the individual SCSI paths are known, remove them from the system:
    # echo 1 > /sys/bus/scsi/devices/0\:0\:0\:9/delete
    # echo 1 > /sys/bus/scsi/devices/0\:0\:1\:9/delete
    # dmesg | grep "Synchronizing SCSI cache"
    Synchronizing SCSI cache for disk sdp:
    Synchronizing SCSI cache for disk sdo:
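
Once the stale sd entries are removed, PowerPath itself can be asked to drop its dead paths and save the resulting configuration. A sketch only (powermt check prompts before removing each dead path):

# powermt check
# powermt save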
    

Monday 9 July 2007

Installing a VIOS from the HMC Using a backupios Archive File

Once the corresponding partition has been defined on the managed system, log on to the HMC using an account having hmcsuperadmin authority. hscroot is such an account. Then, to install the VIOS partition using a previously generated backupios tar file, issue a command similar to the following:

$ installios \
   -s Server-9113-550-SN65E3R4F \
   -S uu.xx.yy.zz \
   -p vios01 \
   -r installation \
   -i vv.xx.yy.zz \
   -d nfssrv:/path/to/backupios/archive \
   -m 00:11:22:aa:bb:cc \
   -g ww.xx.yy.zz \
   -P 100 \
   -D full

Where:

  • -s: Managed system
  • -p: Partition name
  • -r: Partition profile
  • -d: Path to installation image(s) (/dev/cdrom or srv:/path/to/backup)
  • -i: Client IP address
  • -S: Client IP subnet mask
  • -g: Client gateway
  • -m: Client MAC address
  • -P: Port speed (optional, 100 is the default (10, 100, or 1000))
  • -D: Port duplex (optional, full is the default (full, or half))

Note that the profile named installation is very similar to the profile named normal: it just doesn't include all the extra stuff necessary for our final pSeries configuration, i.e. SAN HBA, virtual LAN, etc. This is necessary to avoid installing on SAN disks, or trying to use a virtual Ethernet adapter, during the VIOS installation process. After rebooting on the freshly installed VIOS, connect to the console and go through the following checklist:

  1. Clean up the content of the /etc/hosts file; in particular, be sure that the FQDN and short name of the NIM server are mentioned properly.
  2. Configure the IP address(es) on the physical interface(s), and the corresponding hostname--and don't forget that they will be modified later in order to create the SEA device!
  3. Recreate the mirror in order to use the two first disks (with exact mapping), and be sure to have two copies of the lg_dumplv logical volume (not really sure about this one, but it doesn't hurt anyway...); a sketch is given after this checklist.
  4. Update the content of the /etc/resolv.conf file.
  5. Be able to resolve hostnames using other network centralized mechanisms:
    # cat << EOF >> /etc/netsvc.conf
    hosts = local, nis, bind
    EOF
    
  6. Don't forget to erase the installation NIM configuration found under /etc/niminfo and set the machine up as a new NIM client of the current NIM server:
    # mv /etc/niminfo /etc/niminfo.orig
    # niminit -a name=vios01 \
     -a master=nim.example.com \
     -a pif_name=en0 \
     -a connect=nimsh     # pif_name may be `en5' if the SEA was already configured.
    
  7. Change the padmin account password.
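
Regarding point 3 of the checklist, recreating the rootvg mirror on the two first internal disks can be sketched as follows (hdisk1 is assumed to be the second internal disk; mirrorios prompts for confirmation and ends with a reboot of the partition):

$ extendvg rootvg hdisk1
$ mirrorios hdisk1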

Last, here are some welcome tuning configuration steps:

  • Update the VIOS installation software with the external bundle pack, if available.
  • Reboot the VIOS using the profile named normal (which includes all the targeted hardware definitions).
  • There are a few parameters to change on the fibre channel adapter and on the fscsi interface on top of it. The first one is dyntrk, which allows fabric reconfiguration without having to reboot the Virtual I/O Server. The second one is fc_err_recov, which prevents the Virtual I/O Server from retrying an operation on a disk if the disk becomes unavailable. We change it because the Virtual I/O Client will take care of accessing the disk using MPxIO and thus will redirect the I/O operations to the second Virtual I/O Server. The last parameter we change controls the number of commands that can be queued to the physical adapter. A reboot is necessary for these parameters to take effect:
    $ chdev -dev fscsi0 -attr dyntrk=yes -perm
    fscsi0 changed
    $ chdev -dev fscsi0 -attr fc_err_recov=fast_fail -perm
    fscsi0 changed
    $ chdev -dev fcs0 -attr num_cmd_elems=2048 -perm
    fcs0 changed
    
  • We can safely change the software transmit queue size and the descriptor queue size with the following commands. Since the adapters are in use, we change the settings in the ODM only, and the new configuration will be used at the next reboot:
    $ chdev -dev ent0 -attr tx_que_sz=16384 -perm
    ent0 changed
    $ chdev -dev ent1 -attr tx_que_sz=16384 -perm
    ent1 changed
    $ chdev -dev ent0 -attr txdesc_que_sz=1024 -perm
    ent0 changed
    $ chdev -dev ent1 -attr txdesc_que_sz=1024 -perm
    ent1 changed
    
  • And be sure to force the speed and mode of the desired Ethernet interfaces:
    $ chdev -dev ent0 -attr media_speed=100_Full_Duplex -perm
    ent0 changed
    $ chdev -dev ent1 -attr media_speed=100_Full_Duplex -perm
    ent1 changed
    
  • Now, we need to create the Shared Ethernet Adapter to be able to access the external network and bind the virtual adapter to the real one:
    $ chdev -dev en0 -attr state=detach
    en0 changed
    $ chdev -dev en1 -attr state=detach
    en1 changed
    $ mkvdev -sea ent0 -vadapter ent3 -default ent3 -defaultid 1
    ent5 Available
    en5
    et5
    $ mkvdev -sea ent1 -vadapter ent4 -default ent4 -defaultid 3
    ent6 Available
    en6
    et6
    $ mktcpip -hostname vios01 \
       -inetaddr vv.xx.yy.zz \
       -interface en5 \
       -netmask uu.xx.yy.zz \
       -gateway ww.xx.yy.zz \
       -nsrvaddr tt.xx.yy.zz \
       -nsrvdomain example.com \
       -start
    
  • Don't forget to install the MPxIO driver provided by EMC on their FTP site:
    # cd /mnt/EMC.Symmetrix
    # TERM=vt220 smitty installp
    # lslpp -al | grep 'EMC.Symmetrix' | sort -u
                                 5.2.0.3  COMMITTED  EMC Symmetrix Fibre Channel
      EMC.Symmetrix.aix.rte      5.2.0.3  COMMITTED  EMC Symmetrix AIX Support
      EMC.Symmetrix.fcp.MPIO.rte
    
  • Assuming that the clock is given by the default gateway network device, we can set and configure the NTP client this way:
    # ntpdate ww.xx.yy.zz
    # cp /etc/ntp.conf /etc/ntp.conf.orig
    # diff -c /etc/ntp.conf.orig /etc/ntp.conf
    *** /etc/ntp.conf.orig  Fri Sep 30 18:05:17 2005
    --- /etc/ntp.conf       Fri Sep 30 18:05:43 2005
    ***************
    *** 36,41 ****
      #
      #   Broadcast client, no authentication.
      #
    ! broadcastclient
      driftfile /etc/ntp.drift
      tracefile /etc/ntp.trace
    --- 36,42 ----
      #
      #   Broadcast client, no authentication.
      #
    ! #broadcastclient
    ! server ww.xx.yy.zz
      driftfile /etc/ntp.drift
      tracefile /etc/ntp.trace
    #
    # chrctcp -S -a xntpd
    

Side note: This entry was originally contributed by Patrice Lachance, who first wrote about this subject.

Saturday 6 August 2005

Details About SAN Disks and MPxIO Capabilities on a VIOS

Obtaining this sort of specific information (such as MultiPath I/O status) from a Virtual I/O Server can easily be achieved using the following one-line (if long) shell script, built around the lsdev(1), lscfg(1) and lspath(1) commands:

# for disk in `lsdev | grep hdisk | egrep  -v "SCSI Disk Drive|Raid1" | awk '{print $1}'`
> do
> lscfg -v -l ${disk} | egrep "${disk}|Manufacturer|Machine Type|ROS Level and ID|Serial Number|Part Number"
> echo "`lspath -H -l ${disk} | grep ${disk} | awk '{print\"\tMultiPath I/O (MPIO) status: \"$1\" on parent \"$3}'`"
> echo ""
> done

  hdisk3           U787B.001.DNW3897-P1-C3-T1-W5006048448930A41-L9000000000000  EMC Symmetrix FCP MPIO RaidS
        Manufacturer................EMC     
        Machine Type and Model......SYMMETRIX       
        ROS Level and ID............5670
        Serial Number...............9312A020
        Part Number.................000000000000510001000287
        MultiPath I/O (MPIO) status: Enabled on parent fscsi0
        MultiPath I/O (MPIO) status: Enabled on parent fscsi1

  hdisk4           U787B.001.DNW3897-P1-C3-T1-W5006048448930A41-LA000000000000  EMC Symmetrix FCP MPIO RaidS
        Manufacturer................EMC     
        Machine Type and Model......SYMMETRIX       
        ROS Level and ID............5670
        Serial Number...............9312E020
        Part Number.................000000000000510001000287
        MultiPath I/O (MPIO) status: Enabled on parent fscsi0
        MultiPath I/O (MPIO) status: Enabled on parent fscsi1
[...]

The pattern SCSI Disk Drive is excluded since it matches local SCSI disks, as is the pattern Raid1, which corresponds to a view of parity disks (logical disks only used by SAN administrators).