Verify that the virtual adapter no longer exists:
$ lsmap -vadapter vhost3
Device "vhost3" is not a Server Virtual SCSI Adapter (SVSA).
If necessary, remove the virtual devices: in this case the virtual devices
are logical volumes, but they could just as well be real storage devices, such
as SAN disks. For example:
# rmlv hdbromonts00
Warning, all data contained on logical volume hdbromonts00 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume hdbromonts00 is removed.
#
# rmlv hdbromonts01
Warning, all data contained on logical volume hdbromonts01 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume hdbromonts01 is removed.
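Had the backing devices been SAN disks instead of logical volumes, the cleanup
on the VIOS would use rmvdev rather than rmlv. A minimal sketch of that variant
(hdisk7 is a hypothetical backing device, not one from this setup):
$ rmvdev -vdev hdisk7    /* assumption: remove the virtual target device backed by hdisk7 */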
Don't forget to delete and recreate the corresponding virtual
adapter on the VIOS via the HMC console, otherwise this will cause problems
during the installation phase since the VIOS will not properly give
access to the underlying storage disk(s) or device(s)!
Recreate the device tree on the VIOS:
$ cfgdev
$
$ lsdev -virtual | grep vhost3
vhost3 Available Virtual SCSI Server Adapter
Case #2: configure a newly created virtual adapter
On the other hand, if the virtual adapter has just been created -- as in the
previous example -- the VIOS operating system needs to be informed about
it:
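In practice this is the same device-tree refresh shown above, run from the
VIOS command line:
$ cfgdev
$ lsdev -virtual | grep vhost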
Log in to the NIM server and set the standalone machine configuration for
beastie:
# TERM=vt220 smitty nim
/*
* Perform NIM Administration Tasks
* Manage Machines
* Define a Machine
* Host Name of Machine [beastie]
* Perform NIM Software Installation and Maintenance Tasks
* Install and Update Software
* Install the Base Operating System on Standalone Clients
* (Choose: "beastie machines standalone")
* (Choose: "spot - Install a copy of a SPOT resource")
* (Choose: "spot530 resources spot")
* (Select: "LPP_SOURCE [lpp_source530]")
* (Select: "ACCEPT new license agreements? [yes]")
* (Select: "Initiate reboot and installation now? [no]")
* (Select: "ACCEPT new license agreements? [yes]")
*/
The only required prerequisite is fully working name
resolution.
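For reference, a rough command-line equivalent of the smitty dialogs above (a
sketch only: the attribute values come from the menu choices, while the
find_net network keyword and the chrp/mp platform settings are assumptions):
# nim -o define -t standalone -a platform=chrp -a netboot_kernel=mp \
      -a if1="find_net beastie 0" beastie
# nim -o bos_inst -a source=spot -a spot=spot530 -a lpp_source=lpp_source530 \
      -a accept_licenses=yes -a boot_client=no beastie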
Boot and automatically network install the DLPAR
Start the DLPAR console via the HMC administration station: activate the
partition and boot it in SMS mode.
Then, simply configure the network stack of the Initial Program Load to be
able to netboot the partition and remotely install it:
/*
* Steps:
* 2. Setup Remote IPL (Initial Program Load)
* 2. Interpartition Logical LAN U9113.550.65E3A0C-V8-C3-T1 72ee80008003
* 1. IP Parameters
* 1. Client IP Address [10.126.213.196]
* 2. Server IP Address [10.126.213.193]
* 3. Gateway IP Address [000.000.000.000]
* 4. Subnet Mask [255.255.255.000]
* 2. Adapter Configuration
* 2. Spanning Tree Enabled
* 2. No <===
* 3. Ping Test
* 1. Execute Ping Test
* 5. Select Boot Options
* 1. Select Install/Boot Device
* 7. List all Devices
* 2. - Virtual Ethernet
* ( loc=U9113.550.65E3A0C-V2-C3-T1 )
* 2. Normal Mode Boot
* 1. Yes
*/
Notes:
10.126.213.196 represents the IP address of the DLPAR
10.126.213.193 represents the IP address of the NIM server
(bootp and tftp servers)
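Before running the ping test, it can be worth checking on the NIM server that
the bootp and tftp services are ready to answer the client (a quick sanity
check, not part of the original procedure):
# grep beastie /etc/bootptab              /* entry added by the bos_inst setup */
# lssrc -ls inetd | egrep "bootps|tftp"   /* both subservers must be active */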
Then select the default console device, the installation language (English)
and follow the illustrated guide:
In the proposed screen, choose to modify the default installation settings,
then the optional choices, and finally the additional software: select the
Server package. Choose "Install with the settings listed above.", verify the
overall selection and happily wait for the installation to proceed.
Configuration steps on the DLPAR
Well, the DLPAR beastie is now installed and basically
configured, but it lacks a lot of tools and none of the network infrastructure
(DNS, NIS, NFS, etc.) is currently available from it.
Here is what can be done to make this machine (a little) more
usable.
Internal IP address (high inter-partition bandwidth)
As with the Inter-Domain Network (IDN) found on the E10K from Sun
Microsystems, it may be interesting to use an internal LAN to communicate
between LPARs over a high-bandwidth network. So, just pick an unused IP
address from the chosen private range and apply the settings to the correct
network interface (i.e. the one configured with the right network ID):
# chdev -l en0 -a netaddr='193.168.10.5' -a netmask='255.255.255.0' -a state=up
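A quick check that the address is up and that another LPAR answers on the
internal network (the peer address 193.168.10.4 is only an assumed example):
# netstat -in | grep en0
# ping -c 3 193.168.10.4      /* assumption: internal address of another LPAR */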
Allocate data storage space
Application and user data are kept outside the OS space and reside in
their own volume group. All of this data is hosted on SAN disks provided via the
VIOS:
# lsdev | grep hdisk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive
#
# lsvg
rootvg
#
# lsvg -p rootvg
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 639 468 127..85..00..128..128
#
# mkvg -y beastievg hdisk1
beastievg
#
# lsvg -p beastievg
beastievg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 539 539 108..108..107..108..108
The volume group seems ready to hold real data right now. Later, if a new disk
is added through the VIOS, simply add the new virtual SCSI disk to the volume
group:
# cfgmgr
#
# lsdev | grep hdisk
hdisk0 Available Virtual SCSI Disk Drive
hdisk1 Available Virtual SCSI Disk Drive
hdisk2 Available Virtual SCSI Disk Drive
#
# extendvg beastievg hdisk2
0516-1254 extendvg: Changing the PVID in the ODM.
#
# lsvg -p beastievg
beastievg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk1 active 539 539 108..108..107..108..108
hdisk2 active 539 539 108..108..107..108..108
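From there, a data file system can be carved out of beastievg. A minimal
sketch (the logical volume name, mount point and size are assumptions):
# mklv -y datalv -t jfs2 beastievg 64      /* assumption: 64 physical partitions */
# crfs -v jfs2 -d datalv -m /data -A yes
# mount /data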
Update the operating system to the latest Maintenance Level (e.g. 5.3ML3 at
the time of this writing)
Log in to the NIM server and set the standalone machine configuration for
beastie:
# TERM=vt220 smitty nim
/*
* Perform NIM Software Installation and Maintenance Tasks
* Install and Update Software
* Update Installed Software to Latest Level (Update All)
* (Choose: "beastie machines standalone")
* (Choose: "53ML3 resources lpp_source")
*/
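For reference, the equivalent nim command line (a sketch; the resource and
machine names are those of the menu above):
# nim -o cust -a lpp_source=53ML3 -a fixes=update_all beastie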
Most of the time, these kinds of software changes require the system to be
rebooted in order for the changes to take effect:
# shutdown -r now
Attaching the partition to the general network infrastructure (LDAP, NIS,
NFS, DNS, etc.)
First, synchronize the system clock against the network time server:
# ntpdate 10.126.213.1
30 Sep 18:04:48 ntpdate[282628]: step time server 10.126.213.1 offset -25063.236234 sec
Then, configure the NTP daemon (xntpd) and start the
service:
# cp /etc/ntp.conf /etc/ntp.conf.orig
#
# diff -c /etc/ntp.conf.orig /etc/ntp.conf
*** /etc/ntp.conf.orig Fri Sep 30 18:05:17 2005
--- /etc/ntp.conf Fri Sep 30 18:05:43 2005
--- 36,42 ----
#
# Broadcast client, no authentication.
#
! #broadcastclient
! server 10.126.213.1
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
#
# chrctcp -S -a xntpd
0513-059 The xntpd Subsystem has been started. Subsystem PID is 336032.
After some time, make sure that the system gets its time from the network
server (verify the * in front of the remote node):
# ntpq -pn
remote refid st t when poll reach delay offset disp
==============================================================================
*10.126.213.1 10.126.192.132 3 u 9 64 37 0.85 13.601 877.12
Install and configure OpenSSH and the administrative tool
sshd_adm
The aim of this part is to have a tuned configuration of OpenSSH for all
clients, and a specialized configuration for sshd_adm, a
second OpenSSH instance whose purpose is to be dedicated to administrators
and administrative tasks.
Install the OpenSSL RPM package provided by the AIX Toolbox for Linux Applications:
# mount -n nim -v nfs /export/lpp_source /mnt
# rpm -ivh /mnt/cd_roms/AIX_Toolbox_for_Linux_Applications_for_Power_Systems_11.2004/RPMS/ppc/openssl-0.9.7d-1.aix5.1.ppc.rpm
openssl ##################################################
Then, install the OpenSSH installp packages, found on the IBM web site (e.g.
openssh-3.8.1p1_53.tar.Z for AIX 5.3):
# mkdir /tmp/_sm_inst.$$ /* Put the downloaded package here. */
# zcat /tmp/_sm_inst.$$/openssh-3.8.1p1_`uname -v``uname -r`.tar.Z | (cd /tmp/_sm_inst.$$ && tar xf -)
# /usr/lib/instl/sm_inst installp_cmd -a -Q -d /tmp/_sm_inst.$$ -f openssh.* -c -N -g -X -G -Y
geninstall -I "a -cgNQqwXY -J" -Z -d /tmp/_sm_inst.176290 -f File 2>&1
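The dedicated administrative instance itself is site-specific; here is a
minimal sketch of one way to set it up under the SRC (the configuration file
name, port and group are assumptions, not the actual local configuration):
# cp /etc/ssh/sshd_config /etc/ssh/sshd_adm_config
# vi /etc/ssh/sshd_adm_config    /* e.g. Port 2222, AllowGroups sysadm, PermitRootLogin no */
# mkssys -s sshd_adm -p /usr/sbin/sshd -a '-D -f /etc/ssh/sshd_adm_config' -u 0 -S -n 15 -f 9
# startsrc -s sshd_adm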
The Unix administrative tool lsof (LiSt Open Files)
Log in to the NIM server and set the standalone machine configuration for
beastie:
# TERM=vt220 smitty nim
/*
* Perform NIM Software Installation and Maintenance Tasks
* Install and Update Software
* Install Software
* (Choose: "beastie machines standalone")
* (Choose: "AIX_Toolbox_for_Linux resources lpp_source")
* (Select: "lsof-4.61 ALL")
*/
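Once the NIM operation completes, the installation can be verified directly
on beastie (a quick check, not part of the menu above):
# rpm -qa | grep lsof
# lsof -i :22      /* e.g. list the processes listening on the ssh port */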
Configure the OS to boot via a SAN disk (thanks to the VIOS)
So, the operating system is now at the required ML (e.g. 5.3ML3) to support
a SAN boot disk via the VIOS. Assuming that all the steps to make this disk
available to the VIOC are already done, here is how to move this partition
fully onto the SAN.
Try to add the corresponding SAN disk, provided through the VIOS, to the rootvg system volume group:
# extendvg rootvg hdisk3
0516-1254 extendvg: Changing the PVID in the ODM.
0516-1162 extendvg: The Physical Partition Size of 16 requires the creation of
1078 partitions for hdisk3. The limitation for volume group rootvg is
1016 physical partitions per physical volume. Use chvg command with -t
option to attempt to change the maximum Physical Partitions per Physical
volume for this volume group.
0516-792 extendvg: Unable to extend volume group.
As shown, a limitation was reached because of the current
settings of the rootvg system volume group. In fact, it is due to
the original size of the virtual disk hosting the system file systems, and so
the size of the corresponding logical volume on the VIOS,
beastielv. Change the rootvg characteristic as
proposed and extend the volume group one more time:
# chvg -t 2 rootvg
0516-1193 chvg: WARNING, once this operation is completed, volume group rootvg
cannot be imported into AIX 430 or lower versions. Continue (y/n) ?
y
0516-1164 chvg: Volume group rootvg changed. With given characteristics rootvg
can include upto 16 physical volumes with 2032 physical partitions each.
#
# extendvg rootvg hdisk3
The next step is to migrate the content of hdisk0
(a logical volume exposed as a virtual SCSI disk) to hdisk3 (a dedicated SAN
disk exposed as a virtual SCSI disk):
# migratepv hdisk0 hdisk3
0516-1011 migratepv: Logical volume hd5 is labeled as a boot logical volume.
0516-1246 migratepv: If hd5 is the boot logical volume, please run 'chpv -c hdisk0'
as root user to clear the boot record and avoid a potential boot
off an old boot image that may reside on the disk from which this
logical volume is moved/removed.
migratepv: boot logical volume hd5 migrated. Please remember to run
bosboot, specifying /dev/hdisk3 as the target physical boot device.
Also, run bootlist command to modify bootlist to include /dev/hdisk3.
As explained by the migratepv command, modifying some boot settings
is necessary here and can't be avoided. Clear the boot record
on the `old' system disk:
# chpv -c hdisk0
Verify and create a boot image on the new system disk:
# bosboot -vd hdisk3 && bosboot -ad hdisk3
bosboot: Boot image is 23320 512 byte blocks.
Alter the device boot list to include the new disk and remove the old one:
# bootlist -m normal -o
hdisk0
# bootlist -m normal hdisk3
# bootlist -m normal -o
hdisk3
#
# ls -ilF /dev/ipldevice /dev/rhdisk3
1727 crw------- 2 root system 17, 3 Oct 4 13:01 /dev/ipldevice
1727 crw------- 2 root system 17, 3 Oct 4 13:01 /dev/rhdisk3
We are almost done by now. Just clean up rootvg, remove
hdisk0 and restart the partition:
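A sketch of these final steps (assuming no logical volume remains on hdisk0
after the migration):
# reducevg rootvg hdisk0      /* remove the now empty disk from rootvg */
# rmdev -dl hdisk0            /* delete the device definition */
# shutdown -r now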
Build the DLPAR on the Hardware Management Console
Log in to the HMC using the Java WSM (Web-based System Manager) client.
Then, follow the illustrated guide:
Set the logical name of the partition:
Because a load manager is not needed, say so to the creation
wizard:
Provide a name for the default profile:
Time to set the memory allocation sizes:
Select the kind of logical partition, which is shared in this
case:
Choose the desired entitled capacity settings, the sharing mode and whether
or not to use virtual CPUs:
Select the I/O unit(s) and their assignment(s) from the
available hardware on the p550:
Don't choose I/O pools (their purpose is for mainframe installations
only).
Answer that virtual I/O adapters are required and create two virtual Ethernet
adapters on the port IDs of the two local virtual networks, as for
this one:
Do the same thing for a virtual SCSI adapter, using the same port ID
for the local and remote adapter (on the Virtual I/O Server):
Be sure to set it to required, since the virtual SCSI adapter
will host the boot disk, and update the number of virtual adapter
slots, if necessary:
The use of a power management partition is not needed here.
Last, select the optional settings for the profile, e.g. activate the
monitoring of the connections:
**Important note:** Don't forget to add a new virtual SCSI adapter
(of server type) on the VIOS in order to connect to the newly created
VIOC. If you made this modification dynamically on the running VIOS,
please report the changes back to the corresponding profile or you will
be in big trouble at the next VIOS reboot!
Manage and allocate the Virtual I/O Server resources
The new DLPAR will use one sort of VIOS resource: storage device(s).
Case #1: reuse an already existing virtual adapter
If we want to reuse the virtual adapter already provided to the DLPAR, we need
to clean things up a little. So, delete the corresponding virtual
adapter and its attached virtual devices:
$ lsmap -vadapter vhost3
SVSA Physloc Client Partition ID
vhost3 U9113.550.65E3A0C-V3-C11 0x00000000
VTD NO VIRTUAL TARGET DEVICE FOUND
Assuming we will use one local SCSI disk space to hold the OS and one
SAN disk to host the application data, here are the required steps to
configure the VIOS.
Remove the old logical volume that previously backed the OS (a new one will
be recreated for the fresh installation, as sketched below) and be sure to
have an unused SAN disk:
$ rmlv beastielv
Warning, all data contained on logical volume beastielv will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume beastielv is removed.
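The backing devices can then be recreated and mapped to the virtual adapter.
A sketch of the usual VIOS commands (the volume group, size, hdisk and device
names are assumptions to adapt to the local setup):
$ mklv -lv beastielv rootvg_clients 10G                      /* assumption: 10 GB LV in an existing VG */
$ mkvdev -vdev beastielv -vadapter vhost3 -dev beastie_os
$ mkvdev -vdev hdisk10 -vadapter vhost3 -dev beastie_data    /* assumption: hdisk10 is the unused SAN disk */
$ lsmap -vadapter vhost3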
Resize the /tmp file system and the swap space logical volume
Because the operating system was installed from scratch and the file
system sizes were automatically adapted to the underlying virtual SCSI
disk (in fact a logical volume provided by the VIOS on a locally
attached SCSI disk), some default values may not be relevant for day-to-day
use, especially on a server-class machine.
In that sense, the size of the /tmp file system and of the swap space may be
changed to more sensible settings:
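For example, a sketch of the usual resize commands (the amounts are
assumptions to adapt to the actual needs):
# df -m /tmp
# chfs -a size=+512M /tmp     /* assumption: grow /tmp by 512 MB */
# lsps -a                     /* check the current paging space */
# chps -s 4 hd6               /* assumption: add 4 logical partitions to hd6 */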
Don't forget to add the new machine to the ldm local site utility
in order to be able to manage it and obtain some useful information
on a daily basis.
# ldmadd beastie
TESTING beastie REMOTE ACCESS ...
sysadm: access denied on beastie ...
Configuring sysadm's ssh authorized_keys file ... beastie done.
Checking some prerequisites on beastie ...
ok
Updating ldm agent files on beastie ...
[...]