Tags: Live Upgrade, Patch, Upgrade
After writing about how to patch (or upgrade) a running system by playing with a mirrored OpenSolaris SVM configuration, here is a little step-by-step how-to on upgrading (or patching) a live system using the Live Upgrade feature.
Before installing or running Live Upgrade, you are required to install a limited set of patch revisions. Make sure you have the most recently updated patch list by consulting sunsolve.sun.com: search for info doc 72099 on the SunSolve web site (you must have a registered Sun support customer account to be able to view this document).
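For example, to check whether one of the required patches is already installed at the right revision, and to add it if not, something along these lines will do (the patch ID and download directory below are placeholders, not the real entries from info doc 72099):
/* Placeholder patch ID; use the ones listed in info doc 72099. */
# showrev -p | grep 123456
/* Install or update the patch on the running system if the revision is too old. */
# patchadd /var/tmp/123456-07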
Note: In the following procedure, we will assume that everything we want (and need) to upgrade to is provided in one large DVD ISO image.
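If that image is only available as an ISO file on disk rather than a burned DVD, it can be loop-mounted first; a quick sketch, where the ISO path, the lofi device number, and the /mnt mount point are assumptions for illustration:
/* Attach the ISO to a loopback device; lofiadm prints the device it created. */
# lofiadm -a /export/iso/sol-nv-b39-x86-dvd.iso
/dev/lofi/1
/* Mount it read-only as an HSFS (ISO 9660) file system. */
# mount -F hsfs -o ro /dev/lofi/1 /mnt
In that case, simply use /mnt wherever /cdrom/cdrom0 appears below.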
If all seems OK, you must first update the currently running system with the appropriate Live Upgrade (lu) packages, i.e. those provided with the targeted OS revision. You can either use the provided installer:
# /cdrom/cdrom0/Solaris_11/Tools/Installers/liveupgrade20
Or do it yourself:
# pkgrm SUNWluu SUNWlur
# pkgadd -d /cdrom/cdrom0/Solaris_11/Product SUNWlur SUNWluu
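Either way, it does not hurt to confirm that the Live Upgrade packages now in place are the ones coming from the target media before going any further:
/* Both packages should report the version shipped with the target release. */
# pkginfo -l SUNWlur SUNWluu | grep VERSION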
Since the current OS is entirely installed on the first slice of the first disk (c1d0s0), and since slice six (c1d0s6) is exactly the same size as the first one, we will use the latter as the device for the alternate Boot Environment (ABE) and create the corresponding Boot Environment on it.
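Before running lucreate, you can quickly double-check that the two slices really are the same size by looking at the disk VTOC (c1d0s2 being the conventional backup slice that covers the whole disk):
# prtvtoc /dev/rdsk/c1d0s2
Slices 0 and 6 should report the same sector count. Then create the BE: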
# lucreate -c snv_38 -n snv_39 -m /:/dev/dsk/c1d0s6:ufs
/* If the snv_38 BE already exists, just create the new one for snv_39. */
# lucreate -n snv_39 -m /:/dev/dsk/c1d0s6:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <snv_38> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1d0s6> is not a root device for any boot environment.
Creating configuration for boot environment <snv_39>.
Source boot environment is <snv_38>.
Creating boot environment <snv_39>.
Checking for GRUB menu on boot environment <snv_39>.
The boot environment <snv_39> does not contain the GRUB menu.
Creating file systems on boot environment <snv_39>.
Creating <ufs> file system for </> on </dev/dsk/c1d0s6>.
Mounting file systems for boot environment <snv_39>.
Calculating required sizes of file systems for boot environment <snv_39>.
Populating file systems on boot environment <snv_39>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <snv_39>.
Creating compare database for file system </>.
Updating compare databases on boot environment <snv_39>.
Making boot environment <snv_39> bootable.
Updating bootenv.rc on ABE <snv_39>.
Population of boot environment <snv_39> successful.
Creation of boot environment <snv_39> successful.
Verify that the different file systems are assigned correctly, in particular distinguishing those which are cloned (required by a Solaris installation, such as /, /var, /usr, and /opt) from those which are shared between boot environments (such as /export).
# lufslist -n snv_38
boot environment name: snv_38
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on Mount Options
----------------------- -------- ------------ ---------- -------------
/dev/dsk/c1d0s1         swap       4301821440 -          -
/dev/dsk/c1d0s0         ufs        8595417600 /          -
/dev/dsk/c1d0s7         ufs       58407713280 /export    -

# lufslist -n snv_39
boot environment name: snv_39

Filesystem              fstype    device size Mounted on Mount Options
----------------------- -------- ------------ ---------- -------------
/dev/dsk/c1d0s1         swap       4301821440 -          -
/dev/dsk/c1d0s6         ufs        8595417600 /          -
/dev/dsk/c1d0s7         ufs       58407713280 /export    -
You then just need to upgrade the second BE using the installation media of the desired release or revision.
# luupgrade -u -n snv_39 -s /cdrom/cdrom0
Install media is CD/DVD. </cdrom/cdrom0>.
Waiting for CD/DVD media </cdrom/cdrom0> ...
Copying failsafe multiboot from media.
Uncompressing miniroot
Creating miniroot device
miniroot filesystem is <ufs>
Mounting miniroot at </cdrom/cdrom0/Solaris_11/Tools/Boot>
Validating the contents of the media </cdrom/cdrom0>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <snv_39>.
Checking for GRUB menu on ABE <snv_39>.
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE <snv_39>.
Performing the operating system upgrade of the BE <snv_39>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Deleted empty GRUB menu on ABE <snv_39>.
Adding operating system patches to the BE <snv_39>.
The operating system patch installation is complete.
ABE boot partition backing deleted.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <snv_39> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <snv_39> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <snv_39>. Before you activate boot
environment <snv_39>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <snv_39> is complete.
Installing failsafe
Failsafe install is complete.
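As a side note, Live Upgrade can also be used to simply patch the inactive BE instead of upgrading it to a new release; a minimal sketch using luupgrade -t, where the patch directory and patch IDs are purely illustrative:
/* Apply the listed patches (downloaded beforehand) to the inactive BE. */
# luupgrade -t -n snv_39 -s /var/tmp/patches 123456-07 654321-01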
If something went wrong during the upgrade of the new Boot Environment snv_39, you can always start over with a fresh copy of it using the lumake -n snv_39 command. If everything went smoothly, you can now check and compare the newly created BE:
# lucompare -t snv_39 -o /tmp/lucompare.snv_39
# lumount -n snv_39
/.alt.snv_39
# mount -p | grep snv_39
/dev/dsk/c1d0s6 - /.alt.snv_39 ufs - no rw,intr,largefiles,logging,xattr,onerror=panic
# luumount -n snv_39
#
# lustatus
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------- ------- ---------- ------- ------
snv_38                     yes       yes     yes        no      -
snv_39                     yes       no      no         yes     -
#
At this point snv_38 is still the active BE, both now and at the next reboot. Activate the new one so that the system boots on it, and check the status again:
# luactivate snv_39
# lustatus
Boot Environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     On Reboot  Delete  Status
-------------------------- --------- ------- ---------- ------- ------
snv_38                     yes       yes     no         no      -
snv_39                     yes       no      yes        no      -
#
Finally, export the data pool and reboot; be sure to use init or shutdown here (not reboot or halt) so that the switch to the new BE is completed properly:
# zpool export datazp
# shutdown -y -g 0 -i 6
Et voilà! After the reboot, you should see something similar to:
# uname -a
SunOS unic 5.11 snv_39 i86pc i386 i86pc
#
# cat /etc/release
Solaris Nevada snv_39 X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 01 May 2006
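Should the new environment misbehave, the previous one remains available as a safety net; and once you are completely satisfied with snv_39, the old BE can be removed to reclaim the slice. For instance:
/* Fall back to the previous BE (then reboot with init or shutdown). */
# luactivate snv_38
/* Or, once fully satisfied with snv_39, get rid of the old BE. */
# ludelete snv_38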
Lastly, here is some invaluable documentation on the subject: