
Press Review #20

Feb 12, 2013 | 6 minutes read

Tags: Press

Here is a little press review, mostly around Oracle technologies and Solaris in particular, and a little more:

After discussing Oracle VM, OS virtualization, and some aspects of resource management in the previous articles of this series, this article will now cover a special area of resource management and virtualization of resources: network virtualization and network resource management.

The network is a special shared resource that glues all the virtual machines (VMs), zones, and systems together and provides a communication channel with the world. Thus, the network is a very important layer of the virtualization stack.

Solaris 10 was launched in 2005 with ground-breaking features such as DTrace, SMF (Service Management Facility), Zones, LDoms, and later ZFS. The latest, and perhaps last, update of Solaris 10 was expected in 2012, to coincide with an early release of the SPARC T5. In 2013, Oracle released yet another update, suggesting the T5 is close to release. This latest installment, referred to as the 01/13 release (for January 2013), appears to be the final SVR4 Solaris release, with normal Oracle support expected to extend to 2018. Many seasoned administrators will refer to this release as Solaris 10 Update 11.

This time it was very different: looking at the POWER7+ Power 750/760s we thought, "Hang on!! We have seen this before! It looks just like a Power 770 but one U taller (5U instead of 4U)." Yet it is a completely different machine inside; were it not for the 32 CPU cores of the Power 750 model, which slot it into the uprated range in the same place, it could have been given a new number. I guess keeping the same number means we all know where it fits. The 760 is very much the same machine as the 750 but with the higher-end features, and you cannot convert between the two.

It has been a little while since I wrote the first part of this article, and I imagine you were eager to know the outcome?

As a reminder, the high CPU consumption (particularly system time) came from the Oracle Grid agent. After changing the kernel's page allocation mode (pg_contig_disable), the CPU load seemed to spread more evenly between system time and user time. But... something still bothered me...
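For readers who want to look at the tunable mentioned above, here is a minimal sketch of how pg_contig_disable is typically checked and set on Solaris 10; the value shown (1) is only an illustration, and any such change should be validated for your own workload:

    # Check the current value on the live kernel
    echo "pg_contig_disable/D" | mdb -k

    # Change it on the running system (takes effect immediately, not persistent)
    echo "pg_contig_disable/W 1" | mdb -kw

    # Make it persistent across reboots via /etc/system
    echo "set pg_contig_disable=1" >> /etc/system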

After creating your repos (a step-by-step method is available in a previous article), it is time to create your customized AI server. I will split this topic into two parts: one article on the SPARC architecture and another on the x86 architecture. And why? I use two different initialization methods, wanboot for SPARC and the pxe/dhcp pair for x86, so I prefer to treat the two architectures separately.
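As an illustration of the SPARC side only (this is a rough sketch, not the article's procedure; the service name, image path, and IP addresses are placeholders), a SPARC AI service is usually created on the server and the client is then pointed at the wanboot CGI from the OBP prompt:

    # On the AI server: create the SPARC install service (Solaris 11)
    installadm create-service -n s11-sparc -a sparc -d /export/auto_install/s11-sparc

    # On the SPARC client, at the OBP "ok" prompt, point wanboot at the AI server's wanboot CGI
    # (client IP, router, netmask, and AI server address are placeholders)
    setenv network-boot-arguments host-ip=192.168.1.50,router-ip=192.168.1.1,subnet-mask=255.255.255.0,file=http://192.168.1.10:5555/cgi-bin/wanboot-cgi

    # Then boot and install over the network
    boot net - install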

Appearances are sometimes deceiving… A small demonstration confirming this adage.

To set the context: several Oracle DBAs report an incident on one of their servers. The symptom is the following: impossible to connect. After a quick check, the initial diagnosis seems to be the right one. A quick connection to the server's remote console and there I am, facing a most explicit message: "Unable to fork". So I trigger a small panic and off we go for a little analysis.
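For context, here is a rough sketch (not the article's actual procedure) of how such a forced panic is commonly captured and examined on Solaris; the dump number and the suspicion of an exhausted process table are assumptions for illustration:

    # Force a panic and save a crash dump (alternatively, send a break from the console)
    reboot -d

    # After reboot, open the saved dump (dump number 0 assumed here)
    cd /var/crash/`hostname`
    mdb unix.0 vmcore.0

    # Inside mdb: count processes and look at the process-table limits,
    # a full process table being one classic cause of "Unable to fork"
    > ::ps ! wc -l
    > nproc/D
    > max_nprocs/D
    > ::memstat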

In a previous article, I covered setting up a customized AI server for the SPARC architecture (deployment via Wanboot). As promised, I will cover here the setup of an AI server, but for the x86 architecture. The difference between these two architectures (from an installation point of view) lies mainly in the initialization phase just before the installation begins.

On an x86 architecture, the initialization phase is most often handled by the pxe/dhcp pair. It is therefore necessary to configure a dhcp server able to interpret the pxe request that the client will send. This can be a dedicated server or one shared with the AI server. In my example below, a single server handles both the dhcp and AI configuration.
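As a rough sketch of that single-server setup on Solaris 11 (the service name, image path, and address range below are placeholders, not the values used in the article), installadm can create the x86 service and set up a local DHCP configuration in one step:

    # Create the x86 AI install service and let installadm manage DHCP,
    # handing out 10 addresses starting at 192.168.1.100
    installadm create-service -n s11-x86 -a i386 \
        -d /export/auto_install/s11-x86 \
        -i 192.168.1.100 -c 10

    # Check the resulting service and its PXE boot settings
    installadm list -n s11-x86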

Oracle VM Server for SPARC is a high-performance virtualization technology for SPARC servers. It provides native CPU performance without the virtualization overhead typical of hypervisors. The way memory and CPU resources are assigned to domains avoids problems often seen in other virtual machine environments, and there are intentionally few "tuning knobs" to adjust.

However, there are best practices that can enhance or ensure performance. This blog post lists and briefly explains performance tips and best practices that should be used in most environments.
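As one concrete illustration of the kind of tip covered there (the domain name and sizes here are made up, not taken from the post), allocating whole cores and an appropriately sized memory block to a domain with the ldm command is a commonly recommended practice:

    # Allocate 4 whole cores (rather than arbitrary vCPU counts) to the domain
    ldm set-core 4 ldom1

    # Size the domain's memory to the workload
    ldm set-memory 32G ldom1

    # Review the resulting resource bindings
    ldm list -o core,memory ldom1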

Rather than describe this in text, the best thing to do is show it in demo format. Fortunately, the wizardly Steen Schmidt has produced outstanding YouTube videos showing Oracle VM Manager in action at https://www.youtube.com/user/gandalf3100.

First, make sure you have nc(1) available; it is in the pkg:/network/netcat package.

Then configure the COM1 serial port in the VM settings as a pipe: tell VirtualBox the name you want for the pipe and have it create the pipe.
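A minimal sketch of those two steps from the command line (the VM name and pipe path are placeholders, and the same serial-port settings can be made in the GUI instead):

    # Install netcat on the Solaris host if it is not already there
    pkg install pkg:/network/netcat

    # Expose the guest's COM1 as a host pipe that VirtualBox creates itself
    VBoxManage modifyvm "solaris11-vm" --uart1 0x3F8 4 \
        --uartmode1 server /tmp/solaris11-vm-com1

    # Attach to the guest console through the pipe (a local socket on Solaris/Linux hosts)
    nc -U /tmp/solaris11-vm-com1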

As 2012 comes to a close, I thought it would be a good time to look back at some of the changes that have been made to the Trusted Extensions features in Oracle Solaris.

The Linux YAMA Loadable Security Module (LSM) provides a small number of protections over and above standard DAC (Discretionary Access Controls). These can be roughly mapped over to Solaris as follows...
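For instance (this particular example is mine, not taken from the linked article), the best-known Yama protection is the ptrace scope restriction, controlled through a sysctl on Linux:

    # Read the current Yama ptrace restriction (0 = classic, 1 = restricted to descendants)
    sysctl kernel.yama.ptrace_scope

    # Tighten it so only a parent process may ptrace its children
    sysctl -w kernel.yama.ptrace_scope=1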

Intel had produced the Itanium architecture to compete in the higher-end 64-bit arena and eventually sunset its aging 32-bit x86 architecture. With the release of AMD's x64 architecture, and vendors such as Sun Microsystems abandoning the Itanium roadmap for AMD x64, pressure was placed upon Intel to include 64-bit instructions in the x86 chipset. Now, with Intel x86 supporting 64-bit processing, there is little reason for Itanium to exist, placing pressure on the remaining Itanium system vendors.

The uptrack-update command applies patches to your Linux kernel while your system is still running. A Ksplice Uptrack subscription gets you so much more than rebootless kernel updates. Here are some details.

Solaris 11 ships with OpenLDAP to use as an LDAP server. To configure it, you will need a simple slapd.conf file and an LDIF file to populate the database.
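As a minimal sketch of what those two files might look like (the suffix, rootdn, password, and paths are invented for illustration and should be replaced with your own values):

    # /etc/openldap/slapd.conf (minimal example)
    include         /etc/openldap/schema/core.schema
    database        bdb
    suffix          "dc=example,dc=com"
    rootdn          "cn=admin,dc=example,dc=com"
    rootpw          secret
    directory       /var/openldap/openldap-data

    # base.ldif - seed entry, loaded with:
    #   ldapadd -x -D "cn=admin,dc=example,dc=com" -w secret -f base.ldif
    dn: dc=example,dc=com
    objectClass: dcObject
    objectClass: organization
    dc: example
    o: Example Org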

This article describes the Linux out-of-memory (OOM) killer and how to find out why it killed a particular process. It also provides methods for configuring the OOM killer to better suit the needs of many different environments.
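As a small illustration of the sort of tuning the article covers (the pid and values here are examples only), the OOM killer's choices can be inspected and biased per process, or its global behavior changed via sysctl:

    # See how likely the OOM killer is to pick a given process (pid 1234 assumed)
    cat /proc/1234/oom_score

    # Protect that process by lowering its adjustment (-1000 disables OOM killing for it)
    echo -1000 > /proc/1234/oom_score_adj

    # Panic instead of killing processes when the system runs out of memory
    sysctl -w vm.panic_on_oom=1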

A small observation during the acceptance testing of an Oracle RAC 11gR2 cluster on Solaris 10 SPARC. During the "loss of a SAN array" test, we observed a small problem during the resynchronization of ASM diskgroups using volumes under ACFS. We expected the fast resync feature to be used, and yet...

The Oracle Linux team is pleased to announce the availability of Oracle Linux 6.4, the fourth update release for Oracle Linux 6.