Wednesday 27 May 2009

Implementing remote tar solutions with SSH based security

Although AIX, being a UNIX-standards-compliant operating system, comes with a variety of backup and restore tools by default, there are still situations where system administrators find themselves stuck with the available tools. A very common one is being asked to back up a system which has no locally attached tape drive. On AIX, IBM provides Sysback to do remote backups at the filesystem, volume group and directory levels; you can even do a remote mksysb with Sysback. However, Sysback is a licensed product and you have to purchase a separate license for it from IBM. There are a few other commercial products available for remote backup and restore, but of course they come with the constraint of cost.

System administrators have been using a remote tar solution (based on a combination of standard UNIX commands, including dd, tar and rsh) to solve this problem for a long time. Some system administrators also use the rdump command to get this work done.
In this article, I will cover different ways to do remote backup and restore on AIX using the tar command. In the first section, I will describe the traditional way of doing a remote tar by combining it with the rsh and dd commands; in the later section, I will describe the usage and configuration of GNU tar on AIX, which you can easily combine with SSH to secure the data flow over the network during remote backup and restore operations.


Traditional way of using tar for remote Backup/Restore Operations


Just like system administrators of any other UNIX operating system, AIX system administrators can use the tar command in combination with rsh and dd to do remote backup and restore. This solution is useful in environments where corporate policy does not permit system administrators to use any GNU-based tools in production.

In my environment, I have one Lpar named “Jproapp” on a P570 which has no tape drive attached and available for backups. On the other hand, another server, a p640 named “okmedb”, has one DLT tape drive available. I decided to take a remote tar backup of a filesystem “/oradb/data” (mounted on Jproapp) to the tape drive on okmedb. One possible solution would be to mount this filesystem on okmedb using NFS and then take a simple tar backup to the locally available tape drive. However, like many system administrators, I wanted to avoid NFS in production because of its performance issues, so I opted for the traditional way of using tar for remote backup. I simply resolved the hostnames of the two servers on each other (so that I could ping Jproapp by name while on okmedb and vice versa) and then created a user named “operator” on both systems (who should have at least read access to the /oradb/data filesystem on Jproapp).
The next step was the rsh setup for this user. I created a .rhosts file in the $HOME directory of the “operator” user on the okmedb system with the following contents (so that the operator user on Jproapp can remotely execute commands on okmedb):
---------------------------------------------------------------------
Jproapp operator
----------------------------------------------------------------------
I checked that rsh from Jproapp to okmedb works for the operator user by executing the following command on “Jproapp”:
/home/operator> rsh okmedb date

The command should display the date and time from okmedb without asking for any password.

Once the rsh setup is done, the rest is simply the execution of the tar command on Jproapp as follows:
/home/operator> cd /oradb/data
/home/operator> tar -cvf - * | rsh okmedb "dd of=/dev/rmt0 bs=64k conv=block"

To restore the data, you again rely on a similar combination of commands. For example, to restore the same data into the /home/operator/tmp directory on Jproapp, use the following command sequence:

/home/operator> cd tmp
/home/operator> rsh okmedb "dd if=/dev/rmt0 bs=512b" | tar -xvf -

Notice that while taking backups, data can be sent with a large block size (I used 64k). However, when you try to restore data using a large block size, you may face problems. In my case, I got “not enough memory” errors even with a block size of 10k while restoring data that was originally backed up with a 64k block size. Because of this, I had to use a smaller block size of 512b while restoring with the remote tar technique. This no doubt slowed down the restoration process, but the process itself completed without any errors.
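Before pointing such a pipeline at a real tape, it can be rehearsed safely on one machine. In the sketch below (hypothetical paths under /tmp), a plain file stands in for /dev/rmt0 and no rsh is involved, but the re-blocking behaviour of dd is the same.

```shell
# Local rehearsal of the remote tar pipeline: a plain file stands in
# for /dev/rmt0, so no tape drive or rsh setup is needed.
rm -rf /tmp/remtar
mkdir -p /tmp/remtar/src /tmp/remtar/restore
echo "sample data" > /tmp/remtar/src/file1.txt

# "Backup" leg with a large block size, as in the real command:
cd /tmp/remtar/src
tar -cf - . | dd of=/tmp/remtar/archive bs=64k 2>/dev/null

# "Restore" leg with a small block size -- dd simply re-blocks the stream:
cd /tmp/remtar/restore
dd if=/tmp/remtar/archive bs=512 2>/dev/null | tar -xf -
cat file1.txt
```

The restore leg re-blocks the same stream at 512 bytes, mirroring the smaller restore block size discussed above.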


GNU tar, a useful solution for remote backups

GNU tar is the GNU project's tape archiver program, capable of the same backup and restoration tasks as traditional tar. However, it comes with some powerful features which are currently not available in the native tar. Some important ones are as follows:

1. Remote backup/restoration support
2. Large file backup support
3. Incremental backup support
4. Data appending capability on same tape cartridge
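As a quick illustration of the incremental-backup feature (item 3 above), GNU tar records file state in a snapshot file passed through --listed-incremental. This is only a local sketch with hypothetical paths under /tmp; the snapshot file name is my own choice.

```shell
# Level-0 (full) backup followed by a level-1 incremental.
# The snapshot file records file states between the two runs.
rm -rf /tmp/gtar
mkdir -p /tmp/gtar/data
echo "first" > /tmp/gtar/data/a.txt
cd /tmp/gtar
tar --create --file=full.tar --listed-incremental=snapshot data

# Add a file, then back up again against the same snapshot:
echo "second" > /tmp/gtar/data/b.txt
tar --create --file=incr.tar --listed-incremental=snapshot data

# incr.tar now holds only what changed since the full backup:
tar --list --file=incr.tar
```

Only data/b.txt (plus the directory entry) lands in the incremental archive, because a.txt is unchanged since the snapshot was taken.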


More details on this GNU project are available on the project web site:

http://www.gnu.org/software/tar/

Although the other features of GNU tar (like large file support) are also important, in this article we will concentrate only on its remote backup/restoration capability on AIX.
There are many ways to get GNU tar for AIX. You can download the source code of the latest available version from ftp://download.gnu.org.ua/pub/alpha/. You can also obtain an executable version (which can then be run to produce an installp-format binary) from the Bull web site (http://www.bullfreeware.com). However, the easiest way to obtain GNU tar for AIX 5L is the AIX Toolbox for Linux Applications CD or the AIX Toolbox web site (http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html), which provide the software in rpm format. I downloaded the software from the AIX Toolbox web site and installed it on both of my servers (Jproapp and okmedb) using the rpm manager available on AIX:

#/home/root> rpm -i gnu-tar-1.14-2.aix5.1.ppc.rpm

It is very important to note that you have to install GNU tar not only on the server which will use it for backup and restoration, but also on the server which will act as the remote tape server (the server with the tape drive locally attached). The reason is that, in order to access the tape drive on a remote machine, GNU tar uses the remote tape server written at the University of California at Berkeley; according to the GNU tar documentation, this remote tape server must be installed as prefix/libexec/rmt on any machine whose tape drive you want to use remotely. GNU tar calls this rmt component by running rsh or ssh to the remote machine, optionally using a different login name if one is supplied.

A copy of the source for the remote tape server is provided by default with GNU tar. It is Copyright © 1983 by the Regents of the University of California, but can be freely distributed. It is compiled and installed by default along with GNU tar.
As the installation prefix for rpm-based GNU utilities on AIX 5L is /opt/freeware, I was expecting “rmt” to be available under /opt/freeware after the GNU tar installation. However, the installation did not create rmt in any such directory, and execution of GNU tar began to fail. There is no AIX-specific GNU tar documentation which could help me overcome this issue, so I looked for other options. The only “rmt” executable present on AIX 5L is /usr/sbin/rmt, so I decided to create a symbolic link on both servers as follows:
#/home/root> mkdir /opt/freeware/libexec
#/home/root> ln -s /usr/sbin/rmt /opt/freeware/libexec/rmt
And it worked. When I executed the following command on Jproapp, it started a backup over the network using GNU tar and the remote tape drive on the okmedb server:
#/home/root> /opt/freeware/bin/tar -cvf operator@okmedb:/dev/rmt0 /oradb/data/*.dbf

For restoration, I executed (on Jproapp):
#/home/root> cd /oradb/data/tmp
#/home/root> /opt/freeware/bin/tar -xvf operator@okmedb:/dev/rmt0

You can even restore a single file by specifying the name of the file at the end of the above command.
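Since single-member restoration works the same against a file archive as against a remote tape, the mechanics can be sketched locally (all file names here are hypothetical):

```shell
# Create a small archive, then restore just one member from it.
rm -rf /tmp/single
mkdir -p /tmp/single/src /tmp/single/restore
echo "keep" > /tmp/single/src/wanted.dbf
echo "skip" > /tmp/single/src/other.dbf
tar -cf /tmp/single/backup.tar -C /tmp/single/src wanted.dbf other.dbf

# Name the member to extract at the end of the command line:
cd /tmp/single/restore
tar -xf /tmp/single/backup.tar wanted.dbf
ls
```

On the real systems, the archive argument would be operator@okmedb:/dev/rmt0 instead of a local file.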
GNU tar has two main advantages in the above scenario. First, you can easily restore files on the server with the locally attached tape drive without any problem; for example, I can use a simple tar -xvf command on the okmedb system to restore, directly on the local system, data which was backed up remotely from the Jproapp system. Second is, of course, the simplicity of the command line used in the GNU tar solution. I also experienced some performance improvement with GNU tar as compared to native tar combined with the rsh and dd commands.

SSH Integration into remote tar solutions

Many organizations require proper security measures to be enforced before they allow data to flow on their networks. Also, on some Unix/Linux variants, rsh is not enabled by default and ssh is effectively the only remote communication method for these operating systems.
GNU tar supports remote backup and restoration over SSH (although the online GNU tar manuals say nothing specific about SSH support). The trick is to use the --rsh-command parameter available with GNU tar.
To set up GNU tar to work with SSH, I first of all had to install OpenSSH/OpenSSL on the two AIX systems (Jproapp & Okmedb). I used OpenSSH 3.8.1.0 and OpenSSL 0.9.6.7 (available for download from the Bull free software archive web site) and installed and configured the OpenSSH server to run and listen on port 22 (on the Okmedb server) for incoming connections from Jproapp.
As my objective was to take data backups from the Jproapp server over the network to the tape drive on the Okmedb server, I concentrated on configuring OpenSSH so that the jproapp user on the Jproapp server could log in to the Okmedb server using RSA or DSA authentication (without any need for passwords). I generated a public/private DSA key pair without any passphrase on Jproapp:
#/home/jproapp> ssh-keygen -t dsa -b 2048
and then copied it to Okmedb using the scp command:
#/home/jproapp>scp /home/jproapp/.ssh/id_dsa.pub jproapp@okmedb:/home/jproapp
Note: You may face problems using scp at this stage, so it is better to make sure the scp command is in the $PATH environment variable for the user on both servers (either by editing the /etc/environment file or the .profile of that specific user).
Next, I created a .ssh directory in /home/jproapp and created the authorized_keys file (on Okmedb) as follows:
#/home/jproapp> mkdir .ssh; cd .ssh
#/home/jproapp/.ssh> cat /home/jproapp/id_dsa.pub >>authorized_keys
Finally, I tested ssh connectivity from Jproapp to the okmedb server with DSA authentication. The command should return the system date and time from okmedb without prompting for any password:
#/home/jproapp>ssh okmedb date
Once the SSH configuration is done, we can easily use the --rsh-command parameter (abbreviated to --rsh below) with GNU tar on Jproapp to use SSH instead of rsh for remote communication:
/home/jproapp>whereis tar
/opt/freeware/bin
/home/jproapp>tar -cvf jproapp@okmedb:/dev/rmt0 --rsh=/usr/bin/ssh .
The whole data flow over the network will now use SSH. You can verify this by stopping sshd on okmedb and executing the same GNU tar command again (which will now give the following error):
/home/jproapp> tar -cvf jproapp@okmedb:/dev/rmt0 --rsh=/usr/bin/ssh .
ssh: connect to host kmedbold port 22: Connection refused
/opt/freeware/bin/tar: jproapp@kmedbold\:/dev/rmt0: Cannot open: There is an input or output error.
/opt/freeware/bin/tar: Error is not recoverable: exiting now
SSH can also be used with the traditional remote tar technique. For example, in my already-configured scenario, executing the following command on Jproapp starts a remote tar over SSH:
tar -cvf - * | ssh okmedb "dd of=/dev/rmt0 bs=64k conv=block"
In summary, GNU tar provides a comprehensive way of doing remote backups over the network. There is, however, an essential need to have this tool well tested for your specific scenarios and environments. This can easily be achieved by doing frequent restorations for testing purposes as part of a comprehensive backup strategy.
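A cheap way to practise such verification is to list an archive back through the same dd re-blocking it was written with. In this local sketch a plain file (hypothetical path) stands in for the remote tape; on the real systems the read leg would go through ssh okmedb instead.

```shell
# Write an archive through dd, then verify it by listing it back.
rm -rf /tmp/sshver
mkdir -p /tmp/sshver/src
echo "payload" > /tmp/sshver/src/data.dbf
cd /tmp/sshver/src
tar -cf - . | dd of=/tmp/sshver/tape.img bs=64k 2>/dev/null

# List the archive contents without extracting anything
# (real version: ssh okmedb "dd if=/dev/rmt0 bs=512" | tar -tf -):
dd if=/tmp/sshver/tape.img bs=512 2>/dev/null | tar -tf -
```

Listing with -t reads the whole archive, so it catches truncated or corrupt backups without touching any data on disk.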


About the Author: Khurram Shiraz is a senior system administrator at KMEFIC, Kuwait. In his eight years of IT experience, he has worked mainly with IBM technologies and products, especially AIX, HACMP clustering, Tivoli and IBM SAN/NAS storage. He has also worked with the IBM Integrated Technology Services group. His areas of expertise include the design and implementation of high-availability and DR solutions based on pSeries, Linux and Windows infrastructure. He can be reached at aix_tiger@yahoo.com.



Note: This article is again one of my published works; it appeared in the AIX Update July 2007 edition.

Monday 25 May 2009

What combination to select for Oracle RAC on IBM pSeries?

This is in fact a frequently asked question from my customers. They all seem confused by the possible combinations for building Oracle RAC clusters on pSeries servers.

I decided to summarize all these possible options. In one of my coming posts, I will try to summarize the pros and cons of each possible combination.

Oracle Release Recommended Configuration

Oracle 10.2.0.4 AIX 5.3/AIX 6.1 + Oracle Clusterware 10.2.0.4 + GPFS 3.2
Oracle 10.2.0.2 AIX 5.2/5.3 + Oracle Clusterware 10.2.0.2 + GPFS 2.3
Oracle 10.2.0.4 AIX 5.3/AIX 6.1 + Oracle Clusterware 10.2.0.4 + Oracle ASM

OR

Oracle 10.2.0.4 + AIX 5.3 + HACMP 5L + GPFS 2.3
Oracle 9.2 + AIX 5.2/5.3 + HACMP 5L + GPFS 2.3

Sunday 24 May 2009

Concept of GOD or Allah in ISLAM

Let's start with one of the most important but simplest concepts of Islam. In fact, this concept is not unique to Islam; you will find the same concept (somewhat altered and modified) in other holy religions like Christianity and Judaism.

We, as Muslims, believe that there is no one to be worshipped except Allah (God). He alone is the creator of the Universe. He has no son, no wife, no daughters. He is unique and alone. There is no one who can compete with Him in any of His characteristics.

We also believe that Allah sent prophets (or messengers), who were all human beings, from Prophet Adam to Prophet Muhammad (PBUH), with the same message for mankind (WORSHIP ME; NO ONE EXCEPT ME IS ELIGIBLE TO BE WORSHIPPED. I AM YOUR CREATOR, I AM YOUR GOD).

Unfortunately, human beings of the modern age modified this holy message, and now you can see people worshipping worthless things like idols, animals, fire and water. Is it worthy of us, as intelligent human beings, to worship inferior animals like snakes or cows or pigs, which have no intelligence? Is it worth worshipping fire, which can be stopped with drops of water? Or idols with horrifying faces which cannot even remove the flies sitting on them?

Think it over: we have to worship only ONE GOD, who created us and, of course, the whole universe!

Now many non-believers will ask, "Why can we not see Him, if He exists?" This is the most commonly asked question, and even the prophets were asked it. The answer is: yes, you can see Him, but not with your physical eyes. Open your heart with truthfulness and you will be able to see Him everywhere, from blue skies to huge mountains, from blue seas to beautiful flowers. How can these things be created without any creator? When a small thing like a needle cannot come into being without a maker, how can volcanoes, mountains and the whole universe be created without any creator? Can anybody answer this question of mine?

Friday 22 May 2009

One of Main Objectives of our Lives!!!

Today I have decided to add one important category of articles. This category is named "What is Islam - A lovely Truth" and it will contain short and simple articles about the basic principles and beliefs of Islam.

Most of my colleagues and friends know me as a "moderate" Muslim: a Muslim who is strong in his beliefs but may not be practising all the requirements of this holy religion, while striving his best to follow all its instructions. As yet another such attempt, I have decided to start this series of articles, which aims to present Islam in its simplest form to all my readers and, of course, to my non-Muslim friends as well.

Indeed, as Muslims, it is our greatest duty to present Islam to all our non-Muslim friends with love and simplicity.

May Allah help us to save at least one human being from going to Hell and indefinite punishment.

A business on demand solution for backups using Dlpar, SSH & Sudo

Managing system-level backups has been an uphill task for system administrators of P5-based Lpars. In most cases, system administrators are asked to do system-level backups of multiple Lpars with a single tape drive or DVD-RAM drive. This tape, DVD-RAM or CD-RW drive then has to be moved across all these Lpars using IBM DLPAR technology. Some system administrators use the Sysback tool from IBM, which allows them to take AIX-level backups (mksysb, vg backups, filesystem-level backups etc.) to a tape library remotely. It therefore eliminates the need to perform DLPAR operations, as it can take remote backups over TCP/IP. In environments where TSM is available, Sysback can be integrated into TSM, where system administrators can use TSM policies to manage versions of their system-level backups.

In scenarios where no Sysback or TSM is available, system administrators have to rely on the AIX built-in tools mksysb, savevg etc. to protect their system-level configurations and data. All these AIX operating-system-level backup tools can only take backups to devices available locally to the Lpar (like tape drives or DVD drives). While performing any DLPAR operation to move these devices, two important things must be kept in mind:

1. First, these devices should be present in the system as child resources of a “movable” physical adapter. By “movable”, I mean that this physical adapter ideally should not contain any resource required by the system; it should only contain resources declared as desired resources in the Lpar profile.

2. If any other resource exists as a child of the physical adapter which contains the tape drive or DVD-RAM drive, then deletion of the physical adapter PCI slot, and the recursive deletion of the tape drive resource, must be done very carefully.

As the number of Lpars within a physical server increases, with only a few backup devices available, system administrators may find it hard to manage these DLPAR operations, especially when backups have to be done on a daily basis.

In this article, I will guide my readers step by step through a fully automated solution for moving backup devices (like a DVD-RAM device) between Lpars. The solution uses UNIX shell scripting with the basic tools SSH & Sudo to automate the whole job for system administrators.

Basic Features of Solution

The solution comprises the following features:

1. It automatically detects how many Lpars are available on a given physical P5 or P4 system. It then detects and shows the Lpar which currently holds the device in the available state.
2. It asks you to identify the target Lpar (to which the device has to be moved) and then, after confirmation, deletes all parent devices from the source Lpar and performs the required DLPAR operation from source Lpar to target Lpar.
3. After making the device available on the target Lpar, you can use it for taking system, vg or filesystem-level backups.

To perform all these tasks automatically, the solution uses SSH to access the HMC and perform DLPAR operations. As a security requirement, the HMC should not be accessed remotely as the root user, while deleting devices and running cfgmgr on AIX have to be done by a root-equivalent user; to meet both requirements, I also used Sudo as a major component of this automated solution.

The following is a pictorial representation of the whole solution:











Building the whole solution step by step
My scenario consisted of two P570 servers and two P550 servers, all managed by a single HMC. There are six Lpars on P570-1, four on P570-2 and three on each P550. Each of these P5 servers, of course, possesses a single DVD-RAM drive for backup purposes. As a prerequisite of this solution, DLPAR operations should be working on all the servers, and all Lpars should be capable of acquiring processor, memory and IO resources through DLPAR operations.
A) Host Name resolution setup:

I first of all identified one Lpar on P570-2 (aqbtest) as the management Lpar for this whole solution. I established name resolution on this Lpar so that all other Lpars on the P570 and P550 servers are pingable by name and IP address from this management Lpar. The HMC should also be resolvable by hostname from this management Lpar.
One important thing to consider while implementing this solution is that Lpar names (as they appear on the HMC interface) should also be resolvable from the management Lpar. In most cases, hostnames and Lpar names are the same; if they are different, you can still use the /etc/hosts file or DNS to resolve the Lpar names from the management Lpar. This requirement arises from the shell scripting done for this solution and is explained further in Section D.
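For illustration, the /etc/hosts entries on the management Lpar might look like this (the IP addresses are hypothetical examples; the names match hosts used later in this article):

```
# Hypothetical addresses -- substitute your own network values
10.1.1.10   aqbtest        # management Lpar
10.1.1.11   aqbdb          # Lpar on P570-1
10.1.1.50   HMC1           # the HMC
```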

B) SSH access setup:

The second step is establishing an SSH relationship from this management Lpar to every other Lpar, as well as to the HMC.
SSH access to the HMC is established in a slightly different way from SSH access to the Lpars. I used OpenSSH on AIX from the Bull web site (freeware.openssh.rte 3.8.1.0) and installed it along with the OpenSSL library (openssl 0.9.6.7). I created a user named hscadmin on the aqbtest Lpar and also on the HMC (HMC1). I assigned the "Managed system profile" to the hscadmin user on the HMC, and also enabled remote command execution (so that the HMC accepts remote SSH connections).
On the AIX Lpar "aqbtest", I generated an RSA key pair with the following commands:
/home/root> su - hscadmin
/home/hscadmin> ssh-keygen -t rsa ( accept default values with a blank passphrase )
/home/hscadmin> export hscadminkey=`cat $HOME/.ssh/id_rsa.pub`
/home/hscadmin> ssh hscadmin@HMC1 "mkauthkeys -a \"$hscadminkey\""
The above command copies the public key from the AIX Lpar aqbtest to HMC1. Once it is copied, you can also log in to the HMC directly as hscadmin using ssh and verify that the key has been copied successfully by executing the "cat .ssh/authorized_keys2" command.
You should now be able to log in to the HMC from the AIX management Lpar without any password prompt. You can verify this by executing:
/home/hscadmin> ssh HMC1 lsusers

which will show all users present on the HMC.

If you face any problem while logging in to the HMC using ssh, you can always empty the authorized_keys file and then try the above procedure again. To empty this file, follow this command sequence on the AIX management Lpar:
/home/hscadmin> touch /tmp/mykeyfile ( an empty file )
/home/hscadmin> scp /tmp/mykeyfile hscadmin@HMC1:.ssh/authorized_keys2

Now the same management Lpar should also be able to execute commands remotely on all other Lpars without any password prompt. For this, I again decided to use SSH with DSA authentication, so that the management Lpar can log in and execute commands on all Lpars remotely without a password prompt.

I created the hscadmin user (which may be an ordinary user) on another Lpar on the P570-1 server (named aqbdb) and installed OpenSSH on this Lpar. I then generated a DSA key pair on the management Lpar "aqbtest":

/home/hscadmin> ssh-keygen -t dsa -b 2048

/home/hscadmin> scp id_dsa.pub hscadmin@aqbdb:/home/hscadmin/.ssh/dsa_aqbtest.pub

and on the aqbdb Lpar:

/home/hscadmin> touch .ssh/authorized_keys
/home/hscadmin> cat .ssh/dsa_aqbtest.pub >> .ssh/authorized_keys

Now you should be able to log in from the management Lpar aqbtest to the aqbdb Lpar as the hscadmin user, using ssh, without any password prompt.

C) Sudo Setup:

The real security challenge in this solution was resolved by using sudo. On AIX systems and Lpars, most system-related commands like cfgmgr and rmdev can only be executed by the root user, while using the root user for remote command execution is a real security hazard. I therefore decided to use hscadmin as the main user for this solution. This hscadmin user is an ordinary user on the AIX Lpars; with the help of sudo, however, it is allowed to execute commands like cfgmgr and rmdev. On the HMC, the same user is given the "Managed system profile".

I used the Sudo tool from the Bull web site and installed it in the usual way of installing software on AIX. I then configured sudo to allow the hscadmin user on the AIX Lpars to execute rmdev, cfgmgr and the other required commands.
The contents of the sudoers file on the AIX Lpars are as follows. This file has to be created on every Lpar participating in the solution (including the management Lpar) and should ideally have the same contents everywhere.
-----------------------------------------------------------------------------------------------
# Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL

# Samples
# %users ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
# %users localhost=/sbin/shutdown -h now

User_Alias HSC = hscadmin
Host_Alias SERVERS = aqbdb,abqcomm,abqapp,aqbapp,abqdb,bkmecomm,aubcomm,\
kmeapp,bkmedb,kfhapp,kmecomm,abqapp,bkmeapp,kmedb,kfhdb
Runas_Alias HSC = root, hscadmin
Cmnd_Alias RMDEV = /usr/sbin/rmdev
Cmnd_Alias FND = /usr/bin/find
Cmnd_Alias ODM = /usr/bin/odmget
Cmnd_Alias LSLOT = /usr/sbin/lsslot
Cmnd_Alias CFG = /usr/sbin/cfgmgr
Defaults@SERVERS log_year, logfile=/var/log/sudo.log, !authenticate
hscadmin SERVERS = (HSC) FND , ODM , LSLOT , RMDEV , CFG
----------------------------------------------------------------------------------------------


D)Shell Scripts Creation:

There are two shell scripts used in this solution, named cdlpar.sh and cdmov.sh. cdlpar.sh is the main script, to be executed as the hscadmin user from the management Lpar.
There are some static configuration parameters which have to be defined once in this shell script. The most important are the Unit ID and Bus ID (of the bus containing the IO adapter with the CD as its child device). You can gather this information easily from the HMC and put it into cdlpar.sh once.
When you execute cdlpar.sh, it first logs in to the HMC automatically and displays all P5 servers attached to and controlled by this HMC. When you enter your chosen P5 system, it sets the values of Unit ID and Bus ID accordingly.
It then shows all Lpars on your selected P5 system and detects the Lpar containing the CD device.
This Lpar will be the source Lpar for the DLPAR operation (held in the slpar variable in the script). You then input the target Lpar (the Lpar to which you want to move the CD device).
After getting all the necessary information, cdlpar.sh performs the actual DLPAR operation. Before that, however, it calls cdmov.sh on the source Lpar, which deletes all child device definitions from the operating system and then returns control to cdlpar.sh.

If all child devices are deleted successfully by cdmov.sh, the DLPAR operation is performed by cdlpar.sh and finally cfgmgr is executed on the target Lpar to make the CD device available for use.

I made the cdmov.sh script slightly interactive, so that before deleting child devices it shows you all the devices it is going to remove and prompts for a go-ahead. However, this shell script can easily be modified to operate in non-interactive mode.
------------------------------------------------------------------------
#Script Name: cdlpar.sh
#Script Purpose: To detect Backup Device present on which Lpar
#Script Purpose: To get devices related information for CD on server level
#Script Purpose: To invoke real Dlpar operations
#Script Presence: To be present only on management Lpar
---------------------------------------------------------------------------------------

#!/bin/ksh
function chk_err
{
rc=$?
if [[ $rc != 0 ]]
then
echo "Exiting on errors... Please check the problem and resolve"
exit $rc
fi
}


z=`hostname`
echo "Following are the managed systems currently managed by this HMC"

ssh hscadmin@HMC "lssyscfg -r sys -F name" > mansystems

cat mansystems

echo "please enter the managed system , on which you want to do the Dlpar operation"

read msys

case $msys in

9133-55A-SN65C155G) unitid=U787B.001.DNW9488
busid=3
;;
9133-55A-SN65C154G) unitid=U787B.001.DNW947F
busid=3
;;
9117-570-SN65EAFEE) unitid=U7879.001.DQD12AU
busid=2
;;
9117-570-SN65EB03E) unitid=U7879.001.DQD12AT
busid=2
;;
*) echo "please enter the correct choice for managed system"
echo "exiting from script"
exit 1
;;
esac

export unitid
export busid


echo "Following are the lpars on this managed system $msys"


ssh hscadmin@HMC "lssyscfg -r lpar -m $msys -F name" > lparsonsys

cat lparsonsys

echo

echo

ssh hscadmin@HMC "lshwres -r io --rsubtype slot -m $msys --filter \"units=$unitid,buses=$busid\" -F drc_index,description,lpar_name" > iores1

cat iores1 | grep "Other Mass Storage Controller" 1>/dev/null

if [[ $? = 0 ]]
then
slpar=`cat iores1 | grep "Other Mass Storage Controller" | awk -F\, ' { print $3 } '`

export slpar

else

slpar=`cat iores1 | grep "Storage controller" | awk -F\, ' { print $3 } '`

export slpar

fi

echo " On this managed system $slpar is the Lpar with CD drive "

echo

echo

echo Now please enter target lpar to which you want to move CD

read tlpar

echo Getting desired DRC Index value .......

echo please wait......................

sleep 2

ssh hscadmin@HMC "lshwres -r io --rsubtype slot -m $msys --filter \"units=$unitid,buses=$busid\" -F drc_index,description,lpar_name" > iores

cat iores | grep "Other Mass Storage Controller" 1>/dev/null
if [[ $? = 0 ]]
then
drcval=`cat iores | grep "Other Mass Storage Controller" | grep $slpar | awk -F\, ' { print $1 }'`
echo $drcval
else
drcval=`cat iores | grep "Storage controller" | grep $slpar | awk -F\, ' { print $1 }'`
echo $drcval
fi

echo " Trying to move the physical adapter resource from $slpar LPAR to $tlpar LPAR on system $msys"


echo please wait ................

sleep 2


echo First Removing all child devices on $slpar

if [ "$slpar" = "$z" ]
then
/home/hscadmin/cdmov.sh
n=$?
else
ssh hscadmin@$slpar /home/hscadmin/cdmov.sh
n=$?
fi

if [[ "$n" = 0 ]]

then
echo child devices removed successfully ......

echo please wait .......Performing DLPAR operation now.......

sleep 2
ssh hscadmin@HMC "chhwres -r io -m $msys -o m -p $slpar -t $tlpar -l $drcval"

k=$?

if [[ "$k" = 0 ]]

then echo " DLPAR operation completed successfully "

echo

echo

echo " Now running cfgmgr on $tlpar "

echo " Please wait ..................."

if [ "$tlpar" = "$z" ]
then
sudo -u root cfgmgr
nn=$?
else
ssh hscadmin@$tlpar "sudo -u root cfgmgr "
nn=$?
fi
exit 0

else

echo " Dlpar Operation failed "

exit 1

fi

else
echo child devices on system could not be removed ....

echo please remove the problem manually...............

exit 1
fi

exit 0





--------------------------------------------------------------------
# Script Name: cdmov.sh
# Script Purpose: To delete all child devices and clean up
# Script Purpose: from Operating system before actual Dlpar operations
# Script Presence: To be present on all Lpars

----------------------------------------------------------------------


#!/bin/ksh
cd /home/hscadmin
function chk_err
{
rc=$?
if [[ $rc != 0 ]]
then
echo "Exiting on errors... Please check the problem and resolve"
exit $rc
fi
}

echo Now nullifying old devlist file

echo please wait ..................

sleep 2

> devlist
> devlist2

z=`sudo -u root odmget -q name=cd0 CuDv | grep parent | awk -F\" '{ print $2 }'`

y=`sudo -u root odmget -q name=$z CuDv | grep parent | awk -F\" '{ print $2 }'`


pcix=`sudo -u root lsslot -c slot | grep $y | awk '{print $5}'`

echo Detecting PCI device containing cdrom.......

sleep 2

echo The pci device $pcix contains cdrom as a child device

echo

echo

echo Now going to display which devices will be removed by this script

echo please wait .............

echo

echo Generating list of devices........

sleep 2

for i in `sudo -u root odmget -q parent=$pcix CuDv | grep name | awk -F\" '{ print $2 }'`

do

echo $i >> devlist

x1=`sudo -u root odmget -q parent=$i CuDv | grep name | awk -F\" '{ print $2 }'`

for m in $x1
do
echo $m >> devlist
x2=`sudo -u root odmget -q parent=$m CuDv | grep name | awk -F\" '{ print $2 }'`
echo $x2 >> devlist
done
done
echo List of devices to be removed by removing $pcix is as follows:
echo
cat devlist | awk NF > devlist2
echo
echo
cat devlist2
echo Do you want to remove $pcix and all above associated devices ?.....
echo press "y" to proceed or press "n" to exit safely
read ch
case $ch in

y) echo you have selected yes ....
echo Now proceeding to delete devices....please wait
sleep 1
sudo -u root rmdev -dl $pcix -R
exit $?
#chk_err
;;
n) echo you have selected no
echo So nothing will be removed from system
exit 1
;;
*) echo invalid choice
exit 1
;;
esac



E) Solution Roll-out:

The solution can be rolled out in many ways. As described earlier, the main steps are establishing SSH between the management LPAR and the HMC, as well as between the management LPAR and the other LPARs participating in the solution, configuring sudo, and then placing the cdmov.sh script on every LPAR. Once the solution is rolled out successfully, you are freed from the headache of deleting devices from the operating system on the LPARs, and from the manual DLPAR operations themselves, before each backup activity. You can also make the solution fully non-interactive or argument-driven, so that these shell scripts can be scheduled through cron, followed by scheduled mksysb or savevg operations on the LPARs.
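For instance, once the scripts have been made argument-driven, crontab entries on the management LPAR could schedule the whole sequence ahead of the weekly backup. A sketch only: the wrapper script name, its arguments and the times below are hypothetical and would need to match your own environment.

```shell
# Hypothetical crontab entries for the hscadmin user (edit with "crontab -e").
# 01:00 every Saturday: move the CD/DVD drive to the backup LPAR via DLPAR
0 1 * * 6 /home/hscadmin/cdmove_auto.sh P570 Jproapp okmedb >/tmp/cdmove.log 2>&1
# 01:30 every Saturday: kick off the scheduled mksysb/savevg on the target LPAR
30 1 * * 6 ssh hscadmin@okmedb /home/hscadmin/run_backup.sh >/tmp/backup.log 2>&1
```

The half-hour gap between the two entries leaves time for the DLPAR move and cfgmgr to complete before the backup starts.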




Note: This article is one of my published articles in AIX UPDATE, UK. It was published in March 2007 edition of AIX Update.

Tuesday 19 May 2009

Excellent SQL commands for TSM Administrators

* Which client nodes are currently locked from server access?

select node_name from nodes where locked='YES'



* How to use string operators in a select statement in TSM.

select * from actlog where message like 'ANR2565I%'



* Which administrative clients are currently locked from server access?

select admin_name from admins where locked='YES'



* Which client nodes have not specified the correct password lately?

select node_name from nodes where invalid_pw_count <>0



* Which administrative clients have not specified the correct password lately?

select admin_name from admins where invalid_pw_count <>0



* Which nodes in the WINDOWS policy domain are not associated with the daily backup schedule STANDARD?

select node_name from nodes where domain_name='WINDOWS'and node_name-

not in (select node_name from associations -

where domain_name='WINDOWS'and schedule_name='STANDARD')



* Which administrators have policy authority?

select admin_name from admins -

where upper(system_priv)<>'NO'or upper(policy_priv)<>'NO'



* What messages of type E (ERROR) or W (WARNING) have been issued in the time period for which activity log records have been maintained?

select date_time,msgno,message from actlog where severity='E'or severity='W'



* Which administrative schedules have been defined or altered by administrator ADMIN ?

select schedule_name from admin_schedules where chg_admin='ADMIN'



* What are the relative administrative schedule priorities?

select schedule_name,priority from admin_schedules order by priority



* Which management classes have an archive copy group with a retention period greater than 365 days?

select domain_name,set_name,class_name -

from ar_copygroups where retver='NOLIMIT'or cast(retver as integer)>365



* Which management classes specify more than 5 backup versions?

select domain_name,set_name,class_name -

from bu_copygroups where verexists ='NOLIMIT'or cast(verexists as integer)>5



* Which client nodes are using the client option set named SECURE ?

select node_name from nodes where option_set='SECURE'



* How many client nodes are in each policy domain?

select domain_name,num_nodes from domains



* How many files have been archived from each node?

select node_name,count(*)from archives group by node_name



* Which clients are using space management?

select node_name from auditocc where spacemg_mb <>0



* If the reclamation threshold were to be changed to 50 percent for storage pool TAPE , how many volumes would be reclaimed?

select count(*)from volumes -

where stgpool_name='TAPE'and upper(status)='FULL'and pct_utilized <50



* If the DAILY management class in the STANDARD policy domain is changed or deleted, how many backup files would be affected for each node?

select node_name,count(*)as "Files"-

from backups where class_name='DAILY'and -

node_name in (select node_name from nodes where domain_name='STANDARD')-

group by node_name



* For all active client sessions, determine how long have they been connected and their effective throughput in bytes per second.

select session_id as "Session",-

client_name as "Client",state as "State",-

current_timestamp-start_time as "Elapsed Time",(-

cast(bytes_sent as decimal(18,0))/cast((current_timestamp-start_time)-

seconds as decimal(18,0)))as "Bytes sent/second",-

(cast(bytes_received as decimal(18,0))/cast((current_timestamp-start_time)-

seconds as decimal(18,0)))as "Bytes received/second"-

from sessions



* How long have the current background processes been running and what is their effective throughput in time and files per second?

select process_num as "Number",process,-

current_timestamp-start_time as "Elapsed Time",-

(cast(files_processed as decimal(18,0))/cast((current_timestamp-start_time)-

seconds as decimal(18,0)))as "Files/second",-

(cast(bytes_processed as decimal(18,0))/cast((current_timestamp-start_time)-

seconds as decimal(18,0)))as "Bytes/second"-

from processes



* How many client nodes are there for each platform type?

select platform_name,count(*)as "Number of Nodes" from nodes group by platform_name



* How many filespaces does each client node have, listed in default ascending order?

select node_name,count(*)as "number of filespaces"-

from filespaces group by node_name order by 2



* How to display all columns for all tables from syscat.columns without headers

select char(concat(concat(t.tabname,'.'),c.colname),35)as "TC",char -

(coalesce(nullif(substr(c.typename,1,posstr(c.typename,'(')-1)-

,''),c.typename),10),char(c.length,5),c.remarks -

from syscat.columns as c,syscat.tables AS t -

where c.tabname =t.tabname order by tc



* How to examine which volumes are UNAVAILABLE

select VOLUME_NAME,ACCESS from volumes where access ='UNAVAILABLE'



* How to examine which volumes have more than three write errors

select VOLUME_NAME,WRITE_ERRORS from volumes where write_errors >3



* How to examine which volumes have read errors

select VOLUME_NAME,READ_ERRORS from volumes where read_errors >0



* How to examine which volumes have an error state different from No

select VOLUME_NAME,ERROR_STATE from volumes where error_state !='No'



* How to examine which volumes have access different from READWRITE

select VOLUME_NAME,ACCESS from volumes where access !='READWRITE'



* How to examine which volumes have less than ten percent utilization in device class beginning with the letters SUN

select volume_name,pct_utilized,status,access from volumes-

where pct_utilized <10 and devclass_name like 'SUN%'



* How to examine which volumes do not have an access beginning with the letters READ

select volume_name,pct_utilized,pct_reclaim,stgpool_name,-

status,access from volumes where access not like 'READ%'



* How to list the content of all volumes and display the filesize in MB, ordered by client node name, volume name and size

select node_name,-

volume_name,-

decimal(file_size/1024/1024,12,2)mb,-

concat(substr(file_name,1,posstr(file_name,'')-1),-

substr(file_name,posstr(file_name,'')+1))-

from contents -

order by node_name,volume_name,mb



* How to find all clients which store their backup data in the DISKPOOL storage pool

select node_name as "CLIENT NODENAME",-

bu_copygroups.destination as "STGPOOL DESTINATION",-

nodes.domain_name as "CLIENT DOMAIN",-

bu_copygroups.domain_name as "COPYGROUP DOMAIN"-

from nodes,bu_copygroups where -

nodes.domain_name =bu_copygroups.domain_name and -

bu_copygroups.destination=upper('diskpool')and -

bu_copygroups.set_name=upper('active')-

order by nodes.domain_name



* How to find all volumes which have data for a specified client, and their status

select volumeusage.volume_name,-

volumes.access,-

volumes.error_state,-

volumeusage.stgpool_name -

from volumeusage,volumes -

where volumeusage.node_name='ONE-ON-ONE'and-

volumeusage.volume_name=volumes.volume_name -

order by volume_name



* How to find all storage pools where a client (FRED) has stored data

select distinct(STGPOOL_NAME)from OCCUPANCY where node_name='FRED'
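All of the selects above can also be run non-interactively from the administrative command-line client, which makes them easy to embed in shell scripts. A sketch (the admin ID and password are placeholders for your own credentials):

```shell
# -dataonly=yes suppresses headers and footers, -comma produces
# comma-delimited output, so the result can be redirected straight to a CSV.
dsmadmc -id=admin -password=secret -dataonly=yes -comma \
  "select node_name from nodes where locked='YES'" > locked_nodes.csv
```

The same pattern works for any of the queries in this list; just mind the shell quoting around the embedded single quotes.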

Saturday 16 May 2009

Did we suffer today again?

Yes, we lost the final against Korea today, but of course the way the green shirts have played this whole tournament is remarkable.

Even in the final their game was excellent, but we missed our chances and in particular Sohail Abbas did not click. Yesterday I was reading articles and news on fieldhockey.com and realized that every player of the team wanted to win. They know we have not won any major title for a long time and that some titles are now desperately needed, otherwise the game of hockey will vanish from Pakistan.

Anyhow, sometimes we have to accept the fact that everything is not in our hands. We can try our best, and the result is up to the Almighty.

Returning to a very good aspect of this tournament: the interest shown by hockey lovers in Pakistan. They again showed their strength and put so much pressure on the PHF and the government that PTV finally telecast the final match live today. It is a real success for hockey fans.

I must say to the green shirts: "Keep it up". The way you played in this Asia Cup, if you continue to play like this, you may be in the final of the next World Cup.

Commitment, team work and fitness: these are the three things which can revive Pakistan hockey again.

Keep it up,

Thursday 14 May 2009

Why we "hockey Lovers" always suffer?

As I add this new entry to my blog, I am filled with feelings of happiness as well as sorrow.
I am happy because the Pakistani hockey team has reached the final of a big tournament (Asia Cup 2009) after a very long period of time. We, as Pakistani hockey fans, are now so used to losing matches that news of winning matches really astonishes us.

I still remember the days of my childhood, when the green shirts lost only to teams like Australia and West Germany, and I would usually skip meals in sorrow. But now, when the green shirts lose their matches against minnows like China and Belgium, I still hear the news with composure. Internally, I have probably accepted the fact that in the world of hockey we have completely lost our glory...

Anyhow, at least today I am glad. The team has performed well so far in this tournament and we have not lost a single match yet (except that we drew against China in the first match).

Beating India and Malaysia are really good signs...

Now returning to my sorrow. I am sad because I missed the action. I was expecting that some sports channel (from India, Pakistan, Korea or Malaysia) would telecast these matches, but was shocked to hear that NO channel in this modern world would telecast the live or recorded action from this biggest tournament of Asian hockey.
It is surprising to hear that even Malaysian fans missed live action from their national team's matches.

This happened because the MHF sold the exclusive telecast rights of the tournament to an internet-based company, and this company and its website have not been able to telecast a single live match till now. No important match video is available in its so-called store, video on demand or Live TV either.


Thousands of lovers visited this website, but sadly not one of them could see either a live or recorded telecast of any important match.

You can also visit this website and search for a recording of any important match like India-Pakistan, but you will not be able to find one.

This is a most pathetic attitude shown by the MHF towards any tournament. I would blame not only the MHF, but also the AHF and eventually the IHF, which claims to work for making hockey a popular and famous game.

HOW CAN YOU MAKE ANY SPORT POPULAR WITHOUT TELE-COVERAGE nowadays?

Then comes the role of our Indo-Pak TV channels. We have plenty of world-renowned channels in the subcontinent, including PTV, GEO Super, ARY and GEO from Pakistan, and Zee TV, Zee Sports, Doordarshan, Neo Sports, Ten Sports and ESPN from India. Can you believe that these channels can telecast live matches from the IPL, or five-day test matches between Sri Lanka and the West Indies, but cannot telecast the final of the Asia Cup, the Azlan Shah tournament or the Champions Trophy?

Now turn on any sports channel of the subcontinent and you will find every sort of cricket match, from test matches to T20 matches. I am not against cricket, but I would like to say that every sport should be given its rightful place.

My Indian hockey-loving fellows will also agree that although India was a ruler of hockey for a long time and hockey is still their "national game", hockey has been treated as a step-child in India in the last 10-15 years as well (just like in our country).

I don't know why the governments of India and Pakistan still maintain hockey as their "national sport" after this pathetic treatment. They should make cricket or tennis their national sport... Keeping a sport as the so-called "national sport" and then killing even the roots of that sport makes no sense.

Some people would say it is because we have not been winning on the green fields for a long time, that's why people are losing interest in hockey matches, and that we don't have enough sponsors to telecast matches.

But this argument is not acceptable to me. India won the cricket World Cup only once, more than 15 years ago, Pakistan won its only cricket World Cup in 1992, and that's it... since then both India and Pakistan have continued to lose cricket matches. OK, they win too, but the same has happened in hockey as well: sometimes we won and sometimes we lost. On the other hand, at least our hockey history is strong; Pakistan is a four-time world champion and three-time Olympic champion in hockey, and India has won Olympic hockey gold eight times.

I would say it is just the poor marketing strategy of the IHF and AHF due to which this glorious game of hockey is being neglected. Similarly, in our subcontinent, it is just internal politics due to which we "hockey lovers" always suffer... sometimes by hearing news of losing against Japan, and sometimes sitting in front of our TV sets in the hope of seeing a single hockey match, live or recorded.

Friday 8 May 2009

A visit to Failaka island(Kuwait)





It is very hard to find really good places to visit in the vicinity of Kuwait. Last December, however, I got an opportunity to discover a very nice and peaceful piece of the world named Failaka Island.

Failaka Island is located in the northern part of the Persian Gulf. Springtime on Failaka Island is regarded as particularly special by Kuwaitis. Failaka has quite a different ecosystem to mainland Kuwait, and its budding flowers and changing temperatures are much appreciated. Although the island's infrastructure remains poor, Failaka is beginning to develop a local tourist industry; it provides fishing, boating, swimming, sailing and water sports.

Last December, when my family was not in Kuwait, I decided to visit Failaka Island with one of my friends. Tickets to Failaka Island are available at the back side of Marina Mall. You can find a small cabin there, from where you can buy return tickets to Failaka Island for around 12 KD per person. The price also includes an intercontinental lunch at the Failaka Village restaurant.

We started our journey at around 12:00 noon. At the start of the journey the sea seemed very calm and quiet, and we were both very happy to see and feel such calm weather. We spent some time on the deck of the ship (Umm-ul-Khair, which means mother of peace) but were soon asked by the crew to go inside and remain seated, as the sea was becoming rough.
And soon we realized that it was wise advice, as the ship started to swing due to rough and huge sea waves. We were expected to arrive at Failaka Island in 45 minutes, but it took around one and a half hours to reach there due to the bad weather. The journey also cost me at least three vomits, due to the huge swings of the vessel.

Once you reach the peaceful island of Failaka, you really forget the tiredness of the whole journey. I would suggest you go to Failaka Island in pleasant weather; the better months to visit would be from October to March, when there is a little cold breeze and the sun is not so warm. Although the Kuwaiti government has planted a lot of trees across the whole island, there are still places without trees.

You can hire a small sports car on the island for around 10 KD per hour, with which you can ride around the whole island. Proper cars are also available through rental services.

On the west side of the island you will find some ruins of the Iraq-Kuwait war. Iraqi forces really destroyed the whole island. There was a very big glass factory on the island before the war, but Iraqi forces destroyed it completely before leaving.

The government has built a very beautiful but small children's theme park. You can also visit the museum and the old houses of the island.

Very few people actually live on the island now, although before the war there were many. You can also visit their traditional houses and purchase traditional herbal medicines, itars (Arabic perfumes) and other gifts from them.

By the way, if you want to spend a peaceful and calm night on the island, you can take a double-bed room in the Failaka Heritage hotel for 35 KD per night.

Monitor Data centre Temperatures with AIX servers

Temperature inside the data centre is a very important element to monitor in any IT infrastructure. In most IT environments, data centres house a large number of servers, and each of these servers contributes to the temperature inside the data centre. Few people know that as the processing load on servers increases, the overall temperature of the servers' processors increases, and hence the average data centre temperature increases.
On the other hand, if you don't have proper air conditioning and air flow infrastructure inside the data centre, your business could face severe problems in maintaining continuous availability of services.

In this article, I will highlight how you can use your AIX servers to monitor the average temperature within the data centre and generate email alerts when the data centre's temperature rises and enters the danger zone.


Building up the monitoring solution

While you build a temperature monitoring solution with your AIX servers, you have to consider which AIX servers you are going to use. For this purpose, I divide the available AIX servers into two main categories. The first category is those AIX servers which are non-HMC managed but possess different kinds of environmental sensors (including thermal sensors). These servers are a bit old, but still provide a reliable way of determining the average temperature inside data centres.
The second category, on the other hand, comprises relatively new pSeries servers which are managed by an HMC. Although both categories of pSeries servers are capable of being part of a temperature monitoring solution, I will first concentrate on developing the solution using a p440 server, which belongs to the first category, and then use the same basic technique for the second category of pSeries servers.

Let's start with the old p440 server. This kind of old pSeries server has a built-in environmental sensor which can provide valuable information about the whole environment (fan speeds and temperatures of your system).

To check the availability of such environmental sensors, just execute the following command on your pSeries server:

[root@sys /] /usr/lpp/diagnostics/bin/uesensor -a
3 0 11 31 P1
9001 0 11 2100 F1
9001 1 11 2760 F2
9001 2 11 1890 F3
9001 3 11 1890 F4
9002 0 11 5129 P1
9002 1 11 3129 P1
9002 2 11 5129 P1
9002 3 11 12077 P1
9004 0 11 3 P3-V1
9004 1 11 3 P3-V2
9004 2 11 3 P3-V3


However, if your pSeries server does not support these sensors, the following command will return an error message instead of a sensor listing:
/home/root> /usr/lpp/diagnostics/bin/uesensor -l


Based on this tool, I wrote the following script, which sends an SMS to predefined mobile numbers of operations personnel if the processor temperature (which closely tracks the average temperature inside the data centre) rises above 25 C, and generates email alerts for data centre personnel if the temperature is greater than 22 C. The script is run every half hour through the AIX crontab facility and also writes temperature log data which can be used for future analysis.

------------------------------------------------------------------------------------------------
# Script Name: checktemp.sh
# Script Purpose: To monitor average temperature inside data center
# Script Author: Khurram Shiraz
-------------------------------------------------------------------------------------------
#!/bin/ksh
export NSORDER=local
dt=`date`
tmplogs=/tmp/templogs
tmpmess=/tmp/tempmess
z=`/usr/lpp/diagnostics/bin/uesensor -l | grep -p "thermal sensor" | grep Value | awk '{ print $3}'`
echo "$z at $dt" >>$tmplogs
if [[ $z -gt 25 ]]
then
echo " Data Center temperature is critical, $z at $dt" >>$tmplogs
echo " Data Center temperature is critical, $z at $dt" > $tmpmess
mail -s " Data Center Temperature" mishry@kmefic.com.kw khurram@fic.com.kw < $tmpmess
cd /home/kmetsm
./SMS ### A java program for sending SMS to predefined mobile numbers
cd -
exit 0
fi
if [[ $z -gt 22 ]]
then
echo "Data Center temperature is alarming, $z at $dt" >> $tmplogs
echo " Data Center temperature is critical, $z at $dt" > $tmpmess
mail -s " Data Center Temperature" khurram@fic.com.kw < $tmpmess
fi
exit 0


------------------------------------------------------------------------------------------------

The data generated by this script about the computer room temperature has the following format:

21 at Thu May 17 15:17:11 SAUST 2007
21 at Thu May 17 15:30:00 SAUST 2007
21 at Thu May 17 16:00:00 SAUST 2007
21 at Thu May 17 16:30:00 SAUST 2007
21 at Thu May 17 17:00:00 SAUST 2007
21 at Thu May 17 17:30:00 SAUST 2007
21 at Thu May 17 18:00:00 SAUST 2007
21 at Thu May 17 18:30:00 SAUST 2007
21 at Thu May 17 19:00:00 SAUST 2007
21 at Thu May 17 19:30:00 SAUST 2007
21 at Thu May 17 20:00:00 SAUST 2007
21 at Thu May 17 20:30:00 SAUST 2007
21 at Thu May 17 21:00:01 SAUST 2007
21 at Thu May 17 21:30:00 SAUST 2007
20 at Thu May 17 22:00:00 SAUST 2007
20 at Thu May 17 22:30:00 SAUST 2007
20 at Thu May 17 23:00:00 SAUST 2007
20 at Thu May 17 23:30:00 SAUST 2007


The same data can be used for preparing a chart which can be presented to management for review (or placed on an intranet web site for internal review by the operations or IT department). A simple way of doing this is Microsoft Excel, which can create different types of charts.
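To get the log into Excel, a small awk one-liner can turn it into a CSV file first. A sketch: the sample lines below mimic the format written by checktemp.sh, and the file names are illustrative.

```shell
# Sample of the log format written by checktemp.sh ("<temp> at <date>"):
cat > /tmp/templogs <<'EOF'
21 at Thu May 17 15:17:11 SAUST 2007
20 at Thu May 17 22:00:00 SAUST 2007
EOF

# Field 1 is the temperature, fields 3-8 are the timestamp;
# emit "timestamp,temperature" rows for import into Excel.
awk '{ printf "%s %s %s %s %s %s,%s\n", $3, $4, $5, $6, $7, $8, $1 }' \
    /tmp/templogs > /tmp/templogs.csv
cat /tmp/templogs.csv
```

Opening the resulting CSV in Excel and inserting a line chart over the two columns gives the temperature trend directly.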

The only remaining question is how we can achieve the same objective if we don't have any old pSeries server with environmental sensors. As discussed earlier, if you have an HMC-managed pSeries server, you can achieve the same objective using sensors available through the HMC.
The key to the solution in that case is the "lshwinfo" command, which is available for execution on the HMC.

The lshwinfo command displays hardware information such as temperature of the managed system:

Format of this command is as follows:


lshwinfo -r sys -e frame-name [-n object-name | --all] [-F <format>] [--help]
where:

• -r – the resource type to display. A valid value is sys for system.

• -e – the name of the frame the system is in.

• -n – the name of the object to perform the listing on. This parameter cannot be specified with --all.

• --all – list all the objects of a particular resource type. This parameter cannot be used with -n.

• -F – if specified, has a delimiter-separated list of property names to be queried. Valid values are temperature, current, voltage, power, and total_power.

So to get the temperature from the HMC, you can execute the following command on it:

lshwinfo -r sys -e "frame1" -n "object name" -F temperature


Based on this technique, I first established an SSH setup between an AIX LPAR and the HMC, so that I could execute commands on the HMC from one of my AIX LPARs (without any password prompt).
The main steps for setting up SSH between an AIX LPAR and the HMC are as follows.

The first step is the installation of OpenSSH on the AIX LPAR. For this purpose I used the OpenSSH software available on the Bull web site (freeware.openssh.rte 3.8.1.0) and installed it along with the OpenSSL library (openssl 0.9.6.7). I then created a user named hscadmin on the AIX LPAR and created the same user on the HMC (HMC1). I assigned the "Managed system profile" to the hscadmin user on the HMC, and also allowed remote command execution (so that the HMC will accept SSH remote connections).
On AIX lpar "aqbtest", i generated RSA key pairs by following commands
/home/root> su - hscadmin
/home/hscadmin> ssh -keygen -t rsa ( accept default values with blank passphrase )
/home/hscadmin> export hscadminkey=`cat id_rsa.pub`
/home/hscadmin> ssh hscadmin@HMC1 mkauthkeys -a / "$hscadminkey/" ( replace it with back slash while final editing )
The above command will copy public key from AIX Lpar aqbtest to HMC1. Once copied , you can also directly login to HMC as hscadmin using ssh and varify that key has been copied successfully or not by executing " cat .ssh/authorized_keys2 " command.
You should now be able to login to HMC from AIX management Lpar without any password prompt. You can verify by executing
/home/hscadmin> ssh HMC1 lsusers

which will show all users present on the HMC.

If you face any problem while logging in to the HMC using ssh, you can always empty the authorized_keys file and then try the above procedure again. To empty this file, you can use the following command sequence on the AIX management LPAR:
/home/hscadmin> touch /tmp/mykeyfile ( an empty file )
/home/hscadmin> scp /tmp/mykeyfile hscadmin@HMC1:.ssh/authorized_keys2

Once you have tested prompt-less remote login from the AIX LPAR to the HMC, you can use the following shell script to get the average data centre temperature, as seen by the HMC.
----------------------------------------------------------------------------------------------

#!/bin/ksh

export NSORDER=local
dt=`date`
tmplogs=/tmp/templogs
tmpmess=/tmp/tempmess
z=`ssh hscadmin@HMC 'lshwinfo -r sys -e "frame1" -n "KMEobj" -F temperature'`
echo "$z at $dt" >>$tmplogs
if [[ $z -gt 25 ]]
then
echo " Data Center temperature is critical, $z at $dt" >>$tmplogs
echo " Data Center temperature is critical, $z at $dt" > $tmpmess
mail -s " Data Center Temperature" mishry@kmefic.com.kw khurram@fic.com.kw < $tmpmess
cd /home/kmetsm
./SMS ### A java program for sending SMS to predefined mobile numbers
cd -
exit 0
fi
if [[ $z -gt 22 ]]
then
echo "Data Center temperature is alarming, $z at $dt" >> $tmplogs
echo " Data Center temperature is critical, $z at $dt" > $tmpmess
mail -s " Data Center Temperature" khurram@fic.com.kw < $tmpmess
fi
exit 0

------------------------------------------------------------------------------------------------------------------------


Summary:
It is obvious now that data centres equipped with both older types of RISC servers and the latest pSeries servers can easily be monitored with respect to the average temperature inside them. Once you get these temperature values, you can develop charts, or feed the values into a small database for long-term recording and analysis.


Note: This is one of my articles, which was published in the June 2007 issue of AIX Update. Hopefully you enjoyed the uniqueness of the idea behind it.

Wednesday 6 May 2009

An Online Backup Solution using Advanced Features on IBM DS8000

Design and implementation of a foolproof backup strategy has been an important topic for companies over the years. With the growth of data into the terabytes in recent years, organizations are now looking for foolproof backup solutions which can keep their services online and available to users with minimal performance impact during the backup window.

Historically, database administrators have relied on online backup tools and techniques provided by their databases. For example, the Oracle database has supported an online or hot backup strategy using traditional begin backup and end backup statements for many years. Now RMAN is also available, which can be integrated with backup software like TSM or NetBackup to provide online backup solutions for Oracle databases.

The main problem arises when the size of the database is very large (terabytes). In that case, the time for which the database must remain in online backup mode becomes a problem: while tablespaces are in online backup mode, the database generates considerably more redo and performance suffers. So for most organizations it is desirable to make this time period as small as possible. Here comes the role of the latest snapshot techniques. These snapshot tools (the majority of which are provided at the storage hardware level) are a comprehensive way of resolving this problem and form the foundation of strategic backup solutions for such huge databases.

Nearly all high-end IBM storage subsystems provide this kind of snapshot tool. In IBM terminology, it is known as "FlashCopy", which is available as a separately licensed feature for the IBM DS4000, DS6000 and DS8000 storage subsystem series.

This feature is in fact a data snapshot technique which copies data bit by bit at the storage hardware level without any performance impact on the server itself. Normal FlashCopy operations on IBM DS6000 or DS8000 storage subsystems usually take no longer than a few seconds to make a snapshot flash of a source database terabytes in size. Anyone using this FlashCopy technique as an integrated part of an online backup solution is therefore left only with the task of making the snapshot available to the operating system (so that it can be written to tape cartridges, etc.) within the shortest possible period of time.

In this article, I will cover different aspects of the IBM DS8000 FlashCopy feature, along with its implementation and integration, to make a comprehensive and fully automated online backup solution for a very large Oracle database (~1.2 terabytes). It is worth noting that although the FlashCopy feature provided by each IBM storage subsystem series is technically the same, its implementation may vary from series to series, because IBM uses different user interfaces to manage the different series. For example, the DS6000 and DS8000 are managed by the DS Storage Manager running on Windows and Storage HMC (Linux) platforms respectively, while the DS4000 is managed by the FAStT Storage Manager software, which can be installed on a variety of operating systems including AIX and Windows. Similarly, the CLI (command-line interface tool) for the DS6000/DS8000 has many commands that differ from the CLI used for the DS4000 series. In this article I will therefore concentrate on developing an online automated backup solution using the DS CLI for the DS8000 storage subsystem.

Advanced Copy Services from IBM

The DS8000 series advanced Copy Services are powerful data backup, remote mirroring and recovery functions that can help protect data from unforeseen events. Copy Services run on the IBM TotalStorage DS series and are designed to support a wide range of servers, including IBM pSeries, iSeries and zSeries environments.

Comparable Copy Services functions are also available on the IBM TotalStorage Enterprise Storage Server (ESS) Models 800 and 750, as well as on the DS6000 series. Copy Services include the following types of functions:

o IBM TotalStorage FlashCopy®, a point-in-time copy function

o Remote mirror and copy functions, including:

  o IBM TotalStorage Metro Mirror (previously known as Synchronous PPRC)

  o IBM TotalStorage Global Mirror (previously known as Asynchronous PPRC)

You can manage Copy Services functions through the DS8000 series CLI, as well as through the GUI-based interface provided by the IBM TotalStorage DS Storage Manager, which is available on the S-HMC (a Linux-based server supplied with the DS8000 for storage management).

What is FlashCopy Technology?

The FlashCopy feature is designed to provide the ability to create full volume copies of data at the storage hardware level. When you set up a FlashCopy operation, a relationship is established between the source and target volumes, and a bitmap of the source volume is created. Once this relationship and bitmap exist, the target volume can be accessed as though all the data had been physically copied. While the relationship between the source and target volumes exists, a background process copies the tracks from the source to the target volume. IBM FlashCopy thus provides an instant point-in-time flash of the LUNs present on DS storage subsystems. This point-in-time flash contains a consistent snapshot of the original source data (taken at a specific point in time) only if the necessary measures have been taken at the operating system and database level to make it consistent. This is very important because the snapshot is taken at the hardware level: the application or database has no knowledge that a snapshot is in progress. So the success of any backup solution built on IBM FlashCopy (and, in general, on any storage or hardware snapshot technique) depends on the data consistency measures taken during the actual snapshot operation.

Activating FlashCopy Feature on DS8000 Storage Subsystems

FlashCopy, being a premium feature, requires a separate license which can be bought along with the DS storage subsystem or ordered as an upgrade (also called an MES in IBM terminology) for an existing DS storage subsystem.

For DS6000 and DS8000 storage subsystems, it is mandatory to apply the license activation codes (or at least the Operating Environment License code, OEL). This can be done through the DS SMC or through the DS CLI console. Other advanced features like FlashCopy (or PPRC) can be activated after activation of the OEL.

To activate the FlashCopy feature on a DS8000, you must first gather the following information:

  1. What is the machine signature of the DS8000? This is the most important piece of information needed to activate the FlashCopy feature. The machine signature can easily be found with the following DScli commands:

dscli> lssi
Date/Time: March 30, 2005 6:53:05 PM CEST IBM DSCLI Version: 5.0.1.99
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
-    IBM.2107-7520431 IBM.2107-7520430 922   5005076303FFC19D Online Enabled

dscli> showsi IBM.2107-7520431
Date/Time: March 30, 2005 6:53:11 PM CEST IBM DSCLI Version: 5.0.1.99 DS: IBM.2107-7520431
Name         -
desc         -
ID           IBM.2107-7520431
Storage Unit IBM.2107-7520430
Model        922
WWNN         5005076303FFC19D
Signature    896e-c0a3-38e9-5702
State        Online

  2. What is the machine serial number? The serial number of the DS8000 can be taken from the front of the base frame (lower right corner). On the DS command line interface you can also use the lssu command for this purpose.

  3. What is the order confirmation code (OCC)? The order confirmation code is printed on the DS8000 series order confirmation code document, which is usually sent to the client's contact person together with the delivery of the machine.

After noting down the machine serial number, machine signature and OCC, you can access the following IBM website to generate the activation codes for FlashCopy.

https://www-03.ibm.com/storage/dsfa/index.jsp

On this website, after entering all this information for your DS storage unit, you will be redirected to the View Activation Codes window, where you can download your activation codes, copy and paste them, or simply write them down. If you select Download now, you will be prompted to select a file location. The file you download is a very small XML file.

We opted for writing the activation codes in our small notebook; no doubt it is the more handy approach!

In our case, the activation code for FlashCopy which we got from the above website was 234-1934-J153-10DC-01FC-CA7D-5678-5678, so the next step was simply to apply this activation code. We did this using DScli:

dscli> applykey -key 234-1934-J153-10DC-01FC-CA7D-5678-5678 IBM.2107-7520431

Date/Time: 2 May 2005 14:47:06 IBM DSCLI Version: 5.0.3.5 DS: IBM.2107-7520431

CMUC00199I applykey: License Machine Code successfully applied to storage image

IBM.2107-7520431
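If you selected Download now instead of writing the code down, the DS CLI should also be able to apply the downloaded XML key file directly. A hedged sketch, assuming the applykey -file option and a hypothetical file location /tmp/keys.xml:

```shell
# Sketch (assumption): apply the activation key from the downloaded XML
# file instead of typing the key code; /tmp/keys.xml is a hypothetical path.
/opt/ibm/dscli/dscli applykey -file /tmp/keys.xml IBM.2107-7520431
```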

We then verified activation of FlashCopy on the DS8000 using the lskey command:

dscli> lskey IBM.2107-7520431

Date/Time: March 30, 2005 6:53:30 PM CEST IBM DSCLI Version: 5.0.1.99 DS: IBM.2107-7520431

Activation Key        Capacity (TB) Storage Type
================================================
FlashCopy             5             FB
Operating Environment 5             All

Starting with DScli

DScli is a very powerful tool for managing IBM DS storage subsystems. Because of its interactive nature, and because it also supports a scripting mode, it is very handy and can easily be used to automate backup solutions built on DS8000 flash services.

We started building our backup solution with the installation of DScli. For the DS8000, DScli supports nearly every major operating system, including AIX 5L and Windows. We selected one of our LPARs on the P570 to act as the DScli management station. Each DS8000 Storage HMC has an external Ethernet interface which is supposed to be attached to the customer network. We established a separate VLAN for this external network (172.17.20.xx) and assigned the IP addresses 172.17.20.100 and 172.17.20.101 to the DS8000 Storage HMCs' external Ethernet interfaces using the DS Manager interface. We then assigned the IP address 172.17.20.102 to one of the Ethernet interfaces of our AIX LPAR using "smitty chinet" and tested TCP/IP connectivity to both Storage HMCs of the DS8000. We also created a user on the S-HMC with admin privileges so that dscli commands could be executed using this account.

We then installed DScli on our AIX LPAR as the root user. You must have Java 1.4.1 or higher installed on your system in a standard directory. The DS CLI installer checks the standard directories to determine whether Java 1.4.1 or higher exists on your system; if it is not found there, the installation fails. We therefore set our shell environment correctly (a correct JAVA_HOME environment variable), mounted the DScli installation CD, and executed the setupaix.bin -console command as root. This installs DScli in its default directory for AIX, which is /opt/ibm/dscli.
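After installation, a quick sanity check is to ask the newly installed CLI for its version; a minimal sketch, assuming the default install path:

```shell
# Confirm that DScli was installed and can start; "ver" reports the
# DS CLI version in single-shot mode.
/opt/ibm/dscli/dscli ver
```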

We then created a dscli profile, which is the dscli.profile text file. In it we specified the Storage HMCs' IP addresses (as hmc1 and hmc2) along with the user name and password.

Below is the content of the dscli profile used in our scenario:

----------------------------------------------------------------------

#DS CLI Profile

#

# Management Console/Node IP Address (es) are specified using the hmc parameter

# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command line options.

# hmc1 is first SHMC for DS8000

hmc1:172.17.20.100

hmc2:172.17.20.101

username: dsadmin

# The password for dsadmin:

password: passw6sd

# Default target Storage Image ID

devid: IBM.2107-7520431

--------------------------------------------------------------------

We then tested DScli functionality from our AIX Lpar as follows:

/opt/ibm/dscli/dscli lsuser

If everything is configured correctly, this command should list all users on the S-HMC without asking for any password prompt.

Once the DScli setup is done, there are a lot of other things to be done regarding storage configuration on the DS8000 (such as array sites, arrays, volume groups, host systems and open systems volume creation, and configuration of I/O port topology). These are beyond the scope of this article, but good details on the subject can be found in the IBM Redbooks listed in the References section.

In our implementation we created three DS8000 open systems volumes (which could hold our Oracle data filesystems along with the archive log filesystems) and assigned these volumes to the AIX node (bkkwt) using the volume group concept of the DS8000 storage hierarchy. Later, three more open systems volumes (of the same sizes as the previously created ones) were created in the same volume group so that they could be used as target volumes in the FlashCopy relationships.
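The volume creation itself is done with DScli. As a hedged illustration only (the extent pool ID P0, the 400 GB capacity and the volume IDs 1100-1102 are all assumptions, not our actual configuration), open systems fixed-block volumes are created with mkfbvol:

```shell
# Sketch (assumed pool ID, sizes and volume IDs): create three fixed-block
# open systems volumes of 400 GB each, with volume IDs 1100 through 1102,
# in extent pool P0.
/opt/ibm/dscli/dscli mkfbvol -dev IBM.2107-7520431 -extpool P0 -cap 400 1100-1102
```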

SDD 1.6.0, the multipathing software from IBM, was also installed on the AIX host with the proper host attachment script for the DS8000. This software causes the DS8000 LUNs to appear as vpath devices (rather than hdisks) on the AIX operating system.
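Once SDD is installed, the mapping of underlying hdisk paths to vpath devices can be inspected with the SDD query tools, for example:

```shell
# List the vpath-to-hdisk mapping that SDD built for the DS8000 LUNs
lsvpcfg
# Show the state and serial number of each path behind every vpath device
datapath query device
```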

Joining all pieces together – Automated Backup Solution Implementation



Our requirement was to develop an online backup solution for a 2 TB Oracle 9.2 database environment running on AIX 5.3 and HACMP 5.2. We achieved this by integrating DScli and IBM FlashCopy with UNIX shell scripting for this specific environment; in general, however, the same solution can be used (with some scenario-specific changes) for any database which supports online backups.

DS8000 FlashCopy creation and deletion commands were called from shell scripts, and then specific AIX LVM commands were used to make the target LUNs available at the operating system level. As the source filesystems were mounted when the flash operation was performed, special measures were taken in this automated solution to ensure that no write activity was happening during the flash operation; this is the only way to ensure backup consistency. At the database level, Oracle begin backup and end backup SQL commands were used to temporarily suspend write operations, and at the AIX level the "freeze" option of the chfs command was used to ensure that all data in the filesystem cache was written to disk before the start of the FlashCopy operation. This "freeze" option, available only for AIX JFS2 filesystems, removes the need for the AIX "sync" command, which serves nearly the same purpose for JFS filesystems but does not guarantee it. We measured the time required for completion of the actual FlashCopy operation (in our case approximately 20 seconds), so we froze our JFS2 filesystems (containing data and archive logs) for 60 seconds so that no write activity could occur at the OS level during the FlashCopy operation. As soon as the FlashCopy operation completed, the filesystems were thawed and then the Oracle end backup statements were executed.
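For a single filesystem and a single FlashCopy pair, the ordering described above can be condensed as follows (the full multi-filesystem scripts are in Appendix A):

```shell
#!/bin/ksh
# Condensed consistency sequence for one filesystem / one FlashCopy pair
su - bnkora -c "sqlplus /nolog < /scripts/begin_backup.sql"  # tablespaces into hot backup mode
chfs -a freeze=60 /oracle/data1                              # flush JFS2 cache, block writes (60s max)
/opt/ibm/dscli/dscli mkflash -dev IBM.2107-7520431 -nocp 1100:1105  # ~20s point-in-time flash
chfs -a freeze=off /oracle/data1                             # thaw the filesystem
su - bnkora -c "sqlplus /nolog < /scripts/end_backup.sql"    # tablespaces back to normal
```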

We then used powerful AIX LVM commands (including the recreatevg command) to make the target filesystems available on the same AIX server containing the source filesystems. Hence the source filesystems as well as the target filesystems were mounted on the same server in my implementation (although it would have been possible to mount the target filesystems on an AIX node different from the source node). These target filesystems were then backed up to the TSM server using the TSM backup/archive AIX client with the help of the TSM scheduler.

We created two shell scripts to put all these pieces together; they are included in the appendix. One of these scripts, "flashrecreate.sh", creates the FlashCopy, while the other, "flashdisable.sh", deletes the target FlashCopy drives and cleans up all ODM information before the same flash creation process is repeated.

We did not find it mandatory to run the fsck command against the target flash filesystems before mounting them on the AIX server, as we had already used the freeze option of JFS2 filesystems, which ensured that all data in the filesystem cache had been written to disk before the FlashCopy operation started. For implementations using plain JFS filesystems, however, it is mandatory to run fsck against the target filesystems before mounting them on AIX. In our scenario, our backup window allowed us to execute fsck on the target filesystems anyway, so we adopted it as an additional tool to ensure data consistency at the operating system level. We observed that a thorough fsck -y run against all target filesystems (almost 1 terabyte) took almost 45 minutes when the fsck commands were run sequentially.
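Since the checks on the individual target filesystems are independent of each other, the sequential 45-minute window could likely be shortened by running them as parallel background jobs. A sketch of this optimization (an assumption on our part, not the sequential approach used in the Appendix A scripts):

```shell
#!/bin/ksh
# Sketch: run fsck against each flash target filesystem in parallel,
# then wait for every check to finish before mounting anything.
for fs in /flash/oracle/data1 /flash/oracle/data2 \
          /flash/oracle/data3 /flash/oracle/data4 /flash/oracle/archivelogs
do
        fsck -y "$fs" &
done
wait    # block until every background fsck has completed
for fs in /flash/oracle/data1 /flash/oracle/data2 \
          /flash/oracle/data3 /flash/oracle/data4 /flash/oracle/archivelogs
do
        mount "$fs"
done
```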

We selected the nocp option with the mkflash command. In fact, for establishing FlashCopy relationships on the DS8000, you may select one of two possible modes: background copy or no background copy (nocp). The nocp parameter determines whether the data of the source volume is copied to the target volume in the background. If -nocp is not used, a copy of all data from source to target takes place in the background. With -nocp selected, only updates to the source volume cause writes to the target volume, to preserve the time-zero data there. This option is therefore useful in solutions where an instant copy is required to be made available for backup purposes.
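The state of the established relationships can be checked with the lsflash command before deciding to remove them, for example:

```shell
# List the FlashCopy relationships for our three source:target pairs;
# the output shows each relationship and its copy options.
/opt/ibm/dscli/dscli lsflash -dev IBM.2107-7520431 1100:1105 1101:1106 1102:1104
```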

In our solution, as the target flash filesystems have to be mounted on the same AIX system as the source filesystems, they are mounted (and hence archived daily to the TSM server using the TSM scheduler) with different mount points from the originals. In our implementation, for example, they are mounted with mount points prefixed with /flash. As a result, when restoring from the TSM server, it is necessary to create and mount these target filesystems with the same /flash-prefixed mount points (like /flash/oracle/data1). Once restored (say to /flash/oracle/data1) on the DR server, these mount points can easily be changed back to /oracle/data1 using the chfs command before starting the application or database on the DR system. You may also create a post-TSM-scheduler script which uses the chfs command to change the mount points after each TSM scheduled restoration.
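On AIX, changing a mount point back after restoration is a one-line chfs operation per filesystem, for example:

```shell
# Rename the restored mount point from its /flash prefix back to the
# original path, then mount it before starting the database on the DR system
chfs -m /oracle/data1 /flash/oracle/data1
mount /oracle/data1
```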


Appendix A - Scripts

------------------------------------------------------------------------------------------------------------------

# written : For R3BNKORA AIX node

# Date : August 2006

# Script : begin_backup.sql

# Purpose : It will place all oracle tablespaces into begin backup mode

# and hence will ensure database consistency before online backup is

# taken using FlashCopy technique.

--------------------------------------------------------------------------------------------------------------------

connect / as sysdba

alter tablespace BNKORABTABD begin backup;

alter tablespace BNKORABTABI begin backup;

alter tablespace BNKORACLUD begin backup;

alter tablespace BNKORALOADD begin backup;

alter tablespace BNKORALOADI begin backup;

alter tablespace BNKORAPOOLD begin backup;

alter tablespace BNKORAPOOLI begin backup;

alter tablespace BNKORAPROTD begin backup;

alter tablespace BNKORAPROTI begin backup;

alter tablespace BNKORAROLL begin backup;

alter tablespace BNKORASOURCED begin backup;

alter tablespace BNKORASOURCEI begin backup;

alter tablespace BNKORASTABD begin backup;

alter tablespace BNKORASTABI begin backup;

alter tablespace BNKORATEMP begin backup;

alter tablespace BNKORAUSER1D begin backup;

alter tablespace BNKORAUSER1I begin backup;

alter tablespace SYSTEM begin backup;

alter system switch logfile;

alter system switch logfile;

alter system switch logfile;

alter system switch logfile;

--------------------------------------------------------------------------------------------------------------

#

# written : For R3BNKORA AIX node

# Date : August 2006

# Script : end_backup.sql

# Purpose : To bring all Oracle tablespaces back to normal state

-----------------------------------------------------------------------------------------------------------

connect / as sysdba

alter tablespace BNKORABTABD end backup;

alter tablespace BNKORABTABI end backup;

alter tablespace BNKORACLUD end backup;

alter tablespace BNKORALOADD end backup;

alter tablespace BNKORALOADI end backup;

alter tablespace BNKORAPOOLD end backup;

alter tablespace BNKORAPOOLI end backup;

alter tablespace BNKORAPROTD end backup;

alter tablespace BNKORAPROTI end backup;

alter tablespace BNKORAROLL end backup;

alter tablespace BNKORASOURCED end backup;

alter tablespace BNKORASOURCEI end backup;

alter tablespace BNKORASTABD end backup;

alter tablespace BNKORASTABI end backup;

alter tablespace BNKORATEMP end backup;

alter tablespace BNKORAUSER1D end backup;

alter tablespace BNKORAUSER1I end backup;

alter tablespace SYSTEM end backup;

-----------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------------------------

#script name: flashrecreate.sh

#

# written For: R3BNKORA AIX node

# Date : December 2005

# Created By: Khurram Shiraz

# Purpose : Shell script for creating flash snapshots and making them
# available on AIX so that the TSM client can back up the flashed
# filesystems to the TSM server.

-----------------------------------------------------------------------

#!/bin/ksh

TSTFL="/scripts/lockfile"

if [ ! -f $TSTFL ];

then

echo Please ensure that FlashCopy Pairs are already removed before this script execution

echo It seems that they are still in place

echo therefore exiting!!!!

exit 1

else

echo Putting Oracle into hot backup Mode

echo please wait ............................

#

su - bnkora -c "sqlplus /nolog < /scripts/begin_backup.sql"

sleep 10

chfs -a freeze=60 /oracle/data1

chfs -a freeze=60 /oracle/data2

chfs -a freeze=60 /oracle/data3

chfs -a freeze=60 /oracle/data4

chfs -a freeze=60 /oracle/archivelogs

# Execution of DScli commands

/opt/ibm/dscli/dscli mkflash -dev IBM.2107-7520431 -nocp 1100:1105

/opt/ibm/dscli/dscli mkflash -dev IBM.2107-7520431 -nocp 1101:1106

/opt/ibm/dscli/dscli mkflash -dev IBM.2107-7520431 -nocp 1102:1104

chfs -a freeze=off /oracle/data1

chfs -a freeze=off /oracle/data2

chfs -a freeze=off /oracle/data3

chfs -a freeze=off /oracle/data4

chfs -a freeze=off /oracle/archivelogs

# Putting Oracle back to normal Mode

su - bnkora -c "sqlplus /nolog < /scripts/end_backup.sql"

# Now working for Flashed Data.......

#

cfgmgr

# Starting preparation of LVM & VGs for mounting of filesystems

chdev -l vpath0 -a pv=clear

chdev -l vpath1 -a pv=clear

chdev -l vpath2 -a pv=clear

recreatevg -y flashvg1 -Y flash -L /flash vpath0

recreatevg -y flashvg2 -Y flash -L /flash vpath1

recreatevg -y flashvg3 -Y flash -L /flash vpath2

echo "....... now running fsck & mounting fs"

fsck -y /flash/oracle/data1

mount /flash/oracle/data1

fsck -y /flash/oracle/data2

mount /flash/oracle/data2

fsck -y /flash/oracle/data3

mount /flash/oracle/data3

fsck -y /flash/oracle/data4

mount /flash/oracle/data4

fsck -y /flash/oracle/archivelogs

mount /flash/oracle/archivelogs

cd /scripts

rm lockfile

exit 0

fi

------------------------------------------------------------------------------------------------------------

# script name: flashdisable.sh

#

# Written : For R3BNKORA AIX node

# Date : December 2005

# Purpose : Shell script for disabling flash target drives from the TSM
# client node and removing all related OS information.

-----------------------------------------------------------------------

#!/bin/ksh

# Unmount all filesystems which were created during the FlashCopy operation

#

unmount /flash/oracle/data1

unmount /flash/oracle/data2

unmount /flash/oracle/data3

unmount /flash/oracle/data4

unmount /flash/oracle/archivelogs

# Varyoff all Flashcopy volume Groups

#

varyoffvg flashvg1

varyoffvg flashvg2

varyoffvg flashvg3

# Export all Flashcopy volume groups

exportvg flashvg1

exportvg flashvg2

exportvg flashvg3

# Remove all snapshot logical drives (vpaths and associated hdisks)

rmdev -dl vpath0

rmdev -dl vpath1

rmdev -dl vpath2

rmdev -dl hdisk21

rmdev -dl hdisk22

rmdev -dl hdisk23

rmdev -dl hdisk24

rmdev -dl hdisk11

rmdev -dl hdisk13

rmdev -dl hdisk17

rmdev -dl hdisk19

rmdev -dl hdisk29

rmdev -dl hdisk31

rmdev -dl hdisk33

rmdev -dl hdisk35

/opt/ibm/dscli/dscli rmflash -dev IBM.2107-7520431 -quiet 1100:1105

/opt/ibm/dscli/dscli rmflash -dev IBM.2107-7520431 -quiet 1101:1106

/opt/ibm/dscli/dscli rmflash -dev IBM.2107-7520431 -quiet 1102:1104

cd /scripts

touch lockfile

exit 0

References:

IBM white paper "Storage Solutions for Oracle Database: Snapshot Backup and Recovery with IBM TotalStorage Enterprise Storage Server"

IBM Redbook "IBM TotalStorage DS8000 Series: Copy Services in Open Environments", SG24-6788-00

IBM Redbook "IBM TotalStorage DS8000 Series: Concepts and Architecture", SG24-6471-00

About the author: Khurram Shiraz is a senior system administrator at KMEFIC, Kuwait. In his eight years of IT experience, he has worked mainly with IBM technologies and products, especially AIX, HACMP clustering, Tivoli and IBM SAN/NAS storage. He has also worked with the IBM Integrated Technology Services group. His areas of expertise include the design and implementation of high availability and DR solutions based on pSeries, Linux and Windows infrastructure. He can be reached at aix_tiger@yahoo.com.



Note: This article was originally published in October 2007 by SysAdmin Magazine US (www.samag.com). Many electronic versions of this article are now available on the internet.
