Tuesday, 29 December 2009

Easiest way to calculate SAN data transfer rate from AIX

To get a rough measure of the data throughput from your AIX server's FC connections to any SAN-attached storage:

1. Identify a filesystem created on a VG that resides on SAN disks (let's say /data1).

2. time dd if=/dev/zero of=/data1/testfile bs=32768 count=327680

This creates a file of roughly 10 GB in /data1, so make sure you have enough free space there. Once you have the elapsed (real) time reported by the time command, divide the file size by that time to get the data transfer rate in MB/sec.

For example, if the real time reported by the time command is around 1 minute, the data transfer rate is roughly 10 x 1024 / 60 = 170 MB/sec.
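If you prefer to let the system do the arithmetic, the same test can be wrapped in a small ksh sketch (the filesystem path and file size below are assumptions; adjust them for your environment):

#!/bin/ksh
# Rough sequential write test against a SAN-backed filesystem
FS=/data1                        # hypothetical mount point on SAN disks
SIZE_MB=10240                    # ~10 GB test file
START=$SECONDS
dd if=/dev/zero of=$FS/ddtest.out bs=1024k count=$SIZE_MB
ELAPSED=$((SECONDS - START))     # assumes the copy takes at least one second
echo "Wrote ${SIZE_MB} MB in ${ELAPSED} seconds: approx $((SIZE_MB / ELAPSED)) MB/sec"
rm -f $FS/ddtest.out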

Friday, 25 December 2009

VSCSI disks mapping to Physical disks AIX VIO

The other day, during a project implementation, I got stuck on a silly point.

I was implementing Oracle RAC on two P570 servers located at distant sites and fully virtualized through a dual-VIO configuration. We assigned a number of disks from a DS6000 to the VIO servers and named these disks according to the naming convention required by the Oracle DBA. For example, the virtual disk for the second voting disk, as required by the Oracle DBA, was named on the VIO servers as "lparnamevtdisk2"...

We ensured that we had configured the correct disk for the correct purpose by executing the following two commands:
# oem_setup_env
# datapath query device 13 (as we were using SDDPCM/AIX MPIO on the VIO servers)

But when we went back to the AIX client LPARs and executed cfgmgr, we got a large number of disks and became confused: apparently all these disks are vscsi disks and don't carry any serial number that can be matched against the SAN LUN serial numbers...

After some research, I found that in order to map virtual SCSI disks on AIX client LPARs to the original physical disks, you have to execute:

# lscfg -vl hdiskxx (on the AIX client LPAR) and note down the serial number (which could be something like L86xxxx). Then go to the VIO partition and check for the same serial number on the virtual target devices with the command lsmap -all, or, if you know the exact vhost for your client LPAR, you can use lsmap -vadapter vhostx.
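A minimal sketch of the whole mapping workflow (the hdisk and vhost numbers here are hypothetical):

# lscfg -vl hdisk4 (on the AIX client LPAR; note the serial/LUN identifier)
$ lsmap -all | grep -i L86xxxx (on the VIO server as padmin; then read the surrounding stanza to see the backing device and vhost)
$ lsmap -vadapter vhost2 (if you already know which vhost serves this client LPAR)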

Guys! I am Back....

Guys,

Sorry, I was away on annual vacation for more than a month, so I couldn't update my blog for a while... Now I am back, so you can expect some new material on this website very soon...

Thursday, 19 November 2009

WPARs versus LPARs - A quick comparison

While LPARs and WPARs are both virtualization features of IBM Power systems, there are inherent differences between them.

I would say the major difference between LPARs and WPARs is that LPARs are a hardware-based virtualization approach while WPARs are a software-based one.
WPARs are lightweight and quicker to install, because they share many of the file systems and resources of the global AIX system in which they reside.
On the other hand, while using an LPAR requires you to install an entire operating system, creating a system WPAR only installs private copies of a few file systems, and application WPARs share even more of the global system's resources. As a result, a WPAR can be created in just a few minutes without installation media. Ongoing administration and maintenance of WPARs should be simpler: fewer AIX licenses might be required, and you don't have to install fixes and updates on so many virtual systems. There is a command for synchronizing the filesets of a WPAR with the corresponding filesets on the global system (see the example below), so you have the choice of propagating AIX fixes to WPARs or continuing to run with the current versions of system files.
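Assuming the synchronization command referred to here is syncwpar (available on the AIX 6.1 global system), a quick sketch:

# syncwpar mywpar (synchronize one WPAR; mywpar is a hypothetical name)
# syncwpar -A (synchronize all system WPARs defined on this global system)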

While LPARs offer a significantly higher degree of workload isolation, WPARs might provide "good enough" isolation for your particular workloads, especially temporary ones such as development or test environments. Similarly, with LPARs you can achieve a greater degree of control over the usage of resources, by allocating entire processors or precise fractions of processors to an LPAR, for example. With WPARs, you don't have such fine control over resource allocations, but you can allocate target shares or percentages of CPU utilization to a WPAR (if you have used the AIX Workload Manager, you will find the share and percentage resource allocation scheme familiar). Similar differences exist for the allocation of memory, number of processes, and other resources.
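As an illustration of the share/percentage style of WPAR resource control mentioned above, here is a rough sketch only; the attribute names and value format are given from memory, so verify them against your AIX 6.1 documentation before use:

# chwpar -R active=yes shares_CPU=30 CPU=10%-30%,50% mywpar (give a hypothetical WPAR 30 CPU shares, with min/soft-max/hard-max CPU percentages)
# lswpar -L mywpar (review the WPAR details, including its resource controls)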

Sunday, 15 November 2009

NPIV or Virtual SCSI- Which one is better?

Over the last few days I have been investigating the real differences between the NPIV and virtual SCSI options available for virtualization on the IBM pSeries platform. Frankly, I could not find much, except for two cases: if you have an FC-attached tape library or other tape device that you want to share between your LPARs (a rare case in today's market, which is dominated by enterprise backup solutions like TSM or Veritas that do not require this kind of setup), or when you simply want to avoid the word "SCSI" in front of your management, you can opt for NPIV. Otherwise, the vscsi approach is well established in terms of performance and reliability.

However, NPIV has the following minimum requirements:

1. An 8 Gb FC adapter (NPIV capable)

2. NPIV-capable SAN switches

3. VIOS 2.1

So the rule of thumb is: if you don't have the above-mentioned luxuries, stick with the VSCSI approach. A quick way to check whether your environment is NPIV-ready is sketched below.
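Assuming you are already on VIOS 2.1 or later, a minimal sketch for checking NPIV readiness and mapping a virtual FC adapter (the adapter names are hypothetical):

$ lsnports (run as padmin; fabric=1 means the physical port and the attached switch support NPIV)
$ vfcmap -vadapter vfchost0 -fcp fcs0 (map a virtual FC server adapter to an NPIV-capable physical port)
$ lsmap -all -npiv (verify the virtual FC mappings)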

Wednesday, 4 November 2009

World cup hockey 2009 - Qualifying Round France

The qualifying tournament for the 2010 Hockey World Cup is currently being played in Lille, France.

Our team, the Pakistan hockey team (the so-called Green Shirts), has played nicely so far and has won all three matches, against Russia, France and Italy, by quite big margins. This is good news for hockey lovers in Pakistan, but in my view, until the Pakistani boys win the final of this tournament we should not be too happy. The reason is that our team has played very well in some other recent tournaments but lost its spirit in the final match. Here in Lille there is no such margin for error: Pakistan will play in the 2010 World Cup if and only if they win this tournament.

The good news is that Sohail Abbas is scoring well on penalty corners. This is a healthy sign.
I hope that, inshallah, the Pakistani team will win this very important tournament, as it is a do-or-die situation for the game of hockey in Pakistan.

Another good thing is that the FIH is covering this tournament well. Highlights of all matches are available on YouTube and also on the following website:

http://www.worldcupqualifiermenfrance.sportcentric.com/vsite/vcontent/page/custom/0,8510,5227-199239-216462-47641-301903-custom-item,00.html

Friday, 30 October 2009

AIX 32 and 64 bit Dilemma

Software requirement for 64 bit AIX OS

Besides the hardware requirement for running a 64-bit operating system on IBM POWER systems, the other main requirement is a fileset: bos.64bit is the Base Operating System 64-bit runtime fileset. If bos.64bit is not installed, you do not have the /etc/methods/cfg64 file. Without the /etc/methods/cfg64 file, you will not have the option of enabling or disabling the 64-bit environment via SMIT, which updates the inittab with the load64bit line (simply adding this line does not enable the 64-bit environment).

The command lslpp -l bos.64bit will reveal whether this fileset is installed. The bos.64bit fileset is on the 4.3.x media; however, installing it does not ensure that you will be able to run 64-bit software.

With the bos.64bit fileset installed on non-64-bit hardware, you should be able to compile your 64-bit software; however, you will not be able to run 64-bit programs on your 32-bit hardware.



Hardware required

You must have 64-bit hardware to run 64-bit applications. At AIX levels 4.3.2 and 4.3.3, to determine whether your system has 32-bit or 64-bit hardware architecture:

Log in as root.
At the command line, enter:
bootinfo -y

This produces an output of either 32 or 64, depending on whether the hardware architecture is 32-bit or 64-bit.

In addition, if you enter lsattr -El proc0, at any version of AIX, the output of the command should return the type of processor for your server.

The types of 64-bit processors are as follows:

PowerPC_RS64
PowerPC_RS64 II
PowerPC_RS64 III
PowerPC_Power3
PowerPC_Power3 II
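Putting the above checks together, here is a small ksh sketch that reports both the hardware and (on AIX 5.1 and later) the kernel bitness:

#!/bin/ksh
# Report hardware and kernel bitness on AIX
HW=$(bootinfo -y)                           # hardware: 32 or 64
KERN=$(bootinfo -K)                         # running kernel: 32 or 64 (AIX 5.1+)
PROC=$(lsattr -El proc0 -a type -F value)   # processor type
echo "Hardware: ${HW}-bit  Kernel: ${KERN}-bit  Processor: ${PROC}"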



Kernel extensions vs. 64-bit kernel


To determine if the 64-bit kernel extension is loaded, from the command line enter:

genkex | grep 64

You should see information similar to the following:

149bf58 a3ec /usr/lib/drivers/syscalls64.ext

NOTE: Having the driver extensions does not mean that the kernel is a 64-bit kernel. A 64-bit kernel became available at the 5.1 oslevel.

The driver extensions just allow a 64-bit application to be compiled on a system running the 32-bit kernel. If the 32-bit kernel is running on a 64-bit processor, syscalls64.ext will allow the 64-bit application to execute. Still, at 5.1, a 64-bit kernel on a 64-bit processor gives better performance with 64-bit applications.

To truly change the kernel to 64-bit, you need to be at the 5.1 oslevel. The procedure to switch kernels is:

From 32-bit to 64-bit:

ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
lslv -m hd5
bosboot -ad /dev/ipldevice
shutdown -Fr
bootinfo -K (should now be 64)

To change the kernel back from 64-bit to 32-bit:

ln -sf /usr/lib/boot/unix_mp /unix
ln -sf /usr/lib/boot/unix_mp /usr/lib/boot/unix
lslv -m hd5
bosboot -ad /dev/ipldevice
shutdown -Fr
bootinfo -K (should now be 32)


32-bit and 64-bit performance comparisons on IBM POWER systems


To examine the benefits and drawbacks of going from 32-bit to 64-bit mode and the further effects on the system, consult AIX 64-bit Performance in Focus, which is available from IBM Redbooks.

In most cases, running 32-bit applications on 64-bit hardware is not a problem, because 64-bit hardware can run both 64-bit and 32-bit software. However, 32-bit hardware cannot run 64-bit software. To find out whether any performance issues exist for applications running on the system, such as Lotus Notes and Oracle, refer to those applications' user guides for their recommended running environments.

Tuesday, 27 October 2009

Restricting your AIX Error Logs

Sometimes you do not want certain error conditions to show up in the error log. If at this very moment you think "what a silly idea this is", please refrain from any further judgement; eventually you will get the picture.
The AIX error reporting facilities use templates in order to know what conditions constitute an error, and how to collect and display the information associated with them.
For those in need of more in-depth info, please look it up in the AIX docs or online.
Instructing the error logging facilities what not to report and/or not to include in the log (among many other things) is done with the help of the errupdate command. This command can process directives contained in an ASCII file or entered directly on the command line. The error IDENTIFIER is used to identify the error you want to work with. Multiple entries (error IDENTIFIERs and the processing instructions associated with them) must be separated with a blank line.
The next few lines show an interaction with errupdate via the command line:

root@MarcoPolo: /root> errupdate
=B6048838:
REPORT=FALSE

The first character you type is the = character to indicate modification of existing reporting behaviour associated with error label B6048838. Do you notice the : character following the error label? After you hit the Enter key, you can enter any of the following directives: REPORT, LOG and ALERT. Each may equal either TRUE or FALSE. When you are done, hit Enter twice to activate the changes.
REPORT - The info about events for which REPORTING is disabled is saved in the error log but it is not displayed with the errpt command.
LOG - The info about events for which LOGGING is disabled is not sent to the error log file.
To achieve identical results using an ASCII file to specify the modifications, follow the procedure below:
root@MarcoPolo: /root> mkdir -p /var/adm/errorFilter
root@MarcoPolo: /root> cd /var/adm/errorFilter
root@MarcoPolo: /var/adm/errorFilter> vi errorFilter      (edit the file to your satisfaction)
root@MarcoPolo: /var/adm/errorFilter> cat errorFilter
=B6048838:
REPORT=FALSE
LOG=FALSE
ALERT=FALSE

root@MarcoPolo: /var/adm/errorFilter> errupdate ./errorFilter
0 entries added.
0 entries deleted.
1 entries updated.
The result will be not only the required modifications but also a file named errorFilter.undo in the same directory as errorFilter; its name reveals its purpose.
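To confirm that the change took effect, you can display the template for that identifier; a quick check (using the same B6048838 example):

root@MarcoPolo: /var/adm/errorFilter> errpt -at -j B6048838
The Log, Report and Alert fields of the detailed template should now show FALSE.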

Friday, 23 October 2009

How to backup your VIO Server

Backing up the Virtual I/O Server

There are four different ways to back up/restore the Virtual I/O Server, as illustrated in the following table.

Backup method             Restore method
To tape                   From bootable tape
To DVD                    From bootable DVD
To remote file system     From the HMC using the NIMoL facility and installios
To remote file system     From an AIX NIM server


Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, the following steps must be performed:

1. Check the status and the name of the tape/DVD drive
# lsdev | grep rmt (for tape)
# lsdev | grep cd (for DVD)

2. If the drive is Available, back up the Virtual I/O Server with one of the following commands
# backupios -tape rmt#
# backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, then the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the necessary resources to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and SPOT resource.
The NFS export should allow root access to the Virtual I/O Server, otherwise the backup will fail with permission errors.

To back up the Virtual I/O Server to a file system, the following steps must be performed:

1. Create a mount directory where the backup file will be written
# mkdir /backup_dir

2. Mount the exported remote directory on the directory created in step 1
# mount server:/exported_dir /backup_dir

3. Back up the Virtual I/O Server with the following command
# backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.


Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You could also restore the Virtual I/O Server from a NIM server. One of the ways to restore from a NIM server is from the mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a NIM server from a mksysb image, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a file system, the following steps must be performed:

1. Create a mount directory where the backup file will be written
# mkdir /backup_dir
2. Mount the exported remote directory on the newly created directory
# mount NIM_server:/exported_dir /backup_dir
3. Back up the Virtual I/O Server with the following command
# backupios -file /backup_dir/filename.mksysb -mksysb
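If you want this mksysb-style backup taken regularly, here is a minimal sketch of a crontab entry for the padmin user (the mount point and file name are assumptions for illustration):

0 2 * * 0 /usr/ios/cli/ioscli backupios -file /backup_dir/vios1.mksysb -mksysb
(runs a weekly VIOS backup every Sunday at 02:00 to the NFS-mounted /backup_dir)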

How to upgrade ML/TL of AIX through alternate disk installation method

1. Pre-installation checks

To check package/fileset consistency
# lppchk -v

If we find errors, we can get more information about the problem and resolve it before continuing with the installation.
# lppchk -v -m3

Check the currently installed ML/TL
# instfix -i | grep ML
# oslevel -s

Check Rootvg

Commit all packages/filesets installed on the server
# smit maintain_software

Check whether rootvg is mirrored and all LVs are mirrored correctly (excluding dump and boot volumes). If your rootvg is not mirrored, you can skip the disk-freeing steps for alt_disk_install later in this document.
# lsvg -p rootvg
# lsvg rootvg
# lsvg -l rootvg


2. Preinstallation Tasks

Check for HACMP cluster

Check whether cluster software is installed and whether HACMP is running on the server.

# lslpp -l | grep -i cluster
Check whether the cluster processes are active
# lssrc -g cluster

If HACMP is used, a current fix pack for HACMP should be installed when a new AIX Technology Level is installed. Currently available HACMP fix packs can be downloaded via http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/home.html



3. Check for IBM C/C++ compiler

Compiler updates need to be installed along with the TL upgrade. They can be downloaded from the link below.
http://www-1.ibm.com/support/docview.wss?rs=2239&uid=swg21110831

4. Check for Java version

If Java is used, current software updates for the Java version(s) should be installed when a new AIX Technology Level is installed. If Java is being used in conjunction with other software, consult the vendor of that software for the recommended Java levels.

The Java version(s) installed on AIX can be identified with the command
# lslpp -l | grep -i java

The default Java version can be identified with the command
# java -fullversion
Java fixes can be downloaded from the IBM support site.


5. Check for the recommended TL/SP for the system

Get information on the latest TL/SP for the system using the Fix Level Recommendation Tool, available at the link below:
http://www14.software.ibm.com/webapp/set2/flrt/home

Download the latest updates from the IBM Fix Central website and place them on the NIM server.

Create the resources on the NIM server.

Take a mksysb backup of the server to be on the safe side.

Check the compatibility of any running applications and confirm it with the application owner.

6. Free hdisk1 for alternate disk installation

Remove the secondary dump device, if present, from hdisk1, then point the secondary dump device to /dev/sysdumpnull.
# sysdumpdev -P -s /dev/sysdumpnull

Unmirror rootvg
# unmirrorvg rootvg

Migrate any logical volumes that are not mirrored from hdisk1 to hdisk0
# migratepv hdisk1 hdisk0

Clear the boot record from hdisk1
# chpv -c hdisk1

Add a new boot image to the first PV to get a fresh boot record, just to be on the safe side
# bosboot -ad /dev/hdisk0

Set the bootlist to hdisk0
# bootlist -m normal hdisk0 hdisk1 (after the installation, hdisk1 will contain the upgraded OS)

Remove the second PV from rootvg
# reducevg rootvg hdisk1



7. Alternate disk migration

Carry out the alternate disk installation via NIM on hdisk1. We first carry out a preview install; if the preview succeeds, we go ahead and install the TL/SP in applied mode. (A command-line equivalent is sketched below.)
# smit nimadm
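For reference, the same migration can be driven from the NIM master command line instead of smit; a sketch with hypothetical NIM resource and client names:

# nimadm -j nimadmvg -c lpar01 -s 610_spot -l 610_lpp_source -d hdisk1 -Y
(-j names the volume group used for the nimadm cache file systems on the NIM master, -c the NIM client, -s the SPOT, -l the lpp_source, -d the target disk on the client, and -Y accepts the software licence agreements)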

Reboot the system. It will boot from hdisk1, which contains the upgraded OS.
# shutdown -Fr



8. Recreate the mirror of rootvg

After a few days of stable operation and some tests by the application users:

Remove the alternate disk installation definition
# alt_disk_install -X

Add disk hdisk0 to rootvg
# extendvg rootvg hdisk0

Check the estimated dump size
# sysdumpdev -e

Re-create the secondary dump device
# sysdumpdev -P -s "dump_device"

Mirror rootvg onto hdisk0 in the background
# nohup mirrorvg -S rootvg hdisk0 &

Create a boot image on hdisk0
# bosboot -ad /dev/hdisk0

Add both disks to the bootlist
# bootlist -m normal hdisk0 hdisk1

Synchronize rootvg
# nohup syncvg -v rootvg &

Friday, 16 October 2009

Tip: A small script to notify new error entries in error log

Although IBM is now pushing the Systems Director concept into AIX as well to monitor the overall health of a system, I still find the following small shell script very helpful; it can be used to send a notification whenever new errors appear in the AIX error log.
-----------------------------------------------
#!/bin/ksh
# Script to notify new errors in AIX error log


TOTALERRS=`errpt | grep -v "IDENTIFIER" | wc -l`

if [ ! -f /usr/local/bin/errpt.count ]
then
    echo 0 > /usr/local/bin/errpt.count
fi

OLDERRS=`cat /usr/local/bin/errpt.count`
((NEWERRS=TOTALERRS-OLDERRS))

if [ ${NEWERRS} -gt 1 ]
then
    echo "Please check errpt, ${NEWERRS} errors found!" | /usr/bin/mailx -vs "`hostname`: errpt report" recipient@domain.com
elif [ ${NEWERRS} -gt 0 ]
then
    errpt | grep -v "IDENTIFIER" | head -${NEWERRS} | cut -c 42- |
    while read ERRMSG
    do
        echo "errpt:${ERRMSG}" | /usr/bin/mailx -vs "`hostname`: errpt report" recipient@domain.com
    done
fi

echo ${TOTALERRS} > /usr/local/bin/errpt.count

exit 0
-----------------------------------------------

Wednesday, 14 October 2009

My trip to Istanbul: Fascinating city of civilizations








Istanbul has a long and fascinating history which spans many centuries and three prominent eras: it starts with the Byzantine era, followed by the Romans, and then the Muslim era (the Ottoman empire). Within just a few metres of each other you will find symbols of all these historical eras, and you become deeply impressed by the greatness of this historic city.

We reached Istanbul's SAW airport around 11 o'clock in the morning. SAW airport is around 51 km from the city, and it took around two hours to reach our hotel, which was located in the Beyazit area of Istanbul. I was horrified by the traffic jams on the roads of Istanbul, but the fact is that, like all other big cities of the world, Istanbul also faces traffic problems. They have both trams and a metro in the city, but traffic jams are still common.

Our hotel was small but clean. The main advantage was that it was very close to the main tourist attractions like the Blue Mosque, Hagia Sophia, the Grand Bazaar and Topkapi Palace. We were able to reach all these places on foot within 20 minutes.

We started our first day with a short visit to the Grand Bazaar and Istanbul University, followed by the Beyazit Mosque. All of these locations were very close to our hotel, so we took advantage of that and visited all of them the same morning and afternoon.

On the second day, we visited the Blue Mosque and Hagia Sophia. Both of these places are really wonderful. The only thing I disliked about Hagia Sophia is that the government has converted it into a museum. I think they should have retained it as either a church or a mosque, but converting it into a museum makes no sense to me.

On the third day, we went to the Eyüp Mosque to pray Fatiha for the great Muslim saint and close companion of our Prophet (PBUH).

We then visited Topkapi Palace, which was constructed during the Ottoman period. It is a fantastic palace with walls full of gold; it is memorable, and gives you a sense of the fantastic and glorious era of the Ottoman empire.

On the last day we went to Eminönü port to catch a ferry trip. They charged us around 9 lira per person. It was an hour-long trip but really memorable. I advise all travellers who visit Istanbul not to miss this golden opportunity.

Sunday, 4 October 2009

Changing Herald in AIX

Here are two ways to customize the AIX login prompt.

The first way is to add a "herald" in the default stanza in the /etc/security/login.cfg file as follows

default:
sak_enabled = false
logintimes =
logindisable = 0
logininterval = 0
loginreenable = 0
logindelay = 0
herald = "AIX TIGER HOME\r\nID:"

The second method uses the "chsec" command to modify the same file:

chsec -f /etc/security/login.cfg -s default -a herald="AIX TIGER HOME\r\nID:"

Note: for additional security, I recommend changing the standard Unix "login" prompt to something else like "ID". The "login" prompt almost invariably identifies the system as Unix to hackers.
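To verify the new herald without opening the file in an editor, lssec can read the same stanza back:

# lssec -f /etc/security/login.cfg -s default -a herald
default herald="AIX TIGER HOME\r\nID:"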

Friday, 2 October 2009

WPARS in AIX 6- Part-1

Workload Partitioning is a virtualization technology that utilizes software rather than firmware to isolate users and/or applications.


A Workload Partition (WPAR) is a combination of several core AIX technologies. There are differences of course, but here the emphasis is on the similarities. In this essay I shall describe the characteristics of these technologies and how workload partitions are built upon them.

There are two types of WPAR: system and application. My focus is on system WPARs, as these more closely resemble an LPAR or a separate system. In other words, a system WPAR behaves as a complete installation of AIX. At a later time, application workload partitions will be described in terms of how they differ from a system WPAR. For the rest of this document, WPAR and system WPAR are to be considered synonymous.

AIX system software has three components: root, user, and shared. The root component consists of all the software and data that are unique to that system or node. The user (or usr) part consists of all the software and data that is common to all AIX systems at that particular AIX software level (e.g., oslevel AIX 5.3 TL06-01, or AIX 5.3 TL06-02, or AIX 6.1). The shared component is software and data that is common to any UNIX or Linux system.

In its default configuration a WPAR inherits its user (/usr) and shared (/usr/share, usually physically included in the /usr filesystem) components from the global system. Additionally, the WPAR inherits the /opt filesystem. The /opt filesystem is the normal installation area in the rootvg volume group for RPM and IHS packaged applications and AIX Linux affinity applications and libraries. Because multiple WPARs are intended to share these file systems (/usr and /opt), they are read-only to WPAR applications and users. This is very similar to how NIM (Network Installation Manager) diskless and dataless systems were configured and installed. Since only the unique rootvg file systems need to be created (/, /tmp, /var, /home), creation of a WPAR is a quick process.

The normal AIX boot process is conducted in three phases:
1) boot IPL, or locating and loading the boot block (hd5);
2) rootvg IPL (varyonvg of rootvg),
3) rc.boot 3 or start of init process reading /etc/inittab

A WPAR activation or "booting" skips step 1. Step 2 is the global (i.e. hosting) system mounting the WPAR filesystems, either locally or from remote storage (currently only NFS is officially supported; GPFS is known to work, but is not officially supported at this time, September 2007). The third phase is starting an init process in the global system. This init process does a chroot to the WPAR root filesystem and performs a normal AIX rc.boot 3 phase.

WPAR Management

WPAR management in its simplest form is simply starting, stopping, and monitoring resource usage. And, not to forget, creating and deleting WPARs.

Creating a WPAR is a very simple process: the one-time prerequisite is the existence of the directory /wpars with mode 700 for root. Obviously, we do not want just anyone wandering into the virtualized rootvgs of the WPARs. And, if the WPAR name you want to create resolves either in /etc/hosts or DNS (and I suspect NIS), all you need to do is enter:
# mkwpar -n <wparname>
If you want to save the output you could also use:
# nohup mkwpar -n <wparname> & sleep 2; tail -f nohup.out
and watch the show!

This creates all the WPAR filesystems (/, /home, /tmp, /var and /proc) and read-only entries for /opt and /usr. After these have been made, they are mounted and "some assembly" is performed, basically installing the root part of the filesets in /usr. The only "unfortunate" part of the default setup is that all filesystems are created in rootvg, using generic logical volume names (fslv00, fslv01, fslv02, fslv03). Fortunately, there is an argument (-g) that you can use to get the logical volumes created in a different volume group. There are many options for changing all of this, and they will be covered in my next document, where I'll discuss WPAR mobility.

At this point you should just enter:
# startwpar <wparname>
Wait for the command to return to the prompt; then, from "anywhere", you can connect to the running WPAR just as if it were a separate system. Just do not expect to make any changes in /usr or /opt (software installation is also a later document).
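A minimal end-to-end sketch of the lifecycle described above (the WPAR name mywpar is hypothetical):

# mkdir -m 700 /wpars (one-time prerequisite)
# mkwpar -n mywpar (create the WPAR; the name must resolve in /etc/hosts or DNS)
# startwpar mywpar (activate, i.e. "boot", the WPAR)
# lswpar (list WPARs and their states)
# clogin mywpar (log in to the running WPAR from the global system)
# stopwpar mywpar (shut it down again)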
AIX / HMC Tip Sheet
HMC Commands
lshmc -n (lists dynamic IP addresses served by the HMC)
lssyscfg -r sys -F name,ipaddr (lists managed system attributes)
lssysconn -r sys (lists attributes of managed systems)
lssysconn -r all (lists all known managed systems with attributes)
rmsysconn -o remove --ip {ip address} (removes a managed system from the HMC)
mkvterm -m {msys} -p {lpar} (opens a command line vterm from an ssh session)
rmvterm -m {msys} -p {lpar} (closes an open vterm for a partition)
Activate a partition
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b normal
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b sms
Shut down a partition
chsysstate -m managedsysname -r lpar -o {shutdown|osshutdown} -n partitionname [--immed] [--restart]
VIO Server Commands
lsdev -virtual (lists all virtual devices on VIO server partitions)
lsmap -all (lists mappings between physical and logical devices)
oem_setup_env (change to the OEM [AIX] environment on the VIO server)
Create a Shared Ethernet Adapter (SEA) on the VIO Server
mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 - GigE adapter
ent1 - Virt Eth VLAN 1 (defined with a priority in the partition profile)
ent2 - Virt Eth VLAN 99 (control channel)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(creates ent3 as the Shared Ethernet Adapter)
Create a Virtual Storage Device Mapping
mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on the vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (on vioa, to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (on viob, to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (on vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (on viob)
The VIO client will see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one through vscsi0 and the other through vscsi1.


AIX Performance TidBits and Starter Set of Tuneables


Current starter set of recommended AIX 5.3 Performance Parameters. Please ensure you test these first before implementing in production as your mileage may vary.
Network
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
nfso -p -o nfs_rfc1323=1
NB: network settings also need to be applied to the adapters
nfso -p -o nfs_socketsize=600000
nfso -p -o nfs_tcp_socketsize=600000
Memory Settings
vmo -p -o minperm%=5
vmo -p -o maxperm%=80
vmo -p -o maxclient%=80
Let strict_maxperm and strict_maxclient default
vmo -p -o minfree=960
vmo -p -o maxfree=1088
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10

IO Settings

Let minpgahead and j2_minPageReadAhead default
ioo -p -o j2_maxPageReadAhead=128
ioo -p -o maxpgahead=16
ioo -p -o j2_maxRandomWrite=32
ioo -p -o maxrandwrt=32
ioo -p -o j2_nBufferPerPagerDevice=1024
ioo -p -o pv_min_pbuf=1024
ioo -p -o numfsbufs=2048
If doing lots of raw I/O you may want to change lvm_bufcnt (the default is 9)
ioo -p -o lvm_bufcnt=12
Others left at their defaults that you may want to tweak include:
ioo -p -o numclust=1
ioo -p -o j2_nRandomCluster=0
ioo -p -o j2_nPagesPerWriteBehindCluster=32
Useful Commands
vmstat -v, -l or -s
vmo -o
ioo -o
schedo -o
lvmo
iostat (many new flags)
svmon
filemon
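Before changing anything, it is worth capturing the current values so you can roll back later; a quick sketch:

no -a > /tmp/no.before
nfso -a > /tmp/nfso.before
vmo -L > /tmp/vmo.before
ioo -L > /tmp/ioo.before
(the -a and -L flags list the current values of every tunable; -L also shows defaults and ranges)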

Building High Performance clusters on RHEL

Guys

My latest article, on tricks to build high performance clusters on the Linux platform, has been published in the November edition of Linux Magazine. This edition has been printed and will be available on newsstands around the world by the end of October.
If you really want to convert your old computers into a real high performance cluster, you may look up that article in the magazine.

The article will be available on my blog six months after the publication date, as per the contract.

Sunday, 27 September 2009

Implementing Virtualization on P570 Server

Implementing Virtualization on P570 server

When management asked why our IBM pSeries hardware, in particular CPU, was so expensive and whether there was anything we could do to reduce the cost to the business, we reviewed the CPU usage on the majority of our LPARs and realized (like most Unix shops) that our systems were only about 20% utilized most of the time. Some of these systems had many dedicated processors doing very little work a great deal of the time. This idle time was very expensive when you consider each processor could cost in the range of $40K-$100K.
Rather than "wasting" CPU resources, we needed to find a way of increasing our CPU utilization and reducing our hardware costs. The answer was immediately available to us in the form of IBM's new micro-partitioning technology on the POWER5 (p5) platform. This technology, along with Virtual I/O, is part of the Advanced POWER Virtualization (APV) feature (standard with p595) on System p. Micro-partitioning allows a system to virtualize its processors for sharing by multiple LPARs. Virtual I/O (VIO) provides the capability for an I/O adapter to be shared by multiple LPARs on the same managed system. We chose not to deploy VIO to any business system as we were still concerned with its performance; thus, all LPARs would continue to use dedicated adapters.
In this article, I will describe how we moved to micro-partitioning and how we monitored and measured our shared processor pool. Topics covered will include: a brief introduction to micro-partitioning, sizing guidelines, p5 LPAR configuration pointers, monitoring/measuring the shared processor pool, and performance considerations. If you are new to Power5 and micro-partitioning, there are some excellent Redbooks available from IBM on these topics. I've provided links to the documents I found most useful. I recommend you review these documents to gain a better understanding of micro-partitioning. Like any new technology, implementing virtualization requires careful preparation and planning first.
To determine whether you and your organization are ready for virtualization, you can review the "Virtualization Company Assessment" article on the IBM Wiki site. The information there is quite useful in determining whether you have a grasp on virtualization concepts and are ready to implement virtualization on your p5 systems.

Micro-Partitioning

The old POWER4 (p4) technology allowed us to allocate whole processors only to an LPAR (i.e., CPUs were dedicated to a specific partition and could not be shared with other LPARs). The micro-partitioning feature on the new p5 systems allowed us to move away from the dedicated processor model and share our processing resources among several LPARs at once. With micro-partitioning, processors are assigned to the shared processor pool. The pool is then utilized by shared processor LPARs, which access the processors in the pool based on their entitled capacity and priority (weight).
Each processor can be shared by up to 10 shared processor LPARs. The minimum capacity that can be assigned is 1/10 of a processor (specified as 0.1 processing units). A micro-partition can be either capped or uncapped. A capped LPAR can access CPUs up to the number of processors specified in its profile. An uncapped partition can obtain more processing power if required from the pool, up to the desired number of virtual processors in the LPAR's profile. A weight can be allocated to each uncapped LPAR so that, if several uncapped LPARs are competing for processing power at the same time, the more important systems will be given higher priority for processing resources. The p5 Hypervisor is the virtualization engine behind the APV technology on the p5 system. It is responsible for managing micro-partitioning (and all virtualization) on the p5 platform.
This technology would give us the ability to divide a physical processor's computing power into fractions of a processing unit and share them among multiple LPARs. For example, we could allocate as little as 0.10 processing units as opposed to dedicating an entire CPU. We saw two very big advantages to micro-partitioning. The first was better utilization of physical CPU resources (i.e., our dedicated processor systems were built with spare CPU capacity to cater for peaks in workload and future growth in application demands). This led to wasted CPU resources. With micro-partitions, we could build LPARs with minimal CPU resources but have the capability of dynamically accessing extra CPU power if required. And the second big advantage was more partitions. On a p4 system, once all the CPUs had been allocated to LPARs, you could not build any more partitions. For example on a p650 with 8 CPUs, if you built one LPAR per CPU, you were limited to 8 LPARs per p650. Using micro-partitioning, there can be many more partitions (theoretically up to 254 per 595) than physical CPUs.
Sizing Guidelines
To size our p4 LPARs as p5 micro-partitions, we decided to make the machine (i.e., the p5 Hypervisor) do the hard work for us. We reviewed the average CPU usage for each system to get an idea of the typical utilization. This was done by loading several months of nmon performance data from each system into the nmon2web tool and looking at the resulting long-term reports (see the references section for a link to nmon2web).
We then set the minimum/desired processing units to a value suitable for the anticipated "typical" workload. To cater for peaks in workload, we uncapped the LPAR and allocated an appropriate weight. It was then up to the hypervisor to assign free CPU to whichever LPAR (or LPARs) required it. Our micro-partitions would be uncapped, the Desired Virtual Processors would be higher than required to allow consumption of unused processor, and we would use weights to prioritize unused processor to critical production systems. Also, SMT (Simultaneous Multi-Threading) would be enabled to increase CPU utilization by allowing multiple program threads to use a single virtual processor at once (also known as hyperthreading).
Virtual processors are allocated in whole numbers. Each virtual processor can represent between 0.1 and 1.0 CPUs, known as processing units. The number of virtual processors configured for an LPAR establishes the usable range of processing units. For example, an LPAR with two virtual processors can operate with between 0.1 and 2.0 processing units. The "Desired Processing Units" parameter sets the desired amount of physical CPU that will always be available to an LPAR, known as the entitled capacity. The "Desired Virtual Processor" option sets the preferred number of virtual processors that will be online when the LPAR is activated. It also establishes the upper limit of "uncapped" processor resources that can be utilized by the LPAR. For example, an uncapped LPAR with two (desired) virtual processors can access, at most, two physical processors.
The "Maximum Processing Units" setting defines the number of processing units that can be dynamically assigned to an LPAR. For example, an LPAR with two processing units and a maximum of six will be able to dynamically increase its processing units by allocating an additional four processing units (via the DLPAR operation on the HMC). The "Maximum Virtual Processors" parameter determines the number of virtual processors that could be dynamically assigned to an LPAR. For example, an LPAR with one virtual processor and a maximum of three would be able to dynamically increase the number of virtual processors to three by allocating an additional two virtual processors (this can be done manually with a DLPAR operation via the HMC).
The important point here is that the "Desired Processing Units" and "Desired Virtual Processors" settings are the key to controlling how much uncapped resource a micro-partition can access in the shared pool.
For example, one of our Web servers was previously configured with four dedicated CPUs. The CPU utilization for this system was less than 50%. A great deal of CPU was being under-utilized. So, the sizing for this micro-partition was:
Desired processing units = 2.0
Maximum processing units = 6.0
Minimum processing units = 2.0
Desired virtual processors = 4
Minimum virtual processors = 2
Maximum virtual processors = 6
Sharing mode = uncapped
Weight = 100
The LPAR is activated with 2.0 desired processing units and 4 desired virtual processors. This means that LPAR will always have access to at least 2.0 processing units but could operate in the range of 2.0 to 4 processing units. If the LPAR requires processing resources beyond its desired processing units of 2.0, the hypervisor will automatically attempt to allocate additional processing capacity to this partition until it reaches the desired number of virtual processors (4). This behavior is depicted in Figure 2 .
Although all 4 virtual processors will appear to be online, AIX will "fold away" any VPs that are not in use and will bring them online when required; this is known as processor folding. Processor folding (introduced in AIX 5.3 TL3) will put idle VPs to sleep and awaken them when the workload demands additional processing power. The processors will still appear when commands such as mpstat are run. If the workload can be satisfied by using less than 4 processors, some of the CPUs will appear idle.
The "Minimum Processing Units" setting defines the number of processing units that must be available in the shared pool for this LPAR to start (in this case 2.0). If the minimum is not available, then the LPAR will not start.
Again, it is important to note that the maximum settings don't represent the amount of processing units that an uncapped LPAR can access. The maximum amount of processing power that an uncapped LPAR can access is limited by the "Desired Virtual Processor" setting.
If we determine that the desired number of processing units for this LPAR is too low, we can use DLPAR via the HMC and manually increase the desired processing units up to its maximum (in this case 6). Likewise, if it is determined that the desired number of virtual processors is insufficient, we could use DLPAR via the HMC and manually increase the number of virtual processors up to its maximum of 6. If we use the HMC to change the desired virtual processors to 6, then the upper limit of uncapped processors that the LPAR could access becomes 6.
The minimum and maximum settings, for processing units and virtual processors, represent the ranges within which the desired values can be dynamically changed (usually via the HMC). It is for this reason that we deliberately set the maximum processing units and maximum virtual processors to a higher number. Thus, we can make adjustments to a partition's configuration without the need for an outage.
Our weighting scheme was designed to ensure that our critical production systems (typically customer facing or external applications performing one of the primary functions of our business, i.e., paying claims and/or selling insurance) were given priority to CPU resources. Weights were assigned based on the criticality of a service, as follows:
Tier  Weight  Description
1     255     Customer supporting critical services
2     200     All other business critical services
3     100     All other production services
4      50     Business development and test services
5      25     Supporting infrastructure services
6      10     Infrastructure development and test services
To view the weight of an LPAR, we could run the lparstat command with the -i flag and look for the "Variable Capacity Weight":
$ lparstat -i | grep Var
Variable Capacity Weight : 100
LPAR Configuration Guidelines
Configuring a micro-partition on a p5 system is similar to creating an LPAR on a p4 system via the HMC. The biggest difference is in configuring the shared processor options in the LPAR's profile. When creating the LPAR, you will be prompted with a choice between Shared and Dedicated Processors. After selecting Shared, you will then need to configure the LPAR's processing units, sharing mode, weight, and virtual processors. Starting with the processing units, you will need to enter a minimum, desired, and maximum value.
Clicking on the "Advanced" button will allow you to enter the sharing mode of the LPAR, which is either capped or uncapped. If you choose uncapped, you will need to enter a weight for this LPAR. Entering the minimum, desired, and maximum number of virtual processors is next. We ensured that the "Allow shared processor pool utilization authority" option was ticked, because this allows us to view the available physical processors in the shared pool by running the lparstat command on the LPAR. To enable this feature, double-click on the LPAR (not its profile), select Hardware, then "Processors and Memory" and tick the "Allow shared processor pool utilization authority" box.
$ lparstat 1 3

System configuration: type=Shared mode=Uncapped smt=On lcpu=2 \
mem=2048 psize=21 ent=0.75

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ---- ----- ----- ----- ----- ------ --- ---- -----
2.6 4.0 0.0 93.4 0.06 8.4 4.0 17.47 1861 28
0.8 3.0 0.0 96.2 0.04 5.3 2.0 17.46 916 26
0.8 2.9 0.0 96.4 0.04 5.1 2.0 17.33 925 16
The "app" column shows the available physical processors in the shared pool. Based on this output, you can determine (among other things) how much processing power is free in your shared processor pool. Looking at the output, we can see that there are 21 processors in the pool (psize) and that roughly 17 processors are available/free in the pool (app). Also we can see that the LPAR is consuming 5.1% (%entc) of its entitled capacity (ent) or 5.1% of 0.75, which equates to 0.04 of a physical CPU (physc).
Migrating
The first phase of this project targeted 50 LPARs for migration to the p595s. The remaining 70 LPARs were scheduled for 2007. Each of these LPARs required an LPAR definition on the p595. Rather than manually create an LPAR definition for each LPAR via the HMC, I used a handy Perl script called createTgtLPAR to automate the LPAR creation process.
The createTgtLPAR script is part of a package from IBM called the mig2p5 tools. These assist in the migration of older RS/6000 and pSeries systems to the new p5 platform. The tools (in conjunction with NIM) can automate several tasks normally required to upgrade and move an LPAR to p5. For example, the tools can migrate a mksysb of the old system to AIX 5.3, create a new LPAR on the p5 system based on the hardware configuration of the old system, size the LPAR based on its workload, allocate required CPU, memory, IO (even virtual I/O), and then install this mksysb image to the new p5 LPAR. The mig2p5 scripts are listed as follows:
/usr/lpp/bos.sysmgt/nim/methods/getSrcUtil
/usr/lpp/bos.sysmgt/nim/methods/getSrcCfg
/usr/lpp/bos.sysmgt/nim/methods/getTgtRsrc
/usr/lpp/bos.sysmgt/nim/methods/genTgtCfg
/usr/lpp/bos.sysmgt/nim/methods/createTgtLPAR
The tools have been incorporated into the nim_move_up utility. For more information, see:
http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.cmds/doc/aixcmds4/nim_move_up.htm
Once the LPARs were created and their configuration was finished (i.e., all necessary network and Fibre Channel adapters were assigned, cabled, and configured), we were ready to start migrating each LPAR. Each LPAR was migrated via a standard NIM mksysb restore, followed by a SAN re-zone of the data volume groups to the new FC WWPNs and an import of each data volume group on the new LPAR. Migrating 50 LPARs took many months of coordination and negotiation with our business units, but by the end of the migration phase we had 50 micro-partitions up and running across our three p595s. The number of processors and micro-partitions in each shared pool varied on each 595 (e.g., 595-1 had 17 processors in the pool with 11 micro-partitions, 595-2 had 21 processors in the pool with 17 micro-partitions, and 595-3 had 36 processors in the pool with 22 micro-partitions). Now, all we had to do was manage them.
Monitoring and Measuring the Shared Processor Pool
One of our main concerns during the planning stage was how we were going to monitor and measure our shared processor pool. We set ourselves the objective that the amount of unused processing capacity available in the shared processor pool would be monitored to ensure that sufficient processing power was available if required. To cater for peak load, we wanted to ensure that between 10-30% of the shared processor pool would be available during core business hours. The question was how?
Fortunately, I discovered several tools of use in this area. The topas command in AIX 5.3 has been enhanced to display cross-partition statistics and recording of this information. The recording functionality was introduced with Technology Level 5 for AIX 5.3. This new topas panel displays metrics for all LPARs running on a p5 system. Dedicated and shared partitions are displayed in separate sections with appropriate metrics. The -C option to the topas command allowed us to view cross-partition information:
$ topas -C
Topas CEC Monitor Interval: 10 Sat Nov 18 22:44:39 2007
Partitions Memory (GB) Processors
Shr: 22 Mon: 122 InUse: 109 Shr: 17 PSz: 36 Shr_PhysB: 8.40
Ded: 0 Avl: - Ded: 0 APP: 27.6 Ded_PhysB: 0.00

Host OS M Mem InU Lp Us Sy Wa Id PhysB Ent %EntC Vcsw PhI
----------------------------shared---------------------------------
lpar01 A53 U 8.0 8.0 2 90 5 0 0 1.00 0.50 199.3 430 248
lpar02 A53 U 10 9.9 2 92 1 0 5 1.00 0.75 133.1 402 266
lpar03 A53 U 4.0 4.0 2 93 1 0 5 1.00 0.75 133.0 476 258
lpar04 A53 U 4.0 3.9 2 96 3 0 0 1.00 0.75 132.8 199 488
lpar05 A53 U 8.0 8.0 4 88 7 1 2 1.48 1.25 118.7 1459 415
lpar06 A53 U 12 11 4 63 2 6 28 1.07 1.50 71.2 1842 276
lpar07 A53 U 8.0 8.0 2 5 2 0 0 0.52 0.75 69.7 7028 149
lpar08 A53 U 4.0 3.5 4 4 33 0 62 0.56 1.25 44.9 13301 129
lpar09 A53 U 4.0 3.9 2 23 14 10 51 0.33 0.75 44.2 4877 109
lpar10 A53 C 0.2 0.2 2 5 16 0 78 0.03 0.10 28.7 692 9
lpar11 A53 U 4.0 4.0 2 1 8 0 90 0.07 0.50 13.2 1452 29
lpar12 A53 C 4.0 2.8 2 5 4 0 89 0.09 0.75 12.3 586 42
lpar13 A53 C 4.0 0.4 2 0 5 0 93 0.01 0.10 10.9 507 3
lpar14 A53 U 2.0 2.0 2 1 3 0 94 0.03 0.50 6.5 574 10
lpar15 A53 U 0.5 0.4 4 0 3 0 96 0.03 0.50 5.3 1252 13
lpar16 A53 U 10 8.0 2 0 2 0 96 0.03 0.75 4.1 541 17
lpar17 A53 U 12 10 4 0 1 0 97 0.03 1.00 3.1 594 13
lpar18 A53 U 8.0 8.0 6 0 2 0 97 0.08 2.50 3.0 544 19
lpar19 A53 U 4.0 3.7 2 0 1 0 98 0.02 0.75 2.5 520 9
lpar20 A53 C 4.0 3.0 2 0 1 0 98 0.02 1.00 1.7 501 5
lpar21 A53 U 8.0 6.1 4 0 0 0 99 0.02 1.25 1.2 536 11
lpar22 A53 U 8.0 6.1 4 0 0 0 97 0.08 2.50 3.0 536 11
Several important metrics are displayed in relation to shared processing; specifically, these are: the active physical CPUs in the pool (PSz), the available processors in the pool (APP), the busy shared processors (Shr_PhysB), an LPARs entitlement (Ent), the entitlement consumed for an LPAR (%Entc), and the sharing mode (M) of either capped (C) or uncapped (U). This view gave us a way to quickly see (per 595) the shared processor activity in the pool and in each micro-partition.
The topas -R command could record the Cross-Partition data. This was of great use to us, because we needed to collect statistics on our shared pool usage. Recordings covered a 24-hour period and were retained for 8 days before being automatically deleted. This allowed a week's worth of data to be retained. Recordings were made to the /etc/perf/ directory with file names in the form topas_cec.YYMMDD. To convert the collected data into a format that could be used for reporting, we used the new topasout command.
We chose to run the topas recorder on a single (non-business function) LPAR per p595. For this purpose, we deployed a VIO server and client to each 595. The VIO client would become our shared processor pool reporting LPAR. Given that the VIO server and client would not be under much load, we configured the LPARs with minimal resources (i.e., 0.1 processing units and 256M of memory). For more information on configuring a VIO server and client, please refer to the Redbook, "Advanced POWER Virtualisation on IBM System p5". We created two new file systems in rootvg to collect the topas data.
/etc/perf
/etc/perf/archive
Both file systems were roughly 600M in size. The /etc/perf file system held the data collected daily by the topas recorder and /etc/perf/archive contained an archive of old daily data. The topas_cec files were copied to the archive directory at 2 minutes to midnight to ensure we didn't lose any data. This archive was created by the following script in root's crontab:
58 23 * * * /usr/local/bin/cpperf.ksh > /var/log/cppperf.log 2>&1
The script contained the following commands:
cp -p /etc/perf/topas_cec* /etc/perf/archive/
gzip -f /etc/perf/archive/topas_cec*
We then started the topas -R command like so:
# /usr/lpp/perfagent/config_topas.sh add
AIX Topas Recording (topas -R) enabled
Sending output to nohup.out
Each day a new topas_cec file is written to and saved for future reporting purposes.
# ls -ltr /etc/perf/topas_cec* | tail -1

-rw-r--r-- 1 root sys 9905952 Jan 13 22:53 /etc/perf/topas_cec.070113
To view detailed and summary reports directly from the topas_cec files, we used the topasout command:
# topasout -R summary /etc/perf/topas_cec.060828 | more
#Report: CEC Summary --- hostname: lpar01 version:1.1
Start:08/28/06 09:33:21 Stop:08/28/06 23:59:22 Int: 5 Min Range: 866 Min
Partition Mon: 15 UnM: 0 Shr: 15 Ded: 0 Cap: 0 UnC: 15
-CEC------ -Processors------------------------- -Memory (GB)------------
Time ShrB DedB Mon UnM Avl UnA Shr Ded PSz APP Mon UnM Avl UnA InU
09:38 0.61 0.00 12.8 0.0 0.0 0 12.8 0 32.0 31.4 94.0 0.0 0.0 0.0 51.3
09:43 0.80 0.00 12.8 0.0 0.0 0 12.8 0 32.0 31.2 94.0 0.0 0.0 0.0 51.5
09:48 1.08 0.00 12.8 0.0 0.0 0 12.8 0 32.0 30.9 94.0 0.0 0.0 0.0 51.6
09:53 1.00 0.00 12.8 0.0 0.0 0 12.8 0 32.0 31.0 94.0 0.0 0.0 0.0 51.6
09:58 0.70 0.00 12.8 0.0 0.0 0 12.8 0 32.0 31.3 94.0 0.0 0.0 0.0 51.6
10:03 1.77 0.00 12.1 0.0 0.0 0 12.1 0 32.0 30.2 87.6 0.0 0.0 0.0 50.8
10:08 0.81 0.00 12.0 0.0 0.0 0 12.0 0 32.0 31.2 86.0 0.0 0.0 0.0 50.8
10:13 1.13 0.00 12.0 0.0 0.0 0 12.0 0 32.0 30.9 86.0 0.0 0.0 0.0 51.0
10:18 1.39 0.00 12.0 0.0 0.0 0 12.0 0 32.0 30.6 86.0 0.0 0.0 0.0 51.1
...etc...

# topasout -R detailed /etc/perf/topas_cec.060828 | more
#Report: CEC Detailed --- hostname: lpar01 version:1.1
Start:08/28/06 09:33:21 Stop:08/28/06 23:59:22 Int: 5 Min Range: 866 Min

Time: 09:38:20 -----------------------------------------------------------------
Partition Info Memory (GB) Processors
Monitored : 15 Monitored : 94.0 Monitored : 12.8 Shr Physcl Busy: 0.61
UnMonitored: 0 UnMonitored: 0.0 UnMonitored: 0.0 Ded Physcl Busy: 0.00
Shared : 15 Available : 0.0 Available : 0.0
Dedicated : 0 UnAllocated: 0.0 Unallocated: 0.0 Hypervisor
Capped : 0 Consumed : 51.3 Shared : 12.8 Virt Cntxt Swtch: 9911
UnCapped : 15 Dedicated : 0.0 Phantom Intrpt : 15
Pool Size : 32.0
Avail Pool : 31.4
Host OS M Mem InU Lp Us Sy Wa Id PhysB Ent %EntC Vcsw PhI
-------------------------------------shared-------------------------------------
lpar01 A53 U 8.0 6.8 2 10 3 0 85 0.17 1.0 16.79 929 1
lpar02 A53 U 4.0 4.0 2 0 3 0 96 0.04 0.8 4.89 589 1
lpar03 A53 U 2.0 2.0 2 7 11 0 80 0.12 0.5 24.14 1534 2
lpar04 A53 U 4.0 3.5 2 0 4 0 94 0.04 0.5 8.52 1460 2
lpar05 A53 U 10.0 6.2 2 3 1 0 95 0.05 0.8 6.16 610 1
lpar06 A53 U 8.0 5.5 4 0 0 0 98 0.02 1.2 1.80 436 0
lpar07 A53 U 4.0 1.9 2 1 2 0 96 0.04 0.8 4.84 768 1
lpar08 A53 U 4.0 3.8 2 3 3 0 93 0.06 0.8 7.93 542 2
...etc...
We wanted to be able to easily graph this information for use in monthly reporting of our shared pool usage. To achieve this we used the "Performance Graph Viewer" (pGraph). This is a Java program designed to read files containing performance data and produce graphs. The tool is capable of producing graphs related to CPU, memory, disk, I/O, and network.
We copied the pGraph tool to /etc/perf/pGraph on each of the reporting LPARs. To graph the processor pool usage, we did the following.
Change directory to /etc/perf/archive (our collection point for topas -R data archive):
# cd /etc/perf/archive
Select the file(s) that match the period we wish to graph:
# ls -ltr topas_cec* | tail -1
-rw-r--r-- 1 root system 3297232 Nov 23 09:43 topas_cec.061117
Format the data with topasout:
# topasout topas_cec.061117
A new file is created, which will be imported into pGraph:
-rw-r--r-- 1 root system 19652328 Nov 23 09:44 topas_cec.061117_01
Change directory to /etc/perf/pGraph and run the pGraph program (we export our display to our X workstation first):
# export DISPLAY=xws23:12
# /usr/java14/bin/java -cp pGraph.jar.zip pGraph.Viewer
Note that we could also concatenate the files and produce a report for several days' worth of data:
# cd /etc/perf/archive
# for i in `ls *0612*`
do
cat $i >> topas_cec_Dec_06
done

# topasout topas_cec_Dec_06
Once the pGraph window appears, do the following steps. Select File, Open Single File. Enter Path/Folder Name of /etc/perf/. Select the file to open.
Once the file is loaded, the filename appears in the bottom left corner of the window. Click the CPU button. This might take a minute or so to load. The CPU stats window appears. To view just the Pool usage stats, click on the "None" button then tick just the "Pool Usage" option. The graph can then be saved as a PNG image. Click the "Save" button and enter a path and filename. See Figure 3 .
The resulting graph is then used in our monthly report on our shared processor pool usage. Looking at the graph for 595-3, on average 5.4 shared pool processors were in use, leaving another 30.6 processors free. This indicates that spare capacity exists in the shared processor pool on 595-3 and that we could deploy more LPARs to this system. The pGraph tool is available here:
http://www-941.ibm.com/collaboration/wiki/display/WikiPtype/Performance+Graph+Viewer
We also wrote a small script to monitor the shared pool on each 595 to ensure that the available pool did not fall below a certain threshold. This script (based on output from the lparstat command) was installed on each of our monitoring LPARs and then integrated into our Nagios monitoring tool, so that if the pool ran low on resources we would be sent an email and an SMS warning us of the situation. Nagios is an open source program for monitoring hosts, services, and networks. It is very customizable: you can easily integrate your own scripts into the framework, so it can monitor virtually anything you can think of. For more information, visit:
http://nagios.org/
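The script itself is not reproduced here, but a rough sketch of such a check could look like the one below. It reads the app (available pool processors) column from lparstat, so it assumes pool utilisation reporting is enabled for the LPAR; the threshold and the Nagios-style exit codes are also assumptions:

#!/usr/bin/ksh
# Hypothetical shared-pool check -- not our production script.
THRESHOLD=${1:-4}    # warn when fewer than this many pool CPUs are free

# Take one 5-second lparstat sample and pull out the "app" column.
AVAIL=`lparstat 5 1 | awk '
    /app/  { for (i = 1; i <= NF; i++) if ($i == "app") col = i; next }
    col && NF > 0 { val = $col }
    END    { print val }'`

if [ -z "$AVAIL" ]; then
    echo "UNKNOWN: could not read the app column from lparstat"
    exit 3
fi

if [ `echo "$AVAIL $THRESHOLD" | awk '{print ($1 < $2)}'` -eq 1 ]; then
    echo "WARNING: only $AVAIL processors free in the shared pool"
    exit 1
fi

echo "OK: $AVAIL processors free in the shared pool"
exit 0
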
Another handy tool worth mentioning is lparmon. It presents a basic graphical monitor of the shared pool utilization. It can also display processor usage for an LPAR if desired. We installed and ran the lparmon agent on each of our three reporting LPARs. I created a copy of the lparmon xml configuration for each 595 so that we could run multiple sessions of the tool and monitor the pool on each system:
lparmonv2 $ grep ipaddress lparmon.xml.*
lparmon.xml.1: <ipaddress>192.168.1.10</ipaddress>
lparmon.xml.2: <ipaddress>192.168.1.20</ipaddress>
lparmon.xml.3: <ipaddress>192.168.1.30</ipaddress>
After this, it was simply a matter of running the lparmon command from one LPAR with this small script (again, we export our display to our X workstation):
#!/usr/bin/ksh
#
# Script name: lm2 - lparmon v2 wrapper.
#

case "$1" in
1)
echo "LPARMON for 595-1"
cp lparmon.xml.1 lparmon.xml
./lparmon &
;;

2)
echo "LPARMON for 595-2"
cp lparmon.xml.2 lparmon.xml
./lparmon &
;;

3)
echo "LPARMON for 595-3"
cp lparmon.xml.3 lparmon.xml
./lparmon &
;;

*)
echo "Which 595 do you want to run lparmon on? e.g. ./lm2 2 \
will run lparmon for 595-2"
exit 1
;;
esac
exit 0


$ export DISPLAY=xws23:12
$ ./lm2 1
$ ./lm2 2
$ ./lm2 3
See Figure 4 .
More information on lparmon can be found here:
http://www.alphaworks.ibm.com/tech/lparmon
Considerations
You should review the information in the IBM Redbooks before implementing micro-partitioning. Make yourself aware of the performance considerations. The "Advanced POWER Virtualization on IBM eServer p5 servers: Architecture and Performance Considerations" Redbook covers memory and processor affinity with micro-partitioning, virtual processor dispatch latency, general rules on virtual processor sizing, general rules on the number of micro-partitions per managed system, and the overhead when running micro-partitioning. Despite some misconceptions from one or two of our developers, however, we have not found any significant degradation in performance as a result of micro-partitioning. Workloads that constantly consume their entitled capacity may perform better with dedicated processors.
The traditional methods for monitoring CPU utilization can be misleading in shared processor environments (particularly with uncapped LPARs). To report CPU usage correctly, the AIX performance tools have been enhanced to use the p5 Processor Utilization Resource Register (PURR). Processor utilization is now reported relative to an LPAR's CPU entitlement, so us/sys/wa/id represent percentages of the physical processor capacity consumed. To make matters more confusing, an uncapped LPAR can use more than its entitled capacity, so at 90-100% utilization an LPAR may be running above its entitlement, which makes those percentages hard to interpret on their own. It is advisable to focus on the actual physical CPU consumed (pc) and the percentage of entitlement consumed (%entc). Commands like lparstat can provide these statistics for shared processor usage.
Be aware of the licensing implications when using uncapped LPARs. Check with your application software vendors as to how they license their software. Some base their licensing on the maximum number of CPUs a system could possibly access. So for an uncapped LPAR, this could be all of the CPUs in the shared pool. Depending on the licensing used, you may need to configure an LPAR as capped to restrict it to a certain number of CPUs.
Conclusion
To date, we have reduced our hardware costs by moving many p4 servers to only a few p5 systems. As our consolidation project continues in 2007, we will continue moving all of our workload to the p595s. This will allow us to continue extracting greater utilization from our existing p5 processors and avoid the additional cost of buying new hardware. Our current utilization of each processor pool is around 30%. We are aiming for our shared processing resources to be 70% utilized, as opposed to the 20% we used to tolerate with dedicated CPUs. Given the current utilization and spare capacity of the pool, we are confident that we can size systems to meet the demands of the company while maintaining our current unit costs.

Thursday, 24 September 2009

How to automate your FTP session- Part-1

Hardly a day goes by without an FTP automation question appearing in the newsgroups. Until now, the stock answers (for Unix) have been as follows (the options for Windows are sparse indeed):

1. Pipe commands into FTP's standard input. This works great when it works, but doesn't allow for synchronization, error handling, decision making, and so on, and can get into an awful mess when something goes wrong. For example, some FTP clients do special tricks with the password that tend to thwart piping of standard input or "here" documents into them. Also, the exact syntax of your procedure depends on which shell (sh, ksh, csh, bash, etc.) you are using. Also, your password can be visible to other people through "ps" or "w" listings. (A minimal sketch of this approach appears after this list.)

2. Put the commands to be executed into the .netrc file in your login directory in the form of a macro definition. Except for avoiding shell syntax differences, this is not much different than the first option, since FTP commands don't have any capability for error detection, decision making, conditional execution, etc. Note that the .netrc file can also be used to store host access information (your username and password on each host). It's a glaring security risk to have this well-known file on your disk; anybody who gains access to your .netrc also gains access to all the hosts listed in it.

3. Use Expect to feed commands to the FTP prompt. This improves the situation with synchronization, but:

* It's cumbersome and error-prone, since it relies on the specific messages and prompts of each FTP client and server, which vary, rather than the FTP protocol itself, which is well-defined. Expect scripts break whenever the client or server prompts or text messages change, or if the messages come out in different languages.
* You're still stuck with same dumb old FTP client and its limited range of function.

4. Use FTP libraries available for Perl, Tcl, C, etc. This might give direct programmatic access to the FTP protocol, but still offers limited functionality unless you program it yourself at a relatively low and detailed level.
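
To make option 1 concrete, here is a minimal sketch of the here-document approach (host, user, password and file names are made up). It also illustrates the weaknesses described above: there is no real error handling, and the password sits in plain text inside the script.

#!/usr/bin/ksh
# Sketch: automate a single upload by piping commands into ftp.
HOST=ftp.example.com
USER=ftpuser
PASS=secret

ftp -i -n $HOST <<END_FTP
user $USER $PASS
binary
cd /outgoing
put report.txt
bye
END_FTP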

Monday, 21 September 2009

Taking archives of RAW logical Volumes with AIX commands

Background

A raw logical volume is a logical volume that is not controlled by AIX
through a file system. It is usually used by databases that need better
performance than they would normally get with file systems.

Whatever the reason for using a raw logical volume, remember that while AIX
allows a database program to use a raw logical volume for storing data, it
expects that database program (or its utilities) to manage the data stored
there. The AIX data management tools are designed to work at the file system
level, which is one level above the logical volume level.



Logical volume control block


Every AIX logical volume has a 512-byte block at the beginning of the LV called
the Logical Volume Control Block (LVCB). The LVCB keeps track of information in
the logical volume. Some database vendors have chosen to write over the LVCB and
use their own methods of keeping track of the information in the LV.

When using the AIX dd command for archiving and retrieving raw logical
volumes, it is important to know whether your database vendor uses the AIX
LVCB or writes over it. When backing up or archiving raw logical volumes,
always save the LVCB as well, just in case.



Steps to archive raw logical volumes


Decide on the appropriate tape device blocksize.

To check the device blocksize, execute the following command:
tctl -f /dev/rmt0 status

To change a device blocksize, execute the following command:
chdev -l rmt0 -a block_size=<block size>

Recommended values are:

9-track / 1/4-inch tape = 512
8mm / 4mm / DLT tape = 1024


Create an archived raw logical volume.

NOTE: When you use the conv=sync flag, all reads that are smaller than the ibs
value will be padded to equal the ibs value. This can greatly affect files
sensitive to change, such as database files.

For example:

ibs=512; file filesize = 52 bytes
52 bytes + 460 blanks = 512 bytes


To archive without software compression, run the following command:
dd if=<raw LV device> of=/dev/rmt0 ibs=512 obs=<tape block size> conv=sync

To archive with software compression, run the following command:
dd if=<raw LV device> bs=512 | compress | \
dd of=/dev/rmt0 ibs=512 obs=<tape block size> conv=sync
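
As a worked example (the LV name, tape device and block size are hypothetical), archiving a raw LV called datalv01 with compression might look like this:

chdev -l rmt0 -a block_size=1024
dd if=/dev/rdatalv01 bs=512 | compress | \
dd of=/dev/rmt0 ibs=512 obs=1024 conv=sync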

Restoring a raw logical volume archive.

To restore a raw logical volume archive we must know whether or not to overwrite
the Logical Volume Control Block. For more info on the LVCB, see the section
"Logical volume control block".

NOTE: The skip=1 allows the read function to skip over one 512-byte block on the
tape. The seek=1 allows the write function to skip over one 512-byte block on
the disk.



To restore without software compression, run the following command:
dd if=/dev/rmt0 ibs=<tape block size> obs=512 | \
dd of=/dev/<raw LV> bs=512 skip=1 seek=1

To restore with software compression, run the following command:
dd if=/dev/rmt0 ibs=<tape block size> obs=512 | \
uncompress | dd of=/dev/<raw LV> bs=512 skip=1 seek=1

Overwriting the Current System LVCB
WARNING: You must NOT overwrite the LVCB unless you are certain you need to.

To restore without software compression, run the following command:
dd if=/dev/rmt0 of=/dev/<raw LV> ibs=<tape block size> obs=512

To restore with software compression, run the following command:
dd if=/dev/rmt0 ibs=<tape block size> obs=512 | \
uncompress | dd of=/dev/<raw LV> bs=512

Friday, 18 September 2009

Best thing of Kuwait , which i like most?

If an expat living in Kuwait is asked, "Which thing about life in Kuwait do you like most?", I am sure 90% of expats will answer "the Kuwaiti Dinar". Being one of the strongest currencies in the world, it is no doubt a major attraction for the expat labour workforce living in Kuwait.

There are in fact many good and bad things about expat life in Kuwait. The bad things start with extreme weather, dust storms and dangerous traffic, and end with language problems.

The good things start with a peaceful life, low crime and an abundance and variety of food, and end with good savings.

There is no doubt that life in Kuwait is full of peace and shopping, but it lacks a social aspect. Expats are confined to their own communities and hesitate to mix across them.

Among many things, I like the civic sense of the Kuwaiti population. While crossing any road on foot with my family, I have never found a Kuwaiti or expat driver who does not stop his car out of respect for pedestrians. It is so common here that if somebody does not show the same respect, people look at him with strange expressions.

Another thing I like about Kuwait is its calm pace. Nobody seems to be in a hurry here. From the mandoub of your company to your customers, nobody takes on extra pressure. "Work less and earn more" seems to be the motto of the Kuwaiti workforce.


No doubt, for a person like me who does not believe too much in socializing, Kuwait is a heaven.

Wednesday, 16 September 2009

Tip:How to run a cron job every other week

A few weeks back, a customer asked me about running an automated task
every other week. Though most of us use cron as needed to run those nice
little tasks that clean up core files and evaluate the contents of log
files in the middle of the night, running a task every other week or
every other day presents a bit of a challenge. The cron command doesn't
have any way to express odd or even weeks.

The general "trick" that I use for tasks such as these is to ask cron to
run the task every week or every day and then insert logic into the
script itself to determine whether the week is odd or even.

Using this strategy, a cron entry that looked something like this:

8 8 * * 3 /usr/local/bin/send_msg

that would be executed every Wednesday might be calling a script that
examines the date and continues only when it's running on an odd or even
week.

A shell (/bin/sh) script that sends a message only on odd-numbered weeks
(in other words, every other week) might look something like this:

-------------
#!/bin/sh

WK=`date +%W`
ON_WK=`expr $WK % 2`

if [ $ON_WK = 1 ]; then
cat /opt/stats/msgs | mailx -s "stats report" someone@someplace.org
fi
-------------

This same strategy can be used for tasks that need to be performed every
other hour, every third week, every seven minutes or almost any other
interval you might want to work with. For intervals that align nicely
with cron's timing fields (minutes after the hour, hour, day of the
month, month, and day of the week), there's no good reason not to put
all of your timing logic into the cron file. When your needs don't align
well with these columns, on the other hand, or when you want to avoid
putting lines like these into the cron file:

0,4,8,12,16,20,24,28,32,36,40,44,48,52,56 * * * * /usr/local/bin/chk_log

constraining the time within the script itself is not such a bad idea.

The number 2 in the "ON_WK=`expr $WK % 2`" line of the script is the
modulus. For anyone who isn't used to these, the result of an
"expr <number> % <modulus>" operation is the remainder: what you'd be
left with if you subtracted the modulus as many times as you could.
Because our modulus is 2, the result is 0 or 1. Were the modulus 5, we
could get any value between 0 and 4.

The "WK=`date +%W`" command uses an argument to the date command to
obtain the number of the current week. You'd expect these to run from 1
to 52 or thereabouts. So the combination gives us a 1 if the current
week is odd and a 0 otherwise.

Other date command options that can be used with this kind of logic
include:

%d - date within the month (e.g., 21)
%m - month number (1-12)
%H - hour (0-23)
%M - minute (0-59)
%S - second (0-59)

To run a script every other day, you couldn't rely on the day of the
month. This would only work for a while. You'd soon find yourself moving
from one odd day to another. This would happen any time you got to the
end of a month with 31 days. Instead, you would use the value that
represents the day of the year. You'd expect these to run from 1 to 365
except, of course, on leap years. If the end-of-the-year problem
concerns you, you could probably perform some much more complex
calculation to be sure you're still running every other day but, for
most of us, an adjustment at the end of each calendar year is probably
not too big an issue. We could always switch our running from odd to
even days if the need for regularity was sufficiently important.
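
As a quick sketch of that variation, here is the same pattern keyed on the day of the year (the task path is just an example):

#!/bin/sh

DAY=`date +%j`
ON_DAY=`expr $DAY % 2`

if [ $ON_DAY = 0 ]; then
    /usr/local/bin/chk_log    # run the every-other-day task on even days
fi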

Monday, 14 September 2009

How to determine last day of month?

Determining the last day of a month in a shell script can be complicated because each month has a different number of days, and because of leap year. Here's a trick with the "date" command that simplifies this calculation. I use it to archive rotating monthly logs, and run end of the month reports.

#!/usr/bin/ksh

# Pacific Time: TZ=-16 (see below for explanation)
if [ `TZ=-16 date +%d` = "01" ]; then

# Run these commands ...

fi

The timezone (TZ) setting tricks the date command into thinking it is 24 hours ahead. The syntax is counterintuitive. First, the "-" sign means forward, and a "+" sign means backward. Second, "TZ" is relative to GMT time. To illustrate, here are a few examples that set the timezone ahead one day:

GMT: TZ=-24
EST: TZ=-19
CST: TZ=-18
PST: TZ=-16

This trick can be used to calculate yesterday, tomorrow or two weeks from now.
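
To make the month-end use concrete, here is a minimal sketch of the log-archiving case (the TZ value repeats the Pacific example above; the log paths are made up):

#!/usr/bin/ksh
# Archive the application log only when "tomorrow" is the 1st.
if [ `TZ=-16 date +%d` = "01" ]; then
    MONTH=`date +%Y%m`
    cp -p /var/log/app.log /var/log/archive/app.log.$MONTH
    > /var/log/app.log      # start a fresh log for the new month
fi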

Come to one God! Allah!

Many people ask me what the core invitation of Islam is. I tell them that Islam's core message is one, plain and simple: come to one God, Allah. Leave all your hand-made gods (which include idols, animals, things and sometimes even human beings) and turn yourself to only one God, Allah: Allah who is the creator of the whole universe, who created this universe in only six days; Allah who created man the first time, who will cause men to die, and who finally, on the Day of Judgement, will restore the soul to all the dead and bring them to account for all their good and bad deeds.

Allah has said clearly in the Holy Quran that many people have made up stories about Him. Some say that Allah has a wife; some say that He has a son (Naaouz-bill-Allah). But on the Day of Judgement He will hold all such human beings accountable for the lies they told about Him, and He will never forgive them for those lies.

In the Holy Quran there is one simple but direct verse named "Souraa-e-Iqlaas". It is only 4-5 lines, but what a message it carries. It says: "Say, Allah is One and only One. He has no companion, no competitor, no wife and no son; no one can be like Him, and He is independent of all."

This simple message is in fact common to all the holy religions, including Christianity and Judaism. Unfortunately, with the passage of time, the followers of these religions and the keepers of the Holy Bible and Zaboor forgot the original message and twisted their religions and even their holy books. They made up their own stories about the Oneness of God, but if any follower of these religions explores a little, he will find that the original message was the same: God is One.

I sometimes feel ashamed when I see people worshipping plants, animals, hand-made idols and even ordinary human beings such as superstars. What are we doing? Is it respectable for a human being to worship statues and idols that cannot even remove a single fly from their own faces? Is it worthwhile for us to worship animals like goats, pigs, snakes and even cows? Is it worthwhile to worship ordinary things like fire and plants? No, we should worship only one God, Allah, whom we cannot see with our eyes but with our hearts. When you look at the blue skies and the blue seas, you realize that Allah is great, who created this universe in only six days. On the Day of Judgement we will all have to come before Him, will see Him, and will ask Him to forgive us.

May Allah help us refrain from this worst of sins (a sin which He will never forgive): considering any thing or human being worthy of worship beside Him.

Thursday, 3 September 2009

Step by step Implementation of NTP client on AIX

Here is a step-by-step guide, which I have used many times to help customers configure their AIX server as an NTP client of an existing NTP server.

1. Edit the /etc/ntp.conf file:

The file should contain at least the following lines:

server <NTP server address>   (that is, the address of the AIX box you
configured as the NTP server in the earlier steps)

driftfile /etc/ntp.drift

tracefile /etc/ntp.trace


2. Run ntpdate against the server:

ntpdate <NTP server address>

If it does not report "no server suitable for synchronization found", go to the next step.



3. smitty xntpd --> start at both system restart and now.

Let the daemon run for approximately 15 minutes or so before going on to step 4; otherwise the stratum may show 16.


4. Now check the value of the stratum with:

lssrc -ls xntpd

The stratum should now show 4-5. Even if it does not, that is still fine as long as it does not show a large value such as 16.


The clocks should now be in sync. Repeat the client steps to set up other
clients if necessary.
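
The steps above can also be wrapped in a small script. This is only a sketch: the server address is a placeholder, and enabling xntpd at every boot by editing /etc/rc.tcpip is an assumption about how your system is set up.

#!/usr/bin/ksh
NTPSERVER=192.168.1.1               # placeholder -- your NTP server address

# Add the minimal configuration if it is not already there.
grep -q "^server" /etc/ntp.conf || {
    echo "server $NTPSERVER"        >> /etc/ntp.conf
    echo "driftfile /etc/ntp.drift" >> /etc/ntp.conf
    echo "tracefile /etc/ntp.trace" >> /etc/ntp.conf
}

ntpdate $NTPSERVER                  # must not report "no server suitable"
startsrc -s xntpd                   # start the daemon now
# For startup at every boot, uncomment the xntpd line in /etc/rc.tcpip.
lssrc -ls xntpd | grep -i stratum   # check the stratum after ~15 minutes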

TIP:How to identify root cause of core files on AIX

When an application core dumps, a "core" file is placed in the current directory. Core files are often a symptom of a problem that needs attention. You can determine which application caused the "core" file by changing to the directory where the core file is located and running the command:

$ lquerypv -h core 6b0 64

The name of the application causing the core file is listed in the section on the right. In the sample output below, the "ftpd" application caused the core file.

000006B0 7FFFFFFF FFFFFFFF 7FFFFFFF FFFFFFFF |................|
000006C0 00000000 000007D0 7FFFFFFF FFFFFFFF |................|
000006D0 00170000 53245A2C 00000000 00000015 |....S$Z,........|
000006E0 66747064 00000000 00000000 00000000 |ftpd............|
000006F0 00000000 00000000 00000000 00000000 |................|
00000700 00000000 00000000 00000000 000000CF |................|
00000710 00000000 00000020 00000000 000000BE |....... ........|
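
If core files tend to accumulate in a known location, a small loop (the directory is just an example) can run the same check against each of them:

for CORE in /tmp/cores/core*
do
    [ -f "$CORE" ] || continue
    echo "== $CORE =="
    lquerypv -h "$CORE" 6b0 64
done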

In addition, AIX can be configured to detect when core files are created and mail a message to root, alerting root that an application has failed. The instructions for setting this up are in a README file in the /usr/samples/findcore directory. These programs are delivered with the bos.sysmgt.serv_aid fileset.

TIP:How to restore absolute path backup into different directory

Guys

It is an everyday job for a Unix admin to take backups and restore them. However, a Unix admin very often gets stuck because he has taken an absolute-path backup with the tar command and now wants to restore it into a different directory.

How can you do it? Here is a small but fantastic tip which has worked many times for me.


Let's say you receive a tar tape created using absolute path names:

tar -cvf /dev/rmt0 /test/*

but now you want to restore it to the /test1 directory. There is a wonderful command named pax. The pax syntax for this purpose (substituting /test with /test1 on restore) is as follows:

pax -rf /dev/rmt0 -s/test/test1/p


and you are done: the whole backup is restored into the /test1 directory. In fact, the -s switch of the pax command does the magic here and allows the path change.
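
Before restoring, it can also be worth listing the archive to confirm the stored paths really begin with /test, and then running the restore verbosely:

pax -f /dev/rmt0 | head              # list the archive contents (no -r = list only)
pax -rvf /dev/rmt0 -s/test/test1/p   # -v shows each file as it is restored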

Thursday, 27 August 2009

Tip: Your AIX is 32 bit or 64 bit?

Description:
# Shows whether the system is 64 bit capable and also shows
# whether the OS is running in either 32 or 64 bit mode.
#

hbit=`bootinfo -y`
rbit=`bootinfo -K`

echo;echo "Hardware: ${hbit} bits capable"
echo "Running: ${rbit} bits mode";echo
exit 0

Monday, 24 August 2009

How can you find out When a system was installed?

Enter the following command:

lslpp -h bos.rte

The output of this command will show the history of when the operating system
was installed.

Sunday, 23 August 2009

Should I restart AIX system or not?

Very often my customers call me and say that they have installed an AIX LPP or a device driver but are not sure whether they need to reboot the system to complete the installation.

The answer is in the ".toc" file which is associated with the installp filesets. The ".toc" file contains lines like

bos.adt.libm 05.02.0000.0000 1 N U en_US ...

The "N" character means no reboot is required. On the other hand, if the line has a "b" character, a reboot is necessary. An example of such a line would be

bos.mp64 05.02.0000.0000 1 b B en_US .....
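
As a quick sketch, you can check the flag for a particular fileset straight from the .toc file in the installation directory (the directory is hypothetical and the field position is assumed from the examples above):

cd /mnt/installp/ppc        # wherever the filesets and their .toc live
grep "^bos.mp64" .toc | awk '{print $1, $4}'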

A small script to find speed of your Network cards

#!/bin/ksh

# List the en interfaces, extract their numbers (en0, en1, en10, ...),
# then query the matching ent adapter for its media speed.
for en in `netstat -i | grep "^en" | awk '{print $1}' | sort -u | cut -c3-`
do
adapter="ent${en}"
entstat -d ${adapter} | grep "Media Speed"
done

exit 0

Friday, 21 August 2009

Using Secure Rsync to Synchronize Files Between Servers

To build up the whole solution, we will start with OpenSSH installation on AIX. OpenSSH is a free software tool that supports the SSH1 and SSH2 protocols. It is reliable and secure and is widely accepted in the IT industry as a replacement for the r-commands, telnet, and ftp services, providing secure, encrypted sessions between two hosts over the network.

OpenSSH source code is compiled on AIX 5L and shipped on the AIX 5L Expansion Pack and Web Download Pack. You can also get the installation images from OpenSSH on AIX. When you install the AIX OpenSSH image from the Bonus Pack CD or from the website, you can get support from IBM Program Services.

OpenSSH is dynamically linked with OpenSSL for use of the encryption library libcrypto.a. You can get the OpenSSL library from the AIX Toolbox for Linux Applications CD or from this website. OpenSSL is delivered in RPM format (instead of installp format). To install OpenSSL, use the command:

# rpm -i <OpenSSL package>.rpm

Let's walk through the process of downloading and installing OpenSSL, OpenSSH and rsync.

1. Download the package manager:

ftp://ftp.software.ibm.com/aix/freeS...LP/ppc/rpm.rte

2. Install the package manager

# installp -qacXgd rpm.rte rpm.rte

3. Download the OpenSSL library: http://www6.software.ibm.com/dl/aixtbx/aixtbx-p

a. OpenSSL is cryptographic content, so you will need to sign in with your IBM ID and password. Create one if you don't have one.
b. The next screen is a license agreement. Agree and confirm.
c. Search the page for "openssl-0.9.7g-1.aix5.1.ppc.rpm" and click the download button next to it.

4. Install the RPM for openSSL

# rpm -i openssl-0.9.7g-1.aix5.1.ppc.rpm

5. Download OpenSSH: https://sourceforge.net/project/show...roup_id=127997

6. Install OpenSSH: the downloaded file is a compressed tar file. Uncompress and untar it, then follow the directions in the Customer_README file exactly as given.

7. Download the latest version of rsync: ftp://ftp.software.ibm.com/aix/freeS...RPMS/ppc/rsync

8.Install rsync:

# rpm -i rsync-2.6.2-1.aix5.1.ppc.rpm

You must complete these steps on all servers/LPARs that will be using rsync, either as a file server or a sync client. You must also set up the necessary SSH keys between servers.
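
A minimal sketch of that key setup for the pull scenario, run as root on AIXClient (paths assume a default OpenSSH install; the very first ssh will still prompt for AIXServe's password):

mkdir -p $HOME/.ssh
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa        # passwordless key pair
cat $HOME/.ssh/id_rsa.pub | ssh AIXServe \
    "mkdir -p .ssh && cat >> .ssh/authorized_keys && chmod 700 .ssh && chmod 600 .ssh/authorized_keys"
# Test it -- this should now log in without prompting for a password:
ssh AIXServe date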

For the remainder of this exercise, we are going to limit ourselves to two servers. AIXServe will be the server with the master files and AIXClient will be the server/LPAR obtaining the master files for local use.

A common usage in this scenario is user information, so we will address that particular example, but rsync can be used for any types of files or directory trees. Indeed, it can be used to keep HTML source in sync, as just one more example use.

This is an example of a script that does a "pull" from AIXServe. AIXClient transfers the latest passwd, group and security files, overwriting its own copies. Additionally, AIXClient copies any new user directories in /home but does not update, modify or delete any existing directories.

#!/usr/bin/ksh
# Get new /etc/passwd & /etc/group files
# Overwrite existing files
rsync -goptvz -e ssh AIXServe:/etc/passwd /etc/passwd
rsync -goptvz -e ssh AIXServe:/etc/group /etc/group
# Get new files from /etc/security
# Overwrite existing files
for FILE in group limits passwd .ids environ .profile
do
rsync -goptvz -e ssh AIXServe:/etc/security/$FILE /etc/security/$FILE
done
# Grab new directories in /home
# Do not change anything that already exists
rsync -gloprtuvz -e ssh --ignore-existing AIXServe:/home /home

This solution is fine for two or three servers, but what about more than that? Besides, if centralized user management is being done on AIXServe, doesn't it make more sense to push from there rather than have every client pull?

This script does a push from AIXServe to multiple clients:

#!/usr/bin/ksh
for CLIENTS in `cat /etc/useradm_clients.rsync`
do
echo Updating ${CLIENTS}...
# Push new /etc/passwd & /etc/group files
# Overwrite existing files on the client
rsync -goptvz -e ssh /etc/passwd ${CLIENTS}:/etc/passwd
rsync -goptvz -e ssh /etc/group ${CLIENTS}:/etc/group
# Push new files from /etc/security
# Overwrite existing files on the client
for FILE in group limits passwd .ids environ .profile
do
rsync -goptvz -e ssh /etc/security/$FILE ${CLIENTS}:/etc/security/$FILE
done
# Push new directories in /home
# Do not change anything that already exists on the client
rsync -gloprtuvz -e ssh --ignore-existing /home ${CLIENTS}:/home
echo ${CLIENTS} Complete.
done

Tuesday, 18 August 2009

AIX BOOT LED Hang - 581

When one of my customers called to say that his AIX server was hanging at LED 581 during boot, my instant assumption was that the problem was simply related to the server's network configuration.

While I was driving to the customer site, I was thinking that I would be back at the office within an hour and enjoy lunch there. But when I arrived, I realized the problem was bigger: there were four VIO servers and two dedicated AIX boxes, all hanging for almost 30 minutes at LED 581 during boot. There had been a power problem at their data center, so all of their network equipment as well as their servers had suffered a power failure.

The AIX servers were completing the boot process but never presenting a login prompt.

The VIO servers also booted successfully but behaved abnormally.

We decided not to start the AIX client LPARs (served by the VIO servers) until we knew the root cause of the 581 hang.

While searching on the internet, I could not find any details about this LED except that it has something to do with network configuration and DNS settings.

While talking with the customer, we realized they had issues with their core switch, DHCP services, DNS services and their PDC. Their Windows-based PDC had also crashed and was facing a problem with a corrupted network adapter driver.

After spending almost 3-4 hours at the customer site and multiple restarts, I was quite sure we had no option but to wait until all their network-related issues were resolved.

After four hours they managed to bring their PDC (DHCP/DNS) back up, and as soon as the DNS server started responding to pings, all my AIX LPARs, servers and VIO servers stopped hanging at 581 (without any change on the AIX side).

Lesson learnt: if your AIX servers are DNS clients, then you are dependent on the DNS server. If the DNS server is down and you reboot AIX or VIOS, you may see the system hang at LED 581. It is better to configure two DNS server entries, or at least to put a hosts=local,bind4 entry in /etc/netsvc.conf.
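
A sketch of those two safeguards (the name server addresses are examples only):

# /etc/netsvc.conf -- resolve names from /etc/hosts first, then DNS:
hosts=local,bind4

# /etc/resolv.conf -- list two name servers so a single DNS outage does
# not stall name resolution:
nameserver 192.168.1.53
nameserver 192.168.2.53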

Sunday, 16 August 2009

AIX TIP: How to handle the "cannot fork" condition

PAGING SPACE LOW messages are generated when 512 pages remain free in the pool of free paging space pages. Processes start being terminated when only 128 free pages are left.

This is a very common message in a Unix administrator's life. When it appears, in most circumstances the administrator cannot do anything; sometimes he cannot even log in to the system, and sometimes he can log in but cannot execute any commands. He will get the message "cannot fork", which indicates a very low paging space condition on AIX.

The only way to recover from this situation is to restart the system, then monitor paging space utilization over a longer period and analyze whether you need to add or optimize paging space on AIX.

As a rule of thumb, all paging spaces on AIX should be roughly the same size and split across multiple physical volumes. However, hd6 can be slightly bigger than the others.

AIX determines which users have the most paging space allocated to them and selects their processes for termination. All real memory allocated to a process will have a backing store of equal size (for every page of real memory allocated, there will be a page of disk space allocated from paging space).

It is recommended that the first paging space (/dev/hd6) be larger, since it is brought online sooner than the rest and therefore ends up fuller than the others.

Thrashing should not be confused with the problem of low paging space. Low paging space is the condition in which the amount of paging space is insufficient. Thrashing is the condition in which the amount of RAM is insufficient. Low paging space involves the consumption of disk space; thrashing involves the consumption of RAM and disk I/O.

Example: lsps -a (displays the attributes of all paging spaces) or lsps -s (gives a summary view).
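
A small sketch of a monitor built on that output, parsing the last column of lsps -s (the 70% threshold and the parsing are assumptions to check against the output on your level of AIX):

#!/usr/bin/ksh
# Warn root by mail before paging space runs out.
LIMIT=70
PCT=`lsps -s | tail -1 | awk '{print $NF}' | sed 's/%//'`

if [ "$PCT" -ge "$LIMIT" ]; then
    echo "`hostname`: paging space ${PCT}% used" | \
        mailx -s "paging space warning" root
fi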

LV hd6 is the default paging space. If more than one paging LV is defined, hd6 will always show a higher percentage of utilization, since it is the first paging LV turned on at boot time. Once all the other paging LVs have been swapped on, paging is allocated on a round-robin basis, four pages (the page size is 4 KB) at a time.

Up to 80% (the default MAXPERM) of real memory can be used for persistent storage. Persistent storage is information that is not paged to the paging space (/dev/hd6) but rather to the physical volume where the file itself resides.

Friday, 14 August 2009

Using ACLs on TCPIP ports for AIX

Discretionary Access Control: TCP Connections (DA.4)

TCP-based services can be protected with ACLs as well. By specifying port, host/network and user combinations, ports can be restricted to specific hosts and/or users. For example, by specifying port 6000, machine colorado and user joe, only that user coming from machine colorado will be able to connect to the X server. The remote hosts use TCP AH headers to send the information about the user together with the connection request. AIX 5.2 checks /etc/security/acl for permitted clients.
With the DACinet feature of AIX 5.2, the concept of privileged ports (ports that can only be opened by the superuser, typically all ports below 1024) is extended so that any port can be made a privileged port. A bitmap of privileged ports holds the information on whether a port is privileged, and a system administrator can modify this bitmap.
This function contributes to satisfying the security requirements FDP_ACC.1, FDP_ACF.1, FMT_MSA.1, FMT_SMF.1 and FMT_MSA.3.

The main command used for maintaining ACL control over TCP/IP ports is the dacinet command.

dacinet Command
Purpose
Administers security on TCP ports in CAPP/EAL4+ configuration.
Syntax
dacinet aclflush
dacinet aclclear Service | Port
dacinet acladd Service | [-] addr [/prefix_length] [u:user | uid | g:group | gid]
dacinet acldel Service | [-] addr [/prefix_length] [u:user | uid | g:group | gid]
dacinet aclls Service | Port
dacinet setpriv Service | Port
dacinet unsetpriv Service | Port
dacinet lspriv
Description
The dacinet command is used to administer security on TCP ports. See the Subcommands section for details of the various functions of dacinet.
Subcommands
acladd Adds ACL entries to the kernel tables holding access control lists used by DACinet. The syntax of the parameters for the acladd subcommand is:
[-]addr[/length][u:user|uid| g:group|gid]
The parameters are defined as follows:

addr
A DNS hostname or an IP v4/v6 address. A "-" before the address means that this ACL entry is used to deny access rather than to allow access.
length
Indicates that addr is to be used as a network address rather than host address, with its first length bits taken from addr.

u:user|uid
Optional user identifier. If the uid is not specified, all users on the specified host or subnet are given access to the service. If supplied, only the specified user is given access.

g:group|gid
Optional group identifier. If the gid is not specified, all users on the specified host or subnet are given access to the service. If supplied, only the specified group is given access.

aclclear Clears the ACL for specified service or port.

acldel Deletes ACL entries from the kernel tables holding access control lists used by DACinet. The dacinet acldel subcommand deletes an entry from an ACL only if it is issued with parameters that exactly match the ones that were used to add the entry to the ACL. The syntax of the parameters for the acldel subcommands is as follows:
[-]addr[/length][u:user|uid| g:group|gid]
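
Putting it together, here is a usage sketch for the example mentioned earlier (X server port 6000, host colorado, user joe). The argument order follows the syntax shown above, so verify it against the dacinet documentation on your system:

dacinet setpriv 6000                  # make port 6000 a privileged port
dacinet acladd 6000 colorado u:joe    # allow only joe coming from colorado
dacinet aclls 6000                    # review the resulting ACL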
