Friday 30 October 2009

AIX 32 and 64 bit Dilemma

Software requirement for 64 bit AIX OS

Besides the hardware requirement for running a 64-bit operating system on IBM POWER systems, the other main requirement is a fileset: bos.64bit, the Base Operating System 64-bit runtime fileset. If bos.64bit is not installed, you do not have the /etc/methods/cfg64 file. Without /etc/methods/cfg64, you will not have the option of enabling or disabling the 64-bit environment via SMIT, which updates the inittab with the load64bit line (simply adding this line by hand does not enable the 64-bit environment).
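These prerequisites can be verified with a few quick checks; the following is an illustrative sketch (run as root):

# lslpp -l bos.64bit              (is the 64-bit runtime fileset installed?)
# ls -l /etc/methods/cfg64        (present only when bos.64bit is installed)
# grep load64bit /etc/inittab     (present once the 64-bit environment is enabled via SMIT)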

The command lslpp -l bos.64bit will reveal whether this fileset is installed. The
bos.64bit fileset is on the 4.3.x media; however, installing it does not ensure
that you will be able to run 64-bit software.

With the bos.64bit fileset installed on non-64-bit hardware, you should be able
to compile your 64-bit software; however, you will not be able to run 64-bit
programs on that 32-bit hardware.



Hardware required

You must have 64-bit hardware to run 64-bit applications. At AIX levels 4.3.2
and 4.3.3, to determine whether your system has 32-bit or 64-bit hardware
architecture:

Log in as root.
At the command line, enter:
bootinfo -y

This produces the output of either 32 or 64, depending on whether the hardware
architecture is 32-bit or 64-bit.

In addition, if you enter lsattr -El proc0 at any version of AIX, the output of
the command returns the processor type of your server.

The types of 64-bit processors are as follows:

PowerPC_RS64
PowerPC_RS64 II
PowerPC_RS64 III
PowerPC_Power3
PowerPC_Power3 II



Kernel extensions vs. 64-bit kernel


To determine if the 64-bit kernel extension is loaded, from the command line
enter:

genkex |grep 64

You should see information similar to the following:

149bf58 a3ec /usr/lib/drivers/syscalls64.ext

NOTE: Having the driver extension does not mean that the kernel is a 64-bit
kernel. A 64-bit kernel became available at the 5.1 oslevel.

The driver extension simply allows 64-bit applications to be handled by a
32-bit kernel. If the 32-bit kernel runs on a 64-bit processor, syscalls64.ext
will allow the 64-bit application to execute. Even so, at 5.1 a 64-bit kernel on
a 64-bit processor gives better performance with 64-bit applications.

To truly change the kernel to 64-bit, you need to be at the 5.1 oslevel. The
steps to change to a 64-bit kernel are:

From 32-bit to 64-bit:

ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
lslv -m hd5
bosboot -ad /dev/ipldevice
shutdown -Fr
bootinfo -K (should now be 64)

To change the kernel back to 32-bit:

From 64-bit to 32-bit:

ln -sf /usr/lib/boot/unix_mp /unix
ln -sf /usr/lib/boot/unix_mp /usr/lib/boot/unix
lslv -m hd5
bosboot -ad /dev/ipldevice
shutdown -Fr
bootinfo -K (should now be 32)


32-bit and 64-bit performance comparisons on IBM POWER systems


To examine the benefits and drawbacks of going from 32-bit to 64-bit mode and
the wider effects on the system, consult AIX 64-bit Performance in Focus, which
is available as an IBM Redbook.

In most cases, running 32-bit applications on 64-bit hardware is not a problem,
because 64-bit hardware can run both 64-bit and 32-bit software. However, 32-bit
hardware cannot run 64-bit software. To find out whether any performance issues
exist for applications running on the system, such as Lotus Notes and Oracle,
refer to those applications' user guides for their recommended running
environments.

Tuesday 27 October 2009

Restricting your AIX Error Logs

Sometimes you do not want certain error conditions to show up in the error log. If at this very moment you think "what a silly idea this is", please refrain from any further judgement; eventually you will get the picture.
The AIX error reporting facilities use templates to know which conditions constitute an error, and how to collect and display the information associated with them.
For those in need of more in-depth info, please look it up in the AIX docs or on-line.
Instructing the error logging facilities what not to report and/or not to include in the log (among many other things) is done with the help of the errupdate command. This command can process directives contained in an ASCII file or entered directly on the command line. The error IDENTIFIER is used to identify the error you want to work with. Multiple entries (error IDENTIFIERs and their associated processing instructions) must be separated by a blank line.
Look at the few lines shown next, which show an interaction with errupdate via the command line:

root@MarcoPolo: /root> errupdate
=B6048838:
REPORT=FALSE

The first character you type is the = character, indicating modification of the existing reporting behaviour associated with the error label B6048838. Do you notice the : character following the error label? After you hit the Enter key, you can enter any of the following directives: REPORT, LOG and ALERT. Each may equal either TRUE or FALSE. When you are done, hit Enter twice to activate the changes.
REPORT - Events for which reporting is disabled are still saved in the error log, but they are not displayed by the errpt command.
LOG - Events for which logging is disabled are not written to the error log file at all.
To achieve identical results using an ASCII file to specify the modifications, follow the procedure below:
root@MarcoPolo: /root> mkdir -p /var/adm/errorFilter
root@MarcoPolo: /root> cd /var/adm/errorFilter
root@MarcoPolo: /var/adm/errorFilter> vi errorFilter    (edit to your satisfaction)
root@MarcoPolo: /var/adm/errorFilter> cat errorFilter
=B6048838:
REPORT=FALSE
LOG=FALSE
ALERT=FALSE

root@MarcoPolo: /var/adm/errorFilter> errupdate ./errorFilter
0 entries added.
0 entries deleted.
1 entries updated.
The result will be not only the required modifications but also a file, in the same directory as errorFilter, named errorFilter.undo - its name reveals its purpose.
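Should the change need to be backed out later, the generated undo file can itself be fed back to errupdate, for example:

root@MarcoPolo: /var/adm/errorFilter> errupdate ./errorFilter.undo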

Friday 23 October 2009

How to backup your VIO Server

Backing up the Virtual I/O Server

There are four different ways to back up and restore the Virtual I/O Server, as illustrated in the following table.

Backup method            Restore method
To tape                  From bootable tape
To DVD                   From bootable DVD
To remote file system    From the HMC using the NIMoL facility and installios
To remote file system    From an AIX NIM server


Backing up to a tape or DVD-RAM

To back up the Virtual I/O Server to a tape or a DVD-RAM, the following steps must be performed:

1. Check the status and the name of the tape/DVD drive
#lsdev | grep rmt (for tape)
#lsdev | grep cd (for DVD)

2. If it is Available, back up the Virtual I/O Server with the following command
#backupios -tape rmt#
#backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the necessary resources to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and SPOT resource.
The NFS export should allow root access to the Virtual I/O Server, otherwise the backup will fail with permission errors.

To back up the Virtual I/O Server to a filesystem, the following steps must be performed:

1. Create a mount directory where the backup file will be written
#mkdir /backup_dir

2. Mount the exported remote directory on the directory created in step 1.
#mount server:/exported_dir /backup_dir

3. Back up the Virtual I/O Server with the following command
#backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.


The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.
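One way to do this is to pull bosinst.data out of the archive, edit it, and update the archive in place; the following is a sketch under the assumption that the member is stored as ./bosinst.data (check the exact name with tar -tvf first):

# cd /backup_dir
# tar -tvf nim_resources.tar                  (locate the bosinst.data member)
# tar -xvf nim_resources.tar ./bosinst.data
# vi ./bosinst.data                           (repopulate the target_disk_data stanza)
# tar -uvf nim_resources.tar ./bosinst.data   (update only that member)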


Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You could also restore the Virtual I/O Server from a NIM server. One of the ways to restore from a NIM server is from the mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a NIM server from a mksysb image, verify that the NIM server is at the latest release of AIX.

To back up the Virtual I/O Server to a filesystem, the following steps must be performed:

1. Create a mount directory where the backup file will be written
#mkdir /backup_dir
2. Mount the exported remote directory on the just created directory
#mount NIM_server:/exported_dir /backup_dir
3. Back up the Virtual I/O Server with the following command
#backupios -file /backup_dir/filename.mksysb -mksysb

How to upgrade the ML/TL of AIX through the alternate disk installation method

1. Pre-installation checks

To check package/fileset consistency
# lppchk -v

If we find errors, we can get more information about the problem and resolve it before continuing with the installation.
# lppchk -v -m3

Check the currently installed ML/TL
# instfix -i | grep ML
# oslevel -s

Check Rootvg

Commit all package/fileset installed on the servers
# smit maintain_software

Check if rootvg is mirrored and all LVs are mirrored correctly (excluding dump and boot volumes). If your rootvg is not mirrored, you can skip ahead to the alt_disk_install part later in this document.
# lsvg -p rootvg
# lsvg rootvg
# lsvg -l rootvg


2. Preinstallation Tasks

Check for HACMP cluster

Check if the cluster software is installed, and whether HACMP is running on the server.

# lslpp -l | grep -i cluster
Check if the cluster processes are active
# lssrc -g cluster

If HACMP is used, a current fix pack for HACMP should be installed when a new AIX Technology Level is installed. Currently available HACMP fix packs can be downloaded via http://www14.software.ibm.com/webapp/set2/sas/f/hacmp/home.html



3. Check for IBM C/C++ compiler

Compiler updates may need to be installed along with the TL upgrade. They can be downloaded from the link below.
http://www-1.ibm.com/support/docview.wss?rs=2239&uid=swg21110831

4. Check for Java version

If Java is used, current software updates for the Java version(s) should be installed when a new AIX Technology Level is installed. If Java is being used in conjunction with other software, consult the vendor of that software for recommended Java levels.

The Java version(s) installed on AIX can be identified with the command
# lslpp -l | grep -i java

The default Java version can be identified with the
# java -fullversion
command.
Java fixes can be downloaded from IBM Fix Central.


5. Check for recommended TL/SP for system

Get information on the latest TL/SP for the system using the Fix Level Recommendation Tool, available at the link below:
http://www14.software.ibm.com/webapp/set2/flrt/home

Download the latest updates from the IBM Fix Central website and place them on the NIM server.

Create the resources on the NIM server.

Take a mksysb backup of the servers, to be on the safe side.

Check the compatibility of any running applications, and confirm it with the application owner.

6. Free hdisk1 for alternate disk installation

Remove the secondary dump device, if present, from hdisk1 by pointing it at /dev/sysdumpnull.
# sysdumpdev -P -s /dev/sysdumpnull

Unmirror rootvg
# unmirrorvg rootvg

Migrate any unmirrored logical volumes from hdisk1 to hdisk0.
# migratepv hdisk1 hdisk0

Clear the boot record from hdisk1
# chpv -c hdisk1

Add a new boot image to the first PV to have a "fresh" boot record, just to be on the safe side
# bosboot -ad /dev/hdisk0

Set the bootlist to hdisk0
# bootlist -m normal hdisk0 hdisk1 (after the installation, hdisk1 will contain the upgraded OS)

Remove the second PV from rootvg
# reducevg rootvg hdisk1



7. Alternate disk migration

Carry out the alternate disk installation via NIM on hdisk1. We will carry out a preview install first; if it succeeds, we will go ahead and install the TL/SP in applied mode.
# smit nimadm

Reboot the system. It will boot from hdisk1, which contains the upgraded OS.
# shutdown -Fr



8. Recreate the mirror of rootvg

After a few days of stable operation and some tests by the application users:

Remove the alternate disk installation definition
# alt_disk_install -X

Add disk hdisk0 in rootvg
# extendvg rootvg hdisk0

Check the estimated dump size
# sysdumpdev -e

Re-create the secondary dump device
# sysdumpdev -P -s "dump_device"

Mirror rootvg with hdisk1 in the background.
# nohup mirrorvg -S rootvg hdisk1 &

Create a boot image on hdisk1
# bosboot -ad /dev/hdisk1

Add hdisk1 to bootlist
# bootlist -m normal hdisk0 hdisk1

Synchronize rootvg
# nohup syncvg -v rootvg &

Friday 16 October 2009

Tip: A small script to notify new error entries in error log

Although IBM is now pushing the Systems Director concept into AIX as well to monitor the overall health of the system, I still find the following small shell script very helpful; it can be used to notify you of any new errors in the AIX error log.
-----------------------------------------------
#!/bin/ksh
# Script to notify new errors in AIX error log


TOTALERRS=`errpt | grep -v "IDENTIFIER" | wc -l`

if [ ! -f /usr/local/bin/errpt.count ]
then
echo 0 > /usr/local/bin/errpt.count
fi

OLDERRS=`cat /usr/local/bin/errpt.count`
((NEWERRS=TOTALERRS-OLDERRS))

# if the error log was cleared since the last run, start counting afresh
if [ ${NEWERRS} -lt 0 ]
then
NEWERRS=${TOTALERRS}
fi

if [ ${NEWERRS} -gt 1 ]
then
echo "Please check errpt, ${NEWERRS} errors found!" | /usr/bin/mailx -vs "`hostname`: errpt report" recipient@domain.com
elif [ ${NEWERRS} -gt 0 ]
then
errpt | grep -v "IDENTIFIER" | head -${NEWERRS} | cut -c 42- |
while read ERRMSG
do
echo "errpt:${ERRMSG}" | /usr/bin/mailx -vs "`hostname`: errpt report" recipient@domain.com
done
fi

echo ${TOTALERRS} > /usr/local/bin/errpt.count

exit 0
-----------------------------------------------
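To run the script periodically, it can be scheduled from root's crontab. A hypothetical entry, assuming the script is saved as /usr/local/bin/errpt_notify.sh, would be:

0,15,30,45 * * * * /usr/local/bin/errpt_notify.sh >/dev/null 2>&1

(The minutes are listed explicitly because AIX cron does not support the */15 step syntax.)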

Wednesday 14 October 2009

My trip to Istanbul:Fascinating city of Civilizations








Istanbul has a long and fascinating history which spans many centuries and three prominent eras: the era of the Romans, followed by the Byzantines, and then the era of the Muslims (the Ottoman empire). Within a vicinity of just metres, you will find symbols of all these historical eras, and you become deeply impressed by the greatness of this historical city.

We reached Istanbul SAW airport around 11:00 in the morning. SAW airport is around 51 km from the city, and it took around two hours to reach our hotel, which was located in the Beyazit area of Istanbul. I was horrified to see the traffic jams on the roads of Istanbul, but it is a fact that, like all other big cities of the world, Istanbul also faces traffic issues. They have both trams and a metro in Istanbul, but traffic jams are still common in the city.

Our hotel was small but clean. The main advantage was that it was very close to the main tourist attractions like the Blue Mosque, Hagia Sophia, the Grand Bazaar and Topkapi palace. We were able to reach all these places on foot within 20 minutes.

We started our first day with a short visit to the Grand Bazaar and Istanbul University, followed by the Beyazit Mosque. All of these locations were very close to our hotel, so we took advantage of that and visited all of them the same day.

On the second day, we visited the Blue Mosque and Hagia Sophia. Both of these places are really wonderful. The only thing I disliked about Hagia Sophia is that the government has converted it into a museum. I think they should have retained it as either a church or a mosque; converting it into a museum makes no sense to me.

On the third day, we went to the Eyüp Mosque to pray Fatiha for the great Muslim saint and close companion of our Prophet (PBUH).

We then visited Topkapi palace, which was constructed in the Ottoman period of the Muslim era. It is a fantastic palace, with walls full of gold, and it leaves you with a memory of the fantastic and glorious era of the Ottoman empire.

On the last day, we went to the Eminönü port to catch an hourly ferry trip. They charged us around 9 lira per person. It was about an hour long but really memorable. I advise all travellers who visit Istanbul not to miss this golden opportunity.

Sunday 4 October 2009

Changing Herald in AIX

Here are two ways to customize the AIX login prompt.

The first way is to add a "herald" to the default stanza in the /etc/security/login.cfg file, as follows:

default:
sak_enabled = false
logintimes =
logindisable = 0
logininterval = 0
loginreenable = 0
logindelay = 0
herald = "AIX TIGER HOME\r\nID:"

The second method uses the "chsec" command to modify the same file:

chsec -f /etc/security/login.cfg -s default -a herald="AIX TIGER HOME\r\nID:"
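Whichever method you use, the resulting setting can be verified with lssec, for example:

lssec -f /etc/security/login.cfg -s default -a herald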

Note: for additional security, I recommend changing the standard Unix "login" prompt to something else like "ID". The "login" prompt almost invariably identifies the system as Unix to hackers.

Friday 2 October 2009

WPARS in AIX 6- Part-1

Workload Partitioning is a virtualization technology that utilizes software rather than firmware to isolate users and/or applications.


A Workload Partition (WPAR) is a combination of several core AIX technologies. There are differences of course, but here the emphasis is on the similarities. In this essay I shall describe the characteristics of these technologies and how workload partitions are built upon them.

There are two types of WPAR: system and application. My focus is on system WPARs, as these more closely resemble an LPAR or a separate system. In other words, a system WPAR behaves as a complete installation of AIX. At a later time, application workload partitions will be described in terms of how they differ from a system WPAR. For the rest of this document, WPAR and system WPAR are to be considered synonymous.

AIX system software has three components: root, user, and shared. The root component consists of all the software and data that are unique to that system or node. The user (or usr) part consists of all the software and data that is common to all AIX systems at that particular AIX software level (e.g., oslevel AIX 5.3 TL06-01, or AIX 5.3 TL06-02, or AIX 6.1). The shared component is software and data that is common to any UNIX or Linux system.

In its default configuration, a WPAR inherits its user (/usr) and shared (/usr/share, usually physically included in the /usr filesystem) components from the global system. Additionally, the WPAR inherits the /opt filesystem. The /opt filesystem is the normal installation area in the rootvg volume group for RPM and IHS packaged applications and AIX Linux affinity applications and libraries. Because multiple WPARs are intended to share these filesystems (/usr and /opt), they are read-only to WPAR applications and users. This is very similar to how NIM (Network Installation Manager) diskless and dataless systems were configured and installed. Since only the unique rootvg filesystems need to be created (/, /tmp, /var, /home), creation of a WPAR is a quick process.

The normal AIX boot process is conducted in three phases:
1) boot IPL, or locating and loading the boot block (hd5);
2) rootvg IPL (varyonvg of rootvg),
3) rc.boot 3 or start of init process reading /etc/inittab

A WPAR activation or "booting" skips step 1. Step 2 is the global (i.e. hosting) system mounting the WPAR filesystems - either locally or from remote storage (currently only NFS is officially supported; GPFS is known to work but is not officially supported at this time (September 2007)). The third phase is starting an init process in the global system. This init process does a chroot to the WPAR root filesystem and performs a normal AIX rc.boot 3 phase.

WPAR Management

WPAR management in its simplest form is simply: starting, stopping, and monitoring resource usage. And, not to forget, creating and deleting WPARs.

Creating a WPAR is a very simple process: the one-time prerequisite is the existence of the directory /wpars with mode 700 for root. Obviously, we do not want just anyone wandering in the virtualized rootvgs of the WPARs. And, if the WPAR name you want to create resolves either in /etc/hosts or DNS (and I suspect NIS), all you need to do is enter:
# mkwpar -n <wparname>
If you want to save the output you could also use:
# nohup mkwpar -n <wparname> & sleep 2; tail -f nohup.out
and watch the show!

This creates all the WPAR filesystems (/, /home, /tmp, /var and /proc)
and read-only entries for /opt and /usr. After these have been made, they are
mounted and "some assembly" is performed, basically installing the root part
of the filesets in /usr. The only "unfortunate" part of the default setup is
that all filesystems are created in rootvg, using generic logical volume
names (fslv00, fslv01, fslv02, fslv03). Fortunately, there is an argument
(-g) that you can use to get the logical volumes made in a different
volume group. There are many options for changing all of these, and they
will be covered in my next document, when I'll discuss WPAR mobility.
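For example, to have the WPAR filesystems created in a separate volume group, something like the following could be used (wpar01 and wparvg are hypothetical names):

# mkwpar -n wpar01 -g wparvg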

At this point you should just enter:
# startwpar <wparname>
It will wait for the prompt, and from "anywhere" you can connect to the running WPAR just
as if it were a separate system. Just do not expect to make any changes in /usr
or /opt (software installation is also a later document).
AIX / HMC Tip Sheet
HMC Commands
lshmc -n (lists dynamic IP addresses served by HMC)
lssyscfg -r sys -F name,ipaddr (lists managed system attributes)
lssysconn -r sys (lists attributes of managed systems)
lssysconn -r all (lists all known managed systems with attributes)
rmsysconn -o remove --ip {ip address} (removes a managed system from the HMC)
mkvterm -m {msys} -p {lpar} (opens a command line vterm from an ssh session)
rmvterm -m {msys} -p {lpar} (closes an open vterm for a partition)
Activate a partition
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b normal
chsysstate -m managedsysname -r lpar -o on -n partitionname -f profilename -b sms
Shutdown a partition
chsysstate -m managedsysname -r lpar -o {shutdown/osshutdown} -n partitionname [--immed] [--restart]
VIO Server Commands
lsdev -virtual (lists all virtual devices on VIO server partitions)
lsmap -all (lists mapping between physical and logical devices)
oem_setup_env (change to OEM [AIX] environment on VIO server)
Create Shared Ethernet Adapter (SEA) on VIO Server
mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 - GigE adapter
ent1 - Virt Eth VLAN1 (defined with a priority in the partition profile)
ent2 - Virt Eth VLAN 99 (control)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)
Create Virtual Storage Device Mapping
mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
The VIO client would see a single LUN with two paths.
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.


AIX Performance TidBits and Starter Set of Tuneables


Here is the current starter set of recommended AIX 5.3 performance parameters. Please test these before implementing them in production, as your mileage may vary.
Network
no -p -o rfc1323=1
no -p -o sb_max=1310720
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
nfso -p -o rfc_1323=1
NB: network settings also need to be applied to the adapters
nfso -p -o nfs_socketsize=600000
nfso -p -o nfs_tcp_socketsize=600000
Memory Settings
vmo -p -o minperm%=5
vmo -p -o maxperm%=80
vmo -p -o maxclient%=80
Let strict_maxperm and strict_maxclient default
vmo -p -o minfree=960
vmo -p -o maxfree=1088
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10

IO Settings

Let minpgahead and j2_minPageReadAhead default
ioo -p -o j2_maxPageReadAhead=128
ioo -p -o maxpgahead=16
ioo -p -o j2_maxRandomWrite=32
ioo -p -o maxrandwrt=32
ioo -p -o j2_nBufferPerPagerDevice=1024
ioo -p -o pv_min_pbuf=1024
ioo -p -o numfsbufs=2048
If doing lots of raw I/O you may want to change lvm_bufcnt
Default is 9
ioo -p -o lvm_bufcnt=12
Others left to default that you may want to tweak include:
ioo -p -o numclust=1
ioo -p -o j2_nRandomCluster=0
ioo -p -o j2_nPagesPerWriteBehindCluster=32
Useful Commands
vmstat -v or -l or -s
vmo -o
ioo -o
schedo -o
lvmo
iostat (many new flags)
svmon
filemon

Building High Performance clusters on RHEL

Guys

My latest article on tricks to build high performance clusters on the Linux platform has been published in the November edition of Linux Magazine. This edition has been printed and will be available on news shelves around the world by the end of October.
If you really want to convert your old computers into a real high performance cluster, you may look in the magazine for that article.

The article will be available on my blog website six months after the publish date, as per the contract.
