Thursday 30 April 2009

Best food places in Kuwait

So you are in Kuwait and you are looking for some really good places where you can not only fill your stomach but also satisfy your taste buds? Then it is a really difficult task.

I have been living in Kuwait for the last 3 years and I am still not able to find very good Indian/Pakistani restaurants here. However, there are some very good Arab and Lebanese restaurants which prove to be a very good return on your money.

One of my favorite Arab restaurants is SamarQund Restaurant in the Sharq area. Delicious food with the best service. The only thing is that this restaurant does not have a very good seating area; however, its delicious kababs and Shish Tawook compensate for that.

How to reach SamarQund? It is simple... From Bank Center, Kuwait City (or Durwaza Abdul Ruzaak), take the road which goes straight to Dasman Palace. On the left hand side you will find the Gulf Takaful Insurance office. Take a U-turn from the next opening and the SamarQund/Tawwa restaurants are on your right. Sit there and enjoy the Arab taste of chicken, mutton (or even lamb) with spicy Indian chutney and Arab Rob (curd with mint).

Another place which I love is very near my home in Hawalli. It is famous for its delicious Shish Tawook and is named Mutum Othman (Mutum is the Arabic word for restaurant).

If you have plenty of money in your pockets and want to spend it on Arab food, then go to Seven Seas on Gulf Road. It is very near the beach and has a nice view of the sea.

Now returning to the availability of Indian food in Kuwait: you will hardly find any great options. The only options which I know are Mughal Mehal and Village. Mughal Mehal branches are available across different areas of Kuwait, like Salmiya, Sharq and Hawalli.

Village is less expensive. It is located on Salem Al-Mubarak Street, just above the Center Point building. But its food is average; for example, if you try Indian biryani there (even in Mughal Mehal) you will feel that it lacks the real taste of biryani. The rice is always separated from the meat, which is totally the wrong approach.

You can also find an Iranian option in the Sharq area; it is located in the Iranian market and is known as the "Shater Abbas" restaurant. It is just in front of the ABK building in Sharq.

I have never found any good Chinese restaurant in Kuwait. The reason may be that there is not a very large Chinese community here. There are a large number of Filipinos in Kuwait, but their cuisine is really unknown to us.

Pakistani restaurants are very few in Kuwait too. Most of them serve the labour community from Pakistan, and therefore it is really hard to take your family there. There is one famous Khan-Baba restaurant in Fahaheel, which is famous for its halwa puri breakfast, but again it stays so crowded that you hardly find any place to sit.

OK, now to Western food... Western food is available everywhere in Kuwait. You will find American food (lunch and breakfast) in every shopping mall, like Marina Mall, Souq Sharq and the Avenues. I am not fond of Western food, so there is a lack of knowledge on my end in this respect.

Note: This blog is open for comments. Please help me in updating this list of good food places in Kuwait

SAN Versus NAS

At first glance NAS and SAN might seem almost identical, and in fact many times either will work in a given situation. After all, both NAS and SAN generally use RAID connected to a network, which is then backed up onto tape. However, there are differences -- important differences -- that can seriously affect the way your data is utilized. For a quick introduction to the technology, take a look at the comparison below.


Wires and Protocols
Most people focus on the wires, but the difference in protocols is actually the most important factor. For instance, one common argument is that SCSI is faster than Ethernet and is therefore better. Why? Mainly, people will say the TCP/IP overhead cuts the efficiency of data transfer, so Gigabit Ethernet gives you throughput of 60-80 MB/s rather than the theoretical 100 MB/s.

But consider this: the next version of SCSI (due date ??) will double the speed; the next version of ethernet (available in beta now) will multiply the speed by a factor of 10. Which will be faster? Even with overhead? It's something to consider.

The Wires
--NAS uses TCP/IP Networks: Ethernet, FDDI, ATM (perhaps TCP/IP over Fibre Channel someday)
--SAN uses Fibre Channel

The Protocols
--NAS uses TCP/IP and NFS/CIFS/HTTP
--SAN uses Encapsulated SCSI
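To make the protocol difference concrete, here is a minimal sketch (the host, device and mount point names are invented for illustration) of how the same storage is reached from an AIX host: a NAS export is mounted over a file protocol on the LAN, while a SAN LUN simply appears as a raw block device that the host puts its own file system on.

# NAS: the filer exports a file system over NFS and the host just mounts it
mount nasfiler:/vol/projects /mnt/projects

# SAN: the LUN arrives over Fibre Channel as a new hdisk; the host owns the file system
cfgmgr                                    # discover the new LUN, e.g. hdisk4
mkvg -y sanvg hdisk4                      # build a volume group on the raw block device
crfs -v jfs2 -g sanvg -m /sandata -a size=10G
mount /sandata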

More Differences

NAS: Almost any machine that can connect to the LAN (or is interconnected to the LAN through a WAN) can use the NFS, CIFS or HTTP protocol to connect to a NAS and share files.
SAN: Only server-class devices with SCSI Fibre Channel adapters can connect to the SAN. The Fibre Channel of the SAN has a distance limit of around 10 km at best.

NAS: A NAS identifies data by file name and byte offsets, transfers file data or file metadata (the file's owner, permissions, creation date, etc.), and handles security, user authentication and file locking.
SAN: A SAN addresses data by disk block number and transfers raw disk blocks.

NAS: A NAS allows greater sharing of information, especially between disparate operating systems such as Unix and NT.
SAN: File sharing is operating-system dependent and does not exist in many operating systems.

NAS: The file system is managed by the NAS head unit.
SAN: The file system is managed by the servers.

NAS: Backups and mirrors (utilizing features like NetApp's Snapshots) are done on files, not blocks, for a saving in bandwidth and time. A Snapshot can be tiny compared to its source volume.
SAN: Backups and mirrors require a block-by-block copy, even if blocks are empty. A mirror machine must be equal to or greater in capacity compared to the source volume.



Sunday 26 April 2009

Configuring Distributed shell with SSH for AIX software Management


System administrators of large AIX installations very often find themselves in situations where they want to execute operating system commands in parallel on multiple systems. Imagine a situation where a system administrator has to manage 100-200 AIX based servers which are widely spread across multiple sites. One morning, as soon as he starts his bright and sunny day in the office with a hot cup of coffee, planning the coming weekend, he receives an emergency call from his immediate boss to prepare a list of all those AIX servers which have not been installed with a security-related APAR (which has to be installed immediately as per IBM instructions). Now his day starts with a laborious task: he has to telnet (or ssh) to each and every AIX server and check the availability of the specified APAR on all servers, one by one. If it is not present, he also has to apply this specific APAR on those systems.

We cannot deny that such tasks are an essential part of system administration roles and jobs, but sometimes such jobs become a great burden for system administrators, especially in scenarios where the installation base is very large. Luckily, IBM has now included some very useful commands in AIX 5L which can help system administrators in such scenarios. In this article, I will cover these relatively new and less commonly used (but very useful) commands and their configuration using SSH as the way of passing data between systems. Later, in a following article, I will show how the dsh and dshbak commands can play a vital role in making day-to-day AIX system administration tasks a bit easier.

Distributed Shell and its configuration with SSH

Dsh (distributed shell) is a tool which was initially part of the PSSP component for SP/2 systems. However, with the introduction of the CSM client software with AIX 5L, it has now become a standard for AIX 5L. Both the dsh and dshbak commands are also available with the AIX 5.2 base operating system. You do, however, have to install the following CSM-related filesets (including csm.dsh) to make these commands available on AIX 5.2 systems.

The following are the CSM-related filesets required for the dsh and dshbak commands to be available:

bkmecomm[/home/root] # lslpp -l csm*
  Fileset                    Level    State      Description
  ----------------------------------------------------------------------------
  csm.client                 1.4.1.0  COMMITTED  Cluster Systems Management
                                                 Client
  csm.core                   1.4.1.0  COMMITTED  Cluster Systems Management
                                                 Core
  csm.diagnostics            1.4.1.0  COMMITTED  Cluster Systems Management
                                                 Probe Manager / Diagnostics
  csm.dsh                    1.4.1.0  COMMITTED  Cluster Systems Management Dsh
  csm.gui.dcem               1.4.1.0  COMMITTED  Distributed Command Execution
                                                 Manager Runtime Environment
  csm.gui.websm              1.4.1.0  COMMITTED  CSM Graphical User Interface
  csm.msg.EN_US.core         1.4.0.0  COMMITTED  CSM Core Func Msgs - U.S.
                                                 English (UTF)
  csm.msg.en_US.core         1.4.0.0  COMMITTED  CSM Core Func Msgs - U.S.
                                                 English

You can easily verify that csm.dsh is actually the fileset containing the dsh and dshbak commands. All these CSM-related filesets can be found on the AIX media pack CDs.

# whereis dsh
dsh: /usr/bin/dsh

# lslpp -w /usr/bin/dsh
  File                                 Fileset          Type
  ----------------------------------------------------------------------------
  /usr/bin/dsh                         csm.dsh          Symlink

The next step is to add the directory containing the dsh and dshbak commands to the current PATH. I did this temporarily using the PATH environment variable; you can alternatively edit /etc/environment or .profile to make it permanent:

#export PATH=$PATH:/usr/bin

You have to specify which nodes you want to add to your dsh management domain. In my case, I needed dsh to be able to execute commands on all of my AIX-based database and application servers. As a practical solution, it is also better to have only one node act as the management server for all the remaining servers in the server farm. So I selected my AIX 5.2 based "bkbweb" server as the management server. On bkbweb, I installed all the CSM-related filesets and added the path of the dsh and dshbak commands to root's profile. I also created a file called serverlist and put into it the names of all the nodes which are to be managed from this bkbweb server using dsh. It is important, of course, that all node names resolve successfully from this management server.

The serverlist file simply contains the names of all the nodes, one name per line.

---------------------------------------------------------------------------------------

bkbdb

bkbapp

bkbqdb

bkbsapp

---------------------------------------------------------------------------------------

After that, I exported the environment variable “DSH_LIST” in root’s .profile

export DSH_LIST=/home/root/serverlist

I then verified the existence of this environment variable in the root user's environment using the following command:

root@bkbweb-/# env | grep DSH

DSH_LIST=/home/root/serverlist

The next important step is specifying the way dsh communicates between nodes. The dsh program uses a remote shell of your choice to issue remote commands to the managed nodes, which means that in our case the root user on the bkbweb server should be able to rsh successfully to all nodes (specified in the serverlist file); otherwise dsh will not work properly and you will get an error something like the following:

root@bkbweb-/home/root# dsh date

dsh: 2617-009 bkbdb remote shell had exit code 1

As rsh is considered a security loophole by many corporations and is therefore not acceptable, I decided to use SSH as the communication infrastructure between nodes for dsh. For this to work, I installed OpenSSH on all of my AIX nodes and started the sshd daemon on them.

On the bkbweb server, I generated a pair of public/private keys using the following command:

#ssh-keygen -t dsa -b 2048

This command generates the public and private keys in the /home/root/.ssh directory. For the sake of simplicity, I did not use any passphrase while storing these keys in the files (id_dsa and id_dsa.pub).

I then copied the public key file from the bkbweb server to all AIX nodes using the scp command as follows:

#scp /home/root/.ssh/id_dsa.pub root@bkbdb:/home/root/.ssh/

#scp /home/root/.ssh/id_dsa.pub root@bkbapp:/home/root/.ssh/

Then, on the bkbdb and bkbapp nodes, I appended these public key files to the authorized_keys file:

# cd /home/root/.ssh

# cat id_dsa.pub >> authorized_keys

SSH connectivity with DSA authentication (from the bkbweb server to bkbdb and bkbapp) should work now, and you will be able to log in from bkbweb to the bkbdb and bkbapp nodes without any password prompt.
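As a quick sanity check (reusing the node names above), a harmless command run over ssh from bkbweb should complete without any password prompt:

root@bkbweb-/# /usr/local/bin/ssh bkbdb date
root@bkbweb-/# /usr/local/bin/ssh bkbapp date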

The final step is configuring dsh to use this fully functional SSH setup. This can be done easily by using the environment variable "DSH_REMOTE_CMD".

I exported this environment variable in the root user's profile on the bkbweb server as follows:

#export DSH_REMOTE_CMD=/usr/local/bin/ssh
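For reference, once everything is in place, the dsh-related lines in root's .profile on bkbweb look something like this (just a consolidated sketch of the three settings discussed above):

# dsh environment for the bkbweb management server
export PATH=$PATH:/usr/bin
export DSH_LIST=/home/root/serverlist
export DSH_REMOTE_CMD=/usr/local/bin/ssh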

Now test the date command with dsh; it will execute the date command simultaneously on all servers specified in the serverlist file and return output like the following:

root@bkbweb-/home/root# dsh date

bkbapp: Mon 15 Jan 11:15:22 2007

bkbdb: Mon 15 Jan 11:15:24 2007

bkbqdb: Mon 15 Jan 11:15:27 2007

bkbsapp: Mon 15 Jan 11:15:22 2007

You can also use the dsh command in conjunction with the dshbak command. The dshbak command groups together all nodes for which the dsh command output is the same. For example, if we execute the same command as above and pipe it through dshbak with the -c option, the output looks as follows:

root@bkbweb-/home/root# dsh date | dshbak -c

HOSTS ------------------------------------------------------------------------

bkbapp, bkbsapp

-------------------------------------------------------------------------------

Mon 15 Jan 11:17:34 2007

HOSTS-------------------------------------------------------------------

bkbdb

Mon 15 Jan 11:17:36 2007

HOSTS----------------------------------------------------------------

bkbqdb

Mon 15 Jan 11:17:39 2007

Sample Software Maintenance Scripts using DSH

You can make use of this distributed shell implementation for many administration tasks across the server farm. Below are some sample scripts which can prove very helpful for system administrators in AIX software maintenance across a server farm.

For instance, the combination of dsh and the "oslevel -r" command can be used to find all servers which are below ML05 for AIX 5.2. I have written a small and simple shell script for this purpose, which in fact uses the already established SSH-based dsh setup.

-----------------------------------------------------------------
#!/bin/ksh
# mlfind.sh
# Created by Khurram Shiraz on 15 Jan 2006
# Helps system administrators find all servers which are not currently at a
# specified ML. Example usage: mlfind.sh 5200-06

dsh "oslevel -r" > /tmp/wrkfile                 # use dsh to get the oslevel of all servers

while read HOSTNAME ML
do
        HOSTNAME=$(echo $HOSTNAME | sed 's/://g')   # remove the colon (:) from host names
        if [ "$ML" = "$1" ]                         # string comparison; ML levels are not integers
        then
                echo
                echo $HOSTNAME has the specified ML installed
        else
                echo
                echo $HOSTNAME does not have the specified ML installed
        fi
done < /tmp/wrkfile
exit 0
-----------------------------------------------------------------------------------

And for system administrators who want to check the presence of a critical fix across hundreds of AIX servers, the same script can be modified slightly as follows:

-----------------------------------------------------------------
#!/bin/ksh
# fixfind.sh
# Created by Khurram Shiraz on 15 Jan 2006
# Helps system administrators find all servers which do not currently have a
# specified fix applied. Example usage: fixfind.sh IY43265

> /tmp/servers_no_patch                          # start with an empty server list
dsh "instfix -ik $1" > /tmp/wrkfile 2>&1         # query all servers for the fix

while read HOSTNAME FIXRESP
do
        HOSTNAME=$(echo $HOSTNAME | sed 's/://g')    # remove the colon (:) from host names
        echo $FIXRESP | grep -E "Not|no" > /dev/null
        if [[ $? -eq 0 ]]
        then
                echo $HOSTNAME does not have the specified fix installed
                echo $HOSTNAME >> /tmp/servers_no_patch   # append, so the whole list is kept
        else
                echo $HOSTNAME has the specified fix installed
        fi
done < /tmp/wrkfile
exit 0
---------------------------------------------------------------------

If you execute fixfind.sh with a fix number, you will get output like the following:

root@bkbweb-/home/root# ./fixfind.sh IY54515

bkbapp does not have the specified fix installed

bkbdb does not have the specified fix installed

bkbweb has the specified fix installed

Now it is time to see how we can use our dsh setup to apply a specific APAR on all those nodes which don't have it installed. From the execution of the last script (fixfind.sh), we have a list of all the servers which don't have this APAR (in the /tmp/servers_no_patch file). So the first step is downloading this APAR and transferring the related filesets to a filesystem (/swexport) on our management server (bkbweb). Then export this filesystem (I am assuming that the NFS setup on bkbweb is already working fine) by executing:

# /usr/sbin/mknfsexp -d '/swexport' -t 'rw' -c 'bkbdb bkbapp' '-B'

Also modify the server list so that it points only to those servers which don't have the patch installed. The names of these servers are present in the /tmp/servers_no_patch file.

#export DSH_LIST=/tmp/servers_no_patch

Now mount the /swexport filesystem on all nodes simultaneously:

 
# dsh mount bkbweb:/swexport  /mnt
 

Finally, apply this patch to all these nodes

# dsh "instfix -k IY54515 -d /mnt"

After successful completion, unmount and unexport the filesystem

# dsh unmount /mnt

And on management server (bkbweb)

#exportfs -u /swexport

In summary, if you have server farms to manage, the distributed shell from IBM is a gift for you. No doubt you have to implement dsh (along with SSH for better security) across the whole server farm the first time, but once it is installed and configured properly it can make your life a bit easier. Every evening you can be at home on time, rather than sitting late in the office preparing comparison reports between hundreds of servers for submission to your management. Thanks, dsh!!!!

About the Author: Khurram Shiraz is a senior system administrator at KMEFIC, Kuwait. In his eight years of IT experience, he has worked mainly with IBM technologies and products, especially AIX, HACMP clustering, Tivoli and IBM SAN/NAS storage. He has also worked with the IBM Integrated Technology Services group. His areas of expertise include the design and implementation of high availability and DR solutions based on pSeries, Linux and Windows infrastructure. He can be reached at aix_tiger@yahoo.com.



Note: This article was published in AIX Update, Xephon Inc (a print magazine which used to be published monthly from the US).

Friday 24 April 2009

Celebrating three years in Kuwait ( My First day in Kuwait)

Ahhh, 3 years have passed so quickly... I cannot imagine it. But it is a fact that I arrived in Kuwait City on 24 April 2006 (exactly 3 years back from today) with a heavy heart and grief. Leaving your beloved country, your employer, your friends, your relatives and of course your customers was not an easy decision for me... but I made it only for a better future for myself and my family!

When I look back now, I feel it was not a bad decision, alhamdulillah. I joined KMEFIC on 25th April 2006 (just the next day after I came to Kuwait) and worked there till December 2007, with a lot of recognition and appreciation.

I still remember my first day in Kuwait... I reached Kuwait airport at 3:50 PM on an Air China flight from Karachi and there was no one from KMEFIC to receive me... Oh, it was a really bad start and a disappointment for me, but my two great friends Aamir and Najam were there to receive me. But there was another problem: they did not know where I would stay. So they called Yacub (GBM), who was my introducer at KMEFIC, who then called Mr. Qarooni (my always great friend and respectable ex-boss), who then called Hande (a good friend of mine). Finally Hande coordinated with KMEFIC HR to arrange and book a hotel for me at the same time... AAAAAAAAHh... a big mishap for me!!


I put all my luggage in the hotel and then went to Aamir's home with Najam... We spent some time there and then we (Najam and I) went to Marina Mall on Gulf Road for dinner...

Both of us were heavy hearted, as Najam at that time was also a newcomer to Kuwait and had not yet settled down well either...

It was a busy and heavy-hearted day for me!! Worries for the next day, worries for the new job, new responsibilities and of course a new country.............

Configure Rsync on AIX in five minutes

In the day-to-day life of a Unix administrator, one of the most important questions which arises is related to data synchronization. No doubt there are a large number of data synchronization tools available on Unix operating systems, ranging from less flexible tools like automated FTP to more flexible tools like rsync.


Here I am describing a step-by-step guide to configuring rsync to replicate data filesystems from one AIX box to another, but of course you can use the same steps to configure rsync on other flavours of Unix.


1. Create and mount similar filesystems on both application servers, let's say AS1 and AS2. In my case, these filesystems were already in place as part of the installation/configuration of the application.

2. Select a user for replication, which should exist on both servers and have read/write access to the filesystem structure. Let's assume for this document the user "replicauser". You may have a different user in your actual scenario.

3. Install the following software from the AIX Toolbox for Linux Applications on both AS servers:
popt-1.7-1.aix.4.3.ppc.rpm
rsync-2.6.2-1.aix.5.1.ppc.rpm

4. Install openssh-3.8.1.0 and openssl-0.9.6.7 on both servers.

5. Set up OpenSSH so that replicauser on the AS2 server can ssh to the AS1 server (without any password prompt). For step-by-step configuration of SSH, see my post on SSH configuration in five minutes.

6. Add /usr/local/bin to replicauser's PATH environment variable (preferably in his .profile) on both servers.

7. Now create a small shell script on the AS2 server:

#!/bin/ksh
# Script name: rsyncapp.sh
# Pull /appl/icons from AS1 to this server over ssh; the local destination
# path is assumed to mirror the source path
rsync -avz --rsh=ssh --delete AS1:/appl/icons/ /appl/icons/
exit 0


I recommend that you start with some test folders and, when satisfied with rsync's functionality, then move on to all the actual folders and filesystems.

Now put rsyncapp.sh into replicauser's crontab so that it is executed every morning, with an entry like the sketch below.
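As a rough sketch (the 6:00 AM run time and the log file location are assumptions; adjust them to your own window):

# run the replication script every morning at 06:00 and keep a log of the last run
0 6 * * * /home/replicauser/rsyncapp.sh > /tmp/rsyncapp.log 2>&1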

Please note that the "--delete" parameter is very important: when an administrator deletes something from AS1's folders, rsync will also delete the corresponding files from the AS2 server's filesystems as it performs the synchronization.

Friday 17 April 2009

Travel to KSA for Umrah




Umrah is a holy visit to Saudi Arabia for Muslims from all over the world. It is a spiritual visit which refreshes your body and soul. Believe me, you may have plenty of money, you may have been living in a country near Saudi Arabia for many years (like me) and you may have the desire to visit these holy places as well, but you cannot visit them until you are invited and permitted by Allah to visit them.

I had been planning to visit these places for the last three years (in fact from day one, when I landed in Kuwait on 26th April 2006), but got permission just recently to visit Mecca and Medina. I spent four days in Mecca and 3 days in Medina and came back to Kuwait with a sorrowful but refreshed heart and soul... May Allah allow me to visit them again!!!

I felt a big difference between these two holy cities... In Mecca, when you come for the first time in front of the Holy Mosque (we call it the Haram), you will feel your heart greatly impressed with the power and greatness of Almighty Allah!!

And when you see the Khana-e-Kaaba (Allah's first house, first built by Prophet Adam, then by Prophet Ibrahim, and then cleansed of idols by our prophet, Prophet Muhammad (PBUH)), you will have a feeling which cannot be described in words!!

I saw Muslims from all parts of the world (Europe, East Asia, Africa, the Indo-Pak subcontinent, the Middle East) wearing the same simple white clothes, just circling the Khana-e-Kaaba and saying labbaik (Allah, we are present in front of your house... just for you)...


Around Mecca you will find big black mountains... this is the holy land of the prophets... I also visited Jabal-e-Rehmat, a mountain which is said to be the place where Prophet Adam met his wife (Hawwa) for the first time (of course, long ago), as he and his wife were the first human beings in this world...


Medina is around a 5-6 hour drive from Mecca by bus. I took a big private car for around 400 SR and we reached there in only 4 hours... As soon as you reach Medina, you will feel peacefulness and calmness in your heart. Of course, it is the city of our Holy Prophet Muhammad (PBUH)... so there is a special blessing of Allah on this city!!

The weather of Medina is really very, very good. People say that even in the peak summer season you will find the weather to be the same, with quite cool breezes all around the city...

And of course, you have to visit the Holy Mosque of the Prophet and say salaam to him, all the time, every time in the day... PBUH

May Allah give us the power, courage and willingness to go and visit these holy places many times in our lives...

I wish for every Muslim brother and sister that they may visit Mecca and Medina... Taqabbal-Allah...

For our non-Muslim friends, I would say, have a look at the pictures and you will also feel the holiness, greatness and peacefulness of these places... Our religion is only for peace!!! Believe me or not!! We are preachers of peace and love!!

Enhance your security with secret port knocks



In the field of IT systems security, the concept of "port knocking" is relatively new. However, with the passage of time, it is becoming more popular day by day among system and security administrators.

Port knocking is a method of externally opening ports on a firewall by generating a connection attempt on a set of pre-specified closed ports. Once the correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over the specified port(s).

The primary purpose of port knocking is to prevent an attacker from scanning a system for potentially exploitable services by doing a port scan. Until the correct knock sequence is used, the protected ports will appear closed, so attackers won't be able to conduct an attack on those ports.

More specifically, Port knocking works on the concept that users wishing to attach to a network service must initiate a predetermined sequence of port connections or send a unique string of bytes before the remote client can connect to the eventual service.

For example, suppose that a remote client wants to connect to an FTP server. The administrator configures the port-knocking requirements ahead of time, requiring that connecting remote clients first connect to ports 2000, 4000, and 7107 before connecting to the final destination port, 21, on the FTP server.

The administrator tells all legitimate clients the correct "combination" of knocks for the port knocking daemon running on the FTP server, and hence when they want to connect to the FTP service, they simply send these knocks to the server and then start using the FTP service.

The question arises: what is the basic advantage of the additional step of sending knocks and then connecting to the FTP service? The answer is simple: the FTP service is not always running on the server. It is started only when the correct port knocks are sent to the server, and it is shut down once the server receives another predefined sequence of port knocks.

The potential backdoor to business-critical services is only opened for a short time, when it's required. Once the service is no longer needed, it is closed again, mitigating the vulnerability to attack.

One of the primary advantages to using port knocking is that it is platform, service, and application agnostic. Any operating system with the correct client and server software can take advantage of port knocking. If you need help finding a tool, you can find a list of port knocking implementations here. The site lists clients and daemons for pretty much any platform you’d care to use.

I selected knockd, which is considered one of the most famous and robust implementations of the port knocking mechanism for Linux and UNIX. In this article, I will cover setting up port knocking on a Red Hat Enterprise Linux (RHEL) server using knockd, a popular open source port knocking tool. Most importantly, I will try to extend the idea of port knocking beyond simple firewall modifications to more complex system administration tasks.

Note that knockd is available for other systems as well, so if you’re using Debian, Ubuntu, Mac OS X, or even Windows, you should be able to follow along with most of the advice herein to secure your system with knockd.

Flaws with Port Knocking

Before we begin, I should note that port knocking has some detractors. Some IT security professionals say that a predefined and fixed sequence of knocks is, in and of itself, a security flaw. To overcome this, some port knocking daemons have been modified to generate a random sequence of knocks, which can be used by clients to issue requests.

It’s also important to remember that port knocking is just one component of a successful security strategy. You’ll need to deploy other security mechanisms so that if an attacker is successful in providing the correct sequence, they are still faced with authentication and other barricades before connecting to a service.

Port Knocking: A Basic Overview

To start, let’s take a look at the basic functionality of a port knock server. knockd is a daemon that runs on a server, passively listening to network traffic. You configure knockd with a sequence of ports, the length of time between connection attempts, the type of packet that will be sent, and the command to be run when the correct sequence is given.

Once knockd "sees" a port sequence it has been configured to recognize, it will run the command it's been configured to run. Note that you can use TCP, UDP, or a combination of both. Usually the action will be an iptables command, but not always.

So, to implement port knocking, we start with the installation of knockd and run it in the background. (Or foreground, if you wish, but we will usually want to run it in the background.)

Securing A MySQL Database Remote Connections with Port Knocks

Now that we know what port knocking is, let’s put it to use. In this scenario, I have a business-critical MySQL-based application running on RHEL. On occasion, I need to allow remote connections from a DBA who is performing basic database maintenance activities.

However, for security reasons, we don’t want to allow remote database connections at all times or from every IP address. Because we wanted tighter control over remote connections, we decided to explore port knocking so that remote connections would be open for a limited time only and from a specific IP address.

Let's start with the firewall rule, just in case you're not already a firewall wizard. To append a rule to one of the "chains," you'll use the -A option. The -I parameter tells iptables to insert the rule into a specific position in the chain. This is important because you may want specific rules to be processed first. Make sure you give it a rule number.

Now, to secure MySQL connections to my database server (172.16.2.183), I blocked network traffic to the server's MySQL port (default 3306) coming from all addresses. For this purpose, I executed the following command:

iptables -A INPUT -p tcp -s 0/0 -d 172.16.2.183 --dport 3306 -j REJECT

You don’t want to be reissuing the command every time you restart the machine, so you’ll want to save the rule permanently, using iptables-save.
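On RHEL this is typically done by dumping the running rules into the file that is read at boot (a sketch; other distributions keep their rules elsewhere):

# persist the current iptables rules across reboots (RHEL style)
/sbin/iptables-save > /etc/sysconfig/iptables

# or, equivalently on RHEL:
/sbin/service iptables save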

Getting and configuring knockd

The next step is to install the knockd server on the system you want to use it on. You can get the RPM from the RHEL network.

After installing knockd, it's time to customize your configuration. The knockd config file is found at /etc/knockd.conf:

[options]
logfile=/var/log/knockd.log
[DB2clientopen]
sequence = 7050,8050,9050
seq_timeout = 10
tcpflags = syn
command = /sbin/iptables -I INPUT 1 -p tcp -s 192.168.2.201 --sport 1024:65535 -d 172.16.2.183 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
[DB2clientclose]
sequence = 9050,8050,7000
seq_timeout = 10
tcpflags = syn
command = /sbin/iptables -D INPUT 1

Let's take a look at the format. The syntax is very simple: you give knockd option/value pairs, separated by =, and port numbers are separated by commas in the order you want the "knocks" to be received. Don't forget to specify a logfile; you may need to review it later!

It should be obvious from the knockd.conf example that it has two types of actions that can be executed by the daemon, depending on the sequence it receives.

First, if it receives syn packets to ports 7050, 8050, and 9050, knockd will insert the first iptables rule as rule number 1 in the INPUT chain. This will open the MySQL database port, so a remote connection can be made from 192.168.2.201, and only that IP address. It's a good idea to specify the IP address whenever possible, so that if an attacker tries to connect while the port is open, they will still be denied.

On the other hand, if the server receives a knock sequence of 9050, 8050, and 7000, it will delete the rule so that all remote database connections will be closed down again.

I made sure that MySQL would know what address my DBA would be coming from, so I added my PC's IP address to the server's /etc/hosts file, created a test database called test1, and created a user as well, with the appropriate grant privileges.

First, fire up the MySQL client with mysql -u root -p test1 and enter the following commands:

mysql> create user test;
mysql> grant all privileges on *.* to 'test'@'dbawin'
       identified by 'polanipass' with grant option;

Next, restart knockd as a daemon.

/usr/sbin/knockd -d

It should be noted that, by default, knockd will start listening on eth0. If you need it to run on a different interface, you can configure it to do so using the -i option. For instance, to start knockd as a daemon on wlan0 you'd use /usr/sbin/knockd -i wlan0. If you're always going to run knockd on a different interface, you can add this to your knockd.conf:

[options]
interface = wlan0

Knock, Knock, It’s Me!

Now, knockd isn’t very useful without a client, so let’s get a client to talk to knockd. I chose a Windows-based Cygwin client, but you can find a client for just about any client OS at the implementations page mentioned earlier.

To use the Windows client, you open a DOS prompt and run something similar to this command:

C:\KNOCKKNOCKWINDOWS>knock.exe 172.16.2.183 7050 8050 9050

Of course, the IP address and ports will vary. Once the "knock" is issued, the knock daemon will execute the iptables command listed under the [DB2clientopen] section of knockd.conf and add the rule to the INPUT chain, allowing the DBA's PC to connect to the database running on the server.

Now you can connect to your MySQL database with your favorite client and do whatever you need to do. Once you’re finished, it’s time to close the door.

If you send the close knock sequence, in this case a syn packet sent to ports 9050, 8050, and then 7000, the MySQL port will be closed and all connections will be terminated. If you try to reconnect to the server, your MySQL client will time out and you’ll eventually see an access error. This will be the case until you send the proper sequence to re-open the port.

So, now you see how you can use port knocking to increase security for remote MySQL connections. Of course, this is really database (and application) independent, so you can use port knocking to secure any database or application you want to connect to remotely.

If it's too much hassle to open and close the connection each time you need to connect to the database, it might make more sense to set it up so that the port is open during specific hours. For example, if your database guru works 10 a.m. to 7 p.m., you could set up a script to open the port a bit before 10 a.m. and close it a bit after 7 p.m.
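A rough sketch of that idea, reusing the knockd example above (the times simply bracket the 10-to-7 window, and the knock client is assumed to be installed as /usr/bin/knock on a Linux/UNIX client machine), is a pair of cron entries that send the open and close sequences:

# open the MySQL port shortly before the DBA starts work...
50 9 * * 1-5 /usr/bin/knock 172.16.2.183 7050 8050 9050
# ...and close it again shortly after the working day ends
10 19 * * 1-5 /usr/bin/knock 172.16.2.183 9050 8050 7000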

This is not quite as secure, but it does mean that the port won’t be open 24/7, so it may block some automated and casual (i.e., not targeted) attacks. Also, if the port knocking is coupled with only allowing connections from specific IP addresses or IP address ranges, then you have an additional layer of security.

Performing Other System Administration Tasks with Knocks

But wait, that’s not all! Port knocking can be used to do more than set iptables rules. After configuring knockd to play doorkeeper, I decided to explore the feature and see if I could use it to make my life easier in other ways.

I decided I wanted to be able to restart my system remotely, just by "knocking" in the right sequence. I also configured knockd to kick off my backups to tape, so I don't even need to log in to start the backup: just send a quick series of packets, and my data is safe for another day.

Here’s my /etc/knockd.conf:

[options]
logfile=/var/log/knockd.log
[systemreboot]
sequence = 7050,8050,9050
seq_timeout = 10
tcpflags = syn
command = /usr/bin/reboot
[systembackup]
sequence = 9050,8050,7000
seq_timeout = 10
tcpflags = syn
command = /usr/bin/tar -cf /dev/rmt0 /home/root/

You can take this a lot farther, and set it up so that other admins (say, the junior admin who’s reliable but still a bit green) can perform complex actions just by using a knock client, or even just by running a shell script that sends the packets.

Summary

Port knocking is a very useful tool for systems security. It is because of its usefulness and robustness that the number of implementations, and users, is growing rapidly. If you can open a door into an otherwise closed black box to perform some system administration tasks, even without requiring a login to the system, that can be ideal for many environments.

Finally, it is always a good idea to further secure your systems by changing the knock sequences frequently, or by using random seed generators to create random port knocks.


Note: This article is one of my published works on security. It was published in Linux Magazine, March 2008 print edition. It can still be found on their website at www.linux-mag.com/id/5445

Wednesday 15 April 2009

TSM DRM States and Cycling process

DRM States:

There are six important states in the DRM concept:

1. Mountable
Tapes are inside the tape library and are therefore mountable into tape drives.

2. Courier
Tapes are now outside the library and can be handed over to the courier service so that they can deliver the tape cartridges to the offsite location.

3. Vault
Tapes are now inside the vault at the DR site.

4. Vaultretrieve
Tapes have expired, as they no longer contain any valid data. These tape cartridges can now be retrieved from the vault.

5. Courierretrieve
Tapes with expired data have been retrieved from the vault by the courier but have not yet been delivered to the main site.

6. Onsiteretrieve
Tapes with expired data have reached the main site and can be checked in to the tape library as scratch cartridges.



Daily (or weekly) cycle:
Depending upon your business needs, you may decide to move tape cartridges offsite on a daily or weekly basis.
There are two ways to do that. The simpler way is to use the ISC console, while the more typical way is to use the dsmadmc command line tool to identify and move offsite cartridges.

Let's start with the ISC first.
Log in to the ISC as iscadmin and identify the DRM cartridges in the mountable state. These are the cartridges which are eligible for movement.
You have to remove these cartridges from the library by changing their state to the next state, which is the courier state.
After the tapes reach the DR site vault, you can change the state of these tape cartridges to vault.

Once the data on the tape cartridges in the vault has expired, the state of those tape cartridges will automatically change to vaultretrieve. You have to query from the ISC or the command line to find out which tape cartridges are in the vaultretrieve state. Once you have found out which tape cartridges are to be brought back, you have to ask your courier service to bring them back.
Once you know that your expired cartridges have been picked up by your courier service, you have to change the state of these tape cartridges to courierretrieve through the ISC or the command line.

Now, when you get these expired cartridges back at the main site, you just have to change their state to onsiteretrieve. As soon as you change the state to onsiteretrieve, the history of these cartridges is deleted from TSM and you can then check them in, one by one, into the tape library as scratch cartridges. The command line sketch below shows the whole cycle.
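For the command line route, the whole cycle can be driven from a dsmadmc administrative session with the QUERY DRMEDIA and MOVE DRMEDIA commands. The sketch below walks one batch of volumes through the states described above; the '*' volume pattern and the server prompt are only illustrative, and you will normally add your own location and check-in options.

tsm: SERVER1> query drmedia * wherestate=mountable
tsm: SERVER1> move drmedia * wherestate=mountable tostate=courier
tsm: SERVER1> move drmedia * wherestate=courier tostate=vault
tsm: SERVER1> query drmedia * wherestate=vaultretrieve
tsm: SERVER1> move drmedia * wherestate=vaultretrieve tostate=courierretrieve
tsm: SERVER1> move drmedia * wherestate=courierretrieve tostate=onsiteretrieve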

Saturday 11 April 2009

Panj bhagar Ki Bhujia

Description:

A delicious Pakistani vegetable curry


Ingredients:


4-5 curry leaves, 1/4 teaspoon fenugreek seeds (methi dana in Urdu),
1/2 teaspoon cumin seeds (zeera in Urdu), 1/2 teaspoon coriander seeds (sabit dhania in Urdu), 1/4 teaspoon onion seeds (kalonji), 1/2 teaspoon chopped garlic, red chilli powder, turmeric powder (haldi in Urdu), 2 teaspoons tamarind pulp, 500 grams potatoes (cut into wedges)


Recipe:


1. Take 4 tablespoons of oil in a pan and put in all the bhagaar ingredients (as mentioned above).

2. Let them fry a bit. Add the chopped garlic, stir, add the potatoes, then add the chilli, turmeric powder and salt.

3. Add some water and cook till the potatoes are tender. Then stir well, finally add the tamarind pulp and let it simmer for two to three minutes.

Your delicious and spicy potato curry is ready!!!

Friday 10 April 2009

Configuring SSH in Just Five Minutes



Having a problem setting up SSH on UNIX or Linux systems? Or are you tired of reading big manuals for SSH setup? No problem... I am documenting three simple steps for configuring SSH with RSA authentication... do it yourself in 5 minutes.
Prerequisite: the remote system needs to have ssh installed and sshd running, with RSA authentication enabled. This is the default configuration, and is typically specified with the option: RSAAuthentication yes in /etc/ssh/sshd_config.

Zeroth step: You will need ssh installed on your computer. Procedures for doing this vary by Linux and/or Unix (or other OS) distribution. Refer to system documentation for details.
1. Create a local RSA key:
$ ssh-keygen
Follow the prompts, this takes a few seconds as your computer gathers entropy from the system.
You will be asked to supply a passphrase; you can elect to choose a null passphrase. I would recommend you *do* supply a passphrase as it provides additional security -- your key is not useful without it. The upside is that you only have to remember this one passphrase for all the systems you access via RSA authentication. You can change the passphrase later with "ssh-keygen -p".
This is typically stored in your home directory under .ssh/identity. After doing this, a directory listing of ~/.ssh should look like:

-rw------- 1 karsten karsten 528 Aug 4 21:37 identity
-rw-r--r-- 1 karsten karsten 332 Aug 4 21:03 identity.pub
-rw-r--r-- 1 karsten karsten 28106 Jul 26 16:52 known_hosts

2. Copy the public key identity.pub to the hosts you wish to access remotely. You can do this by any method you like; one option is to use scp, naming the key to indicate your present host:
$ scp .ssh/identity.pub remote-user@remote.host:local-host.ssh
e.g.: I might name a key for my host "navel" navel.ssh.
3. Connect to the remote host. You don't have RSA authentication enabled yet, so you'll have to use an old method such as walking up to the terminal or supplying a password. Add the new public key to the file .ssh/authorized_keys.

$ cat local-host.ssh >> .ssh/authorized_keys
Note the use of two right-angle brackets ">>" -- this will append the contents of local-host.ssh to a preexisting file, or create the file if it does not already exist.
Check the permissions of .ssh/authorized_keys, it must be as below or you won't be able to use RSA authentication:
-rw-r--r-- 1 karsten karsten 334 Aug 4 21:03 authorized_keys
And you're all set!
4. Test the method by logging out of the remote server and trying to connect to it via ssh:
$ ssh remote-user@remote-host
You may be prompted for your RSA key passphrase, but you won't need a remote password to connect to the host. If you are prompted for a password, or your connection is refused, something is wrong, and you'll want to refer to the troubleshooting section below.
You can repeat steps 1 - 3 for each remote host you wish to connect to.
More information:
• man ssh
• man ssh-keygen
• man sshd

Thursday 9 April 2009

Network services minimization on AIX

Minimize network services on AIX Servers

Principles

Network services present a significant risk to security:

  • Only enable the strict minimum of services needed. The number of system processes listed by "ps -ef" or equivalent should be less than 10.
  • Use encrypted tools (like SSH) rather than clear-text network logins (e.g. telnet, 3270, ftp, rlogin, rcmd).
  • Keeping up to date with security patches on network daemons is particularly important.
  • Daemons should run as non-root users.
  • Daemons should "chroot" to a dedicated directory.
  • Use encryption where possible to prevent snooping or replay attacks.
  • Services must use minimal umask, file permissions etc.
  • Strong authentication (with token or lists) should be considered for critical services.
  • Applications should package structure

Minimise Inetd network Services

Inetd is a process which automatically starts certain daemons, such as telnet and ftp, when connections are made.

Inetd services can be enabled or disabled with the 'chsubserver' command on AIX. After changes to the inetd configuration, the daemon needs to be sent a hang-up signal with 'refresh -s inetd'. For example:

[server1]# chsubserver -d -v daytime -p udp
[server1]# chsubserver -d -v daytime -p tcp
[server1]# grep daytime /etc/inetd.conf
#daytime stream tcp nowait root internal
#daytime dgram udp wait root internal

It is recommended that ALL services except the following be disabled:

..... TBD list ...

This can be achieved with the following commands:
chsubserver -d -v daytime -p udp
chsubserver -d -v daytime -p tcp
..... TBD list ...

securetcpip ?

Special services which may be needed (discuss what measures to take for each one)

1. ftp

2. telnet

3. other?

4. tftp - for diskless booting : /etc/tftpaccess.ctl

Minimize /etc/rc.tcpip network services

A description of what services are started in /etc/rc.tcpip and how they can be changed with chrctcp.
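For example, a daemon started from /etc/rc.tcpip that is not needed can be stopped and disabled in one step with chrctcp (sendmail below is only an example; choose the daemons that are genuinely unused on your systems):

[server1]# chrctcp -S -d sendmail      # stop sendmail now and comment it out of /etc/rc.tcpip
[server1]# lssrc -s sendmail           # confirm the subsystem is now inoperative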

/usr/sbin/no -o clean_partial_conns=1
/usr/sbin/no -o bcastping=0
/usr/sbin/no -o directed_broadcast=0
/usr/sbin/no -o ipignoreredirects=1
/usr/sbin/no -o ipsendredirects=0
/usr/sbin/no -o ipsrcroutesend=0
/usr/sbin/no -o ipsrcrouterecv=0
/usr/sbin/no -o ipsrcrouteforward=0
/usr/sbin/no -o ip6srcrouteforward=0
/usr/sbin/no -o icmpaddressmask=0
/usr/sbin/no -o nonlocsrcroute=0
/usr/sbin/no -o tcp_pmtu_discover=0
/usr/sbin/no -o udp_pmtu_discover=0
/usr/sbin/no -o ipforwarding=0


Minimize /etc/rc.nfs network services

A description of /etc/rc.nfs

/etc/exports

secure nfs : /usr/secretdata -secure
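Where an export is genuinely needed, it should at least be limited to named clients rather than the world; a tightened /etc/exports entry might look like the following sketch (the client names are placeholders):

/usr/secretdata -secure,ro,access=client1:client2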


Minimize inittab services

A description of what services are started in /etc/inittab and how they can be changed with mkitab and rmitab.
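As an illustration (the entries named below are common candidates on AIX, not a definitive list), unused inittab entries can be listed and removed like this:

# lsitab -a                          # list everything started from /etc/inittab
# rmitab qdaemon                     # drop the print spooler daemon if printing is not used
# rmitab piobe
# rmitab uprintfd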


Minimize other services

  • Restrict AIXwindows/CDE login to console
    • The xss command uses the enhanced MIT screen saver extensions.
    • xauth, xhost
  • Disable anonymous ftp
  • Disable anonymous ftp writes
  • Disable ftp to system accounts
  • Lock down root access

The default configuration allows telnet and rlogin access to the root account. This can be configured in the /etc/security/user file -- set the rlogin option to "false" for all system accounts. System managers should log in to their own accounts and then su, so that we have an audit trail.

  • disable SNMP readWrite communities
    The default SNMP configuration includes these "readWrite" communities:

[server1]# grep readWrite /etc/snmpd.conf
# readOnly, writeOnly, readWrite. The default permission is readOnly.
community private 127.0.0.1 255.255.255.255 readWrite
community system 127.0.0.1 255.255.255.255 readWrite 1.17.2
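A minimal fix (a sketch; keep a read-only community only if your monitoring actually needs it) is to comment those lines out of /etc/snmpd.conf and restart the daemon:

[server1]# vi /etc/snmpd.conf          # comment out or remove the readWrite community lines
[server1]# stopsrc -s snmpd
[server1]# startsrc -s snmpd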


Wednesday 8 April 2009

Activating FlashCopy Feature on DS8000 Storage Subsystems



FlashCopy, being a premium feature, requires a separate license, which can be bought along with the DS storage subsystem or ordered as an upgrade (also called an MES in IBM terminology) for existing DS storage subsystems.

For DS6000 and DS8000 storage subsystems, it is mandatory to activate the license activation codes (or at least the Operating Environment License code, OEL). This can be done through the DS SMC or through the DS CLI console. Other advanced features like FlashCopy (or PPRC) can be activated after activation of the OEL.

For activation of the FlashCopy feature on a DS8000, you must first gather the following information:

  1. What is the machine signature of the DS8000? This is the most important piece of information needed to activate your FlashCopy feature. The machine signature can easily be found using the following DScli commands:

dscli> lssi

Date/Time: March 30, 2005 6:53:05 PM CEST IBM DSCLI Version: 5.0.1.99

Name ID Storage Unit Model WWNN State ESSNet

============================================================================

- IBM.2107-7520431 IBM.2107-7520430 922 5005076303FFC19D Online Enabled

dscli> showsi IBM.2107-7520431

Date/Time: March 30, 2005 6:53:11 PM CEST IBM DSCLI Version: 5.0.1.99 DS: IBM.2107-7520431

Name -

desc -

ID IBM.2107-7520431

Storage Unit IBM.2107-7520430

Model 922

WWNN 5005076303FFC19D

Signature 896e-c0a3-38e9-5702

State Online

  2. What is the machine serial number? The serial number of the DS8000 can be read from the front of the base frame (lower right corner). On the DS command line interface you can also use the lssu command for this purpose.

  3. What is the order confirmation code (OCC)? The order confirmation code is printed on the DS8000 series order confirmation code document, which is usually sent to the client's contact person together with the delivery of the machine.

After noting down the machine serial number, machine signature and OCC, you can access the following IBM internet site to generate activation codes for FlashCopy:

https://www-03.ibm.com/storage/dsfa/index.jsp

On this website, after entering all this information for your DS storage, you will be redirected to the View Activation Codes window, where you can download your activation codes, highlight and copy and paste them, or simply write them down. If you select Download now, you will be prompted to select a file location. The file you download will be a very small XML file.

We opted for writing the activation codes in our small notebook; no doubt it is the more handy approach!!!

In our case, the activation code for FlashCopy which we got from the above website was 234-1934-J153-10DC-01FC-CA7D-5678-5678, so the next step was simply the application of this activation code. We did this using DScli:

dscli> applykey -key 234-1934-J153-10DC-01FC-CA7D-5678-5678 IBM.2107-7520431

Date/Time: 2 May 2005 14:47:06 IBM DSCLI Version: 5.0.3.5 DS: IBM.2107-7520431

CMUC00199I applykey: License Machine Code successfully applied to storage image

IBM.2107-7520431

We then verified the activation of FlashCopy on the DS8000 using the lskey command:

dscli> lskey IBM.2107-7520431

Date/Time: March 30, 2005 6:53:30 PM CEST IBM DSCLI Version: 5.0.1.99 DS: IBM.2107-7520431

Activation Key Capacity (TB) Storage Type

================================================

Flashcopy 5 FB

Operating Environment 5 All

Using FlashCopy with AIX for online backups

Automated Online Backup solution using FlashCopy in AIX Environment

The design and implementation of a foolproof backup strategy has been an important topic for companies over the years. With the growth of data into terabytes in recent years, companies are now looking for backup solutions which are not only foolproof but also capable of completing the whole backup process in the shortest possible period of time (no matter what the size of the data itself is).

Think of a bank which has to complete its end-of-day operations daily before 8:00 AM so that the next day's business can start normally. Usually in such environments, end-of-day processes are always accompanied by "before end of day" and "after end of day" backup operations. If the data size of such an organization is in terabytes, it is really very difficult to complete these processes within a few hours unless some "snapshot" techniques are used.

IBM storage solutions, comprising all the high-end storages (like DS4300, DS6800 and DS8000), come with the advanced "FlashCopy" feature which helps customers meet these business needs. This feature is in fact a data snapshot technique at the storage hardware level which copies data bit by bit. The FlashCopy feature also supports incremental FlashCopy operations, which are indeed much faster than normal FlashCopy operations. Keep in mind that even normal FlashCopy operations are so quick that a whole consistent snapshot of terabytes of data may be made available in less than one or two minutes.

The only thing which we should keep in mind is to ensure consistency at the database and operating system level, as these snapshots are done at the hardware or storage level. This article describes procedures for using the FlashCopy feature for consistent and automated backup operations in an AIX environment.

Technical Review of FlashCopy Feature

The IBM FlashCopy technique provides an instant point-in-time copy of LUNs present on DS storage subsystems. The point-in-time copy function gives an instantaneous copy, or 'view', of the original data at a specific point in time. This is also known as the T0 (Time Zero) copy of the original data.

When FlashCopy is invoked, the command returns to the operating system as soon as the FlashCopy pair relationship and the necessary control bitmaps have been established. This process takes only a few seconds to complete. Thereafter, we have access to a T0 copy of the original logical volumes. As soon as the relationship between both copies has been established, read and write operations can be done on both the source and target volumes. So one of the great advantages of using FlashCopy is that the source data remains online and available to users (although writes need to be temporarily suspended at the application or database level in order to ensure data consistency) during the FlashCopy operation. Similarly, as this operation is done at the storage or hardware level, it does not impact server performance and usually completes in a fraction of a second (regardless of the size of the data, which may be terabytes in some scenarios).

Due to all these benefits, the point-in-time copy created by FlashCopy is typically used when a copy of the production system is needed with minimum downtime. This state-of-the-art feature is also used for fast online backups of production systems with minimal impact on system performance. An illustration of the FlashCopy concept can be found in the reference below.

Reference: IBM White paper “Storage Solutions for Oracle Database:

Snapshot Backup and Recovery with IBM Total Storage Enterprise Storage Server”

Enabling and activating FlashCopy Feature

FlashCopy is a premium feature that can be purchased separately with IBM DS4000/DS6000 and DS8000 series storage boxes. Although the way this feature is used differs from storage to storage, the basic technology behind it is the same.

Obtaining a Feature Key File for premium features, including FlashCopy, also varies depending upon the DS4000 packaging procedures for the country where the storage box was purchased and the time of order:

_ If you bought any premium feature together with the DS4000, the feature key file might be included in the installation package (usually on CDs)

_ If no Feature Key File has been supplied on the installation media and only a proof of license is supplied, you can generate a key using the feature enabling identifier present on the proof of license card and the serial number of the storage box on the Web at:

https://www-912.ibm.com/PremiumFeatures

Reference: IBM Red Book DS4000 Series, Storage Manager and Copy Services

Possible Operations with FlashCopy drives

There are four possible operations on the FlashCopy drives which are created on DS4000 storage subsystems. These operations are the creation, deletion, recreation (or so-called enabling) and disabling of FlashCopy drives.

You can create FlashCopy logical drives either through the Create FlashCopy Logical Drive Wizard or by using the command line interface (CLI) with the create command. The latter can be scripted to support automatic operations. This operation will also create a repository drive associated with the FlashCopy LUN.

It is usually recommended to stop application access to the base logical drive and unmount it before creating the FlashCopy drive, in order to ensure consistency. Practically speaking, however, unmounting the base logical drive is not possible, especially in a 24x7 environment, so only stopping write access to the base logical drive during the FlashCopy creation operation suffices in most cases.

After creation, the FlashCopy drive has to be assigned to a host using the logical drive-to-host mappings available in the Mappings View of the Subsystem Management window of the Storage Manager software.

The deletion process simply deletes the FlashCopy drive and its associated repository drive. It also deletes the host mappings for the FlashCopy drive, without any impact on I/O or host access to the base logical drive.

Disabling a FlashCopy drive is slightly trickier. If a FlashCopy logical drive is no longer needed, it can be disabled. As long as a FlashCopy logical drive is enabled, the performance of the DS storage subsystem is slightly impacted, because continuous copy-on-write activity is going on against the associated FlashCopy repository logical drive. When the FlashCopy logical drive is disabled, the copy-on-write activity stops and performance returns to its optimal state.

The main advantage of disabling the FlashCopy logical drive instead of deleting it is that the FlashCopy drive, along with its repository drive and host mappings, is retained. When you need a new FlashCopy of the same base logical drive, you can simply use the re-create option to reuse the disabled FlashCopy. This takes less time than creating a new one and gives a fresh snapshot of the changed data.
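The disable and re-create operations can likewise be driven from SMcli. The two commands below are the same ones used in the appendix scripts (the controller IP addresses and logical drive names are from my environment):

./SMcli 192.168.10.208 192.168.10.209 -c 'disableFlashCopy logicalDrive ["Disk1-1"];'

./SMcli 192.168.10.208 192.168.10.209 -c 'recreateFlashCopy logicalDrive ["Disk1-1"];'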

When you re-create a FlashCopy logical drive, please note that:

  • The FlashCopy logical drive must be in either an optimal or a disabled state.
  • All copy-on-write data on the FlashCopy repository logical drive is deleted.
  • The FlashCopy and FlashCopy repository logical drive parameters remain the same as those of the previously disabled FlashCopy logical drive and its associated FlashCopy repository logical drive.

After the FlashCopy logical drive is re-created, you can change parameters on the FlashCopy repository logical drive through the appropriate menu options.

For automated FlashCopy operations I created FlashCopy drives for all required base LUNs once, using the Storage Manager software. For subsequent operations I chose disabling/re-creating FlashCopy drives rather than deletion/creation, because the former retains the host-to-LUN mappings.

Implementation Description



My environment comprised AIX 5.3, Oracle 9.2 and SAP; however, FlashCopy can be used with any relational database that supports online backups.

I used DS4300 FlashCopy (the disable and re-create functions) to make an instant image of all data filesystems along with the archive log filesystem, and to make these target filesystems available on the same host (the SAP DB/CI server). Hence both the source and target filesystems are mounted on the same server in my implementation (although it is possible to mount the target filesystems on an AIX node different from the source node). The target filesystems are then backed up to the TSM server using the TSM backup/archive AIX client with the help of the TSM scheduler.
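The scheduled TSM action essentially archives the flashed filesystems. An equivalent manual invocation of the backup/archive client would look roughly as follows (the filespecs are illustrative and match the target mount points used in the appendix scripts):

dsmc archive "/fs/oracle/R3P/sapdata1/" -subdir=yes

dsmc archive "/fs/oracle/R3P/sapdata2/" -subdir=yes

and so on for the remaining target filesystems.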

I automated all these daily operations by integrating UNIX shell scripting with the Storage Manager command-line interface (SMcli). Two shell scripts, flashrecreate.sh and flashdisable.sh, which were used in this automated implementation, are listed in the appendix. These scripts are scheduled by the AIX cron facility so that the FlashCopy re-creation is done every night at 1:00 AM and the FlashCopy disable operation is done every morning at 10:00 AM. The disable operation was equally important, as it stops unnecessary tracking of data changes and thus avoided the possibility of filling up the associated FlashCopy repository drives.
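The cron entries looked roughly like the following (the /scripts path matches the appendix scripts; the log file names are only an illustration):

0 1 * * * /scripts/flashrecreate.sh > /scripts/flashrecreate.log 2>&1

0 10 * * * /scripts/flashdisable.sh > /scripts/flashdisable.log 2>&1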

In order to ensure consistency of such snapshot operations at the operating-system level, AIX 5L provides a "freeze" option for JFS2 filesystems. This option can be used with the chfs command to freeze I/O to a mounted JFS2 filesystem before initiating the FlashCopy operation (a sketch of the freeze/thaw commands appears after the list below). The trickier part is how to make FlashCopy operations consistent in plain JFS environments (as in my case). For that purpose I used the following techniques:

  1. First, I put the whole Oracle database into hot backup mode before performing the FlashCopy re-create operation, so that Oracle can guarantee a consistent, recoverable copy of the datafiles while the copy is taken.
  2. I flushed the filesystem cache to disk using the sync command and then waited for around one minute before starting the FlashCopy task, so that any data present in the filesystem cache is written to disk. Note that the AIX sync command does not guarantee 100% that all data is written from cache to disk, but it is still a useful tool for this purpose.
  3. After execution of the sync command, a further delay of a few seconds (say, 10 seconds) was added so that the Oracle redo log file updates complete before the actual start of the FlashCopy commands.

  4. Finally, before mounting the target filesystems on the AIX server, I ran the fsck command against every target filesystem. In my case the total fsck operation took around 35 minutes for 500 GB of filesystems (done sequentially for five filesystems). Depending upon the size of the backup window, the fsck operation can be started in parallel on all target filesystems to save time. In my environment this delay was acceptable, so I ran the operation sequentially and did not start the TSM scheduled archive operations until all target filesystems were mounted.
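For reference, here is a minimal sketch of the JFS2 freeze/thaw sequence mentioned before the list (the filesystem name is illustrative; my environment used plain JFS, so these commands were not part of my scripts):

chfs -a freeze=60 /oracle/R3P/sapdata1    # freeze I/O to the JFS2 filesystem for at most 60 seconds

# ... initiate the FlashCopy re-create operation here ...

chfs -a freeze=off /oracle/R3P/sapdata1   # thaw the filesystem explicitly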

In order to avoid any possibility of the FlashCopy re-create operation being executed while FlashCopy drives for the base logical drives are still enabled, I placed a logical lock in the shell scripts so that the re-create (enable) task is executed if and only if FlashCopy is already disabled for that logical drive.

Backup Restoration

Another important concern with any backup strategy is the ease and flexibility with which each backup can be restored. No backup strategy guarantees restoration success; the only way to build confidence is to restore backups on a regular basis.

With the FlashCopy backup technique, the target filesystems can be mounted on the same AIX server that contains the source filesystems, or on a different AIX server. The only important thing to note is that these target FlashCopy filesystems are mounted on AIX hosts with mount points starting with /fs/ by default. When backed up to the TSM server using the TSM backup/archive client (or even with a simple tar command), the filesystems are archived with those same mount points. Therefore, after restoration on the target AIX host, the mount points of these filesystems have to be changed back to their original paths (dropping the /fs prefix) using the chfs command before starting the application or database on the target server.

A simple shell script can also be written to change the mount points of all restored filesystems using the chfs command, thereby automating the restoration process. I observed a restoration time of around 45 to 55 minutes to restore 500 GB of data using the TSM backup/archive retrieve function (over a Gigabit Ethernet network).
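A minimal sketch of such a script, assuming the restored filesystems all sit under /fs/oracle/R3P as in my environment (the filesystem list is illustrative):

#!/bin/ksh
# Strip the /fs prefix from each restored FlashCopy mount point
for fs in /fs/oracle/R3P/sapdata1 /fs/oracle/R3P/sapdata2 /fs/oracle/R3P/sapdata3 /fs/oracle/R3P/sapdata4 /fs/oracle/R3P/sapdata5
do
    target=${fs#/fs}        # e.g. /oracle/R3P/sapdata1
    chfs -m $target $fs     # change the mount point recorded in /etc/filesystems
done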

Possible Issues and Resolutions

A number of issues came up during the early implementation of this strategy. They were resolved as follows:

  1. Sometimes logical drives on the storage subsystem change their controller ownership away from their preferred controllers because of temporary hardware problems at the SAN or storage-subsystem level. Although AIX handles this controller ownership change with the RDAC driver without any impact on accessibility from the operating system, problems may arise from such an event while using FlashCopy, especially through SMcli in an unattended mode. To resolve this, I used the IP addresses of both storage controllers with the SMcli commands, so that the re-create and disable operations succeed in either case.
  2. FlashCopy operations might fail because the repository drive fills up. This scenario was avoided by disabling FlashCopy on a daily basis, once backups are done.
  3. FlashCopy re-create tasks might cause unpredictable problems if executed without first disabling the already enabled FlashCopy drives. This scenario was avoided by building simple locking logic into the shell scripts used for the re-create/disable FlashCopy operations.
  4. Filesystem corruption might lead to several OS issues, including even a crash of the AIX server. This was avoided by running a full fsck before mounting the target filesystems on AIX.


Appendix A - Scripts

------------------------------------------------------------------------------------------------------------------

-- Written : For R3PSAP AIX node

-- Date : Mar 2005

-- Script : begin_backup.sql

-- Purpose : Places all SAP tablespaces into begin backup (hot backup) mode; this
-- helps ensure database consistency before the online backup is taken using FlashCopy

--------------------------------------------------------------------------------------------------------------------

connect / as sysdba

alter tablespace PSAPBTABD begin backup;

alter tablespace PSAPBTABI begin backup;

alter tablespace PSAPCLUD begin backup;

alter tablespace PSAPCLUI begin backup;

alter tablespace PSAPDDICD begin backup;

alter tablespace PSAPDDICI begin backup;

alter tablespace PSAPDOCUD begin backup;

alter tablespace PSAPDOCUI begin backup;

alter tablespace PSAPEL46CD begin backup;

alter tablespace PSAPEL46CI begin backup;

alter tablespace PSAPES46CD begin backup;

alter tablespace PSAPES46CI begin backup;

alter tablespace PSAPLOADD begin backup;

alter tablespace PSAPLOADI begin backup;

alter tablespace PSAPPOOLD begin backup;

alter tablespace PSAPPOOLI begin backup;

alter tablespace PSAPPROTD begin backup;

alter tablespace PSAPPROTI begin backup;

alter tablespace PSAPROLL begin backup;

alter tablespace PSAPSOURCED begin backup;

alter tablespace PSAPSOURCEI begin backup;

alter tablespace PSAPSTABD begin backup;

alter tablespace PSAPSTABI begin backup;

alter tablespace PSAPTEMP begin backup;

alter tablespace PSAPUSER1D begin backup;

alter tablespace PSAPUSER1I begin backup;

alter tablespace SYSTEM begin backup;

alter system switch logfile;

alter system switch logfile;

alter system switch logfile;

alter system switch logfile;

--------------------------------------------------------------------------------------------------------------

-- Written : For R3PSAP AIX node

-- Date : Mar 2005

-- Script : end_backup.sql

-- Purpose : To bring all SAP tablespaces back to normal (end backup) mode

--------------------------------------------------------------------------------------------------------------

connect / as sysdba

alter tablespace PSAPBTABD end backup;

alter tablespace PSAPBTABI end backup;

alter tablespace PSAPCLUD end backup;

alter tablespace PSAPCLUI end backup;

-- end backup for the remaining tablespaces placed into backup mode by begin_backup.sql

alter tablespace PSAPDDICD end backup;

alter tablespace PSAPDDICI end backup;

alter tablespace PSAPDOCUD end backup;

alter tablespace PSAPDOCUI end backup;

alter tablespace PSAPEL46CD end backup;

alter tablespace PSAPEL46CI end backup;

alter tablespace PSAPES46CD end backup;

alter tablespace PSAPES46CI end backup;

alter tablespace PSAPLOADD end backup;

alter tablespace PSAPLOADI end backup;

alter tablespace PSAPPOOLD end backup;

alter tablespace PSAPPOOLI end backup;

alter tablespace PSAPPROTD end backup;

alter tablespace PSAPPROTI end backup;

alter tablespace PSAPROLL end backup;

alter tablespace PSAPSOURCED end backup;

alter tablespace PSAPSOURCEI end backup;

alter tablespace PSAPSTABD end backup;

alter tablespace PSAPSTABI end backup;

alter tablespace PSAPTEMP end backup;

alter tablespace PSAPUSER1D end backup;

alter tablespace PSAPUSER1I end backup;

alter tablespace SYSTEM end backup;

---------------------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------------

#script name: flashrecreate.sh

#!/bin/ksh

#

# Written : For R3PSAP AIX node

# Date : Mar 2005

# Created By : Khurram Shiraz

# Purpose : UNIX shell script for re-creating FlashCopy drives and making them
# available for the TSM client to be backed up to the TSM server

-----------------------------------------------------------------------

TESTFILE="/scripts/lockfile"

if [ ! -f $TESTFILE ];

then

echo "Please ensure that FlashCopy pairs are already disabled."

echo "It seems that they are not disabled,"

echo "therefore exiting!"

exit 1

else

echo Putting Oracle into hot backup Mode

echo please wait ............................

#

su - orar3p -c "sqlplus /nolog < /scripts/begin_backup.sql"

sync

sleep 10

# Execution of SMcli commands

cd /usr/SMclient

./SMcli 192.168.10.208 192.168.10.209 -c 'recreateFlashCopy logicalDrive ["Disk1-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'recreateFlashCopy logicalDrive ["Disk2-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'recreateFlashCopy logicalDrive ["Disk4-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'recreateFlashCopy logicalDrive ["Disk5-1"];';

# Now working for Flashed Data.......

#

cfgmgr

#z=`lsdev -Cc disk | grep Snapshot | awk '{ printf $1 }'`

# Preparation of LVM & VGs for mounting of filesystems

echo sleeping

sleep 10

chdev -l hdisk9 -a pv=clear

chdev -l hdisk11 -a pv=clear

chdev -l hdisk12 -a pv=clear

chdev -l hdisk13 -a pv=clear

recreatevg -y copyvg1 -Y cpy_ hdisk9

recreatevg -y copyvg2 -Y cpy_ hdisk11

recreatevg -y copyvg3 -Y cpy_ hdisk12

recreatevg -y copyvg4 -Y cpy_ hdisk13

# Putting Oracle back to normal Mode

su - orar3p -c "sqlplus /nolog < /scripts/end_backup.sql"

echo "now running fsck and mounting filesystems"

fsck -y /fs/oracle/R3P/sapdata1

mount /fs/oracle/R3P/sapdata1

fsck -y /fs/oracle/R3P/sapdata2

mount /fs/oracle/R3P/sapdata2

fsck -y /fs/oracle/R3P/sapdata3

mount /fs/oracle/R3P/sapdata3

fsck -y /fs/oracle/R3P/sapdata4

mount /fs/oracle/R3P/sapdata4

fsck -y /fs/oracle/R3P/sapdata5

mount /fs/oracle/R3P/sapdata5

cd /scripts

rm lockfile

exit 0

fi

---------------------------------------------------------------------------------------------------------------------

# script name: flashdisable.sh

#

# Written : For R3PSAP AIX node

# Date : Mar 2005

# Purpose : Shell script for disabling FlashCopy target drives on the TSM client
# node and removing all related OS information

-----------------------------------------------------------------------------

#!/bin/ksh

# Unmount all filesystems which were created during the FlashCopy operation

#

unmount /fs/oracle/R3P/sapdata1

unmount /fs/oracle/R3P/sapdata2

unmount /fs/oracle/R3P/sapdata3

unmount /fs/oracle/R3P/sapdata4

unmount /fs/oracle/R3P/sapdata5

# Varyoff all Flash copy volume Groups

#

varyoffvg copyvg1

varyoffvg copyvg2

varyoffvg copyvg3

varyoffvg copyvg4

# Export all Flash copy volume groups

exportvg copyvg1

exportvg copyvg2

exportvg copyvg3

exportvg copyvg4

# Remove all snapshot logical drives

rmdev -dl hdisk9

rmdev -dl hdisk11

rmdev -dl hdisk12

rmdev -dl hdisk13

cd /usr/SMclient

./SMcli 192.168.10.208 192.168.10.209 -c 'disableFlashCopy logicalDrive ["Disk1-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'disableFlashCopy logicalDrive ["Disk2-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'disableFlashCopy logicalDrive ["Disk4-1"];';

./SMcli 192.168.10.208 192.168.10.209 -c 'disableFlashCopy logicalDrive ["Disk5-1"];';

cd /scripts

touch lockfile

exit 0
