Thursday 25 June 2009

Which Mobile to Select? Nokia 5800 or N73?

Yes, after two years of use my current mobile has become ugly. Although it is still working fine, I realize that my colleagues are now fed up with it. So three weeks ago I decided to change it and go for the "latest available technology".
Unfortunately, I am not a "mobile freak". I am usually not much interested in mobiles and their features. In my view, you only use three or four features of your mobile and that's it. So in my opinion, your mobile should have good reception, a good loudspeaker, a good camera and one good way to transfer data to/from the mobile. I know most of my readers will disagree with my point of view. They will say that a mobile should have GPRS, GPS, Gxx, GDD and GDDxx as well, and that if any one of these features is missing then the mobile is useless. But truly speaking, it is just a "status symbol" and a "game of competition".

Now that I have already decided to change my current mobile and am still trying to select which model/make will suit me, I have realized that it is not going to be an easy task. I have never been in this kind of trouble before: the basic features I am interested in are present in nearly every good model and make of mobile.
So I soon realized that, while searching among different makes and models, I am also becoming a "mobile freak". I am also looking for a touch screen and GPS software in mobiles. I am also looking for full-keyboard options, although I never write more than two messages per day.

Another very astonishing thing is "made in China". Almost all the Nokia models available in Kuwait are now "made in China". I am not against Chinese products, but I wonder whether this counts in the status symbol game or not. My mobile is Hungarian or Finnish made; what about yours? Is it Chinese? Oh, I am starting to feel sorry for you, hahahaha....

So guys, please help me: I have posted a poll, for my own sake, to help me decide.
Is the Nokia 5800 (China) better than the Nokia N73 (Finland)? Please participate in this poll and help me make this difficult decision... otherwise I will not be able to make it for another two years, and the technology will have changed again.

Use Optical Media/DVDs for AIX backups





Using Optical Media for your backups on AIX


Optical media like DVDs and CD-RWs have been an inexpensive way of taking backups compared to tape cartridges, and with the sharp decline in optical media prices, the price difference between the two keeps growing. On the other hand, there are also disadvantages (in DVD usage, for example). DVDs are always slower than tape cartridges, so DVD backups take much longer to complete than backups on tape cartridges.
Moreover, DVD storage capacity is limited to about 4 GB, compared to 20 GB and more for tapes.

For Windows system administrators, taking backups to these inexpensive DVD cartridges has been an extremely easy job, and Linux system administrators also easily enjoy using DVD cartridges for their day-to-day backups.

For IBM pSeries users and administrators, using DVD cartridges has not been a simple task. First of all, many of them have difficulty finding DVD cartridges compatible with the DVD-RAM drives in their systems. Second, some have trouble deciding which technique to use on AIX to utilize these DVD cartridges for day-to-day backup operations. In this article, I will highlight different ways to use DVD cartridges for backup operations on the AIX operating system and try to answer many of the basic questions that come to the mind of an AIX system administrator who wants to use DVD or CD-RW media for system-level backups.


Media & Compatible devices


The first and major question for AIX system administrators is selecting the appropriate optical media for backups on pSeries. In this regard, you have to consider the device attached to the pSeries box.
The following table, from IBM support sites, lists the supported devices and compatible optical media for IBM pSeries boxes.





So carefully determine which optical device you have on your system, and then use the above table to find compatible media for that device.
Most of the latest IBM pSeries servers now come with DVD-RAM devices which support DVD-RAM cartridges; CD-RW devices are not orderable in most circumstances. The most common DVD-RAM drive from IBM, available on the latest pSeries boxes, is the "IBM 4.7 GB IDE Slimline DVD-RAM Drive" with feature code 5751. You can order IBM DVD media for this drive (the media part number is also mentioned in the above table), or you can buy other vendors' media, which are easily available in the market at much cheaper rates. While selecting non-IBM media, once again always pay attention to your drive type (you may use the above table as a reference); otherwise you may face compatibility errors while creating backups on the media. For example, I used Type 2 single-sided rewritable DVD-RAM discs from Imation with the IBM 4.7 GB IDE DVD-RAM drive for a number of years without any problems.
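To check which optical device your system actually has before consulting the table, the standard AIX device commands can be used; as a sketch (cd0 is an assumed device name):

```
/home/root> lsdev -Cc cdrom        # list optical devices and their state
/home/root> lscfg -vl cd0          # show model and FRU details for the drive
```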







Appropriate command selection for system backups


AIX 5L supports the creation of bootable system backups, starting from AIX 5.1. Many system administrators, however, still don't know which exact command to use for this purpose. In fact, AIX 5L uses only one command in the background, mkcd; however, while using smitty, there are two possibilities: you can use "smitty mkcd" or "smitty mkdvd".

If you use smitty mkcd, you will indirectly be using the mkcd command with the -d option, which creates a Rock Ridge format image (a CD-R format image).
However, if you go for the smitty mkdvd option (usually used for creating backups on DVD cartridges), you will be prompted with two possible format options: UDF format (which in the background uses the mkcd command with the -U option), and ISO9660 format (which also uses the mkcd command in the background, but with the -L and -d flags).

If you are using DVD cartridges with a DVD-RAM drive, use smitty mkdvd with UDF format selected, because in that case the disk space required for the temporary filesystems is much less than with ISO9660 format.
Unfortunately, you need extra filesystem space to create CDs and ISO9660 DVDs. Unlike UDF DVDs, where files can be written directly to the media, an ISO9660 CD/DVD image must be created in the filesystem first, before being copied to the media. The easiest way to work around this is to let mkcd create the temporary filesystems it needs; it will remove them after it has finished using them. It will also exclude them from the backup, so you won't end up with mkcd's temporary filesystems in your mksysb image. However, in many situations you don't even have space for these temporary filesystems on disk, and under those circumstances it is better to use UDF format rather than ISO9660 format.
The temporary filesystems created by the mkcd command with ISO9660 (the so-called Rock Ridge format) are /mkcd/mksysb_image, /mkcd/cd_fs and /mkcd/cd_images. With UDF format, only the /mkcd/mksysb_image filesystem is created, thereby taking less space on disk.
Please keep in mind that when you use the smitty mkcd option to create a backup image on CD-R, the total size of these temporary filesystems will not exceed the size of an individual CD cartridge (1.3 GB maximum). On the other hand, if you use DVD media, this limit may approach 8 GB (the maximum size possible for a DVD cartridge).

In conclusion: if you are using CD-R cartridges, use smitty mkcd (or the mkcd -d command). If you are using DVD-RAM cartridges, you can either use mkcd -d with -L for large-size support (essential for DVD cartridges), which gives the CD image format (ISO9660), or simply use mkcd -U for UDF format.
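As a quick reference, the command-line equivalents discussed above can be sketched as follows (treat this as a summary, not verbatim output; /dev/cd0 is an assumed device name):

```
/home/root> mkcd -d /dev/cd0        # Rock Ridge (ISO9660) CD-R image, as in smitty mkcd
/home/root> mkcd -d /dev/cd0 -L     # ISO9660 with large-size support, for DVD media
/home/root> mkcd -U -d /dev/cd0     # UDF format, written directly to DVD-RAM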


Copying files on UDF filesystems on DVDs

In some circumstances, system administrators may want to use DVD-RAM cartridges just for backing up files rather than a whole operating system or volume group. For this task, you can simply use the copy command; however, before that, you have to create a UDF filesystem on the DVD cartridge. Below is a simple shell script which I use in my own environment to automate both tasks. It moves application log files older than a specified number of days from a source filesystem to a temporary target directory (you can also create a dedicated filesystem for this purpose), and from this target filesystem the log files are then copied to the DVD cartridge using the UDF filesystem format. You can, however, modify it to move files directly from disk storage to DVD without a temporary staging filesystem. The script calculates the total size of the log files to be moved from the source to the target filesystem (based on your selection of the number of days) and then gives you the choice to either proceed or simply exit without doing anything.
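Before the full script, here is the size calculation it relies on, shown in isolation so the awk arithmetic is clear (the listing lines are invented for illustration):

```shell
# Stand-alone sketch of the size arithmetic used by the script below:
# sum field 5 (size in bytes) of an ls -l style listing and convert
# the total to whole megabytes.
printf '%s\n' \
  '-rw-r--r-- 1 root sys 1048576 Jan 01 10:00 ./a.log' \
  '-rw-r--r-- 1 root sys 2097152 Jan 01 10:00 ./b.log' > filelist
z=$(awk '{sum = sum + $5} END {printf "%d\n", sum/1048576}' filelist)
echo "$z MB required in the target filesystem"
rm -f filelist
```

The script below performs the same sum over the output of its `find ... -exec ls -al` listing.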
-----------------------------------------------------------------------
#!/bin/ksh
#Script Name: applogsba.sh
#Script Purpose: To copy older log files to DVD cartridge on AIX

function chk_mnt
{
df | grep "/mnt" >/dev/null
if [[ $? = 0 ]]
then
echo " DVD RAM is still mounted...Can't proceed..."
echo " Please unmount DVD and then reexecute the script"
exit 1
fi
}

#Main Script startup
# First check for mounting status of DVD UDF filesystem
chk_mnt
echo " Please enter the source directory containing application log files..."
read srcdir
echo " Please now enter target filesystem's name in absolute path format "
read trgdir
cd $srcdir
echo " Now please enter the number of days for archiving files ....."
read numbdays
find . -name "*.log" -ctime +$numbdays -exec ls -al {} \; >filelist
cat filelist
echo
echo
echo " Above files will be moved by this selection "
echo
echo
z=`cat filelist | awk '{sum=sum + $5};END{ printf "%d\n", sum/1048576}'`
echo " Also, you will require $z MB free in $trgdir to copy the specified log files"
echo
echo
echo "\n And these files will use only $z MB on the DVD cartridge, if copied"
echo
echo
echo "please enter y to continue or n to exit from script.........."
read ans
case $ans in
y)
avail=`df -m | grep "$trgdir" | awk ' { printf "%d\n", $3 } '`
if [ $avail -gt $z ]
then
DATEDR="archive-$(date +%H%M%S"-"%d"-"%B"-"%Y)"
if test -d $trgdir/$DATEDR
then
echo "Directory already exists.... now proceeding to copy files only"
else
mkdir $trgdir/$DATEDR
fi
awk ' { print $9 }' filelist > workfile
while read filename
do
echo "Moving $filename to target filesystem"
mv $filename $trgdir/$DATEDR/
done < workfile
else
echo " You don't have enough space in $trgdir filesystem "
echo "\n Please create enough space in target filesystem"
exit 1
fi
echo " Now please insert DVD RAM cartridge and press enter when ready"
echo "\n WARNING..Data on DVD cartridge will be overwritten by proceeding "
read
lsdev -Cc cdrom | grep "Available" 1>/dev/null
if [[ $? = 0 ]]
then
echo "Please wait .... now creating new UDF filesystem on DVD ..."
sleep 1
udfcreate -d /dev/cd0
if [[ $? = 0 ]]
then
mount -v udfs /dev/cd0 /mnt
cd $trgdir/$DATEDR
echo " Now Copying required log files to DVD...."
echo " Please wait ............................."
sleep 2
cp -pR * /mnt/
echo " Files have been copied... please check by using ls command on /mnt"
exit 0
else
echo " Device not ready....check the media or DVD drive"
exit 1
fi
else
echo " DVD is not available on this system.... .Make it available and retry"
exit 1
fi
;;
n) echo "You have selected to exit from script..nothing will happen...."
exit 1
;;
*) echo invalid choice
exit 1
;;
esac

----------------------------------------------------------

Using Sysback for backups to DVD

Sysback, a powerful backup and recovery tool from IBM, provides greater flexibility when using optical media for backups. For example, with Sysback you can take backups at every level, including filesystems, files and directories, volume groups and, of course, mksysb. Sysback also supports both backup formats (ISO9660 as well as UDF) on DVD and CD-R. Another good feature of Sysback is that it allows you to designate a remote server as a CD/DVD server (a server with a DVD/CD device and enough storage for the temporary filesystems created during DVD/CD-RW based backup operations).
To create backups on DVD/CD-R using Sysback, follow this procedure:

Smitty SysbackBackup & Recovery OptionsBackup OptionsCreate a backup to CD/DVDSelect backup format (ISO9660 or UDF) select backup type (which will give you option between full system, volume group, filesystems, files and directories and Logical volumes).

Sysback also gives you the option to write your backup directly to DVD cartridges without creating any temporary work space on disk storage. Additionally, appending backup data onto an existing backup image on DVD is also possible with Sysback.

Summary:
As a matter of fact, every solution has its pros and cons, and the same applies to the use of optical media on pSeries boxes. You can easily use DVDs and CD-Rs for backups on AIX systems; however, you have to keep two drawbacks in mind: first, the relatively slow speed of backup/restore operations; second, the size limitation on backup images. On the other hand, these drawbacks are offset by the cost effectiveness of a backup solution based on DVD and CD-R media. The choice, of course, is yours!

Saturday 13 June 2009

Performing automated DLPAR operations

One of my closest friends recently asked how he can perform DLPAR operations automatically, without logging into the HMC itself.

In fact, it is not such a difficult task. In the following sections you will find a step-by-step guide to increasing or decreasing memory on an automated basis; of course, the same technique can be used for increasing/decreasing CPU resources as well.

A) Host Name resolution setup:

First of all, I identified one LPAR on P570-2 (aqbtest) as the management LPAR for this whole solution. I established name resolution on this LPAR so that all other LPARs on the P570-1, P570-2 and P570-3 servers are pingable by name and IP address from this management LPAR. The HMC should also be resolvable by hostname from this management LPAR.
One important thing to consider while implementing this solution is that LPAR names (as they appear on the HMC interface) should also be resolvable from the management LPAR. In most cases, hostnames and LPAR names are the same; if they are different, you can still use the /etc/hosts file or DNS to resolve the LPAR names from the management LPAR. This requirement arises from the shell scripting done for this solution and is explained further in the shell scripting section below.
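For illustration only (all addresses below are invented), the /etc/hosts file on the management LPAR might contain entries like:

```
172.16.1.10   HMC1       # the HMC, resolvable by hostname
172.16.1.21   aqbtest    # management LPAR on P570-2
172.16.1.22   aqbdb      # LPAR on P570-1
```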

B) SSH access setup:

The second step is establishing SSH access from this management LPAR to every other LPAR, as well as to the HMC.
SSH access to the HMC is established in a slightly different way compared to SSH access to the LPARs. I used OpenSSH for AIX from the Bull website (freeware.openssh.rte 3.8.1.0) and installed it along with the OpenSSL library (openssl 0.9.6.7). I created a user named hscadmin on the aqbtest LPAR and also on the HMC (HMC1). I assigned the "Managed system profile" to the hscadmin user on the HMC, and I also allowed remote command execution (so that the HMC accepts remote SSH connections).
On the AIX LPAR aqbtest, I generated an RSA key pair with the following commands:
/home/root> su - hscadmin
/home/hscadmin> ssh-keygen -t rsa ( accept default values with a blank passphrase )
/home/hscadmin> export hscadminkey=`cat .ssh/id_rsa.pub`
/home/hscadmin> ssh hscadmin@HMC1 "mkauthkeys -a \"$hscadminkey\""
The above command copies the public key from the AIX LPAR aqbtest to HMC1. Once copied, you can also log in to the HMC directly as hscadmin using ssh and verify that the key has been copied successfully by executing "cat .ssh/authorized_keys2".
You should now be able to log in to the HMC from the AIX management LPAR without any password prompt. You can verify this by executing
/home/hscadmin> ssh HMC1 lsusers

which will show all users present on the HMC.

If you face any problem logging in to the HMC using ssh, you can always empty the authorized_keys file and then try the above procedure again. To empty this file, you can run the following command sequence on the AIX management LPAR:
/home/hscadmin> touch /tmp/mykeyfile ( an empty file )
/home/hscadmin> scp /tmp/mykeyfile hscadmin@HMC1:.ssh/authorized_keys2

Now the same management LPAR should also be able to execute commands remotely on all other LPARs without any password prompt. For this, I again decided to use SSH, this time with DSA authentication, so that the management LPAR can log in and execute commands on all LPARs remotely without any password prompt.

I created the hscadmin user (which may be an ordinary user) on another LPAR on the P570-1 server (named aqbdb) and installed OpenSSH on that LPAR as well. I then generated a DSA key pair on the management LPAR aqbtest:

/home/hscadmin> ssh-keygen -t dsa ( DSA keys are fixed at 1024 bits )

/home/hscadmin> scp id_dsa.pub hscadmin@aqbdb:/home/hscadmin/.ssh/dsa_aqbtest.pub

On the aqbdb LPAR:

/home/hscadmin> touch .ssh/authorized_keys
/home/hscadmin> cat .ssh/dsa_aqbtest.pub >> .ssh/authorized_keys

Now you should be able to log in from the management LPAR aqbtest to the LPAR aqbdb as the hscadmin user, using ssh, without any password prompt.


C) Shell Scripts Creation for Resources Movement


Now that you can log in to the HMC from your AIX server without any password prompt, the next step is to create the shell scripts that do the resource allocation/reallocation.

Assume that you want to increase memory on the aqbdb LPAR from 30 GB to 40 GB before running a batch job or backup process, and then revert it back to 30 GB after the backup process finishes.

Ideally, this could be done from the backup shell script itself, by calling a memory increase script (memincr.sh) before the backup and a memory decrease script (memdecr.sh) after it, as follows:
-----------------------------------------------------------------------------
#!/bin/ksh

su - hscadmin -c "/home/hscadmin/memincr.sh" # increasing memory

(Backup process commands)

su - hscadmin -c "/home/hscadmin/memdecr.sh" # decreasing memory
--------------------------------------------------------------------------------


The memincr.sh script is simply as follows (it deallocates memory from the test partition and allocates it to the production partition):


#!/bin/ksh

ssh hscroot@hmc chhwres -m p550_itso1 -o r -p aqbtest -r mem -q 6144 -w 15 # release 6 GB from the aqbtest partition

ssh hscroot@hmc chhwres -m p550_itso1 -o a -p aqbprod -r mem -q 6144 -w 15 # add 6 GB to the aqbprod partition

------------------------------------------------------------------------

A similar memory-decreasing script can easily be written to revert the memory distribution back to its original state.
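As a sketch (it cannot run outside this HMC environment), memdecr.sh simply reverses the two chhwres operations from memincr.sh above, using the same placeholder HMC host, managed system and partition names:

```
#!/bin/ksh
# memdecr.sh -- revert the memory distribution to its original state

ssh hscroot@hmc chhwres -m p550_itso1 -o r -p aqbprod -r mem -q 6144 -w 15 # release 6 GB from aqbprod
ssh hscroot@hmc chhwres -m p550_itso1 -o a -p aqbtest -r mem -q 6144 -w 15 # return 6 GB to aqbtest
```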

Saturday 6 June 2009

Using Port Knocking for Database security Implementations

In the field of IT systems security, the concept of "port knocking" is relatively new, but with the passage of time it is becoming popular among system administrators. According to Wikipedia, port knocking is a method of externally opening ports on a firewall by generating connection attempts on a set of prespecified closed ports. Once a correct sequence of connection attempts is received, the firewall rules are dynamically modified to allow the host which sent the connection attempts to connect over specified port(s). The primary purpose of port knocking is to prevent an attacker from scanning a system for potentially exploitable services with a port scan, because until the attacker sends the correct knock sequence, the protected ports appear closed.

More specifically, port knocking works on the concept that a user wishing to attach to a network service must initiate a predetermined sequence of port connections or send a unique string of bytes before the remote client can connect to the eventual service.

For example, suppose that a remote client wants to connect to an FTP server. The administrator configures the port-knocking requirements ahead of time, requiring that connecting remote clients first connect to ports 2000, 4000 and 7107 before connecting to the final destination port, 21, on the FTP server. The administrator tells all legitimate clients the correct "combination" of knocks for the port-knocking daemon running on the FTP server; when they want to connect to the FTP service, they simply send these knocks to the server and then start using the service. The question arises: what is the advantage of the additional step of sending knocks before connecting to the FTP service? The answer is simple: the FTP service is not always reachable on the server; it is opened once the correct port knocks are sent and closed once the daemon receives another predefined sequence of knocks. This possible backdoor into a business-critical server is therefore only open for a very short time, exactly when it is required by business needs, and is closed as soon as possible, thereby reducing the chance of malicious attacks.

In this article, I will cover the implementation of port knocking on RHEL using a famous open source port-knocking tool and, most importantly, will try to extend the idea of port knocking beyond simple firewall changes to some more complex system administration tasks.


Port Knocking – a basic Overview

Now let's review the basic functionality of the port-knocking mechanism. In such implementations, knockd is a port-knocking daemon which runs silently on a server, passively listening to network traffic. Once it sees a port sequence for which it has an action configured, it runs that action. So when implementing port knocking, we usually start by installing the port-knocker daemon, which then runs in the background or foreground. We then configure some port sequences (TCP, UDP, or both) and the appropriate action for each sequence in the daemon's configuration. Once the daemon senses a specified sequence, it executes the action (which, in most scenarios, is a command that modifies the existing firewall rules).
This simple, basic port-knocking implementation has also faced some criticism. In the view of some IT security professionals, the use of a predefined, fixed sequence of knocks itself presents a security risk. To overcome this, many port-knocking implementations have been modified slightly: the daemon generates a random sequence of knocks, and clients then use that sequence to open the door to the business-critical server.
Note that the port-knocking mechanism should always be complemented by your native security techniques, so that even if a hacker manages to capture the knock sequence, he is still challenged by a password prompt, etc., before connecting to the service.
The biggest advantage of all is that port knocking is platform-, service- and application-independent: any OS with the correct client and server software can take advantage of its protection. Although port knocking is mainly a Linux/Unix implementation, there are Windows tools that can do the same thing.

There is a valuable list of port-knocking implementations available at http://www.portknocking.org/view/implementations.
You can choose the tool of your choice from this website. I selected knockd, which is considered one of the most famous and robust implementations of the port-knocking mechanism for Linux and Unix.


Port Knocking and database security



Now let's proceed to possible extensions of the port-knocking mechanism. In my scenario, a business-critical MySQL-based application running on a RHEL enterprise server sometimes requires remote connections from the DBA for basic database maintenance activities. Due to corporate security requirements, I could not allow such remote database connections all the time and from every possible IP address. As a result, I decided to explore the port-knocking mechanism to find out whether it could help me achieve my objective.

First of all, let's start with the Linux firewall tool (iptables) itself. The iptables command with the -A parameter appends a filtering rule to the end of the existing chain, while the -I parameter inserts the rule at a specific position within the chain. Note that with the -I parameter you have to give a rule number (the rule with rule number 1 has priority over rule number 2, and so on).
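To check where a rule actually sits in the chain after -A or -I, you can list the rules with their positions (a standard iptables option):

```
#/home/root>iptables -L INPUT -n --line-numbers
```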
Now, to secure MySQL connections to my database server (172.16.2.183), I first blocked network traffic to the server's MySQL port (default 3306) coming from everywhere. For this purpose, I executed the following command:

#/home/root>iptables -A INPUT -p tcp -s 0/0 -d 172.16.2.183 --dport 3306 -j REJECT

Then I saved the rules permanently (on RHEL, this writes them to /etc/sysconfig/iptables):

#/home/root>service iptables save

The next step is the installation of the knockd server software on the RHEL box. I downloaded and installed the knockd rpm (knock-0.4-1.2.el4.rf).
I then customized the /etc/knockd.conf file as follows:

-------------------------------------------------------------------------------------------------
[options]
logfile=/var/log/knockd.log
[DB2clientopen]
sequence = 7050,8050,9050
seq_timeout = 10
tcpflags = syn
command = /sbin/iptables -I INPUT 1 -p tcp -s 192.168.2.201 --sport 1024:65535 -d 172.16.2.183 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT

[DB2clientclose]
sequence = 9050,8050,7000
seq_timeout = 10
tcpflags = syn
command = /sbin/iptables -D INPUT 1

-------------------------------------------------------------------------------------------------


As is obvious from the above knockd.conf file, there are two actions which the knockd daemon will execute, depending on the knock sequence it receives.
First, if it receives the knock sequence 7050, 8050, 9050, knockd inserts an iptables rule with rule number 1 into the INPUT chain, so that the MySQL database port is opened to the database administrator's PC (192.168.2.201) only. On the other hand, if it receives the knock sequence 9050, 8050, 7000, it simply deletes the rule with rule number 1, so that remote database access is closed down once again.
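One caveat worth noting (my own observation, not part of the original setup): "iptables -D INPUT 1" deletes whatever rule is currently first in the chain, which may not be the knock-inserted rule if something else was added in the meantime. iptables also accepts deletion by exact rule specification, so a more robust close action would be:

```
command = /sbin/iptables -D INPUT -p tcp -s 192.168.2.201 --sport 1024:65535 -d 172.16.2.183 --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
```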

I resolved my DBA PC's IP address to the hostname 'dbawin' using the /etc/hosts file and created a test database 'test1'. I then created a user "test" with a password and granted privileges to it as follows:
#/home/root> mysql -u root test1
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 404 to server version: 5.0.21-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> create user test;
mysql> grant all privileges on *.* to 'test'@'dbawin' identified by 'polanipass' with grant option;


I then restarted the knockd daemon in the background using the following command. Note that, by default, knockd listens on eth0.

#/home/root> /usr/sbin/knockd -d

I then downloaded the Windows-based cygwin knock client and, from the DOS prompt of my DBA PC, knocked the knockd daemon with the open sequence:

C:\KNOCK\KNOCK\WINDOWS>knock.exe 172.16.2.183 7050 8050 9050



As a result of this knocking, the knockd daemon executes the iptables command mentioned in the [DB2clientopen] section of knockd.conf and adds the rule to the INPUT chain, allowing the DBA PC to connect to the database running on the server.
Now you can test with a Windows-based MySQL client (such as SQLyog), which will connect to the MySQL server easily.






Now if you knock the database server with the other knock sequence (9050, 8050, 7000), this remote connection is disallowed again. This time you will get an "Access error" dialogue with the same client, thereby confirming the proper functioning of the knockd daemon.
It is obvious that you can easily use knockd to control remote connections to a MySQL database (and, in general, to any database). It is totally independent of whether the source of the remote connections is a DBA PC, an application server or a web server. A more practical use of this controlled access may be time-based access, where a corporation wants to allow its application servers to access the backend database only until the end of the business day. For this purpose, execution of the knock client with the proper knock sequence can be scheduled from a workstation as a batch script.
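On a Linux client, the same time-based scheme could be scheduled with cron; as a sketch (the knock client path and the business hours here are assumptions, and the sequences match the knockd.conf shown earlier):

```
# /etc/crontab fragment: open the MySQL port at 08:00, close it at 18:00, Mon-Fri
0 8  * * 1-5 root /usr/bin/knock 172.16.2.183 7050 8050 9050
0 18 * * 1-5 root /usr/bin/knock 172.16.2.183 9050 8050 7000
```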


Port Knocking and system administration tasks


Using port knocking to perform remote system administration tasks is a great idea, but I had never seen an example of it. So I decided to explore the strength of the port-knocking mechanism for other system administration tasks, beyond just changing firewall rules.
I modified the /etc/knockd.conf file to perform a system restart on one knock sequence and to start a backup to the tape drive on another. I restarted the knockd daemon to make these changes effective and then tested the actions on these knock sequences. Everything went well: the system restarted on the first knock sequence, and the backup started on the second.
--------------------------------------------------------------------------------------------------------
[options]
logfile=/var/log/knockd.log
[systemreboot]
sequence = 7050,8050,9050
seq_timeout = 10
tcpflags = syn
command = /sbin/reboot

[systembackup]
sequence = 9050,8050,7000
seq_timeout = 10
tcpflags = syn
command = /bin/tar -cf /dev/rmt0 /home/root/

------------------------------------------------------------------------------------------------------
In this way, you can let operators use these port-knock sequences to perform these basic system administration tasks (and many more) without having root user privileges.


Summary:


Port knocking is a very useful tool for systems security. Because of its usefulness and robustness, the number of implementations and users is growing rapidly. If you can open a door into a closed black box for a short time to perform some system administration tasks, even without logging in to the system, that can be ideal for most secured environments. However, it is always a good idea to protect your port knocks by changing them frequently (or you can use a random seed generator to create random port knocks). Knock the box and get the task done, whichever one you want.

About the author: Khurram Shiraz is a technical consultant at GBM, Kuwait. In his eight years of IT experience, he has worked mainly with IBM technologies and products, especially AIX, HACMP clustering, Tivoli and IBM SAN/NAS storage. He has also worked with the IBM Integrated Technology Services group. His areas of expertise include the design and implementation of high-availability, security and DR solutions based on AIX, Linux and Windows infrastructure. He can be reached at kshiraz12@hotmail.com

Tuesday 2 June 2009

Configuring AIX Audit Subsystems to enhance security

With the dynamic nature of business today and its growing dependence on information technology, it is becoming more important day by day to improve the security measures that guarantee the confidentiality, integrity and authenticity of data. This has introduced the need for more and more security tools and their implementation in corporate environments, so that proper security can be ensured.

AIX, like all other industry-leading operating systems, has a built-in auditing feature. This auditing subsystem is part of the base operating system and gives system administrators the means to record information pertinent to system security. This information is essential for system administrators to prevent potential violations of the system security policy.
Any occurrence on an AIX server relevant to system security is considered an auditable event. The set of auditable events on the system defines which occurrences can actually be audited and the granularity of the auditing provided.
Hence we can say that the main concept of auditing is to detect any occurrence of an auditable event, record information pertaining to the set of auditable events, and process this information to examine audit trails and generate periodic reports.

An auditing subsystem should also provide a way to monitor the audit trail in real time, generating alerts about immediate security threats. The AIX audit subsystem is no exception: it can record audit events for long-term analysis by system and security administrators, and it can also provide real-time auditing.

In this article, I will explore the AIX auditing subsystem in detail, covering its different configuration aspects. I will also walk my readers through the basic configuration steps, so that they can configure this freely available feature of AIX 5L to audit different system events on their AIX servers.

1. AIX Auditing Subsystem Components & Structure


As described earlier, an audit is an examination of a group, an individual account or an activity, and the auditing subsystem provides a means of tracing and recording what is happening on your system.

The subsystem also collects security-related information and alerts system and security administrators about potential and actual violations of the system security policy. For example, a system administrator would like to know, either immediately or at the end of the day, that some internal or external intruder has tried to change critical files on the system (such as database log files, system log files like smit.log, or the root user's shell history file). This is where AIX auditing helps administrators.

Let's start with a description of the AIX auditing components. The main component is the auditing configuration file (/etc/security/audit/config). Whenever you start the auditing daemon on AIX, this configuration file is read. It contains information such as:
• Mode
Mode represents the data collection method used by the auditing daemon. There are two modes of AIX auditing, discussed in detail in the next section of this article:
– Bin mode
– Stream mode
• Events
Events are system-defined activities. Here are two examples:
– USER_SU gives you information about whether a user tries to su to another user; this event is associated with the general class by default.
– CRON_Start gives you information about whether a cron job has started.

• Customized events (if any)
The system administrator can also define customized events relating to system kernel activity and to critical users' activities. For example, in this article I have defined some customized events related to smit activities performed by the root user.
I named one of them "smitlogs_WRITE". This event tells you whether any user, even the root user, tries to edit the smit.log file or update it using the smitty tool.
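A customized event like this is declared by adding a stanza for the watched file to the /etc/security/audit/objects file, as section 4.0 shows in full; for smit.log it looks like:

```
/smit.log:
        r = "smitlogs_READ"
        w = "smitlogs_WRITE"
```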

• Classes
Classes define groups of events. A system administrator can group logically similar events into a class. For example, both the USER_SU and PASSWORD_Change events belong to the "general" class of events. Class names are arbitrary, and you can use any class name for a given group of events.
• Objects
Audit objects are system configuration files as well as kernel-related objects present in the ODM (for example, /etc/sendmail.cf and /etc/objrepos/SRCsubsys are both audit objects). Read, write and execute activities on these audit objects can easily be audited through AIX auditing.
• Users
It is the responsibility of the system administrator to identify the users who are to be audited for specific groups of events (the so-called classes). You can audit one or more classes per user. For instance, the /etc/security/audit/config file may contain the following stanza:
user:
jack = general,cron,tcpip


2.0 Data Collection Modes

One of the most important configuration aspects of AIX auditing is the data collection mode in which it operates. The data collection mode describes the way data is collected by the AIX auditing subsystem for analysis by system and security administrators.

There are two basic data collection modes: binary (bin) mode and stream mode.

2.1 Binary Mode Configuration


When the AIX auditing daemon operates in binary mode, audit events are recorded in two bin files alternately. Recording in these bin files is temporary; the data is finally appended to a single trail file. Note that because the data format is binary, operating system commands such as vi, pg or more cannot be used to read the data directly from these files. Instead, the auditcat command is used to read them.

In essence, binary mode is used in scenarios where long-term audit data recording and analysis is required (for example, by IT auditors or IT security teams).



A schematic data flow in binary mode configuration of AIX audit subsystem is shown as below:


The alternating bin mechanism (/auditfs/bin1 and /auditfs/bin2) ensures that the audit subsystem always has somewhere to write while the audit records are being processed. When the audit subsystem switches to the other bin, it appends the first bin's contents to the /auditfs/trail file.
By the time it is the first bin's turn again, that bin is available once more. This mechanism decouples the storage and analysis of the data from its generation.
Typically, the auditcat program reads the data from the bin that the kernel is not currently writing to. To make sure the system never runs out of space for the audit trail (the output of the auditcat program), the freespace parameter can be specified in the /etc/security/audit/config file.

Let's start with the basic configuration steps for AIX auditing in binary mode. The configuration files of the AIX auditing subsystem are located in the /etc/security/audit directory. The main configuration file is "config", which controls the auditing subsystem's basic behavior.

To enable binary mode, you just have to put "binmode = on" in the config file.

Note that either mode, or both, can be "on"; that is, binary and stream mode can be active at the same time.
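For example, a start stanza with both collection modes active at the same time would read:

```
start:
        binmode = on
        streammode = on
```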

It is always a good idea to create a separate filesystem to hold the bin files and the trail. This eliminates any chance of filling up the / filesystem or any other filesystem (these files are usually large because of the enormous number of records arriving together).

I therefore created a filesystem, /auditfs, and then configured the auditing subsystem configuration file as shown below:

/home/root> more /etc/security/audit/config
------------------------------------------------------------------------------------------------------------
start:
binmode = on
streammode=off
bin:
trail = /auditfs/trail
bin1 = /auditfs/bin1
bin2 = /auditfs/bin2
binsize = 10240
cmds = /etc/security/audit/bincmds
freespace = 65536
stream:
cmds = /etc/security/audit/streamcmds
classes:

general =USER_SU,PASSWORD_Change

objects=S_ENVIRON_WRITE,S_GROUP_WRITE,S_LIMITS_WRITE,S_LOGIN_WRITE,S_PASSWD_READ,S_PASSWD_WRITE,S_USER_WRITE,AUD_CONFIG_WRITE
users:
root = general

-------------------------------------------------------------------------------------------------

As this relatively simple auditing daemon configuration shows, the bin1 and bin2 parameters give the locations of the binary files, while the trail parameter specifies the location of the audit trail.
The other two important parameters are binsize and cmds. The binsize parameter specifies the size, in bytes, of each temporary bin file before the daemon switches to the other one, while cmds specifies the location of the backend command file for bin mode. Note that this backend (bincmds) is nothing more than an invocation of the auditcat command:

/usr/sbin/auditcat -p -o $trail $bin


Events to be audited are grouped in the "classes" stanza of the config file. As shown above, the class named "general" comprises events related to user security (such as switching user and changing a password), and the root user is then configured to be audited against this class of events. Note that you can create classes of your own, with arbitrary names, and use them to group predefined as well as customized events.

Another important stanza of this configuration file is "objects". In AIX auditing, audit objects are system configuration files as well as kernel-related objects present in the ODM (for instance, /etc/sendmail.cf and /etc/objrepos/SRCsubsys are both audit objects). Read, write and execute operations on a file or kernel object can be audited through audit objects. For relatively simple configurations you can specify objects in the config file under the "objects" stanza, but for more complex setups you can instead list all the objects you need in the "objects" file, also present in the /etc/security/audit directory.






2.2 Stream Mode Configuration




Stream mode writes the audit records to a circular buffer in memory. The root user, or a member of the audit group, can continuously view the records in the stream.out file using the vi, more or pg commands.



Structurally, stream mode writes the audit records to a circular buffer that is read through the /dev/audit device file. When the kernel reaches the end of the buffer, it simply wraps to the beginning.
In stream mode, the auditstream command is used to read the /dev/audit device file. The auditselect command can then be used to keep only those events in which system administrators are interested, via its -e expression flag.

Obviously, the stream mode of AIX auditing is more likely to be used in environments where system and security administrators are interested in monitoring audit events in real time and generating traps for the more crucial security-related events.

Processes read this information through the /dev/audit pseudo-device. When a process opens the device, a channel is created for that process; optionally, the events to be read on the channel can be restricted to a list of audit classes.



As noted, stream mode lets system and security administrators monitor audit events in real time. It is therefore useful in environments where auditors want continuous monitoring so that, in case of a potential security breach or intruder attack, system and security administrators are notified immediately. Another use is to create a trail that is written immediately, preventing the tampering of the audit trail that is possible when the trail is stored on writable media.


Yet another method to use the STREAM mode is to write the audit stream into a program that stores the audit information on a remote system, which allows central near-time processing, while at the same time protecting the audit information from tampering at the originating host.


Basic stream mode configuration is achieved by modifying the /etc/security/audit/config file as shown below.




/home/root> more /etc/security/audit/config
-----------------------------------------------------------------------------------------------------------
start:
binmode = off
streammode=on
bin:
trail = /auditfs/trail
bin1 = /auditfs/bin1
bin2 = /auditfs/bin2
binsize = 10240
cmds = /etc/security/audit/bincmds
freespace = 65536
stream:
cmds = /etc/security/audit/streamcmds
classes:
general = USER_SU,PASSWORD_Change

objects=S_ENVIRON_WRITE,S_GROUP_WRITE,S_LIMITS_WRITE,S_LOGIN_WRITE,S_PASSWD_READ,S_PASSWD_WRITE,S_USER_WRITE,AUD_CONFIG_WRITE
users:
root = general

-------------------------------------------------------------------------------------------------



As shown above, hardly anything changes when configuring stream mode compared with binary mode, except that we switch stream mode "on" and binary mode "off".

Note that the streamcmds file is nothing but a combination of the auditstream and auditpr commands:
/home/root> more /etc/security/audit/streamcmds

The output would be as follows:

/usr/sbin/auditstream | auditpr > /audit/stream.out &


As a result, whenever you start the stream mode of the AIX audit subsystem, by default you start getting audit records in the stream.out file. You can use vi, cat or tail to view the real-time audit records generated by the auditing subsystem.









# tail -f /audit/stream.out

event login status time command
--------------- -------- -------- --------- ---------
S_NOTAUTH_READ root OK Thu May 24 14:07:05 2007 cat
S_NOTAUTH_READ root OK Thu May 24 14:07:05 2007 cat
FILE_Unlink root OK Thu May 24 14:07:09 2007 vi
S_NOTAUTH_READ root OK Thu May 24 14:07:09 2007 vi
S_NOTAUTH_READ root OK Thu May 24 14:07:09 2007 vi
S_NOTAUTH_READ root OK Thu May 24 14:07:09 2007 vi





3.0 Customized auditing output presentations



One important aspect of the AIX auditing subsystem is that, just like the auditing subsystems of other operating systems such as Windows, Linux or Sun Solaris, it generates a lot of audit records. The volume of output data is therefore huge.

The best approach to this problem is to configure the auditing subsystem properly, so that not all events and users are audited. It is always a good idea to identify the critical system files, events and commands to be monitored and to restrict this monitoring to a few users only.

Finally, you can also restrict the output to certain specific auditable events. For example, in the smit.log scenario in this article, where I configure auditing for "smitlogs_READ" and "smitlogs_WRITE", other corresponding audit events such as "FILE_Unlink" also appear in the audit log. There is provision for filtering out these undesirable events so that the information gathered through the AIX auditing subsystem is more specific.
Several commands can be combined for this purpose. The main one is "auditpr", which is used for formatting and displaying audit records.

For example, after you set up binary or stream mode and start the audit subsystem (using the "audit start" command), you can use the following "auditpr" command to display all audit records:

/home/root> auditpr -hhelpPRtTc -v | more



In stream mode, you can also use a combination of commands to display the audit records of selected events only. The "auditstream" command streams out the incoming data from the audit subsystem; this stream can then be piped to the "auditselect" command to keep only specific audit events and discard the rest.

For instance, continuing the example of auditing the "smitlogs_WRITE" event: if a system administrator wants to collect the data for this specific event in a file, "criticalwrites.out", the following combination of commands can be used:

/home/root> /usr/sbin/auditstream | /usr/sbin/auditselect -e "event == smitlogs_WRITE" | auditpr -hhelpPRtTc -v >> /home/root/criticalwrites.out

Similarly, if the system administrator wants to watch this critical event on the system console, he can modify the command as follows:

/home/root> /usr/sbin/auditstream | /usr/sbin/auditselect -e "event == smitlogs_WRITE" | auditpr -hhelpPRtTc -v > /dev/console &


4.0 Critical system events auditing configuration


Now we proceed to a relatively complex audit configuration to monitor some critical activities on AIX servers. For this purpose we will create customized audit events, with our own defined objects, to fulfill our requirements.



Event-A Monitoring changes in a specific file



Certain important files are always under root ownership, and a system administrator with the root password can easily tamper with them. Traditionally, the system administrator, being the superuser of the UNIX operating system, has full privileges: he can execute any command through the command line or through smitty menus and then delete the entries from the corresponding log files (the root user's shell history file and the /smit.log file). This has remained a point of concern for IT auditors, so here we configure auditing of any attempt to modify these critical files.
Let's take the /smit.log file as an object for AIX 5L auditing. The same configuration steps can be used for monitoring the root user's shell history file.

First of all, I configured the /smit.log file as an audit object to be monitored by the AIX auditing subsystem. For this purpose, I added the following stanza to the /etc/security/audit/objects file:


/smit.log:
r = "smitlogs_READ"
w = "smitlogs_WRITE"



Next, I added the following entries to the /etc/security/audit/config file:

classes:
readwrite = smitlogs_READ,smitlogs_WRITE




This adds an event class named readwrite. Finally, we have to assign the users of interest (those we want to audit for read/write attempts on the smit.log file). This is again done in the same file (/etc/security/audit/config):





users:
root = general, readwrite
jack = readwrite
thomas = readwrite
john = readwrite



The data collection mode in which we monitor this specific event is a matter of choice. Let's assume we want to monitor it in stream mode: we simply switch binary mode "off" in the config file (as described above), switch stream mode "on", and start the audit subsystem using the "audit start" command.
Because data collection is enabled in stream mode, it can be started by running the following command:
# /usr/sbin/auditstream | auditpr -hhelpPRtTc -v

Optionally, as described in section 3.0, you can add the auditselect command to select and display specific events only:
# /usr/sbin/auditstream | /usr/sbin/auditselect -e "event == smitlogs_WRITE" | auditpr -hhelpPRtTc -v


Or redirect it to a file for later review by auditors:

# /usr/sbin/auditstream | /usr/sbin/auditselect -e "event == smitlogs_WRITE" | auditpr -hhelpPRtTc -v >> /auditfs/criticalevents.out


The auditing results are written to the /auditfs/criticalevents.out file, which can be monitored in real time to keep track of the read and write operations.
If you use the first option, typical output looks like this:

Listing 1. Output file—Data collection in STREAM mode

# tail -f /auditfs/criticalevents.out

event login status time command
--------------- -------- -------- --------- ---------
smitlogs_READ root OK Thu May 24 14:07:05 2007 cat
smitlogs_READ root OK Thu May 24 14:07:05 2007 cat
smitlogs_READ root OK Thu May 24 14:07:09 2007 vi
smitlogs_READ root OK Thu May 24 14:07:09 2007 vi
smitlogs_READ root OK Thu May 24 14:07:09 2007 vi
smitlogs_WRITE root OK Thu May 24 14:07:13 2007 vi
FILE_Unlink root OK Thu May 24 14:07:13 2007 vi
FILE_Unlink root OK Thu May 24 14:07:20 2007 vi


The interpretation of the output file is relatively simple. It shows that the root user first opened the smit.log file with the cat command and then with the vi command, until the first write was done by root on Thursday, May 24 at 14:07:13. This is followed by several instances of the operating system event "FILE_Unlink", which updates file-related information such as inodes and file sizes.

If data collection in bin mode is enabled, you can process the collected trail by executing the following command:
# /usr/sbin/auditpr -v < /auditfs/trail > /auditfs/audit.out

This command writes the auditing results to the /auditfs/audit.out file, which can then be reviewed. A sample is shown below:


Listing 2. Output file—Data collection in BIN mode

# vi /auditfs/audit.out
"/auditfs/audit.out" 30 lines, 2012 characters
event login status time command
-------- -------- ----------- ------------- --------------
smitlogs_READ root OK Thu May 24 15:07:27 2007 cat

smitlogs_READ root OK Thu May 24 15:07:27 2007 cat

FILE_Unlink root OK Thu May 24 15:07:32 2007 vi
filename /var/tmp/Ex21778
smitlogs_WRITE root OK Thu May 24 15:07:37 2007 vi

FILE_Unlink root OK Thu May 24 15:07:37 2007 vi




Event-B Monitoring execution of specific command




Another event that may be of great interest to security administrators and IT auditors is the execution of a specific command on the system (such as rmdev, cfgmgr or even the simple rm command).

If you review /etc/security/audit/events, you will find many commands that generate system-related events. For example, there is a stanza for the rmdev command, which generates three events, namely "DEV_Stop", "DEV_Unconfigure" and "DEV_Remove". This means that when a system administrator executes the rmdev command, all these events are generated.

Now, to audit execution of the rmdev command by the root user, you first have to add a corresponding entry to the objects file. I added the following stanza to /etc/security/audit/objects:


/usr/sbin/rmdev:
x = "DEV_Remove"






Next, I added the following entries to the /etc/security/audit/config file:

classes:
commandexec = DEV_Remove



Finally, I assigned the commandexec class to the user to be watched (here the root user), so that whenever he executes the rmdev command, security administrators are notified:






users:
root = readwrite, commandexec




You can add more commands to the same class. For example, to add the rmlv command to it, I have to add another object as follows:



/usr/sbin/rmlv:
x = "LVM_DeleteLV"



and then add this auditable action to the "commandexec" event class:



classes:
commandexec = DEV_Remove,LVM_DeleteLV

Since the "commandexec" event class has already been assigned to the root user, you can start auditing execution of the rmlv command by shutting down and then restarting the audit daemon.
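On AIX, this bounce of the audit daemon is done with the audit command itself: "audit shutdown" stops the subsystem, "audit start" re-reads the config file and starts it again, and "audit query" confirms the status and the active events afterwards:

```
# audit shutdown
# audit start
# audit query
```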

Finally, monitoring of this specific event can be done with:

/home/root> /usr/sbin/auditstream | /usr/sbin/auditselect -e "event == LVM_DeleteLV" | auditpr -hhelpPRtTc -v


Event-C Monitoring user-related activities

su and password change are very critical events on a UNIX operating system, especially with reference to the root user. These events are therefore monitored by the AIX auditing subsystem by default.

In my scenario, the config file already included these events. The "USER_SU" and "PASSWORD_Change" events are already members of the general class of events and are assigned to the root user. So whenever the root user switches to another user with the su command, or changes his password with the passwd command, it is audited by the AIX auditing subsystem.

To audit any attempt by the root user (or any other user) to change the characteristics of another user, I simply added the following line to the objects file, followed by a change in the config file (both shown below).

Added new entry in the objects file:
/usr/bin/chuser:
x = "USER_Change"

Added new entry in the config file:
general = USER_SU,PASSWORD_Change,DEV_Remove,LVM_DeleteLV,USER_Change






Summary:

Auditing has always been a powerful tool for ensuring system security and integrity, and it has long been a point of interest for IT auditors. Its strength, however, is only realized when system and IT security administrators use it constructively to enhance system security. The AIX auditing subsystem is no exception and can be used to improve corporate-level security. Because it generates a lot of logs, though, and may therefore put a processing load on a production environment, it must be configured properly, for auditing of specific events only.

References:
AIX 5.3 Security Guide, IBM, SC23-4907-03
AIX 5L Auditing and Accounting, IBM Redbook


About Author: Khurram Shiraz is a Technical Consultant at GBM, Kuwait. In his ten years of IT experience, he has worked mainly with IBM technologies and products, especially AIX, HACMP clustering, Tivoli and IBM SAN/NAS storage. He has also worked with the IBM Integrated Technology Services group. His areas of expertise include design and implementation of high availability and DR solutions based on pSeries, Linux and Windows infrastructure. He can be reached at kshiraz12@hotmail.com

Encrypt your AIX backups with OpenSSL

Have you ever met an auditor who asked you about the security of your backups? For most system and database administrators it is an annoying question, but the fact cannot be denied: the security of database- and system-level backups is a major responsibility of the administrator who takes them.
Security for our day-to-day system and database backups can be accomplished in many ways. The first is, of course, the physical security of the backups. In most corporations, tape cartridges are still the main backup medium. These cartridges are usually kept locked in fireproof vaults, and many organizations allow access to them, even by their own staff, only after certain approvals (usually at the IT manager level), with fully defined procedures and policies for this purpose. Most organizations also move these cartridges to their DR site, securing the movement with the help of a secure transportation service provider.
But what if these tape cartridges, or other backup media, are stolen during this movement? There is no doubt that backup cartridges contain very valuable data and could lead to financial losses if they fell into the hands of a criminally minded person. Here comes the role of encryption, which can be applied to your backups to protect your organization's data against unethical hacking. Many commercial backup encryption products are now available at the database level; however, there is still a lack of such software for operating-system-level backups.

In this article, I will cover ways of encrypting operating-system-level backups on AIX, both symmetric and asymmetric, with the help of the open source toolkit OpenSSL. I will walk you through the steps with which you can encrypt your mksysb and other volume group and filesystem level backups.

Types of Encryption

There are two basic types of encryption, symmetric and asymmetric, with symmetric encryption coming in two variants:
1) Symmetric, password-based encryption. This is the simplest form of encryption: the same password is used to encrypt and decrypt the data (or file). This method is useful for encrypting sensitive information for yourself, your family, or a few trusted friends or coworkers.
2) Symmetric, secret-key-based encryption. This is the simplest form of key-based encryption: the same secret key file is used to encrypt and decrypt the data. This is not a very commonly used technique.
3) Asymmetric, public/private-key-based encryption. A public key file is used to encrypt the data, and the corresponding private key file is used to decrypt it. Only you should have access to your private key; you can distribute your public key to anyone who needs to send you data. This is the technique most commonly used in corporations.
Whichever type of encryption we use, we have to keep in mind that asymmetric encryption is ideal for small amounts of data, while symmetric encryption can easily handle large amounts. Hence the size of the data to be encrypted plays a vital role in deciding which type of encryption to use in the overall solution.

Different Encryption Tools for encrypting backups


There are lots of commercial encryption tools available for backup encryption. Most of them are integrated with database-level backups; for example, many of them encrypt Oracle database backups. For operating-system-level backups, however, not many commercial products are available.
Most corporations that want to encrypt their servers' operating-system-level backups (especially on UNIX-based systems) have to rely on the available open source tools and develop a solution around them. OpenSSL and PGP are the two free tools commonly used for this purpose.
When an organization designs a solution for encrypting its day-to-day backups, it must consider two important points.

1. The solution should not have a performance impact on daily backup operations. Encrypting backups should not take a long time, nor should it consume excessive CPU cycles on the server executing the encryption algorithm.
2. The decryption mechanism should be well tested and documented, so that when the need arises to restore data, there are no surprises.
Although PGP could equally be used to build a backup encryption solution on AIX meeting these requirements, in this article I will concentrate on demonstrating a solution using OpenSSL only.

OpenSSL usage for data & backups encryption

OpenSSL is a library that provides cryptographic functionality to various applications. On major Linux distributions and BSD UNIX variants, OpenSSL is provided under open source licenses. It also includes a command line utility that can be used for various cryptographic tasks.
To use OpenSSL on AIX, you can either take it from the AIX Toolbox for Linux Applications (website or CD) or download it from the Bull website.
I opted to get it from the Bull website and installed its RPM without any problem.
Now, the first thing is to get a feel for how OpenSSL works.

Encrypting a text file with OpenSSL is very simple. As the root user, execute:

/home/root> openssl enc -bf-ofb -salt -in sample.txt -out enc.txt

The command prompts for a password before encrypting the data in sample.txt. This is a typical example of symmetric encryption.

To decrypt, use the following command:

/home/root> openssl enc -d -bf-ofb -salt -in enc.txt -out abc1.txt

This command asks for the password that was used during encryption before decrypting.
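To see the whole symmetric round trip end to end, here is a self-contained sketch using standard OpenSSL. All file names and the password are illustrative, and I use AES-256-CBC instead of Blowfish, since Blowfish may be disabled in newer OpenSSL builds:

```shell
# create some sample data (hypothetical file names throughout)
echo "payroll backup test data" > sample_demo.txt

# symmetric encryption; in practice the password is prompted for or read from a file
openssl enc -aes-256-cbc -salt -pass pass:S3cretPw -in sample_demo.txt -out sample_demo.enc

# decryption with the same password
openssl enc -d -aes-256-cbc -salt -pass pass:S3cretPw -in sample_demo.enc -out sample_demo.dec

# the decrypted file should match the original
cmp sample_demo.txt sample_demo.dec
```

Note that the ciphertext (sample_demo.enc) is binary; pipe it through `openssl base64` if you need a printable form.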

Now, if you want to encrypt the same data file with asymmetric encryption, the technique is slightly different. First, you have to generate a private key with the following command:
/home/root> openssl genrsa -des3 -out prvkey.pem 4096

Then you have to derive the public key from this private key:

/home/root> openssl rsa -in prvkey.pem -pubout -out pubkey.pem

Now, to encrypt the data present in the abc.txt file using this already generated key pair, execute:

/home/root> openssl rsautl -encrypt -inkey pubkey.pem -pubin -in abc.txt -out encr.txt

And to decrypt:

/home/root> openssl rsautl -decrypt -inkey prvkey.pem -in encr.txt -out abc1.txt

This asymmetric encryption with a pair of keys works well for small amounts of data. However, as the size of the data grows, the technique no longer works (RSA can only encrypt inputs smaller than the key size). Imagine a 30 GB filesystem of which your management wants an encrypted backup: under these circumstances, you cannot use asymmetric encryption directly. You can, however, combine symmetric with asymmetric encryption to design a very good solution for encrypting your backups.

For this solution, we start by creating a small text file called backup_key containing some string (which may include numbers and characters). This string will be our password. We encrypt this key file with the asymmetric, two-key technique:
/home/root> openssl genrsa -des3 -out prvkey.pem 4096
/home/root> openssl rsa -in prvkey.pem -pubout -out pubkey.pem
/home/root> openssl rsautl -encrypt -inkey pubkey.pem -pubin -in backup_key -out backup_keyencr.txt

Now you have a secret key file which has been encrypted with a strong asymmetric technique.

The next step is to use this key to encrypt the backed-up data (symmetric encryption):

/home/root> tar -cvf - /home/oradata1 | /usr/local/bin/openssl enc -des-cbc -salt -pass file:/home/root/backup_key > /dev/rmt0

And to decrypt this backup data, you have to execute:

/home/root> /usr/local/bin/openssl enc -d -des-cbc -salt -pass file:/home/root/backup_key < /dev/rmt0 | tar -xvf -

This solution works very well in any corporate environment. You can send the tape cartridge containing the encrypted tar backup, along with a floppy containing the encrypted key (backup_keyencr.txt), to your disaster recovery site. A person at your DR site must already hold the RSA private key which was used when encrypting the backup_key file. You therefore have to send this private key, one time, to the person at the DR site so that he can first decrypt backup_key and then use the decrypted backup_key to decrypt the tar backup. Consequently, your backups remain entirely safe while moving from your main site to the DR site. Even if your backups fall into the wrong hands during this movement, nobody can decrypt the key file without the private key, and hence nobody can decrypt the backup data.
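The DR-site side of the procedure is worth rehearsing end to end before a real disaster. The following self-contained sketch plays through the whole scheme, with a local file standing in for /dev/rmt0 so it can run anywhere; all file names are illustrative, the private key is generated without a passphrase purely to keep the rehearsal non-interactive, and AES-256-CBC is used instead of single DES (which is weak and may need the legacy provider on modern OpenSSL):

```shell
# --- Main site: generate keys, encrypt the key file, take the backup ---
openssl genrsa -out prvkey.pem 2048 2>/dev/null   # no -des3: non-interactive rehearsal only
openssl rsa -in prvkey.pem -pubout -out pubkey.pem 2>/dev/null
echo "MyBackupPass123" > backup_key
openssl rsautl -encrypt -inkey pubkey.pem -pubin -in backup_key -out backup_keyencr.txt
mkdir -p data && echo "important data" > data/file1
tar -cf - data | openssl enc -aes-256-cbc -salt -pass file:backup_key > tape.img

# --- DR site: recover the key file, then the backup ---
rm -rf backup_key data          # simulate arriving with only the tape, key file and private key
openssl rsautl -decrypt -inkey prvkey.pem -in backup_keyencr.txt -out backup_key
openssl enc -d -aes-256-cbc -salt -pass file:backup_key < tape.img | tar -xf -
cat data/file1                  # prints: important data
```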


Encrypting your AIX level backups


AIX operating system backups at the volume group, file system and even mksysb level can also be encrypted. However, because these backup utilities send their data directly to tape drives or CD devices without buffering, the data being backed up has to be encrypted in a different way (as compared to tar command backups).

To encrypt volume group level backups, I used a tricky solution. First of all, I created a special pipe device file which operates on a FIFO basis, then started a background process which reads from the pipe, encrypts the stream, and writes it to tape:

/home/root> mknod /tmp/vgbk p

/home/root> cat /tmp/vgbk | /usr/local/bin/openssl enc -des-cbc -salt -pass file:/home/bck_key | /bin/dd of=/dev/rmt0 obs=100b &

/home/root> savevg -f /tmp/vgbk datavg


Similarly, you can apply the same trick to mksysb backups: create a FIFO special pipe device file, then initiate a background process which reads this file and encrypts the incoming data with the symmetric key file. The incoming data is, of course, fed to the pipe device by the mksysb command running in the foreground.
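The FIFO trick itself is portable and can be rehearsed on any system. In the sketch below, a simple writer process stands in for mksysb (on AIX you would point mksysb or savevg at the pipe, and send the encrypted stream to dd of=/dev/rmt0 instead of a file); the pipe, key file and output names are illustrative, and AES-256-CBC replaces single DES for the same reasons as above:

```shell
# Create the FIFO that the backup utility will write to, and a key file.
rm -f /tmp/bkpipe && mkfifo /tmp/bkpipe
echo "MyBackupPass123" > /tmp/bck_key

# Background reader: encrypt whatever arrives on the pipe and store it
# (on AIX, this output would go to dd of=/dev/rmt0 obs=100b).
openssl enc -aes-256-cbc -salt -pass file:/tmp/bck_key \
    < /tmp/bkpipe > /tmp/backup.enc &

# Foreground writer: stands in for mksysb/savevg writing to the pipe.
echo "pretend mksysb image" > /tmp/bkpipe
wait

# Verify the stream decrypts back to the original content.
openssl enc -d -aes-256-cbc -salt -pass file:/tmp/bck_key < /tmp/backup.enc
```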

Summary:
Many techniques can be used to encrypt your AIX level backups, but OpenSSL provides an easy and free way of encrypting your operating system level backups. No matter whether you use OpenSSL, PGP or any commercial software to encrypt your backups, always remember to test your restoration scheme and procedure before the time actually comes to do a real restoration.


Note: This article was published in AIX Update January 2008 edition.
