Sunday, 30 December 2012

How to configure SSL (Secure Sockets Layer) in Apache?

SSL (Secure Sockets Layer) is a protocol used for communicating securely over a network. SSL provides both encryption and authentication. For example, in client-server communication it encrypts the data that the browser (client) sends to the server and that the server sends back, and it also authenticates the server to the client. Authentication here means the client can confirm that the server it is talking to is genuine.

SSL is built on public/private key cryptography. In SSL, certificates are used to prove one party's identity to the other.

What is a certificate and how does it work?

Let's say there are three parties. Party 1 is a client which uses the services of Party 2. Party 2 proves its identity by providing Party 1 a certificate signed by a third party, Party 3, whom both Party 1 and Party 2 trust. A certificate contains some information about its owner along with the owner's public key, and it is signed by a CA (Certificate Authority). A certificate authority is a third party trusted both by the owner of the certificate and by the party relying on the certificate. Most popular browsers ship with information about the well-known CAs, so they trust certificates signed by those CAs. If a certificate is signed by a party the browser does not trust, the browser warns that the certificate owner may not be who it claims to be. In that case your channel is still encrypted, but the other party is not authenticated, so you might be handing your secrets to an unscrupulous person over an encrypted channel, which is as good as having no security at all.

Working:

1) When SSL is configured on the server (Apache), the client (browser) is presented with the server's certificate, signed by a known CA. The client now knows the server's public key.
2) For authentication, the server creates a message and computes its hash. The server encrypts the hash with its private key and sends both the message and the encrypted hash to the client.
3) The client decrypts the encrypted hash using the server's public key, computes the hash of the message it received, and compares the two. If both hashes match, the client is talking to a genuine server.
4) With authentication complete, client and server need a way to exchange data securely. To achieve this, the client picks a symmetric key, encrypts it with the server's public key and sends the encrypted key to the server.
5) The server decrypts the symmetric key. Now both client and server hold the same symmetric key, and they encrypt the data passed over the network using it. Confidentiality is achieved in this step.
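The sign-and-verify exchange in steps 2 and 3 can be sketched with the openssl command-line tool. The key pair, file names and message below are made up for the demo; they only stand in for the server's real keys.

```shell
# Generate a throwaway RSA key pair (stands in for the server's keys).
openssl genrsa -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -pubout -out demo.pub 2>/dev/null

# The "server" signs the SHA-256 hash of a message with its private key.
echo 'hello from the server' > msg.txt
openssl dgst -sha256 -sign demo.key -out msg.sig msg.txt

# The "client" verifies the signature with the public key.
openssl dgst -sha256 -verify demo.pub -signature msg.sig msg.txt
```

The last command reports success only when the hash it computes matches the one recovered with the public key, which is exactly the check the client performs in step 3.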
Let's see how to configure SSL in Apache.
We assume that Apache is installed with the SSL module (mod_ssl) and that OpenSSL is also installed on the machine.
Enable the SSL module so that Apache can listen for HTTPS requests on port 443:

a2enmod ssl

The above command enables the SSL module of Apache. Restart Apache:

/etc/init.d/apache2 restart

Apache is now listening on port 443.
Use the following command to generate a private key and a certificate signing request (CSR):

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

It will ask for some basic information which you have to answer to get the certificate request generated.
Once your private key and certificate request are generated, you will have to get the request signed. You can have your certificate signed by a commercial CA, or you can sign it yourself. The only problem with self-signing is that browsers will show a warning, since they do not know the signing authority.
Let's see how to self-sign a certificate request. Use the following command:

openssl x509 -req -days 100 -in server.csr -signkey server.key -out server.crt

The above command creates a certificate valid for 100 days. You now have two generated files:

server.key : the private key
server.crt : the certificate
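To check what was generated, you can inspect both files with openssl. The snippet below regenerates a throwaway pair non-interactively so it is self-contained; the subject /CN=example.test is made up for the demo.

```shell
# Non-interactive equivalents of the steps above, with a made-up subject.
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=example.test" \
  -keyout server.key -out server.csr 2>/dev/null
openssl x509 -req -days 100 -in server.csr -signkey server.key -out server.crt 2>/dev/null

# Inspect the certificate's subject and validity window.
openssl x509 -in server.crt -noout -subject -dates

# Sanity-check the private key.
openssl rsa -in server.key -noout -check
```

The x509 command prints the subject and the notBefore/notAfter dates, so you can confirm the 100-day validity before installing the files.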

Let's configure Apache to use these, assuming virtual hosting is used on this server.
Inside the directory /etc/apache2/sites-available/, create a copy of default-ssl and rename it; let's call the new file abc-ssl. (The a2ensite command used below looks for the site file in sites-available.)

cat default-ssl > abc-ssl

Now tweak abc-ssl in a few places:

<VirtualHost <Your-IP-Address>:443>
    ServerName <your-site-name>
    DocumentRoot <site-directory-path>
    SSLCertificateFile <File-path-certificate>
    SSLCertificateKeyFile <File-path-key>
    <Directory <site-directory-path>>
        ...
    </Directory>
</VirtualHost>

With all these changes done, run the following command to enable the SSL configuration for this website:

a2ensite abc-ssl

Reload Apache with the changed configuration:

/etc/init.d/apache2 reload

The server will now serve HTTPS requests for this site.
You can also tweak .htaccess to control whether the whole website, or only some of its pages, is accessed over HTTPS.
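As a sketch of the .htaccess approach (assuming mod_rewrite is enabled; the /secure/ path is just an example, not from the original post), a rule like this redirects only part of the site to HTTPS:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^secure/ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Dropping the ^secure/ prefix would force HTTPS for the whole site.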


Tuesday, 25 December 2012

Dynamic queries with an unknown number of inputs...

Sometimes there is a need to pass a dynamic number of inputs to a query, based on conditions satisfied or on data from some other source/query, as the program executes. Here we will see how to pass a dynamic/unknown number of inputs to a query.
We assume a situation where the inputs to be passed to the dynamic query come from a different SELECT query.
DECLARE
  CURSOR c_fetch_ids IS
    SELECT ids FROM employee WHERE name LIKE 'John %';
  v_counter  NUMBER;
  d          NUMBER;
  v_dyn_stmt VARCHAR2(1000);
  v_cur      NUMBER;
  v_temp     employee.ids%TYPE;
  v_tab_ids  DBMS_SQL.NUMBER_TABLE;
  bind_names DBMS_SQL.VARCHAR2_TABLE;
  v_names    DBMS_SQL.VARCHAR2_TABLE;
  v_salaries DBMS_SQL.NUMBER_TABLE;
BEGIN
  v_counter := 0;
  -- Collect the ids that will become the bind values.
  OPEN c_fetch_ids;
  LOOP
    FETCH c_fetch_ids INTO v_temp;
    EXIT WHEN c_fetch_ids%NOTFOUND;
    v_counter := v_counter + 1;
    v_tab_ids(v_counter) := v_temp;
  END LOOP;
  CLOSE c_fetch_ids;
  -- Build the IN list with one placeholder per collected id.
  v_dyn_stmt := 'SELECT name, salary FROM employee WHERE ids IN (';
  FOR v_counter IN 1 .. v_tab_ids.COUNT
  LOOP
    bind_names(v_counter) := v_counter;
    IF v_counter = 1 THEN
      v_dyn_stmt := v_dyn_stmt || ' :1';
    ELSE
      v_dyn_stmt := v_dyn_stmt || ' ,:' || v_counter;
    END IF;
  END LOOP;
  v_dyn_stmt := v_dyn_stmt || ')';
  -- Parse the statement and bind each placeholder.
  v_cur := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(v_cur, v_dyn_stmt, DBMS_SQL.NATIVE);
  FOR v_counter IN 1 .. v_tab_ids.COUNT
  LOOP
    DBMS_SQL.BIND_VARIABLE(v_cur, bind_names(v_counter), v_tab_ids(v_counter));
  END LOOP;
  -- Fetch the result 10 rows at a time into the output collections.
  DBMS_SQL.DEFINE_ARRAY(v_cur, 1, v_names, 10, 1);
  DBMS_SQL.DEFINE_ARRAY(v_cur, 2, v_salaries, 10, 1);
  d := DBMS_SQL.EXECUTE(v_cur);
  LOOP
    d := DBMS_SQL.FETCH_ROWS(v_cur);
    DBMS_SQL.COLUMN_VALUE(v_cur, 1, v_names);
    DBMS_SQL.COLUMN_VALUE(v_cur, 2, v_salaries);
    EXIT WHEN d != 10;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(v_cur);
END;

The above block fetches names and salaries of employees, 10 at a time, after taking a dynamic number of input ids which come from a different query, with the condition that the name starts with 'John '.

Sunday, 16 December 2012

Execute dynamic queries using Native Dynamic SQL (NDS) in PL/SQL.

Dynamic SQL statements are a powerful way to execute dynamic queries in PL/SQL. In this approach, queries are constructed at run time, as the program proceeds or as conditions arise. There are two tools in PL/SQL to build and execute dynamic queries.

1) Native Dynamic SQL (NDS)
2) DBMS_SQL package

NDS is relatively easy to use compared to the DBMS_SQL package, which has a more complex structure for building dynamic queries.
Using NDS, inputs can be passed to a query statement and outputs can be collected.
While writing dynamic queries, we can follow two approaches: the parts and inputs of a dynamic query can be glued together with string concatenation and then executed, or placeholders can be used to pass parameters/inputs to the query. The placeholder technique is the safer approach, as it prevents SQL injection attacks.

Now let's see how to use NDS

CREATE OR REPLACE PROCEDURE proc(p_id IN NUMBER, p_name OUT VARCHAR2)
IS
  user_id    NUMBER(6);
  query_text VARCHAR2(500);
BEGIN
  user_id := p_id;
  query_text := 'UPDATE employee_data SET salary=1.1*salary WHERE id = :1 RETURNING name INTO :2';
  EXECUTE IMMEDIATE query_text USING user_id RETURNING INTO p_name;
  COMMIT;
END;

Create/replace the above procedure and call it. The dynamic query executes the UPDATE statement, taking an id as input, and returns the name of the employee who got the hike.

Let's see one more example, with a SELECT statement.

CREATE OR REPLACE PROCEDURE proc(p_id IN NUMBER, p_name OUT VARCHAR2)
IS
  user_id    NUMBER(6);
  query_text VARCHAR2(500);
BEGIN
  user_id := p_id;
  query_text := 'SELECT name FROM employee_data WHERE id = :1';
  EXECUTE IMMEDIATE query_text INTO p_name USING user_id;
END;

This SELECT statement saves the name of the employee for the given id in the p_name variable.


Sunday, 9 December 2012

How to emulate a remote Linux machine on a Mac using X11 port forwarding in ssh?

A remote Linux server/machine can be accessed from a Mac using the X11 port forwarding feature of ssh.
X11, also known as the X Window System, is a combination of server and client programs which can be used to emulate the desktop environment of a remote Unix-like machine on a local machine. The server program of the X Window System runs on the local machine and the client program runs on the remote machine.
The best feature of X11 is this separation into client and server programs, which makes it possible to run the two either on the same machine or on different machines. For example, when you use a Linux desktop with a GUI, both client and server run on the same machine; when you access a remote Unix-like machine from a local machine, the X server runs on the local machine and the X client runs on the remote one.
This separation of client and server programs also makes the X Window System fast over network connections, since most of the drawing work is handled on the local machine.
The X Window System can, however, be tricky to use over a network connection: the server sits on the local machine, which makes it difficult for the client, running on the remote machine, to reach the server.
ssh provides a feature to handle this complexity, known as X11 port forwarding.
X11 port forwarding builds a secure tunnel between the client and server programs of the X Window System so that they can communicate.
Now let's see how to use this feature of ssh to connect to a remote Linux machine from a Mac.
You need an X11 server program on your Mac to connect to the remote Linux machine.
The X11 app ships with the operating system for Snow Leopard users, but on Mountain Lion it has to be downloaded and installed explicitly; Mountain Lion users can install the XQuartz app.
Once the app is installed, you will have to make some ssh configuration changes.

Add the following lines to the file /etc/ssh/ssh_config on your local machine (this is the ssh client configuration):

ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes

After this, uncomment/add the following line in the /etc/ssh/sshd_config file on the remote machine (the ssh server configuration):

X11Forwarding yes

Restart the ssh server on the remote machine for the changes to take effect.
Now, on the local machine, open a terminal and execute the following command:

ssh -X user@host.com

This will log you in to the remote machine if keys are set up; otherwise it will ask for a password.
After you are logged in, and assuming that the remote machine's desktop environment is GNOME, execute the following command:

gnome-session

If everything works, you will be able to see your remote Linux machine's GUI on your Mac.


Monday, 26 November 2012

How to use rsync over ssh for a secure and fast transfer?

rsync is a free utility for Unix-based systems which can be used to transfer files between a remote machine and the local machine. It does the same work as rcp, but it is much faster. The reason for its speed is that it doesn't transfer whole files or directories; it uses a delta-transfer algorithm, based on rolling checksums, to send only the differences.
rsync by itself doesn't offer any security, but when used over ssh it is an excellent way to transfer files securely, and still faster than other copying utilities.
rsync can be used over ssh, or it can connect directly to an rsync daemon running on the remote machine.

NOTE: rsync cannot be used for transfer between two remote hosts.

Now we will see how to use rsync over ssh to synchronize files or directories between a remote and a local machine.

To synchronize a directory on the local machine with one on the remote machine (i.e. pull from remote):

rsync -avz -e "ssh -l USER" --delete HOST:REMOTE_DIR LOCAL_DIR

In the same way, to synchronize a remote directory with a local directory (push to remote):

rsync -avz -e "ssh -l USER" --delete LOCAL_DIR HOST:REMOTE_DIR
 
Let's see what these options are:

a : archive mode; preserves permissions, ownership, timestamps and symlinks, and implies recursion (-r).
e : specifies the remote shell program used for communication; here ssh, with -l USER passing the login name.
r : transfer directories recursively (already implied by -a).
v : verbose mode.
z : compress the data before transmitting.
--delete : delete extraneous files on the receiving side that are not present on the sending side.


Tuesday, 6 November 2012

How to connect to a remote SSH server using public/private key cryptography?

Generally we use a username and password to connect to a remote SSH server. Connecting with a password is cumbersome and less secure. Here are some of its drawbacks:

1) If you use more than one account, you need to remember a password for each of them.
2) Changing a password is an annoying task, and you need to communicate the change to everyone using a shared account.
3) Passwords are not as secure a way of authenticating as keys. Each time you use a password, it has to be sent to the server for authentication.

Now, let's see what a key is. When we use keys to authenticate over the network, we are actually using public/private key cryptography for authentication.

How does a public/private key cryptography work?

In this method, we generate two keys, a public key and a private key. The public key is known to everyone; we can freely transfer it over the network. The private key is known only to us: we never transmit it over the network, nor do we tell it to anyone. It lives only on your local machine, stored in such a way that only the authorized account/user can access it.

Once both keys are generated, we install the public key on the remote machine and keep the private key with us.

The following steps happen when we authenticate to the server using keys:

1) The local machine requests a connection from the server.
2) The server sends the local machine some data, known as a challenge, encrypted with the public key.
3) The local machine/account uses its private key to decrypt the data and sends the result back to the server.
4) If the server finds that the data it sent and the data it received match, it allows the connection; otherwise it refuses it.

Let's see how we can actually set up key-based authentication.

1) Generate keys. Run the following program in a shell:

ssh-keygen

This generates both the public and the private key. When run, it asks for the file name in which to save the keys and for a passphrase for the private key; I will discuss the passphrase later. For now, you can enter one if you want, or just press Enter to leave it empty. Supposing you chose the file name my_secret_key, two files are generated:

my_secret_key contains the private key.
my_secret_key.pub contains the public key.
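The same generation step can be done non-interactively; the file name my_secret_key and the empty passphrase below are just for the demo.

```shell
# -N '' sets an empty passphrase, -f names the output files, -q keeps it quiet.
ssh-keygen -t rsa -b 2048 -N '' -f my_secret_key -q

# Two files now exist: the private key and the public key.
ls my_secret_key my_secret_key.pub

# Print the key's fingerprint, a convenient way to identify it later.
ssh-keygen -l -f my_secret_key.pub
```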

2) After the keys are generated, you will have to install the public key on the remote server. For this, secure-copy the public key from your local machine to the remote machine:

scp /home/XYZ/my_secret_key.pub remote_user@host.com:/home/remote_user/

Now the public key is copied to the home directory of remote_user.

3) After this, log in over ssh to the remote host with the account for which you want to install the public key, in this case remote_user. This is the last time you will be logging in with your password.

Make a .ssh directory inside your home directory if it is not already present. Assuming you are in the home directory:

mkdir ./.ssh (if not present)

Note: This directory is hidden, so use ls -a to check for its presence.

If the .ssh directory is present, check for the file authorized_keys and append the content of my_secret_key.pub to it:

cat my_secret_key.pub >> /home/remote_user/.ssh/authorized_keys

Appending creates the file if it is not present, so you need not create it explicitly. You only have to take care not to delete someone else's public key already installed for the same account. Also make sure the permissions are strict (700 on .ssh and 600 on authorized_keys), since sshd typically refuses keys with looser permissions.
Your public key is now installed on the remote machine.

Come back to your local machine. Remember the passphrase you entered (if any) while generating the keys. This passphrase is used to encrypt your private key, which is then stored on the local machine in encrypted form. So even if your encrypted private key leaks, nobody will be able to decrypt it and use it to answer the challenge the server sends, unless they know the passphrase. Now you see the importance of the passphrase.

With everything set, you can try connecting to the remote SSH server using keys:

ssh -i my_secret_key remote_user@host.com

It will ask for your passphrase (if set). After you enter it, you are connected to the server.

The best part of this approach is that your password is never passed over the network, and neither is your passphrase.

If you don't want to enter your passphrase again and again, you can hand it to an agent. An agent is a program which remembers the passphrase for you; whenever you use the SSH client to connect to a host, the agent supplies the passphrase on your behalf.
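A typical agent session with OpenSSH's stock ssh-agent looks like this; the key file name is made up, and the empty passphrase is only so the snippet runs unattended (normally you would add your real, passphrase-protected key).

```shell
# Start an agent for this shell session and load its environment variables.
eval "$(ssh-agent -s)" >/dev/null

# Generate a demo key so the snippet is self-contained.
ssh-keygen -t rsa -N '' -f agent_demo_key -q

# Hand the key to the agent; with a passphrase-protected key, this is the
# one time you would be asked for the passphrase.
ssh-add agent_demo_key 2>/dev/null

# List the keys the agent currently holds.
ssh-add -l
```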


Monday, 29 October 2012

How to use FTP in a shell script?

FTP is a standard protocol for file transfer. With FTP, files can be transferred to and from a remote host. The remote host has an FTP server installed which listens for FTP clients; the FTP client is the program which communicates with the FTP server.
Suppose a user has an account on the FTP server with username name and password pass.
Now let's write a script to transfer a file from the remote host to the local machine.

#! /bin/sh
ftp -in <host-or-ip> <<END
user name pass
cd /var/myremotedir
lcd /var/mylocaldir
get file
close
bye
END
echo 'transfer completed'

In the above script, we transfer a file from the remote directory /var/myremotedir to the local directory /var/mylocaldir. cd changes the directory on the remote server, lcd changes the directory on the local machine, and get transfers a file from the remote directory to the local one. Similarly, put transfers a file from the local directory to the remote one.
Note: Here cd, lcd and get are ftp commands, not Unix commands. An FTP server must be installed and listening on the remote machine for the FTP client to communicate with, and the username and password used by the client must be registered with that server.


Sunday, 21 October 2012

Use NSUserDefaults to store default values or user settings for an application...

NSUserDefaults is the class used to store default values for an application. It stores the values inside the application's sandbox. The values which can be stored are either scalars or objects which can be serialized into a property list.
NSUserDefaults is a singleton, meaning it has only one instance per application. You don't have to worry about the storage location, about updating the data, or about deallocating the object once its use is over; all of this is handled by NSUserDefaults. In fact, NSUserDefaults also caches the information in memory so that disk reads/writes are reduced.
NSUserDefaults periodically synchronizes the values stored in memory with the values stored on disk. It also provides the synchronize method, which can be used to synchronize the values explicitly. It stores objects as key-value pairs.
Now, let's see how to use this class for storing defaults.
Get the singleton object:

NSUserDefaults *userdefaults = [NSUserDefaults standardUserDefaults];

Store/read values. There are setter and getter methods to store and retrieve values:

[userdefaults setDouble:2.41 forKey:@"doublevalue"];
[userdefaults doubleForKey:@"doublevalue"];

Similar methods exist for float and integer values.
To set an object for a key we can use:

NSArray *userarray = [[NSArray alloc] init..........];
[userdefaults setObject:userarray forKey:@"arr"];
[userdefaults objectForKey:@"arr"];

In the above code, we have stored an array object.
NSUserDefaults also provides a facility for registering default values: values used until the user sets their own. This is done with the registerDefaults: method of NSUserDefaults, which takes an NSDictionary object as its parameter:

NSMutableDictionary *defval = [NSMutableDictionary dictionary];
[defval setObject:@"Beginner" forKey:@"Level"];
[[NSUserDefaults standardUserDefaults] registerDefaults:defval];


Sunday, 7 October 2012

How to use NSOperation and NSOperationQueue for multithreading?

Multithreading is an important aspect of iOS applications. We cannot always load our main thread with all the tasks. If an application performs all of its tasks serially then there is no need for multithreading, but most applications have some tasks which can be completed without depending on other tasks, or need the main thread left free for user interaction. In these cases multithreading is a necessity.
You can always use NSThread to multithread your application, but doing so is only recommended when your requirements leave no other option. Using NSThread directly may not be as efficient as using NSOperation, since NSOperation takes the current system load and the available cores into account when deciding how far the application can be multithreaded.
If you use NSOperation, you don't have to take care of spawning a new thread or terminating it after the work is complete. You also don't have to worry about how many threads to use: using more threads does not automatically make your application more efficient. Remember, every thread has a cost associated with it, including its own stack space and scheduling overhead.
In simple terms, an instance of a subclass of NSOperation denotes an operation which we want performed concurrently with the operations denoted by other such instances, or with the work of the main thread itself.
Why have I used the term "subclass of NSOperation"?
NSOperation is an abstract class, so you cannot instantiate it directly; you have to subclass it to use it. But don't worry: the Foundation framework provides the NSInvocationOperation subclass, which can be used directly to create operations.
NSInvocationOperation provides almost everything needed for multithreading, but if you feel it cannot cater to your needs, you can always subclass NSOperation yourself.
The NSOperationQueue class can be instantiated directly to create a queue to which we add our operations. Once you add operations to an NSOperationQueue, it takes care of when and how to execute them and how many threads to use. An NSOperationQueue starts executing operations almost as soon as they are added, provided they have no dependencies on other operations in the same or another queue and the queue is not overloaded.
Now, let's see how to use NSOperation and NSOperationQueue.
Note: I won't be dealing with creating custom subclass of NSOperation in this post.
Create an operation:

@implementation testViewController

- (void)viewDidLoad {
    aQueue = [[NSOperationQueue alloc] init]; // aQueue is an NSOperationQueue ivar
    [super viewDidLoad];
}

- (void)fun1:(NSObject *)obj {
    NSInvocationOperation *theOp = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(fun2:) object:obj];
    [theOp addObserver:self forKeyPath:@"isFinished" options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld context:NULL];
    [aQueue addOperation:theOp];
    [theOp release];
}

- (void)fun2:(NSObject *)obj {
    // Start your parallel task
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqual:@"isFinished"]) {
        // Do something, the operation has finished.
    }
}

@end

fun2: is the entry point for the new task/thread. NSOperation is a KVO (Key-Value Observing) compliant class, so you can register your objects to receive notifications. In the above code, I have registered the current object to be notified when an operation finishes.
Note: Don't try to change your operation object once it has been submitted to the queue, because after submission you never know when the queue will start executing it.

Tuesday, 2 October 2012

CRON not sending mails!!!

A while back I faced an issue in which a script set up in cron was not sending mails; the strange thing was that when I ran the script from the shell, it sent mails just fine.
So it was confirmed that there was no issue with the code or with the mail command in the script. What was the issue then?
After some searching I found out that cron does not run with all the permissions and environment of the user. In my case, although the cron job was set up by a user who had the privilege to run the mail command, the job was not running with that user's environment.
Some values were missing from the PATH environment variable, due to which the shell was unable to resolve the command name.
How to deal with this problem?
There are several ways to deal with problems like this:
1) Source the profile of the user, which has the PATH variable set as required.
2) Write the command name with its full path.
3) Set the PATH environment variable inside the script to include the path of the command.
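Option 3 can be demonstrated by simulating cron's near-empty environment with env -i; the PATH value used here is a typical default, so adjust it for your system.

```shell
# env -i starts /bin/sh with a scrubbed environment, roughly the way cron
# does; setting PATH inside the script makes commands resolvable again.
env -i /bin/sh -c 'PATH=/usr/bin:/bin; export PATH; command -v date'
```

The command prints the full path to date, showing that the shell can now resolve command names even though it inherited no environment.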


Sunday, 16 September 2012

How to mount a Pen Drive manually in Linux?

Most of you must be wondering why we would ever need to mount a pen drive or any other external hard drive manually. Why can't we just plug in the drive and let Linux do things for us, i.e. detect it and mount it inside the root file system?
The answer is, we can always let Linux mount the drive automatically. In fact, in my version of Linux, as soon as I plug in the drive it is mounted at /media/Pendrive, supposing my pen drive's label is Pendrive.
But sometimes we do not want automatic mounting; we want to mount our external hard drive or pen drive at a specific location for things to work. This is the scenario where we need to mount a drive manually.
Let's see how to mount a drive manually.
If your drive is mounted automatically on plugging in, first unmount it from its current location with one of these commands:

umount /media/Pendrive # supposing the pen drive is mounted on /media/Pendrive
umount /dev/sdb1 # supposing the name Linux gave the device is sdb1

You can give the device name for unmounting if you know it, or you can find it using the command dmesg: look for a SCSI device with the storage capacity of your drive (the name will be something like sda1, sda2, ..., sdb1, sdb2, ...).
Make the directory where you want to mount the drive. Let's say I want to mount it at /Folder. Mount the drive using the following command:

mount -t ext2 /dev/sdb1 /Folder

Now the drive is mounted at /Folder. Here ext2 is the file system on the drive.
If you want your drive to be automatically mounted at the specified location when the system boots up, and unmounted when it shuts down, add the following line to /etc/fstab:

/dev/sdb1 /Folder ext2 defaults 0 0

The file systems in /etc/fstab are mounted automatically during system start-up and unmounted during shutdown. You can always mount all the file systems in fstab with mount -a.


Sunday, 9 September 2012

{ } and ( ), two different ways of grouping commands in shell script.

There are two different ways of grouping commands in a Unix shell script:
i) ( ) : commands grouped inside ( ) are executed in a subshell instead of the current shell.
ii) { } : commands grouped inside { } are executed in the current shell.
Let's understand this with an example in which we change the working directory of a shell and verify whether it has changed.
Open a shell and type the following command, assuming the shell's pwd before the command is /:
$ { cd /etc/mail ; pwd ; }
The output will be
/etc/mail
This cd command was run in the current shell. To verify, execute pwd again and you will see the same output:
$ pwd
/etc/mail
Now we will execute the same kind of commands using ( ), again assuming that the pwd is /etc/mail:
$ ( cd .. ; pwd )
The output will be
/etc
This cd command was run in a subshell. To verify, execute pwd and you will see:
$ pwd
/etc/mail
We can see that the pwd of the current shell has not changed.
Note: Always put a ; before } when { and } appear on the same line. This is not needed with ( ).
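The same difference shows up with variables, not just with cd: a subshell cannot modify the parent shell's state.

```shell
x=1
( x=2 )          # runs in a subshell; the assignment is lost on return
echo "$x"        # prints 1
{ x=3 ; }        # runs in the current shell; the assignment sticks
echo "$x"        # prints 3
```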

Tuesday, 28 August 2012

Different ways of running a shell script…

We hardly care about how a shell script executes; the only thing we care about is that scripts should execute without bugs. Here, we are going to discuss some ways of running a script.
Before seeing the ways of running a shell script, let me clear up a question: what is a shell script?
We all execute commands in a shell. A shell is a program which provides us the environment to execute commands, which also means we need a shell running before we start running commands. A shell script is nothing but a collection of commands: we club some commands into a file and then ask the interpreter to run that file. This file where we club our commands is known as the interpreter file. Please note that it is not the interpreter file which executes; rather, the contents of the interpreter file are executed by the interpreter, and the interpreter is nothing but the shell. Without going into much more detail about shells and interpreters, we can say that a shell, or interpreter, is essential for the execution of shell scripts.
Let's see the ways of running a script.
1) /bin/sh testscript.sh
This is the first way, in which we explicitly name the shell (here /bin/sh, the Bourne shell) to run the script. The main shell forks a new shell, and that subshell/interpreter executes the script.
Note: The script will not be able to access any variables of the parent shell unless they are exported.
2) ./testscript.sh or {full-path}/testscript.sh
This is the second way in which we don't specify the shell name. Infact, we write a special line at the top of the interpreter file
#! /bin/sh
# in shell script means that the line is a comment, but # in the first line defines the interpreter to be used for running the script. This line is known as shebang line.
Note: If we don't specify shebang line in this format of running the script then by default, the executing shell will be of same type as the user's login shell. For ex: If the user's default login shell is ksh then ksh shell will be used for running the script.
It is necessary to give a relative or full path to the script. If we type only the script name on the command line, the shell treats it as a command and searches the directories in PATH for its executable. We can add the script's directory to PATH if we want to run the script by name alone.
3) . ./testscript.sh
If we drop the first dot, this is the same as the second method, so what does the leading dot change?
The leading dot means the script is run by the current shell itself, i.e. no new subshell is forked (this is known as "sourcing" the script). Because the script runs in the current shell, it can use all of the current shell's variables.


Monday, 20 August 2012

How to use TO_NUMBER() and TO_CHAR() to get Number/String in a given format in PL/SQL?

 TO_NUMBER: The structure of TO_NUMBER function is shown below.

TO_NUMBER(string,[format],[NLS params])

The parameters shown in [] are optional. We can simply use TO_NUMBER to convert a string to number as shown below.

TO_NUMBER('32392.849');

The output of the above function call will be 32392.849. But what about the case when your string contains group separators such as commas?

Suppose the string is '323,567,897.90' and you want to convert it into a number. Using the function as above will give an error. This string can be converted into a number as shown below.

TO_NUMBER('323,567,897.90' , '999G999G999D99');

Here G stands for the group separator (the second character of the session's NLS_NUMERIC_CHARACTERS setting) and D stands for the decimal character (the first character).

Run the following query to see the NLS_NUMERIC_CHARACTERS set of your session

SELECT * FROM NLS_SESSION_PARAMETERS;

You can modify NLS_NUMERIC_CHARACTERS for your session, or pass a custom NLS_NUMERIC_CHARACTERS setting as the third parameter of the TO_NUMBER() function. The output of the above query is long, but the line useful to us is

NLS_NUMERIC_CHARACTERS .,

So, since G is the second character, it denotes ',', and since D is the first character, it denotes '.'.

You can pass NLS_NUMERIC_CHARACTERS in the TO_NUMBER() function as follows.

TO_NUMBER('323.567.897,90' , '999G999G999D99' , 'NLS_NUMERIC_CHARACTERS='',.''');

Now D will denote ',' and G will denote '.'.

Currency specifier can also be passed into the format as shown below.

TO_NUMBER('$323,567,897.90' , 'L999G999G999D99');

The default currency in NLS_SESSION_PARAMETERS is $. Again, you can pass your own currency as the third parameter of TO_NUMBER function.

NOTE: The first parameter of TO_NUMBER cannot contain more digits than the format (second parameter) allows, on either side of the decimal point.

Now, let's see how to use the TO_CHAR function to convert a number into a string. Its signature is the same as that of TO_NUMBER.

Suppose we have the number 323567897.90 and want to convert it to a string in a given format

TO_CHAR(323567897.90 , 'L999G999G999D99')

and the output will be $323,567,897.90.

Also, there is a minor difference between TO_CHAR and TO_NUMBER. TO_NUMBER does not allow the first parameter to have more digits than the format specifies, on either side of the decimal. TO_CHAR, however, rounds off extra digits to the right of the decimal. For example:

TO_CHAR(323567897.83 , 'L999G999G999D9') is perfectly valid.

You can always have more digits in the format specifier (arg 2) than in the actual number (arg 1) in either function.

THE V FORMAT ELEMENT

Below is the example for V format element.

TO_CHAR(323567897.83 , 'L999G999G999V999')

The output will be $323,567,897830. Did you get the pattern? The V element effectively multiplies the number by 10^n, where n is the number of digits after the V: overlay arg 2 on arg 1 with V taking the place of the decimal point, appending extra 0s on the right if required.

You can always go for various combinations of format specifier and get interesting results.


Tuesday, 31 July 2012

fork() vs vfork().

vfork() is the same as fork() in that it also creates a new process when called. The difference is that it does not copy the parent's address space into the child. The main intention of vfork() is that the child will run a new program with exec as soon as it is created, and so will not need its own copy of the parent's address space.

After the child process is created, it runs in the parent's address space until exec or exit is called. This means both child and parent share the same address space until exec or exit is called.

vfork() also guarantees that after forking, the child runs first, until it calls exec or exit. While the child is running, before it calls exec or exit, the parent does not run; the parent resumes only when the child calls exec or exit.

Below is a simple example which will help you understand the difference between them.

Compile and execute the following program

 
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    int pid, var = 1;
    if ((pid = vfork()) == 0) {
        /* Child process starts */
        var = var + 1;
        exit(0);
        /* Child process ends */
    }
    /* Parent process continues */
    printf("value of var %d\n", var);
    return 0;
}


Note the value of var. The output will be

value of var 2.

As we can see, the value of var in the parent's address space has been changed by the child process. This means no separate address space was created for the child, since exec was never called.

Now run the same program using fork() instead of vfork(), and the output will be

value of var 1.

This is because, with fork(), the child changes var in its own copy of the address space, so the value in the parent's address space does not change.

 

Monday, 16 July 2012

Records in PL/SQL.


A record, as the name signifies, is a collection of one or more fields in a single variable. For those of you familiar with the C language, a record is comparable to a structure in C. The fields of a record can be of SQL or PL/SQL types.

There are two ways in which a record can be declared.

1) Using Anchored Declarations (Table-Based or Cursor-Based Records).

ex:   employee_details employee%ROWTYPE;

Here employee is a table. Suppose the employee table has the structure shown below.

 EMP_ID                                             NOT NULL                 NUMBER
 EMP_NAME                                    NOT NULL                 VARCHAR2(10)

 We can collect the values corresponding to some row of a table in the record as

SELECT * into employee_details FROM employee WHERE emp_id  = 1;

and after that, we can access or manipulate the fields in the record using ".".

employee_details.emp_name

In the same way, we can also declare a record of Cursor type

CURSOR employee_cursor IS SELECT * FROM employee WHERE emp_id = 1;

emp_cur employee_cursor%ROWTYPE;

We can directly fetch values from cursor into our record as

OPEN employee_cursor;

FETCH employee_cursor INTO emp_cur;

2) Programmer Defined Records:

In a programmer-defined record, we decide the fields ourselves; the record does not depend on any table or cursor for its structure. This type of record is declared as

TYPE emp_rec IS RECORD

( name employee.emp_name%TYPE,

age NUMBER(3),

salary NUMBER(10) := 0

);

employee_record emp_rec;

In this type of record, we are free to define our own fields with different datatypes, and we can specify NOT NULL and/or a default value (:=) for each field. These records are useful when we want to collect data from different tables, or when our record has nothing to do with tables or cursors.

We can manipulate the fields of this type of record in a number of ways, for example

employee_record.name := 'John';

select name into employee_record.name from employee where emp_id = 1;

 Note:

1) Two records cannot be compared as rec1 = rec2, even if they are of the same record type. Comparison of two records can only be done field by field.

2) The values of one record can be assigned to another record if they are of the same record type.

rec1 := rec2;

3) Records can be passed as parameters or can be returned from a function.

4) IS NULL cannot be used to check if all the fields of a record are NULL.

5) NULL can be assigned to a record as

rec1 := NULL;

6) We can insert a whole record into a table (without listing individual fields) only if the record is declared using an anchored declaration.

emp_rec employee%ROWTYPE;

insert into employee values emp_rec;

Sunday, 8 July 2012

Difference between dup() and dup2().

dup() and dup2() are system calls used to duplicate file descriptors. Each process has a file descriptor table which contains pointers to the file table entries the process is using.

File descriptors are generally indexes into that table. The real information, i.e. the file pointer, is stored at those indexes. The file pointer points to a file table entry, which contains various attributes of the open file, such as the current offset, the file status flags, v-node information, etc. Whenever a process forks, i.e. creates a new child process, all the information in this table is copied into the child process's memory space.

Since the file descriptor table is copied from parent to child, the file pointers are copied too, so the pointers in both memory spaces (child as well as parent) point to the same file table entries. All the attributes in those file table entries are therefore shared by both parent and child.

This was a brief description of how file descriptors work in a process.

Now, as mentioned earlier, dup() and dup2() are system calls used to duplicate file descriptors. By duplicating a file descriptor we mean that the pointer information is copied from one slot of the descriptor table to another: no new file table entry is created, the existing file table entry is simply pointed to by one more slot in the descriptor table.

dup() : dup(int filedes) takes a file descriptor as its argument and returns the lowest available file descriptor, which now carries the same file table pointer as filedes.

Suppose in a process, we want to duplicate the output file descriptor (1). Call

dup(1)

which will return the next lowest available file descriptor. If the process has

0 as the input descriptor

1 as output descriptor

2 as error output descriptor.

then calling dup(1) will return 3 as the next available file descriptor, and 3 will point to the terminal (same as 1).

dup2() : With dup(), we are never sure which value will be returned; it automatically returns the lowest available file descriptor. When we need the duplicate to be a specific file descriptor, we use dup2().

dup2(int filedes1, int filedes2) takes two file descriptors as its arguments. Let's suppose, we want to duplicate file descriptor 5 with file descriptor 1, i.e we want the file descriptor 5 to point at the same file table as file descriptor 1.

We will call dup2(1, 5). It first checks whether file descriptor 5 is already open; if it is, descriptor 5 is closed, and then 5 is made to point to the same file table entry as descriptor 1.

If filedes1 and filedes2 are the same descriptor, dup2() returns filedes2 without closing it; -1 is returned only on error.

One more thing, we could have implemented the same functionality as below.

close(filedes2)

dup(filedes1)

But we don't do it like this. Why?

If we look closely, the above two operations are not atomic: there is a small time window between the call to close and the call to dup, so a signal handler (if a signal arrives between the two calls) could alter the pattern of file descriptors. dup2() performs the close and the duplication as a single atomic operation.

Generally, we use these duplication system calls in inter-process communication, where we create pipes for communication between two processes.

Monday, 2 July 2012

Oracle WITH Clause.


WITH clause is also known as Subquery factoring clause. It is used to simplify the structure of complex queries, and in some cases, optimize the queries.

A With clause works much like a temporary table or an inline view; the main difference is that a query written using a With clause is less complex and easier to understand.

When we use a With clause, the optimizer decides whether to materialize a temporary table or to use an inline view for the subquery in the With clause. This depends on the subqueries involved: if they are complex enough, the optimizer may create a temporary table; otherwise an inline view is sufficient.

We can also pass Optimizer hints in the subqueries written in with clause, to request the optimizer for an inline view or a Temporary table.

Select /*+ MATERIALIZE */ ....   (for a temporary table)

Select /*+ INLINE */ .....   (for an inline view)

Now let's see an example, with and without With clause.

Suppose I want to find out the names of the employees having salary greater than the average salary of the department in which they work.

I can write this query in 3 ways.

1) Using Correlated Subqueries.

select ed1.name from employee_details ed1
where salary>(select avg(salary) from
employee_details ed2 where
ed1.department=ed2.department)

Writing a correlated subquery is fine when the tables are small, but with a large amount of data this approach may not be suitable: the subquery executes once for each employee in the table. So we discard this approach when the tables hold a huge amount of data.

2) The second approach is to create Inline view.

select ed1.name from employee_details ed1,
(Select avg(salary) sal,department
from employee_details group by
department) ed2 where ed2.department=ed1.department
and ed1.salary > ed2.sal

Here an inline view is created, which is more efficient for large amounts of data. The query above contains only one inline view, but queries containing many inline views become very complex to understand.

3) Using With Clause.

There is not much difference between a With clause and an inline view, except that the way of writing queries becomes more understandable.

Now, let's see the same query using a With Clause.

With ed2 as(Select avg(salary) sal,department
from employee_details group by
department)
select ed1.name from employee_details ed1,
ed2
where ed1.department=ed2.department and
ed1.salary>ed2.sal

Queries using a With clause start with With instead of Select, which may seem weird at first, but as you start using the With clause it will become an inseparable part of writing simple, understandable queries.

With clause may come handy in a lot of other situations(demanding subqueries), the only limit is how efficiently we can reduce the complexity of our queries using With Clause.

Saturday, 30 June 2012

sigsuspend vs pause...


sigsuspend and pause both do the same job: they suspend the process until some signal is received. These functions return once the handler for the signal has run.

A small difference between these functions is that with sigsuspend we can specify a signal mask; the signals in that mask stay blocked while the process waits. Suppose we use sigsuspend in the following manner

sigset_t tempmask;
sigemptyset(&tempmask);
sigaddset(&tempmask, SIGINT);
sigsuspend(&tempmask);

Now, in the above code, we have told sigsuspend to listen for every signal except SIGINT, which the mask keeps blocked.

When sigsuspend receives any signal other than SIGINT, it returns.

This was a small difference between sigsuspend and pause. The major difference in their behaviour shows up when we take a scenario and implement it with both.

Suppose my scenario is: I have a program with two critical regions. By critical region, I mean a section of code that I do not want a particular signal to interrupt while it is executing. At the same time, I want the second critical section to start only after I have confirmation that the signal has been received.

This is a very natural scenario, and we may encounter this kind of condition anytime in our programs. Suppose I have two processes, p1 and p2. p2 has two critical sections, and it wants to go ahead with its second critical section only if it receives (or has already received) a signal from p1, while at the same time not letting the signal disturb the execution of its critical sections.

Let's see, how we can implement this scenario using pause()

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

static void rec(int signo)
{
    printf("\ninterrupt received\n");
}

int main()
{
    sigset_t mask1, mask2;

    signal(SIGINT, rec);
    sigemptyset(&mask1);
    sigaddset(&mask1, SIGINT);

    sigprocmask(SIG_BLOCK, &mask1, &mask2);
    printf("\ncritical region 1\n");
    sleep(5);
    printf("\ncritical region 1 ends\n");
    sigprocmask(SIG_SETMASK, &mask2, NULL);

    pause();

    sigprocmask(SIG_BLOCK, &mask1, &mask2);
    printf("\ncritical region 2\n");
    sleep(5);
    printf("\ncritical region 2 ends\n");
    sigprocmask(SIG_SETMASK, &mask2, NULL);
    exit(0);
}

We use only one process, and assume that SIGINT is the signal that must not interrupt the process during a critical section. In the above program, we block SIGINT before each critical section starts and unblock it when the critical section is over.

The program also pauses for confirmation that SIGINT is received before proceeding towards second critical section.

We unblock SIGINT once the critical section ends, in order to receive it. The main problem with this program is that there is a time window between unblocking the signal and calling pause() in which the signal can be delivered. In that case, the process blocks in pause() forever.

To reproduce this problem, send SIGINT from the terminal before critical section 1 ends, i.e. while the process is sleeping inside the first critical section

sigprocmask(SIG_SETMASK, &mask2, NULL);
/* window in which the signal could be received */
pause();

To solve this problem, we make use of sigsuspend. We will write another program implementing the same logic, but using sigsuspend.

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

static void rec(int signo)
{
    printf("\ninterrupt received\n");
}

int main()
{
    sigset_t mask1, mask2;

    signal(SIGINT, rec);
    sigemptyset(&mask1);
    sigaddset(&mask1, SIGINT);

    sigprocmask(SIG_BLOCK, &mask1, &mask2);
    printf("\ncritical region 1\n");
    sleep(5);
    printf("\ncritical region 1 ends\n");

    /* atomically unblock SIGINT and wait for it */
    sigsuspend(&mask2);

    printf("\ncritical region 2\n");
    sleep(5);
    printf("\ncritical region 2 ends\n");
    sigprocmask(SIG_SETMASK, &mask2, NULL);
    exit(0);
}

The above program does not unblock SIGINT in order to receive it; instead, it uses sigsuspend.

We pass the original process mask (mask2), which does not block SIGINT, to sigsuspend. sigsuspend atomically installs that mask and pauses the process; it returns only after a signal not blocked by mask2 is received and its handler has run. So sigsuspend returns upon receipt of SIGINT. When sigsuspend returns, it restores the process's signal mask to what it was just before the call, so SIGINT is blocked again before the second critical section starts.

Monday, 18 June 2012

Session, Foreground processes, Background processes, and their interaction with Controlling Terminal.


Session: A session is a collection of process groups. In a session there can be only one foreground process group and any number of background process groups. The process which starts the session is known as the session leader. A session leader can be in either a foreground process group or a background process group.

Run the following command in your terminal to get the idea

ps -o pid,ppid,pgid,sid,tpgid,comm

The output will be similar as following

  PID   PPID   PGID    SID  TPGID  COMMAND
17335  17334  17335  17335  17495  bash
17495  17335  17495  17335  17495  ps

The SID column tells us who the session leader is. The SID is always the process group id of the session leader, which is also the same as the session leader's process id. Why?

Because whenever a new session is started, there is only a single process in the session, which is also known as session leader. This session leader is the process which starts the session.

In the above case, the session leader process is the shell. We can also start our own sessions. A session is not started by some brand-new process; it is started by an existing process, which until then belongs to some other session. When a new session is started, the process that starts it is moved into a new process group whose id equals its process id; in other words, the session starter becomes the leader of the new group. The new session contains just that single process and that single group.

As I said, a session leader can be in either a foreground or a background process group. In the case above, the shell, which is the session leader, is in a background process group.

Now, run the same command in background.

ps -o pid,ppid,pgid,sid,tpgid,comm &

The output will be

  PID   PPID   PGID    SID  TPGID  COMMAND
17601  17600  17601  17601  17601  bash
17610  17601  17610  17601  17601  ps

Now, in this case, the shell is in the foreground process group. But how do we know which process group is in the foreground and which is in the background?

Here comes the concept of Controlling terminal

A controlling terminal is the terminal device which is connected to a session. A session can have at most one controlling terminal. The session leader that establishes the connection with the controlling terminal is called the controlling process. The controlling terminal can only interact with the foreground process group.

The TPGID column seen in the above example is the terminal process group id, which is nothing but the process group id of the foreground process group, and in the second case that is the shell's group.

Interrupts from the terminal, such as ^C and ^Z, only affect the processes in the foreground process group.

Any background process group can always be brought to the foreground, provided it is in the same session as the foreground process group.

Now, let's see a simple example of toggling between Foreground process group and Background process group

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int pid;
    pid = fork();
    if (pid == 0) {
        /* move the child into its own process group */
        setpgid(getpid(), getpid());
        sleep(2);
        printf("\nin child process");
        printf("\nchildpid %d", getpid());
        printf("\nchildgrpid %d\n", getpgrp());
        printf("\nchild done");
        printf("\nchanged fpgid to child process%d\n", tcgetpgrp(1));
    } else {
        printf("\ninitial fpgid %d\n", tcgetpgrp(1));
        printf("\nin parent process");
        printf("\nparentpid %d", getpid());
        printf("\nparentgrpid %d\n", getpgrp());
        /* make the child's group the foreground process group */
        tcsetpgrp(1, pid);
        wait(NULL);
        printf("\nparent done");
        printf("\nchanged fpgid to parent process%d\n", tcgetpgrp(1));
    }
    exit(0);
}

Compile and run the above program. The output will be -

initial fpgid 17737

in parent process
parentpid 17737
parentgrpid 17737

in child process
childpid 17738
childgrpid 17738

child done
changed fpgid to child process17738

[1]+  Stopped                 ./a.out

Now, let's analyse the program from beginning.

We fork() a child process and change its group id to its pid. Why? This moves the child into a group other than its parent's, since after forking it inherits the parent's group. We make it sleep for 2 seconds so that the parent process runs first.

In parent process, the parent's pid, pgid, and Foreground process group id is printed. Initially, the fpgid is same as parentgrpid. After printing all this, we change the fpgid to childgrpid or childpid. Parent process waits for child process to terminate.

Child process starts after 2 secs. It prints childpid, childgrpid and changed fpgid which now is same as childgrpid. This means, now child process group is the foreground process group. Child process prints "child done" and terminates.

Now what is this "[1]+  Stopped                 ./a.out"?

It looks like some process has been stopped in the background; on inspection, it is the parent process.

Why was it stopped?

It was stopped because it received the signal SIGTTOU while trying to print the last two statements on the terminal. The terminal sends this signal to any process in a background process group that tries to write to it.

Note that the parent had already reaped the child with wait() before being stopped, so the child does not linger as a zombie; only the parent's final two prints are pending.

Now we will start this stopped parent process by bringing it in foreground.

fg %1

The output will be

parent done
changed fpgid to parent process17737

One more thing: just now we changed the fpgid to the child process's group id. How did it revert back to the parent process's group id?

It's simple: when we brought the job with job id 1 to the foreground, the shell made that job's process group the foreground process group again.

Note: Make sure you run the following command, before executing above program.

stty tostop

This command stops background jobs that try to write to the terminal.
