Inter-process Communication in OS Notes

 


Definition:

A set of methods through which data is shared among concurrent processes in a system is known as inter-process communication (IPC). The processes of a computer system may be running on one computer or on several computers connected by a network. Multiple processes communicate with each other to share data of any type and resources, and the operating system provides various facilities for this communication. In multiprogramming systems, some common storage is used where processes can share data; the shared storage may be a region of main memory or a shared file. Files are the most commonly used mechanism for data sharing between processes: one process can write data to a file while another process reads the data from the same file.
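
As a small illustration of file-based sharing, the C sketch below writes a line of data to a file and reads it back. The file name ipc_shared.txt is chosen only for the example; in a real system the writer and the reader would be two separate processes opening the same file.

/* Sketch of file-based data sharing: one process writes to a shared file
   and another reads from it. Both sides are shown in one program only
   for brevity; the file name is illustrative. */
#include <stdio.h>

int main(void)
{
    /* Writer side: one process writes data to the shared file. */
    FILE *out = fopen("ipc_shared.txt", "w");
    if (!out) { perror("fopen"); return 1; }
    fprintf(out, "data written by process A\n");
    fclose(out);

    /* Reader side: another process opens the same file and reads the data. */
    char line[128];
    FILE *in = fopen("ipc_shared.txt", "r");
    if (!in) { perror("fopen"); return 1; }
    if (fgets(line, sizeof(line), in))
        printf("read back: %s", line);
    fclose(in);
    return 0;
}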


Why Inter-Process Communication?

The main reasons for providing an environment that allows inter-process communication are information sharing, computation speedup, modularity, and convenience.
  • Information Sharing:
When concurrent processes execute in a system, they commonly need to share information/data to proceed with their processing. Without such communication among the processes, the objectives of multiprogramming and multitasking cannot be achieved.
  • Computational Speedup:
Providing an environment in which inter-process communication is possible increases the overall speed of the system: IPC makes it possible for cooperating processes to run concurrently, thus increasing the computational speed of the system.
  • Modularity:
Modularity means breaking software down into simpler processes for ease of handling and to reduce complexity. These simpler modules must share information to reach the objective of the main process.
  • Convenience:
Inter-process communication provides convenience to the users of a system, as they can pass required information/data from one application to another.

Shared Memory Model

In the shared-memory model, the co-operating processes share a region of memory for the exchange of information. Some operating systems use a supervisor call to create the shared memory space, while others use the file system to create a RAM disk, which is a virtual disk created in main memory. The shared files are stored in the RAM disk so that they can be shared in memory. The processes can share any type of information by writing and reading data in the shared memory region or RAM disk.
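
As a rough illustration, the sketch below uses the POSIX shared-memory calls shm_open, ftruncate, and mmap. The object name "/ipc_demo" and the message text are invented for the example, and the write and the read are shown in one program for brevity; in practice two separate processes would map the same object. (On older Linux systems, link with -lrt.)

/* Sketch of the shared-memory model with POSIX shared memory. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/ipc_demo"   /* illustrative shared-memory object name */
#define SHM_SIZE 4096

int main(void)
{
    /* Create the shared-memory object and size it. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) == -1) { perror("ftruncate"); return 1; }

    /* Map the object into this process's address space. */
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* One process writes into the shared region ... */
    strcpy(region, "hello from the writer process");

    /* ... and any other process that maps the same object can read it. */
    printf("shared region contains: %s\n", region);

    munmap(region, SHM_SIZE);
    close(fd);
    shm_unlink(SHM_NAME);      /* remove the object when finished */
    return 0;
}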

Properties of Link in Indirect Communication

A communication link in indirect communication has the following properties:
  • A link is established between a pair of processes only if both members of the pair share a mailbox.
  • A link may be associated with more than two processes in a system.
  • Between each pair of communicating processes, there may be several different communication links, with each link corresponding to one mailbox.

Problems in Indirect Communication

Indirect communication may create problems for co-operating processes. Suppose four processes P1, P2, P3, and P4 share a mailbox MB. Process P1 sends a message to MB, while P2, P3, and P4 each execute a receive on MB. Which process will receive the message sent by P1? (A code sketch of such a shared mailbox is given after the list of possible solutions below.)


The above problem may be solved by one of the following methods.
  • Allow a link to be associated with at most two processes in a system.
  • Allow only one process at a time to receive the message.
  • Allow the system to select arbitrarily which process will receive the message. The system may also define an algorithm for selecting which process will receive the message.
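
A shared mailbox of this kind can be modelled, for example, with a POSIX message queue. The sketch below is only illustrative: the queue name "/mb_demo" and the message text are invented, and the send and the receive are shown in one program for brevity, whereas in practice P1 would send and one of the other processes would receive. (Link with -lrt on Linux.)

/* Sketch of indirect communication through a shared mailbox,
   modelled here with a POSIX message queue. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

#define MB_NAME "/mb_demo"     /* illustrative mailbox name */

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 128, .mq_curmsgs = 0 };

    /* Create (or open) the mailbox shared by the co-operating processes. */
    mqd_t mb = mq_open(MB_NAME, O_CREAT | O_RDWR, 0666, &attr);
    if (mb == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* P1 deposits a message into the mailbox ... */
    const char *msg = "message from P1";
    if (mq_send(mb, msg, strlen(msg) + 1, 0) == -1) perror("mq_send");

    /* ... and whichever co-operating process calls mq_receive first
       takes it out; only one receiver gets the message. */
    char buf[128];
    ssize_t n = mq_receive(mb, buf, sizeof(buf), NULL);
    if (n >= 0) printf("received: %s\n", buf);

    mq_close(mb);
    mq_unlink(MB_NAME);
    return 0;
}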

Buffering

A buffer is a temporary memory area that stores data while the data is being communicated between devices, applications, processes, or a combination of these. In other words, a buffer is just like a queue of messages.
Basically, buffering can be implemented in three ways (a bounded-buffer sketch follows the list):
  1. Zero Capacity
  2. Bounded Capacity
  3. Unbounded Capacity
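
The sketch below illustrates the bounded-capacity case with a fixed-size circular queue of messages. Zero capacity would correspond to having no queue at all (the sender must wait until the receiver takes the message), while unbounded capacity corresponds to a queue that never reports being full. The capacity of 8 and the function names are chosen only for the example.

/* Sketch of a bounded-capacity buffer: a fixed-size circular queue. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CAPACITY 8          /* bounded capacity chosen for the example */
#define MSG_LEN  64

static char queue[CAPACITY][MSG_LEN];
static int head = 0, tail = 0, count = 0;

/* Producer side: returns false when the buffer is full (sender must wait). */
bool buffer_put(const char *msg)
{
    if (count == CAPACITY) return false;
    strncpy(queue[tail], msg, MSG_LEN - 1);
    tail = (tail + 1) % CAPACITY;
    count++;
    return true;
}

/* Consumer side: returns false when the buffer is empty (receiver must wait). */
bool buffer_get(char *out)
{
    if (count == 0) return false;
    strncpy(out, queue[head], MSG_LEN);
    head = (head + 1) % CAPACITY;
    count--;
    return true;
}

int main(void)
{
    char msg[MSG_LEN];
    buffer_put("first message");
    buffer_put("second message");
    while (buffer_get(msg))
        printf("consumed: %s\n", msg);
    return 0;
}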

Threads

A thread is a part of a process that can be scheduled and managed independently by the operating system. A process can consist of several threads, each of which executes separately. Therefore, a thread is sometimes said to be a lightweight process.
Each thread has its own program counter, CPU registers, and stack. Within a process, threads are typically used to divide a task into several subtasks that can execute independently.

For example:

The tasks in a web browser can be divided into multiple threads (see the sketch after this list), such as:
  • A thread to download the images.
  • A thread to download the text.
  • A thread to display the downloaded images and text.
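
As a rough sketch of this idea, the program below creates one POSIX thread (pthread) per sub-task. The function names such as download_images are invented for the illustration, and the downloads are only simulated with printed messages. Compile with -pthread.

/* Sketch of dividing a task into sub-tasks, each running in its own thread. */
#include <pthread.h>
#include <stdio.h>

void *download_images(void *arg) { (void)arg; printf("downloading images...\n"); return NULL; }
void *download_text(void *arg)   { (void)arg; printf("downloading text...\n");   return NULL; }
void *display_page(void *arg)    { (void)arg; printf("displaying images and text...\n"); return NULL; }

int main(void)
{
    pthread_t t1, t2, t3;

    /* Each sub-task runs as a separate thread of the same process. */
    pthread_create(&t1, NULL, download_images, NULL);
    pthread_create(&t2, NULL, download_text, NULL);
    pthread_create(&t3, NULL, display_page, NULL);

    /* Wait for all sub-tasks to finish. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return 0;
}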

Classification of Operating Systems:

Based on process execution, operating systems can be classified as:
  1. Single-process single-threaded
  2. Single-process multiple-threaded
  3. Multi-process single-threaded
  4. Multi-process multi-threaded

User-Level and Kernel-Level Threads

In an operating system, support for threads may be provided either at the user level or at the kernel level. Therefore, threads may be divided into two categories:
  1. User-Level Threads
  2. Kernel-Level Threads

User-Level Threads

With user-level threads, all thread-management work is done by the application, without kernel support. Any application can be programmed to be multi-threaded by using a thread library, which provides a collection of routines for user-level thread management; a minimal sketch of how such switching works in user space is given after the list below.

For example:

  • Creating and destroying threads
  • Passing data between threads
  • Scheduling threads
  • Saving and restoring the states of threads
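
As a rough sketch of what such a library does internally, the program below switches between two user-level contexts using the <ucontext.h> routines getcontext, makecontext, and swapcontext. The creation, scheduling, and saving/restoring of state all happen in user space, without the kernel creating any extra thread. The worker function and the stack size are chosen only for the example.

/* Sketch of user-level thread management with <ucontext.h>. */
#include <stdio.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[STACK_SIZE];

/* A user-level "thread": prints, yields back to main, then finishes. */
static void worker(void)
{
    printf("worker: running in user space\n");
    swapcontext(&worker_ctx, &main_ctx);   /* save own state, restore main */
    printf("worker: resumed and finishing\n");
}

int main(void)
{
    /* Create the worker thread: give it its own stack and entry point. */
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof(worker_stack);
    worker_ctx.uc_link = &main_ctx;        /* where to return when worker ends */
    makecontext(&worker_ctx, worker, 0);

    /* "Schedule" the worker, then resume it, all without kernel support. */
    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: switching back to worker\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: done\n");
    return 0;
}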

Advantages of User-Level Threads:

  • The process does not switch to kernel mode for thread management, which saves execution time. For this reason, a user-level thread implementation can improve the performance of a system.
  • The scheduling of threads is done by the application itself, so the programmer can use a scheduling algorithm of his or her own choice in the application.
  • User-level threads can run on any operating system, since no kernel support is required.

Kernel-Level Threads

With kernel-level threads, all thread-management work is done by the kernel of the operating system; there is no thread-management code in the user space of the process. In this approach, when control is switched from one thread to another within the same process, control must also switch from user mode to kernel mode, which affects the performance of the system.

Multithreading Models:

A multithreading model defines the relationship between user threads and kernel threads. This relationship can be established by using one of the following models:
  • Many-to-One Model
  • One-to-One Model
  • Many-to-Many Model
