Guiding questions and answers for operating systems

Arinda Iradi
We have provided these questions and answers to make operating systems easier for students to read about and understand.
Questions
1. Define a process

2. Explain the parts of a process

3. Differentiate an I/O-bound process from a CPU-bound process

4. What needs to be done in process scheduling?

5. What needs to be done in thread scheduling?

6. What is a context switch?

7. Define process scheduling

8. Explain the following terms as used in process scheduling:
Job queue
Ready queue
Device queue

9. Using the knowledge of scheduling, explain what happens in:
the short-term scheduler (CPU scheduler)
the long-term scheduler (job scheduler)
the medium-term scheduler

10. Define buffering

11. Explain how process communication is achieved in:
Direct communication
Indirect communication

12. List the advantages and disadvantages of direct communication

13. List the advantages and disadvantages of indirect communication

ANSWERS
  
NO: 1
A process is an instance of a program that is being executed.
NO: 2
 Start: this is the initial state when a process is first created.
Ready: in this state, the process is waiting to be assigned a processor. Ready processes are waiting for the operating system to assign them a processor so they can run. The process may enter this state after starting or while running if the scheduler interrupts it to assign the CPU to another process.
Running: when the OS scheduler assigns a processor to a process, the process state is set to running, and the processor executes the process instructions.
Waiting: if a process needs to wait for a resource, such as waiting for user input or for a file to become available, it enters the waiting state.
Terminated or exit: once a process has completed its execution or has been terminated by the operating system, it is moved to the terminated state, where it waits for removal from main memory.
Illustration: process state transitions


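As a small sketch of these ideas (assuming a POSIX system such as Linux), the C program below uses fork() to create a new process; the child is a separate running instance of the same program, and the parent waits for it to reach the terminated state.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* Child process: a separate running instance of this program. */
        printf("child %d is running\n", (int)getpid());
        _exit(0);                    /* child moves to the terminated state */
    }
    /* Parent process: waits until the child terminates. */
    waitpid(pid, NULL, 0);
    printf("parent %d observed child %d terminate\n", (int)getpid(), (int)pid);
    return 0;
}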
No: 3
An I/O-bound process spends most of its time waiting for input/output operations to complete, such as reading from or writing to disk, network communication, or user input. These processes benefit from efficient I/O scheduling and asynchronous I/O operations to minimize idle time and improve overall system performance.
A CPU-bound process, by contrast, primarily requires computational resources and spends most of its time executing CPU-intensive tasks such as mathematical calculations, data processing, or complex computations. These processes benefit from efficient CPU scheduling algorithms that ensure fair allocation of CPU time among competing processes and maximize overall throughput.
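A minimal sketch in C contrasting the two kinds of work; the loop bound and the file name /etc/hostname are arbitrary choices for illustration.

#include <stdio.h>

/* CPU-bound work: the time is spent executing instructions. */
static double cpu_bound(void) {
    double sum = 0.0;
    for (long i = 1; i < 50000000; i++)
        sum += 1.0 / (double)i;      /* pure computation, no waiting */
    return sum;
}

/* I/O-bound work: most of the time is spent waiting for the device. */
static void io_bound(void) {
    char buf[256];
    FILE *f = fopen("/etc/hostname", "r");   /* example file, assumed to exist */
    if (f == NULL)
        return;
    while (fgets(buf, sizeof buf, f) != NULL)
        ;                            /* the process blocks while data arrives */
    fclose(f);
}

int main(void) {
    printf("cpu-bound result: %f\n", cpu_bound());
    io_bound();
    return 0;
}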
NO: 4
In process scheduling, tasks include managing process arrivals, dispatching processes to the CPU, preempting processes when necessary, selecting appropriate scheduling algorithms, synchronizing access to shared resources, allocating system resources efficiently, handling deadlocks, performing context switching, ensuring fairness, and monitoring system performance for optimization. These tasks collectively facilitate the smooth execution of processes and the effective utilization of system resources.
NO: 5
In thread scheduling, tasks involve managing thread arrivals, dispatching threads to available CPU cores, preempting threads based on priority or time-slice expiration, selecting suitable scheduling algorithms such as priority scheduling, synchronizing thread execution and access to shared resources using synchronization primitives, and allocating CPU time effectively among threads. These tasks collectively enable efficient thread execution and utilization of CPU resources in multi-threaded environments.
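As a small sketch (assuming POSIX threads; compile with -pthread), two threads are created and the operating system's thread scheduler decides when each one runs on a CPU core.

#include <pthread.h>
#include <stdio.h>

/* Each thread runs this function; when it runs is decided by the scheduler. */
static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("%s: iteration %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)"thread-1");
    pthread_create(&t2, NULL, worker, (void *)"thread-2");
    /* The interleaving of the two outputs depends on the thread scheduler. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}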
NO: 6
   A context switch is the process of saving the current state of a process or thread and loading the state of another process or thread for execution.
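Context switches happen inside the kernel, but on Linux a process can observe how many it has undergone through getrusage(); the sketch below assumes a Linux system, where struct rusage exposes these counters.

#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    struct rusage ru;

    sleep(1);   /* sleeping blocks the process, so the kernel switches away */

    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* ru_nvcsw: voluntary switches (process blocked, e.g. on I/O or sleep).
           ru_nivcsw: involuntary switches (scheduler preempted the process). */
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    }
    return 0;
}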
NO: 7
Process scheduling refers to the mechanism by which the operating system selects processes and allocates CPU time to them in a computer system. It involves tasks like deciding which process to execute next, managing process arrivals and departures, and optimizing resource utilization.
NO: 8 
A job queue is a list of all processes in the system, including those awaiting execution, suspended, or in other states. It represents the pool of processes available to enter the system.
The ready queue is a list of processes that are prepared to execute and are waiting for CPU time. Processes in the ready queue are in a state where they can run immediately if the scheduler allocates CPU resources to them.
A device queue is a queue of processes waiting for access to an I/O device. Processes in the device queue are waiting for I/O operations to complete before they can continue execution. These queues help manage access to I/O devices and ensure orderly processing of I/O requests.

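A minimal sketch of a ready queue as a FIFO linked list of process control blocks; the struct fields here are simplified assumptions for illustration, not a real kernel layout.

#include <stdio.h>
#include <stdlib.h>

/* Simplified process control block: just a PID for illustration. */
struct pcb {
    int pid;
    struct pcb *next;
};

/* FIFO ready queue: processes wait here until the CPU scheduler picks them. */
struct ready_queue {
    struct pcb *head, *tail;
};

static void enqueue(struct ready_queue *q, int pid) {
    struct pcb *p = malloc(sizeof *p);
    p->pid = pid;
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static int dequeue(struct ready_queue *q) {   /* returns -1 when empty */
    if (!q->head) return -1;
    struct pcb *p = q->head;
    int pid = p->pid;
    q->head = p->next;
    if (!q->head) q->tail = NULL;
    free(p);
    return pid;
}

int main(void) {
    struct ready_queue q = { NULL, NULL };
    enqueue(&q, 101);
    enqueue(&q, 102);
    printf("dispatch pid %d\n", dequeue(&q));  /* 101 runs first (FIFO order) */
    printf("dispatch pid %d\n", dequeue(&q));
    return 0;
}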
NO: 9 
The short-term scheduler (CPU scheduler) selects which process from the ready queue will be executed next and allocates CPU time to it. It makes decisions on a very short time scale, typically milliseconds or microseconds. The goal is to optimize CPU utilization, minimize response time, and maximize system throughput. Context switching and preemption are common operations performed by the short-term scheduler.
The long-term scheduler (job scheduler) determines which processes from the job queue are admitted into the system and loaded into memory for execution. It operates on a much longer time scale than the short-term scheduler, potentially ranging from seconds to minutes or even hours. Its goal is to maintain system stability and avoid overloading the system with too many processes, thus ensuring efficient resource utilization.
The medium-term scheduler is responsible for swapping processes between main memory and secondary storage when necessary. It operates on a medium time scale, typically ranging from seconds to minutes. The medium-term scheduler helps to relieve memory pressure by moving less frequently used or idle processes out of memory and bringing them back into memory when needed.

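As a rough sketch of what a short-term scheduler does, the simulation below applies round-robin scheduling to three ready processes; the time quantum of 2 units and the CPU burst lengths are made-up example values.

#include <stdio.h>

#define QUANTUM 2   /* assumed time slice, in arbitrary time units */

int main(void) {
    int remaining[] = { 5, 3, 4 };        /* example CPU bursts for P0, P1, P2 */
    int n = 3, finished = 0, clock = 0;

    /* The short-term scheduler repeatedly picks the next ready process
       and lets it run for at most one quantum before preempting it. */
    while (finished < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;           /* already terminated */
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%d: dispatch P%d for %d unit(s)\n", clock, i, run);
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("t=%d: P%d terminates\n", clock, i);
                finished++;
            }
        }
    }
    return 0;
}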
NO: 10
Buffering refers to the technique of temporarily storing data in a memory buffer while it is being transferred between different components or processes. It helps to manage the flow of data between devices or processes.

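A small sketch of buffering (assuming the POSIX read/write calls): data is copied from standard input to standard output through a fixed-size memory buffer rather than one byte at a time.

#include <unistd.h>

#define BUF_SIZE 4096   /* assumed buffer size; any reasonable size works */

int main(void) {
    char buf[BUF_SIZE];
    ssize_t n;

    /* Each read() fills the buffer; each write() drains it, smoothing the
       flow of data between the input and output devices. */
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        if (write(STDOUT_FILENO, buf, (size_t)n) < 0)
            break;
    }
    return 0;
}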
NO: 11
In direct communication, processes communicate with each other explicitly, typically by using shared memory or message-passing mechanisms. Shared memory allows processes to read from and write to a common area of memory, enabling direct exchange of data.
Message passing involves sending messages directly between processes through system calls or specialized communication APIs provided by the operating system. Processes can send messages to specific recipients or broadcast messages to multiple processes.
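As a sketch of direct communication through shared memory (assuming Linux/POSIX mmap with MAP_ANONYMOUS), a parent and child share one region of memory and exchange data through it.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* One page of memory shared between parent and child. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child writes directly into the common memory area. */
        strcpy(shared, "hello from the child");
        _exit(0);
    }
    waitpid(pid, NULL, 0);               /* wait so the data is ready */
    printf("parent read: %s\n", shared); /* parent reads the same memory */
    munmap(shared, 4096);
    return 0;
}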

In indirect communication, processes communicate with each other through intermediary entities known as communication channels or mailboxes.
These act as buffers or queues where processes deposit or retrieve messages. Processes communicate indirectly by sending messages to or receiving messages from these channels. Each channel is typically associated with a unique identifier, allowing processes to send messages to a specific channel without needing to know the identities of the receiving processes.
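A sketch of indirect communication using a POSIX message queue as the mailbox (assuming Linux; link with -lrt); the queue name /demo_mailbox and the message sizes are example values.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    /* The mailbox is identified by a name, not by a process identity. */
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) return 1;

    if (fork() == 0) {
        /* Sender deposits a message into the mailbox. */
        const char *msg = "hello via the mailbox";
        mq_send(mq, msg, strlen(msg) + 1, 0);
        _exit(0);
    }

    /* Receiver retrieves the message from the same mailbox. */
    char buf[128];
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_mailbox");
    return 0;
}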

NO: 12
Advantages
Efficiency
Low latency
Simplicity
Disadvantages
Tight coupling
Synchronization
Low scalability

NO: 13
    Advantages
Decoupling
Flexibility
Reliability
Disadvantages
Complexity
High latency
Higher management overhead