Preemptive Scheduling Algorithm in OS with Examples and Types!!

In preemptive scheduling, the operating system can interrupt a running process and allocate the CPU to another process with a higher priority. This helps ensure that important tasks get completed quickly and efficiently. In this article, we are going to cover the preemptive scheduling algorithm in operating systems, along with preemptive scheduling examples and its types. We have made sure that by the end of this post, you will have a clear understanding of preemptive scheduling without any hindrance.

What is Preemptive Scheduling Algorithm?

Preemptive scheduling is a CPU scheduling technique used in computer operating systems to ensure that the highest priority process gets access to the CPU (Central Processing Unit) as soon as it becomes available. In this scheduling, the operating system can interrupt a running process to allocate the CPU to a higher-priority process that needs it. This is in contrast to non-preemptive scheduling, where once a process has control of the CPU, it will continue to execute until it voluntarily releases the CPU or until it has completed its task.


Preemptive scheduling allows every process to be assigned a priority level, which determines its importance relative to other processes. The operating system will allocate the CPU to the process with the highest priority level that is currently ready to run. If a higher-priority process becomes ready to run while a lower-priority process is currently running, the operating system will preempt the lower-priority process and allocate the CPU to the higher-priority process.

Preemptive Scheduling Tutorial Headlines:

In this section, we list all the headlines covered in this article; you can jump to any of them as you like:

  1. What is Preemptive Scheduling Algorithm?
  2. Different Types of Preemptive Scheduling
  3. Preemptive Scheduling Example
  4. Real-Life Example of Preemptive Scheduling
  5. Advantages of Preemptive Scheduling
  6. Disadvantages of Preemptive Scheduling
  7. Characteristics of Preemptive Scheduling

Let’s Get Started!!

Different Types of Preemptive Scheduling

There are several types of preemptive scheduling algorithms, including:

Round-Robin Scheduling: Round-robin scheduling is a CPU scheduling algorithm that assigns an equal amount of time (known as a time slice or quantum) to each process in a queue or ready queue. The algorithm works by maintaining a circular queue of processes, where each process is given a fixed time slice to execute. Once a process has finished its time slice, it is put at the end of the queue and the next process is given a turn.

The round-robin scheduling algorithm is useful in situations where processes have similar priorities and there is no need to give priority to any particular process. It is also effective in situations where the length of each process is not known in advance, as it can prevent any one process from monopolizing the CPU for an extended period of time.
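To make the time-slice idea concrete, here is a minimal round-robin sketch in Python. The process names, burst times, and the quantum value are invented for illustration; a real scheduler works on process control blocks rather than simple tuples.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.
    processes: list of (name, burst_time) tuples, all assumed ready at time 0.
    Returns the sequence of CPU slices handed out."""
    queue = deque(processes)           # circular ready queue
    timeline = []
    time = 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run one quantum, or less if the process finishes
        timeline.append((time, name, run))
        time += run
        remaining -= run
        if remaining > 0:              # unfinished process goes to the back of the queue
            queue.append((name, remaining))
    return timeline

# Example: three processes and a quantum of 2 time units (illustrative values)
for start, name, run in round_robin([("P1", 5), ("P2", 3), ("P3", 4)], quantum=2):
    print(f"t={start}: {name} runs for {run}")
```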

Priority Scheduling: The priority scheduling algorithm assigns a priority value to each process and allocates the CPU based on that priority. Processes with higher priority are given the CPU before those with lower priority.

In priority scheduling, each process is assigned a priority value, typically an integer value. The priority values can be static or dynamic, depending on the implementation. Static priorities are assigned at the time of process creation and do not change during the lifetime of the process. Dynamic priorities, on the other hand, can change during the execution of the process based on its behavior or other factors.

The priority scheduling algorithm works by selecting the process with the highest priority from the ready queue and allocating CPU time to it. If two or more processes have the same priority, then round-robin scheduling is used to allocate CPU time among them.
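At its core, the preemptive priority decision is simply "re-select the most important ready process whenever a process arrives, blocks, or finishes." Below is a minimal sketch of that selection step, assuming (as in the worked example later in this article) that a lower number means a higher priority; the process names and values are made up.

```python
def pick_next(ready):
    """Select the process to run from the ready queue.
    ready: list of dicts with 'name', 'priority', and 'arrival' keys.
    Lower 'priority' value = more important. Ties here fall back to arrival
    order; a real scheduler might round-robin among equal priorities instead."""
    return min(ready, key=lambda p: (p["priority"], p["arrival"]))

ready = [
    {"name": "P1", "priority": 3, "arrival": 0},
    {"name": "P2", "priority": 1, "arrival": 1},
    {"name": "P3", "priority": 2, "arrival": 2},
]
print(pick_next(ready)["name"])   # -> P2
```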

Shortest Job First (SJF) Scheduling: SJF scheduling is a CPU scheduling algorithm that selects the process with the shortest expected processing time to be executed next. The expected processing time can be estimated based on past performance or other factors such as the size of the process or the number of instructions it contains.

In SJF scheduling, each process is assigned an estimated processing time, either at the time of process creation or based on historical data. When a new process enters the ready queue, the scheduler selects the process with the shortest estimated processing time to be executed next. If multiple processes have the same estimated processing time, then priority scheduling or round-robin scheduling can be used to break the tie.
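The hard part of SJF in practice is estimating the next burst. A common textbook approach is exponential averaging of past bursts, sketched below; this is a general technique rather than something specific to this article, and the alpha value and burst figures are illustrative only.

```python
def predict_next_burst(prev_estimate, actual_burst, alpha=0.5):
    """Exponential averaging: new estimate = alpha * last actual burst
    + (1 - alpha) * previous estimate. alpha controls how quickly the
    prediction reacts to recent behaviour."""
    return alpha * actual_burst + (1 - alpha) * prev_estimate

estimate = 10.0                      # initial guess (arbitrary)
for actual in [6, 4, 6, 13]:         # observed CPU bursts (illustrative)
    estimate = predict_next_burst(estimate, actual)
    print(f"after burst {actual}: next estimate = {estimate:.2f}")
```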

Multi-Level Feedback Queue (MLFQ) Scheduling: In this type of scheduling, the system uses multiple queues with different priority levels to manage the execution of processes. Each queue has a different time quantum or priority level, and processes move between queues based on their execution history and behavior.

In MLFQ scheduling, processes initially enter the highest priority queue. If a process uses up its entire time quantum without blocking or terminating, it is moved to a lower priority queue. Conversely, if a process blocks (for example, waiting on I/O) before its time quantum expires, it stays at the same level or may be promoted to a higher priority queue. This way, long-running CPU-bound processes gradually drift to lower priority, while short, interactive processes keep higher priority.

Each queue can use a different scheduling algorithm, such as round-robin or priority scheduling. Additionally, MLFQ scheduling can be either preemptive or non-preemptive, depending on the implementation.
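A highly simplified MLFQ sketch is shown below, assuming three queues and the demotion rule described above (using the full quantum demotes a process; blocking early keeps it at its level). The number of queues, the quanta, and the example processes are all invented for illustration.

```python
from collections import deque

# Three ready queues: index 0 is the highest priority; lower levels get longer quanta.
QUANTA = [2, 4, 8]
queues = [deque(), deque(), deque()]

def requeue(proc, level, used_full_quantum):
    """Place a process back into the MLFQ after it has run at `level`.
    A process that burned its whole quantum (CPU-bound) drops one level;
    a process that blocked for I/O before the quantum expired stays put."""
    if used_full_quantum:
        level = min(level + 1, len(queues) - 1)
    queues[level].append(proc)
    return level

# Illustrative use: "A" is CPU-bound and gets demoted, "B" blocks early and stays on top.
print(requeue("A", 0, used_full_quantum=True))    # -> 1
print(requeue("B", 0, used_full_quantum=False))   # -> 0
```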

Guaranteed Scheduling: In this type of scheduling, the operating system guarantees a minimum amount of CPU time to each process, regardless of other processes in the system. This ensures that no process is starved of CPU time.

Real-Time Scheduling: In real-time scheduling, the operating system guarantees that processes will be executed within a specific timeframe or deadline. Real-time scheduling is commonly used in embedded systems and other applications where timing is critical.

Earliest Deadline First (EDF) Scheduling: In this type of scheduling, processes are prioritized based on their deadline. The process with the earliest deadline is given the CPU first. This ensures that processes with tighter deadlines are executed first, which helps to meet the deadlines.

Rate Monotonic Scheduling (RMS): In RMS, the priority of the process is assigned based on its periodicity. The process with the shortest period is assigned the highest priority. This type of scheduling is useful in real-time systems where the response time is critical.

Deadline Monotonic Scheduling (DMS): DMS is similar to RMS, but the priority of the process is assigned based on its deadline. The process with the earliest deadline is assigned the highest priority.
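Here is a brief sketch contrasting the two real-time policies just described: EDF picks dynamically by the nearest deadline, while RMS fixes priorities by period. The utilization bound n(2^(1/n) − 1) used for RMS is the classic Liu and Layland sufficient test from the real-time scheduling literature, not something specific to this article; the task numbers are made up.

```python
def edf_pick(tasks):
    """Earliest Deadline First: run the ready task whose deadline is nearest."""
    return min(tasks, key=lambda t: t["deadline"])

def rms_schedulable(utilizations):
    """Rate Monotonic: classic Liu & Layland sufficient test.
    A set of n periodic tasks is schedulable under RMS if total CPU
    utilization does not exceed n * (2**(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1 / n) - 1)

tasks = [{"name": "T1", "deadline": 12}, {"name": "T2", "deadline": 7}]
print(edf_pick(tasks)["name"])               # -> T2 (nearest deadline)
print(rms_schedulable([0.25, 0.30, 0.20]))   # 0.75 <= ~0.78 -> True
```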

Preemptive Scheduling Example

Here’s an example of preemptive scheduling:

Also Read: CPU Scheduling Algorithms in OS | Types of CPU Scheduling

Suppose we have three processes: P1, P2, and P3, with the following information:

Process P1: Arrival Time = 0, Burst Time = 5, Priority = 3

Process P2: Arrival Time = 1, Burst Time = 3, Priority = 1

Process P3: Arrival Time = 2, Burst Time = 4, Priority = 2

To apply preemptive scheduling, we need to know the priority of each process and the amount of time required to complete its execution. Here we will use preemptive priority scheduling, where a lower priority number means a higher priority, and a newly arrived process with a higher priority preempts the currently running process.

Here’s how the scheduling algorithm would work:

  1. At time 0, P1 arrives and starts executing because it is the only process in the system.
  2. At time 1, P2 arrives with a higher priority (1) than P1 (3). The operating system interrupts P1 and switches to P2.
  3. At time 2, P3 arrives with priority 2, which is lower than P2’s priority 1, so P2 keeps running.
  4. At time 4, P2 completes its execution. Of the remaining processes, P3 (priority 2) outranks P1 (priority 3), so the operating system switches to P3.
  5. At time 8, P3 completes its execution, and P1 resumes.
  6. At time 12, P1 completes its execution (it had 4 time units left after being preempted at time 1).

The completion order is therefore P2, P3, and finally P1.

In this example, preemptive scheduling ensures that processes with higher priority are executed first, even if they arrive later than processes with lower priority. If the goal is instead to minimize the average waiting time and turnaround time, the operating system could use the shortest remaining time first (SRTF) algorithm, which always runs the process with the least burst time remaining.
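If you want to check the timeline yourself, here is a small simulation of preemptive priority scheduling (lower number = higher priority) using the same three processes. It is an illustrative, tick-by-tick sketch, not production scheduler code.

```python
def preemptive_priority(processes):
    """Tick-by-tick simulation of preemptive priority scheduling.
    processes: list of dicts with name, arrival, burst, priority
    (lower priority number = more important). Returns completion times."""
    remaining = {p["name"]: p["burst"] for p in processes}
    done = {}
    time = 0
    while len(done) < len(processes):
        ready = [p for p in processes
                 if p["arrival"] <= time and remaining[p["name"]] > 0]
        if not ready:                      # CPU idle until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda p: p["priority"])
        remaining[current["name"]] -= 1    # run the chosen process for one time unit
        time += 1
        if remaining[current["name"]] == 0:
            done[current["name"]] = time
    return done

procs = [
    {"name": "P1", "arrival": 0, "burst": 5, "priority": 3},
    {"name": "P2", "arrival": 1, "burst": 3, "priority": 1},
    {"name": "P3", "arrival": 2, "burst": 4, "priority": 2},
]
print(preemptive_priority(procs))   # -> {'P2': 4, 'P3': 8, 'P1': 12}
```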

Real-Life Example of Preemptive Scheduling

One real-life example of preemptive scheduling is the scheduling of tasks in an airline reservation system.

Also Read: Longest Job First (LJF) Scheduling with Examples & Programs!!

In an airline reservation system, there are multiple tasks that need to be performed simultaneously, such as checking seat availability, booking tickets, canceling reservations, and generating reports. These tasks have different priorities, with booking tickets being the highest priority and generating reports being the lowest priority.

Suppose a customer wants to book a ticket, and there is only one seat available. The booking task has the highest priority, so the system will interrupt any lower-priority task that is currently running, such as generating a report, to immediately process the booking request. This ensures that the customer’s request is processed promptly and that the ticket is booked before someone else can take the last available seat.

Without preemptive scheduling, the system might continue running the current task until it completes, and the customer might miss out on the last available seat. Preemptive scheduling ensures that the highest-priority task is always given priority, even if it means interrupting a lower-priority task.

Advantages of Preemptive Scheduling

Here are some advantages of preemptive scheduling:

Also Read: Longest Remaining Time First (LRTF) Scheduling with Examples & Programs!!

Fairness: Preemptive scheduling ensures that each task gets a fair share of the CPU time. Higher priority tasks get to execute first, but the lower priority tasks still get their chance to run.

Responsiveness: Preemptive scheduling allows the system to respond quickly to high-priority tasks. If a high-priority task needs to be executed immediately, it can interrupt the currently running lower priority task and take over the CPU.

Efficiency: Preemptive scheduling can improve system efficiency by ensuring that tasks are executed in a timely manner. It can also help to avoid wasted CPU time by interrupting tasks that are waiting for I/O operations or other events.

Real-Time Systems: Preemptive scheduling is particularly useful in real-time systems where tasks need to be executed within specific time constraints. By allowing high-priority tasks to interrupt lower priority tasks, preemptive scheduling can help to ensure that deadlines are met.

Flexibility: Preemptive scheduling is more flexible than non-preemptive scheduling, as it allows tasks to be interrupted and rescheduled as needed. This can be particularly useful in systems with varying workloads or unpredictable events.

Priority-Based Scheduling: Preemptive scheduling is often used in priority-based scheduling algorithms. In such algorithms, tasks with higher priorities are given more CPU time than tasks with lower priorities. Preemptive scheduling ensures that the higher priority tasks get to execute first, which can be important in systems where certain tasks are more important than others.

Resource Utilization: Preemptive scheduling can help to improve resource utilization by ensuring that tasks that are waiting for I/O operations or other events are not hogging the CPU. By allowing other tasks to execute while one task is waiting, preemptive scheduling can help to ensure that resources are used more efficiently.

Multitasking: Preemptive scheduling is essential for multitasking systems, where multiple tasks need to run simultaneously. By allowing tasks to be interrupted and rescheduled as needed, preemptive scheduling allows multiple tasks to be executed in parallel.

Dynamic Workload: Preemptive scheduling is better suited to dynamic workloads, where the workload changes over time. In such systems, non-preemptive scheduling can result in long wait times for low-priority tasks, whereas preemptive scheduling ensures that all tasks are executed in a timely manner.

Reduced Response Time: Preemptive scheduling can help to reduce response time in interactive systems, where the system needs to respond quickly to user input. By allowing high-priority tasks to interrupt lower priority tasks, preemptive scheduling ensures that user input is processed quickly and efficiently.

Disadvantages of Preemptive Scheduling

While preemptive scheduling has its advantages, there are also several disadvantages, including:

Also Read: Highest Response Ratio Next (HRRN) Scheduling with Examples & Programs!!

Overhead: Preemptive scheduling requires the operating system to spend extra resources monitoring and interrupting processes. This overhead can be significant, especially when the system is running many processes at once.

Complexity: Preemptive scheduling can be more complex than non-preemptive scheduling because the operating system needs to constantly monitor and prioritize processes. This complexity can make it more difficult to design and debug the operating system.

Race Conditions: Preemptive scheduling can create race conditions when multiple processes are competing for resources. For example, if two processes are writing to the same file simultaneously, one process may overwrite the other’s changes if it is preempted at an inopportune time.

Starvation: Preemptive scheduling can cause some processes to be starved of CPU time if the system is heavily loaded. In this case, some processes may never get a chance to run because they keep getting preempted by other processes.

Context Switching: Preemptive scheduling requires frequent context switching, which can be costly in terms of CPU time and cache misses. Context switching refers to the process of saving the current state of a process and loading the state of another process.

Priority Inversion: Preemptive scheduling can also cause priority inversion, which occurs when a low-priority process holds a resource needed by a high-priority process. In this case, the high-priority process is forced to wait for the low-priority process to release the resource, and medium-priority processes may even run ahead of it in the meantime, effectively inverting the intended priorities.

Increased Response Time Variability: Preemptive scheduling can lead to increased response time variability, which can make it difficult to predict system performance. Because processes can be preempted at any time, the amount of time it takes to complete a task can vary depending on the current system load and the number of other processes running.

Scheduling Overhead: Preemptive scheduling requires additional scheduling overhead compared to non-preemptive scheduling. The operating system needs to maintain a scheduler that continuously monitors the state of the system and makes decisions about which process to run next.

Fairness: Preemptive scheduling may not always be fair to all processes. For example, if a process requires a large amount of CPU time and other processes are constantly preempting it, it may not be able to complete its task in a timely manner.

Synchronization and Deadlocks: Preemptive scheduling can also lead to synchronization and deadlock problems when processes compete for shared resources. For example, if each of two processes holds one resource while waiting for the resource held by the other, they enter a deadlock state, and preempting the CPU alone cannot resolve it.

Characteristics of Preemptive Scheduling

Here are some characteristics of preemptive scheduling:

Also Read: Process Life Cycle in Operating System | Process State in OS

  • In preemptive scheduling, processes are assigned priorities based on their importance, and the CPU is allocated to the process with the highest priority.
  • Preemptive scheduling allows for time-sharing of the CPU among multiple processes. The CPU is allocated to each process for a short period of time, and then switched to another process.
  • This scheduling is interrupt-driven, which means that the operating system uses interrupts to interrupt the currently running process and allocate the CPU to another process.
  • Preemptive scheduling allows for efficient use of resources, as it ensures that the CPU is always allocated to the process that needs it the most.
  • Preemptive scheduling can lead to faster response times for interactive processes, as the operating system can quickly switch the CPU to the process that requires input from the user.
  • Preemptive scheduling can result in additional overhead due to the frequent context switches that occur when the CPU is allocated to different processes.

Overall, preemptive scheduling is a powerful technique for managing resources in computer operating systems. It allows for fast response times to high-priority tasks and ensures that important processes are given priority access to the CPU. However, it also introduces additional complexity and overhead compared to non-preemptive scheduling, and requires careful management of priority levels and resource allocation to avoid issues like priority inversion.

Wrapping Up

We hope that you have now fully understood the preemptive scheduling algorithm in operating systems, along with preemptive scheduling examples and its types. If this article was useful for you, please share it with your friends, family members, or colleagues on social media platforms like Facebook, Instagram, LinkedIn, Twitter, and more.

Also Read: What is Process in OS? Types of Process in Operating System!!

If you have any experience, tips, tricks, or queries regarding this topic, you can drop a comment!

Happy Learning!!
