Preemptive Scheduling Algorithm in OS with Examples and Types!!

Hello Friends! Today, we are going to cover the preemptive scheduling algorithm in operating systems, along with preemptive scheduling examples and its types, with ease. We have made sure that by the end of this post you will fully understand the preemptive scheduling algorithm without any hindrance.

What is Preemptive Scheduling Algorithm?

Preemptive scheduling is a kind of CPU scheduling technique. It is used in computer operating systems to ensure that the highest-priority process gets access to the CPU as soon as the CPU becomes available. In this scheduling, the operating system can interrupt a running process to allocate the CPU to a higher-priority process that needs it. This is in contrast to non-preemptive scheduling, in which, once a process has control of the CPU, it continues to execute until it voluntarily releases the CPU.


Preemptive scheduling assigns every process a priority level, which determines its importance relative to other processes. The operating system allocates the CPU to the process with the highest priority level. If a higher-priority process becomes ready to run while a lower-priority process is currently running, the operating system preempts the lower-priority process and allocates the CPU to the higher-priority process.

Preemptive Scheduling Tutorial Headlines:

In this section, we will show you all the headlines of this entire article. You can jump to any of them as you like; all are shown below:

  1. What is Preemptive Scheduling Algorithm?
  2. Different Types of Preemptive Scheduling
  3. Preemptive Scheduling Example
  4. Real-Life Example of Preemptive Scheduling
  5. Advantages of Preemptive Scheduling
  6. Disadvantages of Preemptive Scheduling
  7. Characteristics of Preemptive Scheduling

Let’s Get Started!!

Different Types of Preemptive Scheduling

There are several types of preemptive scheduling algorithms, including:

Round-Robin Scheduling: Round-robin scheduling assigns an equal amount of CPU time to each process in the ready queue. The algorithm maintains a circular queue of processes, where each process is given a fixed time slice (quantum) to execute. Once a process has used up its time slice, it is placed at the end of the queue.

This algorithm is useful in situations where processes have similar priorities and there is no need to give priority to any particular process. It is also effective in situations where the length of each process is not known in advance, and it can prevent any one process from monopolizing the CPU for an extended period of time.
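Below is a minimal Python sketch of the round-robin idea; it is only an illustration (it assumes all processes are ready at time 0 and uses a made-up quantum of 2 time units), not an operating-system implementation:

from collections import deque

def round_robin(burst_times, quantum):
    """Minimal round-robin sketch: burst_times maps each process name to its
    CPU burst; every process runs for at most `quantum` units per turn."""
    queue = deque(burst_times)                # circular ready queue of process names
    remaining = dict(burst_times)
    time = 0
    finish = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run for one time slice (or less)
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time               # the process is done
        else:
            queue.append(name)                # back to the end of the queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 4}, quantum=2))
# {'P2': 9, 'P3': 11, 'P1': 12}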

Priority Scheduling: The priority scheduling algorithm assigns a priority value to each process and allocates CPU time based on that priority. Processes with higher priority are given the CPU before those with lower priority.

In priority scheduling, each process is assigned a priority value, typically an integer. Priority values can be static or dynamic, depending on the implementation. Static priorities are assigned at the time of process creation and do not change during the lifetime of the process. Dynamic priorities, on the other hand, can change during the execution of the process based on its behavior or other factors.

The priority scheduling algorithm works by selecting the process with the highest priority from the ready queue and allocating CPU time to it. If two or more processes have the same priority, round-robin scheduling can be used to allocate CPU time among them.
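As a rough illustration of the selection rule, the ready queue can be kept as a min-heap keyed by priority (assuming here that a smaller number means a higher priority) with insertion order as the tie-breaker; this is only a sketch, not a full scheduler:

import heapq

ready_queue = []   # min-heap of (priority, insertion order, process name)
order = 0

def add_process(name, priority):
    """Add a process to the ready queue; smaller priority number = higher priority."""
    global order
    heapq.heappush(ready_queue, (priority, order, name))
    order += 1

def pick_next():
    """Pop the highest-priority process; equal priorities come out in FIFO order."""
    priority, _, name = heapq.heappop(ready_queue)
    return name

add_process("P1", 3)
add_process("P2", 1)
add_process("P3", 2)
print(pick_next())   # P2, because priority 1 is the highest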

Shortest Job First (SJF) Scheduling: SJF scheduling selects the process with the shortest expected processing time to execute next. The expected processing time can be estimated based on past performance or other factors, such as the size of the process.

Here, each process is assigned an estimated processing time. When a new process enters the ready queue, the scheduler selects the process with the shortest estimated processing time to execute next. If multiple processes have the same estimated processing time, priority scheduling or round-robin scheduling can be used as a tie-breaker. The preemptive variant of SJF is shortest remaining time first (SRTF), in which a newly arrived process preempts the running process if its burst time is shorter than the running process's remaining time.
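The preemption decision in SRTF boils down to a single comparison. Here is a minimal, illustrative sketch (the numbers are made up for the example):

def should_preempt(running_remaining, new_burst):
    """SRTF rule: a newly arrived process preempts the running one only if its
    burst time is shorter than the running process's remaining time."""
    return new_burst < running_remaining

running_remaining = 4                                    # e.g. the running process has 4 units left
print(should_preempt(running_remaining, new_burst=3))    # True: switch to the new process
print(should_preempt(running_remaining, new_burst=6))    # False: keep the current process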

Multi-level Feedback Queue (MLFQ) Scheduling: MLFQ scheduling places processes in multiple queues with different priority levels. Each queue has a different time quantum or priority level, and processes move between queues based on their execution history and behavior.

In MLFQ scheduling, processes initially enter the highest-priority queue. If a process uses up its entire time quantum without blocking or terminating, it moves to a lower-priority queue. Conversely, if a process blocks for I/O or yields before its quantum expires, it stays at (or is promoted back to) a higher-priority level. This way, long-running, CPU-bound processes are given lower priority, while short, interactive processes are given higher priority.

Each queue can use a different scheduling algorithm, such as round-robin or priority scheduling. Additionally, MLFQ scheduling can be either preemptive or non-preemptive, depending on the implementation.
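The following Python sketch shows the demotion rule under those assumptions; the Process object with its run() and finished() methods is purely hypothetical, so treat this as an outline rather than a working scheduler:

from collections import deque

queues = [deque(), deque(), deque()]   # index 0 = highest-priority queue
quanta = [2, 4, 8]                     # each lower level gets a longer time quantum

def admit(process):
    queues[0].append(process)          # new processes start at the top level

def schedule_one():
    """Run one process from the highest non-empty queue, then re-queue it."""
    for level, queue in enumerate(queues):
        if not queue:
            continue
        process = queue.popleft()
        used = process.run(quanta[level])          # hypothetical: runs up to the quantum
        if process.finished():                     # and reports how much time was used
            return
        if used == quanta[level]:
            lower = min(level + 1, len(queues) - 1)
            queues[lower].append(process)          # used the whole quantum: demote
        else:
            queues[level].append(process)          # blocked early: keep its level
        return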

Guaranteed Scheduling: In this type of scheduling, the operating system guarantees a minimum amount of CPU time to each process, regardless of the other processes in the system. This ensures that no process is starved of CPU time.

Real-Time Scheduling: In real-time scheduling, the operating system guarantees that processes will execute within a specific timeframe or deadline. Real-time scheduling is commonly used in embedded systems and other applications where timing is critical.

Earliest Deadline First (EDF) Scheduling: In this type of scheduling, processes are prioritized based on their deadlines. The process with the earliest deadline is given the CPU first. This ensures that processes with tighter deadlines are executed first, which helps to meet those deadlines.
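The EDF selection rule itself is simple; here is a minimal illustrative sketch (task names and deadlines are invented for the example):

ready = [("T1", 12), ("T2", 7), ("T3", 25)]   # (task name, absolute deadline)

def pick_edf(ready_tasks):
    """EDF rule: among the ready tasks, run the one with the nearest deadline."""
    return min(ready_tasks, key=lambda task: task[1])

print(pick_edf(ready))   # ('T2', 7) runs first because its deadline is earliest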

Rate Monotonic Scheduling (RMS): In RMS, the priority of each process is assigned based on its period. The process with the shortest period is assigned the highest priority. This type of scheduling is useful in real-time systems where the response time is critical.
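In other words, RMS priorities can be fixed offline simply by sorting the task set by period. A small illustrative sketch (task names and periods are made up):

tasks = [("sensor", 10), ("control", 25), ("logger", 100)]   # (task name, period in ms)

# RMS rule: the shorter the period, the higher the (fixed) priority.
rms_order = sorted(tasks, key=lambda task: task[1])
priorities = {name: rank for rank, (name, _) in enumerate(rms_order)}

print(priorities)   # {'sensor': 0, 'control': 1, 'logger': 2}  (0 = highest priority)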

Deadline Monotonic Scheduling (DMS): DMS is similar to RMS, but the priority of each process is assigned based on its relative deadline rather than its period. The process with the shortest deadline is assigned the highest priority.

Preemptive Scheduling Example

Here’s an example of preemptive scheduling:


Suppose we have three processes: P1, P2, and P3, with the following information:

Process P1: Arrival Time = 0, Burst Time = 5, Priority = 3

Process P2: Arrival Time = 1, Burst Time = 3, Priority = 1

Process P3: Arrival Time = 2, Burst Time = 4, Priority = 2

Here, we need to know the priority of each process and the time required to complete its execution. We will use preemptive priority scheduling, where a smaller priority number means a higher priority, so the running process is preempted whenever a process with a higher priority arrives.

Here’s how the scheduling algorithm would work:

  1. At time 0, P1 arrives and starts executing, since it is the only process in the ready queue.
  2. At time 1, P2 arrives with a higher priority (1) than P1 (3). The operating system preempts P1 and switches to P2.
  3. At time 2, P3 arrives, but its priority (2) is lower than that of the running process P2 (1), so P2 continues executing.
  4. At time 4, P2 completes its execution. The ready queue now holds P1 (priority 3) and P3 (priority 2); P3 has the higher priority, so it is scheduled next.
  5. At time 8, P3 completes its execution, and P1 resumes.
  6. At time 12, P1 finishes its remaining 4 units of burst time.

The execution order is P1 → P2 → P3 → P1, and the processes complete in the order P2, P3, P1.

In this example, preemptive scheduling ensures that processes with higher priority execute first, even if they arrive later than processes with lower priority. A different preemptive criterion, such as shortest remaining time first (SRTF), would preempt based on remaining burst time instead, which tends to minimize the average waiting time and turnaround time for all processes.
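To make the trace concrete, here is a minimal Python sketch (an illustration only, not an OS implementation) that simulates preemptive priority scheduling one time unit at a time and reproduces the completion times above:

processes = [
    # (name, arrival time, burst time, priority)  -- smaller number = higher priority
    ("P1", 0, 5, 3),
    ("P2", 1, 3, 1),
    ("P3", 2, 4, 2),
]

remaining = {name: burst for name, _, burst, _ in processes}
finish = {}
time = 0

while remaining:
    # Processes that have arrived and still need CPU time.
    ready = [p for p in processes if p[1] <= time and p[0] in remaining]
    if not ready:
        time += 1                                  # CPU idles until the next arrival
        continue
    name = min(ready, key=lambda p: p[3])[0]       # pick the highest-priority process
    remaining[name] -= 1                           # run it for one time unit
    time += 1
    if remaining[name] == 0:
        finish[name] = time
        del remaining[name]

print(finish)   # {'P2': 4, 'P3': 8, 'P1': 12}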

Real-Life Example of Preemptive Scheduling

One real-life example of preemptive scheduling is the scheduling of tasks in an airline reservation system.


In an airline reservation system, there are multiple tasks that need to be performed simultaneously, such as checking seat availability, booking tickets, canceling reservations, and generating reports. These tasks have different priorities, with booking tickets being the highest priority and generating reports being the lowest.

Suppose a customer wants to book a ticket, and there is only one seat available. The booking task has the highest priority, so the system will interrupt any lower-priority task that is currently running, such as generating a report, to immediately process the booking request. This ensures that the customer's request is processed promptly and that the ticket is booked before someone else can take the last available seat.

Without preemptive scheduling, the system might continue running the current task until it completes, and the customer might miss out on the last available seat. Preemptive scheduling ensures that the highest-priority task is always given priority, even if that means interrupting a lower-priority task.

Advantages of Preemptive Scheduling

Here are some advantages of preemptive scheduling:


Fairness: Preemptive scheduling ensures that each task gets a fair share of the CPU time. Higher priority tasks get to execute first, but the lower priority tasks still get their chance to run.

Responsiveness: Preemptive scheduling allows the system to respond quickly to high-priority tasks. If a high-priority task needs to execute immediately, it can interrupt the currently running lower-priority task and take over the CPU.

Efficiency: Preemptive scheduling can improve system efficiency by ensuring that tasks execute in a timely manner. It can also help to avoid wasted CPU time by switching away from tasks that are waiting for I/O operations or other events.

Real-Time Systems: Preemptive scheduling is particularly useful in real-time systems where tasks need to execute within specific time constraints. By allowing high-priority tasks to interrupt lower-priority tasks, preemptive scheduling can help to ensure that deadlines are met.

Flexibility: Preemptive scheduling is more flexible than non-preemptive scheduling, as it allows tasks to be interrupted and rescheduled as needed. This can be particularly useful in systems with varying workloads or unpredictable events.

Priority-Based Scheduling: Preemptive scheduling is often used in priority-based scheduling algorithms. In such algorithms, tasks with higher priorities are given more CPU time than tasks with lower priorities. Preemptive scheduling ensures that the higher-priority tasks get to execute first, which can be important in systems where certain tasks are more important than others.

Resource Utilization: Preemptive scheduling can help to improve resource utilization by ensuring that tasks that are waiting for I/O operations or other events do not hog the CPU. By allowing other tasks to execute while one task is waiting, preemptive scheduling helps to ensure that resources are used more efficiently.

Multitasking: Preemptive scheduling is essential for multitasking systems, where multiple tasks need to run at the same time. By allowing tasks to be interrupted and rescheduled as needed, preemptive scheduling lets multiple tasks make progress concurrently.

Dynamic Workload: Preemptive scheduling is better suited to dynamic workloads, where the workload changes over time. In such systems, non-preemptive scheduling can result in long wait times for low-priority tasks, whereas preemptive scheduling ensures that all tasks are executed in a timely manner.

Reduced Response Time: Preemptive scheduling can help to reduce response time in interactive systems, where the system needs to respond quickly to user input. By allowing high-priority tasks to interrupt lower-priority tasks, preemptive scheduling ensures that user input is processed quickly and efficiently.

Disadvantages of Preemptive Scheduling

While preemptive scheduling has its advantages, there are also several disadvantages, including:


Overhead: Preemptive scheduling requires the operating system to spend extra resources monitoring and interrupting processes. This overhead can be significant, especially when the system is running many processes at once.

Complexity: Preemptive scheduling can be more complex than non-preemptive scheduling because the operating system needs to constantly monitor and prioritize processes. This complexity can make it more difficult to design and debug the operating system.

Race Conditions: Preemptive scheduling can create race conditions when multiple processes are competing for resources. For example, if two processes are writing to the same file simultaneously, one process may overwrite the other’s changes if it gets preempted at an inopportune time.

Starvation: Preemptive scheduling can cause some processes to be starved of CPU time if the system is heavily loaded. In this case, some processes may never get a chance to run because they keep getting preempted by other processes.

Context Switching: Preemptive scheduling requires frequent context switching, which can be costly in terms of CPU time and cache misses. Context switching refers to the process of saving the current state of a process and loading the state of another process.

Priority Inversion: Preemptive scheduling can also cause priority inversion, which occurs when a low-priority process holds a resource needed by a high-priority process. The high-priority process is forced to wait, and if medium-priority processes preempt the low-priority holder, the high-priority process can be delayed for a long time even though it should have had priority over them.

Increased Response Time Variability: Preemptive scheduling can lead to increased response time variability, which can make it difficult to predict system performance. Because processes can be preempted at any time, the amount of time it takes to complete a task can vary depending on the current system load and the number of other processes running.

Scheduling Overhead: Preemptive scheduling requires additional scheduling overhead compared to non-preemptive scheduling. The operating system needs to maintain a scheduler that continuously monitors the state of the system and makes decisions about which process to run next.

Fairness: Preemptive scheduling may not always be fair to all processes. For example, if a process requires a large amount of CPU time and other processes are constantly preempting it, it may not be able to complete its task in a timely manner.

Synchronization and Deadlocks: Preemptive scheduling can also lead to synchronization and deadlock problems when processes compete for shared resources. For example, if two processes each hold one resource and wait for the resource held by the other, neither can proceed, and preempting the CPU alone cannot resolve the circular wait.

Characteristics of Preemptive Scheduling

Here are some characteristics of preemptive scheduling:


  • In preemptive scheduling, processes are assigned priorities based on their importance, and the CPU is allocated to the process with the highest priority.
  • Preemptive scheduling allows for time-sharing of the CPU among multiple processes. The CPU is allocated to each process for a short period of time, and then switched to another process.
  • This scheduling is interrupt-driven, which means that the operating system uses interrupts to interrupt the currently running process and allocate the CPU to another process.
  • Preemptive scheduling allows for efficient use of resources, as it ensures that the CPU is always allocated to the process that needs it the most.
  • Preemptive scheduling can lead to faster response times for interactive processes, as the operating system can quickly switch the CPU to the process that requires input from the user.
  • Preemptive scheduling can result in additional overhead due to the frequent context switches that occur when the CPU is allocated to different processes.

Overall, preemptive scheduling is a powerful technique for managing resources in computer operating systems. It allows for fast response times to high-priority tasks and ensures that important processes are given priority access to the CPU. However, it also introduces additional complexity and overhead compared to non-preemptive scheduling, and requires careful management of priority levels and resource allocation to avoid issues like priority inversion.

Wrapping Up

Now, we hope that you have fully understood the preemptive scheduling algorithm in operating systems, along with preemptive scheduling examples and its types. If this article was useful for you, then please share it with your friends, family members, or relatives over social media platforms like Facebook, Instagram, LinkedIn, Twitter, and more.


If you have any experience, tips, tricks, or queries regarding this topic, you can drop a comment!

Happy Learning!!
