CPU Scheduling Algorithms in OS | Types of CPU Scheduling

Hello Learners! Today we are going to explain what CPU scheduling algorithms in OS are, and the different types of CPU scheduling algorithms in an operating system, with ease. By the end of this post, you will have complete information about CPU scheduling algorithms without any hindrance.

What is CPU Scheduling in Operating System?

CPU scheduling is a procedure performed by the operating system. It determines which process should be allocated the CPU at any given time. The main goal of CPU scheduling is to improve the efficiency of the entire system by maximizing the use of CPU resources and minimizing the response time for user requests.


In a multitasking environment, multiple processes run concurrently, and the CPU scheduler determines which process will execute next. The scheduler selects the next process from the ready queue, where processes wait for execution. The CPU scheduler uses a scheduling algorithm to determine the order in which processes are selected from the ready queue.

CPU Scheduling Tutorial Headlines:

In this section, we list all the headlines of this article. You can jump to any of them as you like; all are shown below:

  1. What is CPU Scheduling in Operating System?
  2. Why We Need CPU Scheduling Algorithms
  3. CPU Scheduler in OS
  4. CPU Scheduling Dispatcher
  5. CPU Scheduling Criteria
  6. CPU Scheduling Algorithm Terminologies
  7. Different Types of CPU Scheduling Algorithms

A) Pre-emptive Scheduling

  • Priority Scheduling
  • Longest Remaining Job First Scheduling
  • Shortest Remaining Job First Scheduling
  • Round Robin Scheduling

B) Non-Preemptive Scheduling

  • Shortest Job First Scheduling
  • First Come First Serve Scheduling
  • Longest Job First Scheduling
  • Highest Response Ratio Next Scheduling

Let’s Get Started!!

Why We Need CPU Scheduling Algorithms

CPU scheduling algorithms are necessary because modern computer systems often have multiple processes and threads competing for CPU time. The CPU can only execute one process at a time. So the operating system must determine which process to run next and for how long. This is where the CPU scheduling algorithm comes into play.

Also Read: Process Life Cycle in Operating System | Process State in OS

The main objectives of a CPU scheduling algorithm are:

Maximize CPU Utilization: The algorithm should try to keep the CPU as busy as possible so that no time is wasted.

Fairness: Each process should get a fair share of CPU time, so that no process is starved of the CPU.

Response Time: The algorithm should try to minimize the time it takes for a process to get a response.

Throughput: The algorithm should try to maximize the number of processes that complete per unit time.

Turnaround Time: The algorithm should try to minimize the time it takes for a process to complete.

Without a good CPU scheduling algorithm, the operating system may not be able to make optimal use of the CPU. Therefore, CPU scheduling algorithms are essential for efficient utilization of system resources and providing a good user experience.

CPU Scheduler in OS

The CPU scheduler is a component of the operating system. It is responsible for allocating CPU (Central Processing Unit) time to the various processes and threads. The CPU scheduler ensures that the CPU is utilized effectively.

Also Read: Longest Remaining Time First (LRTF) Scheduling with Examples & Programs!!

The CPU scheduler works on a queue of processes and threads that are waiting for CPU time. When the CPU becomes available, the scheduler selects the next process to run based on a set of scheduling policies. These policies are based on various factors, such as the priority of a process and the amount of CPU time it requires.

The choice of scheduling algorithm can have a significant impact on system performance and resource utilization. A well-designed CPU scheduler can help to ensure that the system is responsive and efficient, but a poorly designed scheduler can lead to problems such as process starvation and poor resource utilization.

CPU Scheduling Dispatcher

The CPU scheduling dispatcher is an essential component of the operating system. It is responsible for handing the CPU to the process that should execute next. Working with the queue of processes waiting for the CPU, it gives control of the CPU to the next process chosen by the scheduling algorithm.

Also Read: What is Process in OS? Types of Process in Operating System!!

The dispatcher needs to be efficient in selecting processes and allocating resources to ensure that the operating system runs smoothly. The scheduling algorithm used by the dispatcher plays a critical role in the system’s overall performance. Different algorithms prioritize different criteria such as the length of the process and its priority.

CPU Scheduling Criteria

The criteria are as follows:

CPU Utilization: The operating system tries to keep the CPU busy all the time. Therefore, it chooses the process that will utilize the CPU to the maximum.

Throughput: The number of processes that complete their execution per unit of time is called throughput. The operating system aims to maximize throughput by choosing processes in an order that completes as many of them as possible.

Turnaround Time: Turnaround time is the time taken by a process to complete, from submission to termination. The operating system tries to minimize the turnaround time of processes.

Waiting Time: Waiting time is the time that a process spends in the ready queue waiting for CPU time. The operating system tries to minimize the waiting time by selecting the process that has been waiting the longest.

Response Time: Response time is the time the system takes to respond to a user's request. The operating system tries to minimize response time by selecting processes appropriately.

Priority: Priority scheduling is used to assign a priority to each process. The operating system gives higher priority to the processes that are more important.

CPU Scheduling Algorithm Terminologies

There are several terms that are commonly used in CPU scheduling algorithms:

Process: A program that is in execution.

Arrival Time: The time at which a process arrives in the system.

Burst Time: The amount of time a process requires to complete its execution.

Completion Time: The time at which a process completes its execution.

Turnaround Time: The total amount of time a process takes from arrival to completion. (Turn Around Time = Completion Time – Arrival Time)

Waiting Time: The amount of time a process spends waiting in the ready queue. That means (Waiting Time = Turn Around Time – Burst Time)

Response Time: Response time is the amount of time it takes for a process to start running after it arrives. (Response Time = Time of First CPU Allocation – Arrival Time) In CPU scheduling, response time is an important metric because it determines how quickly the system can interact with a process.

Context Switching: The process of switching the CPU from one process to another.

Pre-emptive Scheduling: A mode in which the CPU can be taken away from a process before it completes its execution.

Non-preemptive Scheduling: A mode in which the CPU cannot be taken away from a process before it completes its execution.
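To make the turnaround and waiting time formulas above concrete, here is a minimal Python sketch; the process names and times are made up purely for illustration:

```python
# Hypothetical process data: (name, arrival_time, burst_time, completion_time)
processes = [
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 8, 16),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # Turn Around Time = Completion Time - Arrival Time
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst Time
    print(f"{name}: turnaround = {turnaround}, waiting = {waiting}")
```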

Different Types of CPU Scheduling Algorithms

CPU scheduling algorithms fall into two different categories, so here we will discuss all the CPU scheduling algorithms, with comparisons:


1) Pre-emptive Scheduling

Pre-emptive scheduling is a method that helps to manage the execution of multiple processes or threads. In pre-emptive scheduling, the OS can interrupt a currently executing process and allocate the CPU to another process.

This means the operating system can forcibly stop the execution of a lower-priority process or thread. Pre-emptive scheduling ensures that the CPU is always allocated to the process or thread with the highest priority at any given time, which helps to maximize system throughput and minimize response times.

There are different types of pre-emptive scheduling algorithms, such as:

A) Priority Scheduling: Priority scheduling is a method of scheduling processes or threads in an operating system in which each process is assigned a priority value. The priority value is typically an integer that indicates the relative importance of the process compared to other processes in the system.

In priority scheduling, the operating system assigns CPU time to the process with the highest priority. If multiple processes have the same priority, they are scheduled using a secondary scheduling algorithm, such as round-robin scheduling.
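As a rough illustration, here is a minimal Python sketch of pre-emptive priority scheduling with hypothetical process data (assuming a lower priority value means higher importance); it re-checks the highest-priority ready process at every time unit:

```python
# Hypothetical processes: (name, arrival_time, burst_time, priority); lower value = higher priority
procs = [("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 2, 3)]

remaining = {name: burst for name, _, burst, _ in procs}
time, completion = 0, {}
while len(completion) < len(procs):
    # Processes that have arrived and still have work left
    ready = [p for p in procs if p[1] <= time and remaining[p[0]] > 0]
    if not ready:
        time += 1                                # CPU idles until the next arrival
        continue
    name = min(ready, key=lambda p: p[3])[0]     # highest-priority ready process
    remaining[name] -= 1                         # run it for one time unit (preemption point)
    time += 1
    if remaining[name] == 0:
        completion[name] = time                  # record completion time
print(completion)  # e.g. {'P2': 4, 'P1': 7, 'P3': 9}
```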

B) Longest Remaining Job First Scheduling: The Longest Remaining Job First (LRJF) scheduling algorithm selects the process with the largest estimated remaining processing time to execute next.

The LRJF algorithm is similar to the Shortest Remaining Time First (SRTF) algorithm, but instead of choosing the process with the smallest remaining time, it chooses the process with the largest remaining time.

C) Shortest Remaining Job First Scheduling: The Shortest Remaining Time First (SRTF) scheduling algorithm is used in operating systems to execute processes efficiently.

In SRTF, the process with the shortest estimated remaining run time is given priority, and the CPU is allocated to that process. If a new process arrives with a shorter expected run time than the currently running process, the running process is preempted and the new process executes. The CPU stays with the process that has the shortest remaining time until that process either completes or is preempted by another process with a shorter remaining time.
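Here is a minimal SRTF sketch in Python with hypothetical arrival and burst times; at every time unit it runs the ready process with the least remaining work, which naturally preempts the current process when a shorter job arrives:

```python
# Hypothetical processes: (name, arrival_time, burst_time)
procs = [("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]

remaining = {name: burst for name, _, burst in procs}
time, completion = 0, {}
while len(completion) < len(procs):
    ready = [p for p in procs if p[1] <= time and remaining[p[0]] > 0]
    if not ready:
        time += 1
        continue
    # Preemptive choice: always the ready process with the smallest remaining time
    name = min(ready, key=lambda p: remaining[p[0]])[0]
    remaining[name] -= 1
    time += 1
    if remaining[name] == 0:
        completion[name] = time
print(completion)  # e.g. {'P3': 5, 'P2': 7, 'P1': 12}
```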

D) Round Robin Scheduling: In round-robin scheduling, processes execute in a cyclic way, where each process gets a fixed amount of CPU time called a time quantum. Once a process has used up its time quantum, it is preempted and the next process in the queue is allowed to run.

The basic idea behind round-robin scheduling is to provide fair CPU time to all processes in the system, preventing any process from monopolizing the CPU. The length of the time quantum can be adjusted depending on the needs of the system and the nature of the processes being run.
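For illustration, here is a minimal round-robin sketch in Python, assuming all processes arrive at time 0 and a hypothetical time quantum of 2; it cycles through a ready queue and moves a preempted process to the back:

```python
from collections import deque

# Hypothetical processes, all assumed to arrive at time 0: (name, burst_time)
QUANTUM = 2
queue = deque([("P1", 5), ("P2", 3), ("P3", 1)])

time, completion = 0, {}
while queue:
    name, remaining = queue.popleft()
    run = min(QUANTUM, remaining)        # run for one quantum or until the process finishes
    time += run
    remaining -= run
    if remaining > 0:
        queue.append((name, remaining))  # quantum expired: preempt and move to the back
    else:
        completion[name] = time          # process finished
print(completion)  # e.g. {'P3': 5, 'P2': 8, 'P1': 9}
```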

2) Non-Preemptive Scheduling

The non-preemptive scheduling technique is used where a process runs until it completes. In non-preemptive scheduling, the CPU is allocated to a process and remains with that process until the process terminates or enters a waiting state.

This scheduling technique is also known as 'cooperative scheduling', because the process cooperates with the scheduler by giving up the CPU voluntarily. Non-preemptive scheduling is useful for processes that need to complete a task without interruption, such as printing a large document or copying a file.

There are different types of non-preemptive scheduling algorithms, including:

A) Shortest Job First Scheduling: In SJF scheduling, the process with the smallest execution time is selected for execution first. This algorithm can be implemented in both preemptive and non-preemptive modes.

In non-preemptive SJF, once a process is allocated the CPU, it keeps the CPU until it finishes its execution. In preemptive SJF, if a new process arrives with a smaller burst time than the remaining time of the currently executing process, the currently executing process is interrupted and the new process executes first.
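Here is a minimal sketch of non-preemptive SJF in Python with hypothetical arrival and burst times; whenever the CPU becomes free, it picks the arrived process with the smallest burst time and runs it to completion:

```python
# Hypothetical processes: (name, arrival_time, burst_time)
procs = [("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]

time, completion = 0, {}
while len(completion) < len(procs):
    ready = [p for p in procs if p[1] <= time and p[0] not in completion]
    if not ready:
        time += 1                                     # CPU idles until the next arrival
        continue
    name, _, burst = min(ready, key=lambda p: p[2])   # shortest burst among arrived processes
    time += burst                                     # non-preemptive: run to completion
    completion[name] = time
print(completion)  # e.g. {'P1': 7, 'P3': 8, 'P2': 12}
```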

SJF Scheduling Algorithm Pros

The advantages of Shortest Job First (SJF) scheduling are as follows:

Reduces Waiting Time: SJF scheduling aims to reduce the average waiting time of processes by giving priority to the processes with the shortest execution time. This results in faster completion of processes and reduces the waiting time of other processes in the queue.

Increases Throughput: The SJF scheduling algorithm can improve system throughput by executing processes more efficiently, allowing more processes to complete within a given time period.

Fairness: SJF scheduling allocates CPU time to processes based on their burst time. Shorter processes get higher priority, which can prevent longer processes from monopolizing the CPU.

Efficient Use of Resources: SJF scheduling uses the CPU efficiently by running shorter processes first, which frees up the CPU for other processes sooner.

No Starvation in Batch Workloads: When the full set of jobs is available at once and no stream of new short jobs keeps arriving, every process eventually gets CPU time; even a process with a long burst time will get its turn.

SJF Scheduling Algorithm Drawbacks

The disadvantages of Shortest Job First (SJF) scheduling are as follows:

Execution Time Prediction: The main disadvantage of SJF scheduling is that it requires accurate information about the execution time of each process, which may not always be available. If the predicted burst time of a process is inaccurate, it can lead to inefficient CPU utilization and increased waiting time.

Prioritization of Short Processes: SJF scheduling gives priority to shorter processes, which can lead to longer processes being starved of CPU time.

Preemption Overhead: In the case of preemptive SJF scheduling, the overhead of context switching between processes can be high, leading to reduced system performance.

Inherent Delay: In non-preemptive SJF scheduling, longer processes can be delayed, leading to increased waiting time and potentially longer completion times.

Not Suitable for Real-Time Systems: SJF scheduling is not suitable for real-time systems where the response time of a process is critical, as SJF scheduling does not guarantee a fixed response time for any process.

B) First Come First Serve Scheduling: The First Come First Serve (FCFS) scheduling algorithm operates on the principle of serving tasks in the order they arrive. The first task that arrives is the first one to be served, and so on.

In FCFS scheduling, the first process in the ready queue is allocated the CPU. The process runs until it either completes its execution or goes into a waiting state. Once the first process has finished, the next process in the queue is allocated the CPU, and so on.

This scheduling algorithm is simple to implement, but it may not be optimal in all situations. For example, if a long-running process arrives first, it will hold the CPU for an extended period, causing shorter processes to wait in the queue, which can lead to longer response times for those tasks.
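A minimal FCFS sketch in Python (again with hypothetical arrival and burst times) simply serves processes in arrival order, each one running to completion:

```python
# Hypothetical processes: (name, arrival_time, burst_time), served in arrival order
procs = sorted([("P2", 1, 3), ("P1", 0, 5), ("P3", 2, 8)], key=lambda p: p[1])

time = 0
for name, arrival, burst in procs:
    start = max(time, arrival)   # the CPU may idle if the next process has not arrived yet
    time = start + burst         # run the process to completion
    waiting = start - arrival    # Waiting Time = start of service - Arrival Time
    print(f"{name}: start = {start}, completion = {time}, waiting = {waiting}")
```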

FCFS Scheduling Algorithm Pros

The First Come First Serve (FCFS) scheduling algorithm has some advantages:

Simple to Implement: FCFS is a simple and easy-to-understand scheduling algorithm that does not require complex calculations or heuristics to determine which process to run next.

Fairness: FCFS provides fairness to the processes in the system. Since processes are served in the order they arrive, no process is given priority over another.

No Starvation: In FCFS, every process eventually gets a chance to run, even if it has to wait for a long time. 

No Overhead: FCFS has no overhead associated with scheduling decisions, which means that it is less likely to cause CPU overhead or consume additional resources.

FCFS Scheduling Algorithm Drawbacks

Although First Come First Serve (FCFS) scheduling algorithm has some advantages, it also has some disadvantages, including:

Poor Turnaround Time: FCFS scheduling can lead to poor turnaround time, especially if a long-running process arrives first. This is because shorter processes have to wait in the queue, resulting in a longer waiting time and hence a longer turnaround time.

Inefficient Utilization of CPU: In FCFS, the CPU is allocated to the first process in the queue, regardless of its length. This can lead to inefficient utilization of the CPU, as a long-running process may hold the CPU for an extended period, while shorter processes have to wait.

No Priority: FCFS does not take into account the priority of the process. This can be a problem in real-time systems, where certain tasks require higher priority than others.

Convoy Effect: FCFS can lead to the convoy effect, where a long-running process can hold up the queue of shorter processes behind it, resulting in a longer waiting time and overall inefficiency.

Not Suitable for Interactive Systems: FCFS is not suitable for interactive systems, where quick response time is essential, and the user’s interaction is critical.

C) Longest Job First Scheduling: Longest Job First (LJF) scheduling is a CPU scheduling algorithm that selects the process with the largest CPU burst time to execute first. In other words, the process with the longest expected processing time is given the highest priority.

The basic idea behind LJF scheduling is that longer jobs, which typically have higher processing requirements, would otherwise be repeatedly pushed back behind shorter jobs. By executing the longest job first, these heavy jobs are handled early; the trade-off is that shorter processes must wait behind them, so the average waiting time is generally higher than with SJF.
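Since non-preemptive LJF is simply SJF with the opposite selection rule, a minimal Python sketch (with hypothetical process data) only needs to flip the comparison:

```python
# Hypothetical processes: (name, arrival_time, burst_time)
procs = [("P1", 0, 2), ("P2", 1, 6), ("P3", 2, 4)]

time, completion = 0, {}
while len(completion) < len(procs):
    ready = [p for p in procs if p[1] <= time and p[0] not in completion]
    if not ready:
        time += 1
        continue
    name, _, burst = max(ready, key=lambda p: p[2])   # LJF: largest burst first (SJF uses min here)
    time += burst                                     # non-preemptive: run to completion
    completion[name] = time
print(completion)  # e.g. {'P1': 2, 'P2': 8, 'P3': 12}
```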

LJF Scheduling Algorithm Pros

The Longest Job First (LJF) scheduling algorithm has several advantages, including:

Long Jobs Are Not Postponed: By executing the longest job first, LJF scheduling ensures that processing-heavy jobs are dealt with early rather than being repeatedly pushed back behind newly arriving shorter jobs, which keeps their turnaround times from growing without bound.

Simple and Easy to Implement: LJF scheduling is a simple and easy-to-implement algorithm that does not require a lot of overhead. This makes it suitable for systems with limited resources or where efficiency is a top priority.

Works Well for Batch Processing: Since LJF scheduling is non-preemptive, it is well suited for batch processing systems where jobs are executed without user interaction. This can help improve the overall system efficiency and throughput.

Provides Fairness: LJF scheduling can provide a fair allocation of resources among processes, as each process is given an equal opportunity to execute based on its CPU burst time.

LJF Scheduling Algorithm Drawbacks

The Longest Job First (LJF) scheduling algorithm also has some disadvantages, including:

Poor Response Time: LJF scheduling may result in poor response time for shorter processes as they have to wait for the longer processes to finish. This can be problematic in interactive systems where user responsiveness is a priority.

Inefficient for Short Processes: LJF scheduling can be inefficient for short processes as they may end up waiting for a long time before they get executed. This can lead to poor system throughput and can reduce the overall efficiency of the system.

Can Cause Starvation: LJF scheduling can cause starvation for shorter processes, especially if there are a large number of long processes in the queue. This means that shorter processes may never get a chance to execute, leading to reduced system efficiency and fairness.

Non-Preemptive Nature: LJF scheduling is a non-preemptive algorithm, which means that once a process starts executing, it cannot be interrupted until it completes its CPU burst. This can lead to longer waiting times for other processes in the queue.

D) Highest Response Ratio Next Scheduling: The Highest Response Ratio Next (HRRN) scheduling algorithm selects the process with the highest response ratio as the next process to run. The response ratio is defined as the ratio of the sum of the waiting time and the execution time to the execution time, i.e. Response Ratio = (Waiting Time + Burst Time) / Burst Time. The idea behind this algorithm is to give priority to processes that have been waiting for a long time and have a short execution time, as they will have a high response ratio.

To implement the HRRN algorithm, the scheduler maintains a list of ready processes and calculates the response ratio for each process in the list. The process with the highest response ratio is selected for execution. If multiple processes have the same highest response ratio, the scheduler selects the one that arrived first.
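Here is a minimal sketch of that selection step in Python, with hypothetical arrival and burst times; whenever the CPU becomes free it computes the response ratio for every arrived process and runs the one with the highest ratio to completion, breaking ties by arrival time:

```python
# Hypothetical processes: (name, arrival_time, burst_time)
procs = [("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 2)]

time, completion = 0, {}
while len(completion) < len(procs):
    ready = [p for p in procs if p[1] <= time and p[0] not in completion]
    if not ready:
        time += 1
        continue
    # Response Ratio = (Waiting Time + Burst Time) / Burst Time; earlier arrival breaks ties
    name, arrival, burst = max(ready, key=lambda p: ((time - p[1] + p[2]) / p[2], -p[1]))
    time += burst                 # non-preemptive: run the chosen process to completion
    completion[name] = time
print(completion)  # e.g. {'P1': 3, 'P2': 9, 'P3': 11}
```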

HRRN Scheduling Algorithm Pros

The HRRN (Highest Response Ratio Next) scheduling algorithm has several advantages, including:

Fairness: HRRN scheduling is a non-preemptive scheduling algorithm that gives priority to processes that have been waiting for a long time and have a short execution time. This ensures that all processes get a chance to run and complete, leading to a fair distribution of CPU time.

High Throughput: HRRN scheduling can achieve a high throughput, as it prioritizes short processes with long waiting times. This ensures that more processes are completed in a shorter amount of time, which can lead to a higher system throughput.

Low Turnaround Time: HRRN scheduling can help reduce the turnaround time for processes. By prioritizing processes with a high response ratio, the algorithm ensures that processes with shorter execution times are completed quickly, which can reduce the overall turnaround time for processes.

Low Waiting Time: HRRN scheduling can also help reduce the waiting time for processes. By prioritizing processes that have been waiting for a long time, the algorithm ensures that processes are executed as soon as possible, which can reduce the overall waiting time for processes.

HRRN Scheduling Algorithm Drawbacks

While HRRN (Highest Response Ratio Next) scheduling has some advantages, it also has some disadvantages, including:

Overhead: The HRRN scheduling algorithm requires a lot of calculations to determine the response ratio for each process in the ready queue. This overhead can be significant, particularly in systems with a large number of processes.

Delays for Long Processes: The algorithm always prioritizes the process with the highest response ratio, so long-running processes can be repeatedly passed over in favor of shorter processes. Because a waiting process's response ratio keeps growing, outright starvation is eventually avoided, but long processes may still face considerable delays when many short jobs keep arriving.

Unpredictability: The HRRN scheduling algorithm is not very predictable, as the response ratio changes over time as processes wait in the ready queue. This can make it difficult to estimate the completion time for processes, which can be problematic for certain types of applications.

Inefficiency: In some cases, HRRN scheduling can be less efficient than other scheduling algorithms, particularly when there are many long-running processes in the system. This is because the algorithm prioritizes short processes with high response ratios, which may not be the best use of CPU time in all situations.

Final Verdicts

We hope that this post has made you fully aware of what CPU scheduling algorithms in OS are and the different types of CPU scheduling algorithms in an operating system. If this content is helpful for you, then share it with your friends, family members, or relatives over social media platforms such as Facebook, Instagram, LinkedIn, Twitter, and more.

Also Read: Multiprogramming Operating System: Example, Types, and Advantage!!

If you have any experience, tips, tricks, or queries regarding this topic, you can drop a comment!

Have a Nice Day!!
