Deadlock Prevention in OS with Their Algorithms and Techniques

Hi Learners! Today, you will learn everything you need to know about deadlock prevention in OS, along with its algorithms and techniques, with ease. After reading this blog post, you should have a full understanding of deadlock prevention in OS without any hassle.

What is Deadlock Prevention in OS?

It is very important to prevent deadlock in an operating system before it can happen. The system therefore examines every transaction before it begins executing and ensures that it cannot lead to a deadlock. If a transaction could cause a deadlock, it is not allowed to execute its instructions.

Deadlock Prevention in OS

Deadlock prevention is a proactive approach employed by operating systems to eliminate or avoid the formation of deadlocks altogether. However, implementing these techniques can be challenging, as each technique comes with its own trade-offs and considerations, and the choice of prevention strategy depends on the specific requirements and constraints of the system at hand.

What is Feasibility of Deadlock Prevention?

Feasibility of deadlock prevention refers to the practicality of implementing methods to prevent deadlocks in an operating system.

Deadlock prevention is a proactive approach that aims to eliminate or avoid the occurrence of deadlocks altogether by breaking one or more of the necessary conditions for deadlock formation. However, some of the methods used for deadlock prevention, such as resource allocation denial, can be inefficient and can reduce how effectively resources are shared.

Therefore, the feasibility of deadlock prevention depends on the specific requirements and constraints of the system at hand. Deadlock prevention is crucial for system stability and optimal resource utilization, but the choice of prevention strategy should be carefully considered.

Deadlock Prevention Algorithms

There are two techniques to prevent the deadlock problem:

  • Deadlock Prevention Schemes: Wait Die Scheme and Wound Wait Scheme
  • Violating Any One of the Four Conditions: Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait

Deadlock Prevention Schemes

In deadlock prevention schemes, we use timestamps to ensure that deadlock does not happen. The two schemes are described below, followed by a short code sketch of both rules.

  • Wait Die Scheme:

In this scheme, if transaction T1 requests a resource that is currently held by another transaction T2, one of two cases can occur:

A). TS(T1) < TS(T2): T1 is older than T2, i.e. T1 entered the system before T2. In this case T1 is allowed to wait for the resource, which becomes free once T2 has finished its entire execution.

B). TS(T1) > TS(T2): T1 is younger than T2, i.e. T1 entered the system after T2. In this case T1 is dropped (it "dies"), i.e. rolled back, and is restarted later with the same timestamp.

  • Wound Wait Scheme:

In this scheme, if transaction T1 requests a resource that is currently held by another transaction T2, one of two cases can occur:

A). TS(T1) < TS(T2): T1 is older than T2, i.e. T1 entered the system before T2. In this case T1 wounds T2: T2 is rolled back so that T1 can obtain the resource and finish its execution, and T2 is restarted later with the same timestamp.

B). TS(T1) > TS(T2): T1 is younger than T2, i.e. T1 entered the system after T2. In this case T1 simply waits until T2 releases the resource.
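The two rules can be summed up in a few lines of code. Below is a minimal, illustrative Python sketch of the wait-die and wound-wait decisions; the function names wait_die and wound_wait and the returned action strings are purely for demonstration, and a smaller timestamp is assumed to mean an older transaction.

def wait_die(requester_ts, holder_ts):
    # The requester asks for a resource held by another transaction.
    if requester_ts < holder_ts:
        return "WAIT"  # older requester is allowed to wait
    return "DIE"       # younger requester rolls back and restarts with the same timestamp

def wound_wait(requester_ts, holder_ts):
    if requester_ts < holder_ts:
        return "WOUND"  # older requester forces the younger holder to roll back
    return "WAIT"       # younger requester simply waits

print(wait_die(5, 10), wait_die(10, 5))      # WAIT DIE
print(wound_wait(5, 10), wound_wait(10, 5))  # WOUND WAIT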

Conditions for Deadlock to Arise in an OS

The conditions necessary for deadlock to arise in an operating system are as follows:

Mutual Exclusion: At least one resource must be held in a non-sharable mode.

Hold and Wait: Processes hold resources while waiting for additional resources.

No Preemption: Resources cannot be forcibly taken from a process.

Circular Wait: A circular chain of processes exists, where each process holds a resource that the next process in the chain is waiting for.

Deadlock prevention aims to eliminate at least one of these conditions to ensure that deadlocks do not occur.

Violating Any One of the Four Conditions

1) Mutual Exclusion

In mutual exclusion, a resource is assigned to a single process at a time; if another process needs the same resource, it must wait until that resource is released. Mutual exclusion generally cannot be broken, because in practice a single resource can serve only one process at a time. For instance, multiple users cannot print documents on the same printer at the same moment.

Avoid Mutual Exclusion: To avoid mutual exclusion, the “Spooling” technique can be used. With spooling, the jobs of all processes are queued in memory; the printer then takes jobs from this queue and produces the output in FCFS (First Come First Serve) order, so no process has to hold the printer while waiting to print. A small sketch of this idea is given below.
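As a rough illustration of spooling, here is a minimal Python sketch in which processes submit print jobs to an in-memory queue and a single printer worker serves them in FCFS order; the names spool and printer_daemon are only for this example and are not part of any standard OS interface.

import queue
import threading

# The spool: processes submit jobs here instead of locking the printer.
spool = queue.Queue()

def printer_daemon():
    # A single worker owns the printer and serves jobs in FCFS order.
    while True:
        job = spool.get()
        if job is None:          # sentinel value used to stop the worker
            break
        print(f"Printing: {job}")
        spool.task_done()

daemon = threading.Thread(target=printer_daemon)
daemon.start()

# Any number of "processes" can submit jobs without waiting on a printer lock.
for doc in ["P1-report", "P2-invoice", "P3-notes"]:
    spool.put(doc)

spool.join()      # wait until all spooled jobs have been printed
spool.put(None)   # stop the worker
daemon.join()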

Mutual Exclusion Example:

Let’s consider a simple example with two processes, P1 and P2, and two resources, R1 and R2. Each resource is protected by a lock, and both processes acquire the locks in the same order, so that mutual exclusion is enforced without creating a deadlock.

import threading
import time

# Shared resources, each protected by a lock
R1 = threading.Lock()
R2 = threading.Lock()

def process_one():
    for _ in range(3):
        # Acquire resource R1
        R1.acquire()
        print("Process P1 acquired resource R1")
        # Simulate some work with R1
        time.sleep(1)
        # Acquire resource R2
        R2.acquire()
        print("Process P1 acquired resource R2")
        # Release resources
        R2.release()
        R1.release()

def process_two():
    for _ in range(3):
        # Acquire resource R1 first, in the same order as P1
        R1.acquire()
        print("Process P2 acquired resource R1")
        # Simulate some work with R1
        time.sleep(1)
        # Acquire resource R2
        R2.acquire()
        print("Process P2 acquired resource R2")
        # Release resources
        R2.release()
        R1.release()

# Create two threads, one for each process
thread_p1 = threading.Thread(target=process_one)
thread_p2 = threading.Thread(target=process_two)

# Start the threads
thread_p1.start()
thread_p2.start()

# Wait for the threads to finish
thread_p1.join()
thread_p2.join()

In this example, the two processes (process_one and process_two) both acquire the two resources (R1 and R2) in the same order: R1 first, then R2. Keeping the order of resource acquisition consistent across processes enforces mutual exclusion on each resource while avoiding the circular-waiting pattern that leads to deadlocks.

Challenges:

Implementing mutual exclusion poses several challenges, particularly in distributed systems where processes do not have complete information about the system’s state due to the lack of shared memory and a common physical clock. This can lead to issues such as deadlock, where processes endlessly wait for messages that may never arrive, and starvation, where a process is unable to execute its critical section in a finite time.

2) Hold and Wait

This condition occurs when a process holds some resources and waits for additional resources that are already held by other waiting processes.

To avoid this problem, a process can be allotted all the resources it needs before it starts executing, so that it never has to wait for a resource during its execution. This approach is not always practical, because we usually do not know in advance which resources a process will need later in its execution.

Another way to avoid hold and wait is the “Do Not Hold” method. For instance, suppose a process needs 20 resources N1, N2, N3, ..., N20. In a given time slot it can be granted only N1 to N6; after completing its work with those resources, the process must release them, and only then are further resources delivered to it. Using this idea, we can avoid the hold and wait problem, as sketched below.
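Here is a minimal sketch of this “Do Not Hold” idea; the resource names N1 to N20 and the batch sizes are illustrative only. The process acquires a small batch of locks, finishes its work, and releases everything before requesting the next batch, so it never holds some resources while waiting for others.

import threading

# Illustrative only: 20 resources N1..N20, each protected by a lock.
resources = {f"N{i}": threading.Lock() for i in range(1, 21)}

def run_in_batches(batches):
    for batch in batches:
        locks = [resources[name] for name in batch]
        for lock in locks:            # acquire the whole batch
            lock.acquire()
        print(f"Working with batch {batch}")
        for lock in reversed(locks):  # release everything before the next request
            lock.release()

# For example: first work with N1..N6, release them, then request N7..N12.
run_in_batches([
    ["N1", "N2", "N3", "N4", "N5", "N6"],
    ["N7", "N8", "N9", "N10", "N11", "N12"],
])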

Hold and Wait Example:

Let’s consider an example in a banking scenario where a customer wants to transfer money between two accounts. To prevent deadlock due to the “Hold and Wait” condition, we can use a strategy where the customer must acquire both accounts’ locks simultaneously or release any acquired locks before making a new request.

import threading
import time

class BankAccount:
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

def transfer_money(account_from, account_to, amount):
    # Acquire locks for both accounts, or release any acquired lock and retry
    while True:
        if account_from.lock.acquire(blocking=False):
            print(f"Acquired lock for account {id(account_from)}")
            if account_to.lock.acquire(blocking=False):
                try:
                    # Simulate money transfer
                    if account_from.balance >= amount:
                        account_from.balance -= amount
                        account_to.balance += amount
                        print(f"Transfer successful: ${amount} from account {id(account_from)} to account {id(account_to)}")
                    else:
                        print("Insufficient funds for transfer.")
                finally:
                    # Release locks
                    account_to.lock.release()
                    account_from.lock.release()
                break
            else:
                # Could not get the second lock: give up the first one and retry
                print(f"Releasing lock for account {id(account_from)} and retrying...")
                account_from.lock.release()
                time.sleep(0.1)  # to avoid busy-waiting
        else:
            time.sleep(0.1)  # to avoid busy-waiting

# Create two bank accounts
account1 = BankAccount(1000)
account2 = BankAccount(500)

# Create a thread for the money transfer
transfer_thread = threading.Thread(target=transfer_money, args=(account1, account2, 200))

# Start the thread and wait for it to finish
transfer_thread.start()
transfer_thread.join()

# Print the final account balances
print(f"Final balance in account1: ${account1.balance}")
print(f"Final balance in account2: ${account2.balance}")

In this example, the transfer_money function attempts to acquire the locks of both the source (account_from) and the destination (account_to) accounts. If it cannot acquire both, it releases any lock it already holds and retries, which prevents the “Hold and Wait” condition that can lead to deadlock. The retry loop ensures that the function keeps trying until it successfully acquires both locks.

3) No Preemption

No preemption is a condition in which one process cannot take the resources of another process by force. However, if we find that some held resources are causing a system deadlock, the system can preempt them, that is, take them back from the process that is holding them.

By doing so we can break the deadlock, but a few things must be kept in mind before using this forceful approach. Resources of other processes should be preempted only in favour of a very high-priority process or a system process. In addition, it is preferable to preempt resources from processes that are currently in a waiting state.

No Preemption Example:

Let’s consider an example where two processes need access to two printer resources. The processes request the printers and hold them until they complete their printing tasks. With “No Preemption,” a process cannot be interrupted and have its printer resource taken away.

import threading
import time

class Printer:
    def __init__(self, printer_id):
        self.printer_id = printer_id
        self.lock = threading.Lock()

def print_document(printer, document):
    # Acquire the printer lock
    printer.lock.acquire()
    print(f"Printing {document} on printer {printer.printer_id}")
    # Simulate the printing process
    time.sleep(2)
    # Release the printer lock
    printer.lock.release()

# Create two printer resources
printer1 = Printer(printer_id=1)
printer2 = Printer(printer_id=2)

# Define two printing tasks
task1 = "Task-1 Document"
task2 = "Task-2 Document"

# Create two threads for the printing tasks
thread_task1 = threading.Thread(target=print_document, args=(printer1, task1))
thread_task2 = threading.Thread(target=print_document, args=(printer2, task2))

# Start the threads
thread_task1.start()
thread_task2.start()

# Wait for the threads to finish
thread_task1.join()
thread_task2.join()

# Print a message indicating the tasks are complete
print("Printing tasks are complete.")

In this example, two printer resources (printer1 and printer2) are created. Two printing tasks (task1 and task2) are defined, and two threads (thread_task1 and thread_task2) are created to execute the printing tasks. Each thread acquires the lock of the printer it needs and releases it when the printing is complete.
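If we instead want to break the no-preemption condition, one possible approach is to let a task that cannot obtain a second resource within some time limit give up the resources it already holds and retry later, effectively preempting itself. Here is a minimal sketch of that idea, assuming Python’s Lock.acquire(timeout=...); the resource names and timeout values are chosen only for illustration.

import threading
import time

R1 = threading.Lock()
R2 = threading.Lock()

def task_with_preemption():
    while True:
        R1.acquire()
        print("Task acquired R1")
        # Try to get R2 without waiting forever: if it is not free within one
        # second, release R1 (preempt ourselves) and try again later.
        if R2.acquire(timeout=1):
            try:
                print("Task acquired R2, doing its work")
                time.sleep(1)
            finally:
                R2.release()
                R1.release()
            break
        else:
            print("R2 is busy: releasing R1 and retrying")
            R1.release()
            time.sleep(0.5)

threading.Thread(target=task_with_preemption).start()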

4) Circular Wait

A circular wait occurs when the first process is waiting for a resource held by the second process, the second process is waiting for a resource held by the third process, and so on, until the last process in the chain waits for a resource held by the first.

To avoid the circular wait condition, a priority number (an ordering) is assigned to every resource, and a process is only permitted to request resources in increasing order of these numbers; it may not request a resource with a lower number than one it already holds. This guarantees that no cycle of requests can form, so a circular wait cannot occur.

Circular Wait Example:

Let’s consider an example involving three processes (P1, P2, and P3) and three resources (R1, R2, and R3). Each process needs to acquire resources in a specific order.

import threading
import time

# Define resources
R1 = threading.Lock()
R2 = threading.Lock()
R3 = threading.Lock()

def process_one():
    while True:
        # Acquire resource R1
        R1.acquire()
        print("Process P1 acquired resource R1")
        # Simulate some work with R1
        time.sleep(1)
        # Acquire resource R2
        R2.acquire()
        print("Process P1 acquired resource R2")
        # Simulate some work with R2
        time.sleep(1)
        # Release resources
        R2.release()
        R1.release()

def process_two():
    while True:
        # Acquire resource R2
        R2.acquire()
        print("Process P2 acquired resource R2")
        # Simulate some work with R2
        time.sleep(1)
        # Acquire resource R3
        R3.acquire()
        print("Process P2 acquired resource R3")
        # Simulate some work with R3
        time.sleep(1)
        # Release resources
        R3.release()
        R2.release()

def process_three():
    while True:
        # Acquire resource R3
        R3.acquire()
        print("Process P3 acquired resource R3")
        # Simulate some work with R3
        time.sleep(1)
        # Acquire resource R1
        R1.acquire()
        print("Process P3 acquired resource R1")
        # Simulate some work with R1
        time.sleep(1)
        # Release resources
        R1.release()
        R3.release()

# Create three threads, one for each process
thread_p1 = threading.Thread(target=process_one)
thread_p2 = threading.Thread(target=process_two)
thread_p3 = threading.Thread(target=process_three)

# Start the threads
thread_p1.start()
thread_p2.start()
thread_p3.start()

# Wait for the threads to finish (which they won't once the circular wait deadlocks them)
thread_p1.join()
thread_p2.join()
thread_p3.join()

In this example, each process acquires resources in a specific order (R1 then R2 for P1; R2 then R3 for P2; R3 then R1 for P3). This creates a circular wait condition, as each process can end up waiting for a resource held by the next process in the cycle. To prevent the deadlock, you might use techniques such as resource allocation graphs, ensuring a global ordering of resource acquisition, or redesigning the system to avoid circular dependencies in resource requests.
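As a rough sketch of the global-ordering fix mentioned above, every resource is given a number and every process acquires its locks in increasing order of that number, so no circular chain of waits can form. The ORDER table, the helpers acquire_in_order and release_all, and the workloads of P1, P2, and P3 are illustrative only.

import threading

R1 = threading.Lock()
R2 = threading.Lock()
R3 = threading.Lock()

# Give every resource a global number and always acquire in increasing order.
ORDER = {id(R1): 1, id(R2): 2, id(R3): 3}

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in sorted(locks, key=lambda l: ORDER[id(l)], reverse=True):
        lock.release()

def process(name, *locks):
    # Whatever resources a process needs, it requests them in the same
    # global order, so no circular chain of waits can form.
    acquire_in_order(*locks)
    print(f"{name} acquired all of its resources")
    release_all(*locks)

threads = [
    threading.Thread(target=process, args=("P1", R1, R2)),
    threading.Thread(target=process, args=("P2", R2, R3)),
    threading.Thread(target=process, args=("P3", R3, R1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()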

FAQs (Frequently Asked Questions)

How can deadlock prevention affect system performance?

Deadlock prevention strategies can introduce overhead and complexity into resource management. For example, enforcing “acquire all or release all” policies can reduce resource utilization and increase latency. Balancing the need for deadlock prevention with system efficiency and responsiveness is crucial.

What are the drawbacks of deadlock prevention strategies?

Deadlock prevention strategies impose restrictions on resource allocation or process execution, which can lead to lower resource utilization. Some prevention techniques also require prior knowledge of resource requirements, which may not always be feasible. Additionally, prevention strategies may introduce overhead and complexity into the system.

Can deadlock prevention completely eliminate the possibility of deadlocks?

Yes, in principle. Deadlock prevention proactively eliminates or avoids deadlocks by modifying resource allocation and process execution policies so that at least one of the necessary conditions for deadlock formation can never hold. The cost is the extra restrictions and overhead this places on the system.

What are some techniques for preventing deadlocks?

Techniques for preventing deadlocks include resource allocation denial, resource ordering, hold and wait prevention, preemptive resource allocation, and spooling/I/O scheduling.

Final Words

We hope you now fully understand what deadlock prevention in OS is, along with its algorithms and techniques. If this post was helpful, please share it with your friends, family members, or relatives on social media platforms such as Facebook, Instagram, LinkedIn, Twitter, and more.

If you have any experience, tips, tricks, or queries regarding this topic, you can drop a comment!

Happy Learning!!
