Process Synchronization – V – Implementing Semaphores without Busy Waiting

In the previous post, we discussed mutex locks and semaphores and the differences between them. We also saw how the mutex and semaphore implementations discussed previously (using wait() and signal()) suffer from busy waiting, which wastes CPU cycles on unnecessary work. Let us now discuss implementing semaphores without busy waiting.


What exactly does no busy waiting mean?

  • It means that a process P which has to wait on a semaphore S does so by some other mechanism, rather than executing a loop in which it repeatedly checks S.

  • It also means that whenever the process wakes up from waiting, the condition it was waiting for should hold, i.e., it must not wake up, find the condition false, and wait again on the same semaphore.

Take a look at the following snippet.
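The snippet appeared as an image in the original post; as a stand-in, here is a hedged user-space sketch of the same idea. A pthread mutex and condition variable (plus a wakeups counter to absorb spurious wakeups) play the role of the kernel's block() and wakeup() calls; the type and function names (ksem, ksem_wait, ksem_signal) are hypothetical.

```c
#include <pthread.h>

/* Sketch of a blocking semaphore. In a real kernel, block()/wakeup()
 * would manipulate PCBs directly; here a condition variable stands in. */
typedef struct {
    int value;              /* may go negative: |value| = number of waiters */
    int wakeups;            /* pending wakeups; guards against spurious wakeups */
    pthread_mutex_t lock;   /* makes wait()/signal() atomic */
    pthread_cond_t  cond;   /* stands in for the PCB waiting list */
} ksem;

void ksem_wait(ksem *s) {
    pthread_mutex_lock(&s->lock);
    s->value--;
    if (s->value < 0) {
        /* block(): sleep until some signal() issues a wakeup */
        do {
            pthread_cond_wait(&s->cond, &s->lock);
        } while (s->wakeups < 1);
        s->wakeups--;       /* consume exactly one wakeup */
    }
    pthread_mutex_unlock(&s->lock);
}

void ksem_signal(ksem *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    if (s->value <= 0) {
        /* wakeup(P): release exactly one waiting process */
        s->wakeups++;
        pthread_cond_signal(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
}
```

Note the structure: wait() decrements first and blocks only if the value went negative, and signal() wakes a waiter only if the incremented value indicates someone is still waiting.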

Let us now see how this implementation overcomes the problem of busy waiting.

  • If a process executes wait() and finds that the semaphore S is not positive → it must wait.
    • Here, instead of engaging in busy waiting, the process can block itself.
      • First, the process is placed into a waiting queue associated with the semaphore.
      • Then, the state of the process is switched to waiting.
      • Finally, control is transferred to the CPU scheduler, which then selects another process to execute.
    • Modified signal() function
      • First, when some process P1 executes a signal() call, a process P2 that is waiting on the semaphore S should be restarted.
      • The process P2 is restarted by a wakeup() call, which changes the process from the waiting state to the ready state.
      • The process is then placed into the ready queue.
      • Now, it is up to the scheduler to decide whether to switch from the currently running process to the newly ready process or not.

New functions used in this implementation are :

  • block() : suspends the process that invokes it
  • wakeup(P) : resumes execution of the blocked process P

A few points to note about this implementation : 

  • In this implementation, the semaphore value can be negative. On the other hand, in the classical definition of semaphores with busy waiting, the semaphore value can never be negative. (Recall the multiple lavatories example)
    • Therefore, it depends on the implementation, whether semaphore values can be negative or not.
    • System V API → semaphore cannot be less than 0
    • POSIX semaphore API → the semaphore value can be less than 0. If the value is negative, its magnitude is the number of processes waiting on that semaphore.
  • As we have seen in the algorithm described above, the list of waiting processes can be easily implemented by a link field in each PCB
    • The structure for a semaphore described above contains an integer value and a pointer to the list of PCBs.
    • One way of implementation :
      • maintain a FIFO queue, with the semaphore containing both head and tail pointers to the queue.
      • FIFO will ensure bounded waiting.
      • However, the list can use any queueing mechanism.
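The FIFO queue described above can be sketched in C. The pcb struct and its field names here are hypothetical, standing in for a real PCB with a link field; the semaphore holds head and tail pointers so both enqueue and dequeue are O(1).

```c
#include <stddef.h>

/* Hypothetical PCB with a link field, as described above. */
typedef struct pcb {
    int pid;
    struct pcb *next;   /* link field used by the semaphore's waiting queue */
} pcb;

/* The semaphore keeps head and tail pointers to a FIFO queue of PCBs. */
typedef struct {
    int value;
    pcb *head, *tail;
} semaphore;

/* Enqueue at the tail (done by wait() when a process blocks). */
void enqueue(semaphore *s, pcb *p) {
    p->next = NULL;
    if (s->tail) s->tail->next = p;
    else         s->head = p;
    s->tail = p;
}

/* Dequeue from the head (done by signal() before wakeup(P)).
 * FIFO order means the longest-waiting process is woken first,
 * which is what gives bounded waiting. */
pcb *dequeue(semaphore *s) {
    pcb *p = s->head;
    if (p) {
        s->head = p->next;
        if (!s->head) s->tail = NULL;
    }
    return p;
}
```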
  • As required in the previous implementation, here too, semaphore operations must be executed atomically.
    • No two processes can execute wait() and signal() operations on the same semaphore at the same time.
    • Similar to the critical section problem.
      • Single processor environment → inhibit interrupts during the time wait() and signal() operations are executing
        • Will ensure that instructions from different processes cannot be interleaved.
        • Only the currently running process executes until interrupts are reenabled and the scheduler can regain control.
      • Multi-processor environment 
        • One way is to disable interrupts on every processor.
        • Disabling interrupts on every processor can be a difficult task and furthermore can seriously diminish performance
        • The other way is to use alternative locking techniques—such as compare_and_swap() or spinlocks—to ensure that wait() and signal() are performed atomically.
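One such technique can be sketched with C11 atomics: a spinlock built on atomic compare-and-exchange, which would guard the few instructions inside wait() and signal(). The spinlock type and function names are assumptions for illustration.

```c
#include <stdatomic.h>

/* Hypothetical spinlock built on compare-and-swap (C11 atomics).
 * Busy waiting survives here, but only around the short critical
 * sections inside wait()/signal(), not for the whole semaphore wait. */
typedef struct {
    atomic_int locked;   /* 0 = free, 1 = held */
} spinlock;

void spin_lock(spinlock *l) {
    int expected = 0;
    /* Spin until we atomically swap 0 -> 1. */
    while (!atomic_compare_exchange_weak(&l->locked, &expected, 1))
        expected = 0;    /* reset after a failed exchange */
}

void spin_unlock(spinlock *l) {
    atomic_store(&l->locked, 0);
}
```

wait() and signal() would each call spin_lock() on entry and spin_unlock() on exit, making their bodies atomic even on a multiprocessor.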

Note that busy waiting has not been completely eliminated. It has rather moved from the entry section to the critical sections of application programs. Furthermore, we have limited busy waiting to the critical sections of the wait() and signal() operations, and these sections are short (if properly coded, they should be no more than about ten instructions).

Since this implementation uses a waiting queue to implement blocking, we must also consider the situation where two or more processes are waiting for a signal() which has to be executed by one of the waiting processes itself.

This situation is called a deadlock. A related problem is indefinite blocking, or starvation, in which a process waits indefinitely within the semaphore.

Indefinite blocking may occur if we remove processes from the list associated with a semaphore in LIFO (last-in, first-out) order.

Let us see a simple example of deadlock : 

In this system, we have two processes → P0 and P1

  • Both processes access two semaphores, S and Q, each initialized to the value 1.
  • Suppose that P0 executes wait(S) and then P1 executes wait(Q).
  • When P0 executes wait(Q), it must wait until P1 executes signal(Q).
  • Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).

Since these signal() operations can never be executed, P0 and P1 are deadlocked. Note that a FIFO-ordered queue prevents starvation; it cannot prevent a deadlock like this one, which arises from the order of the semaphore operations themselves.
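The trace above can be reproduced with POSIX unnamed semaphores. In this sketch, sem_trywait() stands in for the blocking wait() so the program can report the deadlocked state instead of actually hanging; demo_deadlock is a hypothetical helper name.

```c
#include <semaphore.h>

/* Replays the P0/P1 interleaving from the text on one thread.
 * Returns 1 if both processes would be blocked, i.e., deadlocked.
 * sem_trywait() returning -1 marks the point where the real
 * blocking wait() would put the process to sleep forever. */
int demo_deadlock(void) {
    sem_t S, Q;
    sem_init(&S, 0, 1);               /* S initialized to 1 */
    sem_init(&Q, 0, 1);               /* Q initialized to 1 */

    sem_trywait(&S);                  /* P0 executes wait(S): succeeds */
    sem_trywait(&Q);                  /* P1 executes wait(Q): succeeds */

    int p0_blocks = (sem_trywait(&Q) == -1);  /* P0: wait(Q) would block */
    int p1_blocks = (sem_trywait(&S) == -1);  /* P1: wait(S) would block */

    sem_destroy(&S);
    sem_destroy(&Q);
    return p0_blocks && p1_blocks;    /* each waits on the other: deadlock */
}
```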

In the next post, we will discuss the concept of Priority inversion. Until then, keep reading and sharing 🙂

Source : 1, 2, 3