The semaphore implementation in the Linux kernel source code

  
A previous post introduced the spin lock, one of the Linux kernel's synchronization mechanisms for protecting shared resources. Today we look at another: the semaphore. Semaphores are widely used in the kernel to protect all kinds of shared resources. A semaphore differs from a spin lock in both implementation and usage.

First, both the spin lock and the semaphore use a counter that bounds the number of processes allowed to access the shared resource at the same time, but for a spin lock that count is 1, meaning only one process may run inside the protected region at any moment, while a semaphore allows a count greater than 1, i.e. the shared resource may be accessed by several different processes simultaneously. Of course, the semaphore's counter can also be set to 1, in which case the semaphore is called a mutex.

Second, a spin lock is used to protect shared resources whose operations complete in a short time; process sleep and process switching are not allowed while it is held. A semaphore is typically used for shared resources that may be unavailable for a while: if acquisition fails, the process enters an uninterruptible sleep state and can only be woken by the process that releases the resource.

Finally, a spin lock may be used in an interrupt service routine, but a semaphore may not, because an interrupt service routine is not allowed to sleep.

That covers the basics of semaphores; next, let's look at how they are implemented in the kernel. The kernel version discussed in this article is linux-2.6.24.

1 Data structure
```c
struct semaphore {
	atomic_t count;
	int sleepers;
	wait_queue_head_t wait;
};
```

The data structure used by the semaphore is struct semaphore, which has three members: count is the shared count value, sleepers is the number of processes that have gone to sleep waiting on this semaphore, and wait is the semaphore's wait queue.

2 Semaphore usage

A semaphore must be initialized before use; initialization simply sets the shared count and the wait queue, and the number of sleeping processes starts at 0. This article focuses on the use and implementation of semaphores. The semaphore API:
```c
static inline void down(struct semaphore * sem);	/* acquire the semaphore; sleep if it is unavailable */
static inline void up(struct semaphore * sem);	/* release the semaphore and wake the first process in the wait queue */
```

A semaphore is used as follows:
```c
down(sem);
/* ... critical section ... */
up(sem);
```

The kernel guarantees that the number of processes inside the critical section never exceeds the count the semaphore was initialized with. A process that fails to acquire the semaphore enters an uninterruptible sleep, waiting on the semaphore's wait queue; when a process releases the semaphore, it wakes the first process in that queue.

3 Semaphore implementation

3.1 down(sem)
```c
static inline void down(struct semaphore * sem)
{
	might_sleep();

	__asm__ __volatile__(
		"# atomic down operation\n\t"
		LOCK_PREFIX "decl %0\n\t"	/* --sem->count */
		"jns 2f\n"
		"\tlea %0,%%eax\n\t"
		"call __down_failed\n"
		"2:"
		:"+m" (sem->count)
		:
		:"memory","ax");
}
```

This is inline assembly, where %0 stands for sem->count. The code first decrements sem->count by 1; LOCK_PREFIX locks the bus while the decl instruction executes, ensuring the decrement is atomic. If the result is greater than or equal to 0 (the sign flag is not set), execution jumps to label 2, skipping the call to __down_failed, and the function returns with the semaphore successfully acquired. Otherwise sem->count is negative after the decrement, so the address of the counter is loaded into %eax and __down_failed is called. Now look at the definition of __down_failed:
```
ENTRY(__down_failed)
	CFI_STARTPROC
	FRAME
	pushl %edx
	CFI_ADJUST_CFA_OFFSET 4
	CFI_REL_OFFSET edx,0
	pushl %ecx
	CFI_ADJUST_CFA_OFFSET 4
	CFI_REL_OFFSET ecx,0
	call __down
	popl %ecx
	CFI_ADJUST_CFA_OFFSET -4
	CFI_RESTORE ecx
	popl %edx
	CFI_ADJUST_CFA_OFFSET -4
	CFI_RESTORE edx
	ENDFRAME
	ret
	CFI_ENDPROC
	END(__down_failed)
```

The pushl and popl instructions save and restore %ecx and %edx around the call, since the C function __down may clobber them while down()'s inline assembly does not expect that; the semaphore's address is already in %eax, which is how the fastcall __down receives its argument. The CFI_* lines are call frame information annotations used by debuggers and stack unwinders; they generate no machine code. The real work happens in __down, so let's look at its definition:
```c
fastcall void __sched __down(struct semaphore * sem)
{
	struct task_struct *tsk = current;
	DECLARE_WAITQUEUE(wait, tsk);
	unsigned long flags;

	tsk->state = TASK_UNINTERRUPTIBLE;
	spin_lock_irqsave(&sem->wait.lock, flags);
	add_wait_queue_exclusive_locked(&sem->wait, &wait);

	sem->sleepers++;
	for (;;) {
		int sleepers = sem->sleepers;

		/*
		 * Add "everybody else" into it. They aren't
		 * playing, because we own the spinlock in
		 * the wait_queue_head.
		 */
		if (!atomic_add_negative(sleepers - 1, &sem->count)) {
			sem->sleepers = 0;
			break;
		}
		sem->sleepers = 1;	/* us - see -1 above */
		spin_unlock_irqrestore(&sem->wait.lock, flags);

		schedule();

		spin_lock_irqsave(&sem->wait.lock, flags);
		tsk->state = TASK_UNINTERRUPTIBLE;
	}
	remove_wait_queue_locked(&sem->wait, &wait);
	wake_up_locked(&sem->wait);
	spin_unlock_irqrestore(&sem->wait.lock, flags);
	tsk->state = TASK_RUNNING;
}
```

The function marks the current process TASK_UNINTERRUPTIBLE and adds it to the semaphore's wait queue as an exclusive waiter, then loops: as long as atomic_add_negative() reports that the adjusted count is still negative, the process calls schedule() and sleeps until up() wakes it; once the count is non-negative the semaphore has been acquired. On the way out the process removes itself from the wait queue, wakes the next waiter so that it can re-check the count, and sets its own state back to TASK_RUNNING.