The relationship between spinlocks and Linux kernel scheduling

There are many articles about spin lock usage, but some of them gloss over the details. Here I will look at the points that I think are most likely to raise questions.

1. Introduction to spin locks (spinlock)

A spin lock can be held by at most one kernel task at a time, so only one thread is allowed inside the critical region at any moment. This provides the locking needed on multiprocessor machines, and on a single processor running a preemptive kernel.
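A minimal usage sketch (assuming kernel module code; mr_lock and shared_count are made-up names for illustration):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(mr_lock);   /* static initialization of a spin lock */
static int shared_count;           /* shared data protected by mr_lock */

static void bump_count(void)
{
        spin_lock(&mr_lock);       /* only one task can hold the lock at a time */
        shared_count++;            /* critical region: keep it short, never sleep */
        spin_unlock(&mr_lock);
}

Any other task calling bump_count() concurrently will busy-wait ("spin") in spin_lock() until the lock is released.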

2. Introduction to semaphores

The semaphore is also introduced here, because it is used in places similar to the spin lock.

A semaphore in Linux is a sleeping lock. If a task attempts to acquire a semaphore that is already held, the semaphore puts it on a wait queue and lets it sleep. The processor is then free to execute other code. When the task holding the semaphore releases it, one of the tasks on the wait queue is woken up and can then acquire the semaphore.
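A minimal sketch of the equivalent semaphore usage (mr_sem, my_setup and do_work are made-up names; sema_init with a count of 1 gives mutual exclusion):

#include <linux/semaphore.h>
#include <linux/errno.h>

static struct semaphore mr_sem;

static void my_setup(void)
{
        sema_init(&mr_sem, 1);             /* count of 1: at most one holder */
}

static int do_work(void)
{
        if (down_interruptible(&mr_sem))   /* may sleep waiting for the semaphore */
                return -ERESTARTSYS;       /* interrupted by a signal while sleeping */
        /* critical region: code here is allowed to sleep */
        up(&mr_sem);                       /* release; wakes one waiter, if any */
        return 0;
}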

3. Comparing spin locks and semaphores

In many places either a spin lock or a semaphore can be used, but in some places only one of them is appropriate. Let's compare typical usage of the two.

Table 1-1 Comparing spin locks and semaphores

1. Low overhead locking, short critical-section run time: spin lock preferred.
2. Low overhead locking, long critical-section run time: semaphore preferred.
3. The critical section may contain code that sleeps: a spin lock is impossible; choose the semaphore.
4. The critical section is in non-process context and must not sleep: spin lock preferred; even if a semaphore is chosen, it can only be acquired non-blockingly, with down_trylock.
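For case 4, a hedged sketch of the non-blocking acquisition (mr_sem as above, made up for illustration); down_trylock() returns nonzero when the semaphore could not be taken, so the caller never sleeps:

/* e.g. inside an interrupt handler or other atomic context */
if (down_trylock(&mr_sem)) {
        /* semaphore is busy and we must not sleep, so give up */
        return -EBUSY;
}
/* ... critical region ... */
up(&mr_sem);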

4. Spin locks and Linux kernel scheduling

Consider case 3 in Table 1-1 above (the other cases are easier to understand): if the critical region may contain code that sleeps, you cannot use a spin lock, because that can cause deadlock. So why can code protected by a semaphore sleep, while code protected by a spin lock cannot?

Let's look at the implementation of the spin lock. A spin lock is basically used in the following form:

spin_lock(&mr_lock);
/* critical region */
spin_unlock(&mr_lock);

Tracing the implementation of spin_lock(&mr_lock) (here the uniprocessor version):
#define spin_lock(lock) _spin_lock(lock)
#define _spin_lock(lock) __LOCK(lock)
#define __LOCK(lock) \
do { preempt_disable(); __acquire(lock); (void)(lock); } while (0)
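For comparison, the matching unlock path in the same uniprocessor implementation re-enables preemption (macro names as in kernels of that era; details vary by version):

#define spin_unlock(lock) _spin_unlock(lock)
#define _spin_unlock(lock) __UNLOCK(lock)
#define __UNLOCK(lock) \
do { preempt_enable(); __release(lock); (void)(lock); } while (0)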

Note the call to preempt_disable(): it turns kernel preemption off (preemption is turned back on in spin_unlock, as shown above). So the region protected by a spin lock runs in a non-preemptible state; even while a task spins without having obtained the lock, preemption stays disabled. Knowing this, we can see why spinlock-protected code must not sleep.

Imagine that code in the middle of a spinlock-protected region sleeps, and process scheduling occurs at that point. Another process may then enter the code protected by the same spin lock and start spinning on it. Since preemption is disabled even in the spinning state, and spinning itself never sleeps, no further process scheduling can happen on this processor; the sleeping lock holder never runs again to release the lock, and deadlock naturally follows.
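To make the failure concrete, here is a sketch of the broken pattern and its fix (buf and len are made up; the key fact is that kmalloc(..., GFP_KERNEL) may sleep, while GFP_ATOMIC never does):

spin_lock(&mr_lock);
buf = kmalloc(len, GFP_KERNEL);   /* BUG: may sleep while holding a spin lock */
spin_unlock(&mr_lock);

spin_lock(&mr_lock);
buf = kmalloc(len, GFP_ATOMIC);   /* OK: atomic allocation fails rather than sleeps */
spin_unlock(&mr_lock);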

We can now summarize the behavior of spin locks:
- Uniprocessor, non-preemptive kernel: spin locks compile away to nothing;
- Uniprocessor, preemptive kernel: the spin lock acts only as a switch that disables and re-enables kernel preemption;
- Multiprocessor: the spin lock is fully used. In the kernel it serves both to prevent concurrent access to the critical region from multiple processors and to prevent races caused by kernel preemption.

5. When preemption occurs in Linux

Finally, let's understand when preemption occurs in Linux. Preemption is divided into user preemption and kernel preemption.

User preemption occurs in the following cases:
- when returning to user space from a system call;
- when returning to user space from an interrupt handler.

Kernel preemption occurs in the following cases:
- when returning to kernel space from an interrupt handler, if the kernel is preemptible at that time;
- when kernel code becomes preemptible again (e.g. at spin_unlock; see the sketch after this list);
- when a task in the kernel explicitly calls schedule();
- when a task in the kernel blocks (which also ends up calling schedule()).
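A simplified sketch of what such a preemption point does (modeled on the kernel's preempt_enable() path, not the literal source):

#define preempt_enable() \
do { \
        preempt_count_dec();                        /* leave the non-preemptible region */ \
        if (!preempt_count() && need_resched())    /* preemptible again and a resched is pending */ \
                preempt_schedule();                /* invoke the scheduler */ \
} while (0)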

Basic process scheduling happens after a clock interrupt: the handler finds that the current process has used up its time slice, and the process is preempted. We can also use the kernel preemption that happens when an interrupt handler returns to kernel space to improve the real-time behavior of some I/O operations. When an I/O event occurs, the corresponding interrupt handler runs; when it finds a process waiting for this I/O event, it wakes that process and sets the need_resched flag of the currently executing process. When the interrupt handler returns, the scheduler is invoked, and the process waiting for the I/O event (probably) gains the CPU. This ensures a relatively fast response (on the order of milliseconds) to I/O events: handling the I/O event preempts the current process, and the system's response time is independent of the length of the scheduling time slice.
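A hedged sketch of that flow (my_wq, data_ready, and the function names are made up): the reader sleeps on a wait queue; the interrupt handler wakes it, which can set need_resched on the current task, so the scheduler runs as soon as the handler returns to a preemptible kernel:

#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/errno.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);   /* wait queue for the I/O event */
static int data_ready;

/* process context: sleep until the device signals completion */
static int my_read(void)
{
        if (wait_event_interruptible(my_wq, data_ready))
                return -ERESTARTSYS;
        data_ready = 0;
        return 0;
}

/* interrupt context: runs when the I/O event occurs */
static irqreturn_t my_isr(int irq, void *dev)
{
        data_ready = 1;
        wake_up_interruptible(&my_wq);    /* wakes the reader; may set need_resched */
        return IRQ_HANDLED;
}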
