Edit October 26, 2014: The commenter known as dirbase corrected me on some errors I made in this post, which I will now be correcting. Thank you very much for the constructive feedback, dirbase. :)
Still being a bit lazy about the blog -- I've been busy reading and working, both of which have longer quantums than writing for this blog, apparently.
Basically I just wanted to take a moment out of this Sunday afternoon to discuss thread quantums. This entire post is inspired by Windows Internals, 6th Ed. by Mark Russinovich, et al. As always, I could not recommend the book more highly.
So, we've all seen this screen, right? Adjust processor scheduling for best performance of "Programs" or "Background services:"
Well that seems like a very simple, straightforward choice... but who knows what it actually means? To answer that question, we need to know about a basic mechanism that Windows uses: Quantum.
A thread quantum is the amount of time a thread is allowed to run before Windows checks whether another thread at the same priority is waiting for its chance to run. If no other thread of the same priority is waiting, the thread is allowed to run for another quantum.
Process priority is the attribute you can set on a process in Task Manager or Process Explorer by right-clicking the process and choosing a priority. Even though it's threads that actually "run," not processes per se, each process can have many threads that come and go dynamically, so Windows lets you set a priority per process, and each thread of that process inherits its base priority from its parent process. (Threads actually have two priority attributes, a base priority and a current priority; scheduling decisions are made based on the thread's current priority.)
There are 32 process priority levels, 0-31, which are often given simplified labels such as "Normal," "Above Normal," "Realtime," etc. All of them map to just 0-1 on the Interrupt Request Level (IRQL) scale. What this means is that if you set a process to run at "Realtime" - the highest possible priority - the process and its threads still cannot preempt or block hardware interrupts, but they could delay or even block the execution of important system threads, not to mention all other code running at Passive level. That is why you should have a very good reason for setting a process to such a high priority: doing so can affect system-wide stability.
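For reference, here's how the named priority classes you see in Task Manager map onto that 0-31 scale. This is a quick sketch in Python using the base priority values documented for the Win32 SetPriorityClass API:

```python
# Documented base priority levels for the named process priority classes
# (per the Windows SetPriorityClass documentation).
PRIORITY_CLASS_BASE = {
    "Idle": 4,
    "Below Normal": 6,
    "Normal": 8,
    "Above Normal": 10,
    "High": 13,
    "Realtime": 24,
}

# All of them fit within the 0-31 priority range; only "Realtime" sits in
# the upper (16-31) band where it competes with important system threads.
assert all(0 <= p <= 31 for p in PRIORITY_CLASS_BASE.values())
print(PRIORITY_CLASS_BASE["Realtime"])  # 24
```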
So now, back to quantums. We know the definition, but how long exactly is a quantum? That depends on your hardware clock resolution (not to be confused with the timer expiration frequency), the speed of your processor, and whether you have configured the setting pictured above to optimize performance for "Programs" or "Background services." As of Windows 7 and Windows Server 2008 R2, client editions are configured to let threads run for 2 clock intervals before another scheduling decision is made, while on servers it's 12 clock intervals. So when you change that setting on the Performance Options page, you are bouncing back and forth between those two values.
The reasoning behind the longer default quantums on server operating systems is to minimize context switching: when a process on a server is woken up, a longer quantum gives it a better chance of completing the request and going back to sleep without being interrupted in between. On desktop OSes, on the other hand, shorter quantums can make things feel "snappier," leading to a better interactive experience.
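Those two defaults translate into concrete quantum lengths. Here's a quick sanity check, assuming the typical 15.625 ms clock interval (usually rounded to 15.6 ms in print):

```python
# Quantum lengths implied by the client and server defaults, assuming the
# typical 15.625 ms hardware clock interval.
CLOCK_INTERVAL_MS = 15.625

client_quantum_ms = 2 * CLOCK_INTERVAL_MS    # "Programs": 2 clock intervals
server_quantum_ms = 12 * CLOCK_INTERVAL_MS   # "Background services": 12 intervals

print(client_quantum_ms)   # 31.25
print(server_quantum_ms)   # 187.5
```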
As I said before, the resolution of your system clock is a factor in determining how long a quantum really is. On contemporary x86 and x64 systems, it's usually 15.6 milliseconds, and it's set by the HAL, not the OS kernel. You can see what it's set to for yourself by using a kernel debugger and examining KeMaximumIncrement:
Reading the bytes right to left, if you convert 02, 62, 5A to decimal, you get 156250. That value is expressed in 100-nanosecond units, so it represents 15.625ms, the familiar ~15.6ms figure. Don't confuse this value with the timer expiration frequency/interval; the two values are related, but different.
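The conversion is easy to reproduce. A small sketch, using the byte values from my debugger output above:

```python
# KeMaximumIncrement is stored little-endian, in 100-nanosecond units.
# The bytes 5A 62 02, read right to left, form the hex value 0x02625A.
ke_maximum_increment = 0x02625A
print(ke_maximum_increment)        # 156250

# There are 10,000 hundred-nanosecond units in a millisecond.
interval_ms = ke_maximum_increment / 10_000
print(interval_ms)                 # 15.625
```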
There are a couple of different ways to obtain timer expiration frequency/interval. One way is with clockres.exe from Sysinternals:
Notice that the maximum timer interval is the familiar 15.6 milliseconds, which is also my hardware clock interval. But my current timer interval is 1ms. Programs running on your system can request system-wide changes to this timer, which is what has happened here. You can use powercfg.exe -energy to run a power efficiency analysis of your system that will identify processes that have made such requests to increase the resolution of the system timer. When timer expirations fire at a higher frequency, that causes the system to use more energy, which can be of significant concern on laptops and mobile devices that run on battery power. In my case, it's usually Google Chrome that asks that the system timer resolution be increased from its default of 15.6ms. But remember that even when this timer interval changes, it doesn't change the length of thread quantums, as thread quantum calculation is done using the max or base clock interval.
When Windows boots up, it takes the KeMaximumIncrement value above, converted to seconds, multiplies it by the processor speed in Hertz, divides the result by 3, and stores it in KiCyclesPerClockQuantum:
Converted to decimal, that is 17151040 CPU cycles.
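We can reproduce the kernel's calculation ourselves. This is just a sketch, assuming my machine's 3.293 GHz base frequency; the stored value differs by a cycle or two because the kernel works in integer arithmetic:

```python
# Reproducing the KiCyclesPerClockQuantum calculation.
cpu_hz = 3_293_000_000                    # base frequency of my i5-2500K
clock_interval_s = 156250 / 10_000_000    # KeMaximumIncrement, 100-ns units -> seconds

cycles_per_quantum = cpu_hz * clock_interval_s / 3
print(int(cycles_per_quantum))   # 17151041, within a couple cycles of the stored value
```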
The other factor in determining the length of a quantum is base processor frequency. You can obtain this value in several different ways, including using the !cpuinfo command in the debugger:
3.293 GHz is the base frequency of my processor, even though a command such as PowerShell's
would report 3.801 GHz as the maximum frequency. This is a slightly overclocked Intel i5-2500k. Now that we have those two pieces of information, all that's left is some good old fashioned arithmetic:
The CPU completes 3,293,000,000 cycles per second, and the max timer interval, as well as the hardware clock resolution, is 15.6 ms. 3293000000 * 0.0156 = 51370800 CPU cycles per clock interval.
1 Quantum Unit = 1/3 (one third) of a clock interval, therefore 1 Quantum Unit = 17123600 CPU cycles.
This is only a tiny amount off from the value stored in KiCyclesPerClockQuantum; the difference is just rounding error from using 15.6ms instead of the exact 15.625ms.
At a rate of 3.293GHz, each CPU cycle takes about 304 picoseconds, which works out to about 5.2 milliseconds per quantum unit. Since my PC is configured for thread quantums of 2 clock intervals, and each clock interval is 3 quantum units, my PC makes a thread scheduling decision about every 31 milliseconds.
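The arithmetic above can be sketched end to end, using the exact 15.625 ms interval rather than the rounded 15.6 ms:

```python
# End-to-end quantum arithmetic for a client ("Programs") configuration.
clock_interval_ms = 15.625
quantum_units_per_interval = 3    # 1 quantum unit = 1/3 of a clock interval
intervals_per_quantum = 2         # client default: 2 clock intervals per quantum

ms_per_quantum_unit = clock_interval_ms / quantum_units_per_interval
scheduling_interval_ms = intervals_per_quantum * clock_interval_ms

print(round(ms_per_quantum_unit, 2))   # 5.21
print(scheduling_interval_ms)          # 31.25, i.e. "about every 31 ms"
```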
Now there's one final complication: by using the "Programs" performance setting as opposed to "Background services," you are also enabling variable-length quantums, whereas a typically configured server uses fixed-length, 12-clock-interval quantums. But I'll leave off here; if you're interested in knowing more about variable-length quantums, I suggest the book I mentioned at the beginning of this post.