.NET
The CLR's Thread Pool
Jeffrey Richter
Microsoft is always trying to improve the performance of its platforms and applications. Many years ago, Microsoft researched how threads were being used by application developers to see what could be done to improve their use. Out of this research came a very important discovery: developers frequently created a new thread to perform a single task and when the task was complete, the thread would die.
This pattern is extremely common in a server application. A client makes a request of the server, the server creates a thread to process the client's request, and then, when the client's request is complete, the server's thread dies. Compared to a process, creating and destroying a thread is fast and uses fewer OS resources. But creating and destroying threads is certainly not free.
To create a thread, a kernel object is allocated and initialized, the thread's stack memory is allocated and initialized, and Windows® sends every DLL in the process a DLL_THREAD_ATTACH notification, causing pages from disk to be faulted into memory so that code can execute. When a thread dies, every DLL is sent a DLL_THREAD_DETACH notification, the thread's stack memory is freed, and the kernel object is freed (if its usage count goes to 0). So, there is a lot of overhead associated with creating and destroying a thread that has nothing to do with the work that the thread was created to perform in the first place.
The Birth of the Thread Pool
The result of this study led Microsoft to implement a thread pool, which first appeared in Windows 2000. When the Microsoft® .NET Framework team was designing and building the common language runtime (CLR), they decided to implement a thread pool right in the CLR itself. This way, any managed application could take advantage of a thread pool even if the application was running on a version of Windows prior to Windows 2000 (such as Windows 98).
When the CLR initializes, its thread pool contains no threads. When the application wants to perform a task, instead of creating a new thread it should request that the task be performed by a thread pool thread. The thread pool sees that it contains no threads and will create an initial thread. This new thread will go through the same initialization as any other thread; but, when the task is complete, the thread will not destroy itself. Instead, the thread will return to the thread pool in a suspended state. If the application makes another request of the thread pool, then the suspended thread will just wake up and perform the task, and a new thread will not be created. This saves a lot of overhead. As long as the application queues tasks to the thread pool no faster than the one thread can handle each task, the same thread gets reused over and over again, saving an enormous amount of overhead over the app's lifetime.
Now, if the application queues up tasks for the thread pool faster than the one thread can handle them, then the thread pool will create additional threads. Of course, creating new threads does generate overhead, but it is very likely that the application will require just a few threads to handle all of the tasks thrown at it over its lifetime. So, overall, the application's performance improves by using the thread pool.
Now, you might be wondering what happens if the thread pool contains many threads and the workload on the application diminishes. In this case, the thread pool contains several threads that are sitting suspended for long periods of time, wasting OS resources. Microsoft thought about this, too. When a thread pool thread suspends itself, it waits for 40 seconds. If 40 seconds elapse and the thread is given nothing to do, then the thread wakes up and destroys itself, freeing all the OS resources (stack, kernel object, and so forth) that it was using. Also, it probably doesn't hurt performance to have the thread wake up and destroy itself, because the application can't be doing too much anyway or the thread would have resumed execution. By the way, though I said that threads in the thread pool wake themselves up after 40 seconds, the actual amount of time is not documented and is subject to change.
The cool thing about a thread pool is that it is heuristic. If your application needs to perform many tasks, then the thread pool creates more threads. If your application's workload dies down, then the thread pool threads kill themselves. The thread pool's algorithms ensure that it contains as many threads as required by the workload placed on it!
So, hopefully, you now understand the general concept behind a thread pool and see the performance advantages that it can offer. At this time, I'd like to show you some code demonstrating how to use the thread pool. First, you should know that the thread pool offers four capabilities: calling a method asynchronously, calling a method at timed intervals, calling a method when a single kernel object becomes signaled, and calling a method when an asynchronous I/O request completes.
The first three capabilities are quite useful and I will demonstrate them in this column. However, the fourth capability is very rarely used by application developers, so I will not demonstrate it here; perhaps I'll cover it in a future column.
Capability 1: Calling a Method Asynchronously
In your application, if you have code in which you create a new thread to perform a task, I recommend that you replace that code with new code that directs the thread pool to perform the task instead. In fact, you'll generally find that it is easier to have the thread pool perform a task than it is to create a new, dedicated thread to perform it.
To queue a task for the thread pool, you use the ThreadPool class defined in the System.Threading namespace. The ThreadPool class offers only static methods and no instance of it can be constructed. To have a thread pool thread call a method asynchronously, your code must call one of ThreadPool's overloaded QueueUserWorkItem methods, as shown here:
public static Boolean QueueUserWorkItem(WaitCallback wc);
public static Boolean QueueUserWorkItem(WaitCallback wc, Object state);
These methods queue a "work item" (and optional state data) to the thread pool and return immediately. A work item is simply a method (identified by the wc parameter) that is called and passed a single parameter, state (the state data). The version of QueueUserWorkItem without the state parameter passes null to the callback method. Eventually, some thread in the pool will process the work item, causing your method to be called. The callback method you write must match the System.Threading.WaitCallback delegate type, which is defined as follows:
public delegate void WaitCallback(Object state);
Notice that you never call any method that creates a thread yourself; the CLR's thread pool will automatically create a thread, if necessary, and reuse an existing thread if possible. Also, this thread is not immediately destroyed after it processes the callback method; it goes back into the thread pool so that it is ready to handle any other work items in the queue. Using QueueUserWorkItem might make your application more efficient because you won't be creating and destroying threads for every single client request.
The code in Figure 1 demonstrates how to have the thread pool call a method asynchronously.
Figure 1 Thread Pool Calls a Method
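What follows is a minimal sketch of this technique rather than the magazine's actual listing; the ComputeBoundOp method name and the state value of 5 are assumptions of my own.

using System;
using System.Threading;

class App {
   static void Main() {
      Console.WriteLine("Main thread: queuing an asynchronous operation.");

      // Ask a thread pool thread to call ComputeBoundOp, passing 5 as the state data.
      ThreadPool.QueueUserWorkItem(new WaitCallback(ComputeBoundOp), 5);

      Console.WriteLine("Main thread: doing other work here...");
      Thread.Sleep(5000);   // Simulate other work (5 seconds).
      Console.WriteLine("Hit <Enter> to end this program...");
      Console.ReadLine();
   }

   // This method must match the WaitCallback delegate: void Method(Object state).
   static void ComputeBoundOp(Object state) {
      Console.WriteLine("In ComputeBoundOp: state = " + state);
      Thread.Sleep(1000);   // Simulate some work.
      // When this method returns, the thread goes back into the pool.
   }
}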
Capability 2: Calling a Method at Timed Intervals
If your application needs to perform a certain task at a certain time or if your application needs to execute some method periodically, the thread pool is the perfect thing for you to use. The System.Threading namespace defines the Timer class. When you construct an instance of the Timer class, you are telling the thread pool that you want a method of yours called back at a particular time in the future. The Timer class offers four constructors:
public Timer(TimerCallback callback, Object state, Int32 dueTime, Int32 period);
public Timer(TimerCallback callback, Object state, UInt32 dueTime, UInt32 period);
public Timer(TimerCallback callback, Object state, Int64 dueTime, Int64 period);
public Timer(TimerCallback callback, Object state, TimeSpan dueTime, TimeSpan period);
All four constructors construct a Timer object identically. The callback parameter identifies the method that you want called back by a thread pool thread. Of course, the callback method you write must match the System.Threading.TimerCallback delegate type, which is defined as follows:
public delegate void TimerCallback(Object state);
The constructor's state parameter allows you to pass state data to the callback method; you can pass null if you have no state data to pass. You use the dueTime parameter to tell the thread pool how many milliseconds to wait before calling your callback method for the very first time. You can specify the number of milliseconds using a signed or unsigned 32-bit value, a signed 64-bit value, or a TimeSpan value. If you want the callback method called immediately, specify 0 for the dueTime parameter. The last parameter, period, allows you to specify how long, in milliseconds, to wait before each successive call. If you pass 0 for this parameter, then the thread pool will call the callback method just once.
After constructing a Timer object, the thread pool knows what to do and monitors the time automatically for you. However, the Timer class offers some additional methods allowing you to communicate with the thread pool to modify when (or if) the method should be called back. Specifically, the Timer class offers several Change and Dispose methods:
public Boolean Change(Int32 dueTime, Int32 period);
public Boolean Change(UInt32 dueTime, UInt32 period);
public Boolean Change(Int64 dueTime, Int64 period);
public Boolean Change(TimeSpan dueTime, TimeSpan period);
public void Dispose();
public Boolean Dispose(WaitHandle notifyObject);
The Change method allows you to change the Timer object's due time and period. The Dispose method allows you to cancel the callback altogether and optionally signal the kernel object identified by the notifyObject parameter when all pending callbacks for the timer have completed.
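As a hedged illustration (the Tick callback and the timings here are my own, not from the article), Change and Dispose(WaitHandle) might be used like this:

using System;
using System.Threading;

class ChangeDemo {
   static void Main() {
      // Call Tick immediately and then once per second.
      Timer timer = new Timer(new TimerCallback(Tick), null, 0, 1000);
      Thread.Sleep(3500);        // Let a few one-second callbacks occur.

      // Change the timer: fire once right away, then every 5000 ms instead.
      timer.Change(0, 5000);
      Thread.Sleep(6000);

      // Dispose the timer and ask to be signaled once all pending callbacks have finished.
      ManualResetEvent done = new ManualResetEvent(false);
      timer.Dispose(done);
      done.WaitOne();
      Console.WriteLine("Timer disposed; no more callbacks will occur.");
   }

   static void Tick(Object state) {
      Console.WriteLine("Tick at " + DateTime.Now.ToLongTimeString());
   }
}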
The code in Figure 2 demonstrates how to have a thread pool thread call a method immediately and every 2000 milliseconds (or two seconds) thereafter.
Figure 2 Using the Period Parameter
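Again, this is a minimal sketch of the idea rather than the original figure's listing; the CheckStatus callback name is hypothetical.

using System;
using System.Threading;

class TimerDemo {
   static void Main() {
      Console.WriteLine("Checking status immediately and then every 2 seconds.");

      // dueTime = 0: call CheckStatus right away; period = 2000: call it every 2000 ms thereafter.
      Timer timer = new Timer(new TimerCallback(CheckStatus), null, 0, 2000);

      Console.WriteLine("Hit <Enter> to end the sample...");
      Console.ReadLine();
      timer.Dispose();   // Cancel any further callbacks.
   }

   static void CheckStatus(Object state) {
      Console.WriteLine("Checking status at " + DateTime.Now.ToLongTimeString());
   }
}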
Capability 3: Calling a Method When a Single Kernel Object Becomes Signaled
While doing their performance studies, Microsoft researchers discovered that many applications spawn threads simply to wait for a single kernel object to become signaled. Once the object is signaled, the thread posts some sort of notification to another thread and then loops back, waiting for the object to signal again. Some developers even write code in which several threads each wait on a single object. This is a big waste of system resources. So, if you currently have threads in your application that wait for single kernel objects to become signaled, then the thread pool is, again, the perfect resource for you to increase your application's performance.
To have a thread pool thread call your callback method when a kernel object becomes signaled, you again use static methods defined in the System.Threading.ThreadPool class. Specifically, your code must call one of the overloaded RegisterWaitForSingleObject methods you see in Figure 3.
Figure 3 RegisterWaitForSingleObject Methods
public static RegisteredWaitHandle RegisterWaitForSingleObject(
   WaitHandle h, WaitOrTimerCallback callback, Object state,
   Int32 milliseconds, Boolean executeOnlyOnce);
(Additional overloads accept the timeout as a UInt32, an Int64, or a TimeSpan.)
When you call one of these methods, the h parameter identifies the kernel object that you want the thread pool to wait on. Since this parameter is of the abstract base class System.Threading.WaitHandle, you can specify any class derived from this base class. Specifically, you can pass a reference to an AutoResetEvent, ManualResetEvent, or Mutex object. The second parameter, callback, identifies the method that you want the thread pool thread to call. The callback method that you implement must match the System.Threading.WaitOrTimerCallback delegate type, which is defined in the following line of code:
public delegate void WaitOrTimerCallback(Object state, Boolean timedOut);
The third parameter, state, allows you to specify some state data that should be passed to the callback method; pass null if you have no special state data to pass. The fourth parameter, milliseconds, allows you to tell the thread pool how long it should wait for the kernel object to become signaled. It is common to pass -1 here to indicate an infinite timeout. If the last parameter, executeOnlyOnce, is true, then a thread pool thread will execute the callback method just once. But, if executeOnlyOnce is false, then a thread pool thread will execute the callback method every time the kernel object is signaled. This is most useful with an AutoResetEvent object.
When the callback method is called, it is passed state data and a Boolean value, timedOut. If timedOut is false, then the method knows that it is being called because the kernel object became signaled. If timedOut is true, then the method knows it is being called because the kernel object did not become signaled in the time specified. The callback method should perform whatever action is necessary.
In the prototypes shown earlier, you'll notice that the RegisterWaitForSingleObject method returns a RegisteredWaitHandle object. This object identifies the kernel object that the thread pool is waiting on. If, for some reason, your application wants to tell the thread pool to stop watching the registered wait handle, your application can call RegisteredWaitHandle's Unregister method:
public Boolean Unregister(WaitHandle waitObject);
The waitObject parameter indicates how you want to be notified when all queued work items have executed. You should pass null for this parameter if you don't want a notification. If you pass a valid reference to a WaitHandle-derived object, then the thread pool will signal the object when all pending work items for the registered wait handle have executed.
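Here is a short, hedged sketch of that notification pattern; the names are mine, not the article's. It unregisters a wait and blocks until every pending callback for the registration has finished.

using System;
using System.Threading;

class UnregisterDemo {
   static void Main() {
      AutoResetEvent are = new AutoResetEvent(false);
      RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
         are, new WaitOrTimerCallback(OnSignal), null, -1, false);

      are.Set();            // Let one callback run.
      Thread.Sleep(100);

      // Pass a WaitHandle to Unregister; the thread pool signals it once every
      // queued callback for this registration has completed.
      ManualResetEvent allDone = new ManualResetEvent(false);
      rwh.Unregister(allDone);
      allDone.WaitOne();
      Console.WriteLine("All callbacks have completed; it is now safe to clean up.");
   }

   static void OnSignal(Object state, Boolean timedOut) {
      Console.WriteLine("Callback executed.");
   }
}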
The code in Figure 4 demonstrates how to have a thread pool thread call a method whenever a kernel object becomes signaled.
Figure 4 Method Called When Object Signaled
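The following is a minimal sketch of this technique, not the magazine figure itself; the EventOperation callback name and the three-iteration loop are my own. It registers a wait on an AutoResetEvent so that a thread pool thread runs the callback each time the event is signaled.

using System;
using System.Threading;

class SignalDemo {
   static void Main() {
      AutoResetEvent are = new AutoResetEvent(false);

      // Have the thread pool call EventOperation each time 'are' becomes signaled.
      // -1 means an infinite timeout; false means keep calling back on every signal.
      RegisteredWaitHandle rwh = ThreadPool.RegisterWaitForSingleObject(
         are, new WaitOrTimerCallback(EventOperation), null, -1, false);

      for (Int32 n = 0; n < 3; n++) {
         are.Set();           // Signal the event; a pool thread runs the callback.
         Thread.Sleep(1000);
      }

      rwh.Unregister(null);   // Stop watching the event; no notification requested.
      Console.WriteLine("Done.");
   }

   // Must match the WaitOrTimerCallback delegate: void Method(Object state, Boolean timedOut).
   static void EventOperation(Object state, Boolean timedOut) {
      Console.WriteLine(timedOut ? "Wait timed out." : "The event became signaled.");
   }
}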
Conclusion
In this column, I've explained the need for thread pools and demonstrated how to use the various capabilities offered by the CLR's thread pool. By now you should see the value that a thread pool can bring to your own development efforts to improve your application's performance and simplify your own code.
Send your questions and comments for Jeff to dot-net@microsoft.com.