By default, each thread's stack reserves 1 MB of memory.
A process on 32-bit Windows has only 2 GB of user-mode address space, so in theory a single process can create at most about 2,048 threads.
Of course the address space cannot all be used for thread stacks, so the actual number is somewhat smaller.
You can also shrink the default stack size at link time; with a smaller default stack, more threads can be created.
For example, if the default stack is reduced to 512 KB, the theoretical maximum rises to about 4,096 threads.
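As a rough illustration, here is a minimal sketch assuming the MSVC toolchain: the default stack reserve recorded in the EXE header is what CreateThread uses when its dwStackSize argument is 0, and it can be set with the linker's /STACK option (or patched into an existing binary with editbin /STACK). The file names and sizes below are only examples.

/* Build with a 512 KB default stack reserve, e.g.
 *     cl threads.c /link /STACK:524288
 * or patch an existing binary:
 *     editbin /STACK:524288 threads.exe
 */
#include <windows.h>

DWORD WINAPI Worker(LPVOID lpParam)
{
    Sleep(INFINITE);                  /* keep the thread (and its stack) alive */
    return 0;
}

int main(void)
{
    DWORD id;
    /* dwStackSize = 0 means "use the default reserve from the EXE header",
     * i.e. 512 KB with the /STACK option above instead of the usual 1 MB. */
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, &id);
    if (h) CloseHandle(h);
    return 0;
}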
No matter how much physical memory you install, the number of threads a single process can create is still bounded by that 2 GB address space.
For example, even if the machine has 64 GB of physical RAM, each 32-bit process still gets a 4 GB virtual address space, of which only 2 GB is usable from user mode.
Within one machine, the total number of threads is also limited by memory: every thread object consumes some non-paged pool, and non-paged pool is finite, so once it is exhausted no more threads can be created.
The more physical memory the machine has, the higher this machine-wide limit on the total number of threads becomes.
On Windows, why does a process exit abnormally after spawning about 2,000 threads?
This happens because on 32-bit Windows a process can use at most 2 GB of virtual address space, while a thread's default stack reservation (StackSize) is 1,024 KB (1 MB). As the thread count approaches 2,000, 2,000 × 1,024 KB ≈ 2 GB, and the address space is effectively exhausted.
The MSDN documentation puts it this way:
“The number of threads a process can create is limited by the available virtual memory. By default, every thread has one megabyte of stack space. Therefore, you can create at most 2,028 threads. If you reduce the default stack size, you can create more threads. However, your application will have better performance if you create one thread per processor and build queues of requests for which the application maintains the context information. A thread would process all requests in a queue before processing requests in the next queue.”
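To see the limit directly, here is a minimal sketch (the function name Idle is illustrative) that keeps creating threads with the default 1 MB stack until CreateThread fails; on 32-bit Windows the count typically stalls somewhere around 2,000.

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI Idle(LPVOID lpParam)
{
    Sleep(INFINITE);                  /* just hold on to the stack reservation */
    return 0;
}

int main(void)
{
    int count = 0;
    for (;;) {
        DWORD id;
        HANDLE h = CreateThread(NULL, 0, Idle, NULL, 0, &id);
        if (h == NULL)
            break;                    /* out of address space (or some other limit) */
        CloseHandle(h);               /* the thread keeps running; we only drop our handle */
        ++count;
    }
    printf("created %d threads before failure (last error %lu)\n",
           count, GetLastError());
    return 0;
}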
How can this ~2,000-thread limit be exceeded?
You can shrink the thread stack (StackSize) through the CreateThread parameters, for example:
#include <windows.h>
#include <stdio.h>

#define MAX_THREADS 50000

DWORD WINAPI ThreadProc(LPVOID lpParam)
{
    while (1) {
        Sleep(100000);
    }
    return 0;
}

int main()
{
    DWORD dwThreadId[MAX_THREADS];
    HANDLE hThread[MAX_THREADS];
    for (int i = 0; i < MAX_THREADS; ++i) {
        /* Ask for a tiny stack; STACK_SIZE_PARAM_IS_A_RESERVATION makes the
         * size argument the reserve size instead of the initial commit size. */
        hThread[i] = CreateThread(0, 64, ThreadProc, 0,
                                  STACK_SIZE_PARAM_IS_A_RESERVATION,
                                  &dwThreadId[i]);
        if (0 == hThread[i]) {
            DWORD e = GetLastError();
            printf("%lu\r\n", e);     /* report why creation finally failed */
            break;
        }
    }
    ThreadProc(0);                    /* keep the main thread alive as well */
}
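Note that the 64-byte request above is only a lower bound: the system rounds the stack reserve up, and each stack region occupies at least one allocation-granularity slot of address space (typically 64 KB), so the practical ceiling is well below MAX_THREADS, though still an order of magnitude above the roughly 2,000 threads obtained with 1 MB stacks.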
Server-side program design
If your server is designed to create one thread per incoming client connection, you will run into this ~2,000 limit (for a given amount of memory and number of CPUs). The usual advice is:
The "one thread per client" model is well-known not to scale beyond a dozen clients or so. If you're going to be handling more than that many clients simultaneously, you should move to a model where instead of dedicating a thread to a client, you instead allocate an object. (Someday I'll muse on the duality between threads and objects.) Windows provides I/O completion ports and a thread pool to help you convert from a thread-based model to a work-item-based model.