A record of debugging an nginx upstream (back-to-origin) rate-limiting bug

On a production host, nginx was consuming a lot of CPU.

top - :: up  day, :,   users,  load average: 13.26, 13.20, 13.20
Tasks: total, running, sleeping, stopped, zombie
Cpu(s): 21.3%us, 18.0%sy, 0.0%ni, 57.4%id, 0.0%wa, 0.0%hi, 3.3%si, 0.0%st
Mem: 257556M total, 254371M used, 3184M free, 400M buffers
Swap: 0M total, 0M used, 0M free, 200639M cached

  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
root .9g .3g .2g R 1.3 :16.31 nginx
root .9g .5g .4g R 1.4 :16.87 nginx
root .9g .5g .4g R 1.4 :35.20 nginx
root .9g .6g .5g R 1.4 :12.50 nginx
root .9g .5g .4g R 1.4 :04.89 nginx
root .9g .7g .6g R 1.5 :46.03 nginx
root .9g .6g .5g R 1.4 :45.09 nginx
root .9g .7g .6g R 1.5 :42.75 nginx

Drilling into the individual threads with top showed that it was the main thread that was busy:

top - :: up  day, :,   users,  load average: 12.79, 12.84, 13.04
Tasks: total, running, sleeping, stopped, zombie
Cpu(s): 18.0%us, 21.5%sy, 0.1%ni, 54.6%id, 1.7%wa, 0.0%hi, 4.1%si, 0.0%st
Mem: 257556M total, 253504M used, 4052M free, 400M buffers
Swap: 0M total, 0M used, 0M free, 199627M cached

  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
root .9g .4g .3g S 1.3 :24.72 nginx
root .9g .4g .3g S 1.3 :07.05 nginx
root .9g .4g .3g S 1.3 :06.36 nginx
root .9g .4g .3g S 1.3 :07.07 nginx
root .9g .4g .3g S 1.3 :06.18 nginx
root .9g .4g .3g S 1.3 :06.70 nginx
root .9g .4g .3g S 1.3 :03.87 nginx
root .9g .4g .3g S 1.3 :06.16 nginx
root .9g .4g .3g S 1.3 :06.51 nginx
root .9g .4g .3g S 1.3 :07.03 nginx
root .9g .4g .3g S 1.3 :07.10 nginx

Tracing the worker's system calls with strace showed a flood of epoll_wait returns, with no other system calls in between:

epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
epoll_wait(, {{EPOLLOUT, {u32=, u64=}}}, , ) =
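
As a side note, this spin pattern can be reproduced outside nginx. Below is a minimal, self-contained sketch (my own illustration, not code from this investigation): a socket whose send buffer has room, registered for EPOLLOUT in level-triggered mode and never written to or deregistered, makes every epoll_wait call return immediately. nginx's own event handling differs in detail (it normally uses edge-triggered events), but the trace above shows the same basic pattern of writability being reported and never acted on.

/*
 * epollout_spin.c -- standalone illustration only, not nginx code.
 * A writable socket left registered for EPOLLOUT makes epoll_wait
 * return immediately on every call.
 * Build: cc -o epollout_spin epollout_spin.c
 */
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>

int main(void)
{
    int sv[2];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = sv[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, sv[0], &ev);

    for (int i = 0; i < 5; i++) {
        struct epoll_event out = { 0 };

        /* 500 ms timeout, but it never sleeps: sv[0] always has buffer
         * space and we neither write nor delete the EPOLLOUT interest. */
        int n = epoll_wait(ep, &out, 1, 500);
        printf("epoll_wait returned %d, events=0x%x\n", n, out.events);
    }

    return 0;
}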

Looking at the stack: when epoll_wait returns with an active fd, no recv or recvfrom is ever issued. Walking through the event-dispatch code (ngx_epoll_process_events in ngx_epoll_module.c):

        if ((revents & EPOLLOUT) && wev->active) {

            if (c->fd == -1 || wev->instance != instance) {

                /*
                 * the stale event from a file descriptor
                 * that was just closed in this iteration
                 */

                ngx_log_debug1(NGX_LOG_DEBUG_EVENT, cycle->log, 0,
                               "epoll: stale event %p", c);
                continue;
            }

            wev->ready = 1;

            if (flags & NGX_POST_EVENTS) {
                ngx_post_event(wev, &ngx_posted_events);

            } else {
                wev->handler(wev);
            }
        }

We found that wev->active is sometimes not 1, but on most of the epoll_wait returns it is 1, so the handler does run. Why is recv still never called?

Looking at wev->handler: in ngx_http_upstream_process_non_buffered_request there is this branch:

            if (downstream->write->delayed) {
                ngx_log_error(NGX_LOG_DEBUG, upstream->log, 0,
                              "[no_buffering_limite_rate] downstream->write->delayed "
                              "return here. remove upstream read event");
                ngx_del_event(upstream->read, NGX_READ_EVENT, 0);
                return;
            }

When rate limiting kicks in, this removes only the upstream->read event; the upstream->write event stays registered in epoll.

And the event epoll_wait keeps returning is exactly EPOLLOUT.

In this situation wev->handler(wev) finishes almost instantly, so in ngx_process_events_and_timers(ngx_cycle_t *cycle) the following logic applies:
    delta = ngx_current_msec;

    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;

delta therefore comes out as almost 0, and on the next pass through the loop the timer obtained here is still the minimum key of the timer red-black tree, which has not expired yet:

 timer = ngx_event_find_timer();

That is why the timeout values we saw being passed to epoll_wait were tiny most of the time.
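
For context, here is a simplified paraphrase of the surrounding event loop (condensed from nginx's ngx_process_events_and_timers; accept-mutex handling and details that vary between versions are omitted). The nearest timer becomes epoll_wait's timeout, and expired timers are only scanned when delta is non-zero, so a loop that spins in well under a millisecond keeps re-entering epoll_wait with the same tiny timeout:

void
ngx_process_events_and_timers(ngx_cycle_t *cycle)
{
    ngx_uint_t  flags;
    ngx_msec_t  timer, delta;

    /* nearest expiry in the timer red-black tree; used as epoll_wait's timeout */
    timer = ngx_event_find_timer();
    flags = NGX_UPDATE_TIME;

    /* ... accept-mutex handling omitted ... */

    delta = ngx_current_msec;

    /* ngx_epoll_process_events -> epoll_wait(ep, event_list, nevents, timer) */
    (void) ngx_process_events(cycle, timer, flags);

    delta = ngx_current_msec - delta;

    /* on a spinning iteration delta is ~0, so the expired-timer scan is
     * skipped and the write-delay timer just stays pending in the tree */
    if (delta) {
        ngx_event_expire_timers();
    }

    ngx_event_process_posted(cycle, &ngx_posted_events);
}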

In the end the fix was: when the rate limit is in effect, remove NGX_WRITE_EVENT from epoll as well, and add the event back once the rate-limit condition no longer holds.
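
A sketch of what that looks like, following the shape of the patched branch shown earlier (this is not the verbatim production patch, and the re-arm site at the end is only indicative):

            if (downstream->write->delayed) {

                /* rate limit in effect: drop interest in BOTH upstream events,
                 * otherwise epoll keeps reporting EPOLLOUT and the worker spins */
                ngx_del_event(upstream->read, NGX_READ_EVENT, 0);

                if (upstream->write->active) {
                    ngx_del_event(upstream->write, NGX_WRITE_EVENT, 0);
                }

                return;
            }

            /* ... and wherever the write delay is cleared (the delay timer fires),
             * the write event is added back so upstream I/O resumes: */
            if (!upstream->write->active) {
                ngx_add_event(upstream->write, NGX_WRITE_EVENT, 0);
            }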
