Handling "connection refused because too many open connections" in MongoDB mongod.log

I. The MongoDB replica set

Replica Set Name   Role        IP Address     Port    Priority
CCTV-test          Primary     192.168.1.21   27017   10
CCTV-test          Secondary   192.168.1.21   27018   9
CCTV-test          Secondary   192.168.1.21   27019   8

II. Symptoms

1. Configuration file

The connection limit in the replica set member's configuration file:

[root@mm2020 ~]# cat /data/mongodb/cctv/27017/conf/mongodb.conf | grep max
  maxIncomingConnections: 8
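For context, this setting lives under the `net` section of a YAML mongodb.conf. The fragment below is a hypothetical reconstruction — only maxIncomingConnections is confirmed by the grep above. The default limit is 65536, and the effective limit is also capped by the `ulimit -n` of the mongod process:

```yaml
# Hypothetical excerpt of /data/mongodb/cctv/27017/conf/mongodb.conf;
# only maxIncomingConnections is confirmed by the grep above.
net:
  port: 27017
  bindIp: 0.0.0.0
  maxIncomingConnections: 8   # deliberately tiny here, to reproduce the error
```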

2. The error

Connecting to the primary from the operating system fails:

[root@mm2020 ~]# mongo 192.168.1.21:27017/admin -u 'admin' -p 'admin'
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.1.21:27017/admin
2020-12-14T20:37:59.851+0800 I NETWORK  [thread1] Socket recv() Connection reset by peer 192.168.1.21:27017
2020-12-14T20:37:59.851+0800 I NETWORK  [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [192.168.1.21:27017] 
2020-12-14T20:37:59.851+0800 E QUERY    [thread1] Error: network error while attempting to run command 'isMaster' on host '192.168.1.21:27017'  :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

mongod.log shows the corresponding errors:

2020-12-14T20:28:22.547+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60634 #60 (7 connections now open)
2020-12-14T20:28:22.547+0800 I NETWORK  [conn60] received client metadata from 192.168.1.21:60634 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:22.560+0800 I ACCESS   [conn60] Successfully authenticated as principal admin on admin
2020-12-14T20:28:22.564+0800 I NETWORK  [conn60] end connection 192.168.1.21:60634 (6 connections now open)
2020-12-14T20:28:24.625+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60636 #61 (7 connections now open)
2020-12-14T20:28:24.625+0800 I NETWORK  [conn61] received client metadata from 192.168.1.21:60636 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:24.637+0800 I ACCESS   [conn61] Successfully authenticated as principal admin on admin
2020-12-14T20:28:24.641+0800 I NETWORK  [conn61] end connection 192.168.1.21:60636 (6 connections now open)
2020-12-14T20:28:26.659+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60638 #62 (7 connections now open)
2020-12-14T20:28:26.659+0800 I NETWORK  [conn62] received client metadata from 192.168.1.21:60638 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.660+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60644 #63 (8 connections now open)
2020-12-14T20:28:26.660+0800 I NETWORK  [conn63] received client metadata from 192.168.1.21:60644 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.661+0800 I ACCESS   [conn63] Successfully authenticated as principal __system on local
2020-12-14T20:28:26.662+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:26.663+0800 I NETWORK  [conn63] end connection 192.168.1.21:60644 (7 connections now open)
2020-12-14T20:28:26.694+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60648 #65 (8 connections now open)
2020-12-14T20:28:26.694+0800 I NETWORK  [conn65] received client metadata from 192.168.1.21:60648 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:26.706+0800 I ACCESS   [conn65] Successfully authenticated as principal admin on admin
2020-12-14T20:28:26.711+0800 I NETWORK  [conn65] end connection 192.168.1.21:60648 (7 connections now open)
2020-12-14T20:28:28.754+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60650 #66 (8 connections now open)
2020-12-14T20:28:28.754+0800 I NETWORK  [conn66] received client metadata from 192.168.1.21:60650 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:28.766+0800 I ACCESS   [conn66] Successfully authenticated as principal admin on admin
2020-12-14T20:28:28.771+0800 I NETWORK  [conn66] end connection 192.168.1.21:60650 (7 connections now open)
2020-12-14T20:28:30.416+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:60656 #67 (8 connections now open)
2020-12-14T20:28:30.416+0800 I NETWORK  [conn67] received client metadata from 192.168.1.21:60656 conn: { driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T20:28:30.416+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:30.891+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:32.961+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:35.028+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:37.084+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:39.134+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:41.296+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:43.376+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:45.442+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:47.501+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:49.549+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:51.613+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:52.501+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:53.664+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:53.749+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:55.087+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:55.710+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:57.777+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:59.840+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:01.886+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:03.949+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:06.021+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:08.069+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:10.138+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:12.202+0800 I NETWORK  [listener] connection refused because too many open connections: 9

The key error line, repeated continuously:

2020-12-14T20:30:35.086+0800 I NETWORK  [listener] connection refused because too many open connections: 9
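To gauge how often clients are being turned away, the refusals can be counted per minute straight from the log. A minimal sketch, self-contained with a few sample lines embedded (against a real deployment, point grep at the actual mongod.log path instead):

```shell
# Count "connection refused" events per minute in mongod.log.
# A few sample lines are embedded so the sketch runs standalone.
cat > /tmp/mongod_sample.log <<'EOF'
2020-12-14T20:28:30.891+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:28:32.961+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T20:29:01.886+0800 I NETWORK  [listener] connection refused because too many open connections: 9
EOF
grep 'connection refused because too many open connections' /tmp/mongod_sample.log \
  | awk '{ print substr($1, 1, 16) }' \
  | sort | uniq -c
```

The first field is the ISO timestamp, so truncating it to 16 characters buckets the events by minute.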

3. Checks

Checked from a secondary, the replica set status looks healthy:

CCTV-test:SECONDARY> rs.status()
{
        "set" : "CCTV-test",
        "date" : ISODate("2020-12-14T12:41:18.696Z"),
        "myState" : 2,
        "term" : NumberLong(1),
        "syncingTo" : "192.168.1.21:27017",
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1607949670, 1),
                        "t" : NumberLong(1)
                },
                "readConcernMajorityOpTime" : {
                        "ts" : Timestamp(1607949670, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1607949670, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1607949670, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "192.168.1.21:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 958,
                        "optime" : {
                                "ts" : Timestamp(1607949670, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1607949670, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-12-14T12:41:10Z"),
                        "optimeDurableDate" : ISODate("2020-12-14T12:41:10Z"),
                        "lastHeartbeat" : ISODate("2020-12-14T12:41:18.437Z"),
                        "lastHeartbeatRecv" : ISODate("2020-12-14T12:41:17.558Z"),
                        "pingMs" : NumberLong(0),
                        "electionTime" : Timestamp(1607948728, 1),
                        "electionDate" : ISODate("2020-12-14T12:25:28Z"),
                        "configVersion" : 1
                },
                {
                        "_id" : 1,
                        "name" : "192.168.1.21:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1073,
                        "optime" : {
                                "ts" : Timestamp(1607949670, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-12-14T12:41:10Z"),
                        "syncingTo" : "192.168.1.21:27017",
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "192.168.1.21:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 958,
                        "optime" : {
                                "ts" : Timestamp(1607949670, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1607949670, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2020-12-14T12:41:10Z"),
                        "optimeDurableDate" : ISODate("2020-12-14T12:41:10Z"),
                        "lastHeartbeat" : ISODate("2020-12-14T12:41:18.438Z"),
                        "lastHeartbeatRecv" : ISODate("2020-12-14T12:41:18.437Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "192.168.1.21:27017",
                        "configVersion" : 1
                }
        ],
        "ok" : 1,
        "operationTime" : Timestamp(1607949670, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1607949670, 1),
                "signature" : {
                        "hash" : BinData(0,"53rtWYGUhf34PYvIgzb/NPx+KmA="),
                        "keyId" : NumberLong("6906087208994734081")
                }
        }
}

Replication lag, checked from a secondary, is also normal:

CCTV-test:SECONDARY> rs.printReplicationInfo()
configured oplog size:   2048MB
log length start to end: 1032secs (0.29hrs)
oplog first event time:  Mon Dec 14 2020 20:25:18 GMT+0800 (CST)
oplog last event time:   Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
now:                     Mon Dec 14 2020 20:42:39 GMT+0800 (CST)
CCTV-test:SECONDARY> rs.printSlaveReplicationInfo()
source: 192.168.1.21:27018
        syncedTo: Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: 192.168.1.21:27019
        syncedTo: Mon Dec 14 2020 20:42:30 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 

III. Remedies

1. Restart the primary's mongod process

Simple and quick, and it releases every connection completely — but it interrupts writes while the primary is down.
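A hedged sketch of what that restart could look like, reusing the config path that appears in the ps output later in this article (`mongod --shutdown` performs a clean shutdown on Linux; expect a re-election while the node is down):

```shell
# Sketch: cleanly restart the primary's mongod (config path from ps output).
# Guarded so the commands are skipped on hosts without mongod installed.
CONF=/data/mongodb/cctv/27017/conf/mongodb.conf
if command -v mongod >/dev/null 2>&1; then
  mongod -f "$CONF" --shutdown   # clean shutdown of the running instance
  mongod -f "$CONF"              # start it back up
fi
```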

2. Close one TCP socket connection online

Use ps to find the mongod instance processes:

[root@mm2020 ~]# ps -ef | grep mongod
root      20446      1  0 20:23 ?        00:00:22 mongod -f /data/mongodb/cctv/27018/conf/mongodb.conf
root      20478      1  0 20:23 ?        00:00:22 mongod -f /data/mongodb/cctv/27019/conf/mongodb.conf
root      20551      1  0 20:24 ?        00:00:23 mongod -f /data/mongodb/cctv/27017/conf/mongodb.conf
root      29086  14219  0 21:09 pts/0    00:00:00 grep --color=auto mongod

Use lsof to list the socket connections on port 27017 and locate the file descriptors that the primary (PID 20551) holds for them:

[root@mm2020 ~]# lsof -i:27017
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mongod  20446 root   39u  IPv4  73906      0t0  TCP mm2020.us.oracle.com:60508->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20446 root   51u  IPv4  88512      0t0  TCP mm2020.us.oracle.com:34432->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20446 root   52u  IPv4  88516      0t0  TCP mm2020.us.oracle.com:34434->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20446 root   64u  IPv4  77209      0t0  TCP mm2020.us.oracle.com:60638->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20478 root   35u  IPv4  84076      0t0  TCP mm2020.us.oracle.com:33732->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20478 root   39u  IPv4  73903      0t0  TCP mm2020.us.oracle.com:60506->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20478 root   52u  IPv4  84080      0t0  TCP mm2020.us.oracle.com:33734->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20478 root   68u  IPv4  77232      0t0  TCP mm2020.us.oracle.com:60656->mm2020.us.oracle.com:27017 (ESTABLISHED)
mongod  20551 root   11u  IPv4  73870      0t0  TCP *:27017 (LISTEN)
mongod  20551 root   45u  IPv4  73904      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60506 (ESTABLISHED)
mongod  20551 root   47u  IPv4  73907      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60508 (ESTABLISHED)
mongod  20551 root   52u  IPv4  84077      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:33732 (ESTABLISHED)
mongod  20551 root   53u  IPv4  88513      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:34432 (ESTABLISHED)
mongod  20551 root   57u  IPv4  88517      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:34434 (ESTABLISHED)
mongod  20551 root   58u  IPv4  84081      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:33734 (ESTABLISHED)
mongod  20551 root   59u  IPv4  77216      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60638 (ESTABLISHED)
mongod  20551 root   60u  IPv4  77233      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60656 (ESTABLISHED)
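Picking the right fd out of that listing can be scripted. A minimal awk sketch, self-contained with two sample lsof lines embedded (in practice, feed it `lsof -nP -i :27017` output; PID 20551 and client port 60508 are taken from the listing above):

```shell
# Extract the fd number the primary (PID 20551) uses for the client
# connection whose remote port is 60508. Sample lsof lines are embedded
# so the sketch runs standalone.
cat > /tmp/lsof_sample.txt <<'EOF'
mongod  20551 root   45u  IPv4  73904      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60506 (ESTABLISHED)
mongod  20551 root   47u  IPv4  73907      0t0  TCP mm2020.us.oracle.com:27017->mm2020.us.oracle.com:60508 (ESTABLISHED)
EOF
awk '$2 == 20551 && $9 ~ /:60508$/ { sub(/u$/, "", $4); print $4 }' /tmp/lsof_sample.txt
# prints: 47
```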

Debugging the process by PID

How it works:

All of the primary's connections originate from the same process, PID 20551. Manually killing that process would close the connections, but the mongod process itself would die with them — so instead we close a single socket inside the live process.

A very neat method seen on Super User:

gdb -p PID       # attach gdb to the running process
call close($fd)  # close the target socket's file descriptor
[root@mm2020 ~]# gdb -p 20551
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-114.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Attaching to process 20551
Reading symbols from /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod...Missing separate debuginfo for /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod
Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/64/21b3d51ea326c407c1d5147dcee2f9775c87f0.debug
(no debugging symbols found)...done.
Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libresolv.so.2
Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/librt.so.1
Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/libdl.so.2
Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libm.so.6
Reading symbols from /lib64/libgcc_s.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib64/libgcc_s.so.1
Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.
[New LWP 28829]
[New LWP 28399]
[New LWP 27375]
[New LWP 27374]
[New LWP 25167]
[New LWP 25166]
[New LWP 21179]
[New LWP 21161]
[New LWP 20767]
[New LWP 20646]
[New LWP 20645]
[New LWP 20644]
[New LWP 20643]
[New LWP 20642]
[New LWP 20641]
[New LWP 20640]
[New LWP 20639]
[New LWP 20638]
[New LWP 20637]
[New LWP 20636]
[New LWP 20635]
[New LWP 20634]
[New LWP 20633]
[New LWP 20632]
[New LWP 20631]
[New LWP 20630]
[New LWP 20629]
[New LWP 20628]
[New LWP 20627]
[New LWP 20626]
[New LWP 20625]
[New LWP 20624]
[New LWP 20623]
[New LWP 20622]
[New LWP 20621]
[New LWP 20620]
[New LWP 20619]
[New LWP 20618]
[New LWP 20617]
[New LWP 20616]
[New LWP 20615]
[New LWP 20614]
[New LWP 20613]
[New LWP 20612]
[New LWP 20611]
[New LWP 20610]
[New LWP 20609]
[New LWP 20608]
[New LWP 20607]
[New LWP 20606]
[New LWP 20605]
[New LWP 20604]
[New LWP 20598]
[New LWP 20580]
[New LWP 20579]
[New LWP 20578]
[New LWP 20577]
[New LWP 20576]
[New LWP 20575]
[New LWP 20574]
[New LWP 20572]
[New LWP 20571]
[New LWP 20570]
[New LWP 20569]
[New LWP 20568]
[New LWP 20567]
[New LWP 20566]
[New LWP 20565]
[New LWP 20564]
[New LWP 20563]
[New LWP 20562]
[New LWP 20561]
[New LWP 20560]
[New LWP 20559]
[New LWP 20558]
[New LWP 20553]
[New LWP 20552]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Loaded symbols for /lib64/libpthread.so.0
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib64/libc.so.6
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x00007f8cbea97965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7.x86_64 libgcc-4.8.5-36.el7.x86_64
(gdb) call close(47u)
[New Thread 0x7f8c9b072700 (LWP 29202)]
$1 = 0
(gdb) quit
A debugging session is active.

        Inferior 1 [process 20551] will be detached.

Quit anyway? (y or n) 
Please answer y or n.
A debugging session is active.

        Inferior 1 [process 20551] will be detached.

Quit anyway? (y or n) y
Detaching from program: /usr/local/src/mongodb-linux-x86_64-3.6.3/bin/mongod, process 20551
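The same close can be done non-interactively with gdb's batch mode. The sketch below only prints the command (a dry run), since attaching requires ptrace permission and a live target; drop the echo to run it for real. PID 20551 and fd 47 come from the lsof output above, and the `(int)` cast sidesteps gdb's complaint about unknown return types on a binary without debug symbols:

```shell
# Dry run: print the non-interactive equivalent of the gdb session above.
# Remove `echo` to actually attach and close the descriptor.
PID=20551
FD=47
echo gdb -p "$PID" --batch -ex "call (int)close($FD)"
```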

After quitting gdb, check mongod.log:

2020-12-14T21:13:33.611+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I NETWORK  [listener] connection refused because too many open connections: 9
2020-12-14T21:13:33.611+0800 I COMMAND  [conn713] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 0, cfgver: 1 }, { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 1, cfgver: 1 }, { durableOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, appliedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, memberId: 2, cfgver: 1 } ], $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1607951410, 1), t: 1 }, lastOpVisible: { ts: Timestamp(1607951410, 1), t: 1 }, configVersion: 1, replicaSetId: ObjectId('5fd759aead2915a66623f6b4'), primaryIndex: 0, syncSourceIndex: 0 }, $clusterTime: { clusterTime: Timestamp(1607951410, 1), signature: { hash: BinData(0, B4FDE38F4D10BAE21D9865AD22CADE3B2C40FA73), keyId: 6906087208994734081 } }, $db: "admin" } numYields:0 reslen:353 locks:{} protocol:op_msg 126302ms
2020-12-14T21:13:33.611+0800 I REPL     [replexec-45] Member 192.168.1.21:27018 is now in state RS_DOWN
2020-12-14T21:13:33.612+0800 I NETWORK  [conn1356] received client metadata from 192.168.1.21:35022 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.613+0800 I COMMAND  [conn1063] command local.oplog.rs command: getMore { getMore: 30749310873, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 5000, term: 1, lastKnownCommittedOpTime: { ts: Timestamp(1607951410, 1), t: 1 }, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1607951410, 1), signature: { hash: BinData(0, B4FDE38F4D10BAE21D9865AD22CADE3B2C40FA73), keyId: 6906087208994734081 } }, $db: "local" } originatingCommand: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1607948718, 1) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, batchSize: 13981010, term: 1, $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" }, $clusterTime: { clusterTime: Timestamp(1607948730, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "local" } planSummary: COLLSCAN cursorid:30749310873 keysExamined:0 docsExamined:0 numYields:2 nreturned:0 reslen:576 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_msg 126304ms
2020-12-14T21:13:33.613+0800 I COMMAND  [conn1062] command admin.$cmd command: isMaster { isMaster: 1, $db: "admin" } numYields:0 reslen:750 locks:{} protocol:op_query 5738ms
2020-12-14T21:13:33.613+0800 I NETWORK  [conn1062] end connection 192.168.1.21:34432 (7 connections now open)
2020-12-14T21:13:33.614+0800 I REPL     [replexec-45] stepping down from primary, because a new term has begun: 2
2020-12-14T21:13:33.615+0800 I NETWORK  [conn1063] end connection 192.168.1.21:34434 (6 connections now open)
2020-12-14T21:13:33.615+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35064 #1372 (7 connections now open)
2020-12-14T21:13:33.616+0800 I NETWORK  [conn1372] received client metadata from 192.168.1.21:35064 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.617+0800 I ACCESS   [conn1372] Successfully authenticated as principal __system on local
2020-12-14T21:13:33.618+0800 I REPL     [replexec-47] Member 192.168.1.21:27018 is now in state PRIMARY
2020-12-14T21:13:33.619+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35068 #1373 (8 connections now open)
2020-12-14T21:13:33.619+0800 I REPL     [replexec-45] transition to SECONDARY from PRIMARY
2020-12-14T21:13:33.619+0800 I NETWORK  [replexec-45] Skip closing connection for connection # 1373
2020-12-14T21:13:33.619+0800 I NETWORK  [replexec-45] Skip closing connection for connection # 1372
2020-12-14T21:13:33.619+0800 I NETWORK  [replexec-45] Skip closing connection for connection # 1358
2020-12-14T21:13:33.619+0800 I NETWORK  [replexec-45] Skip closing connection for connection # 1357
2020-12-14T21:13:33.619+0800 I NETWORK  [replexec-45] Skip closing connection for connection # 713
2020-12-14T21:13:33.620+0800 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after logicalSessionRecordCache: 0, after network: 0, after opLatencies: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 5747, after storageEngine: 5747, after tcmalloc: 5747, after transactions: 5747, after wiredTiger: 5747, at end: 5747 }
2020-12-14T21:13:33.622+0800 I NETWORK  [conn1356] Error sending response to client: SocketException: Broken pipe. Ending connection from 192.168.1.21:35022 (connection id: 1356)
2020-12-14T21:13:33.622+0800 I NETWORK  [conn1356] end connection 192.168.1.21:35022 (7 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK  [conn62] end connection 192.168.1.21:60638 (6 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK  [conn67] end connection 192.168.1.21:60656 (5 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK  [conn1358] end connection 192.168.1.21:35026 (4 connections now open)
2020-12-14T21:13:33.623+0800 I NETWORK  [conn1357] received client metadata from 192.168.1.21:35024 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.623+0800 I NETWORK  [conn1373] received client metadata from 192.168.1.21:35068 conn: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:33.624+0800 I NETWORK  [conn1357] end connection 192.168.1.21:35024 (3 connections now open)
2020-12-14T21:13:33.624+0800 I ACCESS   [conn1373] Successfully authenticated as principal __system on local
2020-12-14T21:13:34.613+0800 I REPL     [rsBackgroundSync] sync source candidate: 192.168.1.21:27019
2020-12-14T21:13:34.613+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:34.620+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.1.21:27019, took 7ms (1 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:34.624+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:34.626+0800 I REPL     [rsBackgroundSync] Starting rollback due to OplogStartMissing: Received an empty batch from sync source.
2020-12-14T21:13:34.626+0800 I REPL     [rsBackgroundSync] Replication commit point: { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.626+0800 I REPL     [rsBackgroundSync] Rollback using the 'rollbackViaRefetch' method because UUID support is feature compatible with featureCompatibilityVersion 3.6.
2020-12-14T21:13:34.627+0800 I REPL     [rsBackgroundSync] transition to ROLLBACK from SECONDARY
2020-12-14T21:13:34.627+0800 I NETWORK  [rsBackgroundSync] Skip closing connection for connection # 1373
2020-12-14T21:13:34.627+0800 I NETWORK  [rsBackgroundSync] Skip closing connection for connection # 1372
2020-12-14T21:13:34.627+0800 I NETWORK  [rsBackgroundSync] Skip closing connection for connection # 713
2020-12-14T21:13:34.627+0800 I ROLLBACK [rsBackgroundSync] Starting rollback. Sync source: 192.168.1.21:27019
2020-12-14T21:13:34.630+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to 192.168.1.21:27019, took 6ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Finding the Common Point
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] our last optime:   Timestamp(1607951613, 1)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] their last optime: Timestamp(1607951606, 1)
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] diff in end of log times: 7 seconds
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Rollback common point is { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Starting refetching documents
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Finished refetching documents. Total size of documents refetched: 0
2020-12-14T21:13:34.631+0800 I ROLLBACK [rsBackgroundSync] Checking the RollbackID and updating the MinValid if necessary
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Setting minvalid to { ts: Timestamp(1607951606, 1), t: 2 }
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back createIndexes commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Dropping collections to roll back create operations
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back renameCollection commands and collection drop commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back collections pending being dropped: Removing them from the list of drop-pending collections in the DropPendingCollectionReaper.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rolling back dropIndexes commands.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Deleting and updating documents to roll back insert, update and remove operations
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Rollback deleted 0 documents and updated 0 documents.
2020-12-14T21:13:34.632+0800 I ROLLBACK [rsBackgroundSync] Truncating the oplog at { ts: Timestamp(1607951410, 1), t: 1 }
2020-12-14T21:13:34.633+0800 I REPL     [rsBackgroundSync] Incremented the rollback ID to 2
2020-12-14T21:13:34.633+0800 I ROLLBACK [rsBackgroundSync] Rollback finished. The final minValid is: { ts: Timestamp(1607951606, 1), t: 2 }
2020-12-14T21:13:34.633+0800 I REPL     [rsBackgroundSync] transition to RECOVERING from ROLLBACK
2020-12-14T21:13:34.633+0800 I REPL     [rsBackgroundSync] sync source candidate: 192.168.1.21:27019
2020-12-14T21:13:34.635+0800 I STORAGE  [rsSync] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: a4b4f292-e51d-4a3b-9a59-3526c3cb9548
2020-12-14T21:13:34.640+0800 I REPL     [rsSync] transition to SECONDARY from RECOVERING
2020-12-14T21:13:34.641+0800 I REPL     [SyncSourceFeedback] SyncSourceFeedback error sending update to 192.168.1.21:27019: InvalidSyncSource: Sync source was cleared. Was 192.168.1.21:27019
2020-12-14T21:13:35.619+0800 I REPL     [replexec-49] Scheduling priority takeover at 2020-12-14T21:13:46.246+0800
2020-12-14T21:13:35.671+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35076 #1374 (4 connections now open)
2020-12-14T21:13:35.672+0800 I NETWORK  [conn1374] received client metadata from 192.168.1.21:35076 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:35.684+0800 I ACCESS   [conn1374] Successfully authenticated as principal admin on admin
2020-12-14T21:13:35.690+0800 I NETWORK  [conn1374] end connection 192.168.1.21:35076 (3 connections now open)
2020-12-14T21:13:36.841+0800 W REPL     [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source 192.168.1.21:27019 (config version: 1; last applied optime: { ts: Timestamp(1607951616, 1), t: 2 }; sync source index: 1; primary index: 1) is no longer valid
2020-12-14T21:13:36.842+0800 I REPL     [rsBackgroundSync] could not find member to sync from
2020-12-14T21:13:37.738+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35078 #1375 (4 connections now open)
2020-12-14T21:13:37.738+0800 I NETWORK  [conn1375] received client metadata from 192.168.1.21:35078 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:37.751+0800 I ACCESS   [conn1375] Successfully authenticated as principal admin on admin
2020-12-14T21:13:37.756+0800 I NETWORK  [conn1375] end connection 192.168.1.21:35078 (3 connections now open)
2020-12-14T21:13:39.803+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35080 #1376 (4 connections now open)
2020-12-14T21:13:39.804+0800 I NETWORK  [conn1376] received client metadata from 192.168.1.21:35080 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:39.816+0800 I ACCESS   [conn1376] Successfully authenticated as principal admin on admin
2020-12-14T21:13:39.821+0800 I NETWORK  [conn1376] end connection 192.168.1.21:35080 (3 connections now open)
2020-12-14T21:13:41.891+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35082 #1377 (4 connections now open)
2020-12-14T21:13:41.891+0800 I NETWORK  [conn1377] received client metadata from 192.168.1.21:35082 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:41.903+0800 I ACCESS   [conn1377] Successfully authenticated as principal admin on admin
2020-12-14T21:13:41.908+0800 I NETWORK  [conn1377] end connection 192.168.1.21:35082 (3 connections now open)
2020-12-14T21:13:43.952+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35084 #1378 (4 connections now open)
2020-12-14T21:13:43.953+0800 I NETWORK  [conn1378] received client metadata from 192.168.1.21:35084 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:43.966+0800 I ACCESS   [conn1378] Successfully authenticated as principal admin on admin
2020-12-14T21:13:43.970+0800 I NETWORK  [conn1378] end connection 192.168.1.21:35084 (3 connections now open)
2020-12-14T21:13:46.033+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35086 #1379 (4 connections now open)
2020-12-14T21:13:46.033+0800 I NETWORK  [conn1379] received client metadata from 192.168.1.21:35086 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:46.045+0800 I ACCESS   [conn1379] Successfully authenticated as principal admin on admin
2020-12-14T21:13:46.050+0800 I NETWORK  [conn1379] end connection 192.168.1.21:35086 (3 connections now open)
2020-12-14T21:13:46.247+0800 I REPL     [replexec-47] Canceling priority takeover callback
2020-12-14T21:13:46.247+0800 I REPL     [replexec-47] Starting an election for a priority takeover
2020-12-14T21:13:46.247+0800 I REPL     [replexec-47] conducting a dry run election to see if we could be elected. current term: 2
2020-12-14T21:13:46.249+0800 I REPL     [replexec-50] VoteRequester(term 2 dry run) received a yes vote from 192.168.1.21:27018; response message: { term: 2, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1607951616, 1), $clusterTime: { clusterTime: Timestamp(1607951616, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-12-14T21:13:46.249+0800 I REPL     [replexec-50] dry election run succeeded, running for election in term 3
2020-12-14T21:13:46.250+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Ending connection to host 192.168.1.21:27019 due to bad connection status; 0 connections to that host remain open
2020-12-14T21:13:46.251+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:46.258+0800 I REPL     [replexec-47] VoteRequester(term 3) received a yes vote from 192.168.1.21:27018; response message: { term: 3, voteGranted: true, reason: "", ok: 1.0, operationTime: Timestamp(1607951616, 1), $clusterTime: { clusterTime: Timestamp(1607951616, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
2020-12-14T21:13:46.258+0800 I REPL     [replexec-47] election succeeded, assuming primary role in term 3
2020-12-14T21:13:46.258+0800 I REPL     [replexec-47] transition to PRIMARY from SECONDARY
2020-12-14T21:13:46.258+0800 I REPL     [replexec-47] Entering primary catch-up mode.
2020-12-14T21:13:46.259+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to 192.168.1.21:27019
2020-12-14T21:13:46.260+0800 I REPL     [replexec-47] Member 192.168.1.21:27018 is now in state SECONDARY
2020-12-14T21:13:46.260+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to 192.168.1.21:27019, took 9ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:46.261+0800 I REPL     [replexec-42] Caught up to the latest optime known via heartbeats after becoming primary.
2020-12-14T21:13:46.261+0800 I REPL     [replexec-42] Exited primary catch-up mode.
2020-12-14T21:13:46.264+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to 192.168.1.21:27019, took 5ms (2 connections now open to 192.168.1.21:27019)
2020-12-14T21:13:47.853+0800 I REPL     [rsSync] transition to primary complete; database writes are now permitted
2020-12-14T21:13:48.098+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35092 #1380 (4 connections now open)
2020-12-14T21:13:48.098+0800 I NETWORK  [conn1380] received client metadata from 192.168.1.21:35092 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:48.110+0800 I ACCESS   [conn1380] Successfully authenticated as principal admin on admin
2020-12-14T21:13:48.115+0800 I NETWORK  [conn1380] end connection 192.168.1.21:35092 (3 connections now open)
2020-12-14T21:13:48.403+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35094 #1381 (4 connections now open)
2020-12-14T21:13:48.403+0800 I NETWORK  [conn1381] received client metadata from 192.168.1.21:35094 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:48.416+0800 I ACCESS   [conn1381] Successfully authenticated as principal admin on admin
2020-12-14T21:13:50.166+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35096 #1382 (5 connections now open)
2020-12-14T21:13:50.167+0800 I NETWORK  [conn1382] received client metadata from 192.168.1.21:35096 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:50.178+0800 I ACCESS   [conn1382] Successfully authenticated as principal admin on admin
2020-12-14T21:13:50.182+0800 I NETWORK  [conn1382] end connection 192.168.1.21:35096 (4 connections now open)
2020-12-14T21:13:51.858+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35098 #1383 (5 connections now open)
2020-12-14T21:13:51.858+0800 I NETWORK  [conn1383] received client metadata from 192.168.1.21:35098 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:51.859+0800 I ACCESS   [conn1383] Successfully authenticated as principal __system on local
2020-12-14T21:13:52.245+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35100 #1384 (6 connections now open)
2020-12-14T21:13:52.245+0800 I NETWORK  [conn1384] received client metadata from 192.168.1.21:35100 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.256+0800 I ACCESS   [conn1384] Successfully authenticated as principal admin on admin
2020-12-14T21:13:52.261+0800 I NETWORK  [conn1384] end connection 192.168.1.21:35100 (5 connections now open)
2020-12-14T21:13:52.806+0800 I NETWORK  [conn1381] end connection 192.168.1.21:35094 (4 connections now open)
2020-12-14T21:13:52.946+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35102 #1385 (5 connections now open)
2020-12-14T21:13:52.947+0800 I NETWORK  [conn1385] received client metadata from 192.168.1.21:35102 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.948+0800 I ACCESS   [conn1385] Successfully authenticated as principal __system on local
2020-12-14T21:13:52.952+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35104 #1386 (6 connections now open)
2020-12-14T21:13:52.952+0800 I NETWORK  [conn1386] received client metadata from 192.168.1.21:35104 conn: { driver: { name: "NetworkInterfaceASIO-RS", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:52.956+0800 I ACCESS   [conn1386] Successfully authenticated as principal __system on local
2020-12-14T21:13:54.321+0800 I NETWORK  [listener] connection accepted from 192.168.1.21:35106 #1387 (7 connections now open)
2020-12-14T21:13:54.321+0800 I NETWORK  [conn1387] received client metadata from 192.168.1.21:35106 conn: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.6.3" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.el7.x86_64" } }
2020-12-14T21:13:54.334+0800 I ACCESS   [conn1387] Successfully authenticated as principal admin on admin
2020-12-14T21:13:54.338+0800 I NETWORK  [conn1387] end connection 192.168.1.21:35106 (6 connections now open)

Checking the connection count

2020-12-14_21-11-27 connections:2020-12-14T21:11:27.309+0800 I NETWORK  [thread1] Socket recv() Connection reset by peer 192.168.1.21:27017
2020-12-14T21:11:27.309+0800 I NETWORK  [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [192.168.1.21:27017] 
2020-12-14T21:11:27.309+0800 E QUERY    [thread1] Error: network error while attempting to run command 'isMaster' on host '192.168.1.21:27017'  :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:21
2020-12-14_21-13-33 connections:2020-12-14T21:13:33.620+0800 E QUERY    [thread1] Error: network error while attempting to run command 'saslStart' on host '192.168.1.21:27017'  :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1608:20
@(auth):6:1
@(auth):1:2
2020-12-14_21-13-35 connections:{ "current" : 4, "available" : 4, "totalCreated" : 89 }
2020-12-14_21-13-37 connections:{ "current" : 4, "available" : 4, "totalCreated" : 90 }
2020-12-14_21-13-39 connections:{ "current" : 4, "available" : 4, "totalCreated" : 91 }
2020-12-14_21-13-41 connections:{ "current" : 4, "available" : 4, "totalCreated" : 92 }
2020-12-14_21-13-43 connections:{ "current" : 4, "available" : 4, "totalCreated" : 93 }
2020-12-14_21-13-46 connections:{ "current" : 4, "available" : 4, "totalCreated" : 94 }
2020-12-14_21-13-48 connections:{ "current" : 4, "available" : 4, "totalCreated" : 95 }
2020-12-14_21-13-50 connections:{ "current" : 5, "available" : 3, "totalCreated" : 97 }
2020-12-14_21-13-52 connections:{ "current" : 6, "available" : 2, "totalCreated" : 99 }
2020-12-14_21-13-54 connections:{ "current" : 7, "available" : 1, "totalCreated" : 102 }
2020-12-14_21-13-56 connections:{ "current" : 7, "available" : 1, "totalCreated" : 103 }
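The connection samples above can be produced with a small polling script along these lines (a sketch, not the exact script used here; it assumes the mongo shell is on the PATH and reuses the admin credentials shown earlier — watch for `available` approaching 0):

```shell
#!/bin/bash
# Sample the connection counters once from serverStatus().connections
# and print a timestamped line like the records above.
sample_connections() {
  local host="$1"
  local ts conn
  ts=$(date +%Y-%m-%d_%H-%M-%S)
  # --quiet suppresses the shell banner so only the JSON is captured;
  # 2>&1 keeps network/auth errors in the log, as seen in the output above.
  conn=$(mongo --quiet "$host/admin" -u 'admin' -p 'admin' \
         --eval 'JSON.stringify(db.serverStatus().connections)' 2>&1)
  echo "$ts connections:$conn"
}

# Poll every 2 seconds (matches the interval in the records above):
# while true; do sample_connections 192.168.1.21:27017; sleep 2; done
```

Note that each sample itself consumes one connection, so on an instance capped at 8 incoming connections the monitor can intermittently fail with the same `Connection reset by peer` error, which is exactly what the first two records show.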

With that, the problem of the connection limit being exhausted and the MongoDB primary being unreachable is resolved. Done!
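For reference, the lasting fix is to raise the deliberately low `maxIncomingConnections: 8` in the member's config file and restart the mongod. A sketch of the change (the target value 8192 is illustrative — size it to your workload and OS `ulimit -n`; the path matches the config shown earlier):

```shell
# Edit the member's config and raise the cap under the net section:
vi /data/mongodb/cctv/27017/conf/mongodb.conf
#   net:
#     maxIncomingConnections: 8192

# Verify the new value after restarting the member:
cat /data/mongodb/cctv/27017/conf/mongodb.conf | grep max
```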

