Contents

1. Symptoms
2. Diagnosis
3. Solution
4. Relevant Code
1. Symptoms
All external connections to port 9090 timed out, although a telnet to the port always succeeded. Observing with top, individual threads were running at up to 99.9% CPU, though not constantly; the usage fluctuated (compare the two top samples below). After traffic to this machine was redirected elsewhere, requests could get through, but timeouts still occurred, with more read timeouts than write timeouts:
# ./hbase_stress --hbase=110.13.136.207:9090 --test=2 --timeout=10
[2016-11-27 10:15:21/771][139756154767104/31562][ERROR][hbase_stress.cpp:302]TransportException(thrift://110.13.136.207:9090): EAGAIN (timed out)
[2016-11-27 10:15:31/775][139756154767104/31562][ERROR][hbase_stress.cpp:302]TransportException(thrift://110.13.136.207:9090): EAGAIN (timed out)
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+   COMMAND
20727 zhangsan  20   0 10.843g 9.263g  26344 R  99.9 26.4   1448:00  java
20729 zhangsan  20   0 10.843g 9.263g  26344 R  99.9 26.4   1448:00  java
20730 zhangsan  20   0 10.843g 9.263g  26344 R  99.9 26.4   1449:10  java
20728 zhangsan  20   0 10.843g 9.263g  26344 R  99.8 26.4   1448:00  java
20693 zhangsan  20   0 10.843g 9.263g  26344 S   0.0 26.4    0:00.00 java

20727 zhangsan  20   0 10.843g 9.263g  26344 R  75.5 26.4   1448:06  java
20728 zhangsan  20   0 10.843g 9.263g  26344 R  75.2 26.4   1448:06  java
20729 zhangsan  20   0 10.843g 9.263g  26344 R  75.2 26.4   1448:06  java
20730 zhangsan  20   0 10.843g 9.263g  26344 R  75.2 26.4   1449:15  java
20716 zhangsan  20   0 10.843g 9.263g  26344 S  24.9 26.4     93:48.75 java
2. Diagnosis
Use ps to list the threads of the process and find the ones consuming the most CPU; they match what top shows:
$ ps -mp 20693 -o THREAD,tid,time | sort -rn
zhangsan 18.8  19    - -          -      - 20730 1-00:11:23
zhangsan 18.7  19    - -          -      - 20729 1-00:10:13
zhangsan 18.7  19    - -          -      - 20728 1-00:10:13
zhangsan 18.7  19    - -          -      - 20727 1-00:10:13
zhangsan 16.1  19    - futex_     -      - 20731   20:44:51
zhangsan  5.2  19    - futex_     -      - 20732   06:46:39
Converting the busiest thread IDs to hexadecimal (20727–20730 are 0x50f7–0x50fa) and looking them up in a jstack dump shows that they are the parallel CMS GC worker threads:
"Gang worker#0 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb7200d4000 nid=0x50f7 runnable "Gang worker#1 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb7200d5800 nid=0x50f8 runnable "Gang worker#2 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb7200d7800 nid=0x50f9 runnable "Gang worker#3 (Parallel CMS Threads)" os_prio=0 tid=0x00007fb7200d9000 nid=0x50fa runnable |
jstat confirms that GC is the problem, and the picture is grim: Eden (E) and the old generation (O) are both at 100%, and the full GC count (FGC) keeps climbing while O stays full, which means the JVM is running back-to-back full GCs that reclaim almost nothing:
$ jstat -gcutil 20693 1000 100
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC     FGCT       GCT
  0.00  99.67 100.00 100.00  98.08  94.41  42199  369.132  27084  34869.601  35238.733
  0.00  99.67 100.00 100.00  98.08  94.41  42199  369.132  27084  34870.448  35239.580
  0.00  99.67 100.00 100.00  98.08  94.41  42199  369.132  27084  34870.448  35239.580
  0.00  99.67 100.00 100.00  98.08  94.41  42199  369.132  27084  34870.448  35239.580

$ jstat -gccapacity 20693
   NGCMN     NGCMX       NGC      S0C      S1C        EC      OGCMN      OGCMX        OGC         OC     MCMN      MCMX       MC    CCSMN      CCSMX     CCSC    YGC    FGC
 191808.0  1107520.0  1107520.0 110720.0 110720.0  886080.0   383680.0  8094144.0  8094144.0  8094144.0    0.0  1077248.0  31584.0    0.0   1048576.0   3424.0  42199  27156

$ jstat -gcold 20693
   MC       MU      CCSC     CCSU        OC          OU       YGC    FGC     FGCT       GCT
 31584.0  30978.7   3424.0   3232.7   8094144.0   8094144.0  42199  27174  34964.347  35333.479

$ jstat -gcoldcapacity 20693
   OGCMN       OGCMX        OGC          OC       YGC    FGC     FGCT       GCT
  383680.0   8094144.0   8094144.0   8094144.0  42199  27192  34982.623  35351.755

$ jstat -gcnewcapacity 20693
   NGCMN      NGCMX       NGC      S0CMX      S0C     S1CMX      S1C       ECMX        EC      YGC    FGC
  191808.0  1107520.0  1107520.0  110720.0  110720.0  110720.0  110720.0  886080.0  886080.0  42199  27202

$ jstat -gc 20693
   S0C      S1C     S0U      S1U       EC        EU         OC         OU        MC       MU     CCSC    CCSU    YGC     YGCT    FGC     FGCT       GCT
 110720.0 110720.0   0.0  110395.9  886080.0  886080.0  8094144.0  8094144.0  31584.0  30978.7  3424.0  3232.7  42199  369.132  27206  34996.538  35365.671

$ jstat -gcnew 20693
   S0C      S1C     S0U      S1U    TT  MTT    DSS       EC        EU      YGC     YGCT
 110720.0 110720.0   0.0  110396.9   6    6  55360.0  886080.0  886080.0  42199  369.132
lsof shows that the process holds only a modest number of connections, well within a safe range, so the problem is most likely objects that cannot be reclaimed. Use jmap to inspect the memory in detail, starting with heap usage:
$ jmap -heap 20693
Attaching to process ID 20693, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.77-b03

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 9422503936 (8986.0MB)
   NewSize                  = 196411392 (187.3125MB)
   MaxNewSize               = 1134100480 (1081.5625MB)
   OldSize                  = 392888320 (374.6875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 1020723200 (973.4375MB)
   used     = 1020398064 (973.1274261474609MB)
   free     = 325136 (0.3100738525390625MB)
   99.96814650632022% used
Eden Space:
   capacity = 907345920 (865.3125MB)
   used     = 907345920 (865.3125MB)
   free     = 0 (0.0MB)
   100.0% used
From Space:
   capacity = 113377280 (108.125MB)
   used     = 113052144 (107.81492614746094MB)
   free     = 325136 (0.3100738525390625MB)
   99.71322649476156% used
To Space:
   capacity = 113377280 (108.125MB)
   used     = 0 (0.0MB)
   free     = 113377280 (108.125MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 8288403456 (7904.4375MB)
   used     = 8288403424 (7904.437469482422MB)
   free     = 32 (3.0517578125E-5MB)
   99.9999996139184% used

10216 interned Strings occupying 934640 bytes.
Next, look at the object histogram:
$ jmap -histo 20693
 num     #instances         #bytes  class name
----------------------------------------------
   1:      72835212     2518411456  [B
   2:      49827147     1993085880  java.util.TreeMap$Entry
   3:      12855993      617087664  java.util.TreeMap
   4:       4285217      445662568  org.apache.hadoop.hbase.client.ClientScanner
   5:       4285222      377099536  org.apache.hadoop.hbase.client.Scan
   6:       4284875      377069000  org.apache.hadoop.hbase.client.ScannerCallable
   7:       4285528      342921344  [Ljava.util.HashMap$Node;
   8:       4284880      308511360  org.apache.hadoop.hbase.client.ScannerCallableWithReplicas
   9:       8570671      274261472  java.util.LinkedList
  10:       4285579      205707792  java.util.HashMap
  11:       4285283      205693584  org.apache.hadoop.hbase.client.RpcRetryingCaller
  12:       3820914      152836560  org.apache.hadoop.hbase.filter.SingleColumnValueFilter
  13:       4291904      137340928  java.util.concurrent.ConcurrentHashMap$Node
  14:       8570636      137130176  java.util.TreeMap$EntrySet
  15:       4285278      137128896  org.apache.hadoop.hbase.io.TimeRange
  16:       8570479      137127664  java.util.concurrent.atomic.AtomicBoolean
  17:       2891409       92525088  org.apache.hadoop.hbase.NoTagsKeyValue
  18:       4286540       68584640  java.lang.Integer
  19:       4285298       68564768  java.util.TreeMap$KeySet
  20:       4285275       68564400  java.util.TreeSet
  21:       4285006       68560096  java.util.HashSet
  22:       4284851       68557616  java.util.HashMap$KeySet
  23:       3176118       50817888  org.apache.hadoop.hbase.filter.BinaryComparator
  24:           109       33607600  [Ljava.util.concurrent.ConcurrentHashMap$Node;
  25:        418775       18479112  [Lorg.apache.hadoop.hbase.Cell;
  26:        671443       17693224  [C
  27:        418781       16751240  org.apache.hadoop.hbase.client.Result
  28:        669739       16073736  java.lang.String
  29:        644796       15475104  org.apache.hadoop.hbase.filter.SubstringComparator
  30:        419134       10059216  java.util.LinkedList$Node
To keep the system usable, a stopgap that treats the symptom rather than the cause was put in place first: monitor GC and periodically restart the HBase Thrift2 process. With that in place, the next step is to find and fix the root cause.
Judging from the jmap output above (over four million ClientScanner, Scan, and ScannerCallable instances each), the suspicion is that scanners are not being closed. There are two possible reasons a scanner is never closed: either the client code simply fails to close it, i.e. a resource leak in the client, or the client hits an exception and never gets the chance to close it.
Reviewing the client source code confirmed that when openScanner or a subsequent call throws an exception, the scanner is indeed never closed. In addition, a client that is killed, loses power, and so on also leaves scanners unreleased; that case must be handled on the HBase Thrift2 side.
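The fix for the first cause is discipline on the client: treat the scanner id as a resource and close it in a finally block. Below is a minimal Java sketch using the HBase Thrift2 generated bindings (the client in this incident was actually a C++ program; the class name ScannerSafeClient, the 100-row batch size, and the plain TSocket/TBinaryProtocol transport are illustrative assumptions and must match the server's configuration):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

import org.apache.hadoop.hbase.thrift2.generated.THBaseService;
import org.apache.hadoop.hbase.thrift2.generated.TResult;
import org.apache.hadoop.hbase.thrift2.generated.TScan;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ScannerSafeClient {

  /** Scans a table and guarantees that closeScanner is called, even on errors. */
  public static void scanAll(String host, int port, String tableName) throws Exception {
    TTransport transport = new TSocket(host, port); // transport/protocol must match the server
    transport.open();
    THBaseService.Client client = new THBaseService.Client(new TBinaryProtocol(transport));
    try {
      int scannerId = client.openScanner(
          ByteBuffer.wrap(tableName.getBytes(StandardCharsets.UTF_8)), new TScan());
      try {
        List<TResult> rows;
        do {
          rows = client.getScannerRows(scannerId, 100);
          // ... process rows ...
        } while (!rows.isEmpty());
      } finally {
        // Always release the server-side scanner, even if getScannerRows threw.
        client.closeScanner(scannerId);
      }
    } finally {
      transport.close();
    }
  }
}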
3. Solution
Address each of the problems identified above:
1) The client must ensure that every scanner it opens is released;
2) HBase Thrift2 should automatically release scanners that have been idle for a long time (a sketch of the idea follows the patch link below);
3) Alternatively, the client can use getScannerResults instead of openScanner/getScannerRows to sidestep the problem entirely, since the whole scan happens inside a single call (see the sketch at the end of Section 4).
Patch: https://issues.apache.org/jira/browse/HBASE-17182
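HBASE-17182 addresses point 2. The following is only a minimal sketch of the idea, not the code from that patch: keep the scanners in a time-bounded cache (Guava's CacheBuilder here, which HBase already depends on) and close any scanner that has not been touched for a configurable idle period.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

import org.apache.hadoop.hbase.client.ResultScanner;

/**
 * Sketch only: a scanner registry that automatically closes scanners that have been
 * idle longer than a configurable timeout, so abandoned scanners cannot pin memory forever.
 */
public class IdleScannerRegistry {

  private final AtomicInteger nextScannerId = new AtomicInteger(0);
  private final Cache<Integer, ResultScanner> scannerCache;

  public IdleScannerRegistry(long idleTimeoutSeconds) {
    RemovalListener<Integer, ResultScanner> closeOnRemoval =
        new RemovalListener<Integer, ResultScanner>() {
          @Override
          public void onRemoval(RemovalNotification<Integer, ResultScanner> notification) {
            ResultScanner scanner = notification.getValue();
            if (scanner != null) {
              scanner.close(); // release the server-side scanner resources
            }
          }
        };
    scannerCache = CacheBuilder.newBuilder()
        // Evict an entry once it has not been read or written for idleTimeoutSeconds.
        .expireAfterAccess(idleTimeoutSeconds, TimeUnit.SECONDS)
        // Close the ResultScanner whenever its entry is removed, explicitly or by expiry.
        .removalListener(closeOnRemoval)
        .build();
  }

  /** Registers a scanner and returns its id, mirroring addScanner in the handler. */
  public int addScanner(ResultScanner scanner) {
    int id = nextScannerId.getAndIncrement();
    scannerCache.put(id, scanner);
    return id;
  }

  /** Looks up a scanner; the lookup counts as an access and resets its idle timer. */
  public ResultScanner getScanner(int id) {
    return scannerCache.getIfPresent(id);
  }

  /** Removes a scanner; the removal listener closes it. */
  public void removeScanner(int id) {
    scannerCache.invalidate(id);
  }
}

Note that Guava evicts expired entries lazily during cache operations, so a production implementation may need an explicit cleanup pass; the actual patch should be consulted for the details.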
4. Relevant Code
// Server-side registry of open scanners, keyed by scanner id.
private final Map<Integer, ResultScanner> scannerMap = new ConcurrentHashMap<Integer, ResultScanner>();
@Override
public int openScanner(ByteBuffer table, TScan scan) throws TIOError, TException {
  Table htable = getTable(table);
  ResultScanner resultScanner = null;
  try {
    resultScanner = htable.getScanner(scanFromThrift(scan));
  } catch (IOException e) {
    throw getTIOError(e);
  } finally {
    closeTable(htable);
  }

  // Register the scanner in scannerMap. If the client never calls closeScanner,
  // the scanner leaks and the memory it references can never be reclaimed by GC.
  return addScanner(resultScanner);
}
/**
 * Assigns a unique ID to the scanner and adds the mapping to an internal HashMap.
 * @param scanner to add
 * @return Id for this Scanner
 */
private int addScanner(ResultScanner scanner) {
  int id = nextScannerId.getAndIncrement();
  scannerMap.put(id, scanner); // put the scanner into scannerMap
  return id;
}
/**
 * Returns the Scanner associated with the specified Id.
 * @param id of the Scanner to get
 * @return a Scanner, or null if the Id is invalid
 */
private ResultScanner getScanner(int id) {
  return scannerMap.get(id);
}
@Override
public void closeScanner(int scannerId) throws TIOError, TIllegalArgument, TException {
  LOG.debug("scannerClose: id=" + scannerId);
  ResultScanner scanner = getScanner(scannerId);
  if (scanner == null) {
    String message = "scanner ID is invalid";
    LOG.warn(message);
    TIllegalArgument ex = new TIllegalArgument();
    ex.setMessage("Invalid scanner Id");
    throw ex;
  }
  scanner.close();          // close the scanner
  removeScanner(scannerId); // remove the scanner from scannerMap
}
/**
 * Removes the scanner associated with the specified ID from the internal HashMap.
 * @param id of the Scanner to remove
 * @return the removed Scanner, or null if the Id is invalid
 */
protected ResultScanner removeScanner(int id) {
  return scannerMap.remove(id); // remove the scanner from scannerMap
}
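Finally, for contrast with the openScanner/getScannerRows/closeScanner flow shown above, solution 3) relies on getScannerResults, which performs the whole scan inside a single Thrift call, so no ResultScanner is ever parked in scannerMap waiting for a closeScanner that may never come. The sketch below illustrates that idea rather than reproducing the real handler; getTable, scanFromThrift, getTIOError and closeTable are the helpers already used above, while resultsFromHBase (converting HBase Result objects into Thrift TResult objects) is assumed here.

@Override
public List<TResult> getScannerResults(ByteBuffer table, TScan scan, int numRows)
    throws TIOError, TException {
  Table htable = getTable(table);
  List<TResult> results = null;
  ResultScanner scanner = null;
  try {
    scanner = htable.getScanner(scanFromThrift(scan));
    results = resultsFromHBase(scanner.next(numRows));
  } catch (IOException e) {
    throw getTIOError(e);
  } finally {
    // The scanner lives only for the duration of this call, so nothing is left
    // behind in scannerMap and nothing depends on the client calling closeScanner.
    if (scanner != null) {
      scanner.close();
    }
    closeTable(htable);
  }
  return results;
}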