1. Installing the elasticsearch_exporter monitoring plugin

Download and unpack the exporter (all releases are listed at https://github.com/prometheus-community/elasticsearch_exporter/releases):

```shell
wget https://github.com/prometheus-community/elasticsearch_exporter/releases/download/v1.2.1/elasticsearch_exporter-1.2.1.linux-386.tar.gz
tar xf elasticsearch_exporter-1.2.1.linux-386.tar.gz
mv elasticsearch_exporter-1.2.1.linux-386 /opt/elasticsearch_exporter
```

After the exporter has started successfully (see section 3), you can fetch the scraped metrics with `curl http://<ip>:9111/metrics`. The main metrics are listed below:
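Each line of the exporter's output is a Prometheus sample of the form `metric_name{labels} value`. The snippet below is a minimal sketch of how to inspect such a line; the host, port (9111, matching the systemd unit in section 3), and the sample values are illustrative, not taken from a real cluster:

```shell
# Count the ES series exposed by a running exporter (host/port are placeholders):
#   curl -s http://10.32.54.172:9111/metrics | grep -c '^elasticsearch_'

# A single sample line looks like this; extract its numeric value with awk
# (the value is always the last whitespace-separated field):
sample='elasticsearch_cluster_health_status{cluster="es-test",color="green"} 1'
value=$(echo "$sample" | awk '{print $NF}')
echo "$value"
```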
| Metric | Description |
|---|---|
| **Search and indexing performance** | |
| elasticsearch_indices_search_query_total | Total number of queries (throughput) |
| elasticsearch_indices_search_query_time_seconds | Total time spent on queries (latency) |
| elasticsearch_indices_search_fetch_total | Total number of fetches |
| elasticsearch_indices_search_fetch_time_seconds | Total time spent on fetches |
| **Indexing requests** | |
| elasticsearch_indices_indexing_index_total | Total number of documents indexed |
| elasticsearch_indices_indexing_index_time_seconds_total | Total time spent indexing documents |
| elasticsearch_indices_indexing_delete_total | Total number of documents deleted from indices |
| elasticsearch_indices_indexing_delete_time_seconds_total | Total time spent deleting documents |
| elasticsearch_indices_refresh_total | Total number of index refreshes |
| elasticsearch_indices_refresh_time_seconds_total | Total time spent refreshing indices |
| elasticsearch_indices_flush_total | Total number of index flushes to disk |
| elasticsearch_indices_flush_time_seconds | Cumulative time spent flushing indices to disk |
| **JVM memory and garbage collection** | |
| elasticsearch_jvm_gc_collection_seconds_sum | GC run time in seconds |
| elasticsearch_jvm_gc_collection_seconds_count | Count of JVM GC runs |
| elasticsearch_jvm_memory_committed_bytes | JVM memory currently committed, by area |
| elasticsearch_jvm_memory_max_bytes | Configured maximum JVM memory |
| elasticsearch_jvm_memory_pool_max_bytes | Maximum JVM memory per pool |
| elasticsearch_jvm_memory_pool_peak_max_bytes | Peak maximum JVM memory per pool |
| elasticsearch_jvm_memory_pool_peak_used_bytes | Peak JVM memory used per pool |
| elasticsearch_jvm_memory_pool_used_bytes | JVM memory currently used per pool |
| elasticsearch_jvm_memory_used_bytes | JVM memory currently used, by area |
| **Cluster health and node availability** | |
| elasticsearch_cluster_health_status | Cluster status: green (all primary and replica shards are allocated), yellow (all primaries are allocated but some replicas are not), red (some primary shards are not allocated). The series whose value is 1 indicates the current status. |
| elasticsearch_cluster_health_number_of_data_nodes | Number of data nodes |
| elasticsearch_cluster_health_number_of_in_flight_fetch | Number of ongoing shard info requests |
| elasticsearch_cluster_health_number_of_nodes | Total number of nodes in the cluster |
| elasticsearch_cluster_health_number_of_pending_tasks | Cluster-level changes not yet executed |
| elasticsearch_cluster_health_initializing_shards | Number of shards being initialized |
| elasticsearch_cluster_health_unassigned_shards | Number of unassigned shards |
| elasticsearch_cluster_health_active_primary_shards | Number of active primary shards |
| elasticsearch_cluster_health_active_shards | Number of active shards (including replicas) |
| elasticsearch_cluster_health_relocating_shards | Number of shards currently relocating to other nodes; normally 0, it rises when nodes join or leave the cluster |
| **Resource saturation** | |
| elasticsearch_thread_pool_completed_count | Completed thread pool operations (bulk, index, search, force_merge, ...) |
| elasticsearch_thread_pool_active_count | Active threads in the thread pool (bulk, index, search, force_merge, ...) |
| elasticsearch_thread_pool_largest_count | Largest number of threads the pool has reached (bulk, index, search, force_merge, ...) |
| elasticsearch_thread_pool_queue_count | Queued operations in the thread pool (bulk, index, search, force_merge, ...) |
| elasticsearch_thread_pool_rejected_count | Rejected operations in the thread pool (bulk, index, search, force_merge, ...) |
| elasticsearch_indices_fielddata_memory_size_bytes | Size of the fielddata cache in bytes |
| elasticsearch_indices_fielddata_evictions | Number of evictions from the fielddata cache |
| elasticsearch_indices_filter_cache_evictions | Evictions from the filter cache (version 2.x only) |
| elasticsearch_indices_filter_cache_memory_size_bytes | Size of the filter cache in bytes (version 2.x only) |
| elasticsearch_indices_get_time_seconds | Total time spent on GET requests |
| elasticsearch_indices_get_missing_total | Total number of GET requests for missing documents |
| elasticsearch_indices_get_missing_time_seconds | Total time spent on GET requests for missing documents |
| elasticsearch_indices_get_exists_time_seconds | Total time spent on GET requests for existing documents |
| elasticsearch_indices_get_exists_total | Total number of GET requests for existing documents |
| elasticsearch_indices_get_total | Total number of GET requests |
| **Host-level system and network metrics** | |
| elasticsearch_process_cpu_percent | Percent CPU used by the ES process |
| elasticsearch_filesystem_data_free_bytes | Free space on the block device in bytes |
| elasticsearch_process_open_files_count | Open file descriptors of the ES process |
| elasticsearch_transport_rx_packets_total | Packets received over inter-node transport (inbound traffic) |
| elasticsearch_transport_tx_packets_total | Packets sent over inter-node transport (outbound traffic) |
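The metrics above can be turned into alerts and recording rules. Below is a minimal sketch of a rules file matching the `rules/*rules.yml` pattern used in the Prometheus config in the next section; the alert and rule names are illustrative, not part of the exporter:

```yaml
groups:
  - name: elasticsearch
    rules:
      # Fire when the cluster reports red, i.e. some primary shard is unallocated.
      - alert: ElasticsearchClusterRed
        expr: elasticsearch_cluster_health_status{color="red"} == 1
        for: 5m
        labels:
          severity: critical
      # Average query latency over the last 5 minutes (seconds per query).
      - record: es:search_query_latency_seconds:rate5m
        expr: rate(elasticsearch_indices_search_query_time_seconds[5m]) / rate(elasticsearch_indices_search_query_total[5m])
```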
2. Prometheus server-side configuration

1) Edit the main config file prometheus.yml and add the ES scrape job:

```yaml
global:
  scrape_interval: 15s      # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s  # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "rules/*rules.yml"

scrape_configs:
  - job_name: 'elasticsearch'
    scrape_interval: 15s
    metrics_path: '/metrics'
    file_sd_configs:
      - files:
          - targets/elasticsearch/*.json
        refresh_interval: 1m
```
2) Add the target file for the ES instances to monitor. Note that the port must match the exporter's `--web.listen-address` (9111 in the systemd unit below):

```shell
mkdir -p targets/elasticsearch
cat targets/elasticsearch/elasticsearch.json
```

```json
[
  {
    "targets": [
      "10.32.54.172:9111"
    ],
    "labels": {
      "cluster": "es-testnet-riskcontrol"
    }
  }
]
```
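Prometheus logs and skips a malformed file_sd file rather than failing at startup, so it is worth validating the JSON after editing it. A minimal sketch, recreating the example target file relative to the current directory (python3 is assumed to be available; the host and port are the placeholders from the example above):

```shell
# Write the file_sd target file (the port should match the exporter's
# --web.listen-address).
mkdir -p targets/elasticsearch
cat > targets/elasticsearch/elasticsearch.json <<'EOF'
[
  {
    "targets": ["10.32.54.172:9111"],
    "labels": {"cluster": "es-testnet-riskcontrol"}
  }
]
EOF

# json.tool exits non-zero with a parse error if the file is malformed.
python3 -m json.tool targets/elasticsearch/elasticsearch.json
```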
3. Starting and configuring the elasticsearch_exporter client

```shell
cat << EOF > /etc/systemd/system/elasticsearch_exporter.service
[Unit]
Description=Prometheus elasticsearch_exporter
After=local-fs.target network-online.target network.target
Wants=local-fs.target network-online.target network.target

[Service]
User=root
Nice=10
ExecStart=/opt/elasticsearch_exporter/elasticsearch_exporter --es.all --es.indices --es.cluster_settings --es.indices_settings --es.shards --es.snapshots --web.listen-address :9111 --es.uri http://ip:9200
ExecStop=/usr/bin/killall elasticsearch_exporter

[Install]
WantedBy=default.target
EOF
```

If the ES cluster requires authentication, pass the credentials in the URI: `--es.uri https://user:password@IP:9200`

```shell
systemctl daemon-reload
systemctl enable elasticsearch_exporter.service
systemctl start elasticsearch_exporter.service
```
4. Importing the Grafana dashboard

1) In Grafana, go to Dashboard -> Manage -> Import, enter 2322 under "Import via grafana.com", and click Load.
2) After a minute or two the graphs will start to populate.

References:
https://grafana.com/grafana/dashboards/2322
https://github.com/prometheus-community/elasticsearch_exporter/releases