Monitoring with Prometheus + Alertmanager + PrometheusAlert

This article records how to set up this monitoring stack.


Prometheus

Prometheus stores time-series data: samples that belong to the same series (same metric name and the same set of labels) are stored as a continuous sequence along the time dimension.
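
For example, the two series below share the metric name node_cpu_seconds_total but carry different label sets, so they are stored as two separate time series, each with its own stream of (timestamp, value) samples (the values here are made up for illustration):

node_cpu_seconds_total{instance="localhost:9100", cpu="0", mode="idle"}  ->  (t1, 8021.4), (t2, 8023.1), ...
node_cpu_seconds_total{instance="localhost:9100", cpu="0", mode="user"}  ->  (t1, 532.7), (t2, 533.0), ...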

Deploying Prometheus

1. Download the release package from the official site and extract it; the main work lies in the configuration and the rules.

global:
  scrape_interval: 15s                  # how often to scrape the targets
  evaluation_interval: 15s              # how often to evaluate the rules
  # scrape_timeout is set to the global default (10s).

# Configure your Alertmanager endpoints (more than one can be configured)
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - 127.0.0.1:9093

# Configure your rule files (more than one can be listed)
rule_files:
  # - "rules/first_rules.yml"
  # - "rules/second_rules.yml"

# Scrape target configuration
scrape_configs:
  - job_name: "prometheus1"
    static_configs:   # targets listed manually
      - targets: ["localhost:9090"]
      - targets: ["localhost:9100"]
        # custom labels that can later be used in Alertmanager (routing, templates, etc.)
        labels:
          idc: shanghai
          system: baidu
          owner: xxx
  - job_name: "prometheus2"
    - job_name: "prometheus1"
      file_sd_configs:
       - files:
         - /usr/local/prometheus/test.yaml
         refresh_interval: 5s
    可以将现有的标签进行替换
    relabel_configs:
      - action: replace
        source_labels: ["_address_"]
        regex: "(.*)"
        target_label: "instance"(自动新增的标签)
        replacement: "$1"
        或者
      - source_labels: ["_address_"]
        regex: "(.*)"
        target_label: "test"
        replacement: $1

 
The contents of test.yaml:
- targets:
  - 10.1.9.1xx
  - 10.1.9.2xx
  labels:
    service: aaa
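
For reference, a file_sd target file may contain several target groups, each with its own label set, and Prometheus re-reads it at the configured refresh_interval without a restart. A minimal sketch (the hostnames, ports and label values below are made-up examples):

- targets:
  - app01.example.com:9100
  - app02.example.com:9100
  labels:
    service: web
    idc: shanghai
- targets:
  - db01.example.com:9104
  labels:
    service: mysql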

2. rules/first_rules.yml

groups:
- name: node_monitor
  rules:

  # Alert for any instance that is unreachable for more than 1 minute.
  - alert: InstanceDown
    expr: up == 0
    for: 1m
    labels:
      severity: 'critical'
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} has been down for more than 1 minute. {{ $labels.test }}"

- name: cpu_test
  rules:
  - alert: CPU
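    # (1 - idle rate) * 100 = busy CPU percentage; the threshold of 1 below is presumably lowered so the alert fires easily during testing, while 90 would match the summary text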
    expr: (1-rate(node_cpu_seconds_total{mode="idle"}[1m]))*100 > 1
    for: 5s
    labels:
      severity: 'warning'
    annotations:
      summary: " cpu利用率超过 90%,{{ $labels.instance }}当前值: {{ $value }}%"

3. alertmanager.yaml

global:
  resolve_timeout: 5m
  smtp_from: "archive@qq.com"
  smtp_smarthost: "smtp.partner.com:587"
  smtp_auth_username: "archive@qq.com"
  smtp_auth_password: "mi1PooI7F%Ht9m0#"
route:
  group_by: ['alertname']
  group_wait: 5s
  group_interval: 5s
  repeat_interval: 5s
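  # such short values are presumably just for quick testing; Alertmanager's defaults are group_wait: 30s, group_interval: 5m, repeat_interval: 4h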
  receiver: 'email'  # this only sets the default receiver

  routes:
  - match:         # exact match on label values
      service: foo1
    receiver: "email1"
  - match_re:      # regex match on label values
      owner: "xxxx"
    receiver: "email"

receivers:    # several receivers can be configured here: email, webhook, etc.
- name: 'email'
  email_configs:
  - to: 'test@qq.com'
    send_resolved: true    # also send a notification when the alert is resolved

- name: 'email1'        # a single receiver can combine several notification integrations
  webhook_configs:
  - url: 'http://prometheus-webhook-dingtalk.kube.com'
  email_configs:
  - to: 'test@qq.com'
    send_resolved: true

inhibit_rules: # inhibition rules
  - source_match: # while an alert matching the source labels is firing, alerts matching the target labels are suppressed
      severity: 'warning'  # the label matched here must actually be present on the alerts (set via the rules/route above), otherwise a missing-key error is reported
    target_match:
      severity: 'critical' # exact match on the target label value; use target_match_re for a regex such as ".*MySQL.*"
    equal: ['alertname','instance'] # suppression only happens when the alerts also carry identical values for all of these labels
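
If regex matching is wanted on the target side (as the ".*MySQL.*" example above suggests), Alertmanager also provides target_match_re (and source_match_re). A sketch of such a rule; the alertname pattern here is only an example:

  - source_match:
      severity: 'critical'
    target_match_re:
      alertname: '.*MySQL.*'
    equal: ['instance']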

4. PrometheusAlert
Search for feiyu563/PrometheusAlert on GitHub or Gitee.
After downloading it, edit app.conf and then run PrometheusAlert.
Open the web UI, log in with the username and password from app.conf, and edit the templates via the template menu.
Note, however, that Alertmanager must be configured with a matching receiver:
receivers:
- name: 'PrometheusAlert'
  webhook_configs:
  - url: '...'   # the address shown next to the chosen template in PrometheusAlert

Either trigger an alert manually or wait for a Prometheus alert to fire, then check the received messages in PrometheusAlert's log and adjust the template based on the keys and values in the JSON payload.
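
For reference when editing the templates, the body that Alertmanager POSTs to the webhook (and that shows up in PrometheusAlert's message log) is JSON roughly shaped as below. This is an abbreviated sketch with made-up values; check the actual log output, since fields can differ slightly between Alertmanager versions:

{
  "receiver": "PrometheusAlert",
  "status": "firing",
  "alerts": [
    {
      "status": "firing",
      "labels": { "alertname": "InstanceDown", "instance": "localhost:9100", "severity": "critical" },
      "annotations": { "summary": "Instance localhost:9100 down" },
      "startsAt": "2023-01-01T08:00:00Z",
      "endsAt": "0001-01-01T00:00:00Z",
      "generatorURL": "http://localhost:9090/graph?g0.expr=up+%3D%3D+0"
    }
  ],
  "groupLabels": { "alertname": "InstanceDown" },
  "commonLabels": { "alertname": "InstanceDown", "severity": "critical" },
  "commonAnnotations": {},
  "externalURL": "http://localhost:9093",
  "version": "4",
  "groupKey": "{}:{alertname=\"InstanceDown\"}"
}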
