SaltStack Data Systems

SaltStack has two major data systems:

  • Grains
  • Pillar

Components of the SaltStack Data System

SaltStack Component: Grains

Grains is a SaltStack component that stores the information a minion collects when it starts up.

Grains is one of the most important SaltStack components, because it is used constantly in configuration and deployment work. It records a minion's static information: think of it as holding each minion's common attributes, such as CPU, memory, disk, and network details. You can view all of a minion's Grains with grains.items.

Functions of Grains:

  • Collecting asset (inventory) information

Typical Grains use cases:

  • Information queries
  • Target matching on the command line
  • Target matching in the top file
  • Target matching in templates

For target matching in templates, see the official SaltStack documentation.
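
As a minimal sketch of that last case: state files rendered through Jinja can read the grains dictionary directly. The motd state below is hypothetical, not part of this setup:

/etc/motd:
  file.managed:
    - contents: |
        Welcome to {{ grains['fqdn'] }} ({{ grains['os'] }} {{ grains['osrelease'] }})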

Example: information queries

[root@master ~]# salt 'node1' grains.items
node1:
    ----------
    biosreleasedate:
        02/27/2020
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - cpuid_fault
        - invpcid_single
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - rdseed
        - adx
        - smap
        - clflushopt
        - xsaveopt
        - xsavec
        - xsaves
        - arat
        - md_clear
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
    cpuarch:
        x86_64
    cwd:
        /
    disks:
        - sr0
        - sda
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 114.114.114.114
        ip6_nameservers:
        nameservers:
            - 114.114.114.114
        options:
        search:
        sortlist:
    domain:
    fqdn:
        node1
    fqdn_ip4:
        - 192.168.100.20
    fqdn_ip6:
        - fe80::baf8:3da3:ce41:8484
    fqdns:
        - node1
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:
        node1
    hwaddr_interfaces:
        ----------
        ens33:
            00:0c:29:c8:f3:ae
        lo:
            00:00:00:00:00:00
    id:
        node1
    init:
        systemd
    ip4_gw:
        192.168.100.2
    ip4_interfaces:
        ----------
        ens33:
            - 192.168.100.20
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens33:
            - fe80::baf8:3da3:ce41:8484
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens33:
            - 192.168.100.20
            - fe80::baf8:3da3:ce41:8484
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.100.20
    ipv6:
        - ::1
        - fe80::baf8:3da3:ce41:8484
    kernel:
        Linux
    kernelparams:
        |_
          - BOOT_IMAGE
          - (hd0,msdos1)/vmlinuz-4.18.0-257.el8.x86_64
        |_
          - root
          - /dev/mapper/cs-root
        |_
          - ro
          - None
        |_
          - crashkernel
          - auto
        |_
          - resume
          - /dev/mapper/cs-swap
        |_
          - rd.lvm.lv
          - cs/root
        |_
          - rd.lvm.lv
          - cs/swap
        |_
          - rhgb
          - None
        |_
          - quiet
          - None
    kernelrelease:
        4.18.0-257.el8.x86_64
    kernelversion:
        #1 SMP Thu Dec 3 22:16:23 UTC 2020
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
        timezone:
            EDT
    localhost:
        node1
    lsb_distrib_codename:
        CentOS Stream 8
    lsb_distrib_id:
        CentOS Stream
    lsb_distrib_release:
        8
    lvm:
        ----------
        cs:
            - root
            - swap
    machine_id:
        58b35e9af791430083a37dc2e24a33a9
    manufacturer:
        VMware, Inc.
    master:
        192.168.100.10
    mdadm:
    mem_total:
        1789
    nodename:
        node1
    num_cpus:
        2
    num_gpus:
        1
    os:
        CentOS Stream
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        CentOS Stream 8
    osfinger:
        CentOS Stream-8
    osfullname:
        CentOS Stream
    osmajorrelease:
        8
    osrelease:
        8
    osrelease_info:
        - 8
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        190448
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python3.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python36.zip
        - /usr/lib64/python3.6
        - /usr/lib64/python3.6/lib-dynload
        - /usr/lib64/python3.6/site-packages
        - /usr/lib/python3.6/site-packages
    pythonversion:
        - 3
        - 6
        - 8
        - final
        - 0
    saltpath:
        /usr/lib/python3.6/site-packages/salt
    saltversion:
        3003.1
    saltversioninfo:
        - 3003
        - 1
    selinux:
        ----------
        enabled:
            True
        enforced:
            Permissive
    serialnumber:
        VMware-56 4d 42 34 4c f4 54 e8-e6 1e 42 2c e5 c8 f3 ae
    server_id:
        1797241226
    shell:
        /bin/sh
    ssds:
    swap_total:
        2047
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
        version:
            239
    systempath:
        - /usr/local/sbin
        - /usr/local/bin
        - /usr/sbin
        - /usr/bin
    uid:
        0
    username:
        root
    uuid:
        34424d56-f44c-e854-e61e-422ce5c8f3ae
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.4

Query only the keys of all Grains:

[root@master ~]# salt 'node1' grains.ls
node1:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - cwd
    - disks
    - dns
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelparams
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - lvm
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - ssds
    - swap_total
    - systemd
    - systempath
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion

Query the value of a specific key, for example to get the IP address:

[root@master ~]# salt '*' grains.get fqdn_ip4
node1:
    - 192.168.100.20
master:
    - 192.168.100.10


[root@master ~]# salt '*' grains.get ip4_interfaces
node1:
    ----------
    ens33:
        - 192.168.100.20
    lo:
        - 127.0.0.1
master:
    ----------
    ens33:
        - 192.168.100.10
    lo:
        - 127.0.0.1


[root@master ~]# salt '*' grains.get ip4_interfaces:ens33
master:
    - 192.168.100.10
node1:
    - 192.168.100.20
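
Several keys can also be fetched in one call with grains.item (the keys here are chosen just for illustration; output omitted):

[root@master ~]# salt 'node1' grains.item os osrelease num_cpus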

Using Grains to match minions

[root@master ~]# salt -G 'os:CentOS Stream' cmd.run 'date'
master:
    Mon Jul  4 06:04:23 EDT 2021
node1:
    Mon Jul  4 06:04:23 EDT 2021
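
Grain expressions can also be combined with other matchers using Salt's compound matching option -C, where the G@ prefix marks a grain match; a sketch (output omitted):

[root@master ~]# salt -C 'G@os:CentOS Stream and G@virtual:VMware' test.ping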

Using Grains in the top file

[root@master base]# vim /srv/salt/base/top.sls
base:
  'os:CentOS Stream':
    - match: grains
    - web.nginx.install

//Stop nginx on the minion first and confirm nothing is listening on port 80
[root@node1 ~]# ss -antl
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port   Process   
LISTEN   0        128              0.0.0.0:22             0.0.0.0:*                
LISTEN   0        128                 [::]:22                [::]:*  



[root@master ~]# salt 'node1' state.highstate
node1:
----------
          ID: nginx-install
    Function: pkg.installed
        Name: nginx
      Result: True
     Comment: All specified packages are already installed
     Started: 06:09:55.628492
    Duration: 878.447 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: Service nginx is already enabled, and is running
     Started: 06:09:56.508264
    Duration: 153.803 ms
     Changes:   
              ----------
              nginx:
                  True

Summary for node1
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time:   1.032 s


//Check the minion's listening ports again
[root@node1 ~]# ss -antl
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port   Process   
LISTEN   0        128              0.0.0.0:80             0.0.0.0:*                
LISTEN   0        128              0.0.0.0:22             0.0.0.0:*                
LISTEN   0        128                 [::]:80                [::]:*                
LISTEN   0        128                 [::]:22                [::]:*  
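
Top files can also match grains with regular expressions via the grain_pcre matcher; a sketch based on this setup's values:

base:
  'os:CentOS.*':
    - match: grain_pcre
    - web.nginx.install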

Two ways to define custom Grains:

  • In the minion configuration file (search for "grains" in it)
  • In a grains file created under /etc/salt (the recommended way), as below:
[root@node1 ~]# cd /etc/salt/
[root@node1 salt]# vim grains
test-grains: linux-node1

[root@node1 salt]# systemctl restart salt-minion

[root@master ~]# salt '*' grains.get test-grains
master:

node1:
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
    
    salt-run jobs.lookup_jid 20210705101448171250
ERROR: Minions returned with non-zero exit code
node1 reported [No response] because salt-minion was still restarting when the job ran; looking up the job ID afterwards returns the value:
[root@master ~]# salt-run jobs.lookup_jid 20210705101448171250
master:
node1:
    linux-node1
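
Since the grains file is plain YAML, custom grains can hold lists as well as scalars, which is handy for role-based targeting. A sketch (the roles values are hypothetical; after editing, restart the minion or sync grains as shown below):

[root@node1 salt]# vim grains
test-grains: linux-node1
roles:
  - webserver
  - memcache

[root@master ~]# salt -G 'roles:webserver' test.ping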

Defining custom Grains without restarting the minion:

[root@node1 salt]# vim grains
test-grains: linux-node1
zdj: test

[root@master ~]# salt '*' saltutil.sync_grains
node1:
master:
[root@master ~]# salt '*' grains.get zdj
master:
node1:
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
    
    salt-run jobs.lookup_jid 20210705101830906882
ERROR: Minions returned with non-zero exit code
[root@master ~]# salt-run jobs.lookup_jid 20210705101830906882
master:
node1:
    test
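
Custom grains can also be set from the master side with the grains execution module, which writes the key into the minion's grains file; a sketch (the key and value are just examples):

[root@master ~]# salt 'node1' grains.setval deployment qa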

SaltStack Component: Pillar

Pillar is another key SaltStack component. It serves as the data management center and is frequently used together with states in large-scale configuration management. Pillar's main job in SaltStack is to store and define the data that configuration management needs, such as software version numbers, usernames, and passwords. Its storage format is similar to Grains: both use YAML.

The master configuration file contains a Pillar settings section that defines Pillar-related parameters:

#pillar_roots:
#  base:
#    - /srv/pillar

By default, the base environment's Pillar working directory is /srv/pillar. If you want different Pillar working directories for multiple environments, you only need to change this part of the configuration file.
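
For example, a sketch of pillar_roots with two environments (the prod path is an assumption):

pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod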

Features of Pillar:

  • Data can be defined for exactly the minions that need it
  • Only the targeted minions can see the data defined for them
  • It is configured in the master configuration file

Query Pillar data:
[root@master ~]# salt '*' pillar.items
node1:
    ----------
master:
    ----------

By default, Pillar contains no information. If you want to see data here, uncomment pillar_opts in the master configuration file and set it to True.

[root@master ~]# vim /etc/salt/master
# master config file that can then be used on minions.
pillar_opts: True   

# The pillar_safe_render_error option prevents the master from passing pillar


[root@master ~]# salt '*' pillar.items
master:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
        cli_summary:
            False
        client_acl_verify:
            True
        cluster_mode:
            False
        con_cache:
            False
        conf_file:
            /etc/salt/master
        config_dir:
            /etc/salt
        cython_enable:
            False
        daemon:
            False
        decrypt_pillar:
        decrypt_pillar_default:
            gpg
        decrypt_pillar_delimiter:
            :
        decrypt_pillar_renderers:
            - gpg
        default_include:
.........    // ... lines omitted here ...
        winrepo_refspecs:
            - +refs/heads/*:refs/remotes/origin/*
            - +refs/tags/*:refs/tags/*
        winrepo_remotes:
            - https://github.com/saltstack/salt-winrepo.git
        winrepo_remotes_ng:
            - https://github.com/saltstack/salt-winrepo-ng.git
        winrepo_ssl_verify:
            True
        winrepo_user:
        worker_threads:
            5
        zmq_backlog:
            1000
        zmq_filtering:
            False
        zmq_monitor:
            False

Defining custom Pillar data:

Look for pillar_roots in the master configuration file to see where pillar data is stored:

# highstate format, and is generally just key/value pairs.
//Uncomment the following three lines
pillar_roots:
  base:
    - /srv/pillar/base  //append base to the default path
# 
#ext_pillar:


[root@master ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar/base
[root@master ~]# mkdir -p /srv/pillar/base
[root@master ~]# systemctl restart salt-master
[root@master ~]# vim /srv/pillar/base/apache.sls
{% if grains['os'] == 'CentOS Stream' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
//Define the top file entry point
[root@master ~]# vim /srv/pillar/base/top.sls
base:
  '*':
    - apache
[root@master ~]# salt '*' pillar.items
node1:
    ----------
    apache:
        httpd
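
Because the pillar top file decides which minions receive which data, sensitive values can be restricted to a single minion. A sketch (the secrets SLS name is hypothetical):

[root@master ~]# vim /srv/pillar/base/top.sls
base:
  'node1':
    - secrets

Minions not matched here simply never see those keys in pillar.items.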
//State top file:
[root@master ~]# vim /srv/salt/base/top.sls
base:
  '*':
    - web.apache.install
//Edit the apache state file under the salt directory so it references the pillar data
[root@master ~]# mkdir /srv/salt/base/web/apache/
[root@master ~]# vim /srv/salt/base/web/apache/install.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: True
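
If a pillar key might be missing, Jinja can look it up with salt['pillar.get'] and a default value instead of indexing the pillar dictionary directly; a sketch:

apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}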
[root@master ~]# salt '*' state.highstate
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node1:
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
    
    salt-run jobs.lookup_jid 20210705104137415143
ERROR: Minions returned with non-zero exit code
[root@master ~]# salt-run jobs.lookup_jid 20210705104137415143
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node1:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: The following packages were installed/updated: httpd
     Started: 06:41:39.078171
    Duration: 9987.425 ms
     Changes:   
              ----------
              apr:
                  ----------
                  new:
                      1.6.3-11.el8
                  old:
              apr-util:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-bdb:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              apr-util-openssl:
                  ----------
                  new:
                      1.6.1-6.el8
                  old:
              centos-logos-httpd:
                  ----------
                  new:
                      85.5-1.el8
                  old:
              httpd:
                  ----------
                  new:
                      2.4.37-40.module_el8.5.0+852+0aafc63b
                  old:
              httpd-filesystem:
                  ----------
                  new:
                      2.4.37-40.module_el8.5.0+852+0aafc63b
                  old:
              httpd-tools:
                  ----------
                  new:
                      2.4.37-40.module_el8.5.0+852+0aafc63b
                  old:
              mod_http2:
                  ----------
                  new:
                      1.15.7-3.module_el8.4.0+778+c970deab
                  old:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: False
     Comment: Job for httpd.service failed because the control process exited with error code.
              See "systemctl status httpd.service" and "journalctl -xe" for details.
     Started: 06:41:49.075385
    Duration: 10177.333 ms
     Changes:   

Summary for node1
------------
Succeeded: 1 (changed=1)
Failed:    1
------------
Total states run:     2
Total run time:  20.165 s

//The httpd package was installed, but the service state failed to start it (see the error above; port 80 was most likely still occupied by nginx from the earlier example at the time). Start httpd manually on the minion and check again:
[root@node1 salt]# systemctl start httpd
[root@node1 salt]# ss -antl
State    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port   Process   
LISTEN   0        128              0.0.0.0:22             0.0.0.0:*                
LISTEN   0        128                    *:80                   *:*                
LISTEN   0        128                 [::]:22                [::]:*   
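
As a final check, the pillar key each minion resolved can be queried directly from the master (output omitted):

[root@master ~]# salt '*' pillar.get apache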
