elasticsearch + kibana + x-pack + logstash cluster deployment and installation

elasticsearch part, overall description:

1. What elasticsearch is and its characteristics.
Concept: elasticsearch is a search server built on Lucene; Lucene is a full-text search framework.
Characteristics:
- Distributed, scalable, highly available
- Searches and analyzes data in near real time
- Exposed through a RESTful API
Summary: a full-text search tool that follows the RESTful API standard and provides distributed, scalable, highly available real-time data storage and analysis.

2. Related elasticsearch concepts (compared with relational databases in the next section).
Related concepts:
- Node: a single server running elasticsearch.
- Cluster: one or more Nodes that work together.
- Document: the basic unit of information that can be searched.
- Index: a collection of documents with similar characteristics.
- Type: one or more types can be defined within an index (reduced to a single type and deprecated in recent versions).
- Field: the smallest unit in elasticsearch, comparable to a column of data.
- Shards: an index is split into several pieces, each of which is a shard. The default used to be 5 (from elasticsearch 7.0 it is 1).
- Replicas: one or more copies of an index's shards.
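To make the shard concept concrete: Elasticsearch assigns each document to a shard with a routing formula of the form `shard = hash(_routing) % number_of_shards`, where `_routing` defaults to the document id. A minimal Python sketch of the idea; the real implementation uses a Murmur3 hash, and `zlib.crc32` below is only a stand-in for illustration:

```python
import zlib

def route_to_shard(routing: str, number_of_shards: int) -> int:
    # stand-in for Elasticsearch's murmur3 hash of the routing value
    return zlib.crc32(routing.encode("utf-8")) % number_of_shards

# with 5 primary shards, each document id deterministically maps to one shard
for doc_id in ["user-1", "user-2", "user-3"]:
    print(doc_id, "-> shard", route_to_shard(doc_id, 5))
```

Because the shard is derived from the number of shards, this is also why the primary shard count of an index cannot be changed after creation without reindexing.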

3. Relational vs. non-relational comparison:
database ----> Index
table ----> Type
row ----> Document
column ----> Field
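The mapping above can be made concrete with a small sketch: a relational row becomes a JSON document indexed under an index/type/id path. The index name `mydb`, type `users`, and field values below are invented for illustration:

```python
import json

# database -> Index, table -> Type, row -> Document, column -> Field
row = {"name": "alice", "age": 30}          # one row; "name"/"age" are the columns
index_name, type_name, doc_id = "mydb", "users", "1"

# such a row would be indexed as a JSON document under /<index>/<type>/<id>
request_line = f"PUT /{index_name}/{type_name}/{doc_id}"
body = json.dumps(row)
print(request_line)
print(body)
```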

4. Built-in fields and field types:
- Built-in fields:
_uid,_id,_type,_source,_all,_analyzer,_boost,_parent,_routing,_index,_size,_timestamp,_ttl

- Field types:
string,integer/long,float/double,boolean,null,date
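As a sketch of how these types appear in practice, an index mapping declares a type for each field. The layout below is hypothetical (all field names invented), and note that newer Elasticsearch versions split `string` into `text`/`keyword`:

```python
import json

# hypothetical mapping using the field types listed above
mapping = {
    "properties": {
        "title":   {"type": "text"},      # string-like, full-text analyzed
        "views":   {"type": "long"},      # integer/long
        "score":   {"type": "double"},    # float/double
        "active":  {"type": "boolean"},
        "created": {"type": "date"},
    }
}
print(json.dumps(mapping, indent=2))
```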

5. elasticsearch architecture details.
------- To be added -------

6. What an inverted index is.
Concept: an inverted index (also called a reverse index, postings file, or inverted file)
is an indexing method that stores, for each word in a full-text index, a mapping to its locations in a document or a set of documents.
It is the most common data structure in document-retrieval systems.
Forward index vs. inverted index:
content -- forward --> keywords/terms
content <-- inverted -- keywords/terms
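A toy inverted index can be built in a few lines: tokenize each document and record, for every word, the set of documents containing it. This only illustrates the data structure, not how Lucene actually implements it:

```python
from collections import defaultdict

# two tiny "documents"; a forward index goes doc -> words,
# an inverted index goes word -> docs
docs = {
    1: "elasticsearch is a search server",
    2: "lucene is a search framework",
}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        inverted[word].add(doc_id)

print(sorted(inverted["search"]))   # "search" occurs in docs 1 and 2
print(sorted(inverted["lucene"]))   # "lucene" occurs only in doc 2
```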

7. RESTful API and curl.
Concept: REST stands for Representational State Transfer.
- Representation: the form in which a resource (an image, a file, a video, etc.) is presented.
- State transfer: clients operate on those resources through standard HTTP verbs.
API: application programming interface.
RESTful API: an application programming interface for operating on resources.

curl: a command-line file-transfer tool driven by URL syntax, supporting both upload and download.
Common curl options:
- -I fetch only the response headers
- -v show the verbose handshake/transfer details
- -o save the response to a file
curl -o baidu.html www.baidu.com
curl -I -v www.baidu.com

【Note: mine is an Alibaba Cloud server with a public and a private IP; use the public IP to access the site and the private IP in deployment configuration files. Remember that on Alibaba Cloud you must open the corresponding ports in the security group, otherwise nothing will be reachable from outside!】
【This article was produced on local virtual machines; a cloud server can be set up the same way.】
【Install elasticsearch + kibana + x-pack + logstash】 // 【kibana is installed later in this article】
8. elasticsearch single-node/cluster installation and configuration. Single-node installation (elasticsearch)

【Note】The firewall and selinux are both disabled.

【1】Add the user that will run elasticsearch and set up hostname resolution for the cluster. I use 4 machines here.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# useradd -m elastic
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# passwd elastic
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# vim /etc/hosts
192.168.1.110 iZuf6j20vqe3q83v5tjwd0Z node-1
192.168.1.120 iZuf6j20vqe3q83v5tjwd1Z node-2
192.168.1.130 iZuf6j20vqe3q83v5tjwd2Z node-3
192.168.1.140 iZuf6j20vqe3q83v5tjwd3Z node-4

【1.2】Set the hostname on node-1; repeat on every cluster node with its own name.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node-1

【2】Create /var/log/elasticsearch/ and /var/lib/elasticsearch/ for logs and data, then change their owner and group.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# mkdir -p /var/log/elasticsearch
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# mkdir -p /var/lib/elasticsearch
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# chown elastic:elastic /var/log/elasticsearch/
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# chown elastic:elastic /var/lib/elasticsearch/

【3】Edit sysctl.conf
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# vim /etc/sysctl.conf
vm.max_map_count=262144   # the minimum value elasticsearch requires

【3.1】Check the added parameter
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# grep vm.max_map_count /etc/sysctl.conf
vm.max_map_count=262144

【3.2】Apply the kernel parameter immediately
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# sysctl -p

【4】Edit limits.conf to raise the user's maximum open files and memory-lock limits
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# vim /etc/security/limits.conf
elastic soft nofile 65535
elastic hard nofile 65535
elastic soft memlock unlimited
elastic hard memlock unlimited

【5】Install JDK 8u181
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# mkdir -p /usr/java
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# mkdir software
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# cd software/

【5.1】Download the Java 1.8 JDK; here is the Baidu Netdisk address:
Link: https://pan.baidu.com/s/1J_Wo42a0CEnC2Bn8f-sS0Q  extraction code: 5vvd

[root@iZuf6j20vqe3q83v5tjwd2Z software]# tar -zxvf jdk-8u181-linux-x64.tar.gz -C /usr/java/

【5.2】Check the installed Java environment
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

【5.3】Add the Java 1.8 JDK environment variables
[root@iZuf6j20vqe3q83v5tjwd2Z software]# vim /etc/profile
JAVA_HOME=/usr/java/jdk1.8.0_181/
PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export JAVA_HOME CLASSPATH PATH

【5.4】Apply the environment variables immediately
[root@iZuf6j20vqe3q83v5tjwd0Z software]# source /etc/profile

【6】Download and install elasticsearch.
[root@iZuf6j20vqe3q83v5tjwd0Z software]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.1-linux-x86_64.tar.gz
[root@iZuf6j20vqe3q83v5tjwd0Z software]# tar -zxvf elasticsearch-7.1.1-linux-x86_64.tar.gz -C /opt/
[root@iZuf6j20vqe3q83v5tjwd0Z software]# mv /opt/elasticsearch-7.1.1 /opt/elasticsearch
[root@iZuf6j20vqe3q83v5tjwd0Z software]# chown -R elastic:elastic /opt/elasticsearch/
[root@iZuf6j20vqe3q83v5tjwd0Z software]# vim /opt/elasticsearch/config/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-es   # cluster name; must be identical on every node
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1   # node name
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch   # data path
#
# Path to log files:
#
path.logs: /var/log/elasticsearch   # log path
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.110   # bind IP
#
# Set a custom port for HTTP:
#
http.port: 9200   # HTTP port
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["192.168.1.110", "192.168.1.120", "192.168.1.130", "192.168.1.140"]   # seed hosts for discovering the cluster
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2", "node-3", "node-4"]   # nodes allowed to stand for master election at the first bootstrap
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
gateway.recover_after_nodes: 1   # start recovery once this many nodes have joined; with 1, the cluster still works even if the other nodes are down. Optional; set it to suit your needs.
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

【7】Check the configuration parameters
[root@iZuf6j20vqe3q83v5tjwd0Z software]# grep -Ev "^#|^$" /opt/elasticsearch/config/elasticsearch.yml
cluster.name: my-es
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.1.110
discovery.seed_hosts: ["192.168.1.110", "192.168.1.120", "192.168.1.130", "192.168.1.140"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3", "node-4"]
gateway.recover_after_nodes: 1

【7.1】Copy the whole /opt/elasticsearch directory to the other nodes; elasticsearch.yml is almost identical everywhere, only the following two parameters need changing
Node 2
node.name: node-2
network.host: 192.168.1.120

Node 3
node.name: node-3
network.host: 192.168.1.130

Node 4
node.name: node-4
network.host: 192.168.1.140

【7.2】Start the program. elasticsearch cannot be started as root; switch to the regular user first. We start it in the background so the application keeps running after we log out.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# su - elastic
[elastic@iZuf6j20vqe3q83v5tjwd0Z ~]$ /opt/elasticsearch/bin/elasticsearch -d // start in the background

【7.3】To stop the program, kill its process. Find the process with:
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# ps -ef | grep elasticsearch

【8】Write an elasticsearch startup script
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# vim /etc/init.d/elasticsearch
#!/bin/bash
#chkconfig: 2345 80 05
#description: elasticsearch

case "$1" in
start)
su - elastic<<!
cd /opt/elasticsearch/
./bin/elasticsearch -d
!
echo "elasticsearch startup"
;;
stop)
elasticsearch_pid=`ps aux|grep elasticsearch | grep -v 'grep elasticsearch' | awk '{print $2}'`
kill $elasticsearch_pid
echo "elasticsearch stopped"
;;
restart)
elasticsearch_pid=`ps aux|grep elasticsearch | grep -v 'grep elasticsearch' | awk '{print $2}'`
kill $elasticsearch_pid
echo "elasticsearch stopped"
su - elastic<<!
cd /opt/elasticsearch/
./bin/elasticsearch -d
!
echo "elasticsearch startup"
;;
*)
echo "start|stop|restart"
;;
esac

exit $?

【8.1】Make the script executable. It supports start, stop and restart.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# chmod 755 /etc/init.d/elasticsearch

【8.2】Register it to start on boot
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# chkconfig --add elasticsearch
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# /etc/init.d/elasticsearch start

【8.3】Check the cluster status
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# curl -XGET 'http://192.168.1.110:9200/_cat/nodes?v' // use the IPs of your own cluster
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.130 22 23 0 0.00 0.01 0.05 mdi - node-3
192.168.1.120 21 23 4 0.00 0.05 0.05 mdi * node-2
192.168.1.110 9 42 1 0.04 0.08 0.11 mdi - node-1
192.168.1.140 10 31 3 0.01 0.04 0.09 mdi - node-4
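Besides the `_cat/nodes` view above, the cluster exposes a JSON health endpoint at `/_cluster/health`. A sketch of reading such a response; the JSON below is a fabricated sample for illustration, not real output from this cluster:

```python
import json

# fabricated example of a GET /_cluster/health response
sample = """{
  "cluster_name": "my-es",
  "status": "green",
  "number_of_nodes": 4,
  "active_shards": 10
}"""

health = json.loads(sample)
# green = all shards allocated, yellow = some replicas unassigned,
# red = some primary shards unassigned
print(health["cluster_name"], health["status"], health["number_of_nodes"], "nodes")
```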

【8.4】If the cluster will not start, deleting the following files lets it come up again. WARNING: this wipes all index data and logs; only do it on a cluster you can afford to reset.
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# rm -rf /var/lib/elasticsearch/*
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# rm -rf /var/log/elasticsearch/*

【8.5】Access it from a web browser
http://<server-IP>:9200

【kibana】installation and deployment
【1】Download and install
[root@iZuf6j20vqe3q83v5tjwd0Z ~]# cd software/
[root@iZuf6j20vqe3q83v5tjwd0Z software]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.1-linux-x86_64.tar.gz
[root@iZuf6j20vqe3q83v5tjwd0Z software]# shasum -a 512 kibana-7.1.1-linux-x86_64.tar.gz
[root@iZuf6j20vqe3q83v5tjwd0Z software]# tar -zxvf kibana-7.1.1-linux-x86_64.tar.gz -C /opt/
[root@iZuf6j20vqe3q83v5tjwd0Z software]# mv /opt/kibana-7.1.1-linux-x86_64/ /opt/kibana

【2】Edit the kibana configuration file
[root@iZuf6j20vqe3q83v5tjwd0Z config]# vim /opt/kibana/config/kibana.yml

【2.1】Check the modified configuration parameters
[root@iZuf6j20vqe3q83v5tjwd0Z config]# grep -Ev "^#|^$" /opt/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.110:9200"]
kibana.index: ".newkibana"
pid.file: /var/run/kibana.pid

【2.2】Start kibana
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# cd /opt/kibana/
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# ./bin/kibana // start in the foreground

【3】Troubleshoot startup errors
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# grep ERROR /opt/kibana/data/headless_shell-linux/chrome_debug.log // check the log
[0618/134738.048223:ERROR:egl_util.cc(59)] Failed to load GLES library: /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so: cannot open shared object file: No such file or directory
[0618/140029.258985:ERROR:egl_util.cc(59)] Failed to load GLES library: /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so: file too short
[0618/140717.891370:ERROR:egl_util.cc(59)] Failed to load GLES library: /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so: file too short
[0618/141002.924684:ERROR:egl_util.cc(59)] Failed to load GLES library: /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so: file too short
[0618/141002.929251:ERROR:viz_main_impl.cc(184)] Exiting GPU process due to errors during initialization

【3.1】Work around the startup error
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# mkdir -p /opt/kibana/data/headless_shell-linux/swiftshader/
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# touch /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# chmod 755 /opt/kibana/data/headless_shell-linux/swiftshader/libGLESv2.so

【3.2】Restart
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# cd /opt/kibana/
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# nohup ./bin/kibana & // start in the background

【3.3】Or start in the background and discard the log output
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# nohup ./bin/kibana > /dev/null 2>&1 &

【3.4】Make kibana start on boot
[root@iZuf6j20vqe3q83v5tjwd0Z kibana]# cd /etc/init.d/
[root@iZuf6j20vqe3q83v5tjwd0Z init.d]# vim kibana
#!/bin/bash
# chkconfig: 2345 98 02
# description: kibana
KIBANA_HOME=/opt/kibana
case $1 in
start) $KIBANA_HOME/bin/kibana &;;
*) echo "require start";;
esac

[root@iZuf6j20vqe3q83v5tjwd0Z init.d]# chmod +x kibana
[root@iZuf6j20vqe3q83v5tjwd0Z init.d]# chkconfig --add kibana
[root@iZuf6j20vqe3q83v5tjwd0Z init.d]# /etc/init.d/kibana start

【4】Access it from a web browser
http://<server-IP>:5601

【x-pack】installation and deployment
【1】Installation and deployment: since elasticsearch 6.3, x-pack is installed by default, so you no longer need to install it yourself
【1.1】Add the following setting on every cluster node (restart as many nodes as you have)
[root@iZuf6j20vqe3q83v5tjwd0Z software]# vim /opt/elasticsearch/config/elasticsearch.yml
xpack.security.enabled: true // append at the end of the file; restart every node in the cluster

[root@iZuf6j20vqe3q83v5tjwd0Z software]# grep -Ev "^#|^$" /opt/elasticsearch/config/elasticsearch.yml // review the effective settings
cluster.name: my-es
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.1.110
http.port: 9200
discovery.seed_hosts: ["192.168.1.110", "192.168.1.120", "192.168.1.130", "192.168.1.140"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3", "node-4"]
xpack.security.enabled: true // the record appended at the end; restart every node in the cluster

[root@iZuf6j20vqe3q83v5tjwd0Z software]# /etc/init.d/elasticsearch stop // stop the elasticsearch node; repeat on every node
[root@iZuf6j20vqe3q83v5tjwd0Z software]# /etc/init.d/elasticsearch start // start the elasticsearch node; repeat on every node

[root@iZuf6j20vqe3q83v5tjwd0Z software]# cd /opt/elasticsearch // enter the elasticsearch home directory

[root@iZuf6j20vqe3q83v5tjwd0Z software]# ./bin/elasticsearch-setup-passwords -h // show the help
Sets the passwords for reserved users

Commands
--------
auto - Uses randomly generated passwords # the system generates random passwords
interactive - Uses passwords entered by a user # you type the passwords yourself

Non-option arguments:
command

Option Description
------ -----------
-h, --help show help
-s, --silent show minimal output
-v, --verbose show verbose output
[elastic@es-node1 bin]$ ./elasticsearch-setup-passwords auto # for the demonstration we let the system generate the passwords
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system,beats_system.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y # answer y

Changed password for user kibana # kibana user and password
PASSWORD kibana = 4VXPRYIVibyAbjugK6Ok

Changed password for user logstash_system # logstash_system user and password
PASSWORD logstash_system = 2m4uVdSzDzpt9OEmNin5

Changed password for user beats_system # beats_system user and password
PASSWORD beats_system = O8VOzAaD3fO6bstCGDyQ

Changed password for user elastic # elastic user and password
PASSWORD elastic = 1TWVMeN8tiBy917thUxq
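Once security is enabled, every request must carry credentials; `curl -u elastic:<password>` simply adds an HTTP Basic `Authorization` header. A sketch of what that header contains, reusing the example password generated above:

```python
import base64

# curl -u user:password sends the header: Authorization: Basic base64(user:password)
user, password = "elastic", "1TWVMeN8tiBy917thUxq"   # the generated password above
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```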

[root@iZuf6j20vqe3q83v5tjwd0Z software]# ./bin/elasticsearch-setup-passwords interactive // or set the passwords yourself; here user elastic gets password 12345678

[root@iZuf6j20vqe3q83v5tjwd0Z bin]# vim /opt/kibana/config/kibana.yml // add or modify the following entries
elasticsearch.username: "elastic"
elasticsearch.password: "12345678"
xpack.security.enabled: true
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"

[root@iZuf6j20vqe3q83v5tjwd0Z bin]# grep -Ev "^#|^$" /opt/kibana/config/kibana.yml // review the effective settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.110:9200"]
kibana.index: ".newkibana"
elasticsearch.username: "elastic"
elasticsearch.password: "12345678"
xpack.security.enabled: true
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"

[root@iZuf6j20vqe3q83v5tjwd0Z bin]# ps -ef | grep kibana // find the kibana process
[root@iZuf6j20vqe3q83v5tjwd0Z bin]# kill -9 <kibana-pid> // <kibana-pid> is the process ID found above
[root@iZuf6j20vqe3q83v5tjwd0Z bin]# cd /opt/kibana // enter the kibana home directory
[root@iZuf6j20vqe3q83v5tjwd0Z bin]# nohup ./bin/kibana > /dev/null 2>&1 & // restart in the background

【2】Bypass the 30-day x-pack trial limit
Replace the x-pack-core-7.1.1.jar file with a pre-patched one; the download is on Baidu Netdisk:

链接:https://pan.baidu.com/s/1yZajXMNUJqNhWJ8Ix5md8w 提取码:5un1
Note: the link above is for version 7.1.1; other versions are not provided.

【2.1】Put the downloaded file in the corresponding directory on your server. I placed the file from Baidu Netdisk in /root/software.
[root@iZuf6j20vqe3q83v5tjwd0Z software]# find / -name x-pack-core-7.1.1.jar
/opt/elasticsearch/modules/x-pack-core/x-pack-core-7.1.1.jar

[root@iZuf6j20vqe3q83v5tjwd0Z software]# rm -rf /opt/elasticsearch/modules/x-pack-core/x-pack-core-7.1.1.jar
[root@iZuf6j20vqe3q83v5tjwd0Z software]# mv x-pack-core-7.1.1.jar /opt/elasticsearch/modules/x-pack-core/

【2.2】Apply for a License on the official website:
https://www.elastic.co/guide/en/elastic-stack-overview/7.1/license-management.html

【2.3】If anything is unclear, the following page walks through the same operation:
https://www.ipyker.com/2019/03/13/elastic-x-pack.html

【logstash】installation and deployment
【1】Download and install
[root@iZuf6j20vqe3q83v5tjwd0Z software]# pwd // show the current directory
/root/software

[root@iZuf6j20vqe3q83v5tjwd0Z software]# curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.1.1.tar.gz
[root@iZuf6j20vqe3q83v5tjwd0Z software]# tar -zxvf logstash-7.1.1.tar.gz -C /opt/

[root@iZuf6j20vqe3q83v5tjwd0Z software]# mv /opt/logstash-7.1.1 /opt/logstash
[root@iZuf6j20vqe3q83v5tjwd0Z software]# cd /opt/logstash/bin/
[root@iZuf6j20vqe3q83v5tjwd0Z bin]# ./logstash -e 'input { stdin{} } output { stdout{} }' // this command starts slowly and may appear to hang for a long time; be patient
Sending Logstash logs to /opt/logstash/logs which is now configured via log4j2.properties
[2019-06-21T18:33:05,235][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2019-06-21T18:33:05,450][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2019-06-21T18:33:06,507][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-21T18:33:06,554][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-21T18:33:06,615][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"15ed32aa-f29a-4e2c-b218-980b547a0bd2", :path=>"/opt/logstash/data/uuid"}
[2019-06-21T18:33:24,662][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x1e7a9630 run>"}
[2019-06-21T18:33:24,851][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2019-06-21T18:33:24,959][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-06-21T18:33:26,315][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-21T18:37:10,372][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2019-06-21T18:37:10,608][FATAL][logstash.runner ] SIGINT received. Terminating immediately..
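The stdin/stdout pipeline above only verifies that the installation works. A real pipeline normally lives in a .conf file; below is a minimal hedged sketch, where the file path, index pattern, credentials, and the elasticsearch address are assumptions to adjust for your environment:

```
# /opt/logstash/config/test.conf -- hypothetical minimal pipeline
input {
  stdin { }                      # read events from the console
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.110:9200"]
    user => "elastic"            # needed once x-pack security is enabled
    password => "12345678"
    index => "logstash-test-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }  # also echo events for debugging
}
```

Such a file could then be started with ./bin/logstash -f /opt/logstash/config/test.conf (hypothetical path).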

【2】Since it is Friday and I am eager to get home, I will stop writing here; instead I am sharing a link with you, given below!

【3】Follow that link to finish setting up logstash as a log collector:
https://www.cnblogs.com/dyh004/p/9638675.html
