ClickHouse Cluster Setup Tutorial - Part 3: Deploying ckman Dependencies

  • prometheus (optional)
  • node_exporter (optional)
  • nacos (>1.4) (optional)
  • zookeeper (>3.6.0, recommended)
  • mysql (required when the persistence policy is set to mysql)
  • jdk1.8+

1. Setting up prometheus

1.1 Download and extract

mkdir -p /opt/module
wget https://github.com/prometheus/prometheus/releases/download/v2.51.0/prometheus-2.51.0.linux-amd64.tar.gz -P /tmp
tar -zxvf /tmp/prometheus-2.51.0.linux-amd64.tar.gz -C /opt/module

1.2 Configure the systemd service

Create the /usr/lib/systemd/system/prometheus.service file.

vim /usr/lib/systemd/system/prometheus.service

Add the following content.

[Unit]
Description=Prometheus
Documentation=https://prometheus.io/
After=network.target

[Service]
Type=simple
ExecStart=/opt/module/prometheus-2.51.0.linux-amd64/prometheus --config.file=/opt/module/prometheus-2.51.0.linux-amd64/prometheus.yml --web.listen-address=0.0.0.0:9090
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
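
After saving the unit file, reload systemd so it picks up the new service. A minimal sketch (enabling the service at boot is optional):

systemctl daemon-reload
systemctl enable prometheus    # optional: start prometheus automatically at boot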

1.3 prometheus configuration (optional; does not affect ckman core functionality)

Modify the /opt/module/prometheus-2.51.0.linux-amd64/prometheus.yml file and add the following content.

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]
      
  - job_name: 'node_exporter'
    scrape_interval: 10s
    static_configs:
      - targets: ['192.168.145.103:9100', '192.168.145.104:9100', '192.168.145.105:9100']
 
  - job_name: 'clickhouse'
    scrape_interval: 10s
    static_configs:
      - targets: ['192.168.145.103:9363', '192.168.145.104:9363', '192.168.145.105:9363']
 
  - job_name: 'zookeeper'
    scrape_interval: 10s
    static_configs:
      - targets: ['192.168.145.103:7070', '192.168.145.104:7070', '192.168.145.105:7070']
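
Before restarting prometheus, the edited configuration can be validated with promtool, which ships in the same tarball (paths below assume the layout from section 1.1):

cd /opt/module/prometheus-2.51.0.linux-amd64
./promtool check config ./prometheus.yml    # prints SUCCESS when the YAML is valid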

1.4 prometheus start/stop commands

1.4.1 Start prometheus
systemctl start prometheus
1.4.2 Stop prometheus
systemctl stop prometheus
1.4.3 Restart prometheus
systemctl restart prometheus
1.4.4 Check prometheus status
systemctl status prometheus
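
To confirm the server is actually serving requests after a start or restart, prometheus exposes a built-in health endpoint on the port configured above:

curl http://localhost:9090/-/healthy    # returns a short "healthy" message when the server is up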

2. Setting up node_exporter

2.1 Download and extract

mkdir -p /opt/module/
wget https://github.com/prometheus/node_exporter/releases/download/v1.5.0/node_exporter-1.5.0.linux-amd64.tar.gz -P /tmp
tar -zxvf /tmp/node_exporter-1.5.0.linux-amd64.tar.gz -C /opt/module/

2.2 Configure the systemd service

Create the /usr/lib/systemd/system/node_exporter.service file.

vim /usr/lib/systemd/system/node_exporter.service

Add the following content.

[Unit]
Description=node_exporter
Documentation=https://github.com/prometheus/node_exporter
After=network.target

[Service]
Type=simple
ExecStart=/opt/module/node_exporter-1.5.0.linux-amd64/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
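
As with prometheus, reload systemd after creating the unit file; node_exporter needs no configuration file, so it can also be enabled at boot right away. A minimal sketch:

systemctl daemon-reload
systemctl enable node_exporter    # optional: start node_exporter automatically at boot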

2.3 node monitoring configuration (optional; does not affect ckman core functionality)

node_exporter collects system-level metrics from the machine a ClickHouse node runs on, so it must be installed on every machine that hosts a ClickHouse node. It listens on port 9100 by default.
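
A quick way to confirm the exporter is reachable on a node (assuming the default port 9100):

curl -s http://localhost:9100/metrics | head -n 5    # should print the first few metric lines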

2.4 node_exporter start/stop commands

2.4.1 Start node_exporter
systemctl start node_exporter
2.4.2 Stop node_exporter
systemctl stop node_exporter
2.4.3 Restart node_exporter
systemctl restart node_exporter
2.4.4 Check node_exporter status
systemctl status node_exporter

3. Setting up zookeeper

3.1 Standalone setup (can be skipped)

If you just want to try the features and do not want to build a zookeeper cluster, you can set up a standalone zookeeper instance.

wget --no-check-certificate https://archive.apache.org/dist/zookeeper/zookeeper-3.8.1/apache-zookeeper-3.8.1-bin.tar.gz -P /tmp
mkdir -p /opt/soft/zookeeper
tar -zxvf /tmp/apache-zookeeper-3.8.1-bin.tar.gz -C /opt/soft/zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
cp ./conf/zoo_sample.cfg ./conf/zoo.cfg
sed -i "s|^dataDir=.*|dataDir=./tmp/zookeeper|" ./conf/zoo.cfg
sed -i "s|^clientPort=.*|clientPort=12181|" ./conf/zoo.cfg

3.2 Cluster setup

3.2.1 Operations on node hadoop101
mkdir -p /opt/soft/zookeeper
wget --no-check-certificate https://archive.apache.org/dist/zookeeper/zookeeper-3.8.1/apache-zookeeper-3.8.1-bin.tar.gz -P /tmp
tar -zxvf /tmp/apache-zookeeper-3.8.1-bin.tar.gz -C /opt/soft/zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
cp ./conf/zoo_sample.cfg ./conf/zoo.cfg
sed -i "s|^dataDir=.*|dataDir=./tmp/zookeeper|" ./conf/zoo.cfg
sed -i "s|^clientPort=.*|clientPort=12181|" ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo "server.1=hadoop101:12888:13888" >> ./conf/zoo.cfg
echo "server.2=hadoop102:12888:13888" >> ./conf/zoo.cfg
echo "server.3=hadoop103:12888:13888" >> ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo 'metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider' >> ./conf/zoo.cfg
echo 'metricsProvider.httpPort=7070' >> ./conf/zoo.cfg
echo 'admin.enableServer=true' >> ./conf/zoo.cfg
echo 'admin.serverPort=18080' >> ./conf/zoo.cfg
./bin/zkServer.sh start
echo 1 > ./tmp/zookeeper/myid
./bin/zkServer.sh restart
3.2.2 Operations on node hadoop102
mkdir -p /opt/soft/zookeeper
wget --no-check-certificate https://archive.apache.org/dist/zookeeper/zookeeper-3.8.1/apache-zookeeper-3.8.1-bin.tar.gz -P /tmp
tar -zxvf /tmp/apache-zookeeper-3.8.1-bin.tar.gz -C /opt/soft/zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
cp ./conf/zoo_sample.cfg ./conf/zoo.cfg
sed -i "s|^dataDir=.*|dataDir=./tmp/zookeeper|" ./conf/zoo.cfg
sed -i "s|^clientPort=.*|clientPort=12181|" ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo "server.1=hadoop101:12888:13888" >> ./conf/zoo.cfg
echo "server.2=hadoop102:12888:13888" >> ./conf/zoo.cfg
echo "server.3=hadoop103:12888:13888" >> ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo 'metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider' >> ./conf/zoo.cfg
echo 'metricsProvider.httpPort=7070' >> ./conf/zoo.cfg
echo 'admin.enableServer=true' >> ./conf/zoo.cfg
echo 'admin.serverPort=18080' >> ./conf/zoo.cfg
./bin/zkServer.sh start
echo 2 > ./tmp/zookeeper/myid
./bin/zkServer.sh restart
3.2.3 Operations on node hadoop103
mkdir -p /opt/soft/zookeeper
wget --no-check-certificate https://archive.apache.org/dist/zookeeper/zookeeper-3.8.1/apache-zookeeper-3.8.1-bin.tar.gz -P /tmp
tar -zxvf /tmp/apache-zookeeper-3.8.1-bin.tar.gz -C /opt/soft/zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
cp ./conf/zoo_sample.cfg ./conf/zoo.cfg
sed -i "s|^dataDir=.*|dataDir=./tmp/zookeeper|" ./conf/zoo.cfg
sed -i "s|^clientPort=.*|clientPort=12181|" ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo "server.1=hadoop101:12888:13888" >> ./conf/zoo.cfg
echo "server.2=hadoop102:12888:13888" >> ./conf/zoo.cfg
echo "server.3=hadoop103:12888:13888" >> ./conf/zoo.cfg
echo >> ./conf/zoo.cfg
echo 'metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider' >> ./conf/zoo.cfg
echo 'metricsProvider.httpPort=7070' >> ./conf/zoo.cfg
echo 'admin.enableServer=true' >> ./conf/zoo.cfg
echo 'admin.serverPort=18080' >> ./conf/zoo.cfg
./bin/zkServer.sh start
echo 3 > ./tmp/zookeeper/myid
./bin/zkServer.sh restart

3.3 Start zookeeper

3.3.1 Standalone startup
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh start

Check whether the startup succeeded.

./bin/zkServer.sh status
3.3.2 Cluster startup
3.3.2.1 Operations on node hadoop101
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh start
3.3.2.2 Operations on node hadoop102
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh start
3.3.2.3 Operations on node hadoop103
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh start
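
Once all three nodes are up, each should report its role: in a healthy three-node ensemble one node runs as leader and the other two as followers. A check over ssh (assuming passwordless ssh to hadoop101-103 and that JAVA_HOME is available in non-interactive shells):

for host in hadoop101 hadoop102 hadoop103; do
  ssh $host "cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin && ./bin/zkServer.sh status"
done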

3.4 zookeeper commands

3.4.1 Start zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh start
3.4.2 Stop zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh stop
3.4.3 Restart zookeeper
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh restart
3.4.4 Check zookeeper status
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkServer.sh status
3.4.5 Enter the zookeeper client
cd /opt/soft/zookeeper/apache-zookeeper-3.8.1-bin
./bin/zkCli.sh -server localhost:12181

3.5 zookeeper monitoring configuration (optional; does not affect ckman core functionality)

A zookeeper cluster is a key component of a distributed ClickHouse cluster. Because ClickHouse handles very large data volumes, it is best to deploy a dedicated zookeeper cluster for ClickHouse rather than sharing one with other services, to avoid putting too much pressure on zookeeper.

Modify the zoo.cfg configuration file and add the following settings (the cluster setup in section 3.2 already appended them).

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7070  # metrics port scraped by prometheus
admin.enableServer=true
admin.serverPort=18080   # AdminServer port exposing four-letter commands such as mntr; supported in 3.5.0 and later

After configuring, visit http://localhost:18080/commands/mntr and check that the page is reachable.
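
Both endpoints can also be checked from the command line; the ports below follow the cluster configuration from section 3.2 (7070 for the prometheus metrics provider, 18080 for the AdminServer):

curl -s http://localhost:7070/metrics | head -n 5     # prometheus-format zookeeper metrics
curl -s http://localhost:18080/commands/mntr          # JSON output of the mntr command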

4. Setting up mysql

  • mysql 5.7.44 automated installation tutorial
  • mysql 5.7.37 automated installation tutorial

5. Install the JDK

Run these steps on all three nodes: hadoop101, hadoop102, and hadoop103.

Download from: https://www.oracle.com/java/technologies/downloads/#java8

After downloading, upload the package to the /tmp directory.

Then run the following commands to create the directory, extract the archive, and set the system-wide environment variables.

mkdir -p /opt/module
tar -zxvf /tmp/jdk-8u391-linux-x64.tar.gz -C /opt/module/
echo >> /etc/profile
echo '#JAVA_HOME' >> /etc/profile
echo "export JAVA_HOME=/opt/module/jdk1.8.0_391" >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
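
To verify the installation in the current shell (version strings assume the jdk-8u391 package used above):

java -version          # should report java version "1.8.0_391"
echo $JAVA_HOME        # should print /opt/module/jdk1.8.0_391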
