An Introduction to Docker Compose
What is Docker Compose?
Docker Compose is a tool for defining and managing multiple Docker containers at once.
In more detail:
- Every container defined and started by Compose acts as a service.
- Compose can define and start multiple services, and these services typically cooperate with one another.
How it is managed:
- The application's services are configured in a YAML file.
- A single command (docker-compose up) creates and starts every service defined in that file.
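For example, a minimal docker-compose.yml for a hypothetical two-service application might look like this (the service names and images are only illustrative):

```yaml
version: '3'
services:
  web:
    image: nginx      # illustrative image
    ports:
      - "8080:80"
  cache:
    image: redis      # illustrative image
```

Running docker-compose up in the directory containing this file then starts both services together.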
How Docker Compose Works
Installing Docker Compose
Docker for Mac and Docker for Windows ship with docker-compose.
On Linux it has to be installed separately:
Step 1:
sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
Step 2:
sudo chmod +x /usr/local/bin/docker-compose
Check the installed version in a terminal with docker-compose --version:
docker-compose --version
The version installed here is 1.21.2; a newer release has very likely appeared by the time you read this, so it is recommended to switch to the latest version. Check the releases page for the latest version number.
For other installation methods, see:
https://docs.docker.com/compose/install/#install-compose
Docker Compose CLI
Explore it with docker-compose --help:
docker-compose --help
or consult the official documentation:
https://docs.docker.com/compose/reference/overview/
Comparing the two shows that many Docker Compose CLI commands mirror those of the Docker client CLI. The main difference is that the former can run and manage multiple containers at once, while the latter manages only one container at a time.
Understanding the Docker Compose File
The Docker Compose file format has several versions. They are largely backward compatible, though a small number of options from older versions are absent in newer ones.
https://docs.docker.com/compose/compose-file/
At the very top of docker-compose.yml, the version key must declare which file-format version the file uses.
Overview of Top-Level Docker Compose File Options
Top-level Docker Compose file options:
- version: the Compose file-format version (important)
- services: defines multiple services and configures their startup options
- volumes: declares or creates data volumes shared by multiple services
- networks: defines networks shared by multiple services
- configs: declares configuration files to be used by the services
- secrets: declares secret and password files to be used by the services
- x-***: custom extension fields, mainly used to reuse common configuration
More detailed options:
https://docs.docker.com/compose/compose-file/#service-configuration-reference
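Putting the options above together, the skeleton of a Compose file looks roughly like this (every name after a key is an illustrative placeholder, not part of the format):

```yaml
version: '3.6'
services:
  my-service:            # illustrative service name
    image: some-image
volumes:
  my-volume:
networks:
  my-network:
configs:
  my-config:
    file: ./my-config.txt
secrets:
  my-secret:
    file: ./my-secret.txt
x-common:                # custom extension field, ignored by Compose itself
  restart: always
```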
Docker Compose File Reference Example
Docker Compose Case 1: Building a Small Web Service Project
The project structure is as follows.
If you are unsure which image versions are available, look them up in the docker-library docs on GitHub:
https://github.com/docker-library/docs
Steps:
- Build a small flask web project.
- Write a Dockerfile that builds an image for the project's environment.
- Write the docker-compose.yaml configuration file and start the project.
Create the project's working directory:
mkdir workspace
cd workspace
mkdir case1-flask-web
cd case1-flask-web
mkdir flask-web-code
Write the project code:
vi flask-web-code/app.py
The code is as follows:
# encoding=utf-8
import time

import redis
from flask import Flask

app = Flask(__name__)
# The host here is the name of the redis service in docker-compose.yaml
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    """Count page visits with redis."""
    retries = 5
    # When redis restarts, there may be a short window during which it
    # cannot be reached; this loop retries during that window, 5 times
    # by default.
    while True:
        try:
            # redis incr: if "hits" exists its value is incremented by 1,
            # otherwise the key is created with value 1
            return cache.incr("hits")
        except redis.exceptions.ConnectionError as e:
            if retries == 0:
                raise e
            retries -= 1
            time.sleep(0.5)


@app.route("/")
def main():
    count = get_hit_count()
    return "Welcome! This site has been visited {} times\n".format(count)


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
app.py
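The retry loop in get_hit_count can be exercised in isolation by swapping in a fake cache that fails a few times before succeeding. This is only an illustrative sketch: FlakyCache and the local ConnectionError are made-up stand-ins for the real redis client and redis.exceptions.ConnectionError.

```python
import time


class ConnectionError(Exception):
    """Stand-in for redis.exceptions.ConnectionError."""


class FlakyCache:
    """Fake cache that raises ConnectionError for the first few calls."""

    def __init__(self, failures):
        self.failures = failures
        self.hits = 0

    def incr(self, key):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("cache unavailable")
        self.hits += 1
        return self.hits


def get_hit_count(cache, retries=5, delay=0.0):
    # Same shape as the app's retry loop: retry a few times while the
    # cache is restarting, then re-raise the error once retries run out.
    while True:
        try:
            return cache.incr("hits")
        except ConnectionError:
            if retries == 0:
                raise
            retries -= 1
            time.sleep(delay)


print(get_hit_count(FlakyCache(failures=3)))  # prints 1: survives 3 failures
```

With 5 retries the counter survives up to 5 consecutive connection failures; a sixth failure re-raises the exception, just as in the app code.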
Create the requirements file:
vi flask-web-code/requirements.txt
Its content:
redis
flask
requirements.txt
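As a side note, it is usually safer to pin dependency versions in requirements.txt so that image builds stay reproducible; the exact version numbers below are only illustrative:

```
flask==1.0.2
redis==2.10.6
```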
Write the Dockerfile:
vi Dockerfile
Its content:
# flask web app v1.0
# Builds a small flask-based web project that counts page visits

# Step 1: start from a python 3.6 base image
FROM python:3.6-alpine
LABEL Description="Image for a simple flask-based web app" Author="Itcast" Version="1.0"

# Step 2: copy the project code into the image
COPY ./flask-web-code /code

# Step 3: install the project dependencies (flask, redis)
WORKDIR /code
RUN pip install -r requirements.txt

CMD ["python", "app.py"]
Build the image. In the directory containing the Dockerfile, run:
docker build . -t my-flask-image
List the built images:
docker images
Write the docker compose file.
In the case1-flask-web directory, create docker-compose.yaml with the following content:
version: '3.6'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    container_name: flask_web
    networks:
      - web
  redis:
    image: redis
    volumes:
      - redis-data:/data
    container_name: redis
    networks:
      - web
volumes:
  redis-data:
    driver: local
networks:
  web:
    driver: "bridge"
Check that the docker compose configuration is valid:
docker-compose config
Start the containers:
docker-compose up -d
Access the service:
curl 127.0.0.1:5000
Stop and remove the containers:
docker-compose down
Docker Compose Case 2: Building a Single-Host ELK Stack
For an introduction to how ELK works, see the official documentation:
https://www.elastic.co/guide/index.html
Steps:
Write the single-host docker-compose.yaml file (the ELK image registry is here):
https://www.docker.elastic.co/
Pull the images:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.4
docker pull docker.elastic.co/kibana/kibana:6.2.4
docker pull docker.elastic.co/logstash/logstash:6.2.4
The structure is as follows.
Copy the official elasticsearch compose example and save it as docker-compose.yaml:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Start the environment with docker-compose up:
docker-compose up -d
View the compose logs:
docker-compose logs
The logs show an error: the kernel's virtual memory setting vm.max_map_count is too low for Elasticsearch.
This requirement is documented in the official Elasticsearch docs.
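Following the Elasticsearch documentation, the fix on Linux is to raise vm.max_map_count to at least 262144, e.g.:

```shell
# take effect immediately (lost on reboot)
sudo sysctl -w vm.max_map_count=262144

# to make it permanent, add this line to /etc/sysctl.conf:
# vm.max_map_count=262144
```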
Once the setting is fixed, start the environment again:
docker-compose down
docker-compose up -d
Check the logs again:
docker-compose logs -f
Everything is normal now.
Access the Elasticsearch service:
curl 127.0.0.1:9200
Add logstash to the compose file:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    container_name: logstash
    networks:
      - esnet
The full file now reads:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    container_name: logstash
    networks:
      - esnet
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Start the containers:
docker-compose up -d
Add Kibana to the compose file:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
The full file now reads:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    container_name: logstash
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Start the compose project:
docker-compose up -d
Then open the following address in a browser:
http://192.168.1.112:5601
Next, add one more logstash service to the compose file. At this point there are two elasticsearch services, two logstash services, and one kibana service; the compose file now looks like this:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch2
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    container_name: logstash
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
  logstash2:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    container_name: logstash2
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - esnet
    depends_on:
      - elasticsearch
      - elasticsearch2
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
Docker Compose: Building a Multi-Host ELK Stack
Introducing Swarm
How clustered Docker Compose works
Steps:
- Use docker swarm to set up a cluster with multiple docker nodes.
- Write the clustered ELK docker-compose.yaml file.
- Deploy the clustered ELK environment with docker stack deploy.
Official documentation:
https://docs.docker.com/engine/reference/commandline/swarm/
Initialize the swarm:
docker swarm init
Save the join command it prints:
docker swarm join --token SWMTKN-1-3kulbpf51q5gmtl8hoa2bow2x9ea1pcx6ficg1w2tntrami4f7-cqgtgc5vf8b449sio740pqrh9 192.168.1.112:2377
To add the other server to the cluster, simply run the command above on it.
List the nodes:
docker node ls
swarm-elk.yaml
version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == manager
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch"
    volumes:
      - esdata2:/usr/share/elasticsearch/data
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == worker
  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    networks:
      - esnet
    deploy:
      replicas: 2
  logstash2:
    image: docker.elastic.co/logstash/logstash:6.2.4
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    networks:
      - esnet
    deploy:
      replicas: 2
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    ports:
      - "5601:5601"
    networks:
      - esnet
    deploy:
      placement:
        constraints:
          - node.role == manager
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
networks:
  esnet:
    driver: "overlay"
Start the stack with docker stack deploy. Its options can be listed with:
docker stack deploy -h
docker stack deploy -c swarm-elk.yaml elk
List all services:
docker service ls
View the logs of a single service:
docker service logs elk_elasticsearch -f
Remove all the deployed services:
docker stack rm elk