Docker Images
1. What is an image?
An image is a lightweight, executable, standalone software package. It bundles a runtime environment together with the software built on top of it and contains everything needed to run that software: code, runtime, libraries, environment variables, and configuration files. Once an application and its environment have been packaged into an image, it can be run directly.
2. How image layering works
A Docker image is actually made up of a stack of filesystem layers; this layered filesystem is a UnionFS (union file system). A CentOS installation in a virtual machine is usually several GB, so why is the CentOS image here only about 200 MB?
For a stripped-down OS, the rootfs can be very small: it only needs to contain the most basic commands, tools, and libraries, because the container uses the host's kernel directly and only has to provide its own rootfs. It follows that for different Linux distributions the bootfs is essentially the same while the rootfs differs, so different distributions can share the bootfs. This is also why a virtual machine starts in minutes while a container starts in seconds!
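A quick way to see that a container reuses the host kernel instead of shipping its own (a small sketch; the centos tag is just an example):

# kernel version on the host
uname -r
# kernel version seen inside a CentOS container -- it is the same,
# because the container only supplies a rootfs on top of the host's kernel
docker run --rm centos uname -r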
3. Understanding the layers
[root@ecs-x-large-2-linux-20200305213344 ~]# docker pull redis:5.0
Trying to pull repository docker.io/library/redis ...
5.0: Pulling from docker.io/library/redis
45b42c59be33: Already exists
5ce2e937bf62: Already exists
2a031498ff58: Already exists
ec50b60c87ea: Pull complete
2bf0c804a5c0: Pull complete
6a3615492950: Pull complete
Digest: sha256:6ba62effb31d8d74e6e2dec4b7ef9c8985e7fcc85c4f179e13f622f5785a4135
Status: Downloaded newer image for docker.io/redis:5.0
Why do Docker images use this layered structure?
The biggest benefit is resource sharing. If several images are built from the same base image, the host only has to keep one copy of that base image on disk and load one copy into memory, and that single copy serves all the containers; every layer of an image can be shared in this way.
Summary:
Every Docker image starts from a base image layer; whenever you modify it or add new content, a new image layer is created on top of the current one. Image layers are read-only. When a container starts, a new writable layer is loaded on top of the image: this is what we usually call the container layer, and everything below it is the image layers.
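The writable container layer is easy to observe with docker diff, which lists what a container has changed relative to its read-only image layers (a small sketch; the container name test01 is just an example):

# start a container and create a file inside it
docker run -it --name test01 centos /bin/bash
touch /hello.txt
exit
# show what was added or changed in the writable container layer (A = added, C = changed)
docker diff test01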
Viewing the layer information of an image
Command: docker inspect <image ID or image name>
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect d00afcde654e
[
  {
    "Id": "sha256:d00afcde654e3125384d52fb872c88986d2046fa598a12abcee52ff0d98e7562",
    "RepoTags": [
      "docker.io/redis:5.0"
    ],
    "RepoDigests": [
      "docker.io/redis@sha256:6ba62effb31d8d74e6e2dec4b7ef9c8985e7fcc85c4f179e13f622f5785a4135"
    ],
    "Parent": "",
    "Comment": "",
    "Created": "2021-03-02T23:29:46.396151327Z",
    "Container": "6a7820655f2592fdc2b254036170652520beb98f79a41e6aedc17987ccec3829",
    "ContainerConfig": {
      "Hostname": "6a7820655f25",
      "Domainname": "",
      "User": "",
      "AttachStdin": false,
      "AttachStdout": false,
      "AttachStderr": false,
      "ExposedPorts": { "6379/tcp": {} },
      "Tty": false,
      "OpenStdin": false,
      "StdinOnce": false,
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "GOSU_VERSION=1.12",
        "REDIS_VERSION=5.0.12",
        "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.12.tar.gz",
        "REDIS_DOWNLOAD_SHA=7040eba5910f7c3d38f05ea5a1d88b480488215bdbd2e10ec70d18380108e31e"
      ],
      "Cmd": [
        "/bin/sh",
        "-c",
        "#(nop) ",
        "CMD [\"redis-server\"]"
      ],
      "Image": "sha256:f43399b52be67a391b4bf53e210c55002a2bce5e4fa5f1021d4dc9725ec7f537",
      "Volumes": { "/data": {} },
      "WorkingDir": "/data",
      "Entrypoint": [ "docker-entrypoint.sh" ],
      "OnBuild": null,
      "Labels": {}
    },
    "DockerVersion": "19.03.12",
    "Author": "",
    "Config": {
      "Hostname": "",
      "Domainname": "",
      "User": "",
      "AttachStdin": false,
      "AttachStdout": false,
      "AttachStderr": false,
      "ExposedPorts": { "6379/tcp": {} },
      "Tty": false,
      "OpenStdin": false,
      "StdinOnce": false,
      "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "GOSU_VERSION=1.12",
        "REDIS_VERSION=5.0.12",
        "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.12.tar.gz",
        "REDIS_DOWNLOAD_SHA=7040eba5910f7c3d38f05ea5a1d88b480488215bdbd2e10ec70d18380108e31e"
      ],
      "Cmd": [ "redis-server" ],
      "Image": "sha256:f43399b52be67a391b4bf53e210c55002a2bce5e4fa5f1021d4dc9725ec7f537",
      "Volumes": { "/data": {} },
      "WorkingDir": "/data",
      "Entrypoint": [ "docker-entrypoint.sh" ],
      "OnBuild": null,
      "Labels": null
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Size": 98358570,
    "VirtualSize": 98358570,
    "GraphDriver": {
      "Name": "overlay2",
      "Data": {
        "LowerDir": "/var/lib/docker/overlay2/343be33bc297acdf8bc2b57b335c025ea76b8d1263548ba269c0aefb81aaf28d/diff:/var/lib/docker/overlay2/3302ce8415cd3a8a1e1e9753eebbb38df5b15cc02fef109e30be41f4310ee810/diff:/var/lib/docker/overlay2/44c8b45db6fd63960703e604f43a4acc5633f09a3a91a8d7263ad2f9bfd0d038/diff:/var/lib/docker/overlay2/5eb368e142c6079aa1f507149216281ca79b5df08ba19bad51390d74dfbf3c1f/diff:/var/lib/docker/overlay2/219cf0492ba08d03dc4f2a5649ec1124fff82ebe22c6f9a0a26ccf303be0e0d1/diff",
        "MergedDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/merged",
        "UpperDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/diff",
        "WorkDir": "/var/lib/docker/overlay2/d38f31592715a55459f4556623786c5878014bf8ffdcc1e88506069e32ba75dc/work"
      }
    },
    "RootFS": {
      "Type": "layers",
      "Layers": [        # the image's layer information
        "sha256:9eb82f04c782ef3f5ca25911e60d75e441ce0fe82e49f0dbf02c81a3161d1300",
        "sha256:f973e3e0e07c6e9f9418a6dd0c453cd70c7fb87a0826172275883ab4bdb61bf4",
        "sha256:c16b4f3a3f99ebbcd59795b54faf4cdf2e00ee09b85124fda5d0746d64237ca6",
        "sha256:01b7eeecc774b7669892f89fc8b84eea781263448978a411f0f429b867410fc5",
        "sha256:f2df42e57d5eef289656ef8aad072d2828a61e93833e2928a789a88bc2bc1cbc",
        "sha256:b537eb7339bcbff729ebdc63a0f910b39ae3d5540663a74f55081b62e92f66e3"
      ]
    }
  }
]
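If you only care about the layer list, docker inspect can filter the output with a Go template (a small sketch):

# print only the RootFS layer digests of the redis:5.0 image
docker inspect --format '{{json .RootFS.Layers}}' redis:5.0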
Container Data Volumes
1. What is a data volume?
Docker's philosophy is to package the application and its environment into an image. But if a container is deleted, the data inside it is lost. We need to persist container data and share data between containers, and that is exactly what volumes are for: they keep container data persistent by mounting a directory inside the container onto a directory on the host. From then on we only need to edit files on the host, and the changes are automatically visible inside the container.
2. Using data volumes
Method 1: mount a specified host path (bind mount)
docker run -it -v <host dir>:<container dir> -p <host port>:<container port>

#1. Start a centos container with the mount
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -it -v /home/ceshi:/home 300e315adb2f /bin/bash

#2. Check the container's mount information
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect 242dbee6c4a9
},
"Mounts": [
    {
        "Type": "bind",
        "Source": "/home/ceshi",
        "Destination": "/home",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],

#3. Test that file data stays in sync
#Create a file on the host
[root@ecs-x-large-2-linux-20200305213344 ~]# cd /home/ceshi/
[root@ecs-x-large-2-linux-20200305213344 ceshi]# echo "11111" > fanxiang.java
[root@ecs-x-large-2-linux-20200305213344 ceshi]# ls
fanxiang.java
[root@ecs-x-large-2-linux-20200305213344 ceshi]# cat fanxiang.java
11111

#Check whether the file shows up in the container's directory
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker exec -it 242dbee6c4a9 /bin/bash
[root@242dbee6c4a9 /]# ls
bin dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
[root@242dbee6c4a9 /]# cd home/
[root@242dbee6c4a9 home]# ls
fanxiang.java
[root@242dbee6c4a9 home]# cat fanxiang.java
11111

#Modify the file inside the container and check whether the change syncs back to the host
[root@242dbee6c4a9 home]# vi fanxiang.java
[root@242dbee6c4a9 home]# cat fanxiang.java
11111
22222

#Check the file on the host
[root@ecs-x-large-2-linux-20200305213344 ceshi]# cat /home/ceshi/fanxiang.java
11111
22222
Method 2: named volume mount
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker run -d --name nginx02 -P -v nginx-fanxiang:/etc/nginx 35c43ace9216
2cbc89399189416b4a4c04e1d20fc945eb9bffff3e3f400d4d7cacb45701281a

#List the volumes that were created
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker volume ls
DRIVER              VOLUME NAME
local               bb91e16d78d66b188bba86c8f5646b10c93b955953695737b1162fca0fbef279
local               nginx-fanxiang        #the named volume

#Show the volume's details
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker inspect nginx-fanxiang
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/nginx-fanxiang/_data",
        "Name": "nginx-fanxiang",
        "Options": {},
        "Scope": "local"
    }
]

#When no host directory is specified, every volume lives under /var/lib/docker/volumes/xxxx/_data
#If a host directory is specified (a bind mount), it will not show up in docker volume ls.
Method 3: anonymous volume mount
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker run -d --name nginx01 -P -v /etc/nginx 35c43ace9216
5c0d290079eb097bdc50a8bc66cc4a044fc35ae7826f7c2e934c0bd86f9b8544

#List the volumes that were created -- an anonymous volume only gets a hash for a name
[root@ecs-x-large-2-linux-20200305213344 ceshi]# docker volume ls
DRIVER              VOLUME NAME
local               bb91e16d78d66b188bba86c8f5646b10c93b955953695737b1162fca0fbef279
Summary
# Three kinds of mounts: anonymous mount, named mount, and specified-path (bind) mount
-v <container path>                  # anonymous mount
-v <volume name>:<container path>    # named mount
-v /<host path>:<container path>     # specified-path (bind) mount, not visible in docker volume ls

# Append :ro or :rw to the container path to change the read/write permission
ro    # readonly  -- read-only
rw    # readwrite -- readable and writable
docker run -d -P --name nginx05 -v juming:/etc/nginx:ro nginx
docker run -d -P --name nginx05 -v juming:/etc/nginx:rw nginx

# ro means the path can only be modified from the host; the container itself cannot write to it.
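A quick way to confirm that the ro flag really blocks writes from inside the container (a small sketch; the container name nginx-ro is just an example, juming is the named volume used above):

# mount the named volume read-only and try to write to it from inside the container
docker run -d -P --name nginx-ro -v juming:/etc/nginx:ro nginx
docker exec nginx-ro touch /etc/nginx/test.conf
# expected: an error such as "Read-only file system", because only the host side may change the data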
Data volume containers
#Option: --volumes-from

#Hands-on
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx01 -v /home/ceshimu /home/ceshi:/var 35c43ace9216
e7039f7e2285f4280ed097fce518832c4ddbaac9de7cf803ba0e3a68cd559616
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx02 --volumes-from nginx01 35c43ace9216
cae26f23da995802cf827b247a1b85364e925716a312c215a01729a8526be11b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d -P --name nginx03 --volumes-from nginx01 35c43ace9216
65d9eb1bd803a24fe16f22c65bcb8d1ea2ab300833801e11e7d334d1014ea422

#All three containers now share the /home/ceshimu directory, so files and data are shared between them.
#Configuration can be passed between containers this way; a data volume lives until no container uses it anymore.
#Once the data has been persisted to the host, however, that local copy is never deleted.
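To verify that the volume really is shared, create a file through one container and read it through another (a small sketch, assuming the shared mount point is /home/ceshimu as in the comment above; shared.txt is just an example file):

# write a file via nginx01 and read it back via nginx02
docker exec nginx01 sh -c 'echo hello > /home/ceshimu/shared.txt'
docker exec nginx02 cat /home/ceshimu/shared.txt    # prints: hello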
Dockerfile
1. Introduction to Dockerfile
A Dockerfile is the file used to build an image; it is essentially a script of command-line instructions. The build steps are:
- write a Dockerfile
- build it into an image with docker build
- run the image with docker run
- publish the image to Docker Hub with docker push
Many official images are minimal base images with a lot of functionality missing, so in practice we often build our own images with a Dockerfile.
2. The Dockerfile build process
Basic rules (see the small example after this list):
- every reserved keyword is an instruction and must be written in UPPERCASE;
- instructions are executed from top to bottom;
- # marks a comment;
- every instruction creates and commits a new image layer;
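A tiny sketch that illustrates these rules (the file content and the tag myhello:1.0 are just examples): each uppercase instruction becomes one layer, which you can list afterwards with docker history.

# Dockerfile
FROM centos
# each instruction below creates and commits one new layer
RUN echo "layer one" > /one.txt
RUN echo "layer two" > /two.txt
CMD ["cat", "/one.txt", "/two.txt"]

# build it and list the resulting layers, one per instruction
docker build -t myhello:1.0 .
docker history myhello:1.0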
A Dockerfile is really aimed at developers: whenever we want to release a project as an image, we write a Dockerfile, and the file itself is quite simple.
Dockerfile: the build file that defines every step; the source code of the image.
Docker image: the image produced by building the Dockerfile; the artifact that is ultimately published and run.
Docker container: a container is simply an image that has been started to provide a service.
Common Dockerfile instructions
FROM         # the base image; everything is built starting from here
MAINTAINER   # who wrote the image: name + email
RUN          # commands to run while the image is being built
ADD          # add content to the image, e.g. a tomcat tarball; compressed archives are extracted automatically
WORKDIR      # the image's working directory
VOLUME       # directories to mount as volumes
EXPOSE       # declare the ports the image exposes
CMD          # the command to run when the container starts; only the last CMD takes effect, and it can be replaced
ENTRYPOINT   # the command to run when the container starts; arguments can be appended to it
ONBUILD      # trigger instruction, run when another Dockerfile builds FROM this image
COPY         # similar to ADD: copy files into the image
ENV          # set environment variables during the build
Hands-on test
#1. Write our own centos image
vim dockerfile

FROM centos
MAINTAINER fanxiang<12345@163.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
EXPOSE 80
CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash

#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile -t mycentos:0.1.1 .
Sending build context to Docker daemon 24.06 kB
Step 1/9 : FROM centos
 ---> 300e315adb2f
Step 2/9 : MAINTAINER fanxiang<12345@163.com>
 ---> Running in c194f8179cc6
 ---> 49f504155f74
Removing intermediate container c194f8179cc6
Step 3/9 : ENV MYPATH /usr/local
 ---> Running in 49486765253a
 ---> 72d62d727c0c
Removing intermediate container 49486765253a
Step 4/9 : WORKDIR $MYPATH
 ---> a2830ad521ca
Removing intermediate container a508b7e1f11c
Step 5/9 : RUN yum -y install vim
 ---> Running in 9b78249d5c31

#3. Run the built image as a container
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -it ba7d934dc97e /bin/bash
[root@a60302927139 local]# pwd
/usr/local
Comparing CMD and ENTRYPOINT
CMD         # the command to run when the container starts; only the last CMD takes effect, and it can be replaced
ENTRYPOINT  # the command to run when the container starts; arguments can be appended to it

#Hands-on
#Test CMD
#1. Write the Dockerfile
FROM centos
CMD ["ls","-a"]

#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile-cmd -t centos-cmd:1.0.0 .
Sending build context to Docker daemon 25.09 kB
Step 1/2 : FROM centos
 ---> 300e315adb2f
Step 2/2 : CMD ls -a
 ---> Running in 6d49ee474c06
 ---> 2c8c1dc0ba19
Removing intermediate container 6d49ee474c06
Successfully built 2c8c1dc0ba19

#3. Run the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run 2c8c1dc0ba19
.  ..  .dockerenv  bin  dev  etc  home  lib  lib64

#4. Append -l, hoping to turn the command into ls -al
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run 2c8c1dc0ba19 -l
container_linux.go:235: starting container process caused "exec: \"-l\": executable file not found in $PATH"
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "exec: \"-l\": executable file not found in $PATH".

#With CMD, the appended -l replaces CMD ["ls","-a"] entirely, so Docker tries to run just "-l";
#since -l is not an executable, the container fails to start.

#Test ENTRYPOINT
#1. Write the Dockerfile
FROM centos
ENTRYPOINT ["ls","-a"]

#2. Build the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker build -f dockerfile-en -t centos-en:1.1.1 .
Sending build context to Docker daemon 26.11 kB
Step 1/2 : FROM centos
 ---> 300e315adb2f
Step 2/2 : ENTRYPOINT ls -a
 ---> Running in 49c8f33c0272
 ---> da1b02c99aec
Removing intermediate container 49c8f33c0272
Successfully built da1b02c99aec

#3. Run the image
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run da1b02c99aec
.  ..  .dockerenv  bin  dev  etc  home

#4. Run the image with an extra argument -- it is appended after the ENTRYPOINT
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run da1b02c99aec -l
total 72
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 .
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 ..
-rwxr-xr-x. 1 root root    0 Mar 9 09:00 .dockerenv
lrwxrwxrwx. 1 root root    7 Nov 3 15:22 bin -> usr/bin
drwxr-xr-x. 5 root root  340 Mar 9 09:00 dev
drwxr-xr-x. 1 root root 4096 Mar 9 09:00 etc
drwxr-xr-x. 2 root root 4096 Nov 3 15:22 home
lrwxrwxrwx. 1 root root    7 Nov 3 15:22 lib -> usr/
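The two instructions are often combined: ENTRYPOINT fixes the executable and CMD supplies default arguments that can be overridden at docker run time (a small sketch; dockerfile-both and the tag centos-ls:1.0 are example names):

# Dockerfile
FROM centos
# ENTRYPOINT fixes the executable
ENTRYPOINT ["ls"]
# CMD provides default arguments, replaced by anything passed on the command line
CMD ["-a"]

# build and run
docker build -f dockerfile-both -t centos-ls:1.0 .
docker run centos-ls:1.0              # runs: ls -a
docker run centos-ls:1.0 -l /tmp      # runs: ls -l /tmp  (CMD replaced, ENTRYPOINT kept)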
Dockerfile end-to-end exercise: building a Tomcat image
1. Put the Tomcat and JDK packages in the current directory
-rwxrwxrwx 1 root root  10515248 Mar  9 17:06 apache-tomcat-8.5.63.tar.gz               #tomcat
-rwxrwxrwx 1 root root 143142634 Dec  2 13:21 jdk-8u271-linux-x64.tar.gz                #jdk
drwxrwxrwx 2 3434 3434      4096 Jul 22  2020 node_exporter-0.18.1.linux-amd64
-rwxrwxrwx 1 root root   8083296 Mar 18  2020 node_exporter-0.18.1.linux-amd64.tar.gz
-rwxrwxrwx 1 root root  20143759 Dec  7  2015 Python-3.5.1.tgz
drwxrwxrwx 2 root root      4096 Jun 28  2018 scripts
2. Write the Dockerfile
FROM centos
# MAINTAINER cheng<1204598429@qq.com>
# copy a file into the image
COPY README /usr/local/README
# ADD copies the archives and extracts them
ADD jdk-8u231-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.35.tar.gz /usr/local/
RUN yum -y install vim
# set environment variables and the working directory
ENV MYPATH /usr/local
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_231
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.35
# PATH entries are separated by :
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib
# declare the exposed port
EXPOSE 8080
# default command: start tomcat and keep following its log
CMD /usr/local/apache-tomcat-9.0.35/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.35/logs/catalina.out
3. Build the image
# Since the Dockerfile uses the default file name (Dockerfile), there is no need to pass -f
$ docker build -t mytomcat:0.1 .
4. Run the image
$ docker run -d -p 8080:8080 --name tomcat01 \
  -v /home/kuangshen/build/tomcat/test:/usr/local/apache-tomcat-9.0.35/webapps/test \
  -v /home/kuangshen/build/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.35/logs \
  mytomcat:0.1
5. Access test
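One simple way to test the access from the host (a small sketch):

# check that the container is up and that Tomcat answers on the mapped port
docker ps
curl http://localhost:8080
docker logs tomcat01        # inspect catalina.out if the page does not load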
6. Deploy a project (because of the volume mounts, we can write the project locally on the host and it is published straight away!)
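For example, a minimal test web application can be created directly in the mounted host directory (a small sketch; the paths follow the -v mounts above, and the page content is just an example):

cd /home/kuangshen/build/tomcat/test
mkdir -p WEB-INF
# a minimal deployment descriptor
cat > WEB-INF/web.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
</web-app>
EOF
# a simple test page
echo '<h1>hello docker tomcat</h1>' > index.jsp
# then open http://<host-ip>:8080/test/index.jsp in a browser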
Result: the project is deployed successfully and can be accessed directly!
Our workflow from now on: master writing Dockerfiles, because everything after this is published and run as Docker images!
Publishing your own image to Docker Hub
1. Register an account at https://hub.docker.com
2. Make sure the account can log in
3. Log in
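Logging in from the command line (a small sketch; replace the user name with your own):

$ docker login -u chengcoder
Password:
# on success Docker prints "Login Succeeded" and stores the credentials locally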
4. Push the image
# The push will fail at first: without a user-name prefix, Docker tries to push to the official library namespace.
# Fixes:
# Option 1: add your Docker Hub user name when building, then push again
$ docker build -t chengcoder/mytomcat:0.1 .

# Option 2: re-tag the existing image with docker tag, then push again
$ docker tag <image id> chengcoder/mytomcat:1.0
Docker Networking
1. Understanding docker0
[root@ecs-x-large-2-linux-20200305213344 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo          #local loopback address
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:87:2c:ab brd ff:ff:ff:ff:ff:ff
    #the cloud host's private address
    inet 192.168.0.158/24 brd 192.168.0.255 scope global noprefixroute dynamic eth0
       valid_lft 75530sec preferred_lft 75530sec
    inet6 fe80::f816:3eff:fe87:2cab/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:95:d5:1c:dd brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0        #docker0's address
       valid_lft forever preferred_lft forever
    inet6 fe80::42:95ff:fed5:1cdd/64 scope link
       valid_lft forever preferred_lft forever
There are three networks in total: lo (loopback), eth0 (the host's NIC), and docker0 (the bridge Docker creates).
2. How Docker handles container network traffic
#Run a tomcat container
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat01 tomcat
3a030b178e509a29f76c9fe6cd0fe70ba644983c35807c18b61466f2efa63461

#Check the host's network state again
[root@ecs-x-large-2-linux-20200305213344 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:87:2c:ab brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.158/24 brd 192.168.0.255 scope global noprefixroute dynamic eth0
       valid_lft 75211sec preferred_lft 75211sec
    inet6 fe80::f816:3eff:fe87:2cab/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:95:d5:1c:dd brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:95ff:fed5:1cdd/64 scope link
       valid_lft forever preferred_lft forever
#A new interface appeared: one half of the 59 - 58 pair
59: vethece8904@if58: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 22:70:2a:91:fb:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::2070:2aff:fe91:fbf9/64 scope link
       valid_lft forever preferred_lft forever

#Look at the network inside the container
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it 3a030b178e50 /bin/bash
root@3a030b178e50:/usr/local/tomcat# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
#The other half: 58 - 59
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

#Test whether the host and the container can ping each other -- they can
Summary of how it works:
1. Every time a container starts, Docker assigns it an IP address. As soon as Docker is installed, a docker0 bridge exists, and containers are attached to it in bridge mode using the veth-pair technique.
2. Start another container and test again: yet another pair of interfaces appears. The interfaces a container brings always come in pairs. A veth-pair is a pair of virtual device interfaces, one end attached to the protocol stack and the two ends connected to each other. Because of this property, a veth-pair acts as a bridge between virtual network devices; OpenStack, container-to-container links in Docker, and OVS connections all use the veth-pair technique.
3. Test whether tomcat01 and tomcat02 can ping each other.
Conclusion: tomcat01 and tomcat02 share one router, docker0. When no network is specified, all containers are routed through docker0, and Docker assigns each container an available IP by default. Docker uses Linux bridging: on the host, docker0 is the bridge that all Docker containers attach to.
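You can see the host-side ends of these veth pairs attached to the docker0 bridge (a small sketch; interface names and numbers will differ on each machine):

# list every interface whose master is the docker0 bridge -- one veth per running container
ip link show master docker0
# (alternatively: brctl show docker0, if bridge-utils is installed)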
3. Using --link
Problem: we want to reach another container, say a MySQL container, but we do not want to edit our connection configuration every time that container restarts and gets a new IP. Can we reach the MySQL container by name instead?
Containers can reach each other through docker0 by IP, but not by name, and that is the problem --link was created to solve.
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat02 tomcat
815f89b080aa83ba93e023237a962fa327198dfeef8e1b770156fd03717b21fc
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS          PORTS      NAMES
815f89b080aa   tomcat   "catalina.sh run"   5 seconds ago    Up 4 seconds    8080/tcp   tomcat02
3a030b178e50   tomcat   "catalina.sh run"   18 minutes ago   Up 18 minutes   8080/tcp   tomcat01

#Pinging by name fails
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ping tomcat01
ping: tomcat01: Name or service not known
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 3a030b178e50 ping tomcat02
ping: tomcat02: Name or service not known

#Pinging by IP works
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
60: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 3a030b178e50 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
58: eth0@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec 815f89b080aa ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.055 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.055 ms
Using --link
#Run tomcat03 with --link tomcat02
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat03 --link tomcat02 tomcat
cb7d633736d4fa52916950e2477ec229dd2a4193e2d2121a55fa42a16471840f

#tomcat03 can ping tomcat02 by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=0.043 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.049 ms

#But tomcat02 cannot ping tomcat03 by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 ping tomcat03
ping: tomcat03: Name or service not known

#Why: look at tomcat03's hosts file
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.0.3      tomcat02 815f89b080aa       #tomcat02 is mapped here
172.17.0.4      cb7d633736d4

#tomcat02's hosts file has no entry for tomcat03
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.0.3      815f89b080aa

--link simply adds a mapping to the container's hosts file. --link is no longer recommended, and neither is relying on docker0; use custom networks instead.
4. Custom networks
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network

Commands:
  connect      Connect a container to a network
  create       Create a network
  disconnect   Disconnect a container from a network
  inspect      Display detailed information on one or more networks
  ls           List networks
  prune        Remove all unused networks
  rm           Remove one or more networks
List all Docker networks
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
049b2d455318        bridge              bridge              local
ea9aabe8b492        host                host                local
4c9f7b958e9b        none                null                local

bridge: bridged networking via docker0 (the default; networks you create yourself also use the bridge driver)
none:   no networking configured, rarely used
host:   share the host's network stack
Hands-on test
# A plain docker run uses --net bridge by default, and that bridge is docker0
$ docker run -d -P --name tomcat01 tomcat
# is equivalent to
$ docker run -d -P --name tomcat01 --net bridge tomcat
# docker0's drawback: by default, container names cannot be resolved; --link can patch that, but it is clumsy.

#Create a custom network
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d
[root@ecs-x-large-2-linux-20200305213344 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
049b2d455318        bridge              bridge              local
ea9aabe8b492        host                host                local
a8ba53cc0f3b        mynet               bridge              local        #the custom network
4c9f7b958e9b        none                null                local

#Inspect the custom network
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d",
        "Created": "2021-03-09T19:16:52.302646691+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

#Start two tomcat containers on mynet
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat01 --net mynet tomcat
e14ddd601d9713635adab96caef5b054c263eff6f37c9d052e512ef6efb1015b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat02 --net mynet tomcat
ff97ba52684433594ca09511a34842401b9d7f0027729f251e23b12f183a182b
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND             CREATED          STATUS          PORTS      NAMES
ff97ba526844   tomcat   "catalina.sh run"   4 seconds ago    Up 4 seconds    8080/tcp   tomcat02
e14ddd601d97   tomcat   "catalina.sh run"   11 seconds ago   Up 10 seconds   8080/tcp   tomcat01

#Inspect mynet again
[root@ecs-x-large-2-linux-20200305213344 ~]# docker inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a8ba53cc0f3b5b37e972fd57012d68fb78f95d2b9b4c3ef3316717687887ae6d",
        "Created": "2021-03-09T19:16:52.302646691+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "e14ddd601d9713635adab96caef5b054c263eff6f37c9d052e512ef6efb1015b": {
                "Name": "tomcat01",
                "EndpointID": "f7d9b56bdef9d5739d016f8495c619273ebabbb069ff63bfa45be36a3579ca47",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "ff97ba52684433594ca09511a34842401b9d7f0027729f251e23b12f183a182b": {
                "Name": "tomcat02",
                "EndpointID": "699eb4fdba2f17d4a3750451139380f2c5ce7ceb2ed65f66a6e748c87c4ff405",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

#On the custom network, containers can ping each other by name
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat01 ping tomcat02
PING tomcat02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.046 ms
64 bytes from tomcat02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.040 ms
^C
--- tomcat02 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.040/0.043/0.046/0.008 ms
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat02 ping tomcat01
PING tomcat01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.034 ms
^C
--- tomcat01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 0.029/0.031/0.034/0.006 ms

#Summary: a custom network maintains the name-to-IP mapping between containers for us; this is the recommended approach.
5. Connecting a container to another network
#Test connectivity across two different networks (start tomcat03 on the default docker0 network)
[root@ecs-x-large-2-linux-20200305213344 ~]# docker run -d --name tomcat03 tomcat
70fb08bab733dd100a748ba587f5e72b36045fc8575ebd4fb2ae6f5bd695652f
[root@ecs-x-large-2-linux-20200305213344 ~]# docker ps
CONTAINER ID   IMAGE    COMMAND             CREATED         STATUS         PORTS      NAMES
70fb08bab733   tomcat   "catalina.sh run"   4 seconds ago   Up 4 seconds   8080/tcp   tomcat03
ff97ba526844   tomcat   "catalina.sh run"   7 minutes ago   Up 7 minutes   8080/tcp   tomcat02
e14ddd601d97   tomcat   "catalina.sh run"   7 minutes ago   Up 7 minutes   8080/tcp   tomcat01

#tomcat03 cannot ping tomcat01
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat01
ping: tomcat01: Name or service not known

#To connect tomcat03 to tomcat01, add tomcat03 to the mynet network
docker network connect mynet tomcat03

#Now tomcat03 and tomcat01 can ping each other
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat03 ping tomcat01
PING tomcat01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.033 ms
64 bytes from tomcat01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.053 ms
^C
--- tomcat01 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.033/0.043/0.053/0.011 ms
[root@ecs-x-large-2-linux-20200305213344 ~]# docker exec -it tomcat01 ping tomcat03
PING tomcat03 (192.168.0.4) 56(84) bytes of data.
64 bytes from tomcat03.mynet (192.168.0.4): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from tomcat03.mynet (192.168.0.4): icmp_seq=2 ttl=64 time=0.045 ms
^C
--- tomcat03 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.025/0.035/0.045/0.010 ms

#Conclusion: to reach a service across networks, use docker network connect to attach the container to the other network.
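After the connect, tomcat03 is attached to both networks and holds one IP on each, which you can confirm from either side (a small sketch):

# tomcat03 now has an interface on docker0 (172.17.x.x) and another on mynet (192.168.x.x)
docker exec -it tomcat03 ip addr
# mynet now lists tomcat03 among its containers
docker inspect mynet
# to detach it again:
docker network disconnect mynet tomcat03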