OpenStack Core Components: the Nova Compute Service (7)
I. Introduction to Nova:
- A client (either an OpenStack end user or another program) sends a request to the API (nova-api): "create a VM for me".
- After some necessary processing, the API sends a message to Messaging (RabbitMQ): "have the Scheduler create a VM".
- The Scheduler (nova-scheduler) picks up the API's message from Messaging, runs its scheduling algorithm, and selects node A from the available compute nodes.
- The Scheduler sends a message to Messaging: "create this VM on compute node A".
- Compute (nova-compute) on node A picks up the Scheduler's message from Messaging and starts the VM on the node's hypervisor.
- During VM creation, whenever Compute needs to query or update the database, it sends a message via Messaging to the Conductor (nova-conductor), which handles all database access.
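This message flow can be watched from the command line. A minimal sketch, assuming the usual RDO log locations under /var/log/nova/ and placeholder flavor/image/network names:

openstack server create --flavor m1.tiny --image cirros --nic net-id=<net-id> demo-vm    # the request nova-api receives
# on the controller: API -> Scheduler -> Conductor traffic shows up in these logs
tail -f /var/log/nova/nova-api.log /var/log/nova/nova-scheduler.log /var/log/nova/nova-conductor.log
# on the chosen compute node: nova-compute asks the local hypervisor to start the guest
tail -f /var/log/nova/nova-compute.log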
1. The dashboard or CLI requests authentication from Keystone via the RESTful API.
2. Keystone authenticates the request and returns an auth token for it.
3. The dashboard or CLI sends a "boot instance" request (carrying the auth token) to nova-api via the RESTful API.
4. nova-api receives the request and asks Keystone to validate the token and the user.
5. Keystone checks whether the token is valid; if so, it returns the validated identity and the corresponding roles (note: some operations require specific role permissions).
6. Once authenticated, nova-api talks to the database.
7. It creates the initial database record for the new instance.
8. nova-api sends an rpc.call to nova-scheduler asking whether resources are available to create the instance (and for a host ID).
9. nova-scheduler listens on the message queue and picks up nova-api's request.
10. nova-scheduler queries the nova database for the state of the compute resources and runs its scheduling algorithm to find a host that can satisfy the instance.
11. If a suitable host is found, nova-scheduler updates the instance's record in the database with the chosen physical host.
12. nova-scheduler sends an rpc.cast to nova-compute with the create-instance request.
13. nova-compute picks up the create-instance message from its message queue.
14. nova-compute sends an rpc.call to nova-conductor requesting the instance's information (flavor).
15. nova-conductor picks up nova-compute's request from the message queue.
16. nova-conductor looks up the instance information described in the message.
17. nova-conductor retrieves the instance information from the database.
18. nova-conductor publishes the instance information back onto the message queue.
19. nova-compute picks up the instance information from its message queue.
20. nova-compute obtains an auth token from Keystone via its RESTful API, then makes an HTTP request to glance-api for the image needed to create the instance.
21. glance-api validates the token with Keystone and returns the result.
22. With the token validated, nova-compute obtains the image information (URL).
23. nova-compute obtains an auth token from Keystone via its RESTful API, then makes an HTTP request to neutron-server for the network information needed for the instance.
24. neutron-server validates the token with Keystone and returns the result.
25. With the token validated, nova-compute obtains the instance's network information.
26. nova-compute obtains an auth token from Keystone via its RESTful API, then makes an HTTP request to cinder-api for the persistent storage information needed for the instance.
27. cinder-api validates the token with Keystone and returns the result.
28. With the token validated, nova-compute obtains the instance's persistent storage information.
29. nova-compute calls the configured virtualization driver with the instance's information to create the virtual machine.
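Steps 20-28 all follow the same pattern: get a token from Keystone, then call the service's REST API with it. A rough curl sketch of one such exchange, assuming the Keystone v3 endpoint and the nova service credentials configured later in this chapter (hostname, user, and password are assumptions):

# request a project-scoped token; Keystone returns it in the X-Subject-Token response header
curl -si http://controller:5000/v3/auth/tokens -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "nova",
       "domain": {"name": "default"}, "password": "nova"}}}, "scope": {"project":
       {"name": "service", "domain": {"name": "default"}}}}}' | grep X-Subject-Token
# the token is then sent on every request, e.g. asking glance-api for the image list
curl -s http://controller:9292/v2/images -H "X-Auth-Token: <token>"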
Removing a compute node (node1) from the cluster:
1. On the controller node, list the compute nodes and services
openstack host list
nova service-list
2. Stop the compute service on node1 so it shows as down, then disable it
systemctl stop openstack-nova-compute
nova service-list
nova service-disable node1 nova-compute
nova service-list
3. Clean up the nova database
(1) Check the current state of the database
[root@node1 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 90
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> use nova;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [nova]> select host from nova.services;
+---------+
| host    |
+---------+
| 0.0.0.0 |
| 0.0.0.0 |
| node1   |
| node1   |
| node1   |
| node1   |
| node2   |
+---------+
7 rows in set (0.00 sec)

MariaDB [nova]> select hypervisor_hostname from compute_nodes;
+---------------------+
| hypervisor_hostname |
+---------------------+
| node1               |
| node2               |
+---------------------+
2 rows in set (0.00 sec)
(2) Delete node1's records from the database
MariaDB [nova]> delete from nova.services where host="node1";
Query OK, 4 rows affected (0.01 sec)

MariaDB [nova]> delete from compute_nodes where hypervisor_hostname="node1";
Query OK, 1 row affected (0.00 sec)

MariaDB [nova]> select host from nova.services;
+---------+
| host    |
+---------+
| 0.0.0.0 |
| 0.0.0.0 |
| node2   |
+---------+
3 rows in set (0.00 sec)

MariaDB [nova]> select hypervisor_hostname from compute_nodes;
+---------------------+
| hypervisor_hostname |
+---------------------+
| node2               |
+---------------------+
1 row in set (0.00 sec)
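Editing the nova database by hand works, but depending on the client and release the same cleanup can usually be done through the API instead; a hedged alternative (the id comes from the nova service-list output above):

nova service-list                  # note the id of node1's nova-compute entry
nova service-delete <service-id>   # remove that service record without touching SQL directly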
Reference nova.conf (copy this and adjust the values as described in the deployment steps below):
[DEFAULT]
my_ip = 172.16.254.63
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:admin@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
os_region_name = RegionOne
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
password = placement
user_domain_name = Default
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://172.16.254.63:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
Nova components:
1. nova-api
2. nova-scheduler (the scheduling service), which applies filters and then weighting (see the config sketch after this list)
3. nova-compute, the core service that manages virtual machines
4. nova-conductor
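A sketch of where filtering and weighting live in the controller's nova.conf (Ocata-style option names; this particular filter list is an illustration, not this deployment's actual setting):

[filter_scheduler]
# filters decide which hosts are eligible at all ...
enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
# ... weighting then ranks the surviving hosts; with a positive multiplier, more free RAM wins
ram_weight_multiplier = 1.0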
I. Pre-deployment inspection work
source openrc    # load the OpenStack admin environment
Health checks for the shared services (a combined one-shot check is sketched at the end of this section):
1. Check the database
systemctl status mariadb.service
2. Check the cache
systemctl status memcached.service
3. Check the message queue
systemctl status rabbitmq-server.service
4. Check the status of the RabbitMQ cluster
rabbitmqctl cluster_status
5. Check httpd
systemctl status httpd
6. If a service will not start, troubleshoot by searching for error/failed messages
Method 1: journalctl -ex
Method 2: cd /var/log/httpd/
[root@node1 ~]# cd /var/log/httpd/
[root@node1 httpd]# ls
access_log error_log error_log-20190701 error_log-20190802 keystone.log
access_log-20190622 error_log-20190622 error_log-20190711 keystone_access.lo
View the error log:
vim error_log
then search inside vim with /error
7. nova service-list — if a service is stopped or disabled, start/enable it again
(this can also be checked from the web UI)
8. Path where KVM stores virtual machine disks:
/var/lib/libvirt/images/
9. Default path where OpenStack stores instance files (useful when troubleshooting):
/var/lib/nova/instances/
nova list
nova show <id>    # show detailed information about an instance
KVM-level commands for viewing virtual machines:
virsh list --all
virsh edit <name>    # view the domain definition
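The shared-service checks above can also be run in a single pass; a minimal sketch using the unit names from this CentOS/RDO install:

for svc in mariadb memcached rabbitmq-server httpd; do
    systemctl is-active --quiet "$svc" && echo "$svc: active" || echo "$svc: DOWN"
done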
II. Nova deployment on the controller node (node1)
source openrc
1. On node1, log in to MariaDB: mysql -uroot -p123
2. Create the three databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
3. For each of the three databases, grant access for both local and remote logins
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
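A quick sanity check of the grants (assuming the NOVA_DBPASS placeholder was replaced consistently): the nova user should now be able to see all three databases.

mysql -u nova -pNOVA_DBPASS -h node1 -e "SHOW DATABASES;"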
4. Create a nova user, password nova, in the default domain
openstack user create --domain default --password=nova nova
5. Grant the nova user the admin role on the service project
openstack role add --project service --user nova admin
6. Create the nova compute service entity
openstack service create --name nova \
--description "openstack compute" compute
Check the service that was just created:
openstack service list
7. Publish the service endpoints for the three interfaces (public, internal, admin)
openstack endpoint create --region RegionOne compute public http://node1:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://node1:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://node1:8774/v2.1
openstack endpoint list | grep nova
8. Create the placement service, which tracks resource usage on each node
openstack user create --domain default --password=placement placement    # user placement, password placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
9. Publish the placement endpoints for the three interfaces
openstack endpoint create --region RegionOne placement public http://node1:8778
openstack endpoint create --region RegionOne placement internal http://node1:8778
openstack endpoint create --region RegionOne placement admin http://node1:8778
10. Install the six Nova packages on node1
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
11. Edit the configuration file
cd /etc/nova    # go to the config directory
ls
cp nova.conf nova.conf.beifen    # make a backup first
ll    # check the owner and group
vim nova.conf    # delete all of the existing content
Paste in the reference nova.conf from earlier in this chapter, then make the changes below
Changes in [DEFAULT]:
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@node1
my_ip = 192.168.194.6                 # IP of this node
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Changes in [api_database]:
connection = mysql+pymysql://nova:NOVA_DBPASS@node1/nova_api
Changes in [database]:
connection = mysql+pymysql://nova:NOVA_DBPASS@node1/nova
Changes in [api]:
auth_strategy = keystone
Changes in [keystone_authtoken]:
auth_uri = http://node1:5000
auth_url = http://node1:35357
memcached_servers = node1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
Changes in [vnc]:
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Changes in [glance]:
api_servers = http://node1:9292
Changes in [oslo_concurrency]:
lock_path = /var/lib/nova/tmp
Changes in [placement]:
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://node1:35357/v3
username = placement
password = placement
12. Next, edit the Apache config for the placement API
vim /etc/httpd/conf.d/00-nova-placement-api.conf
Append the following at the end of the file:
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
13. Restart the httpd service
systemctl restart httpd
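A rough check that the extra <Directory> stanza took effect (port 8778 as registered for placement above): the API root should return a small JSON version document instead of a 403.

curl -s http://node1:8778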
14. Populate the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
15. List the cells
nova-manage cell_v2 list_cells
16. Start and enable the Nova services
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Verify that the services are running:
systemctl status openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
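Only the controller-side services exist at this point; with openrc sourced, a quick check should show nova-consoleauth, nova-conductor, and nova-scheduler in state up:

openstack compute service list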
III. Nova deployment on the compute node (node2)
1. Check the yum repositories
2. Install the compute node package
yum install openstack-nova-compute -y
3. If the installation fails, resolve the dependencies as below; if it succeeds, go straight to the next step
On node1:
cd /openstack-ocata
ls
cirros-0.3.3-x86_64-disk.img openstack-compute-yilai
Copy the dependency packages to node2:
scp -r openstack-compute-yilai node2:/root/
On node2:
ls
cd openstack-compute-yilai/
yum localinstall -y ./*
Then install the compute package:
yum install openstack-nova-compute -y
4. Edit the configuration file; delete all of the existing content first
A few settings differ from node1
Paste in the reference nova.conf from earlier in this chapter, modify it, and comment out anything that is not needed
Changes in [DEFAULT]:
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@node1
my_ip = 192.168.194.7                 # IP of this node
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Changes in [api]:
auth_strategy = keystone
Changes in [keystone_authtoken]:
auth_uri = http://node1:5000
auth_url = http://node1:35357
memcached_servers = node1:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
Changes in [vnc]:
enabled = true
vncserver_listen = 0.0.0.0            # listen on all interfaces
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://node1:6080/vnc_auto.html
Changes in [glance]:
api_servers = http://node1:9292
Changes in [oslo_concurrency]:
lock_path = /var/lib/nova/tmp
Changes in [placement]:
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://node1:35357/v3
username = placement
password = placement
Changes in [libvirt]:
virt_type = qemu
Check whether the CPU supports hardware virtualization (the command below counts the vmx/svm flags):
egrep -c '(vmx|svm)' /proc/cpuinfo
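A small sketch tying that check to the [libvirt] setting above (a count of 0 means no VT-x/AMD-V, so QEMU emulation is required):

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    echo "no hardware virtualization flags -> keep virt_type = qemu"
else
    echo "hardware virtualization available -> virt_type = kvm would also work"
fi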
5. Two ways to check the Nova deployment from the controller node (node1)
nova service-list
openstack compute service list
6. Start the compute service on node2
libvirtd.service, the KVM virtualization service, must be started as well
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
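If openstack-nova-compute does not come up, the compute log usually explains why (typical RDO log path, assumed here):

tail -n 50 /var/log/nova/nova-compute.log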
7. Check the Nova deployment from the controller node (node1)
openstack compute service list
openstack catalog list
To stop the compute service when needed:
systemctl stop openstack-nova-compute
8. Show the hypervisor list
openstack hypervisor list
Register node2 in the database (discover the new host); otherwise instances cannot be launched on that node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
openstack image list    # list the available images
nova-status upgrade check
Node 1 is the controller node; nodes 2 and 3 are compute nodes; resolve the dependencies before configuring them.