一. OpenStack Compute Service: Nova
1.1 Introduction to Nova
Nova is one of the two earliest OpenStack modules; the other is the object storage service Swift. In an OpenStack deployment, nodes are divided into compute nodes and control nodes, and this split is driven mainly by Nova: the node running nova-compute is the compute node, and the node running all the other Nova services is the control node. nova-compute only creates and runs virtual machines; all the control logic lives on the control node.
1.2 Nova Components
- API: implements the RESTful API and is the only entry point for external access to Nova. It receives external requests and forwards them to the other service components via the message queue. It is also compatible with the EC2 API, so EC2 management tools can be used for day-to-day Nova administration.
- Scheduler: decides which host (compute node) a new virtual machine is created on. Scheduling an instance onto a physical node takes two steps:
1. Filtering (Filter): start from the unfiltered host list and, based on the filter properties, keep only the compute nodes that satisfy the request's requirements.
2. Weighting (Weight): compute a weight for each host that passed the filters, then pick a host according to the weighing policy.
- Cert: certificate service (manages X509 certificates, used mainly by the EC2-compatible API)
- Conductor: middleware through which compute nodes access the database
- Consoleauth: authorizes console access tokens
- Novncproxy: VNC proxy
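The two scheduling steps above (filter, then weigh) can be sketched with plain shell tools. The host names and free-RAM numbers below are made up for illustration; real Nova chains many filters (RamFilter, ComputeFilter, ...) and configurable weighers, but the shape is the same:

```shell
# Hypothetical candidate hosts: "name  free_ram_mb"
hosts='node2 4096
node3 1024
node4 8192'
# Step 1 (filter): keep hosts with at least 2048 MB free for the requested flavor
# Step 2 (weigh): rank the survivors by free RAM and pick the best one
printf '%s\n' "$hosts" | awk '$2 >= 2048' | sort -k2 -rn | head -n1 | awk '{print $1}'
```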
1.3 Nova Environment Preparation
1.3.1 Install the Nova packages
[root@linux-node1 ~]# yum install -y \
openstack-nova-api \
openstack-nova-conductor \
openstack-nova-console \
openstack-nova-novncproxy \
openstack-nova-scheduler
Notes (in order, top to bottom):
nova-api: the Nova API service
nova-conductor: database-access middleware for compute nodes
nova-console: console authorization components
nova-novncproxy: VNC proxy component
nova-scheduler: instance scheduling component
1.3.2 Create the Nova databases and database user
#Log in to the database
[root@linux-node1 ~]# mysql -uroot -p
#Create the nova database
MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.00 sec)
#Create the nova user and grant privileges
MariaDB [(none)]> grant all privileges on nova.* to nova@'%' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to nova@'localhost' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
#Create the nova_api database
MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.00 sec)
#Grant the nova user privileges on the nova_api database
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
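The interactive statements above can also be run as a single non-interactive script. A sketch, assuming the same 'nova' password as above; `IF NOT EXISTS` is added so re-runs are harmless:

```sql
CREATE DATABASE IF NOT EXISTS nova;
CREATE DATABASE IF NOT EXISTS nova_api;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
FLUSH PRIVILEGES;
```

Saved to a file, this can be fed to the client with `mysql -uroot -p < nova-db.sql`.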
1.3.3 Create the OpenStack nova user
#Create the nova user
[root@linux-node1 ~]# openstack user create --domain default --password-prompt nova
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 71f572aaf1ec431695f2ed0b27b8c908 |
| name | nova |
| password_expires_at | None |
+---------------------+----------------------------------+
#Add the nova user to the service project with the admin role
[root@linux-node1 ~]# openstack role add --project service --user nova admin
1.4 Install and Configure Nova on the Control Node
1.4.1 Edit the configuration file
#Edit the nova configuration file
[root@linux-node1 ~]# vim /etc/nova/nova.conf
#Enable only the compute and metadata APIs; uncomment this line
enabled_apis=osapi_compute,metadata
#Add the following under the [api_database] section
[api_database]
#Database connection for nova_api
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
#Add the following under the [database] section
[database]
#Database connection for nova
connection=mysql+pymysql://nova:nova@192.168.56.11/nova
#Add the following under the [DEFAULT] section
[DEFAULT]
#Message queue configuration
transport_url=rabbit://openstack:openstack@192.168.56.11
#Use keystone authentication; uncomment this line
auth_strategy=keystone
#Add the following under the [keystone_authtoken] section
[keystone_authtoken]
#Configure Nova's connection to keystone
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
#Enable support for the networking service
use_neutron=true
#Firewall driver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
#Address the VNC server listens on
vncserver_listen=0.0.0.0
#VNC proxy client address (the control node's management IP)
vncserver_proxyclient_address=192.168.56.11
#Location of the image service API
api_servers=http://192.168.56.11:9292
#Configure the lock path; uncomment this line
lock_path=/var/lib/nova/tmp
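For reference, here is a sketch of where the options above end up in /etc/nova/nova.conf. The section placement of the vncserver_* options, api_servers, and lock_path follows the standard nova.conf layout ([vnc], [glance], and [oslo_concurrency] respectively); the memcached port 11211 is the memcached default, assumed here:

```ini
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.56.11
auth_strategy = keystone
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:nova@192.168.56.11/nova_api

[database]
connection = mysql+pymysql://nova:nova@192.168.56.11/nova

[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.56.11

[glance]
api_servers = http://192.168.56.11:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp
```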
1.4.2 Populate the databases
#Populate the nova-api database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
#Populate the nova database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
1.4.3 Check the database
[root@linux-node1 ~]# mysql -h192.168.56.11 -unova -pnova -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_auth_tokens |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
1.4.4 Create the service entity and endpoints
#Create the nova service entity
[root@linux-node1 ~]# openstack service create --name nova \
--description "OpenStack Compute" compute
#Create the compute endpoints
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
compute public http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
compute internal http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
compute admin http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
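The `%(tenant_id)s` placeholder is not a literal part of the URL: the client substitutes the current project's ID before calling the endpoint. A quick illustration of that expansion (the project ID here is hypothetical):

```shell
# Substitute a (hypothetical) project ID into the endpoint URL,
# the same expansion the OpenStack clients perform
url='http://192.168.56.11:8774/v2.1/%(tenant_id)s'
project_id='e8f9a1b2c3d44e5f'
echo "$url" | sed "s/%(tenant_id)s/$project_id/"
```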
1.4.5 Check the endpoint list
[root@linux-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| 1cf120812e2142c1ac9c239a71146ed8 | RegionOne | nova | compute | True | admin | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| 30ead02b5d1b4198bc5bf5c030182113 | RegionOne | nova | compute | True | public | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| 46bb270ff4f04b0da6a69a554322bc27 | RegionOne | keystone | identity | True | public | http://192.168.56.11:5000/v3/ |
| 5da8b564f1244915a8d0bdf1d1f65a18 | RegionOne | glance | image | True | internal | http://192.168.56.11:9292 |
| 77bca853dafb413da29dcbac4bed9305 | RegionOne | keystone | identity | True | admin | http://192.168.56.11:35357/v3/ |
| 7cc4f83fc4f34cf9b1ec5033739aefc1 | RegionOne | keystone | identity | True | internal | http://192.168.56.11:35357/v3/ |
| 9f35261f1894470d81abfb8dce6876a4 | RegionOne | glance | image | True | admin | http://192.168.56.11:9292 |
| aa50739225fc4aecb9b2e9fa589d2706 | RegionOne | nova | compute | True | internal | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| fc8978523b064b518eab75f40a7db017 | RegionOne | glance | image | True | public | http://192.168.56.11:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
1.4.6 Start Nova and all of its component services
#Enable the services at boot
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
#Start the services
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
1.4.7 Verify Nova
[root@linux-node1 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name | Service | Zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | scheduler | internal |
+-------------------------+-------------+----------+
1.5 Install and Configure Nova on the Compute Node
1.5.1 Environment preparation
#Install nova-compute on the compute node
[root@linux-node2 ~]# yum install -y openstack-nova-compute
1.5.2 Edit the configuration file
#Copy the control node's nova configuration file to the compute node
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/
#Edit the configuration file
[root@linux-node2 ~]# vim /etc/nova/nova.conf
#Delete the following two lines (the compute node accesses the database through the nova-conductor middleware, so it needs no database configuration)
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
connection=mysql+pymysql://nova:nova@192.168.56.11/nova
#Change the VNC proxy client address to the compute node's IP
vncserver_proxyclient_address=192.168.56.12
#Add the VNC proxy base URL
novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html
#Enable VNC; uncomment this line
enabled=true
#Set the keyboard map; uncomment this line
keymap=en-us
#Set the virtualization type; uncomment this line
virt_type=kvm
Note: first check whether the compute node's CPU supports hardware-accelerated virtualization by running egrep -c '(vmx|svm)' /proc/cpuinfo. If the command returns 1 or more, no change is needed; if it returns 0, you must configure libvirt to use QEMU instead of KVM.
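The decision described in the note can be sketched as a small script. The `sample_flags` string below stands in for a line from /proc/cpuinfo so the logic can be shown in isolation:

```shell
# Decide virt_type from CPU flags; on a real node replace sample_flags with
# the actual output of: grep -Ec '(vmx|svm)' /proc/cpuinfo
sample_flags='flags : fpu vme de pse msr pae vmx ssse3'
count=$(printf '%s\n' "$sample_flags" | grep -Ec '(vmx|svm)')
if [ "$count" -ge 1 ]; then
    echo 'virt_type=kvm'    # CPU exposes hardware virtualization
else
    echo 'virt_type=qemu'   # fall back to unaccelerated emulation
fi
```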
#Add the following under the [DEFAULT] section
[DEFAULT]
#Configure the message queue
transport_url=rabbit://openstack:openstack@192.168.56.11
1.5.3 Start nova-compute and libvirt on the compute node
#Enable the services at boot
[root@linux-node2 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
#Start the services
[root@linux-node2 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[Open source is a spirit; sharing is a virtue]
— By GoodCook
— Author's QQ: 253097001
— Questions and discussion are always welcome
— Original work. Reposting is permitted, but the repost must credit the original source, the author, and this notice via hyperlink; otherwise legal liability may be pursued.