openstack(liberty): Deploying the Lab Platform (II, Simple-Version Software Install, part 2)

Continuing from part 1, this post finishes recording the installation of the compute and network components.

First, the installation of nova, the compute component.

n1. Preparation: create the database and grant privileges. (The password is again openstack; this is still done on the controller node, node0.)

 mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
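
A quick way to confirm the grants took effect (just a sanity check; it assumes the openstack password set above):

 mysql -u nova -popenstack -h localhost -e 'SHOW DATABASES;'   # should list the nova database
mysql -u nova -popenstack -h node0 -e 'SHOW DATABASES;'       # exercises the '%' grant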

n2. Set up the CLI environment

 source admin-openrc.sh

n3. Create the nova user and add the role.

 openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin

n4. Create the service and its endpoints.

 openstack service create --name nova  --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://node0:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://node0:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://node0:8774/v2/%\(tenant_id\)s
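
Before moving on, the registrations can be verified with the standard catalog listings:

 openstack service list
openstack endpoint list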

n5. Install the components.

 yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

n6. Configure /etc/nova/nova.conf

 [DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.100
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
verbose = True

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[database]
connection = mysql://nova:openstack@node0/nova

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack

[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
host = node0

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

n7. Sync the database

 su -s /bin/sh -c "nova-manage db sync" nova
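
If the sync succeeded, the nova database now contains tables; a quick hedged check, reusing the credentials configured above:

 mysql -u nova -popenstack -h node0 nova -e 'SHOW TABLES;' | head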

n8. Enable and start the services

 systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service  openstack-nova-scheduler.service openstack-nova-conductor.service  openstack-nova-novncproxy.service

 systemctl start openstack-nova-api.service  openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service   openstack-nova-novncproxy.service

At this step I hit an error, mainly from systemctl start openstack-nova-api.service: a permission problem. The error log looks like this:

 Feb   :: localhost nova-api: -- ::46.765  CRITICAL nova [-] OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/keys'
Feb :: localhost nova-api: -- ::46.765 ERROR nova Traceback (most recent call last):
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/bin/nova-api", line , in <module>
Feb :: localhost nova-api: -- ::46.765 ERROR nova sys.exit(main())
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/cmd/api.py", line , in main
Feb :: localhost nova-api: -- ::46.765 ERROR nova server = service.WSGIService(api, use_ssl=should_use_ssl)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/service.py", line , in __init__
Feb :: localhost nova-api: -- ::46.765 ERROR nova self.app = self.loader.load_app(name)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/wsgi.py", line , in load_app
Feb :: localhost nova-api: -- ::46.765 ERROR nova return deploy.loadapp("config:%s" % self.config_path, name=name)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in loadapp
Feb :: localhost nova-api: -- ::46.765 ERROR nova return loadobj(APP, uri, name=name, **kw)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in loadobj
Feb :: localhost nova-api: -- ::46.765 ERROR nova return context.create()
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in create
Feb :: localhost nova-api: -- ::46.765 ERROR nova return self.object_type.invoke(self)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in invoke
Feb :: localhost nova-api: -- ::46.765 ERROR nova **context.local_conf)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line , in fix_call
Feb :: localhost nova-api: -- ::46.765 ERROR nova val = callable(*args, **kw)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/urlmap.py", line , in urlmap_factory
Feb :: localhost nova-api: -- ::46.765 ERROR nova app = loader.get_app(app_name, global_conf=global_conf)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in get_app
Feb :: localhost nova-api: -- ::46.765 ERROR nova name=name, global_conf=global_conf).create()
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in create
Feb :: localhost nova-api: -- ::46.765 ERROR nova return self.object_type.invoke(self)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in invoke
Feb :: localhost nova-api: -- ::46.765 ERROR nova **context.local_conf)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line , in fix_call
Feb :: localhost nova-api: -- ::46.765 ERROR nova val = callable(*args, **kw)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line , in pipeline_factory_v21
Feb :: localhost nova-api: -- ::46.765 ERROR nova return _load_pipeline(loader, local_conf[CONF.auth_strategy].split())
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/auth.py", line , in _load_pipeline
Feb :: localhost nova-api: -- ::46.765 ERROR nova app = loader.get_app(pipeline[-])
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in get_app
Feb :: localhost nova-api: -- ::46.765 ERROR nova name=name, global_conf=global_conf).create()
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in create
Feb :: localhost nova-api: -- ::46.765 ERROR nova return self.object_type.invoke(self)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line , in invoke
Feb :: localhost nova-api: -- ::46.765 ERROR nova return fix_call(context.object, context.global_conf, **context.local_conf)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line , in fix_call
Feb :: localhost nova-api: -- ::46.765 ERROR nova val = callable(*args, **kw)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line , in factory
Feb :: localhost nova-api: -- ::46.765 ERROR nova return cls()
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/__init__.py", line , in __init__
Feb :: localhost nova-api: -- ::46.765 ERROR nova super(APIRouterV21, self).__init__(init_only)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line , in __init__
Feb :: localhost nova-api: -- ::46.765 ERROR nova self._register_resources_check_inherits(mapper)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line , in _register_resources_check_inherits
Feb :: localhost nova-api: -- ::46.765 ERROR nova for resource in ext.obj.get_resources():
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line , in get_resources
Feb :: localhost nova-api: -- ::46.765 ERROR nova CloudpipeController())]
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line , in __init__
Feb :: localhost nova-api: -- ::46.765 ERROR nova self.setup()
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/nova/api/openstack/compute/cloudpipe.py", line , in setup
Feb :: localhost nova-api: -- ::46.765 ERROR nova fileutils.ensure_tree(CONF.keys_path)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line , in ensure_tree
Feb :: localhost nova-api: -- ::46.765 ERROR nova os.makedirs(path, mode)
Feb :: localhost nova-api: -- ::46.765 ERROR nova File "/usr/lib64/python2.7/os.py", line , in makedirs
Feb :: localhost nova-api: -- ::46.765 ERROR nova mkdir(name, mode)
Feb :: localhost nova-api: -- ::46.765 ERROR nova OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/keys'
Feb :: localhost nova-api: -- ::46.765 ERROR nova
Feb :: localhost systemd: openstack-nova-api.service: main process exited, code=exited, status=1/FAILURE
Feb :: localhost systemd: Failed to start OpenStack Nova API Server.
Feb :: localhost systemd: Unit openstack-nova-api.service entered failed state.
Feb :: localhost systemd: openstack-nova-api.service failed.
Feb :: localhost systemd: openstack-nova-api.service holdoff time over, scheduling restart.
Feb :: localhost systemd: Starting OpenStack Nova API Server...

Since this is a permission problem, and based on the error log, I changed the owner/group and the read-write permissions of /var/log/nova and /var/lib/nova.

 [root@node0 log]# chown -R nova:nova /var/lib/nova
[root@node0 log]#
[root@node0 log]# chown -R nova:nova /var/log/nova
[root@node0 log]#
[root@node0 log]# chmod -R 755 /var/lib/nova
[root@node0 log]# chmod -R 755 /var/log/nova

Then add the following to the [DEFAULT] section of /etc/nova/nova.conf:

 state_path=/var/lib/nova
keys_path=$state_path/keys
log_dir=/var/log/nova
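
Before retrying, it is worth confirming that the nova user can actually write to the relocated state directory (a throwaway check, nothing more):

 su -s /bin/sh -c "touch /var/lib/nova/.write_test && rm /var/lib/nova/.write_test" nova && echo "nova can write"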

Finally, start the nova services again; this time they come up fine.

The following steps are performed on the compute node, node1 (192.168.1.110).

c1. Install the nova compute packages

 yum install openstack-nova-compute sysfsutils

c2. Configure /etc/nova/nova.conf

 [DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.1.110
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
state_path = /var/lib/nova  # note: these three lines are not in the official install guide
keys_path = $state_path/keys
log_dir = /var/log/nova

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = openstack

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://node0:6080/vnc_auto.html

[glance]
host = node0

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

c3. Configure the virtualization type and start the services

 [root@node1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo    # checks whether the compute host supports hardware virtualization; a return value of 0 means it does not, and full software virtualization is required
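
For reference, had the count come back 0, the fix would be to force software virtualization in the [libvirt] section of the compute node's /etc/nova/nova.conf (this is the standard fallback):

 [libvirt]
virt_type = qemu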

Since my server supports hardware virtualization, I keep the default virt_type of kvm. Next, start libvirt and the nova compute service.

 systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

Uh oh: [root@node1 opt]# systemctl start openstack-nova-compute.service hangs and never returns. Checking the log /var/log/nova/nova-compute.log reveals the following errors:

 -- ::29.620  INFO oslo_service.periodic_task [-] Skipping periodic task _periodic_update_dns because its interval is negative
-- ::29.669 INFO nova.virt.driver [-] Loading compute driver 'libvirt.LibvirtDriver'
-- ::29.742 INFO oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] Connecting to AMQP server on 192.168.1.100:5672
-- ::29.752 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 1 seconds.
-- ::30.769 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
-- ::32.789 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
-- ::34.808 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
-- ::36.826 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.
-- ::38.845 ERROR oslo.messaging._drivers.impl_rabbit [req-ba82b667-e62b-40d1-a549-07f211ee6d4a - - - - -] AMQP server on 192.168.1.100:5672 is unreachable: [Errno 113] EHOSTUNREACH. Trying again in 2 seconds.

Analysis: the compute node can ping the controller, so routing is not the problem. But a telnet from the compute node to the controller fails, so the port is blocked, and blocked ports immediately suggest a firewall. A port scan of the controller node0 from the compute node node1 (below) shows only port 22 open, which further confirms the firewall theory.

 [root@node1 ~]# nmap 192.168.1.100

 Starting Nmap 6.40 ( http://nmap.org ) at 2016-02-04 11:38 CST
Nmap scan report for node0 (192.168.1.100)
Host is up (0.00027s latency).
Not shown: 999 filtered ports
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: :::F0:C3: (Dell)

iptables turns out to be disabled, so why are ports still blocked? Then it hit me: this is CentOS 7, which ships the additional firewalld service; shut that down and all is well.
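
For the record, the commands involved (I simply stopped firewalld for this lab; opening just the AMQP port would be the more surgical alternative):

 systemctl stop firewalld.service
systemctl disable firewalld.service
# alternative: keep firewalld but open the rabbitmq port
# firewall-cmd --permanent --add-port=5672/tcp && firewall-cmd --reload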

Then check on the controller node node0 whether the services are OK:

 [root@node0 log]# nova service-list
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| | nova-consoleauth | node0 | internal | enabled | up | --04T03::50.000000 | - |
| | nova-conductor | node0 | internal | enabled | up | --04T03::50.000000 | - |
| | nova-scheduler | node0 | internal | enabled | up | --04T03::51.000000 | - |
| | nova-compute | node1 | nova | enabled | up | --04T03::43.000000 | - |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+

The cert service is missing; something is wrong again. Could this be related to the earlier failure of the nova services on the controller, i.e. the /var/log/nova and /var/lib/nova permission problem? Let's confirm:

 [root@node0 log]# systemctl status openstack-nova-cert.service
● openstack-nova-cert.service - OpenStack Nova Cert Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 1h 13min ago
Main PID: (nova-cert)
CGroup: /system.slice/openstack-nova-cert.service
└─ /usr/bin/python2 /usr/bin/nova-cert

Next, check /var/log/messages for problems:

 Feb  :: node0 nova-cert[]: crypto.ensure_ca_filesystem()
Feb :: node0 nova-cert[]: File "/usr/lib/python2.7/site-packages/nova/crypto.py", line , in ensure_ca_filesystem
Feb :: node0 nova-cert[]: fileutils.ensure_tree(ca_dir)
Feb :: node0 nova-cert[]: File "/usr/lib/python2.7/site-packages/oslo_utils/fileutils.py", line , in ensure_tree
Feb :: node0 nova-cert[]: os.makedirs(path, mode)
Feb :: node0 nova-cert[]: File "/usr/lib64/python2.7/os.py", line , in makedirs
Feb :: node0 nova-cert[]: mkdir(name, mode)
Feb :: node0 nova-cert[]: OSError: [Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/CA'
Feb :: node0 systemd[]: Started OpenStack Nova Cert Server.
Feb :: node0 systemd[]: Started OpenStack Nova Cert Server.

Sure enough, it is the same error. With state_path now pointing at /var/lib/nova, the CA directory lands somewhere nova can write, so restarting the cert service should fix it. As the output below shows, it is now OK:

 [root@node0 log]# systemctl restart openstack-nova-cert.service
[root@node0 log]#
[root@node0 log]#
[root@node0 log]#
[root@node0 log]# systemctl status openstack-nova-cert.service
● openstack-nova-cert.service - OpenStack Nova Cert Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; enabled; vendor preset: disabled)
Active: active (running) since Thu -- :: CST; 2s ago
Main PID: (nova-cert)
CGroup: /system.slice/openstack-nova-cert.service
└─ /usr/bin/python2 /usr/bin/nova-cert
 [root@node0 log]# nova service-list
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| | nova-consoleauth | node0 | internal | enabled | up | --04T03::10.000000 | - |
| | nova-conductor | node0 | internal | enabled | up | --04T03::10.000000 | - |
| | nova-scheduler | node0 | internal | enabled | up | --04T03::11.000000 | - |
| | nova-compute | node1 | nova | enabled | up | --04T03::13.000000 | - |
| | nova-cert | node0 | internal | enabled | up | --04T03::10.000000 | - |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+

At this point, the nova compute service is fully installed and configured.

Now for the installation and configuration of neutron, the networking component. This again starts on the controller node node0.

n1. Create the database and grant privileges

 mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';

n2. Set up the environment (CLI)

 source admin-openrc.sh

n3. Create the user and assign the role

 openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

n4. Create the service and endpoints

 openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://node0:9696
openstack endpoint create --region RegionOne network internal http://node0:9696
openstack endpoint create --region RegionOne network admin http://node0:9696

n5. Install the networking components. (My network configuration follows option 2, self-service networks. Since only one jump host in my network can reach the outside and there is just a single public IP, giving VMs external IPs would ultimately fail.)

 yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset

n6. Configure /etc/neutron/neutron.conf

 [DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://node0:8774/v2
verbose = True

[database]
connection = mysql://neutron:openstack@node0/neutron

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack

[nova]
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = openstack

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

n7. Configure /etc/neutron/plugins/ml2/ml2_conf.ini

 [ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

n8. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini

 [linux_bridge]
physical_interface_mappings = public:em1

[vxlan]
enable_vxlan = True
local_ip = 192.168.1.100
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

n9. Configure the L3 agent: /etc/neutron/l3_agent.ini

 [DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
verbose = True

n10. Configure the DHCP agent: /etc/neutron/dhcp_agent.ini

 [DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

The file dnsmasq-neutron.conf may not exist yet; in that case create it yourself. Its content is as follows (DHCP option 26 lowers the instance MTU to 1450 to leave room for VXLAN encapsulation overhead):

 dhcp-option-force=26,1450

n11. Configure the metadata agent: /etc/neutron/metadata_agent.ini

 [DEFAULT]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
nova_metadata_ip = node0
metadata_proxy_shared_secret = openstack
verbose = True

n12. Configure nova to use the network service: /etc/nova/nova.conf

 [neutron]
url = http://node0:9696
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack
service_metadata_proxy = True
metadata_proxy_shared_secret = openstack

n13. Create the plugin symlink.

The networking service initialization expects a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.

 ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

n14. Sync the database. (Note: this step takes a while; it took at least a minute here.)

 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
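
Since neutron-db-manage is alembic-based, the applied revision can be checked afterwards with the same config files (a hedged verification step, not from the official guide):

 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron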

n15. Enable and start the networking services

 systemctl restart openstack-nova-api.service    # restart the nova api service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# since my configuration uses option 2 (self-service networks), the L3 agent is needed as well
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

Next, configure the networking pieces on the compute node, node1.

nc1. Install the components

 yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset

nc2. Configure the common components: /etc/neutron/neutron.conf

 [DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
verbose = True

[oslo_messaging_rabbit]
rabbit_host = node0
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri = http://node0:5000
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

nc3. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini

 [linux_bridge]
physical_interface_mappings = public:em1

[vxlan]
enable_vxlan = True
local_ip = 192.168.1.110
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

nc4. Configure the compute node to use the network service: /etc/nova/nova.conf

 [neutron]
url = http://node0:9696
auth_url = http://node0:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = openstack

nc5. Start the services

 systemctl restart openstack-nova-compute.service

 systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

nc6. Verify which networking extensions are installed. (Running this on the controller or on the compute node gives the same result; the reason is simple: the CLI just queries the same neutron-server API on the controller either way.)

 [root@node1 opt]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| dns-integration | DNS Integration |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| extraroute | Neutron Extra Route |
| router | Neutron L3 Router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| rbac-policies | RBAC Policies |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+

Verify which network agents are up:

 [root@node1 opt]# neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+
| 6040557f--483b--2e8578614935 | L3 agent | node0 | :-) | True | neutron-l3-agent |
| 7eac038e-9daf-4ffa--537b148151bf | Linux bridge agent | node0 | :-) | True | neutron-linuxbridge-agent |
| 82be88ad-e273-405d-ac59-57eba50861c8 | DHCP agent | node0 | :-) | True | neutron-dhcp-agent |
| b0b1d65c--48e9-a4a1-6308289dbd25 | Metadata agent | node0 | :-) | True | neutron-metadata-agent |
| d615a741-bdb8-40f4-82c2-ea0b9da07bb8 | Linux bridge agent | node1 | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+-------+-------+----------------+---------------------------+

With that, the simple-version software installation is complete, and instances can now be created from the command line.

d1. Configure a keypair.

node0 already has an ssh key pair, so I reuse the existing key when creating VMs. The main task is simply to register the public key with nova.
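
(If no key pair exists yet, one can be generated first; the official guide does this with:)

 ssh-keygen -q -N ""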

 nova keypair-add --pub-key ~/.ssh/id_rsa.pub hellokey

Check the registered key:

 [root@node0 opt]# nova keypair-list
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| hellokey | :a5:5e::f8:b2:a0:9d:ec:5d:db:b6:7a:b0:e5:cc |
+----------+-------------------------------------------------+

d2. Add security group rules.

Permit ICMP (ping) and ssh:

 nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Review the rules just added:

 [root@node0 opt]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Finally, create an instance from the command line.

d3. Create the virtual networks

Based on the earlier configuration, the intent was for the deployment to support both a public network and a private network. However, since each server has only one interface connected to the switch, supporting a private management network would require an extra NIC connected to an external switch. So although the configuration is set up for both public and private networks, only the public virtual network is created here. (Even if the private network were created, the data-path setup would fail later; testing showed the private network is broken in this topology. Tweaking the configuration should fix it, but that demonstration is not the point here, so I did not spend the time.)

The public network topology is shown below:

[Figure: public network topology]

 neutron net-create public --shared --provider:physical_network public --provider:network_type flat

The parameters in the command above come from the earlier configuration files, /etc/neutron/plugins/ml2/ml2_conf.ini and /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

 [ml2_type_flat]
flat_networks = public

[linux_bridge]
physical_interface_mappings = public:em1

Next, create the public subnet, specifying the allocation range (start and end IPs), the DNS server, and the gateway.

 [root@node0 opt]# neutron subnet-create public 192.168.1.0/24 --name public --allocation-pool start=192.168.1.140,end=192.168.1.254 --dns-nameserver 219.141.136.10 --gateway 192.168.1.1

d4. Resource information

 [root@node0 opt]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 [root@node0 opt]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+--------+--------+--------+
| 686bec85-fe90-4aea--b6f7cc8ed686 | cirros | ACTIVE | |
+--------------------------------------+--------+--------+--------+
 [root@node0 opt]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 422a7cbd-46b0-474f-a09b-             | private | 95b1f1a1-0e09-4fc2-8d6c-3ea8bf6e6c4b 172.16.1.0/24   |
| ceb43e9a-69c1--bca2-082801bfe34f     | public  | 855aa082-0c19-465c-a622-418a8f7b8a4d 192.168.1.0/24  |
+--------------------------------------+---------+------------------------------------------------------+

A note on the net-list output above: there is a private network because my configuration follows the self-service option, which supports both public and tenant-private networks, so a vxlan tenant network was created as well. But because of the physical connectivity and gateway constraints, the private network did not work in testing. The problem is fixable, just inconvenient in this lab topology, so I left it alone.

Security groups: only the default group exists; this is the group the ICMP and ssh rules were added to earlier.

 [root@node0 opt]# nova secgroup-list
+--------------------------------------+---------+------------------------+
| Id | Name | Description |
+--------------------------------------+---------+------------------------+
| 844c51a3-09e9-41a0-bcd6-a6f7c3cffa56 | default | Default security group |
+--------------------------------------+---------+------------------------+

d5. Create an instance.

 [root@node0 nova]# nova boot --flavor m1.tiny --image cirros --nic net-id=ceb43e9a-69c1--bca2-082801bfe34f --security-group default --key-name hellokey public-instance
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance- |
| OS-EXT-STS:power_state | |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | JCtWSicR3Z9B |
| config_drive | |
| created | --04T06::13Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 2830bb7e--46d2-8a0a-1c329bcb39f8 |
| image | cirros (686bec85-fe90-4aea--b6f7cc8ed686) |
| key_name | hellokey |
| metadata | {} |
| name | public-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | |
| security_groups | default |
| status | BUILD |
| tenant_id | c6669377868c438f8a81cc234f85338f |
| updated | --04T06::14Z |
| user_id | 34b11c08da3b4c2ebfd6ac3203768bc4 |
+--------------------------------------+-----------------------------------------------+

Now look at the created instance. Creation is very fast, on the order of seconds.

 [root@node0 nova]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+
| 2830bb7e--46d2-8a0a-1c329bcb39f8 | public-instance | ACTIVE | - | Running | public=192.168.1.142 |
+--------------------------------------+-----------------+--------+------------+-------------+----------------------+

d6. Access the new instance

 [root@node0 opt]# nova get-vnc-console public-instance novnc
+-------+----------------------------------------------------------------------------+
| Type | Url |
+-------+----------------------------------------------------------------------------+
| novnc | http://node0:6080/vnc_auto.html?token=9ab57a02-272c-46cc-b8b1-3a350467e679 |
+-------+----------------------------------------------------------------------------+

In a browser, open http://tzj_IP:6080/vnc_auto.html?token=9ab57a02-272c-46cc-b8b1-3a350467e679, where tzj_IP is the IP of the intermediate jump host shown in part 1. The browser then shows the following:

[Figure: noVNC console of the instance in the browser]

In the figure above, marker 1 is the button that starts the VNC session and brings up the VM's login screen, and marker 2 shows the login hint: the default username is cirros and the password is cubswin:)

For reference, here are the DNAT rules I added to the nat table of the jump host's firewall:

 [root@fedora1 ~]# iptables -t nat -A PREROUTING -p tcp --dport 6080 -d tzj_IP -j DNAT --to-destination 192.168.1.100:6080
[root@fedora1 ~]# iptables -t nat -A PREROUTING -p udp --dport 6080 -d tzj_IP -j DNAT --to-destination 192.168.1.100:6080
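
The rules can be double-checked on the jump host with a standard listing:

 iptables -t nat -nL PREROUTING --line-numbers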

And here is what an ssh login to the instance looks like:

 [root@node0 opt]# ssh cirros@192.168.1.142
$ ls
$ pwd
/home/cirros
$

At this point, the simple-version software install for the lab platform is done: instances can be created, and the resulting VMs can be accessed just like physical machines.

To make VM creation more convenient, similar to AWS EC2, the next step is to install the dashboard. This is straightforward and is done on the controller node node0.

d1. Install the dashboard

 yum install openstack-dashboard

d2. Edit and configure /etc/openstack-dashboard/local_settings

 OPENSTACK_HOST = "node0" #配置dashboard访问openstack的service
ALLOWED_HOSTS = ['*', ] #允许所有的机器都可以访问dashboard #配置memcached的session storage service
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
} #配置通过dashboard创建的用户默认为user角色
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True #配置API service的版本号,允许用户通过keystone v3的API登录dashboard
OPENSTACK_API_VERSIONS = {
"identity": ,
"volume": ,
} #配置时区
TIME_ZONE = "Asia/Chongqing"

d3. Start the dashboard (really just a matter of restarting apache)

 systemctl enable httpd.service memcached.service
systemctl restart httpd.service memcached.service
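
A quick smoke test from any machine that can reach node0 (a 200 or a redirect to the login page both mean the dashboard is serving):

 curl -s -o /dev/null -w "%{http_code}\n" http://node0/dashboard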

d4. Verify web access to the dashboard at http://node0/dashboard

Since my lab sits behind the jump host and all servers have internal IPs, DNAT must be configured once more:

 [root@fedora1 ~]# iptables -t nat -A PREROUTING -p tcp --dport 80 -d tzj_IP -j DNAT --to-destination 192.168.1.100:80

To wrap up this build log, here are a few screenshots.

[Figure: dashboard login page]

The figure above is the login page.

[Figure: instance list]

The figure above lists the instances of the current admin user.

[Figure: keypair list]

The figure above shows the current keypair list; there is only hellokey.

[Figure: launch-instance dialog]

The figure above is the dashboard's VM creation dialog; anyone with AWS EC2 experience will find it all very familiar.
