OpenStack Havana Deployment on Ubuntu 12.04 Server [OVS+GRE] (Part 1): Installing the Controller Node

 

Series overview: OpenStack Havana Deployment on Ubuntu 12.04 Server [OVS+GRE]

Controller node:

1. Prepare Ubuntu
  • After installing Ubuntu 12.04 Server 64-bit, switch to root for the rest of the setup.

sudo su - 
  • Add the Havana repository:
#apt-get install python-software-properties
#add-apt-repository cloud-archive:havana
  • Upgrade the system:
#apt-get update
#apt-get upgrade
#apt-get dist-upgrade
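  • (Optional) Confirm that packages will now be pulled from the Havana cloud archive; one quick way is to check the candidate source of a core package (the grep pattern is just an illustration):
#apt-cache policy keystone | grep -i havana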
 
 
2. Configure the network
  • Configure the network interfaces by editing /etc/network/interfaces:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.122.2
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

auto eth0:1
iface eth0:1 inet static
address 10.10.10.2
netmask 255.255.255.0

  • Restart the networking service:
#/etc/init.d/networking restart
  • Enable IP forwarding:

 
sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
sysctl -p
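  • You can verify that forwarding is now enabled (it should print net.ipv4.ip_forward = 1):
sysctl net.ipv4.ip_forward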
 
3. Install MySQL
  • Install MySQL and set a password for the root user:
#apt-get install mysql-server python-mysqldb
  • Configure MySQL to listen for connections on all interfaces:
#sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
#service mysql restart
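  • (Optional) Confirm that MySQL is now listening on all interfaces rather than only on 127.0.0.1:
#netstat -ntlp | grep 3306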
4. Install RabbitMQ and NTP
  • Install RabbitMQ:
#apt-get install rabbitmq-server
  • Install the NTP service:
#apt-get install ntp
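  • Once NTP is running, a quick check that it is synchronizing with its upstream servers:
#ntpq -p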
 
 
5. Create the databases
  • Create a database and user for each service:
 
#mysql -u root -p

#Keystone
CREATE DATABASE keystone;
GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
#Glance
CREATE DATABASE glance;
GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
#Neutron
CREATE DATABASE neutron;
GRANT ALL ON neutron.* TO 'neutronUser'@'%' IDENTIFIED BY 'neutronPass';
#Nova
CREATE DATABASE nova;
GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
#Cinder
CREATE DATABASE cinder;
GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
#Swift
CREATE DATABASE swift;
GRANT ALL ON swift.* TO 'swiftUser'@'%' IDENTIFIED BY 'swiftPass';
quit;
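  • (Optional) Verify the grants by connecting as one of the new accounts over the management address, shown here for the Keystone user; the other accounts can be tested the same way:
#mysql -h 10.10.10.2 -u keystoneUser -pkeystonePass -e 'SHOW DATABASES;'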
 
6. Configure Keystone
  • Install the Keystone package:

apt-get install keystone
  • Point /etc/keystone/keystone.conf at the newly created database:

connection = mysql://keystoneUser:keystonePass@10.10.10.2/keystone
  • Restart the Identity service and sync the database:

service keystone restart
keystone-manage db_sync
  • The next step takes a little more care: run the script below to create the tenants, users, roles, services, and endpoints in one pass. The script assumes SERVICE_TOKEN="admin", so make sure the default setting in /etc/keystone/keystone.conf is changed to "admin_token = admin" before running it.
#!/bin/sh
#
# Keystone Data
#
# Description: Fill Keystone with data.
# Please set the IP address and password variables below to match your environment.
ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
SERVICE_PASSWORD=${SERVICE_PASSWORD:-admin}
export SERVICE_TOKEN="admin" # NOTICE! this must match admin_token in /etc/keystone/keystone.conf
export SERVICE_ENDPOINT="http://10.10.10.2:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service}
KEYSTONE_REGION=RegionOne
# If you expose Keystone and Swift on a separate network (multi-node
# architecture with a Swift service), uncomment KEYSTONE_WLAN_IP and
# SWIFT_WLAN_IP below and set them to the corresponding IP addresses.
KEYSTONE_IP="10.10.10.2"
EXT_HOST_IP="192.168.122.2"
#KEYSTONE_WLAN_IP="172.16.0.254"
SWIFT_IP="10.10.10.2"
#SWIFT_WLAN_IP="172.16.0.254"
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
NEUTRON_IP=$KEYSTONE_IP
CEILOMETER_IP=$KEYSTONE_IP

# Helper: extract the id column from keystone client output
get_id () {
echo `$@ | awk '/ id / { print $4 }'`
}

# Create Tenants
ADMIN_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=admin)
SERVICE_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=$SERVICE_TENANT_NAME)
DEMO_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=demo)
INVIS_TENANT=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT tenant-create --name=invisible_to_admin)

# Create Users
ADMIN_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=admin --pass="$ADMIN_PASSWORD" --email=admin@domain.com)
DEMO_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=demo --pass="$ADMIN_PASSWORD" --email=demo@domain.com)

# Create Roles
ADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=admin)
KEYSTONEADMIN_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneAdmin)
KEYSTONESERVICE_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=KeystoneServiceAdmin)

# Add Roles to Users in Tenants
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONEADMIN_ROLE --tenant-id $ADMIN_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $ADMIN_USER --role-id $KEYSTONESERVICE_ROLE --tenant-id $ADMIN_TENANT

# The Member role is used by Horizon and Swift
MEMBER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=Member)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $DEMO_TENANT
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --user-id $DEMO_USER --role-id $MEMBER_ROLE --tenant-id $INVIS_TENANT

# Configure service users/roles
NOVA_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=nova --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=nova@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $ADMIN_ROLE

GLANCE_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=glance --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=glance@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $GLANCE_USER --role-id $ADMIN_ROLE

SWIFT_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=swift --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=swift@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $SWIFT_USER --role-id $ADMIN_ROLE

RESELLER_ROLE=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT role-create --name=ResellerAdmin)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NOVA_USER --role-id $RESELLER_ROLE

NEUTRON_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=neutron --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=neutron@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $NEUTRON_USER --role-id $ADMIN_ROLE

CINDER_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=cinder --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=cinder@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $CINDER_USER --role-id $ADMIN_ROLE

CEILOMETER_USER=$(get_id keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-create --name=ceilometer --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=ceilometer@domain.com)
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT user-role-add --tenant-id $SERVICE_TENANT --user-id $CEILOMETER_USER --role-id $ADMIN_ROLE

## Create Service
KEYSTONE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name keystone --type identity --description 'OpenStack Identity' | awk '/ id / { print $4 }')
COMPUTE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=nova --type=compute --description='OpenStack Compute Service' | awk '/ id / { print $4 }')
CINDER_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=cinder --type=volume --description='OpenStack Volume Service' | awk '/ id / { print $4 }')
GLANCE_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=glance --type=image --description='OpenStack Image Service' | awk '/ id / { print $4 }')
SWIFT_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=swift --type=object-store --description='OpenStack Storage Service' | awk '/ id / { print $4 }')
EC2_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=ec2 --type=ec2 --description='OpenStack EC2 service' | awk '/ id / { print $4 }')
NEUTRON_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=neutron --type=network --description='OpenStack Networking service' | awk '/ id / { print $4 }')
CEILOMETER_ID=$(keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT service-create --name=ceilometer --type=metering --description='Ceilometer Metering Service' | awk '/ id / { print $4 }')

## Create Endpoint
#identity
if [ "$KEYSTONE_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$EXT_HOST_IP":5000/v2.0 --adminurl http://"$KEYSTONE_WLAN_IP":35357/v2.0 --internalurl http://"$KEYSTONE_WLAN_IP":5000/v2.0
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$KEYSTONE_ID --publicurl http://"$EXT_HOST_IP":5000/v2.0 --adminurl http://"$KEYSTONE_IP":35357/v2.0 --internalurl http://"$KEYSTONE_IP":5000/v2.0
#compute
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$COMPUTE_ID --publicurl http://"$EXT_HOST_IP":8774/v2/%\(tenant_id\)s --adminurl http://"$COMPUTE_IP":8774/v2/%\(tenant_id\)s --internalurl http://"$COMPUTE_IP":8774/v2/%\(tenant_id\)s
#volume
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$CINDER_ID --publicurl http://"$EXT_HOST_IP":8776/v1/%\(tenant_id\)s --adminurl http://"$VOLUME_IP":8776/v1/%\(tenant_id\)s --internalurl http://"$VOLUME_IP":8776/v1/%\(tenant_id\)s
#image
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$GLANCE_ID --publicurl http://"$EXT_HOST_IP":9292/v2 --adminurl http://"$GLANCE_IP":9292/v2 --internalurl http://"$GLANCE_IP":9292/v2
#object-store
if [ "$SWIFT_WLAN_IP" != '' ];then
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$EXT_HOST_IP":8080/v1/AUTH_%\(tenant_id\)s --adminurl http://"$SWIFT_WLAN_IP":8080/v1 --internalurl http://"$SWIFT_WLAN_IP":8080/v1/AUTH_%\(tenant_id\)s
fi
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$SWIFT_ID --publicurl http://"$EXT_HOST_IP":8080/v1/AUTH_%\(tenant_id\)s --adminurl http://"$SWIFT_IP":8080/v1 --internalurl http://"$SWIFT_IP":8080/v1/AUTH_%\(tenant_id\)s
#ec2
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$EC2_ID --publicurl http://"$EXT_HOST_IP":8773/services/Cloud --adminurl http://"$EC2_IP":8773/services/Admin --internalurl http://"$EC2_IP":8773/services/Cloud
#network
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$NEUTRON_ID --publicurl http://"$EXT_HOST_IP":9696/ --adminurl http://"$NEUTRON_IP":9696/ --internalurl http://"$NEUTRON_IP":9696/
#ceilometer
keystone --token $SERVICE_TOKEN --endpoint $SERVICE_ENDPOINT endpoint-create --region $KEYSTONE_REGION --service-id=$CEILOMETER_ID --publicurl http://"$EXT_HOST_IP":8777/ --adminurl http://"$CEILOMETER_IP":8777/ --internalurl http://"$CEILOMETER_IP":8777/
  • The script above populates the Keystone database; adjust the variables noted at the top (IP addresses, passwords, e-mail addresses) to your own environment.
  • Create a simple credentials file so that you do not have to keep typing environment variables later on.
vim env-set

export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://192.168.122.2:5000/v2.0/"

source env-set
  • List the users added to Keystone and fetch a token to confirm that the setup works:

keystone user-list
keystone token-get
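  • It is also worth listing the services and endpoints registered by the script to make sure nothing was skipped:
keystone service-list
keystone endpoint-list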
7. Configure Glance
  • Install Glance:

apt-get install glance
  • Update /etc/glance/glance-api-paste.ini:
 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = admin
 
  • Update /etc/glance/glance-registry-paste.ini:
 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = admin
 
  • Update /etc/glance/glance-api.conf:
sql_connection = mysql://glanceUser:glancePass@10.10.10.2/glance

[paste_deploy]
flavor = keystone
  • Update /etc/glance/glance-registry.conf:
sql_connection = mysql://glanceUser:glancePass@10.10.10.2/glance
notifier_strategy=rabbit
rabbit_host=10.10.10.2

[paste_deploy]
flavor = keystone
  • Restart the Glance services:
cd /etc/init.d/;for i in $( ls glance-* );do service $i restart;done
  • Sync the Glance database:
glance-manage db_sync
  • Test Glance by uploading a CirrOS image:
mkdir images
cd images
wget http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
glance image-create --name="Cirros 0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true <cirros-0.3.1-x86_64-disk.img
  • List the images to check that the upload succeeded:
glance image-list
8. Configure Neutron
  • Install the Neutron server component:

Note: the Keystone script above registers the Neutron API endpoint (port 9696) on the controller node, so neutron-server must be installed on the controller node as well.

apt-get install neutron-server
  • Edit /etc/neutron/api-paste.ini:

 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = admin
 
  • Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

 
[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True

# Firewall driver for realizing the neutron security group function
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 
  • Edit /etc/neutron/neutron.conf:

 
 
[database]
connection = mysql://neutronUser:neutronPass@10.10.10.2/neutron

[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = admin
signing_dir = /var/lib/neutron/keystone-signing
 
  • Restart all Neutron services:
cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done
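  • As a quick smoke test, the Neutron API should now answer requests (the list will simply be empty at this point; this assumes the neutron CLI from python-neutronclient is present and the env-set credentials are loaded):
neutron net-list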
9. Configure Nova
  • Install the Nova components:
apt-get install  nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor nova-ajax-console-proxy 
  • Edit /etc/nova/api-paste.ini to update the authentication settings:
 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = admin
signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0
 
  • Edit /etc/nova/nova.conf:
[DEFAULT]
# This file is configuration of nova
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
nova_url=http://10.10.10.2:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.10.10.2/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
# The availability zone can be set as needed; it only takes effect on the node running nova-api.
#default_availability_zone=fbw

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Rabbit MQ
my_ip=10.10.10.2
rabbit_host=10.10.10.2
rpc_backend = nova.rpc.impl_kombu

# Imaging service
glance_api_servers=10.10.10.2:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://192.168.122.2:6080/vnc_auto.html
novncproxy_port=6080
# different on every node
vncserver_proxyclient_address=10.10.10.2
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://10.10.10.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=admin
neutron_admin_auth_url=http://10.10.10.2:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
# If you want Neutron + Nova security groups
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
# If you want Nova security groups only, comment the two lines above and uncomment the line below.
#firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

# Metadata
#metadata_host = 10.10.10.2
#metadata_listen = 0.0.0.0
#metadata_listen_port = 8775
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = helloOpenStack

# Compute
compute_driver=libvirt.LibvirtDriver

# Cinder
volume_api_class=nova.volume.cinder.API
osapi_volume_listen_port=5900

# Migration
image_cache_manager_interval=0
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_UNSAFE
  • Sync the database:
nova-manage db sync
  • Restart all Nova services:
cd /etc/init.d/; for i in $( ls nova-* ); do  service $i restart; done
  • Check that all Nova services came up correctly:
nova-manage service list
10. Configure Cinder
  • Install the Cinder packages:
apt-get install  cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
  • Configure the iSCSI service:
sed -i 's/false/true/g' /etc/default/iscsitarget
  • Start the services:
service iscsitarget start
service open-iscsi start
  • Configure /etc/cinder/api-paste.ini:
 
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.122.2
service_port = 5000
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = admin
 
  • Edit /etc/cinder/cinder.conf:
 
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinderUser:cinderPass@10.10.10.2/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=ietadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
control_exchange=cinder
notification_driver=cinder.openstack.common.notifier.rpc_notifier
 
  • Next, sync the database:
cinder-manage db sync
  • Finally, create a volume group named cinder-volumes. Cinder uses LVM2 to provide block storage: on a physical machine you can build it on a dedicated disk, and on a virtual machine you can attach an extra qcow2 disk through the libvirt XML. To keep things simple here, we use a plain file to emulate a block device.
dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=5G  # create a 5 GB sparse file, here under /home/cloud
losetup /dev/loop2 cinder-volumes
fdisk /dev/loop2
#Type in the following keys (a non-interactive equivalent is sketched after this sequence):
n
p
1
ENTER
ENTER
t
8e
w
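  • If you prefer to script the partitioning instead of typing the keys interactively, something along these lines should give the same result (a sketch; verify it against your fdisk version before relying on it):
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/loop2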
  • Create the physical volume and the volume group:
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2
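  • Verify that the volume group was created:
vgdisplay cinder-volumes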
  • Note: the loop device (and hence the volume group) will not come back automatically after a reboot; set up an Upstart job as follows:
vim /etc/init/losetup.conf

description "set up loop devices"
start on mounted MOUNTPOINT=/
task
exec /sbin/losetup /dev/loop2 /home/cloud/cinder-volumes
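  • After the next reboot you can confirm that the loop device and the volume group came back:
losetup -a
vgs cinder-volumes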
  • Restart the Cinder services:
cd /etc/init.d/; for i in $( ls cinder-* ); do service $i restart; done
  • Confirm that the Cinder services are running:
cd /etc/init.d/; for i in $( ls cinder-* ); do service $i status; done
11. Configure Ceilometer
  • Install the Metering services:
apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central python-ceilometerclient
  • Install the MongoDB database:
apt-get install mongodb
  • Configure MongoDB to listen on all network interfaces:
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mongodb.conf

service mongodb restart
  • Create the ceilometer database user:
#mongo
>use ceilometer
>db.addUser({ user:"ceilometer",pwd:"CEILOMETER_DBPASS",roles:["readWrite","dbAdmin"]})
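  • (Optional) Check that the new account works by reconnecting with authentication from the shell:
mongo 10.10.10.2/ceilometer -u ceilometer -p CEILOMETER_DBPASS --eval "printjson(db.stats())"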
  • Configure the Metering service to use the database, in /etc/ceilometer/ceilometer.conf:
 
...
[database]
...
# The SQLAlchemy connection string used to connect to the
# database (string value)
connection = mongodb://ceilometer:CEILOMETER_DBPASS@10.10.10.2:27017/ceilometer
...
 
  • Configure RabbitMQ access, also in /etc/ceilometer/ceilometer.conf:
...
[DEFAULT]
rabbit_host = 10.10.10.2
  • Edit /etc/ceilometer/ceilometer.conf so that the authentication credentials take effect:
 
[keystone_authtoken]
auth_host = 10.10.10.2
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = ceilometer
admin_password = admin
 
  • Restart the services so the configuration takes effect:
cd /etc/init.d;for i in $( ls ceilometer-* );do service $i restart;done
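  • With the admin credentials from env-set loaded, the metering API should now respond (the meter list may stay empty until the agents start reporting samples):
ceilometer meter-list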

    That wraps up the controller node; the next post covers the installation of the network node.

          
