When nova volume-attach instance_uuid volume_uuid is executed, the main flow is as follows.
The storage backend used here is LVM + iSCSI.
1. The nova client parses the command line and calls nova-api through its RESTful API.
The nova-api endpoint and request body have the following form:
POST /servers/{server_id}/os-volume_attachments
Request body:
{
"volumeAttachment": {
"volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
"device": "/dev/vdd"
}
}
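The request above can be reproduced with a small client-side sketch. The endpoint host, port, and server UUID below are placeholders, not values from a real deployment; a real client would also need a Keystone token:

```python
import json

# Placeholder values -- substitute a real server UUID and endpoint.
server_id = "a-server-uuid"
url = ("http://nova-api:8774/v2.1/servers/%s/os-volume_attachments"
       % server_id)

payload = {
    "volumeAttachment": {
        "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
        "device": "/dev/vdd",
    }
}
body = json.dumps(payload)
# A real client would now POST `body` to `url` with an
# X-Auth-Token header and Content-Type: application/json.
print(body)
```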
2. The nova-api entry point for attaching a volume is nova/api/openstack/compute/volumes.py; the controller is VolumeAttachmentController and the attach method is create.
This method extracts the volume_uuid and the device name (under which the volume will appear in the guest) from the request body, looks up the instance in the instances table by instance_id,
and finally calls the attach_volume method of the api module in the compute directory.
def create(self, req, server_id, body):
    ...
    volume_id = body['volumeAttachment']['volumeId']
    device = body['volumeAttachment'].get('device')
    instance = common.get_instance(self.compute_api, context, server_id)
    if instance.vm_state in (vm_states.SHELVED,
                             vm_states.SHELVED_OFFLOADED):
        _check_request_version(req, '2.20', 'attach_volume',
                               server_id, instance.vm_state)
    try:
        device = self.compute_api.attach_volume(context, instance,
                                                volume_id, device)
This in turn calls:
nova/compute/api.py:API.attach_volume
self._attach_volume(context, instance, volume_id, device, disk_bus, device_type)
nova/compute/api.py:API._attach_volume
step 1:volume_bdm = self._create_volume_bdm
step 2:self._check_attach_and_reserve_volume()
step 3:self.compute_rpcapi.attach_volume
step 1: Create the BDM entry, the record mapping the instance to its volume, in the block_device_mapping table.
The BDM is not created on the API node; instead, an RPC request is sent to the nova-compute node hosting the instance, which creates it via compute_rpcapi.reserve_block_device_name.
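A BDM entry for an attached volume looks roughly like the following sketch. The field names follow the block_device_mapping table; the values are purely illustrative:

```python
# Illustrative BDM entry as nova would record it for a volume attach.
bdm_entry = {
    "instance_uuid": "the-instance-uuid",  # VM the volume belongs to
    "volume_id": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
    "source_type": "volume",        # drives BDM driver selection later
    "destination_type": "volume",
    "device_name": "/dev/vdd",      # device name inside the guest
    "boot_index": None,             # not a boot disk
}
print(bdm_entry["source_type"])
```

The source_type field is what step 5 below dispatches on when picking the BDM driver.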
step 2: _check_attach_and_reserve_volume()
Its main work is to call cinderclient to fetch the volume to be attached and check that the instance and the volume are in the same availability zone (AZ).
Finally it updates the volume's status in the cinder database to attaching (volume['status'] = 'attaching'), preventing any other API from using this volume elsewhere.
def _check_attach_and_reserve_volume(self, context, volume_id, instance):
    volume = self.volume_api.get(context, volume_id)
    self.volume_api.check_availability_zone(context, volume,
                                            instance=instance)
    self.volume_api.reserve_volume(context, volume_id)
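The effect of reserve_volume on the cinder side can be sketched as a simple status transition. This is a hypothetical helper illustrating the state change, not cinder's actual code (cinder also handles multiattach and other states):

```python
# Hypothetical sketch: only an 'available' volume may be reserved,
# and reserving it moves it to 'attaching'.
def reserve_volume(volume):
    if volume["status"] != "available":
        raise ValueError("volume is %s, cannot reserve"
                         % volume["status"])
    # 'attaching' blocks other APIs from grabbing the same volume.
    volume["status"] = "attaching"
    return volume

vol = {"id": "a26887c6", "status": "available"}
reserve_volume(vol)
print(vol["status"])  # attaching
```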
step 3: self.compute_rpcapi.attach_volume
nova/compute/rpcapi.py:ComputeAPI.attach_volume
nova-api sends an asynchronous (cast) RPC request to the compute node hosting the instance. Once the nova-compute service receives this RPC request it carries out the rest of the processing; nova-api's part of the task is done.
def attach_volume(self, ctxt, instance, bdm):
    version = '4.0'
    cctxt = self.router.by_instance(ctxt, instance).prepare(
        server=_compute_host(None, instance), version=version)
    cctxt.cast(ctxt, 'attach_volume', instance=instance, bdm=bdm)
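The cast-vs-call distinction can be sketched with toy stand-ins (these classes are illustrative, not oslo.messaging's API):

```python
# Toy sketch of RPC cast (asynchronous) vs call (blocking).
class FakeRPCClient:
    def __init__(self):
        self.sent = []

    def cast(self, ctxt, method, **kwargs):
        # Fire-and-forget: enqueue the message, return immediately,
        # no result comes back to the caller.
        self.sent.append((method, kwargs))
        return None

    def call(self, ctxt, method, **kwargs):
        # Blocking: would wait for the remote side's return value.
        self.sent.append((method, kwargs))
        return "remote-result"

c = FakeRPCClient()
result = c.cast({}, "attach_volume", instance="vm1", bdm="bdm1")
print(result)  # None: nova-api does not wait for nova-compute
```

This is why nova-api can finish its work as soon as the cast is sent.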
step 4: The nova-compute node receives the RPC request; the handler entry point is
the attach_volume method of the ComputeManager class in nova/compute/manager.py:
nova.compute.manager.ComputeManager.attach_volume
step 5: driver_bdm = driver_block_device.convert_volume(bdm)
step 6: self._attach_volume(context, instance, driver_bdm)
step 5: Select the BDM driver based on the source_type of the bdm instance.
Since we are attaching a volume, the bdm created earlier has source_type 'volume', so the driver obtained is DriverVolumeBlockDevice.
nova/virt/block_device.py
def convert_volume(volume_bdm):
    try:
        return convert_all_volumes(volume_bdm)[0]
    except IndexError:
        pass

def convert_all_volumes(*volume_bdms):
    source_volume = convert_volumes(volume_bdms)
    source_snapshot = convert_snapshots(volume_bdms)
    source_image = convert_images(volume_bdms)
    source_blank = convert_blanks(volume_bdms)
    return [vol for vol in
            itertools.chain(source_volume, source_snapshot,
                            source_image, source_blank)]

convert_volumes = functools.partial(_convert_block_devices,
                                    DriverVolumeBlockDevice)
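The dispatch-by-source_type pattern above can be reduced to a minimal runnable sketch. The classes and the _convert_block_devices helper here are toy stand-ins for nova's versions:

```python
import functools

# Toy stand-ins for nova's BDM driver classes.
class DriverVolumeBlockDevice(dict):
    _valid_source = "volume"

class DriverSnapshotBlockDevice(dict):
    _valid_source = "snapshot"

def _convert_block_devices(device_type, bdms):
    # Keep only the BDMs whose source_type matches the driver class,
    # wrapping each one in that driver class.
    return [device_type(bdm) for bdm in bdms
            if bdm.get("source_type") == device_type._valid_source]

convert_volumes = functools.partial(_convert_block_devices,
                                    DriverVolumeBlockDevice)

bdms = [{"source_type": "volume", "volume_id": "v1"},
        {"source_type": "snapshot", "snapshot_id": "s1"}]
drivers = convert_volumes(bdms)
print(type(drivers[0]).__name__)  # DriverVolumeBlockDevice
```

functools.partial pre-binds the driver class, which is exactly how nova builds one converter per source_type.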
step 6: BDM driver attach
self._attach_volume(context, instance, driver_bdm)
nova/compute/manager.py:ComputeManager._attach_volume
bdm.attach(context, instance, self.volume_api, self.driver,
           do_check_attach=False, do_driver_attach=True)
This in turn calls the attach method of DriverVolumeBlockDevice (nova/virt/block_device.py:DriverVolumeBlockDevice.attach), which performs:
step 7: connector = virt_driver.get_volume_connector(instance)
    (even if the volume being attached is a local LVM volume, this connector information is still sent to cinder)
step 8: connection_info = volume_api.initialize_connection(context, volume_id, connector)
step 9: virt_driver.attach_volume(context, connection_info, instance, self['mount_device'], disk_bus=self['disk_bus'], device_type=self['device_type'], encryption=encryption)
step 13: volume_api.attach(context, volume_id, instance.uuid, self['mount_device'], mode=mode)
step 7: This method returns the compute node's IP, operating system type, system architecture, and initiator name, for cinder to use.
connector = virt_driver.get_volume_connector(instance)
Since libvirt is used, virt_driver is the driver in nova/virt/libvirt/driver.py:
nova.virt.libvirt.driver.LibvirtDriver.get_volume_connector
from os_brick.initiator import connector

def get_volume_connector(self, instance):
    root_helper = utils.get_root_helper()
    return connector.get_connector_properties(
        root_helper, CONF.my_block_storage_ip,
        CONF.libvirt.volume_use_multipath,
        enforce_multipath=True,
        host=CONF.host)

def get_connector_properties(root_helper, my_ip, multipath,
                             enforce_multipath, host=None):
    iscsi = ISCSIConnector(root_helper=root_helper)
    fc = linuxfc.LinuxFibreChannel(root_helper=root_helper)
    props = {}
    # my_block_storage_ip from the compute node's nova.conf
    props['ip'] = my_ip
    # compute node hostname
    props['host'] = host if host else socket.gethostname()
    initiator = iscsi.get_initiator()
    if initiator:
        # iSCSI initiator name, read from the compute node's
        # /etc/iscsi/initiatorname.iscsi
        props['initiator'] = initiator
    wwpns = fc.get_fc_wwpns()
    if wwpns:
        props['wwpns'] = wwpns
    wwnns = fc.get_fc_wwnns()
    if wwnns:
        props['wwnns'] = wwnns
    props['multipath'] = (multipath and
                          _check_multipathd_running(root_helper,
                                                    enforce_multipath))
    # compute node system architecture, e.g. x86_64
    props['platform'] = platform.machine()
    # operating system type
    props['os_type'] = sys.platform
    return props
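Stripped of the iSCSI and FC probing (which needs root and the relevant tools), the core connector properties can be assembled locally like this. This is a simplified sketch; my_ip is a placeholder standing in for CONF.my_block_storage_ip:

```python
import platform
import socket
import sys

def get_connector_properties_sketch(my_ip="192.0.2.10"):
    # Simplified connector info: what nova-compute reports to cinder.
    props = {}
    props["ip"] = my_ip                     # my_block_storage_ip placeholder
    props["host"] = socket.gethostname()    # compute node hostname
    props["platform"] = platform.machine()  # e.g. x86_64
    props["os_type"] = sys.platform         # e.g. linux
    return props

props = get_connector_properties_sketch()
print(sorted(props))
```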
step 8: This call invokes the Cinder API's initialize_connection method. Cinder creates the target and the LUN, prepares the authentication information, assembles all of it into a suitable structure, and returns it to nova-compute.
At the same time, a new record holding the connector information is inserted into the volume_attachment table in the cinder database.
connection_info = volume_api.initialize_connection(context, volume_id, connector)
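For an LVM+iSCSI backend, the connection_info cinder returns has roughly the following shape. The IQN, portal, and CHAP credentials here are illustrative values, not output from a real deployment:

```python
# Illustrative connection_info for an iSCSI-exported LVM volume.
connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "target_iqn": "iqn.2010-10.org.openstack:volume-a26887c6",
        "target_portal": "192.0.2.20:3260",  # cinder-volume node
        "target_lun": 0,
        "volume_id": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
        "auth_method": "CHAP",
        "auth_username": "chap-user",        # illustrative credentials
        "auth_password": "chap-secret",
    },
}
print(connection_info["driver_volume_type"])
```

driver_volume_type is what nova uses to pick the right volume driver on the compute side; the data dict is what the iSCSI login in step 10 consumes.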
step 9: Once the compute node has the volume's connection_info, it attaches the volume to the instance.
virt_driver.attach_volume
Since libvirt is used, this calls the attach_volume function of LibvirtDriver in nova/virt/libvirt/driver.py.
It mainly calls:
step 10: self._connect_volume(connection_info, disk_info)
step 11: conf = self._get_volume_config(connection_info, disk_info)
step 12: guest.attach_device(conf, persistent=True, live=live)
step 10: This method runs the iscsiadm discovery and login subcommands, i.e. it maps the LUN device to a local device.
self._connect_volume(connection_info, disk_info)
All targets found by the discovery command are saved under /var/lib/iscsi/nodes.
The logged-in LUN devices are mapped under /dev/disk/by-path.
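The two iscsiadm invocations can be sketched as argv lists, built but deliberately not executed here (a real caller, like os-brick, would run them via rootwrap/subprocess; the portal and IQN are illustrative):

```python
def discovery_cmd(portal):
    # iscsiadm -m discovery -t sendtargets -p <portal>
    return ["iscsiadm", "-m", "discovery",
            "-t", "sendtargets", "-p", portal]

def login_cmd(iqn, portal):
    # iscsiadm -m node -T <iqn> -p <portal> --login
    return ["iscsiadm", "-m", "node",
            "-T", iqn, "-p", portal, "--login"]

portal = "192.0.2.20:3260"
iqn = "iqn.2010-10.org.openstack:volume-a26887c6"
print(" ".join(discovery_cmd(portal)))
print(" ".join(login_cmd(iqn, portal)))
```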
step 11: Once the LUN's /dev/disk/by-path path on the compute node is known, this function generates the volume's XML definition.
conf = self._get_volume_config(connection_info, disk_info)
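The generated volume config is, in essence, a libvirt <disk> element pointing at the by-path device. A minimal sketch of building such an element (the device path and target name are illustrative; nova actually builds this via its LibvirtConfigGuestDisk objects):

```python
import xml.etree.ElementTree as ET

def build_disk_xml(source_dev, target_dev):
    # Minimal libvirt <disk> element for a block-device attach.
    disk = ET.Element("disk", type="block", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    ET.SubElement(disk, "source", dev=source_dev)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return ET.tostring(disk, encoding="unicode")

xml = build_disk_xml(
    "/dev/disk/by-path/ip-192.0.2.20:3260-iscsi-"
    "iqn.2010-10.org.openstack:volume-a26887c6-lun-0",
    "vdd")
print(xml)
```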
step 12: Attach the volume to the virtual machine, equivalent to the virsh attach-device command.
guest.attach_device(conf, persistent=True, live=live)
step 13: Update the volume's status in the cinder database to in-use.
volume_api.attach(context, volume_id, instance.uuid, self['mount_device'], mode=mode)
Reference: http://int32bit.me/2017/09/08/OpenStack%E8%99%9A%E6%8B%9F%E6%9C%BA%E6%8C%82%E8%BD%BD%E6%95%B0%E6%8D%AE%E5%8D%B7%E8%BF%87%E7%A8%8B%E5%88%86%E6%9E%90/