I. Bonding technology
bonding is a Linux NIC bonding (link aggregation) technology: it binds n physical NICs on a server into a single logical interface inside the kernel, which can increase network throughput and provide redundancy (failover) and load balancing, among other benefits.
bonding is implemented inside the Linux kernel as a kernel module (driver). To use it, the kernel must provide this module; you can inspect it with the modinfo command, and virtually every modern distribution ships it.
modinfo bonding
filename: /lib/modules/3.10.-957.1..el7.x86_64/kernel/drivers/net/bonding/bonding.ko.xz
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
alias: rtnl-link-bond
retpoline: Y
rhelversion: 7.6
srcversion: 120C91D145D649655185C69
depends:
intree: Y
vermagic: 3.10.-957.1..el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: E7:CE:F3::3A:9B:8B:D0::FA:E7:::::9B:B1::9C:
sig_hashalgo: sha256
parm: max_bonds:Max number of bonded devices (int)
parm: tx_queues:Max number of transmit queues (default = 16) (int)
parm: num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm: num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm: ad_select:802.3ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm: min_links:Minimum number of available links before turning on carrier (int)
parm: xmit_hash_policy:balance-alb, balance-tlb, balance-xor, 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm: arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm: all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm: resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm: packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm: lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
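All of these module parameters can also be handed to the driver directly at load time. A minimal sketch (the values are only illustrative; in the CentOS 7 walkthrough below the same options are passed through BONDING_OPTS in the ifcfg file instead):
modprobe bonding mode=4 miimon=100      # load the driver with an explicit mode and link-monitor interval
cat /sys/class/net/bond0/bonding/mode   # once bond0 exists, read the active mode back from sysfs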
The seven bonding working modes:
bonding offers seven working modes; you choose one when configuring the bond, and each has its own strengths and weaknesses. (A quick way to check which mode a running bond is using is sketched after the list.)
- balance-rr (mode=0): the default. Provides both fault tolerance (high availability) and load balancing; requires switch configuration. Packets are sent round-robin across the NICs, so traffic is distributed fairly evenly.
- active-backup (mode=1): fault tolerance only; no switch configuration needed. Only one NIC is active at a time and a single MAC address is presented externally. The drawback is low port utilization.
- balance-xor (mode=2): rarely used.
- broadcast (mode=3): rarely used.
- 802.3ad (mode=4): IEEE 802.3ad dynamic link aggregation; requires switch (LACP) configuration.
- balance-tlb (mode=5): rarely used.
- balance-alb (mode=6): provides both fault tolerance and load balancing; no switch configuration needed (traffic is not distributed perfectly evenly across the interfaces).
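Once a bond is up, you can confirm which of these modes it is actually running in. A minimal check, assuming the bond device is named bond0 as in the rest of this article:
grep "Bonding Mode" /proc/net/bonding/bond0   # human-readable mode string
cat /sys/class/net/bond0/bonding/mode         # mode name plus its numeric value
cat /sys/class/net/bond0/bonding/slaves       # NICs currently enslaved to the bond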
II. Configuring bonding on CentOS 7
System: CentOS 7.5
NICs: ifcfg-eno49, ifcfg-eno50
bond0: 10.162.97.41
Bonding mode: mode=4 (802.3ad dynamic link aggregation)
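Before editing any files it helps to double-check the NIC names the system actually sees; a plain ip link listing is enough (eno49/eno50 are simply this host's interface names):
ip link show | grep -E "eno49|eno50"   # confirm the physical interfaces exist and note their state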
1. Stop and disable the NetworkManager service
systemctl stop NetworkManager.service     # stop the NetworkManager service
systemctl disable NetworkManager.service  # prevent NetworkManager from starting at boot
Note: this really must be stopped; a running NetworkManager will interfere with the bonding setup.
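A quick way to confirm the service really is down and will stay down after a reboot (ordinary systemctl queries, nothing bonding-specific):
systemctl is-active NetworkManager.service    # should print "inactive"
systemctl is-enabled NetworkManager.service   # should print "disabled"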
2. Load the bonding module
modprobe bonding
If there is no output, the module loaded successfully. If you see modprobe: ERROR: could not insert 'bonding': Module already in kernel, the module is already loaded and you can simply move on.
You can also check whether the module is loaded with lsmod | grep bonding:
lsmod | grep bonding
bonding 136705 0
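The modprobe above only lasts until the next reboot. If you want to be explicit about loading bonding at boot, one common CentOS 7 approach is a systemd modules-load drop-in; the file name bonding.conf is just a convention assumed here, and in practice the ifcfg files below normally pull the module in by themselves:
echo "bonding" > /etc/modules-load.d/bonding.conf   # ask systemd-modules-load to load the driver at boot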
3. Create the configuration file for the bond0 interface
vim /etc/sysconfig/network-scripts/ifcfg-bond0
Edit it to the following, adjusting the values to your environment:
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.162.97.41
NETMASK=255.255.255.0
GATEWAY=10.162.97.253
DNS1=10.1.0.62
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100"
The line BONDING_OPTS="mode=4 miimon=100" above sets the working mode to 802.3ad dynamic link aggregation; miimon is the link-monitoring interval in milliseconds, set here to 100 ms. You can point mode at any of the other bonding modes to match your needs.
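For example, if your switch cannot do LACP, only the BONDING_OPTS line needs to change to get an active-backup bond; the rest of ifcfg-bond0 stays as above (primary= is optional, and eno49 is simply this host's preferred slave):
BONDING_OPTS="mode=1 miimon=100 primary=eno49"   # active-backup instead of 802.3ad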
4. Edit the configuration file for the eno49 interface
vim /etc/sysconfig/network-scripts/ifcfg-eno49
Change it to the following:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno49
UUID=29d2526a-2eec-4a5e-8190-3d1fe5e04f57
DEVICE=eno49.97
ONBOOT=yes
MASTER=bond0
SLAVE=yes
VLAN=yes    # VLAN is configured here because the switch port this NIC connects to is a trunk port
TYPE=Vlan
VLAN_ID=97
5. Edit the configuration file for the eno50 interface
vim /etc/sysconfig/network-scripts/ifcfg-eno50
Change it to the following:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno50
UUID=dae63958-841f-4666-9308-28bda92dc66f
DEVICE=eno50.97
ONBOOT=yes
MASTER=bond0
SLAVE=yes
VLAN=yes
TYPE=Vlan
VLAN_ID=97
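Before restarting the network it is worth eyeballing the key directives in both slave files in one go; a simple check with grep (paths as used throughout this article):
grep -E "MASTER|SLAVE|VLAN|ONBOOT" /etc/sysconfig/network-scripts/ifcfg-eno49 /etc/sysconfig/network-scripts/ifcfg-eno50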
6. Test
Restart the network service:
systemctl restart network
Check the status of the bond0 interface (if this command reports an error, the bond was not set up successfully; most likely the bond0 interface did not come up):
[root@bogon ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation   // bonding mode: currently 802.3ad dynamic link aggregation (mode 4)
Transmit Hash Policy: layer2 (0)
MII Status: up   // link status of the bond: up (MII stands for Media Independent Interface)
MII Polling Interval (ms): 100   // link polling interval (here 100 ms)
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info   // 802.3ad aggregation information
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 20:67:7c:1f:15:f0
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 1
Actor Key: 15
Partner Key: 1
Partner Mac Address: 00:00:00:00:00:00
Slave Interface: eno49.97   // slave interface: eno49.97
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:67:7c:1f:15:f0
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 20:67:7c:1f:15:f0
port key: 15
port priority: 255
port number: 1
port state: 197
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 3
Slave Interface: eno50.97   // slave interface: eno50.97
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:67:7c:1f:15:f8
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
system priority: 65535
system mac address: 20:67:7c:1f:15:f0
port key: 15
port priority: 255
port number: 2
port state: 197
details partner lacp pdu:
system priority: 65535
system mac address: 00:00:00:00:00:00
oper key: 1
port priority: 255
port number: 1
port state: 3
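Besides /proc/net/bonding/bond0, the iproute2 tools can summarize the bond and its slaves as well; a quick cross-check (device names as used in this article):
ip -d link show bond0     # the detail line should mention the bond mode and the miimon value
ip link show eno49.97     # a slave should carry the SLAVE flag and show "master bond0"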
Check the network interface details with the ifconfig command:
[root@bogon ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet 10.162.97.41 netmask 255.255.255.0 broadcast 10.162.97.255
ether 20:67:7c:1f:15:f0 txqueuelen 1000 (Ethernet)
RX packets 22039 bytes 1436892 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6687 bytes 678240 (662.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno49: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f0 txqueuelen 1000 (Ethernet)
RX packets 16645 bytes 1894648 (1.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7228 bytes 833488 (813.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 16 memory 0x96000000-967fffff

eno50: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f8 txqueuelen 1000 (Ethernet)
RX packets 11163 bytes 1107408 (1.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 791 bytes 119264 (116.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17 memory 0x95000000-957fffff

eno51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f1 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17 memory 0x94000000-947fffff

eno52: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f9 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 18 memory 0x93000000-937fffff

eno49.97: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f0 txqueuelen 1000 (Ethernet)
RX packets 13004 bytes 1017228 (993.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6404 bytes 658552 (643.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eno50.97: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 20:67:7c:1f:15:f0 txqueuelen 1000 (Ethernet)
RX packets 7632 bytes 351072 (342.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 180 (180.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens2f0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 48:df:37:36:a9:24 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc8300000-c83fffff

ens2f1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 48:df:37:36:a9:25 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc8200000-c82fffff

ens2f2: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 48:df:37:36:a9:26 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc8100000-c81fffff

ens2f3: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 48:df:37:36:a9:27 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc8000000-c80fffff

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 16 bytes 1356 (1.3 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16 bytes 1356 (1.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
To test high availability we pulled one of the network cables. Conclusions (a simple way to reproduce this kind of test without physically pulling cables is sketched after the list):
- Under mode=6, one packet was lost when the cable was pulled; when the link was restored (cable plugged back in), around 5-6 packets were lost. Failover works, but recovery drops noticeably more packets.
- Under mode=1, one packet was lost on failure and essentially none when the link was restored, so both failover and recovery behave well.
- mode=6 works well overall apart from the packet loss during recovery; if you can live with that, it is a usable choice. mode=1 fails over and recovers quickly with essentially no packet loss or delay, but its port utilization is low, because in this active-backup mode only one NIC is working at a time.
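If physically pulling cables is inconvenient, the same kind of test can be approximated from the shell: keep a ping running against the bond IP from another machine while you take one slave down and bring it back. A rough sketch (addresses and interface names are the ones used in this article; administratively downing a slave is only an approximation of a real cable pull):
ping 10.162.97.41                                    # run on another host and watch for lost replies
ip link set eno49.97 down                            # on the bonded server: take one slave offline
grep -A1 "Slave Interface" /proc/net/bonding/bond0   # check which slaves the bond sees as up/down
ip link set eno49.97 up                              # bring the slave back and watch the ping recover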
Source:
http://www.cnblogs.com/huangweimin/articles/6527058.html