The bonding feature entered the Linux kernel with version 2.4. The general steps are:
1. Load the bonding module into the kernel.
2. Edit the configuration of the NICs to be bonded, removing their address settings.
3. Add the bond device and configure its address and other settings.
4. Restart networking.
5. Configure matching support on the switch.
For details, see the kernel documentation in Documentation/networking/bonding.txt.
Reference examples:
Binding two NICs to a single IP address on Linux essentially virtualizes the two cards into one: they share the same IP address, giving us better and faster service. The technique has long existed on Sun and Cisco equipment, where it is known as Trunking and EtherChannel; the Linux 2.4.x kernel adopted the same idea under the name bonding.
1. How bonding works:
Understanding bonding starts with the NIC's promiscuous (promisc) mode. Normally a NIC accepts only Ethernet frames whose destination hardware address (MAC address) matches its own and filters out all other frames to lighten the driver's load. But a NIC also supports promiscuous mode, in which it receives every frame on the network; tcpdump, for example, runs in this mode. Bonding runs in this mode too: it rewrites the MAC address in the driver so that both NICs share the same MAC and can accept frames destined for that MAC, then hands the matching frames to the bond driver for processing.
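The filtering behavior just described can be sketched in a few lines (a toy model for illustration only; the MAC addresses are made up):

```python
def accept(frame_dst: str, my_mac: str, promisc: bool = False) -> bool:
    """Normal mode: keep only frames addressed to our MAC (or broadcast).
    Promiscuous mode: keep everything, as tcpdump and bonding do."""
    return promisc or frame_dst.lower() in (my_mac.lower(), "ff:ff:ff:ff:ff:ff")

print(accept("00:0e:7f:25:d9:8b", "00:0e:7f:25:d9:8b"))        # True: our MAC
print(accept("00:16:3e:aa:bb:01", "00:0e:7f:25:d9:8b"))        # False: filtered
print(accept("00:16:3e:aa:bb:01", "00:0e:7f:25:d9:8b", True))  # True: promisc
```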
2. How the bonding module operates:
Example: bonding mode=1 miimon=100. miimon sets the link-monitoring interval: with miimon=100 the system checks link status every 100 ms and switches to another link if one goes down. mode selects the operating mode; there are seven modes, 0 through 6, of which 0, 1, and 6 are the most commonly used.
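What miimon does can be modeled as a simple polling loop. This is only a sketch; `link_up` is a hypothetical stand-in for the driver's real MII check:

```python
import time

def monitor(slaves, link_up, miimon_ms=100, ticks=5):
    """Toy model of miimon: poll link state every miimon_ms milliseconds
    and fail over to another healthy slave when the active one drops."""
    active = slaves[0]
    for _ in range(ticks):
        if not link_up(active):
            healthy = [s for s in slaves if link_up(s)]
            if healthy:
                active = healthy[0]  # fail over to the first healthy slave
        time.sleep(miimon_ms / 1000.0)
    return active

# Simulate eth0's link being down: traffic moves to eth1.
state = {"eth0": False, "eth1": True}
print(monitor(["eth0", "eth1"], lambda s: state[s], miimon_ms=1, ticks=2))  # eth1
```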
mode=0: load-balancing mode with automatic failover; requires switch support and configuration.
mode=1: active-backup mode; if one link goes down, another link takes over automatically.
mode=6: load-balancing mode with automatic failover; requires no switch support or configuration.
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
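The slave-selection rule above can be sketched as follows (a simplified illustration of the layer-2 hash; the example MACs are invented):

```python
def xor_slave(src_mac: str, dst_mac: str, slave_count: int) -> int:
    """Pick a slave index the way balance-xor describes:
    (source MAC XOR destination MAC) modulo slave count."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % slave_count

# The same source/destination pair always hashes to the same slave.
print(xor_slave("00:0e:7f:25:d9:8a", "00:16:3e:aa:bb:01", 2))  # prints 1
```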
mode=3 (broadcast)
Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Prerequisites: 1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave. 2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.
mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server.
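For reference, the seven mode numbers map to these driver names (the `mode=` module option also accepts the names in place of the numbers):

```python
# bonding mode numbers and their driver names, from the list above
BOND_MODES = {
    0: "balance-rr",
    1: "active-backup",
    2: "balance-xor",
    3: "broadcast",
    4: "802.3ad",
    5: "balance-tlb",
    6: "balance-alb",
}

# mode=1 and mode=active-backup are equivalent ways to load the module.
print(BOND_MODES[1])  # prints active-backup
```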
3. Installation and configuration on Debian
3.1 Install ifenslave
- apt-get install ifenslave
3.2 Load the bonding module automatically at boot
- sudo sh -c "echo bonding mode=1 miimon=100 >> /etc/modules"
3.3 Configure the network interfaces
- sudo vi /etc/network/interfaces
- # Example contents:
- auto lo
- iface lo inet loopback
- auto bond0
- iface bond0 inet static
- address 192.168.1.110
- netmask 255.255.255.0
- gateway 192.168.1.1
- dns-nameservers 192.168.1.1
- post-up ifenslave bond0 eth0 eth1
- pre-down ifenslave -d bond0 eth0 eth1
3.4 Restart networking to complete the configuration
- # If you did not reboot after installing ifenslave, load the bonding module manually first.
- sudo modprobe bonding mode=1 miimon=100
- # Restart networking
- sudo /etc/init.d/networking restart
4. Installation and configuration on Red Hat
4.1 Install ifenslave
Red Hat usually ships it by default; if it is missing, install it first.
- yum install ifenslave
4.2 Load the bonding module automatically at boot
- sudo sh -c "echo alias bond0 bonding >> /etc/modprobe.conf"
- sudo sh -c "echo options bond0 miimon=100 mode=1 >> /etc/modprobe.conf"
4.3 Configure the network interfaces
- sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
- # eth0 configuration:
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=none
- sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1
- # eth1 configuration:
- DEVICE=eth1
- ONBOOT=yes
- BOOTPROTO=none
- sudo vi /etc/sysconfig/network-scripts/ifcfg-bond0
- # bond0 configuration:
- DEVICE=bond0
- ONBOOT=yes
- BOOTPROTO=static
- IPADDR=192.168.1.110
- NETMASK=255.255.255.0
- GATEWAY=192.168.1.1
- # the slaves are attached at boot by the ifenslave line added to rc.local below
- TYPE=Ethernet
- # Enslave both NICs at system boot
- sudo sh -c "echo ifenslave bond0 eth0 eth1 >> /etc/rc.local"
4.4 Restart networking to complete the configuration
- # If you did not reboot after installing ifenslave, load the bonding module manually first.
- sudo modprobe bonding mode=1 miimon=100
- # Restart networking
- sudo /etc/init.d/network restart
5. Switch EtherChannel configuration
When using mode=0, the switch must be configured to support EtherChannel.
- Switch# configure terminal
- Switch(config)# interface range fastethernet 0/1 - 2
- Switch(config-if-range)# channel-group 1 mode on
- Switch(config-if-range)# end
- Switch#copy run start
References
1 http://sapling.me/unixlinux/linux_two_nic_one_ip_bonding.html
2 http://www.linux-corner.info/bonding.html
Linux NIC bonding
Dual-NIC bonding on Linux for load balancing and failover
Keeping servers highly available is a key requirement in enterprise IT environments, and one of its most important aspects is the availability of the server's network connection. NIC bonding helps guarantee high availability and brings other benefits that improve network performance.
The dual-NIC bonding described here virtualizes two NICs into one: the aggregated device looks like a single Ethernet interface. Put simply, the two cards share the same IP address and work in parallel, aggregated into one logical link. The technique has long existed on Sun and Cisco equipment, known as Trunking and EtherChannel; the Linux 2.4.x kernel adopted it under the name bonding. Bonding was first used in Beowulf clusters, designed to speed up data transfer between cluster nodes. As for the principle, bonding builds on the NIC's promiscuous (promisc) mode. Normally a NIC accepts only frames whose destination MAC address matches its own and filters out the rest to lighten the driver's load; in promiscuous mode (as used by tcpdump) it receives every frame on the network. Bonding runs in this mode, rewrites the MAC address in the driver so both NICs share the same MAC, accepts frames destined for that MAC, and hands them to the bond driver for processing.
Enough theory; the configuration itself is simple, four steps in all.
The test operating system is Red Hat Enterprise Linux 3.0.
Prerequisites for bonding: the NICs should use the same chipset model, and each card should have its own independent BIOS chip.
1. Edit the virtual network interface configuration file and assign the IP address
- vi /etc/sysconfig/network-scripts/ifcfg-bond0
- [root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-bond0
- Change the first line to DEVICE=bond0
- # cat ifcfg-bond0
- DEVICE=bond0
- BOOTPROTO=static
- IPADDR=172.31.0.13
- NETMASK=255.255.252.0
- BROADCAST=172.31.3.254
- ONBOOT=yes
- TYPE=Ethernet
Note: do not assign an IP address, netmask, or NIC ID to the individual NICs; put all of that information in the virtual adapter (bond0) instead.
- [root@rhas-13 network-scripts]# cat ifcfg-eth0
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=dhcp
- [root@rhas-13 network-scripts]# cat ifcfg-eth1
- DEVICE=eth1
- ONBOOT=yes
- BOOTPROTO=dhcp
Edit /etc/modules.conf so that the system loads the bonding module at boot and exposes bond0 as the virtual network interface. Add the following two lines:
- alias bond0 bonding
- options bond0 miimon=100 mode=1
Note: miimon sets the link-monitoring interval. With miimon=100, the system checks link status every 100 ms and switches to another link if one goes down. mode selects the working mode; there are seven modes, 0 through 6, of which 0 and 1 are the most commonly used here.
- mode=0, load balancing (round-robin): both NICs carry traffic.
- mode=1, fault-tolerance (active-backup): redundancy in an active/standby arrangement; by default only one NIC works while the other stands by.
bonding can only monitor the link between the host and the switch. If the switch's uplink goes down while the switch itself is healthy, bonding considers the link fine and keeps using it.
Add these two lines to /etc/rc.d/rc.local:
- ifenslave bond0 eth0 eth1
- route add -net 172.31.3.254 netmask 255.255.255.0 bond0
After a restart, seeing messages like the following means the configuration succeeded:
................
Bringing up interface bond0 OK
Bringing up interface eth0 OK
Bringing up interface eth1 OK
................
Now let's look at the mode=1 and mode=0 cases in turn.
mode=1
In active-backup mode, eth1 acts as the backup NIC and is in NOARP state.
- [root@rhas-13 network-scripts]# ifconfig    # verify the NIC configuration
- bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
- TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:1587253 (1.5 Mb) TX bytes:89642 (87.5 Kb)
- eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
- TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:833514 (813.9 Kb) TX bytes:89642 (87.5 Kb)
- Interrupt:11
- eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:753739 (736.0 Kb) TX bytes:0 (0.0 b)
- Interrupt:15
That is, in active-backup mode, when one network interface fails (for example, the primary switch loses power), there is no network outage: the system works through the NICs in the order given in /etc/rc.d/rc.local, the machine keeps serving, and failover protection is achieved.
mode=0
The load-balancing mode can provide twice the bandwidth. Let's look at the NIC configuration:
- [root@rhas-13 root]# ifconfig
- bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
- TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:226957 (221.6 Kb) TX bytes:15266 (14.9 Kb)
- eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
- TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:113967 (111.2 Kb) TX bytes:7268 (7.0 Kb)
- Interrupt:11
- eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
- inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
- TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:112990 (110.3 Kb) TX bytes:7998 (7.8 Kb)
- Interrupt:15
In this mode, if one NIC fails, only the server's outbound bandwidth drops; network use is otherwise unaffected.
Checking bond0's working state gives a detailed view of how bonding is doing:
- [root@rhas-13 bonding]# cat /proc/net/bonding/bond0
- bonding.c:v2.4.1 (September 15, 2003)
- Bonding Mode: load balancing (round-robin)
- MII Status: up
- MII Polling Interval (ms): 0
- Up Delay (ms): 0
- Down Delay (ms): 0
- Multicast Mode: all slaves
- Slave Interface: eth1
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0e:7f:25:d9:8a
- Slave Interface: eth0
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0e:7f:25:d9:8b
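The /proc/net/bonding/bond0 output above is easy to check from a script. A minimal parsing sketch, assuming the line format shown (with a sample inlined for illustration instead of reading the real file):

```python
def slave_status(proc_text: str) -> dict:
    """Collect 'Slave Interface' / 'MII Status' pairs out of
    /proc/net/bonding/bond0-style text."""
    status, current = {}, None
    for line in proc_text.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Slave Interface":
            current = value
        elif key == "MII Status" and current is not None:
            status[current] = value
            current = None  # this MII Status line belonged to the slave
    return status

sample = """\
Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Slave Interface: eth0
MII Status: up
"""
print(slave_status(sample))  # {'eth1': 'up', 'eth0': 'up'}
```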
NIC bonding on Linux both improves server reliability and increases available network bandwidth, providing uninterrupted key services. The method above was tested successfully on several Red Hat releases with good results. Don't just read about it; give it a try!
/usr/share/doc/kernel-doc-2.4.21/networking/bonding.txt
Finally, today I implemented NIC bonding (binding both NICs so that they work as a single device). Bonding is a Linux kernel feature that allows aggregating multiple like interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:
- RHEL v4 / 5 / 6 amd64
- CentOS v5 / 6 amd64
- Fedora Linux 13 amd64 and up.
- 2 x PCI-e Gigabit Ethernet NICs with Jumbo Frames (MTU 9000)
- Hardware RAID-10 w/ SAS 15k enterprise grade hard disks.
- Gigabit switch with Jumbo Frame
Say Hello To the bonding Driver
This server acts as a heavy-duty FTP and NFS file server. Each night a perl script transfers lots of data from this box to a backup server. Therefore, the network is set up on a switch using dual network cards. I am using Red Hat Enterprise Linux version 4.0, but the instructions should work on RHEL 5 and 6 too.
Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding. According to the official bonding documentation:
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Step #1: Create a Bond0 Configuration File
Red Hat Enterprise Linux (and its clones such as CentOS) stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines:
- DEVICE=bond0
- IPADDR=192.168.1.20
- NETWORK=192.168.1.0
- NETMASK=255.255.255.0
- USERCTL=no
- BOOTPROTO=none
- ONBOOT=yes
You need to replace IP address with your actual setup. Save and close the file.
Step #2: Modify eth0 and eth1 config files
Open both configuration files using a text editor such as vi/vim, and make sure the eth0 file reads as follows:
- # vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append directive as follows:
- DEVICE=eth0
- USERCTL=no
- ONBOOT=yes
- MASTER=bond0
- SLAVE=yes
- BOOTPROTO=none
Open eth1 configuration file using vi text editor, enter:
- # vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the eth1 file reads as follows:
- DEVICE=eth1
- USERCTL=no
- ONBOOT=yes
- MASTER=bond0
- SLAVE=yes
- BOOTPROTO=none
Save and close the file.
Step # 3: Load bond driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
- # vi /etc/modprobe.conf
Append following two lines:
- alias bond0 bonding
- options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt. You can learn more about all the bonding options in the kernel bonding documentation.
Step # 4: Test configuration
First, load the bonding module, enter:
- # modprobe bonding
Restart the networking service in order to bring up bond0 interface, enter:
- # service network restart
Make sure everything is working. Type the following cat command to query the current status of the Linux kernel bonding driver, enter:
- # cat /proc/net/bonding/bond0
Sample outputs:
- Bonding Mode: load balancing (round-robin)
- MII Status: up
- MII Polling Interval (ms): 100
- Up Delay (ms): 200
- Down Delay (ms): 200
- Slave Interface: eth0
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0c:29:c6:be:59
- Slave Interface: eth1
- MII Status: up
- Link Failure Count: 0
- Permanent HW addr: 00:0c:29:c6:be:63
To list all network interfaces, enter:
- # ifconfig
Sample outputs:
- bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
- UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
- RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
- TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:0
- RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
- eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
- TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
- Interrupt:11 Base address:0x1400
- eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
- inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
- inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
- UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
- RX packets:4 errors:0 dropped:0 overruns:0 frame:0
- TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
- collisions:0 txqueuelen:1000
- RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
- Interrupt:10 Base address:0x1480
Read the official bonding howto, which covers the following additional topics:
- VLAN Configuration
- Cisco switch related configuration
- Advanced routing and troubleshooting
======================================================
- http://www.chinaunix.net/jh/4/371049.html
- http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-interfaces-nic-into-single-interface.html
- http://os.51cto.com/art/200911/165875.htm