【Linux】Implementing Dual NIC Bonding on Linux

Linux dual NIC bonding combines two physical network cards into one virtual card. The aggregated device looks like a single Ethernet interface; in plain terms, the two NICs share the same IP address and are linked in parallel to work as one logical link.
Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) matches its own and drops everything else, to reduce the load on the driver. A NIC also supports another mode, called promiscuous (promisc) mode, in which it accepts every frame on the network; tcpdump, for example, runs in this mode. Bonding runs in this mode as well: it modifies the MAC address in the drivers so that both NICs carry the same MAC address and accept frames destined for that MAC, then hands those frames over to the bond driver for processing.
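Once the bond is up, this MAC sharing is easy to verify: every slave reports the bond's hardware address. A quick check (assuming the classic net-tools ifconfig used throughout this article; the full ifconfig output further below shows the same thing):

# bond0, eth0 and eth1 should all report the same HWaddr once bonding is active
ifconfig -a | grep HWaddr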
Test environment:
CentOS release 5.3 (Final) X86_64
2.6.18-128.el5
The configuration steps are as follows:
1. Create /etc/sysconfig/network-scripts/ifcfg-bond0
[root@rac4 network-scripts]# cat ifcfg-bond0      
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=bond0
BOOTPROTO=static
IPADDR=10.250.7.220
NETMASK=255.255.255.0
BROADCAST=10.250.7.255
ONBOOT=yes
TYPE=Ethernet
GATEWAY=10.250.7.254
USERCTL=no
2. Change the configuration of the NICs to be bonded:
[root@rac4 network-scripts]# cat ifcfg-eth0       
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
[root@rac4 network-scripts]# cat ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
3. Edit /etc/modprobe.conf and add the bonding entries (the alias bond0 and options bond0 lines below):
[root@rac4 network-scripts]# vi /etc/modprobe.conf
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
alias eth0 e1000
alias eth1 e1000
alias bond0 bonding
options bond0 miimon=100 mode=1
Notes:
miimon is for link monitoring: with miimon=100, the system checks the link state every 100 ms, and if one link goes down it switches traffic to the other. mode selects the working mode; the bonding driver supports modes 0 through 6, of which 0 and 1 are the most commonly used (the values actually in effect can be read back from sysfs, as sketched after these notes).
mode=0, load balancing (round-robin): both NICs carry traffic and the load is balanced across them.
mode=1, fault-tolerance (active-backup): provides redundancy in an active/standby arrangement; by default only one NIC carries traffic while the other stands by as a backup.
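Since the bonding driver on this kernel exposes a sysfs interface (the /sys/class/net/bond0/bonding/slaves file comes up again below), the parameters actually in effect can be read back once bond0 exists; a minimal sketch:

# Read the active bonding parameters from sysfs (bond0 must already exist)
cat /sys/class/net/bond0/bonding/mode      # e.g. "active-backup 1"
cat /sys/class/net/bond0/bonding/miimon    # e.g. "100"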
4. Add the ifenslave line (the last line below) to /etc/rc.d/rc.local so it runs at startup:
[root@rac4 network-scripts]# vi /etc/rc.d/rc.local 
#!/bin/sh
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
ifenslave bond0 eth0 eth1
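As a side note, the ifcfg files from steps 1 and 2 can also be applied by restarting the network service instead of bringing the interfaces up by hand or rebooting; a sketch, assuming the standard CentOS network init script (this walkthrough sticks with the manual route below):

# Re-reads the ifcfg-* files and brings bond0 and its slaves up
service network restart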
5. Bring up bond0
[root@rac4 network-scripts]# ifconfig  bond0 10.250.7.220 up
Running ifenslave at this point produces the following errors, because bringing up bond0 has already added eth0 and eth1 to /sys/class/net/bond0/bonding/slaves:
[root@rac4 network-scripts]# ifenslave  bond0 eth0 eth1
Illegal operation: The specified slave interface 'eth0' is already a slave
Master 'bond0', Slave 'eth0': Error: Enslave failed
Illegal operation: The specified slave interface 'eth1' is already a slave
Master 'bond0', Slave 'eth1': Error: Enslave failed
[root@rac4 network-scripts]# ifenslave  bond0  eth1    
Illegal operation: The specified slave interface 'eth1' is already a slave
Master 'bond0', Slave 'eth1': Error: Enslave failed
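That the slaves are already attached can be confirmed straight from sysfs (a quick check using the file mentioned above):

cat /sys/class/net/bond0/bonding/slaves    # prints e.g.: eth0 eth1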
Finally, reboot the server:
[root@rac4 ~]# reboot
Now let's look at how the bonded NICs behave.
With the bonding option mode=1, the bond works in active-backup mode; eth1, acting as the backup NIC, does not answer ARP (no arp).
Verify the NIC configuration:
[root@rac4 ~]# ifconfig                                
bond0     Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          inet addr:10.250.7.220  Bcast:10.250.7.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe8f:2248/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:1109 errors:0 dropped:0 overruns:0 frame:0
          TX packets:120 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:84101 (82.1 KiB)  TX bytes:13835 (13.5 KiB)
eth0      Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:566 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:43053 (42.0 KiB)  TX bytes:5791 (5.6 KiB)
          Base address:0x2000 Memory:d8920000-d8940000 
eth1      Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:543 errors:0 dropped:0 overruns:0 frame:0
          TX packets:61 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:41048 (40.0 KiB)  TX bytes:8214 (8.0 KiB)
          Base address:0x2040 Memory:d8940000-d8960000 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:5694 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5694 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8581664 (8.1 MiB)  TX bytes:8581664 (8.1 MiB)
[root@rac4 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)    -- active/standby mode
Primary Slave: None
Currently Active Slave: eth0    -- eth0 is currently the active NIC
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:56:8f:22:48
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:56:8f:7d:6f
With mode=1, if one network interface fails (for example, the upstream switch loses power), the network does not go down: the system keeps working with the NICs in the order given in /etc/rc.d/rc.local, the machine continues to serve traffic, and failover protection is achieved. A quick failover test is sketched right below.
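A simple way to watch the failover is to take the active slave down briefly and re-check the bond status; a sketch for a test box (eth0 being the currently active slave comes from the output above):

# Simulate a link failure on the active slave
ifconfig eth0 down
# The bond should switch over: "Currently Active Slave" should now show eth1
cat /proc/net/bonding/bond0 | grep "Currently Active Slave"
# Bring eth0 back; with no primary configured it simply becomes the backup
ifconfig eth0 up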
With mode=0, the load-balancing (round-robin) mode, the bond can provide up to twice the bandwidth of a single NIC; if one NIC fails here, only the server's outbound bandwidth drops, and network connectivity is not affected. The details of how bonding is working can be seen from bond0's status:
[root@rac4 ~]# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: load balancing (round-robin)    -- load-balancing mode
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:56:8f:22:48
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:56:8f:7d:6f
[root@rac4 ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          inet addr:10.250.7.220  Bcast:10.250.7.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe8f:2248/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:376 errors:0 dropped:0 overruns:0 frame:0
          TX packets:121 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:29934 (29.2 KiB)  TX bytes:13014 (12.7 KiB)
eth0      Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:203 errors:0 dropped:0 overruns:0 frame:0
          TX packets:61 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:15858 (15.4 KiB)  TX bytes:8146 (7.9 KiB)
          Base address:0x2000 Memory:d8920000-d8940000 
eth1      Link encap:Ethernet  HWaddr 00:50:56:8F:22:48  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:173 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:14076 (13.7 KiB)  TX bytes:4868 (4.7 KiB)
          Base address:0x2040 Memory:d8940000-d8960000 
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3080 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3080 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:4806596 (4.5 MiB)  TX bytes:4806596 (4.5 MiB)
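To switch a setup like this between the two modes, change the mode value on the options bond0 line in /etc/modprobe.conf and reload the bonding module (a sketch; rebooting, as above, achieves the same thing and is the safer route on a production box):

# after editing /etc/modprobe.conf, e.g. options bond0 miimon=100 mode=0
ifconfig bond0 down
rmmod bonding
modprobe bond0                                     # the alias loads bonding with the new options
ifconfig bond0 10.250.7.220 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1                          # re-attach the slaves
cat /proc/net/bonding/bond0 | grep "Bonding Mode"  # confirm the new mode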

[Figure: topology diagram of the dual NIC bonding setup]
