How to Create Network Bonding on CentOS 6 and 7

Network bonding is a method of combining two or more network interfaces into a single logical interface. It increases network throughput and bandwidth and provides redundancy: if one interface goes down or is unplugged, the remaining interfaces keep the network traffic up and alive. Network bonding can be used in any situation where you need redundancy, fault tolerance, or load balancing.

Linux allows us to bond multiple network interfaces into a single interface using a special kernel module named bonding. The Linux bonding driver provides a method for combining multiple network interfaces into a single logical "bonded" interface. The behaviour of the bonded interface depends on the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed.

Types of Network Bonding

According to the official documentation, here are the available network bonding modes.

mode=0 (balance-rr)

Round-robin policy: the default mode. It transmits packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

mode=1 (active-backup)

Active-backup policy: In this mode, only one slave in the bond is active. Another slave becomes active only when the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

mode=2 (balance-xor)

XOR policy: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
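With the default layer2 hash, the slave index works out to the XOR of the last byte of the source and destination MACs, modulo the number of slaves. A quick shell sketch, using made-up MAC byte values:

```shell
# Which slave does balance-xor (layer2 hash) pick for a given src/dst pair?
# The MAC byte values below are made-up examples.
src=0x1a    # last byte of the source MAC
dst=0x2f    # last byte of the destination MAC
slaves=2    # number of slaves in the bond
echo $(( (src ^ dst) % slaves ))    # → 1
```

The same src/dst pair always hashes to the same slave, which is why a single flow never exceeds the bandwidth of one link.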

mode=3 (broadcast)

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

mode=4 (802.3ad)

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

Prerequisites:

– Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
– A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

mode=5 (balance-tlb)

Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Prerequisite:

– Ethtool support in the base drivers for retrieving the speed of each slave.

mode=6 (balance-alb)

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.

In this article we will use four network interfaces on CentOS 7, configured in mode=4, aka 802.3ad (LACP):

  • 2 NIC for Management bonding (ifcfg-eno1 and ifcfg-eno2)
  • 2 NIC for Data-Trunk bonding (ifcfg-eno3 and ifcfg-eno4)

We will configure the Management bonding first, followed by the Data-Trunk bonding.

Step 1: This is the configuration on your Cisco switch

Management ports

Switch# configure terminal
 Switch(config)# interface gigabitethernet3/0/2
 Switch(config-if)# switchport mode access
 Switch(config-if)# switchport access vlan 10
 Switch(config-if)# channel-group 1 mode active
 Switch(config-if)# exit
 Switch(config)# interface gigabitethernet3/0/3
 Switch(config-if)# switchport mode access
 Switch(config-if)# switchport access vlan 10
 Switch(config-if)# channel-group 1 mode active
 Switch(config-if)# exit

Data-Trunk ports

 Switch(config)# interface range gigabitethernet 3/0/4 - 5
 Switch(config-if-range)# channel-group 2 mode active
 Switch(config-if-range)# switchport trunk encapsulation dot1q
 Switch(config-if-range)# switchport mode trunk
 Switch(config-if-range)# switchport trunk allowed vlan 10,20,30,40
 Switch(config-if-range)# exit

Step 2: Load the bonding kernel module with the command below

# sudo modprobe --first-time bonding

After that, we can view the bonding module information:

# sudo modinfo bonding

[root@server network-scripts]# modinfo bonding
filename: /lib/modules/3.10.0-327.18.2.el7.x86_64/kernel/drivers/net/bonding/bonding.ko
author: Thomas Davis, tadavis@lbl.gov and many others
description: Ethernet Channel Bonding Driver, v3.7.1
version: 3.7.1
license: GPL
alias: rtnl-link-bond
rhelversion: 7.2
srcversion: 49765A3F5CDFF2C3DCFD8E6
depends:
intree: Y
vermagic: 3.10.0-327.18.2.el7.x86_64 SMP mod_unload modversions
signer: CentOS Linux kernel signing key
sig_key: EB:27:91:DE:1A:BE:A5:F9:5A:A5:BC:B8:91:E1:33:2B:ED:29:8E:5E
sig_hashalgo: sha256
parm: max_bonds:Max number of bonded devices (int)
parm: tx_queues:Max number of transmit queues (default = 16) (int)
parm: num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm: num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm: miimon:Link check interval in milliseconds (int)
parm: updelay:Delay before considering link up, in milliseconds (int)
parm: downdelay:Delay before considering link down, in milliseconds (int)
parm: use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm: mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm: primary:Primary network device to use (charp)
parm: primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm: lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm: ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm: min_links:Minimum number of available links before turning on carrier (int)
parm: xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm: arp_interval:arp interval in milliseconds (int)
parm: arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm: arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm: arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm: fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm: all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm: resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm: packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm: lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
[root@server network-scripts]#
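Note that modprobe only loads the module for the current boot. To have it load automatically at every boot on CentOS 7, the usual systemd convention is a file under /etc/modules-load.d/ (the file name here is our choice, not a requirement):

```shell
# Persist the bonding module across reboots (requires root).
echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf
```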

Step 3: Create a file for the bonding interface called "ifcfg-bond0" in the /etc/sysconfig/network-scripts path. Below is the configuration (root privileges required).

# vim /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE="bond0"
NAME="bond0"
TYPE=Bond
ONBOOT="yes"
IPADDR="192.168.100.10"
PREFIX="24"
GATEWAY="192.168.100.1"
DNS1="192.168.100.254"
DNS2="192.168.100.253"
DOMAIN="mytechrepublic.com"
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
# mode=4 is the LACP link aggregation

Step 4: Configure the ifcfg-eno1 and ifcfg-eno2 management ports in the same folder, /etc/sysconfig/network-scripts, with the configuration below.

[root@server network-scripts]# vim ifcfg-eno1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
NAME="eno1"
UUID="678678678-4138-768678-a3da-67867867"
DEVICE="eno1"
ONBOOT="yes"
MASTER=bond0
SLAVE=yes
[root@server network-scripts]#
[root@server network-scripts]# vim ifcfg-eno2
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL=no
IPV6INIT=no
PEERDNS=yes
PEERROUTES=yes
NAME="eno2"
UUID="123123-3213123-4485-919a-123123123"
DEVICE="eno2"
ONBOOT="yes"
MASTER=bond0
SLAVE=yes
[root@server network-scripts]#

Step 5: Reload the networking interfaces to activate the bond, then check that you can see the LACP configuration

# systemctl restart network

# cat /proc/net/bonding/bond0

[root@server network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
 Aggregator ID: 1
 Number of ports: 1
 Actor Key: 13
 Partner Key: 1
 Partner Mac Address: 00:00:00:00:00:00

Slave Interface: eno1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 80:c1:6e:7a:57:40
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
 system priority: 65535
 port key: 13
 port priority: 255
 port number: 1
 port state: 77
details partner lacp pdu:
 system priority: 65535
 oper key: 1
 port priority: 255
 port number: 1
 port state: 1

Slave Interface: eno2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 80:c1:6e:7a:57:44
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
 system priority: 65535
 port key: 13
 port priority: 255
 port number: 2
 port state: 69
details partner lacp pdu:
 system priority: 65535
 oper key: 1
 port priority: 255
 port number: 1
 port state: 1
[root@server network-scripts]#
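One thing worth checking in this output: each slave reports its own "Aggregator ID", and in a healthy LACP bond all slaves land in the same aggregator. Above, eno1 is in aggregator 1 while eno2 is in aggregator 2, which usually means the switch has not formed the channel for both ports yet. A small awk sketch pulls just those lines out; it is fed a saved sample here, but on a live host you would read /proc/net/bonding/bond0 instead:

```shell
# List each slave with its aggregator ID; mismatched IDs mean the slaves
# did not aggregate together. The heredoc mimics /proc/net/bonding/bond0.
awk '/^Slave Interface:/ {slave = $3}
     /^Aggregator ID:/ && slave {print slave, $3; slave = ""}' <<'EOF'
Slave Interface: eno1
MII Status: up
Aggregator ID: 1
Slave Interface: eno2
MII Status: up
Aggregator ID: 2
EOF
```

This prints "eno1 1" and "eno2 2", flagging the mismatch at a glance.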

Note: if it fails, make sure to check the SELINUX=disabled setting in "/etc/selinux/config".

# vim /etc/selinux/config
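To inspect the setting without opening an editor, grep works too. The sample below feeds a stand-in copy of the file; on the host you would point grep at /etc/selinux/config itself:

```shell
# Show the boot-time SELinux mode. On a real host:
#   grep '^SELINUX=' /etc/selinux/config
grep '^SELINUX=' <<'EOF'
# This file controls the state of SELinux on the system.
SELINUX=disabled
SELINUXTYPE=targeted
EOF
```

Keep in mind that editing the file only takes effect at the next reboot; running `setenforce 0` switches to permissive mode immediately for the current session.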

Step 6: Now let's configure the Data-Trunk port bonding by creating a new file

# vim /etc/sysconfig/network-scripts/ifcfg-bond1

DEVICE="bond1"
NAME="bond1"
TYPE=Bond
ONBOOT="yes"
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"
# mode=4 is the LACP link aggregation

Note: As you may notice, there is no IP or DNS configured here, because this will be the trunk port, aka the tagging port.

Step 7: Follow the same configuration as in steps 4 and 5, remembering to change the NIC names to ifcfg-eno3 and ifcfg-eno4, then restart the network service and double-check the bond1 status.

[root@server network-scripts]# cat /proc/net/bonding/bond1
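Since the two data-trunk slave files differ from the management ones only in the interface name and the MASTER line, they can be generated in a loop. This sketch writes them to a temporary staging directory so they can be reviewed before copying into /etc/sysconfig/network-scripts (the minimal set of keys below is an assumption; add UUID and the other keys to match your system):

```shell
# Generate ifcfg-eno3 and ifcfg-eno4 for bond1 in a staging directory.
stage=$(mktemp -d)
for nic in eno3 eno4; do
  cat > "$stage/ifcfg-$nic" <<EOF
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
NAME=$nic
DEVICE=$nic
MASTER=bond1
SLAVE=yes
EOF
done
grep MASTER "$stage"/ifcfg-eno*   # sanity check before copying to /etc
```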

Step 8: Now let's start the VLAN tagging on our Data-Trunk bond1

[root@server]# nmcli connection show
[root@server]# nmcli connection add con-name virbr10 ifname virbr10 type bridge stp no
[root@server]# nmcli connection down virbr10
[root@server]# nmcli connection edit virbr10
 set ipv4.method disabled
 set ipv6.method ignore
 save
 quit
[root@server]# nmcli connection up virbr10
[root@server]# nmcli connection add con-name vlan10 ifname vlan10 type vlan dev bond1 id 10
[root@server]# nmcli connection down vlan10
[root@server]# nmcli connection edit vlan10
 set connection.master virbr10
 set connection.slave-type bridge
 verify fix
 save
 quit
[root@server]# nmcli connection up vlan10

Note: Do the same for virbr20+vlan20, virbr30+vlan30, virbr40+vlan40
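These per-VLAN steps are repetitive, so they can be scripted. The sketch below only prints the commands (drop the `echo` to execute them); note that `nmcli connection add` accepts the bridge-slave and IP-method properties directly, which avoids the interactive `nmcli connection edit` session used above:

```shell
# Print the nmcli commands for the remaining bridge+VLAN pairs (20, 30, 40).
# Remove "echo" to actually create the connections (requires NetworkManager).
for id in 20 30 40; do
  echo nmcli connection add con-name virbr$id ifname virbr$id type bridge \
       stp no ipv4.method disabled ipv6.method ignore
  echo nmcli connection add con-name vlan$id ifname vlan$id type vlan \
       dev bond1 id $id master virbr$id slave-type bridge
done
```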

[Figure: KVM_Network_VLAN (bridge/VLAN layout)]

Misc: the actual configs used in this setup:

CREATE BONDING NIC for MANAGEMENT (bond0) and TRUNK PORT (bond1)

[root@server network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@server network-scripts]# cat ifcfg-bond0
DEVICE="bond0"
NAME="bond0"
TYPE=Bond
ONBOOT="yes"
IPADDR="192.168.252.36"
PREFIX="24"
GATEWAY="192.168.252.1"
DNS1="192.168.50.254"
DNS2="192.168.111.254"
DNS3="192.168.30.254"
DOMAIN="domain.com"
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"

[root@server network-scripts]# cat ifcfg-bond1
DEVICE="bond1"
NAME="bond1"
TYPE=Bond
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=4 miimon=100 lacp_rate=1"

[root@server network-scripts]# cat ifcfg-eno1
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT=no
PEERDNS="yes"
PEERROUTES=yes
NAME="eno1"
UUID="1234abcde-a8a0-4418-9778-1234abcde"
DEVICE="eno1"
ONBOOT="yes"
MASTER=bond0
SLAVE="yes"

[root@server network-scripts]# cat ifcfg-eno2
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL=no
IPV6INIT="no"
PEERDNS=yes
PEERROUTES=yes
NAME="eno2"
UUID=4567abcd-eea7-4fc9-b98b-4567abcde
DEVICE="eno2"
ONBOOT="yes"
MASTER=bond0
SLAVE=yes

[root@server network-scripts]# cat ifcfg-eno3
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=eno3
UUID=7890acde-9d61-4539-902b-7890abcde
DEVICE=eno3
ONBOOT=yes
MASTER=bond1
SLAVE=yes

[root@server network-scripts]# cat ifcfg-eno4
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
NAME=eno4
UUID=45678abcde-67f6-4356-bb46-45678abcde
DEVICE=eno4
ONBOOT=yes
MASTER=bond1
SLAVE=yes

Verify that the bonding ports are in 802.3ad/LACP mode:
# cat /proc/net/bonding/bond0
# cat /proc/net/bonding/bond1
