
Intel HDSLB High-Performance Layer-4 Load Balancer — Basic Principles and Deployment Configuration

來(lái)源:好特整理 | 時(shí)間:2024-05-27 08:46:07 | 閱讀:196 |  標(biāo)簽: T 本原 El S in 配置 高性能 Intel   | 分享到:

This article covers the basic operating principles of Intel HDSLB and how to deploy and configure it, in the hope of helping readers get the HDSLB-DPVS project up and running.

Foreword

In the previous article, "Intel HDSLB High-Performance Layer-4 Load Balancer — Quick Start and Application Scenarios", we introduced the positioning of HDSLB (High Density Scalable Load Balancer) as a new generation of high-performance Layer-4 load balancer, analyzed its strengths in cloud-computing and edge-computing scenarios, and walked through its performance test data.

再進(jìn)一步的,在本篇中我們主要關(guān)注 HDSLB 的基本運(yùn)行原理和部署配置方式,更側(cè)重于實(shí)際的操作。為了讓更廣泛的開(kāi)發(fā)者們都能夠快捷方便的對(duì) HDSLB 展開(kāi)研究,所以在本篇中會(huì)采用 HDSLB-DPVS 開(kāi)源版本來(lái)進(jìn)行介紹。

Basic Principles of HDSLB-DPVS

As the name suggests, HDSLB-DPVS is a project developed on top of DPVS. DPVS in turn, also known as DPDK-LVS, is a Layer-4 load balancer that borrows the design of the in-kernel LVS Layer-4 load balancer and re-implements it on the DPDK user-space data-plane acceleration framework. The HDSLB-DPVS technology stack therefore consists of the following four parts:

  1. LVS
  2. DPDK
  3. DPVS
  4. HDSLB-DPVS

To understand clearly how HDSLB-DPVS works, we need to start from the beginning.

LVS

LVS (Linux Virtual Server) is an open-source Layer-4 load balancer project born in 1998. Its goal is to use load-balancing and server-cluster techniques to build a Virtual Server with good scalability, reliability, and manageability.


現(xiàn)在來(lái)看,雖然 LVS 基于 Kernel 實(shí)現(xiàn)的數(shù)據(jù)面性能已經(jīng)不合時(shí)宜,但在邏輯架構(gòu)的設(shè)計(jì)層面,LVS 的核心術(shù)語(yǔ)依舊沿用至今,包括:

  • VS (Virtual Server): a logical construct composed of a DS plus its RSs; a VS ultimately serves external Clients through a single VIP.
  • DS (Director Server): the server acting as the LB traffic entry point, responsible for applying the load-balancing policy and distributing traffic; also called the FE (front-end server).
  • RS (Real Server): the server that actually processes request traffic; also called the BE (back-end server).
  • VIP (Virtual IP): the IP address through which the VS serves external Clients.
  • DIP (Director IP): the IP address the Director Server uses to talk to the RSs internally.
  • RIP (Real IP): the IP address an RS uses to talk to the DS.
  • CIP (Client IP): the client's IP address.
  • NAT: reverse-proxy forwarding mode.
  • IP Tunneling: transparent forwarding mode.
  • DR: triangular (direct-return) forwarding mode.
  • etc. (a small ipvsadm example follows)
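To ground these terms, here is a minimal kernel-LVS configuration using the classic ipvsadm tool; this is a sketch with hypothetical addresses (-g selects DR mode, -m NAT, -i IP tunneling):

# VS: expose VIP 10.0.0.100:80 with round-robin scheduling
$ ipvsadm -A -t 10.0.0.100:80 -s rr

# RS: two real servers behind the director, reached in DR (direct-routing) mode
$ ipvsadm -a -t 10.0.0.100:80 -r 192.168.100.2:80 -g
$ ipvsadm -a -t 10.0.0.100:80 -r 192.168.100.3:80 -g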


For more detail on LVS, see: "LVS & Keepalived: Implementing a Highly Available L4 Load Balancer".

DPDK

In 2010, the IEEE 802.3 standards committee published the 802.3ba Ethernet standard covering 40GbE and 100GbE, and data centers formally entered the 100G era. Since then, the packet-processing performance of the Linux kernel network stack has been under growing pressure. Consider a few numbers:

  • A CPU access to main memory takes about 65 ns.
  • A main-memory copy across NUMA nodes costs roughly an extra 40 ns.
  • Handling a single hardware interrupt costs the CPU about 100 µs.

但實(shí)際上,100G 網(wǎng)卡線速為 2 億 PPS,即每個(gè)包處理的時(shí)間不能超過(guò) 50 納秒。

Clearly, the kernel-based data plane had reached an inflection point. DPDK was created for exactly this, and achieves 100G line-rate forwarding through the following acceleration techniques (a minimal polling-loop sketch follows the list):

  1. Kernel by-pass: a user-space protocol stack replaces the kernel stack.
  2. Polling instead of interrupts.
  3. Share-nothing, per-CPU key data structures (lockless) instead of shared-state multi-threading.
  4. Lockless messaging for high-performance cross-CPU IPC.
  5. RX Steering and CPU affinity (avoid context switch).
  6. Zero Copy (avoid packet copy and syscalls).
  7. Batching TX/RX.
  8. etc...
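To illustrate items 2, 5, and 7, here is a minimal sketch of a DPDK-style receive loop. It is illustrative only (not HDSLB's actual code), and the port/queue wiring is assumed:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Each worker lcore runs one such loop and owns its RX queue exclusively,
 * so the hot path needs no locks. */
static void rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[BURST_SIZE];

    for (;;) {  /* poll continuously instead of sleeping on interrupts */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
                                          pkts, BURST_SIZE);  /* batched RX */
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* process pkts[i] entirely in user space, then release it */
            rte_pktmbuf_free(pkts[i]);
        }
    }
}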


For more detail on DPDK, see: "DPDK — Core Ideas of Data-Plane Acceleration".

DPVS

As noted above, the LVS data plane is a Linux kernel module (ipvs) whose performance cannot meet modern requirements, so iQIYI developed DPVS on top of DPDK. It is worth mentioning that, because DPVS is open-sourced and maintained by a Chinese company, its community is particularly friendly to Chinese-speaking developers.


Beyond the performance optimizations, DPVS also offers a richer feature set, including:

  • L4 Load Balancer, including FNAT, DR, Tunnel, DNAT modes, etc.
  • SNAT mode for Internet access from internal network.
  • NAT64 forwarding in FNAT mode for quick IPv6 adaptation without application changes.
  • Different schedule algorithms like RR, WLC, WRR, MH(Maglev Hashing), Conhash(Consistent Hashing) etc.
  • User-space Lite IP stack (IPv4/IPv6, Routing, ARP, Neighbor, ICMP ...).
  • Support KNI, VLAN, Bonding, Tunneling for different IDC environment.
  • Security aspect, support TCP syn-proxy, Conn-Limit, black-list, white-list.
  • QoS: Traffic Control.

In terms of software architecture, DPVS keeps the separated control-plane/data-plane design and the Keepalived-based Master-Backup high-availability architecture:

  • ipvsadm: manages logical resource objects such as VSs and RSs.
  • dpip: manages basic network resources such as IPs and routes.
  • keepalived: provides VRRP-based master-backup high availability (a minimal sketch follows).
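For reference, the VRRP section of a master node's keepalived.conf looks roughly like the following. This is a minimal sketch rather than an HDSLB-provided sample; the interface name, router id, and VIP are placeholders:

vrrp_instance VI_1 {
    state MASTER              ! BACKUP on the standby node
    interface dpdk0.kni       ! VRRP heartbeats travel over the KNI device
    virtual_router_id 51
    priority 100              ! set lower on the BACKUP node
    advert_int 1
    virtual_ipaddress {
        10.0.0.100
    }
}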


HDSLB-DPVS

HDSLB-DPVS and DPVS are both high-performance load balancers, so what is the essential difference between them? The answer: far higher performance.


As usual, we can use RBP (Ratio of Bandwidth and Performance growth rate) to express how fast network bandwidth grows relative to CPU performance: RBP = BW growth rate / CPU performance growth rate. With bandwidth growing at 45% per year and CPU performance at 3.5%, for example, RBP ≈ 45/3.5 ≈ 13.

Before 2010, network bandwidth grew at roughly 30% per year, rising to about 35% by 2015 and to about 45% in recent years. Correspondingly, annual CPU performance growth fell from 23% a decade ago to 12%, and recently to just 3.5%. Across these three periods, the RBP metric climbed from around RBP~1 (I/O pressure not yet visible) to RBP~3, and has now passed RBP~10, as the figure below illustrates.

可見(jiàn),CPU 幾乎已經(jīng)無(wú)法直接應(yīng)對(duì)網(wǎng)絡(luò)帶寬的增速。而圍繞 CPU 進(jìn)行純軟件加速的 DPDK 框架正在面臨挑戰(zhàn)。

(Figure: network bandwidth growth rate vs. CPU performance growth rate, and the resulting RBP trend)

Back to the essential difference between DPVS and HDSLB-DPVS. In terms of design, DPVS aims at software-level acceleration on top of the DPDK framework, whereas HDSLB-DPVS pushes that acceleration further into a hardware platform where the CPU and the NIC cooperate, achieving the two goals of "high density" and "scalability":

  • High density: a single HDSLB node sustains an exceptionally high number of concurrent TCP connections and very high throughput.
  • Scalability: performance scales linearly as CPU cores or overall resources are added.

實(shí)踐方面,在最新型的 Intel Xeon CPU(e.g. 3rd & 4th generation)和 E810 100G NIC 硬件平臺(tái)上,實(shí)現(xiàn)了:

  • Concurrent Session : 100M level / Node
  • Throughput : > 8Mpps / Core @FNAT
  • TCP Session Est. Rate > 800K / Core
  • Linear growth

對(duì)此,我們?cè)凇? Intel HDSLB 高性能四層負(fù)載均衡器 — 快速入門和應(yīng)用場(chǎng)景 》文中已經(jīng)對(duì) HDSLB-DPVS 超越前代的性能數(shù)據(jù)進(jìn)行了分析,這里不在贅述。

Deploying and Configuring HDSLB

Hardware Requirements

下面進(jìn)入到實(shí)踐環(huán)節(jié),主要關(guān)注 HDSLB-DPVS 的編譯、部署和配置。為了降低開(kāi)發(fā)者門檻,所以本文主要使用了開(kāi)發(fā)機(jī)低門檻配置來(lái)進(jìn)行部署和調(diào)試。

物理測(cè)試機(jī)性能推薦 虛擬開(kāi)發(fā)機(jī)低門檻推薦
CPU 架構(gòu) Intel Xeon CPU 四代 支持 AVX512 系列指令集的 Intel CPU 型號(hào),例如:Skylake 等
CPU 資源 2NUMA,關(guān)閉超線程 1NUMA,4C
Memory 資源 128G 16G
NIC 型號(hào) Intel E810 100G VirtI/O 驅(qū)動(dòng),支持多隊(duì)列
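A quick way to confirm that a development CPU exposes the AVX-512 instruction sets required by the HDSLB-DPVS build (see below):

$ grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u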

CPU information for this article:

$ lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 57 bits virtual
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              2
Core(s) per socket:              2
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           106
Model name:                      Intel(R) Xeon(R) Platinum 8350C CPU @ 2.60GHz
Stepping:                        6
CPU MHz:                         2599.994
BogoMIPS:                        5199.98
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       96 KiB
L1i cache:                       64 KiB
L2 cache:                        2.5 MiB
L3 cache:                        48 MiB
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_ts
                                 c arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_tim
                                 er aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb ibrs_enhanced fsgsbase tsc_adjust bm
                                 i1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xge
                                 tbv1 xsaves arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid arch_capabilities

Software Requirements

  • OS: Ubuntu 20.04.3
  • Kernel: 5.4.0-110-generic
  • GCC: 9.4.0
  • DPDK: 20.08

Note that the Ubuntu /boot partition should be larger than 2 GB, to avoid kernel-upgrade failures. Reference: https://askubuntu.com/questions/1207958/error-24-write-error-cannot-write-compressed-block
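A quick check before upgrading kernels, assuming /boot is a separate partition:

$ df -h /boot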

OS information for this article:

# Update the system
$ sudo apt-get update -y && sudo apt-get upgrade -y
# Development tools
$ sudo apt-get install git vim wget patch unzip -y
# popt
$ sudo apt-get install libpopt-dev -y
# NUMA
$ sudo apt-get install libnuma-dev -y
$ sudo apt-get install numactl -y
# Pcap
$ sudo apt-get install libpcap-dev -y
# SSL
$ sudo apt-get install libssl-dev -y


# Kernel 5.4.0-136
$ uname -r
5.4.0-136-generic

$ ll /boot/vmlinuz*
lrwxrwxrwx 1 root root       25 Dec 27  2022 /boot/vmlinuz -> vmlinuz-5.4.0-136-generic
-rw------- 1 root root 13660416 Aug 10  2022 /boot/vmlinuz-5.4.0-125-generic
-rw------- 1 root root 13668608 Nov 24  2022 /boot/vmlinuz-5.4.0-136-generic
-rw------- 1 root root 11657976 Apr 21  2020 /boot/vmlinuz-5.4.0-26-generic
lrwxrwxrwx 1 root root       25 Dec 27  2022 /boot/vmlinuz.old -> vmlinuz-5.4.0-125-generic

$ dpkg -l | egrep "linux-(signed|modules|image|headers)" | grep $(uname -r)
ii  linux-headers-5.4.0-136-generic       5.4.0-136.153                     amd64        Linux kernel headers for version 5.4.0 on 64 bit x86 SMP
ii  linux-image-5.4.0-136-generic         5.4.0-136.153                     amd64        Signed kernel image generic
ii  linux-modules-5.4.0-136-generic       5.4.0-136.153                     amd64        Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP
ii  linux-modules-extra-5.4.0-136-generic 5.4.0-136.153                     amd64        Linux kernel extra modules for version 5.4.0 on 64 bit x86 SMP

# GCC 9.4.0
$ gcc --version
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# File descriptor limit
$ ulimit -n 655350
$ echo "ulimit -n 655350" >> /etc/rc.local
$ chmod a+x /etc/rc.local

Building and Installing DPDK

For a detailed walkthrough of installing and deploying DPDK, see: "DPDK — Installation and Deployment".

$ cd /root/
$ git clone https://github.com/intel/high-density-scalable-load-balancer hdslb

$ wget http://fast.dpdk.org/rel/dpdk-20.08.tar.xz
$ tar vxf dpdk-20.08.tar.xz

# 打補(bǔ)丁
$ cp hdslb/patch/dpdk-20.08/*.patch dpdk-20.08/
$ cd dpdk-20.08/
$ patch -p 1 < 0002-support-large_memory.patch
$ patch -p 1 < 0003-net-i40e-ice-support-rx-markid-ofb.patch

# Build
$ make config T=x86_64-native-linuxapp-gcc MAKE_PAUSE=n
$ make MAKE_PAUSE=n -j 4
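If the build succeeds, the artifacts land under build/ in the legacy make layout; a quick sanity check (the rte_kni.ko module is needed later for the KNI interfaces):

$ ls build/app/testpmd
$ ls build/kmod/rte_kni.ko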

Building and Installing HDSLB-DPVS

Building HDSLB-DPVS depends on a number of CPU acceleration instruction sets, such as AVX2 and AVX-512. For the build to succeed, there are two requirements:

  1. CPU support: an Intel Xeon data-center series part is recommended, e.g., Intel Xeon Gold.
  2. GCC support: a reasonably recent GCC is recommended, e.g., the 9.4.0 used in this article.

$ cd dpdk-20.08/
$ export RTE_SDK=$PWD

$ cd ../hdslb/
$ chmod +x tools/keepalived/configure

# Build and install
$ make -j 4
$ make install
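After make install, the tools used in the rest of this article should be under hdslb/bin/:

$ ls bin/
# expect the hdslb binary plus the dpip and ipvsadm tools used below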

Configuring Huge Pages

In a physical test environment, give HDSLB as much huge-page memory as possible: the LB connect pool allocates a large amount of memory, and its size bears directly on the achievable performance specs.

$ mkdir /mnt/huge_1GB
$ mount -t hugetlbfs nodev /mnt/huge_1GB

$ vim /etc/fstab
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0

$ # for NUMA machine
$ echo 15 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

$ vim /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} default_hugepagesz=1G hugepagesz=1G hugepages=15" 

$ sudo update-grub
$ init 6

$ cat /proc/meminfo | grep Huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:      13
HugePages_Free:       13
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        13631488 kB

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi        13Gi       2.0Gi       2.0Mi       408Mi       2.2Gi
Swap:            0B          0B          0B

配置網(wǎng)卡

$ modprobe vfio-pci
$ modprobe vfio enable_unsafe_noiommu_mode=1 # https://stackoverflow.com/questions/75840973/dpdk20-11-3-cannot-bind-device-to-vfio-pci
$ echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

$ cd dpdk-20.08/
$ export RTE_SDK=$PWD
$ insmod ${RTE_SDK}/build/kmod/rte_kni.ko

$ ${RTE_SDK}/usertools/dpdk-devbind.py --status-dev net
Network devices using kernel driver
===================================
0000:01:00.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=vfio-pci *Active*
0000:03:00.0 'Virtio network device 1000' if=eth1 drv=virtio-pci unused=vfio-pci
0000:04:00.0 'Virtio network device 1000' if=eth2 drv=virtio-pci unused=vfio-pci


$ ifconfig eth1 down # 0000:03:00.0
$ ifconfig eth2 down # 0000:04:00.0
$ ${RTE_SDK}/usertools/dpdk-devbind.py -b vfio-pci 0000:03:00.0 0000:04:00.0

$ ${RTE_SDK}/usertools/dpdk-devbind.py --status-dev net
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'Virtio network device 1000' drv=vfio-pci unused=
0000:04:00.0 'Virtio network device 1000' drv=vfio-pci unused=
Network devices using kernel driver
===================================
0000:01:00.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused=vfio-pci *Active*

Configuring HDSLB-DPVS

$ cp conf/hdslb.conf.sample /etc/hdslb.conf

# Configuration walkthrough
$ cat /etc/hdslb.conf
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! This is hdslb default configuration file.
!
! The attribute "<init>" denotes the configuration item at initialization stage. Item of
! this type is configured oneshoot and not reloadable. If invalid value configured in the
! file, hdslb would use its default value.
!
! Note that hdslb configuration file supports the following comment type:
!   * line comment: using '#' or '!'
!   * inline range comment: using '<' and '>', put comment in between
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

! global config
global_defs {
    log_level   DEBUG # DEBUG to ease troubleshooting
    ! log_file    /var/log/hdslb.log
    ! log_async_mode    on
}

! netif config
netif_defs {
     pktpool_size     1048575
     pktpool_cache    256
    # LAN interface
     device dpdk0 {
        rx {
            queue_number        3
            descriptor_number   1024
            ! rss                 all
        }
        tx {
            queue_number        3
            descriptor_number   1024
        }
        fdir {
            mode                perfect
            pballoc             64k
            status              matched
        }
        ! promisc_mode
        kni_name                dpdk0.kni
    }
    # WAN interface
     device dpdk1 {
        rx {
            queue_number        3
            descriptor_number   1024
            ! rss                 all
        }
        tx {
            queue_number        3
            descriptor_number   1024
        }
        fdir {
            mode                perfect
            pballoc             64k
            status              matched
        }
        ! promisc_mode
        kni_name                dpdk1.kni
    }

    !  bonding bond0 {
    !    mode        0
    !    slave       dpdk0
    !    slave       dpdk1
    !    primary     dpdk0
    !    kni_name    bond0.kni
    !}
}

! worker config (lcores)
worker_defs {
    # control plane CPU
     worker cpu0 {
        type    master
        cpu_id  0
    }
    # data plane CPUs
    # the same RX/TX queue index on ports dpdk0 and dpdk1 is served by one CPU
     worker cpu1 {
        type    slave
        cpu_id  1
        port    dpdk0 {
            rx_queue_ids     0
            tx_queue_ids     0
            ! isol_rx_cpu_ids  9
            ! isol_rxq_ring_sz 1048576
        }
        port    dpdk1 {
            rx_queue_ids     0
            tx_queue_ids     0
            ! isol_rx_cpu_ids  9
            ! isol_rxq_ring_sz 1048576
        }
    }
     worker cpu2 {
        type    slave
        cpu_id  2
        port    dpdk0 {
            rx_queue_ids     1
            tx_queue_ids     1
            ! isol_rx_cpu_ids  10
            ! isol_rxq_ring_sz 1048576
        }
        port    dpdk1 {
            rx_queue_ids     1
            tx_queue_ids     1
            ! isol_rx_cpu_ids  10
            ! isol_rxq_ring_sz 1048576
        }
    }
     worker cpu3 {
        type    slave
        cpu_id  3
        port    dpdk0 {
            rx_queue_ids     2
            tx_queue_ids     2
            ! isol_rx_cpu_ids  11
            ! isol_rxq_ring_sz 1048576
        }
        port    dpdk1 {
            rx_queue_ids     2
            tx_queue_ids     2
            ! isol_rx_cpu_ids  11
            ! isol_rxq_ring_sz 1048576
        }
    }

}

! timer config
timer_defs {
    # cpu job loops to schedule dpdk timer management
    schedule_interval    500
}

! hdslb neighbor config
neigh_defs {
     unres_queue_length  128
     timeout             60
}

! hdslb ipv4 config
ipv4_defs {
    forwarding                 off
     default_ttl         64
    fragment {
         bucket_number   4096
         bucket_entries  16
         max_entries     4096
         ttl             1
    }
}

! hdslb ipv6 config
ipv6_defs {
    disable                     off
    forwarding                  off
    route6 {
         method           hlist
        recycle_time            10
    }
}

! control plane config
ctrl_defs {
    lcore_msg {
         ring_size                4096
        sync_msg_timeout_us             30000000
        priority_level                  low
    }
    ipc_msg {
         unix_domain /var/run/hdslb_ctrl
    }
}

! ipvs config
ipvs_defs {
    conn {
         conn_pool_size       2097152
         conn_pool_cache      256
        conn_init_timeout           30
        ! expire_quiescent_template
        ! fast_xmit_close
        !  redirect           off
    }

    udp {
        ! defence_udp_drop
        uoa_mode        opp
        uoa_max_trail   3
        timeout {
            normal      300
            last        3
        }
    }

    tcp {
        ! defence_tcp_drop
        timeout {
            none        2
            established 90
            syn_sent    3
            syn_recv    30
            fin_wait    7
            time_wait   7
            close       3
            close_wait  7
            last_ack    7
            listen      120
            synack      30
            last        2
        }
        synproxy {
            synack_options {
                mss             1452
                ttl             63
                sack
                ! wscale
                ! timestamp
            }
            ! defer_rs_syn
            rs_syn_max_retry    3
            ack_storm_thresh    10
            max_ack_saved       3
            conn_reuse_state {
                close
                time_wait
                ! fin_wait
                ! close_wait
                ! last_ack
           }
        }
    }
}

! sa_pool config
sa_pool {
    pool_hash_size   16
}

Starting HDSLB-DPVS

$ cd hdslb/
$ ./bin/hdslb
current thread affinity is set to F
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:01:00.0 (socket 0)
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:03:00.0 (socket 0)
EAL:   using IOMMU type 8 (No-IOMMU)
EAL: Ignore mapping IO port bar(0)
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_virtio (1af4:1000) device: 0000:04:00.0 (socket 0)
EAL: Ignore mapping IO port bar(0)
EAL: No legacy callbacks, legacy socket not created
DPVS: HDSLB version: , build on 2024.05.24.14:37:02
CFG_FILE: Opening configuration file '/etc/hdslb.conf'.
CFG_FILE: log_level = WARNING
NETIF: dpdk0:rx_queue_number = 3
NETIF: dpdk1:rx_queue_number = 3
NETIF: worker cpu1:dpdk0 rx_queue_id += 0
NETIF: worker cpu1:dpdk0 tx_queue_id += 0
NETIF: worker cpu1:dpdk1 rx_queue_id += 0
NETIF: worker cpu1:dpdk1 tx_queue_id += 0
NETIF: worker cpu2:dpdk0 rx_queue_id += 1
NETIF: worker cpu2:dpdk0 tx_queue_id += 1
NETIF: worker cpu2:dpdk1 rx_queue_id += 1
NETIF: worker cpu2:dpdk1 tx_queue_id += 1
NETIF: worker cpu3:dpdk0 rx_queue_id += 2
NETIF: worker cpu3:dpdk0 tx_queue_id += 2
NETIF: worker cpu3:dpdk1 rx_queue_id += 2
NETIF: worker cpu3:dpdk1 tx_queue_id += 2
Kni: kni_add_dev: fail to set mac FA:27:00:00:0A:02 for dpdk0.kni: Timer expired
Kni: kni_add_dev: fail to set mac FA:27:00:00:0B:F6 for dpdk1.kni: Timer expired


HDSLB-DPVS 進(jìn)程起來(lái)后,可以看見(jiàn) 2 個(gè) DPDK Port 和對(duì)應(yīng)的 2 個(gè) KNI Interface。其中 DPDK Port 用于 LB 數(shù)據(jù)面轉(zhuǎn)發(fā),而 KNI 則用于 Keepalived HA 部署模式。

$ cd hdslb/bin/
$ ./dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 3 tx-queue 3
    UP 10000 Mbps half-duplex auto-nego
    addr FA:27:00:00:0A:02 OF_TX_IP_CSUM
2: dpdk1: socket 0 mtu 1500 rx-queue 3 tx-queue 3
    UP 10000 Mbps half-duplex auto-nego
    addr FA:27:00:00:0B:F6 OF_TX_IP_CSUM


$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether fa:27:00:00:00:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.4/25 brd 192.168.0.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f827:ff:fe00:c0/64 scope link
       valid_lft forever preferred_lft forever
71: dpdk0.kni:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fa:27:00:00:0a:02 brd ff:ff:ff:ff:ff:ff
72: dpdk1.kni:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether fa:27:00:00:0b:f6 brd ff:ff:ff:ff:ff:ff

測(cè)試 HDSLB-DPVS Two-arm Full-NAT 模式


  • HDSLB-DPVS
$ cd hdslb/bin

# add VIP to WAN interface
$ ./dpip addr add 10.0.0.100/32 dev dpdk1

# route for WAN/LAN access
$ ./dpip route add 10.0.0.0/16 dev dpdk1
$ ./dpip route add 192.168.100.0/24 dev dpdk0

# add routes for other network or default route if needed.
$ ./dpip route show
inet 10.0.0.100/32 via 0.0.0.0 src 0.0.0.0 dev dpdk1 mtu 1500 tos 0 scope host metric 0 proto auto
inet 192.168.100.0/24 via 0.0.0.0 src 0.0.0.0 dev dpdk0 mtu 1500 tos 0 scope link metric 0 proto auto
inet 10.0.0.0/16 via 0.0.0.0 src 0.0.0.0 dev dpdk1 mtu 1500 tos 0 scope link metric 0 proto auto

# add service  to forwarding, scheduling mode is RR.
$ ./ipvsadm -A -t 10.0.0.100:80 -s rr

# add two RS for service, forwarding mode is FNAT (-b)
$ ./ipvsadm -a -t 10.0.0.100:80 -r 192.168.100.2 -b
$ ./ipvsadm -a -t 10.0.0.100:80 -r 192.168.100.3 -b

# add at least one Local-IP (LIP) for FNAT on LAN interface
$ ./ipvsadm --add-laddr -z 192.168.100.200 -t 10.0.0.100:80 -F dpdk0

# Check
$  ./ipvsadm -Ln
IP Virtual Server version 0.0.0 (size=0)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 rr
  -> 192.168.100.2:80             FullNat 1      0          0
  -> 192.168.100.3:80             FullNat 1      0          0
  • Server
$ python -m SimpleHTTPServer 80
  • Client
$ curl 10.0.0.100
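With round-robin scheduling, repeated requests should alternate between the two RSs. The per-RS counters on the HDSLB node offer a quick check; the --stats flag exists in the kernel ipvsadm and is assumed to be supported by this build:

$ for i in 1 2 3 4; do curl -s http://10.0.0.100/ > /dev/null; done
$ ./ipvsadm -Ln --stats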

問(wèn)題分析

問(wèn)題 1 :hdslb/tools/keepalived/configure 沒(méi)有執(zhí)行權(quán)限。

make[1]: Leaving directory '/root/hdslb/src'
make[1]: Entering directory '/root/hdslb/tools'
if [ ! -f keepalived/Makefile ]; then \
        cd keepalived && \
        ./configure && \
        cd -; \
fi
/bin/sh: 3: ./configure: Permission denied
make[1]: *** [Makefile:29: keepalived_conf] Error 126
make[1]: Leaving directory '/root/hdslb/tools'
make: *** [Makefile:33: all] Error 1

# Fix
$ chmod +x /root/hdslb/tools/keepalived/configure

問(wèn)題 2 :缺少配置文件

Cause: ports in DPDK RTE (2) != ports in dpvs.conf(0)

# Fix
$ cp conf/hdslb.conf.sample /etc/hdslb.conf

問(wèn)題 3 :開(kāi)發(fā)機(jī) 2MB hugepage size 太小

Cause: Cannot init mbuf pool on socket 0

# Fix: set the hugepage size to 1 GB
# ref: https://stackoverflow.com/questions/51630926/cannot-create-mbuf-pool-with-dpdk

問(wèn)題 4 :缺少 rte_kni 模塊

Cause: add KNI port fail, exiting...

# Fix
$ insmod ${RTE_SDK}/build/kmod/rte_kni.ko

問(wèn)題 5 :開(kāi)發(fā)機(jī)大頁(yè)內(nèi)存不夠

Kni: kni_add_dev: fail to set mac FA:27:00:00:07:AA for dpdk0.kni: Timer expired
Kni: kni_add_dev: fail to set mac FA:27:00:00:00:E1 for dpdk1.kni: Timer expired
IPSET: ipset_init: lcore 3: nothing to do.
IPVS: dp_vs_conn_init: lcore 3: nothing to do.
IPVS: fail to init synproxy: no memory
Segmentation fault (core dumped)

# Fix: increase huge-page memory to 15 GB.

問(wèn)題 6 :開(kāi)發(fā)機(jī)網(wǎng)卡不支持 HDSLB-DPVS 需要的 hardware offloads 功能。

Kni: kni_add_dev: fail to set mac FA:27:00:00:0A:02 for dpdk0.kni: Timer expired
Kni: kni_add_dev: fail to set mac FA:27:00:00:0B:F6 for dpdk1.kni: Timer expired
Ethdev port_id=0 requested Rx offloads 0x62f doesn't match Rx offloads capabilities 0xa1d in rte_eth_dev_configure()
NETIF: netif_port_start: fail to config dpdk0
EAL: Error - exiting with code: 1
  Cause: Start dpdk0 failed, skipping ...

# Fix: patch the netif module so it does not enable the unsupported offloads.
static struct rte_eth_conf default_port_conf = {
    .rxmode = {
......
        .offloads = 0,
        //.offloads = DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_VLAN,
    },
......
    .txmode = {
......
        .offloads = 0,
        //.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM | DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_MBUF_FAST_FREE,
    },

NOTE: according to the DPDK documentation, each bit of the offloads mask represents a specific offload capability. The RX offload flags include, among others, the following (exact bit positions vary across DPDK releases, so check rte_ethdev.h of the tree in use; the error above is decoded right after the list):

  1. DEV_RX_OFFLOAD_VLAN_STRIP
  2. DEV_RX_OFFLOAD_IPV4_CKSUM
  3. DEV_RX_OFFLOAD_UDP_CKSUM
  4. DEV_RX_OFFLOAD_TCP_CKSUM
  5. DEV_RX_OFFLOAD_TCP_LRO
  6. DEV_RX_OFFLOAD_QINQ_STRIP
  7. DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
  8. DEV_RX_OFFLOAD_MACSEC_STRIP
  9. DEV_RX_OFFLOAD_VLAN_FILTER
  10. DEV_RX_OFFLOAD_VLAN_EXTEND
  11. DEV_RX_OFFLOAD_SCATTER
  12. DEV_RX_OFFLOAD_TIMESTAMP
  13. DEV_RX_OFFLOAD_SECURITY
  14. DEV_RX_OFFLOAD_KEEP_CRC
  15. DEV_RX_OFFLOAD_SCTP_CKSUM
  16. DEV_RX_OFFLOAD_OUTER_UDP_CKSUM

問(wèn)題 7 :開(kāi)發(fā)機(jī)網(wǎng)絡(luò)不支持 RSS 多隊(duì)列。valid value: 0x0 表示當(dāng)前網(wǎng)卡不支持任何 RSS 哈希函數(shù)。


Kni: kni_add_dev: fail to set mac FA:27:00:00:0A:02 for dpdk0.kni: Timer expired
Kni: kni_add_dev: fail to set mac FA:27:00:00:0B:F6 for dpdk1.kni: Timer expired
Ethdev port_id=0 invalid rss_hf: 0x3afbc, valid value: 0x0
NETIF: netif_port_start: fail to config dpdk0
EAL: Error - exiting with code: 1
  Cause: Start dpdk0 failed, skipping ...

# Fix option 1: use a NIC that supports multi-queue and RSS hashing.
# Fix option 2: patch the netif module so it does not enable multi-queue RSS hashing.
static struct rte_eth_conf default_port_conf = {
    .rxmode = {
        //.mq_mode        = ETH_MQ_RX_RSS,
        .mq_mode        = ETH_MQ_RX_NONE,
......
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,
            //.rss_hf  = /*ETH_RSS_IP*/ ETH_RSS_TCP,
            .rss_hf  = 0
        },
    },
......
port->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;    

問(wèn)題 8 :不支持多播地址配置

Kni: kni_add_dev: fail to set mac FA:27:00:00:0A:02 for dpdk0.kni: Timer expired
Kni: kni_add_dev: fail to set mac FA:27:00:00:0B:F6 for dpdk1.kni: Timer expired
NETIF: multicast address add failed for device dpdk0
EAL: Error - exiting with code: 1
  Cause: Start dpdk0 failed, skipping ...

# Fix: disable the multicast setup
    //ret = idev_add_mcast_init(port);
    //if (ret != EDPVS_OK) {
    //    RTE_LOG(WARNING, NETIF, "multicast address add failed for device %s\n", port->name);
    //    return ret;
    //}

問(wèn)題 9 :LB connect pool 內(nèi)存太小,程序崩潰退出。

$ ./ipvsadm -A -t 10.0.0.100:80 -s rr
[sockopt_msg_recv] socket msg header recv error -- 0/88 recieved  

IPVS: lb_conn_hash_table_init: lcore 0: create conn_hash_tbl failed. Requested size: 1073741824 bytes. LB_CONN_CACHE_LINES_DEF: 1, LB_CONN_TBL_SIZE: 16777216

# Fix option 1: keep increasing huge-page memory to the size actually required.
# Fix option 2:
#   1) free the huge-page memory reserved for one lcore
#   2) reduce DPVS_CONN_POOL_SIZE_DEF from 2097152 to 1048576
//#define DPVS_CONN_POOL_SIZE_DEF     2097152
#define DPVS_CONN_POOL_SIZE_DEF     1048576
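The requested size in the error message is consistent with the constants it prints: the per-lcore connection hash table asks for LB_CONN_TBL_SIZE buckets of LB_CONN_CACHE_LINES_DEF 64-byte cache lines each:

$ echo $(( 16777216 * 1 * 64 ))
1073741824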

問(wèn)題 10 :編譯器版本低缺少編譯指令。

error: inlining failed in call to always_inline   "'_mm256_cmpeq_epi64_mask':"  : target specific option mismatch

# Fix:
# 1) upgrade GCC to 9.4.0
# 2) confirm that the CPU supports the instruction set. ref: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#expand=3828,301,2553&text=_mm256_cmpeq_epi64_mask&ig_expand=872


Final Words

Note that the issues above were recorded while debugging on a low-spec development VM; on a physical test machine with sufficient resources, most of these resource-shortage problems simply do not occur.

Finally, this article has covered the basic operating principles of Intel HDSLB and how to deploy and configure it, hopefully helping readers get the HDSLB-DPVS project up and running. Building on the same development environment, a follow-up article, "Intel HDSLB High-Performance Layer-4 Load Balancer — Advanced Features and Code Analysis", will dig further into HDSLB-DPVS's advanced features, software architecture, and source code. Stay tuned. :)
