Cloud-Native: Advanced Linux Notes

26-1-13 (Review)

1. What are the kernel, the shell, Linux, and open source?

Concepts: kernel, shell, Linux, open source

Kernel: the core component of the operating system and the "bridge" between hardware and software; it is also the foundation of the entire OS.

Shell: the interface between the user and the kernel, which can be understood as a "command interpreter". Linux shell scripts are written against it.

Linux: strictly speaking, an open-source operating-system kernel; in everyday usage the name also covers the distributions built on top of it.

Open source: a software licensing model, the opposite of "closed source"; its core idea is that the source code is publicly available.

2. Entering commands in Linux

2.1. Command-line shortcuts

2.1.1. Cursor movement

Shortcut  Action
Ctrl + A  Move to the beginning of the line
Ctrl + E  Move to the end of the line
Alt + F   Move right one word (space-delimited)
Alt + B   Move left one word
Ctrl + ←  Same as Alt + B (some terminals)
Ctrl + →  Same as Alt + F (some terminals)

2.1.2. Command editing

Shortcut  Action
Ctrl + U  Delete everything left of the cursor (to the start of the line)
Ctrl + K  Delete everything right of the cursor (to the end of the line)
Ctrl + W  Delete one word left of the cursor
Alt + D   Delete one word right of the cursor
Ctrl + Y  Paste what was last deleted with Ctrl+U/K/W (the kill ring)
Ctrl + H  Same as Backspace (delete the character before the cursor)
Ctrl + T  Swap the two characters before the cursor (handy for transposed typos)

2.1.3. History

Shortcut  Action
↑ / ↓     Step backward / forward through history (in input order)
Ctrl + R  Search history (type a keyword to match the most recent command; press Ctrl+R again for older matches)
Ctrl + G  Leave the Ctrl+R search and return to the current command line
!!        Re-run the previous command (faster than the arrow keys)
!n        Run command number n from history (the history command shows the list and numbers)
!string   Run the most recent command starting with string (e.g. !ls runs the latest ls command)
Alt + .   Insert the last argument of the previous command (e.g. after cp a.txt /home, it inserts /home)

2.1.4. Process control

Shortcut  Action
Ctrl + C  Kill the currently running command (most common, e.g. when a command hangs)
Ctrl + Z  Suspend the current command (paused in the background); fg brings it back to the foreground
Ctrl + D  Exit the current shell session (same as typing exit)

2.1.5. Terminal

Shortcut  Action
Ctrl + L  Clear the screen (same as clear; history is untouched)
Ctrl + Shift + C  Copy the selected text (graphical terminal emulators)
Ctrl + Shift + V  Paste into the terminal (graphical terminal emulators)

2.1.6. Bash shortcut quick-reference table

Category  Shortcut  Action
Cursor movement  Ctrl + A  Beginning of line
Ctrl + E  End of line
Alt + F  Right one word
Alt + B  Left one word
Ctrl + ←  Left one word (some terminals)
Ctrl + →  Right one word (some terminals)
Command editing  Ctrl + U  Delete everything left of the cursor
Ctrl + K  Delete everything right of the cursor
Ctrl + W  Delete one word to the left
Alt + D  Delete one word to the right
Ctrl + Y  Paste the last deleted text
Ctrl + H  Same as Backspace
Ctrl + T  Swap the two characters before the cursor
History  ↑ / ↓  Step through history
Ctrl + R  Search history (Ctrl+R again for older matches)
Ctrl + G  Leave history search
!!  Run the previous command
!n  Run command number n from history
!string  Run the most recent command starting with string
Alt + .  Insert the last argument of the previous command
Process control  Ctrl + C  Kill the current command
Ctrl + Z  Suspend the current command (fg resumes)
Ctrl + D  Exit the shell session
Terminal  Ctrl + L  Clear the screen
Ctrl + Shift + C  Copy selection (graphical terminals)
Ctrl + Shift + V  Paste (graphical terminals)

3. Getting help on the command line

command --help

man command

# Watch a recursive directory listing refresh every second
watch -n 1 ls -Rl
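A minimal sketch of the two help channels above, assuming a GNU userland where `--help` is supported (man pages may be absent on minimal installs):

```shell
# Built-in help: print the first few lines of ls's usage text
ls --help | head -n 3

# Manual pages: "man ls" opens the full page;
# "man -k keyword" searches page titles (needs the man-db package)
man -k passwd 2>/dev/null | head -n 3
```

`--help` is the quick reference; `man` is the authoritative, complete one.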

4. Linux file management

4.1. Common commands

File management is a core part of working with Linux; the everyday commands cover creating, viewing, copying, moving, and deleting files and directories, plus permission changes. The table below groups the common commands by function, with key options and examples:

Category  Command  Description  Key options & examples
Directory ops  pwd  Print the absolute path of the current working directory  No options. Example: pwd → /home/user
cd  Change the working directory  - cd ~: go to the current user's home directory  - cd ..: go up one level  - cd /usr/local: go to an absolute path
ls  List files and subdirectories  - -l: long format (permissions, size, time, etc.)  - -a: show hidden files (names starting with .)  - -h: human-readable sizes (KB/MB/GB). Example: ls -lah
mkdir  Create a directory  - -p: create parent directories as needed. Example: mkdir -p /tmp/test/abc
rmdir  Remove an empty directory  Only works on empty directories. Example: rmdir /tmp/test/abc
File ops  touch  Create an empty file or update a file's timestamps  Example: touch newfile.txt → creates an empty file
cp  Copy files or directories  - -r: copy directories recursively (required for directories)  - -f: overwrite targets without prompting. Example 1: cp file.txt /tmp/  Example 2: cp -r dir1 /tmp/
mv  Move / rename files or directories  Within one directory it renames; across directories it moves. Example 1: mv oldname.txt newname.txt  Example 2: mv file.txt /tmp/
rm  Delete files or directories  - -r: delete directories recursively  - -f: force, no confirmation. Dangerous: rm -rf / (never run it — it deletes everything). Example: rm -rf dir1 → delete a directory and all its contents
Viewing  cat  Print a whole file (small files)  - -n: number the lines. Example: cat -n /etc/hosts
more  Page through large files (forward only)  Example: more /var/log/messages → Space to page, q to quit
less  Page through large files (both directions)  Example: less /var/log/syslog → ↑/↓ to scroll, q to quit
head  Show the first N lines (default 10)  -n: line count. Example: head -n 5 file.txt
tail  Show the last N lines (default 10)  - -n: line count  - -f: follow new content in real time (log monitoring). Example 1: tail -n 20 log.txt  Example 2: tail -f /var/log/nginx/access.log
Permissions  chmod  Change file/directory permissions  Two forms: 1. symbolic: chmod u+x file.sh (add execute for the owner)  2. numeric: chmod 755 file.sh (r=4, w=2, x=1, 7=4+2+1). Common modes: 755 (owner rwx, others read+execute), 644 (owner read+write, others read-only)
chown  Change a file/directory's owner and group  - -R: recurse into directories. Example: chown -R user:group /home/user/data
Searching  find  Find files by path, name, size, etc.  - By name: find /tmp -name "*.txt"  - By size: find / -size +100M (files larger than 100 M)  - By type: find /home -type d (directories only)
which  Locate a command's executable  Example: which ls → /usr/bin/ls
locate  Fast file search (database-backed)  Update the database first: updatedb. Example: locate passwd
Links  ln  Create hard or symbolic links  - Symlink (common): ln -s /path/source /path/link → like a shortcut  - Hard link: ln /path/source /path/link → shares the source's inode

4.2. Additional notes

  1. Hidden files: names starting with . are hidden; list them with ls -a.
  2. Absolute vs relative paths
    • Absolute: starts at the root /, e.g. /home/user/file.txt
    • Relative: relative to the current directory, e.g. ../file.txt (a file in the parent directory)
  3. Permission symbols: r (read, 4), w (write, 2), x (execute, 1); permissions come in three classes: u (owner), g (group), o (others).
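The symbolic and numeric chmod forms are interchangeable; a small sketch on a scratch file shows both producing the same mode (uses GNU `stat -c`, as on RHEL):

```shell
# Set 755 numerically, then reproduce the same mode symbolically
f=$(mktemp)
chmod 755 "$f"
stat -c '%a' "$f"           # → 755
chmod u=rwx,g=rx,o=rx "$f"  # same mode in symbolic form
stat -c '%a' "$f"           # → 755
rm -f "$f"
```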

5. Batch file processing

5.1. Batch-processing command table

Command / core syntax  Purpose  Example (scenario + command)
for loop  Iterate over files/directories and run a custom batch operation (beginner-friendly)  Scenario: rename every .txt in the current directory to .bak: for f in *.txt; do mv "$f" "${f%.txt}.bak"; done  (${f%.txt} strips the .txt suffix)
find + xargs  Find files and batch-run a command on them (good for large sets)  Scenario: delete every .log under the current directory and its subdirectories: find . -type f -name "*.log" | xargs rm -f  (-type f restricts to files; -f forces deletion without prompting)
mv (batch rename)  Basic batch rename/move  Scenario: move file1.txt, file2.txt, … into the backup directory: mv file*.txt backup/
cp (batch copy)  Batch-copy files to a directory / batch backup  Scenario: copy every .sh script into /usr/local/bin: cp *.sh /usr/local/bin/
rm (batch delete)  Batch-delete files by type/condition (use with care)  Scenario: delete every empty (size-0) file in the current directory: rm -f $(find . -type f -size 0)
sed (batch content replace)  Edit text inside many files at once  Scenario: replace old_ip with 192.168.1.1 in every .conf file: sed -i 's/old_ip/192.168.1.1/g' *.conf  (-i edits files in place; g replaces globally)
rename (dedicated renamer)  More concise batch renaming (regex support)  Scenario: replace test with prod in file names (test1.txt → prod1.txt): rename 's/test/prod/' *.txt  (install first: apt install rename / yum install rename)
awk (batch content extraction)  Extract/filter fields from many files  Scenario: extract column 2 of every .csv into result.txt: awk -F ',' '{print $2}' *.csv > result.txt  (-F ',' sets the field separator to a comma)
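The for-loop row from the table, written out end-to-end in a throwaway directory so it is safe to run anywhere:

```shell
# Batch-rename every .txt in a scratch directory to .bak
d=$(mktemp -d)
touch "$d/a.txt" "$d/b.txt"
for f in "$d"/*.txt; do
    mv "$f" "${f%.txt}.bak"   # ${f%.txt} strips the .txt suffix
done
ls "$d"                       # → a.bak  b.bak
rm -rf "$d"
```

The same `${var%suffix}` trick works for any extension swap, e.g. `${f%.jpeg}.jpg`.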

5.2. Regular-expression symbols / character classes

Regex symbol  Meaning  Example (matches / does not match)
*  Quantifier: match the preceding item 0 or more times (greedy)  a*: matches the empty string, a, aa, aaa; applies only to the preceding a. Example: echo "aaa b" | grep -o 'a*' → aaa, plus empty matches around b
?  Quantifier: match the preceding item 0 or 1 time  a?: each match is the empty string or a single a, never aa. Example (extended regex): echo "a aa aaa" | grep -oE 'a?'
[[:alpha:]]  Any letter (upper/lower case; locale-aware)  Matches a, B (and e.g. 中 in a Chinese locale); not 1, !, space. Example: echo "Ab123 中!" | grep -o '[[:alpha:]]' → A b 中
[[:digit:]]  Any digit; equivalent to [0-9]  Matches 0, 5, 9; not a, $, space. Example: echo "abc123def" | grep -o '[[:digit:]]' → 1 2 3
[[:lower:]]  Any lowercase letter; [a-z]  Matches a, z, m; not A, 1, !. Example: echo "AbCdEf123" | grep -o '[[:lower:]]' → b d f
[[:upper:]]  Any uppercase letter; [A-Z]  Matches A, Z, M; not a, 9, @. Example: echo "AbCdEf123" | grep -o '[[:upper:]]' → A C E
[[:alnum:]]  Any letter or digit; [a-zA-Z0-9]  Matches a, B, 8; not !, space, @. Example: echo "abc123!@#" | grep -o '[[:alnum:]]' → a b c 1 2 3
[[:punct:]]  Any punctuation (printable, non-alphanumeric, non-space)  Matches !, @, #, comma, period; not a, 5, space. Example: echo "abc!123@def" | grep -o '[[:punct:]]' → ! @
[[:space:]]  Any whitespace (space, tab \t, newline \n, …)  Matches space, \t, \n; not a, 9, !. Example: printf 'a b\tc\nd' | grep -o '[[:space:]]' outputs the space, the tab, and the newline

5.3. Additional notes

  1. grep -o in the examples prints only the matched parts (-o), so you can paste a command into a terminal and see the effect directly;
  2. In Chinese/multilingual locales, some classes (e.g. [[:alpha:]]) also match non-English letters (Chinese, Japanese, …), whereas [a-zA-Z] matches English letters only — an advantage of the POSIX character classes.

5.4. Summary

  1. * and ? are quantifiers: the examples are about how many times the preceding item matches, not about matching a particular character;
  2. For the [[:xxx:]] classes, compare the "matches / does not match" columns to learn each range quickly;
  3. Verifying examples with grep -o is an efficient way for beginners to learn regular expressions.
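Following that verification workflow, here is the character-class check done concretely: pipe a test string into grep -o and read the matches line by line.

```shell
# [[:digit:]] extracts each digit as its own match
echo "Ab123 !@" | grep -o '[[:digit:]]'
# prints:
# 1
# 2
# 3

# [[:punct:]] extracts the punctuation characters
echo "Ab123 !@" | grep -o '[[:punct:]]' | tr '\n' ' '   # → ! @
```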

6. Input and output in Linux

6.1. What is a character device?

In Linux, a character device is a class of hardware accessed as a stream of characters (bytes); it is one of the two main categories of Linux device files (the other is the block device).

Key differences from block devices

Property  Character device  Block device
Access unit  Bytes (character stream)  Fixed-size blocks (typically 512 B / 4 KB)
Access pattern  Serial, sequential; no random access  Random access (seek to any block)
Buffering  None or small  Large in-memory buffers (better throughput)
Type flag in ls -l  c  b
Typical devices  Keyboard, mouse, serial port  Hard disk, USB drive, SD card

6.2. Input and output

In Linux, input and output revolve around the "everything is a file" idea: a user program reads and writes file descriptors through interfaces provided by the kernel, and the kernel mediates between the program and the hardware.

Linux abstracts every I/O device as a file; programs never deal with hardware details and instead identify and operate on these "files" through file descriptors (FDs).

  1. File descriptors

    An FD is a non-negative integer the kernel uses to index and manage open files (hardware devices, regular files, pipes, …).

    Every process gets three standard file descriptors by default:

FD  Name  Device / purpose  Abbreviation
0  Standard input  Keyboard, pipe input, …  stdin
1  Standard output  Terminal, pipe output, …  stdout
2  Standard error  Terminal (dedicated to error messages)  stderr

  2. Core principle

    Input = reading data from a file descriptor (e.g. reading keyboard input from stdin)

    Output = writing data to a file descriptor (e.g. writing terminal output to stdout)

  3. Essence

    Linux I/O is a three-layer interaction: user program ↔ kernel ↔ hardware

    • User program: issues I/O requests via the read/write system calls; it only handles file descriptors, never the hardware.
    • Kernel: manages file descriptors, buffers, and device drivers; it is the central mediator of I/O.
    • Hardware: driven by device drivers, it performs the physical input/output (the keyboard receives keystrokes, the screen displays characters).
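The three standard descriptors are easiest to see through shell redirection, which targets FDs 1 and 2 independently; a minimal sketch in a scratch directory:

```shell
# Split stdout (FD 1) and stderr (FD 2) into separate files
d=$(mktemp -d)
ls /etc/hosts /no/such/file > "$d/out.txt" 2> "$d/err.txt" || true
cat "$d/out.txt"   # /etc/hosts (went to FD 1)
cat "$d/err.txt"   # ls's "cannot access" message (went to FD 2)

# Merge stderr into stdout with 2>&1 (order matters: redirect FD 1 first)
ls /etc/hosts /no/such/file > "$d/all.txt" 2>&1 || true
rm -rf "$d"
```

This is why `command > log 2>&1` captures everything while `command > log` still prints errors to the terminal.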

26-1-14 (Review)

1. Installing RHEL 9.6

PixPin_2026-01-17_11-20-55

Options → Advanced → Firmware Type: change to BIOS

For the installation source / software selection, choose Server

Virtual network configuration

PixPin_2026-01-17_11-26-38

PixPin_2026-01-17_11-26-45

2. RHEL 9.6 configuration

2.1. Disabling security features

# Disable the firewall
[root@base ~]# systemctl disable --now firewalld
Removed "/etc/systemd/system/multi-user.target.wants/firewalld.service".
Removed "/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service".
[root@base ~]# systemctl mask firewalld
Created symlink /etc/systemd/system/firewalld.service → /dev/null.

# Disable SELinux
[root@base ~]# vim /etc/sysconfig/selinux
SELINUX=disabled

2.2. Changing the network configuration

[root@base ~]# vim /boot/loader/entries/1199b04b0d974659ad491305f98dcfe0-5.14.0-570.12.1.el9_6.x86_64.conf
Append net.ifnames=0 after quiet
[root@base ~]# cd /etc/NetworkManager/system-connections/
[root@base system-connections]# mv ens160.nmconnection eth0.nmconnection
[root@base system-connections]# vim eth0.nmconnection
Change id and interface-name to eth0
Delete the uuid line
# If the NIC misbehaves
nmcli connection reload
nmcli connection up eth0
nmcli networking
nmcli networking on
nmcli networking show
ifconfig

2.3. Persistent mount

[root@base ~]# mkdir /rhel9
[root@base ~]# mount /dev/cdrom /rhel9/
mount: /rhel9: WARNING: source write-protected, mounted read-only.
[root@base ~]# vim /etc/rc.d/rc.local
Append mount /dev/cdrom /rhel9 as the last line
[root@base ~]# vim /etc/rc.d/rc.local
[root@base ~]# chmod +x /etc/rc.d/rc.local

2.4. Repository configuration

2.4.1. Local repository

[root@base ~]# cd /etc/yum.repos.d/
[root@base yum.repos.d]# vim rhel.repo
[AppStream]
name = AppStream
baseurl = file:///rhel9/AppStream
gpgcheck = 0
[BaseOS]
name = BaseOS
baseurl = file:///rhel9/BaseOS
gpgcheck = 0

# Verify
[root@base yum.repos.d]# dnf list httpd

2.4.2. Network repositories

# Docker CE network repository
https://mirrors.aliyun.com/docker-ce/linux/rhel/9.6/x86_64/stable/
# EPEL repository
https://mirrors.aliyun.com/epel-archive/9.6-2025-11-11/Everything/x86_64/

# Verify
[root@base ~]# dnf search docker
[root@base ~]# dnf search ansible

2.4.3. Self-hosted repository

[root@base ~]# dnf install httpd -y
[root@base ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@base ~]# mkdir /var/www/html/software
[root@base ~]# dnf install docker-ce --downloadonly --destdir /mnt -y
[root@base ~]# mv /mnt/* /var/www/html/software/
[root@base ~]# dnf install createrepo -y
[root@base ~]# createrepo -v /var/www/html/software/
[root@base yum.repos.d]# vim software.repo
[software]
name = software
baseurl = http://172.25.254.128/software/
gpgcheck = 0

# Verify
[root@base ~]# dnf info docker-ce.x86_64

26-1-16 (Review)

1. Network management

1.1. IP configuration

# Check whether the NIC exists
[root@node1 system-connections]# ls /sys/class/net/ | grep eth1
eth1

# Check whether the NIC already has a connection
[root@node1 system-connections]# nmcli connection show
NAME UUID TYPE DEVICE
eth0 7ba00b1d-8cdd-30da-91ad-bb83ed4f7474 ethernet eth0
lo d97aa458-8557-4dd1-a224-0167b68b3f84 loopback lo

# Change the NIC's IP
[root@node1 ~]# nmcli connection modify eth0 ipv4.addresses 172.25.254.130/24
# Reload and reactivate the connection
[root@node1 system-connections]# nmcli connection reload
[root@node1 system-connections]# nmcli connection up eth0

[root@node1 ~]# cd /etc/NetworkManager/system-connections/
[root@node1 system-connections]# vim eth1.nmconnection
[connection]
id=eth1
type=ethernet
interface-name=eth1

[ipv4]
method=manual
address1=172.25.254.100/24
gateway=172.25.254.2
dns=114.114.114.114;

[root@node1 system-connections]# chmod 600 eth1.nmconnection

ip a s eth1 # show the IP
route -n # routing table / gateway
cat /etc/resolv.conf # show DNS

1.2. Network script

#!/bin/bash
# ===================== Argument order (fixed, to avoid confusion) =====================
#
# Usage: vmset.sh <hostname> <interface> <ip-address> [optional: nogateway|gateway]
# Example 1: with the default gateway (172.25.254.2) → for the primary NIC
# vmset.sh base eth0 172.25.254.100
# Example 2: no gateway (avoids duplicate routes) → for secondary NICs
# vmset.sh base eth1 172.25.254.101 nogateway
# Example 3: custom gateway
# vmset.sh base eth0 192.168.1.100 192.168.1.1
#
# ======================================================================================

# 1. Argument check (at least 3: hostname, interface, IP)
[ $# -lt 3 ] && {
echo "Error: at least 3 arguments are required!"
echo "Usage: vmset.sh <hostname> <interface> <ip-address> [nogateway|gateway]"
echo "Example: vmset.sh haha eth1 172.25.254.100 nogateway"
exit 1
}

# 2. Variables
HOSTNAME=$1 # hostname (required, e.g. base)
IFACE=$2 # interface name (required, e.g. eth1)
IP=$3 # IP address (required, e.g. 172.25.254.100)
DEFAULT_GW="172.25.254.2" # default gateway
OPTION=${4:-$DEFAULT_GW} # optional: nogateway (no gateway) or a custom gateway (e.g. 172.25.254.2)

CONNECTION=$(nmcli connection show | awk "/$IFACE/ {print \$1}" | grep $IFACE)

# 3. Is the NIC already in use?
[ -n "$CONNECTION" ] && {
echo "$IFACE is in use!!!"
nmcli connection delete $CONNECTION
} || {
echo "$IFACE is not in use"
}

# 4. Choose the gateway
[ "$OPTION" = "nogateway" ] && {
cat > /etc/NetworkManager/system-connections/$IFACE.nmconnection <<EOF
[connection]
id=$IFACE
type=ethernet
interface-name=$IFACE


[ipv4]
method=manual
address1=$IP/24
dns=8.8.8.8
EOF
} || {
cat > /etc/NetworkManager/system-connections/$IFACE.nmconnection <<EOF
[connection]
id=$IFACE
type=ethernet
interface-name=$IFACE


[ipv4]
method=manual
address1=$IP/24,$OPTION
dns=8.8.8.8;
EOF
}

# 5. Reload the NIC
chmod 600 /etc/NetworkManager/system-connections/$IFACE.nmconnection
nmcli connection reload
nmcli connection up $IFACE
hostnamectl hostname $HOSTNAME

# 6. Hostname mapping
cat > /etc/hosts<< EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
$IP $HOSTNAME
EOF

echo ""
echo "===== NIC details ====="
ip a s $IFACE
echo "==== Routing table ===="
route -n
echo "==== Hostname ===="
hostname
#!/bin/bash
set -euo pipefail # strict mode: exit on any failed command, avoiding half-applied config

# ===================== Argument order (fixed, to avoid confusion) =====================
# Usage: bash vmset.sh <interface> <hostname> <ip-address> [optional: noroute|gateway]
# Example 1: with the default gateway (172.25.254.2) → for the primary NIC
# vmset.sh eth0 base 172.25.254.100
# Example 2: no gateway (avoids duplicate routes) → for secondary NICs
# vmset.sh eth1 base 172.25.254.101 noroute
# Example 3: custom gateway
# vmset.sh eth0 base 192.168.1.100 192.168.1.1
# ======================================================================================

# 1. Argument check (at least 3 required: interface, hostname, IP)
if [ $# -lt 3 ]; then
echo "Error: at least 3 arguments are required!"
echo "Usage: vmset.sh <interface> <hostname> <ip-address> [noroute|gateway]"
echo "Example: vmset.sh eth1 base 172.25.254.101 noroute (secondary NIC, no gateway)"
exit 1
fi

# 2. Variables (clear names to avoid mix-ups)
IFACE=$1 # interface name (required, e.g. eth1)
HOSTNAME=$2 # hostname (required, e.g. base)
IP=$3 # IP address (required, e.g. 172.25.254.100)
OPTION=${4:-} # optional: noroute (no gateway) or a custom gateway (e.g. 172.25.254.2)
DEFAULT_GW="172.25.254.2" # default gateway (used when the optional argument is empty)
CONN_FILE="/etc/NetworkManager/system-connections/${IFACE}.nmconnection" # config file path

# 3. Basic validation (catch trivial mistakes early)
# 3.1 Does the NIC actually exist?
if ! ip link show "$IFACE" &>/dev/null; then
echo "Error: interface $IFACE does not exist! Check the interface name"
exit 1
fi

# 3.2 Is the IP well-formed? (catch typos)
if ! echo "$IP" | grep -E '^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$' &>/dev/null; then
echo "Error: IP address $IP is malformed! Enter a valid IPv4 address"
exit 1
fi

# 4. Handle the optional argument (decide whether a gateway is used)
if [ "$OPTION" = "noroute" ]; then
GW="" # no gateway (for secondary NICs; avoids duplicate routes)
DNS="" # no DNS either when there is no gateway
else
# the optional argument is a custom gateway, otherwise use the default
GW=${OPTION:-$DEFAULT_GW}
DNS="8.8.8.8;" # configure DNS when there is a gateway
fi

# 5. Delete any existing connection for the NIC (avoid conflicts; exact-match the name)
CONNECTION=$(nmcli connection show | awk -v iface="$IFACE" '$1 == iface {print $1}')
if [ -n "$CONNECTION" ]; then
echo "→ Deleting existing connection for $IFACE: $CONNECTION"
nmcli connection delete "$CONNECTION" || {
echo "Error: failed to delete connection $CONNECTION!"
exit 1
}
fi

# 6. Generate the NetworkManager keyfile (overwrite, to avoid duplicate sections)
# Note: the keyfile format rejects trailing comments after values, so comments go on their own lines
echo "→ Writing config for $IFACE: $CONN_FILE"
cat > "$CONN_FILE" <<EOF
[connection]
id=$IFACE
type=ethernet
interface-name=$IFACE
# activate the NIC automatically at boot
autoconnect=true

[ipv4]
method=manual
# IP plus gateway (IP only when no gateway is set)
address1=$IP/24${GW:+,$GW}
dns=$DNS
# ignore automatically assigned DNS
ignore-auto-dns=true
EOF

# 7. File permissions (NetworkManager requires 600, otherwise it rejects the file)
chmod 600 "$CONN_FILE"

# 8. Reload and activate the connection
echo "→ Reloading NetworkManager configuration..."
nmcli connection reload
echo "→ Activating $IFACE..."
nmcli connection up "$IFACE" || {
echo "Error: failed to activate $IFACE! Check the configuration"
exit 1
}

# 9. Set the hostname
echo "→ Setting hostname to: $HOSTNAME"
hostnamectl hostname "$HOSTNAME"

# 10. Update /etc/hosts (append the IP-hostname mapping; do not overwrite existing entries!)
if ! grep -q "$IP $HOSTNAME" /etc/hosts; then
echo "→ Adding mapping to /etc/hosts: $IP $HOSTNAME"
echo "$IP $HOSTNAME" >> /etc/hosts
else
echo "→ /etc/hosts already maps: $IP $HOSTNAME (nothing to add)"
fi

# 11. Verify the result (see at a glance whether it took effect)
echo -e "\n======= Verification ======="
echo "Addresses on $IFACE:"
ip a show "$IFACE" | grep -A2 "inet "
echo "Current hostname: $(hostname)"
echo "Routing table (default route; primary NIC only):"
ip route show default 2>/dev/null || echo "no default route (consistent with noroute)"
echo "============================="
echo "✅ Done!"

26-1-17 (Review)

1. Repairing the boot process

Boot process

页-1

1.1. Repairing a damaged boot sector

[root@base ~]# fdisk -l
Disk /dev/nvme0n1:100 GiB,107374182400 字节,209715200 个扇区
磁盘型号:VMware Virtual NVMe Disk
单元:扇区 / 1 * 512 = 512 字节
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节
磁盘标签类型:dos
磁盘标识符:0xb7f6e5e4

设备 启动 起点 末尾 扇区 大小 Id 类型
/dev/nvme0n1p1 * 2048 2099199 2097152 1G 83 Linux
/dev/nvme0n1p2 2099200 209715199 207616000 99G 8e Linux LVM


Disk /dev/mapper/rhel_172-root:95.08 GiB,102093553664 字节,199401472 个扇区
单元:扇区 / 1 * 512 = 512 字节
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节


Disk /dev/mapper/rhel_172-swap:3.91 GiB,4202692608 字节,8208384 个扇区
单元:扇区 / 1 * 512 = 512 字节
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节

# Zero out the MBR boot code (first 446 bytes) of /dev/nvme0n1 while keeping the partition table (the remaining 66 bytes)
[root@base ~]# dd if=/dev/zero of=/dev/nvme0n1 bs=446 count=1
记录了1+0 的读入
记录了1+0 的写出
446字节已复制,0.000239378 s,1.9 MB/s
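The bs/count arithmetic behind that dd invocation can be checked safely against a regular file instead of a disk (uses GNU `stat -c`):

```shell
# Write exactly 446 zero bytes into a scratch file,
# mirroring the bs=446 count=1 arithmetic of the MBR wipe
f=$(mktemp)
dd if=/dev/zero of="$f" bs=446 count=1 2>/dev/null
stat -c '%s' "$f"   # → 446
rm -f "$f"
```

Since the MBR is 512 bytes (446 bytes of boot code + 64 bytes of partition table + 2-byte signature), limiting dd to 446 bytes destroys only the loader.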

1.1.1. Symptoms of a damaged boot sector

PixPin_2026-01-17_10-59-28

Fix

1. Power off

2. Attach a device with installation media

3. Boot from that device

1.1.2. Boot-sector repair procedure

PixPin_2026-01-17_10-59-28 PixPin_2026-01-17_11-01-22 PixPin_2026-01-17_11-01-48 PixPin_2026-01-17_11-01-22
# In the rescue prompt, choose 1 (Continue) and press Enter, then run:
bash-5.1# chroot /mnt/sysroot
bash-5.1# grub2-install /dev/nvme0n1
PixPin_2026-01-17_11-08-08

1.2. Repairing missing boot files

1.2.1. Inspecting the boot files

[root@node2 ~]# ll /boot/grub2/grub.cfg
-rw-------. 1 root root 6955 1月 15 21:11 /boot/grub2/grub.cfg
[root@node2 ~]# ll /boot/loader/entries/
总用量 8
-rw-r--r--. 1 root root 500 1月 15 21:11 1199b04b0d974659ad491305f98dcfe0-0-rescue.conf
-rw-r--r--. 1 root root 475 1月 15 21:23 1199b04b0d974659ad491305f98dcfe0-5.14.0-570.12.1.el9_6.x86_64.conf

# Disk usage
[root@node2 ~]# df
文件系统 1K-块 已用 可用 已用% 挂载点
devtmpfs 4096 0 4096 0% /dev
tmpfs 1855068 0 1855068 0% /dev/shm
tmpfs 742028 9220 732808 2% /run
/dev/mapper/rhel_172-root 99635200 3278240 96356960 4% /
/dev/nvme0n1p1 983040 315524 667516 33% /boot
/dev/sr0 12462174 12462174 0 100% /rhel9
tmpfs 371012 0 371012 0% /run/user/0

1.2.2. Repairing a missing main GRUB config

1.2.2.1. Delete the main config
[root@node2 ~]# rm -rf /boot/grub2/grub.cfg
1.2.2.2. Symptoms

PixPin_2026-01-17_23-22-57

1.2.2.3. Repair procedure
# Point GRUB at the /boot partition
grub> set root=(hd0,msdos1)
# Load the kernel
grub> linux16 /vmlinuz-5.14.0-570.12.1.el9_6.x86_64 ro root=/dev/mapper/rhel_172-root net.ifnames=0
# Load the initramfs
grub> initrd16 /initramfs-5.14.0-570.12.1.el9_6.x86_64.img
# Boot the system
grub> boot

After booting the system manually, repair the automatic boot config; otherwise every reboot will require the same manual bootstrapping.

# Regenerate the automatic boot config
[root@base ~]# grub2-mkconfig > /boot/grub2/grub.cfg # repair complete

PixPin_2026-01-17_23-39-31

1.2.3. Repairing missing boot-entry files

1.2.3.1. Delete the entry files
[root@base ~]# rm -fr /boot/loader/entries/*
1.2.3.2. Symptoms

PixPin_2026-01-17_23-22-57

1.2.3.3. Repair procedure

Same as the main-config repair above

After entering the system:

[root@base ~]# kernel-install add $(uname -r) /boot/vmlinuz-5.14.0-570.12.1.el9_6.x86_64
[root@base ~]# ls /boot/loader/entries/

PixPin_2026-01-18_00-06-54

After the entry files are rebuilt, the kernel argument that controls NIC naming is lost and must be set again; this parameter is unrelated to the repair itself.

PixPin_2026-01-18_00-15-39

[root@base ~]#  grubby --update-kernel ALL --args net.ifnames=0
# networking is restored after a reboot

1.3. Repairing the kernel files

1.3.1. Delete the kernel file

PixPin_2026-01-17_15-42-46

1.3.2. Symptoms

PixPin_2026-01-17_15-44-30

1.3.3. Fix

(1) Enter the firmware at power-on

PixPin_2026-01-17_15-45-45

(2) In the BOOT menu, move the optical drive to the top

PixPin_2026-01-17_15-47-07

(3) Save; the rest matches the boot-sector repair procedure

PixPin_2026-01-17_15-47-31

(4) At the screen below:

bash-5.1# df
bash-5.1# mount --bind /run/install/repo/ /mnt/sys/
bash-5.1# chroot /mnt/sysroot/
bash-5.1# rpm -ivh /media/BaseOS/Packages/kernel-core-5.14.0-570.12.1.el9_6.x86_64.rpm --force

PixPin_2026-01-17_15-57-32

2026-1-19 (Review)

1. Building a web server with Apache

# Install and start the service
[root@base ~]# dnf install httpd -y
[root@base ~]# systemctl enable --now httpd
[root@base ~]# dnf install httpd-manual -y # httpd service manual

# Basic configuration facts
Port: 80
Default document root: /var/www/html
Default index file: index.html
Main config file: /etc/httpd/conf/httpd.conf
Drop-in config files: /etc/httpd/conf.d/*.conf
Management command: systemctl enable --now httpd

1.1. Main configuration file

1.1.1. Changing the default index file

[root@node4 ~]#  echo "index.html ---> /var/www/html" > /var/www/html/index.html
[root@node4 ~]# echo "test.html ---> /var/www/html " > /var/www/html/test.html

[root@node4 ~]# curl 172.25.254.130
index.html ---> /var/www/html
## Edit the Apache config
[root@node4 ~]# vim /etc/httpd/conf/httpd.conf
168 <IfModule dir_module>
169 DirectoryIndex test.html index.html
170 </IfModule>
## Restart the service
[root@base ~]# systemctl restart httpd
## Test again
[root@node4 ~]# curl 172.25.254.130
test.html ---> /var/www/html

1.1.2. Changing the default document root

[root@node4 ~]# mkdir /web/html -p
[root@node4 ~]# echo "index.html ---> /web/html" > /web/html/index.html

[root@node4 ~]# vim /etc/httpd/conf/httpd.conf
124 # DocumentRoot "/var/www/html"
125 DocumentRoot "/web/html"
126
127 #
128 # Relax access to content within /var/www.
129 #
130 <Directory "/web">
131 AllowOverride None
132 # Allow open access:
133 Require all granted
134 </Directory>

[root@node4 ~]# systemctl restart httpd
[root@node4 ~]# curl 172.25.254.130
index.html ---> /web/html

1.1.3. Changing the port

[root@node4 ~]# vim /etc/httpd/conf/httpd.conf
47 Listen 80
48 Listen 8000
[root@node4 ~]# systemctl restart httpd
[root@node4 ~]# netstat -antlupe | grep httpd
tcp6 0 0 :::8000 :::* LISTEN 0 32220 2538/httpd
2538/httpd
[root@node4 ~]# curl 172.25.254.130:8000
index.html ---> /var/www/html

1.2. Drop-in configuration files

1.2.1. Virtual hosts

# Create the base directories
[root@node4 ~]# mkdir /etc/httpd/logs/node4_log # log directory
[root@node4 ~]# mkdir -p /web/node4/{news,bbs}/html # document roots
# Create the index files
[root@node4 ~]# echo news.node4.com > /web/node4/news/html/index.html
[root@node4 ~]# echo bbs.node4.com > /web/node4/bbs/html/index.html
# Edit the drop-in config
[root@node4 ~]# vim /etc/httpd/conf.d/vhosts.conf
<Directory "/web">
AllowOverride None
Require all granted
</Directory>

<VirtualHost _default_:80>
DocumentRoot "/web/html"
CustomLog logs/default.log combined
</VirtualHost>

<VirtualHost *:80>
DocumentRoot "/web/node4/news/html"
ServerName news.node4.com
CustomLog logs/node4_log/news.log combined
</VirtualHost>

<VirtualHost *:80>
DocumentRoot "/web/node4/bbs/html"
ServerName bbs.node4.com
CustomLog logs/node4_log/bbs.log combined
</VirtualHost>


[root@node4 ~]# systemctl restart httpd
[root@node4 ~]# vim /etc/hosts
172.25.254.130 node4 www.node4.com bbs.node4.com news.node4.com
# Test www.node4.com
[root@node4 ~]# curl www.node4.com
index.html ---> /web/html
# Test news.node4.com
[root@node4 ~]# curl news.node4.com
news.node4.com
# Test bbs.node4.com
[root@node4 ~]# curl bbs.node4.com
bbs.node4.com

1.2.2. Access control

1.2.2.1. By IP
[root@node4 ~]# mkdir /web/html/admin
[root@node4 ~]# echo admin > /web/html/admin/index.html
[root@node4 ~]# curl 172.25.254.130/admin/
admin
# Deny all other IPs; allow only traffic from .1
[root@node4 ~]# vim /etc/httpd/conf.d/vhosts.conf
<Directory "/web/html/admin/">
Order Deny,Allow
Deny from all
Allow from 172.25.254.1
</Directory>
# Restart the service
[root@node4 ~]# systemctl restart httpd
# Test from the local machine
[root@node4 ~]# curl 172.25.254.130/admin/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access this resource.</p>
</body></html>

Testing from 172.25.254.1 succeeds

1.2.2.2. By user authentication
[root@node4 ~]# mkdir -p /web/html/auth/
[root@node4 ~]# echo auth > /web/html/auth/index.html
[root@node4 ~]# curl 172.25.254.130/auth/
auth

## Generate the credentials file
[root@node4 ~]# htpasswd -cm /etc/httpd/.htpasswd haha
New password:
Re-type new password:
Adding password for user haha
[root@node4 ~]# cat /etc/httpd/.htpasswd
haha:$apr1$MKFuKZB6$eTeFSJ4Mhn8TMnueDnARX0

[root@node4 ~]# vim /etc/httpd/conf.d/vhosts.conf
<Directory "/web/html/auth/">
AuthUserFile /etc/httpd/.htpasswd
AuthType basic
AuthName "Please input your username and password"
Require valid-user
</Directory>
# Restart
[root@node4 ~]# systemctl restart httpd

# Test
## as root, without credentials
[root@node4 ~]# curl 172.25.254.130/auth/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>401 Unauthorized</title>
</head><body>
<h1>Unauthorized</h1>
<p>This server could not verify that you
are authorized to access the document
requested. Either you supplied the wrong
credentials (e.g., bad password), or your
browser doesn't understand how to supply
the credentials required.</p>
</body></html>
## as user haha
[root@node4 ~]# curl 172.25.254.130/auth/ -u haha:haha
auth

1.2.3. HTTPS

[root@node4 ~]# dnf install mod_ssl.x86_64 -y
[root@node4 ~]# mkdir -p /etc/httpd/certs
[root@node4 ~]# openssl req -newkey rsa:2048 -nodes -sha256 -keyout /etc/httpd/certs/node4.key -x509 -days 365 -out /etc/httpd/certs/node4.crt
[root@node4 ~]# ll /etc/httpd/certs/
总用量 8
-rw-r--r-- 1 root root 1237 1月 19 20:22 node4.crt
-rw------- 1 root root 1704 1月 19 20:22 node4.key

[root@node4 ~]# mkdir -p /web/node4/login/html
[root@node4 ~]# echo login.node4.com > /web/node4/login/html/index.html

[root@node4 ~]# vim /etc/httpd/conf.d/vhosts.conf
<VirtualHost *:443>
DocumentRoot "/web/node4/login/html"
ServerName login.node4.com
CustomLog logs/node4_log/login.log combined
SSLEngine on
SSLCertificateFile /etc/httpd/certs/node4.crt
SSLCertificateKeyFile /etc/httpd/certs/node4.key
</VirtualHost>

[root@node4 ~]# systemctl restart httpd
[root@node4 ~]# curl -k https://login.node4.com
login.node4.com

# Redirect HTTP to HTTPS
<VirtualHost *:80>
ServerName login.node4.com
RewriteEngine On
RewriteRule ^/(.*)$ https://login.node4.com/$1
</VirtualHost>
[root@node4 ~]# systemctl restart httpd

[root@node4 ~]# curl -I login.node4.com
HTTP/1.1 302 Found
Date: Mon, 19 Jan 2026 12:30:48 GMT
Server: Apache/2.4.62 (Red Hat Enterprise Linux) OpenSSL/3.2.2
Location: https://login.node4.com/
Content-Type: text/html; charset=iso-8859-1

2026-1-20

1. Implementing NAT mode

20260123_NAT

1.1. VS host configuration

[root@vsnode ~]# vmset.sh vsnode eth0 172.25.254.100 
[root@vsnode ~]# vmset.sh vsnode eth1 192.168.0.100 nogateway

[root@vsnode ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.25.254.2 0.0.0.0 UG 100 0 0 eth0
172.25.254.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0

1.2. RS1 configuration

[root@vsnode ~]# vmset.sh RS1 eth0 192.168.0.10 192.168.0.100

[root@RS1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.100 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0


[root@RS1 ~]# dnf install httpd -y --disablerepo=docker,epel
[root@RS1 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@RS1 ~]# echo 192.168.0.10 --- RS1 > /var/www/html/index.html

1.3. RS2 configuration

[root@vsnode ~]# vmset.sh RS2 eth0 192.168.0.20 192.168.0.100

[root@RS2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.100 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0


[root@RS2 ~]# dnf install httpd -y --disablerepo=docker,epel
[root@RS2 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service →
[root@RS2 ~]# echo 192.168.0.20 --- RS2 > /var/www/html/index.html

1.4. Verification

# On the VS node
[root@vsnode ~]# curl 192.168.0.10
192.168.0.10 --- RS1
[root@vsnode ~]# curl 192.168.0.20
192.168.0.20 --- RS2

2026-1-22

1. DR mode

1.1. Topology

20260123-DR

1.2. Environment setup

1.2.1. Configuring the router

[root@router ~]# vmset.sh router eth0 172.25.254.100
[root@router ~]# vmset.sh router eth1 192.168.0.100 nogateway

[root@router ~]# dnf install ipvsadm -y
[root@router ~]# systemctl disable --now ipvsadm.service
[root@router ~]# ipvsadm -C

[root@router ~]# echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf
[root@router ~]# sysctl -p
net.ipv4.ip_forward = 1

[root@router ~]# iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.168.0.100
[root@router ~]# iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 172.25.254.100

1.2.2. The vsnode director (scheduler)

[root@vsnode ~]# vmset.sh vsnode eth0 192.168.0.50 192.168.0.100


[root@vsnode ~]# cd /etc/NetworkManager/system-connections/
[root@vsnode system-connections]# cp -p eth0.nmconnection lo.nmconnection
[root@vsnode system-connections]# vim lo.nmconnection
[connection]
id=lo
type=loopback
interface-name=lo


[ipv4]
method=manual
address1=127.0.0.1/8
address2=192.168.0.200/32

[root@vsnode system-connections]# nmcli connection reload
[root@vsnode system-connections]# nmcli connection up lo
连接已成功激活(D-Bus 活动路径:/org/freedesktop/NetworkManager/ActiveConnection/8)

[root@vsnode ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.100 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0


[root@vsnode ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 192.168.0.200/32 scope global lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:2c:93:ef brd ff:ff:ff:ff:ff:ff
altname enp3s0
altname ens160
inet 192.168.0.50/24 brd 192.168.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::ee1f:f906:ed7:7719/64 scope link noprefixroute
valid_lft forever preferred_lft forever

1.2.3. Client

[root@client ~]# vmset.sh client eth0 172.25.254.99

[root@client ~]# ping 192.168.0.200 -c 2
PING 192.168.0.200 (192.168.0.200) 56(84) 比特的数据。
64 比特,来自 192.168.0.200: icmp_seq=1 ttl=128 时间=0.841 毫秒
64 比特,来自 192.168.0.200: icmp_seq=2 ttl=128 时间=0.691 毫秒

--- 192.168.0.200 ping 统计 ---
已发送 2 个包, 已接收 2 个包, 0% packet loss, time 1031ms
rtt min/avg/max/mdev = 0.691/0.766/0.841/0.075 ms

1.2.4. RS1

[root@RS1 ~]# vmset.sh RS1 eth0 192.168.0.10 192.168.0.100
eth0 is in use!!!
成功删除连接 "eth0" (7ba00b1d-8cdd-30da-91ad-bb83ed4f7474)。
连接已成功激活(D-Bus 活动路径:/org/freedesktop/NetworkManager/ActiveConnection/6)

===== NIC details =====
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:75:27:4f brd ff:ff:ff:ff:ff:ff
altname enp3s0
altname ens160
inet 192.168.0.10/24 brd 192.168.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::d364:aca6:84cb:34d7/64 scope link tentative noprefixroute
valid_lft forever preferred_lft forever
==== Routing table ====
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.100 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
==== Hostname ====
RS1

[root@RS1 ~]# cd /etc/NetworkManager/system-connections/
[root@RS1 system-connections]# cp -p eth0.nmconnection lo.nmconnection
[root@RS1 system-connections]# vim lo.nmconnection
[connection]
id=lo
type=loopback
interface-name=lo

[ipv4]
address1=127.0.0.1/8
address2=192.168.0.200/32
method=manual



[root@RS1 system-connections]# nmcli connection reload
[root@RS1 system-connections]# nmcli connection up lo
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@RS1 system-connections]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 192.168.0.200/32 scope global lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:c4:c1:29 brd ff:ff:ff:ff:ff:ff
altname enp3s0
altname ens160
inet 192.168.0.10/24 brd 192.168.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::d364:aca6:84cb:34d7/64 scope link noprefixroute
valid_lft forever preferred_lft forever

# suppress ARP responses for the VIP
[root@RS1 ~]# vim arp.sh
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo "done"
[root@RS1 ~]# bash arp.sh
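The script above only changes the running kernel, so the settings are lost on reboot. To make them persistent, the same keys can be written to a sysctl drop-in file (a sketch; the file name `/etc/sysctl.d/arp.conf` is an arbitrary choice, not from the lab):

```
# /etc/sysctl.d/arp.conf (hypothetical file name)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
```

arp_ignore=1 makes the RS answer ARP only for addresses configured on the receiving interface, so the VIP on lo is never announced via eth0; arp_announce=2 makes outgoing ARP requests use the best matching local address. Load with `sysctl --system`.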

1.2.5.RS2

[root@RS2 ~]#  vmset.sh RS2 eth0 192.168.0.20 192.168.0.100
eth0 is in use!!!
Connection "eth0" (7ba00b1d-8cdd-30da-91ad-bb83ed4f7474) successfully deleted.

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)

=====NIC details====
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:30:97:ac brd ff:ff:ff:ff:ff:ff
altname enp3s0
altname ens160
inet 192.168.0.20/24 brd 192.168.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::d364:aca6:84cb:34d7/64 scope link tentative noprefixroute
valid_lft forever preferred_lft forever
====Routing table====
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.100 0.0.0.0 UG 100 0 0 eth0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
====Hostname====
RS2


[root@RS2 ~]# cd /etc/NetworkManager/system-connections/
[root@RS2 system-connections]# cp -p eth0.nmconnection lo.nmconnection
[root@RS2 system-connections]# vim lo.nmconnection
[connection]
id=lo
type=loopback
interface-name=lo

[ethernet]

[ipv4]
method=manual
address1=127.0.0.1/8
address2=192.168.0.200/32

[root@RS2 system-connections]# nmcli connection reload
[root@RS2 system-connections]# nmcli connection up lo
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)

# suppress ARP responses for the VIP
[root@RS2 ~]# vim arp.sh
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo "done"
[root@RS2 ~]# bash arp.sh

2.Using firewall marks to fix the round-robin problem

2.1.Enable both the HTTP and HTTPS protocols on the RS hosts

[root@RS1 ~]# dnf install ipvsadm -y
[root@RS1 ~]# dnf install mod_ssl -y
[root@RS1 ~]# systemctl disable --now ipvsadm.service
[root@RS1 ~]# ipvsadm -C
[root@RS1 ~]# systemctl restart httpd

2.2.Add an HTTPS round-robin policy on vsnode

[root@vsnode ~]# dnf install ipvsadm -y

[root@vsnode ~]# ipvsadm -A -t 192.168.0.200:80 -s rr
[root@vsnode ~]# ipvsadm -a -t 192.168.0.200:80 -r 192.168.0.20 -g
[root@vsnode ~]# ipvsadm -a -t 192.168.0.200:80 -r 192.168.0.10 -g

[root@vsnode ~]# ipvsadm -A -t 192.168.0.200:443 -s rr
[root@vsnode ~]# ipvsadm -a -t 192.168.0.200:443 -r 192.168.0.10:443 -g
[root@vsnode ~]# ipvsadm -a -t 192.168.0.200:443 -r 192.168.0.20:443 -g

[root@vsnode ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.0.200:80 rr
-> 192.168.0.10:80 Route 1 0 0
-> 192.168.0.20:80 Route 1 0 0
TCP 192.168.0.200:443 rr
-> 192.168.0.10:443 Route 1 0 0
-> 192.168.0.20:443 Route 1 0 0

2.3.Demonstrating the round-robin problem


[root@client ~]# curl 192.168.0.200;curl -k https://192.168.0.200
192.168.0.10 --- RS1
192.168.0.10 --- RS1

# With the setup above, HTTP and HTTPS are independent services, so round-robin repeats the same RS across the two protocols

2.4.Solution

Use a firewall mark to tag all packets destined for ports 80 and 443 on the VIP, setting the mark to 6666, then load-balance on that mark.

[root@vsnode ~]# ipvsadm -C
[root@vsnode ~]# iptables -t mangle -A PREROUTING -d 192.168.0.200 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 6666

[root@vsnode ~]# ipvsadm -A -f 6666 -s rr
[root@vsnode ~]# ipvsadm -a -f 6666 -r 192.168.0.10 -g
[root@vsnode ~]# ipvsadm -a -f 6666 -r 192.168.0.20 -g
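With a firewall-mark (fwmark) service, ipvsadm lists the virtual service by mark instead of by address:port. The following inspection commands are a sketch of how to verify the setup (output omitted):

```
# check that the mangle rule is matching packets (packet/byte counters)
[root@vsnode ~]# iptables -t mangle -nvL PREROUTING

# the virtual service should now show up as "FWM 6666 rr"
[root@vsnode ~]# ipvsadm -Ln
```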

2.5.Test

[root@client ~]# curl  192.168.0.200;curl -k https://192.168.0.200
192.168.0.20 --- RS2
192.168.0.10 --- RS1

3.Session stickiness with persistent connections

Set the ipvs scheduling policy

[root@vsnode ~]# ipvsadm -A -f 6666 -s rr -p 1
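`-p 1` sets a persistence timeout of 1 second: within that window, new connections from the same client are sent to the RS chosen for the first connection. As a sketch, the persistence templates can be observed in the IPVS connection table:

```
# list connection entries, including persistence templates
[root@vsnode ~]# ipvsadm -Lnc
```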

2026-1-23

1.HAProxy lab

1.Environment diagram

2026-1-24_HA

2.Environment setup

2.1.The haproxy host

[root@haproxy ~]# vmset.sh haproxy eth0 172.25.254.100
[root@haproxy ~]# vmset.sh haproxy eth1 192.168.0.100 nogateway

[root@haproxy ~]# echo net.ipv4.ip_forward=1 > /etc/sysctl.conf
[root@haproxy ~]# sysctl -p
net.ipv4.ip_forward = 1

2.2.webserver1

[root@webserver1 ~]# vmset.sh webserver1 eth0 192.168.0.10 nogateway
[root@webserver1 ~]# dnf install httpd -y
[root@webserver1 ~]# echo webserver1 - 192.168.0.10 > /var/www/html/index.html
[root@webserver1 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

2.3.webserver2

[root@webserver2 ~]# vmset.sh webserver2 eth0 192.168.0.20 nogateway
[root@webserver2 ~]# dnf install httpd -y
[root@webserver2 ~]# echo webserver2 - 192.168.0.20 > /var/www/html/index.html
[root@webserver2 ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

2.4.Environment check

[root@haproxy ~]# curl  192.168.0.10
webserver1 - 192.168.0.10
[root@haproxy ~]# curl 192.168.0.20
webserver2 - 192.168.0.20

3.Haproxy installation and configuration parameters

3.1.Installation

# on the scheduler (the dual-NIC host)
[root@haproxy ~]# dnf install haproxy.x86_64 -y
[root@haproxy ~]# systemctl enable --now haproxy
Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /usr/lib/systemd/system/haproxy.service.

3.2.Haproxy parameter walkthrough

2026-1-25

1.Haproxy scheduling algorithm labs

1.1.Static algorithms

1.1.1.static-rr

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance static-rr
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy.service

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20

# check whether runtime weight updates are supported
[root@haproxy ~]# echo "get weight webcluster/haha" | socat stdio /var/lib/haproxy/stats
4 (initial 4)

[root@haproxy ~]# echo "set weight webcluster/haha 1" | socat stdio /var/lib/haproxy/stats
Backend is using a static LB algorithm and only accepts weights '0%' and '100%'.
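The 4:1 pattern in the curl output above follows directly from the weights. As a toy illustration (not HAProxy's actual scheduler, which smooths the interleaving differently), a bash sketch of naive weighted round-robin:

```shell
#!/usr/bin/env bash
# Naive weighted round-robin: expand each server by its weight,
# then walk the expanded list cyclically (haha weight 4, hehe weight 1).
servers=(haha haha haha haha hehe)
n=${#servers[@]}
for i in {0..9}; do
    echo "request $i -> ${servers[i % n]}"  # requests 4 and 9 land on hehe
done
```

Over any 5 consecutive requests, haha receives 4 and hehe receives 1, matching the curl pattern above.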

1.1.2.first

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance first
hash-type consistent
server haha 192.168.0.10:80 maxconn 1 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy.service


[kaitumei.DESKTOP-BMTM34T] ⮞ while true; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10

# open a second terminal
[kaitumei.DESKTOP-BMTM34T] ⮞ while true; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20

1.2.Dynamic algorithms

1.2.1.roundrobin

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance roundrobin
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy.service

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20

# dynamic weight update
[root@haproxy ~]# echo "get weight webcluster/haha" | socat stdio /var/lib/haproxy/stats
4 (initial 4)

[root@haproxy ~]# echo "set weight webcluster/haha 1 " | socat stdio /var/lib/haproxy/stats
[root@haproxy ~]# echo "get weight webcluster/haha" | socat stdio /var/lib/haproxy/stats
1 (initial 4)

# result
[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20

1.2.2.leastconn

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance leastconn
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20

1.3.Hybrid algorithms

1.3.1.source

# static algorithm by default (map-based hashing)
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance source
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10


# dynamic algorithm (consistent hashing)
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance source
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100; done
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
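The reason every request above lands on one RS is that `balance source` hashes the client address, and all ten curls came from the same client. A toy bash sketch of the idea (HAProxy's real hash function and weight mapping differ; `cksum` is used here only as a deterministic stand-in):

```shell
#!/usr/bin/env bash
# Toy source hashing: the same client IP always maps to the same server.
pick_server() {
    local ip=$1
    local h
    h=$(printf '%s' "$ip" | cksum | cut -d' ' -f1)  # CRC-32 of the IP string
    local total_weight=5                            # haha(4) + hehe(1)
    if [ $((h % total_weight)) -lt 4 ]; then
        echo haha
    else
        echo hehe
    fi
}
pick_server 172.25.254.99
pick_server 172.25.254.99  # identical input, identical result
```

A different client IP may hash to the other bucket, but one client always sticks to one server.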

1.3.2.uri

# prepare test pages for the experiment
[root@webserver1 ~]# echo RS1 - 192.168.0.10 > /var/www/html/index1.html
[root@webserver1 ~]# echo RS1 - 192.168.0.10 > /var/www/html/index2.html
[root@webserver2 ~]# echo RS2 - 192.168.0.20 > /var/www/html/index1.html
[root@webserver2 ~]# echo RS2 - 192.168.0.20 > /var/www/html/index2.html


# configure the uri algorithm
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance uri
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy.service

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100/index1.html; done
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10
RS1 - 192.168.0.10

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100/index2.html; done
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20
RS2 - 192.168.0.20

1.3.3.url_param

# configure the url_param algorithm
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance url_param name
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 2
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100/index.html?name=hua; done
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20
webserver2 - 192.168.0.20

[kaitumei.DESKTOP-BMTM34T] ⮞ for i in {1..10}; do curl 172.25.254.100/index.html?name=huaaaaa; done
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10
webserver1 - 192.168.0.10

1.3.4.hdr

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance hdr(User-Agent)
hash-type consistent
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy

[kaitumei.DESKTOP-BMTM34T] ⮞ curl -A "hua" 172.25.254.100
webserver2 - 192.168.0.20

[kaitumei.DESKTOP-BMTM34T] ⮞ curl -A "Yeming" 172.25.254.100
webserver1 - 192.168.0.10

2.Cookie-based session persistence

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance roundrobin
hash-type consistent
cookie WEBCOOKIE insert nocache indirect
server haha 192.168.0.10:80 cookie web1 check inter 3s fall 3 rise 5 weight 4
server hehe 192.168.0.20:80 cookie web2 check inter 3s fall 3 rise 5 weight 1

[root@haproxy ~]# systemctl restart haproxy.service
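What happens on the wire: haproxy adds a Set-Cookie header naming the server it picked, and later requests carrying that cookie are routed back to the same server. The headers look roughly like this (illustrative, not captured output; exact attributes depend on the options used):

```
# first response through haproxy (server haha was picked):
Set-Cookie: WEBCOOKIE=web1; path=/

# follow-up request from the same browser:
Cookie: WEBCOOKIE=web1        <- haproxy routes this back to haha
```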

PixPin_2026-01-26_00-02-24

3.HAProxy status page

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen stats
mode http
bind 0.0.0.0:1234
stats enable
log global
# stats refresh
stats uri /status
stats auth hua:hua
[root@haproxy ~]# systemctl restart haproxy.service
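The commented-out `stats refresh` line can be enabled to make the page reload automatically; a sketch of two further `stats` directives this listen block accepts:

```
stats refresh 5s      # auto-refresh the status page every 5 seconds
stats admin if TRUE   # allow enabling/disabling servers from the page
```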

2026-1-26

1.IP pass-through

1.1.Layer-7 pass-through

1.1.1.Environment setup

# on Haproxy
[root@Haproxy ~]# dnf install haproxy -y
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
balance roundrobin
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@Haproxy ~]# systemctl restart haproxy

# on Webserver1 and Webserver2
## webserver1
[root@Webserver1 ~]# dnf install httpd -y --disablerepo=docker,epel
[root@Webserver1 ~]# echo "webserver1 --- 192.168.0.10" > /var/www/html/index.html
[root@Webserver1 ~]# systemctl restart httpd
## webserver2
[root@Webserver2 ~]# dnf install httpd -y --disablerepo=docker,epel
[root@Webserver2 ~]# echo "webserver2 --- 192.168.0.20" > /var/www/html/index.html
[root@Webserver2 ~]# systemctl restart httpd

1.1.2.Testing the environment

# on the client
[root@Client ~]# for i in {1..5};do curl 172.25.254.100;done
webserver1 --- 192.168.0.10
webserver2 --- 192.168.0.20
webserver1 --- 192.168.0.10
webserver2 --- 192.168.0.20
webserver1 --- 192.168.0.10



# by default, the RS hosts do not see the real client IP
## on webserver1 or 2
[root@Webserver1 ~]# cat /etc/httpd/logs/access_log
192.168.0.20 - - [27/Jan/2026:15:45:38 +0800] "GET / HTTP/1.1" 200 28 "-" "curl/7.76.1"
192.168.0.100 - - [27/Jan/2026:15:46:12 +0800] "GET / HTTP/1.1" 200 28 "-" "curl/7.76.1"


# how to enable IP pass-through
## on haproxy
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8 # enable haproxy IP pass-through
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000


# configure the RS to log the forwarded client IP
## on webserver1 or 2
[root@Webserver1 ~]# vim /etc/httpd/conf/httpd.conf
201 LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-Forwarded-For}i\" \"%{Referer}i\" \"%{User-Agent}i\"" combined

[root@Webserver1 ~]# systemctl restart httpd

1.1.3.Result

# on the client
[root@Client ~]# for i in {1..5};do curl 172.25.254.100;done

# on webserver1 or 2
[root@Webserver1 ~]# cat /etc/httpd/logs/access_log
192.168.0.100 - - [27/Jan/2026:15:55:39 +0800] "GET / HTTP/1.1" 200 28 "172.25.254.99" "-" "curl/7.76.1"
192.168.0.100 - - [27/Jan/2026:15:55:39 +0800] "GET / HTTP/1.1" 200 28 "172.25.254.99" "-" "curl/7.76.1"

1.2.Layer-4 pass-through

1.2.1.Environment setup

# stop Apache on the RS hosts
[root@Webserver1 ~]# systemctl disable --now httpd
[root@Webserver2 ~]# systemctl disable --now httpd

# deploy nginx
# webserver1
[root@Webserver1 ~]# dnf install nginx -y --disablerepo=docker,epel
[root@Webserver1 ~]# echo webserver1 - 192.168.0.10 > /usr/share/nginx/html/index.html
[root@Webserver1 ~]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
# webserver2
[root@Webserver2 ~]# dnf install nginx -y --disablerepo=docker,epel
[root@Webserver2 ~]# echo webserver2 - 192.168.0.20 > /usr/share/nginx/html/index.html
[root@Webserver2 ~]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.

1.2.2.Testing the environment

# enable layer-4 (PROXY protocol) listening in nginx
# webserver1 and 2
[root@webserver1 ~]# vim /etc/nginx/nginx.conf
server {
listen 80 proxy_protocol; # enable PROXY-protocol (layer-4) listening
listen [::]:80;
server_name _;
root /usr/share/nginx/html;

# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;

error_page 404 /404.html;
location = /404.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

[root@Webserver1 ~]# systemctl restart nginx.service
[root@Webserver2 ~]# systemctl restart nginx.service

## client
[root@Client ~]# for i in {1..5}; do curl 172.25.254.100; done
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>


# the 502 errors above show that nginx now only accepts PROXY-protocol (layer-4) connections

# 1. switch haproxy to layer-4 mode
## haproxy
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
mode tcp # layer-4 (TCP) mode
balance roundrobin
server haha 192.168.0.10:80 send-proxy check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:80 send-proxy check inter 3s fall 3 rise 5 weight 1


[root@Haproxy ~]# systemctl restart haproxy.service


# 2. configure layer-4 IP pass-through logging
# Webserver1 and 2
[root@Webserver1 ~]# vim /etc/nginx/nginx.conf
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
' "$proxy_protocol_addr" ' # log the pass-through client address
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';


[root@Webserver1 ~]# systemctl restart nginx.service

1.2.3.Result

# client
[root@Client ~]# for i in {1..5}; do curl 172.25.254.100; done
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10
webserver2 - 192.168.0.20
webserver1 - 192.168.0.10


# webserver1 and 2
[root@Webserver1 ~]# cat /var/log/nginx/access.log
192.168.0.100 - - [27/Jan/2026:16:33:22 +0800] "GET / HTTP/1.1" "172.25.254.99" 200 26 "-" "curl/7.76.1" "-"
192.168.0.100 - - [27/Jan/2026:16:33:22 +0800] "GET / HTTP/1.1" "172.25.254.99" 200 26 "-" "curl/7.76.1" "-"
192.168.0.100 - - [27/Jan/2026:16:33:22 +0800] "GET / HTTP/1.1" "172.25.254.99" 200 26 "-" "curl/7.76.1" "-"
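The mechanism behind `send-proxy` / `proxy_protocol`: haproxy prepends a single plain-text PROXY protocol v1 line to each TCP connection before the HTTP bytes, and nginx parses it into `$proxy_protocol_addr`. The line has this shape (addresses from this lab; the source port is illustrative):

```
PROXY TCP4 172.25.254.99 192.168.0.10 51812 80\r\n
```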

2.Layer-4 load balancing

2.1.Environment setup

# deploy the mariadb database
# webserver1 and 2
[root@Webserver1 ~]# dnf install mariadb-server mariadb -y --disablerepo=docker,epel
[root@Webserver1 ~]# vim /etc/my.cnf.d/mariadb-server.cnf
[mysqld]
server_id=10 # 20 on webserver2
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/run/mariadb/mariadb.pid

[root@Webserver1 ~]# systemctl restart mariadb

# create a remote-login user and grant privileges
[root@Webserver1 ~]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 5
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE USER 'hua'@'%' identified by 'hua';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL ON *.* TO 'hua'@'%';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> quit
Bye

[root@Webserver1 ~]# systemctl restart mariadb

# haproxy
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen mariadbcluster
bind *:6663
mode tcp
balance roundrobin
server haha 192.168.0.10:3306 check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:3306 check inter 3s fall 3 rise 5 weight 1

[root@Haproxy ~]# systemctl restart haproxy


[root@Haproxy ~]# netstat -antlupe | grep haproxy
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 0 49468 2525/haproxy
tcp 0 0 0.0.0.0:6663 0.0.0.0:* LISTEN 0 49469 2525/haproxy

2.2.Testing the environment

# client
[root@Client ~]# mysql -uhua -phua -h172.25.254.100 -P 6663
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT @@server_id;
+-------------+
| @@server_id |
+-------------+
| 20 |
+-------------+
1 row in set (0.001 sec)

MariaDB [(none)]> quit
Bye
[root@Client ~]# mysql -uhua -phua -h172.25.254.100 -P 6663
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 6
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT @@server_id;
+-------------+
| @@server_id |
+-------------+
| 10 |
+-------------+
1 row in set (0.001 sec)

MariaDB [(none)]> quit
Bye

3.Custom HAProxy error pages

3.1.Configuring the sorry server

If all of the normal servers go down, clients are redirected to a designated host. This host, which takes over temporarily while the business hosts are broken, is called the sorry server.

# install Apache on a new host (the haproxy host could also be used)
# client
[root@Client ~]# dnf install httpd -y
[root@Client ~]# vim /etc/httpd/conf/httpd.conf
47 Listen 8080
[root@Client ~]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.

[root@Client ~]# echo "哈哈哈" > /var/www/html/index.html


# bring the sorry server online
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
listen webcluster
bind *:80
mode tcp
balance roundrobin
server haha 192.168.0.10:80 send-proxy check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:80 send-proxy check inter 3s fall 3 rise 5 weight 1
server wuwu 172.25.254.99:8080 backup # sorry server

[root@Haproxy ~]# systemctl restart haproxy

# test
# client
[root@Client ~]# curl 172.25.254.100
webserver1 - 192.168.0.10
[root@Client ~]# curl 172.25.254.100
webserver2 - 192.168.0.20

# stop the two normal business hosts
# webserver1 and 2
[root@Webserver1 ~]# systemctl stop httpd
[root@Webserver2 ~]# systemctl stop httpd

# client
[root@Client ~]# curl 172.25.254.100
哈哈哈

3.2.Custom error page

When every host, including the sorry server, is down, haproxy serves a default error page that depends on the status code. This page can be customized.

# the default error page
[root@Webserver1 ~]# systemctl stop httpd
[root@Webserver2 ~]# systemctl stop httpd
[root@Client ~]# systemctl stop httpd

# all backend web services are down
[Administrator.DESKTOP-VJ307M3] ➤ curl 172.25.254.100
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>


[root@haproxy ~]# mkdir /errorpage/html/ -p
[root@haproxy ~]# vim /errorpage/html/503.http
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html;charset=UTF-8

<html><body><h1>什么动物生气最安静</h1>
大猩猩!!
</body></html>
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
errorfile 503 /errorpage/html/503.http # custom 503 page

listen webcluster
bind *:80
mode http
balance roundrobin
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1
server wuwu 172.25.254.99:8080 backup

[root@haproxy ~]# systemctl restart haproxy.service


# test
[root@Client ~]# curl 172.25.254.100
<html><body><h1>什么动物生气最安静</h1>
大猩猩!!
</body></html>

3.3.Redirecting errors to a specified site

[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
errorloc 503 http://www.baidu.com # redirect on 503
[root@haproxy ~]# systemctl restart haproxy.service

# visit in a browser

PixPin_2026-01-27_22-52-16

4.Haproxy ACL access control

4.1.Lab prerequisites

# set local name resolution on the browser / curl host
# on Windows, open C:\Windows\System32\drivers\etc\hosts as Administrator:
172.25.254.100 www.kaitumei.com bbs.kaitumei.com news.kaitumei.com login.kaitumei.com
# on Linux:
vim /etc/hosts
172.25.254.100 www.kaitumei.com bbs.kaitumei.com news.kaitumei.com login.kaitumei.com
# test
ping bbs.kaitumei.com

4.2.Base haproxy configuration for the lab

[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http
use_backend webserver-80-web1

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5


[root@Haproxy ~]# systemctl restart haproxy

4.3.Basic ACL examples

# requests whose host ends in .com go to .10, everything else to .20
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http

acl test hdr_end(host) -i .com # ACL definition

use_backend webserver-80-web1 if test # when the ACL matches
default_backend webserver-80-web2 # when it does not match

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5

[root@Haproxy ~]# systemctl restart haproxy

# test
[root@Client ~]# curl www.kaitumei.com
webserver1 --- 192.168.0.10
[root@Client ~]# curl www.kaitumei.org
webserver2 --- 192.168.0.20



# match on the beginning of the Host header
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http
acl test hdr_end(host) -i .com # ACL definition
acl head hdr_beg(host) -i bbs.

use_backend webserver-80-web1 if head # when the ACL matches
default_backend webserver-80-web2 # when it does not match

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5

[root@Haproxy ~]# systemctl restart haproxy

# result
[root@Client ~]# curl bbs.kaitumei.com
webserver1 --- 192.168.0.10
[root@Client ~]# curl www.kaitumei.com
webserver2 --- 192.168.0.20


# base-parameter ACL (Host header + path)
[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http

acl pathdir base_dir -i /yeming
use_backend webserver-80-web1 if pathdir
default_backend webserver-80-web2 # when it does not match

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5
[root@Haproxy ~]# systemctl restart haproxy

[root@Webserver1+2 ~]# mkdir -p /var/www/html/yeming/
[root@Webserver1+2 ~]# mkdir -p /var/www/html/yeming/test/


[root@Webserver1 ~]# echo yeming - 192.168.0.10 > /var/www/html/yeming/index.html
[root@Webserver1 ~]# echo yeming/test - 192.168.0.10 > /var/www/html/yeming/test/index.html
[root@Webserver2 ~]# echo yeming - 192.168.0.20 > /var/www/html/yeming/index.html
[root@Webserver2 ~]# echo yeming/test - 192.168.0.20 > /var/www/html/yeming/test/index.html


# test
[root@Client ~]# curl 172.25.254.100/yeming/
yeming - 192.168.0.10
[root@Client ~]# curl 172.25.254.100/yeming/test/
yeming/test - 192.168.0.10
[root@Client ~]# curl 172.25.254.100/index.html
webserver2 --- 192.168.0.20



# ACL deny list (blacklist)
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http

acl test hdr_end(host) -i .com # ACL definition

use_backend webserver-80-web1 if test # when the ACL matches
default_backend webserver-80-web2 # when it does not match

acl invalid_src src 172.25.254.99
http-request deny if invalid_src

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5

[root@Haproxy ~]# systemctl restart haproxy

#Test:
[root@Client ~]# curl 172.25.254.100
<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>



#ACL allow list (whitelist)
[root@haproxy ~]# vim /etc/haproxy/haproxy.cfg
frontend webcluster
bind *:80
mode http
acl test hdr_end(host) -i .com #ACL: Host header ends with .com

use_backend webserver-80-web1 if test #backend when the ACL matches
default_backend webserver-80-web2 #backend when no ACL matches

acl invalid_src src 172.25.254.99
http-request deny if ! invalid_src #deny everyone except the whitelisted source

backend webserver-80-web1
server web1 192.168.0.10:80 check inter 3s fall 3 rise 5

backend webserver-80-web2
server web2 192.168.0.20:80 check inter 3s fall 3 rise 5

#Test:
[root@Client ~]# curl 172.25.254.100
webserver2 --- 192.168.0.20

[root@Haproxy ~]# curl 172.25.254.100
<html><body><h1>403 Forbidden</h1>
Request forbidden by administrative rules.
</body></html>

HAProxy Full-Site Encryption

1.Create the certificate

[root@Haproxy ~]# mkdir /etc/haproxy/certs/
[root@Haproxy ~]# openssl req -newkey rsa:2048 -nodes -sha256 -keyout /etc/haproxy/certs/yeming.key -x509 -days 365 -out /etc/haproxy/certs/yeming.crt
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:Hunan
Locality Name (eg, city) [Default City]:Hengyang
Organization Name (eg, company) [Default Company Ltd]:hh
Organizational Unit Name (eg, section) []:hh
Common Name (eg, your name or your server's hostname) []:www.kaitumei.com
Email Address []:

[root@Haproxy ~]# ls /etc/haproxy/certs/
yeming.crt yeming.key
[root@Haproxy ~]# cat /etc/haproxy/certs/yeming.{key,crt} > /etc/haproxy/certs/yeming.pem
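The interactive DN prompts above can be skipped in scripts by passing the Distinguished Name with `-subj`. A minimal non-interactive sketch (the temporary paths are illustrative; the DN mirrors the values entered above):

```shell
#!/bin/sh
# Non-interactive variant of the certificate creation above: -subj supplies
# all DN fields so no prompts appear. Output paths here are throwaway.
set -e
dir=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
    -keyout "$dir/yeming.key" \
    -x509 -days 365 -out "$dir/yeming.crt" \
    -subj "/C=CN/ST=Hunan/L=Hengyang/O=hh/OU=hh/CN=www.kaitumei.com" 2>/dev/null
# HAProxy expects key + certificate concatenated into one PEM file:
cat "$dir/yeming.key" "$dir/yeming.crt" > "$dir/yeming.pem"
openssl x509 -noout -subject -in "$dir/yeming.pem"
```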

2.Enable full-site encryption

[root@Haproxy ~]# vim /etc/haproxy/haproxy.cfg 
frontend webcluster-http
bind *:80
redirect scheme https if ! { ssl_fc }

listen webcluster-https
bind *:443 ssl crt /etc/haproxy/certs/yeming.pem
mode http
balance roundrobin
server haha 192.168.0.10:80 check inter 3s fall 3 rise 5 weight 1
server hehe 192.168.0.20:80 check inter 3s fall 3 rise 5 weight 1

[root@Haproxy ~]# systemctl restart haproxy

#测试:
[root@Client ~]# curl -v -k -L http://172.25.254.100
* Trying 172.25.254.100:80...
* Connected to 172.25.254.100 (172.25.254.100) port 80 (#0)
> GET / HTTP/1.1
> Host: 172.25.254.100
> User-Agent: curl/7.76.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< content-length: 0
< location: https://172.25.254.100/
< cache-control: no-cache
<
* Connection #0 to host 172.25.254.100 left intact
* Clear auth, redirects to port from 80 to 443Issue another request to this URL: 'https://172.25.254.100/'
* Trying 172.25.254.100:443...
* Connected to 172.25.254.100 (172.25.254.100) port 443 (#1)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS header, Finished (20):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS header, Unknown (23):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=CN; ST=Hunan; L=Hengyang; O=hh; OU=hh; CN=www.kaitumei.com
* start date: Jan 27 17:03:35 2026 GMT
* expire date: Jan 27 17:03:35 2027 GMT
* issuer: C=CN; ST=Hunan; L=Hengyang; O=hh; OU=hh; CN=www.kaitumei.com
* SSL certificate verify result: self-signed certificate (18), continuing anyway.
* TLSv1.2 (OUT), TLS header, Unknown (23):
> GET / HTTP/1.1
> Host: 172.25.254.100
> User-Agent: curl/7.76.1
> Accept: */*
>
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.2 (IN), TLS header, Unknown (23):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* TLSv1.2 (IN), TLS header, Unknown (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Tue, 27 Jan 2026 17:09:23 GMT
< server: Apache/2.4.62 (Red Hat Enterprise Linux)
< last-modified: Tue, 27 Jan 2026 07:43:15 GMT
< etag: "1c-64959c6bf22d5"
< accept-ranges: bytes
< content-length: 28
< content-type: text/html; charset=UTF-8
<
webserver1 --- 192.168.0.10
* Connection #1 to host 172.25.254.100 left intact

2026-1-28

1.Keepalived Lab Environment Setup

1.Topology diagram

kastatus

2.Environment setup

#Deploy rs1 and rs2 (single NIC, NAT mode)
[root@rs1 ~]# vmset.sh rs1 eth0 172.25.254.10
[root@rs1 ~]# dnf install httpd -y
[root@rs1 ~]# echo RS1 - 172.25.254.10 > /var/www/html/index.html
[root@rs1 ~]# systemctl enable --now httpd

[root@rs2 ~]# vmset.sh rs2 eth0 172.25.254.20
[root@rs2 ~]# dnf install httpd -y
[root@rs2 ~]# echo RS2 - 172.25.254.20 > /var/www/html/index.html
[root@rs2 ~]# systemctl enable --now httpd


#Test:
[Administrator.DESKTOP-VJ307M3] ➤ curl 172.25.254.10
RS1 - 172.25.254.10

[Administrator.DESKTOP-VJ307M3] ➤ curl 172.25.254.20
RS2 - 172.25.254.20



#Configure KA1 and KA2
[root@KA1 ~]# vmset.sh KA1 eth0 172.25.254.50
[root@KA2 ~]# vmset.sh KA2 eth0 172.25.254.60


#Configure local name resolution
[root@KA1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.50 KA1
172.25.254.60 KA2
172.25.254.10 rs1
172.25.254.20 rs2


[root@KA1 ~]# for i in 60 10 20; do scp /etc/hosts 172.25.254.$i:/etc/hosts; done

#Check /etc/hosts on every host


#Enable the time sync service on KA1
[root@KA1 ~]# vim /etc/chrony.conf
26 allow 0.0.0.0/0
29 local stratum 10

[root@KA1 ~]# systemctl restart chronyd
[root@KA1 ~]# systemctl enable --now chronyd



#Use KA1's time sync service on KA2
[root@KA2 ~]# vim /etc/chrony.conf
pool 172.25.254.50 iburst

[root@KA2 ~]# systemctl restart chronyd
[root@KA2 ~]# systemctl enable --now chronyd

[root@KA2 ~]# chronyc sources -v

.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current best, '+' = combined, '-' = not combined,
| / 'x' = may be in error, '~' = too variable, '?' = unusable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* KA1 3 6 17 13 +303ns[+6125ns] +/- 69ms

2.Separating Keepalived Logs

By default, keepalived writes its logs to /var/log/messages, which also holds the logs of other services, so the keepalived entries are hard to review on their own.
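The `-S 6` option below selects the syslog facility local6, and the rsyslog selector `local6.*` then routes those messages to their own file. Syslog encodes facility and severity into a single PRI number (PRI = facility × 8 + severity; local6 is facility number 22), which is the `<182>`-style prefix seen on raw syslog lines. A quick arithmetic check:

```shell
#!/bin/sh
# Syslog PRI = facility * 8 + severity (RFC 3164/5424).
# local6 is facility number 22; severity "info" is 6.
facility=22
severity=6
pri=$((facility * 8 + severity))
echo "local6.info -> PRI $pri"   # prints: local6.info -> PRI 182
```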

[root@KA1 ~]# vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 6"
[root@KA1 ~]# systemctl restart keepalived.service

[root@KA1 ~]# vim /etc/rsyslog.conf
local6.* /var/log/keepalived.log
[root@KA1 ~]# systemctl restart rsyslog.service


#Test
[root@KA1 ~]# ls -l /var/log/keepalived.log
ls: cannot access '/var/log/keepalived.log': No such file or directory
[root@KA1 ~]# systemctl restart keepalived.service
[root@KA1 ~]# ls -l /var/log/keepalived.log
-rw------- 1 root root 3294 Jan 28 15:09 /var/log/keepalived.log

3.Keepalived Sub-Configuration Files

Writing too many settings into the main configuration file makes it hard to read.
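The `include /etc/keepalived/conf.d/*.conf` directive used in this section expands the glob in lexical order, so numeric prefixes (`10-`, `20-`, …) are a common way to pin the load order. A quick shell check with hypothetical filenames:

```shell
#!/bin/sh
# Glob expansion is lexical: 10-webvip.conf sorts before 20-dbvip.conf
# regardless of creation order. Filenames below are made up for the demo.
set -e
d=$(mktemp -d)/conf.d
mkdir -p "$d"
: > "$d/20-dbvip.conf"     # created first...
: > "$d/10-webvip.conf"    # ...but listed second
ls "$d"/*.conf             # 10-webvip.conf comes first
```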

[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
timinglee_zln@163.com
}
notification_email_from timinglee_zln@163.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id KA1
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 1
vrrp_gna_interval 1
vrrp_mcast_group4 224.0.0.44
}

include /etc/keepalived/conf.d/*.conf #pull in standalone sub-config files

[root@KA1 ~]# mkdir /etc/keepalived/conf.d -p
[root@KA1 ~]# vim /etc/keepalived/conf.d/webvip.conf
vrrp_instance WEB_VIP {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

[root@KA1 ~]# keepalived -t -f /etc/keepalived/keepalived.conf
[root@KA1 ~]# systemctl restart keepalived.service
[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 17383 bytes 1417554 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 32593 bytes 3135052 (2.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 118 bytes 6828 (6.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 118 bytes 6828 (6.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


4.Preemption Modes

4.1.Preemptive mode (the default: the node with the higher priority takes the VIP)

4.2.Non-preemptive mode (the current holder keeps the VIP as long as its VRRP advertisements stay healthy)
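The two modes can be sketched as a toy election function — this is not keepalived code, just its decision rule (the priorities 100 and 80 mirror KA1 and KA2 in the configs below):

```shell
#!/bin/sh
# Toy model of VRRP preemption: with preemption the higher priority always
# takes the VIP; with nopreempt the current holder keeps it while healthy.
elect() {  # $1=mode (preempt|nopreempt)  $2=holder priority  $3=challenger priority
    if [ "$1" = nopreempt ]; then
        echo holder                 # holder keeps the VIP while it advertises
    elif [ "$3" -gt "$2" ]; then
        echo challenger             # higher priority preempts the VIP
    else
        echo holder
    fi
}
elect preempt 80 100     # prints: challenger
elect nopreempt 80 100   # prints: holder
```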

#On KA1
[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP {
state BACKUP #both nodes are BACKUP in non-preemptive mode
interface eth0
virtual_router_id 51
nopreempt #enable non-preemptive mode
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

[root@KA1 ~]# systemctl stop keepalived.service

#On KA2
[root@KA2 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP {
state BACKUP
interface eth0
virtual_router_id 51
nopreempt #enable non-preemptive mode
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}
[root@KA2 ~]# systemctl stop keepalived.service

#Test:
[root@KA1 ~]# systemctl start keepalived.service
[root@KA2 ~]# systemctl start keepalived.service

[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 18917 bytes 1546417 (1.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34775 bytes 3349412 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 162 bytes 9028 (8.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 162 bytes 9028 (8.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@KA1 ~]# systemctl stop keepalived.service

[root@KA2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.60 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::26df:35e5:539:56bc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)
RX packets 22521 bytes 1553701 (1.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 18517 bytes 1535122 (1.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 84 bytes 5128 (5.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 84 bytes 5128 (5.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


#After KA1's service is started again, the VIP is not preempted back to KA1
[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 19102 bytes 1561277 (1.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 35034 bytes 3375682 (3.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 162 bytes 9028 (8.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 162 bytes 9028 (8.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

4.3.Delayed preemption

#On KA1
[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP {
state BACKUP #both nodes are BACKUP
interface eth0
virtual_router_id 51
preempt_delay 10 #delay preemption by 10s
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

[root@KA1 ~]# systemctl stop keepalived.service

#On KA2
[root@KA2 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP {
state BACKUP
interface eth0
virtual_router_id 51
preempt_delay 10 #delay preemption by 10s
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}
[root@KA2 ~]# systemctl stop keepalived.service

#Test:
[root@KA1 ~]# systemctl start keepalived.service
[root@KA2 ~]# systemctl start keepalived.service

#In a separate shell, watch the IP addresses
[root@KA1 ~]# watch -n 1 ifconfig

#In another shell on KA1, stop keepalived
[root@KA1 ~]# systemctl stop keepalived.service

[root@KA1 ~]# systemctl start keepalived.service
#Then watch the monitor to observe the delayed VIP migration

5.Keepalived Unicast Mode

Why unicast? Multicast uses the least network resources, but it cannot cross network boundaries. If the master and backup hosts are on different networks, VRRP advertisements can only be delivered via unicast.

#On KA1
[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
timinglee_zln@163.com
}
notification_email_from timinglee_zln@163.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id KA1
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 1
vrrp_gna_interval 1
#vrrp_mcast_group4 224.0.0.44 #multicast disabled
}

vrrp_instance WEB_VIP {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
unicast_src_ip 172.25.254.50 #unicast source address, normally this host's IP
unicast_peer {
172.25.254.60 #unicast peer address
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

#On KA2
[root@KA2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
notification_email {
timinglee_zln@163.com
}
notification_email_from timinglee_zln@163.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id KA2
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 1
vrrp_gna_interval 1
#vrrp_mcast_group4 224.0.0.44 #multicast disabled
}

vrrp_instance WEB_VIP {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
unicast_src_ip 172.25.254.60 #unicast source address, normally this host's IP
unicast_peer {
172.25.254.50 #unicast peer address
}
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

[root@KA1 ~]# systemctl restart keepalived.service
[root@KA2 ~]# systemctl restart keepalived.service

#Test
#On KA1, watch the VRRP advertisements in a separate shell
[root@KA1 ~]# tcpdump -i eth0 -nn src host 172.25.254.50 and dst 172.25.254.60

#On KA2, watch the VRRP advertisements in a separate shell
[root@KA2 ~]# tcpdump -i eth0 -nn src host 172.25.254.60 and dst 172.25.254.50


#While KA1 is healthy
#KA2's capture shows no advertisements (KA2 is backup and stays silent)

[root@KA1 ~]# systemctl stop keepalived.service

#The VIP migrates to KA2, and KA2's capture starts showing advertisements

[root@KA1 ~]# systemctl start keepalived.service

#KA1 preempts the VIP back thanks to its higher priority, and KA2's advertisements stop

Keepalived VIP Migration Alerts

1.Build the email alert environment

#Install the mail software
[root@KA1 ~]# dnf install s-nail postfix -y
[root@KA2 ~]# dnf install s-nail postfix -y


#Start the mail transfer agent
[root@KA1 ~]# systemctl start postfix.service
[root@KA2 ~]# systemctl start postfix.service

#Allow outbound mail through a public mailbox; pick one of the methods below
#Configure mail.rc on the Linux hosts (KA1 and KA2)
[root@KA1+KA2 ~]# vim /etc/mail.rc
set smtp=smtp.qq.com
set smtp-auth=login
set smtp-auth-user=kaitumei@foxmail.com
set smtp-auth-password=oxsfthfenecmfaae
set from=kaitumei@foxmail.com
set ssl-verify=ignore

#Send a test mail
[root@KA1 mail]# echo hello | mailx -s test kaitumei@163.com

[root@KA1 mail]# mailq #check the mail queue
Mail queue is empty


[root@KA1 mail]# mail #check whether any mail bounced
s-nail version v14.9.22. Type `?' for help
/var/spool/mail/root: 1 message
▸ 1 Mail Delivery Subsys 2026-01-28 16:26 69/2210 "Returned mail: see transcript for details "
&q (quit)


#Check whether the target mailbox received the mail

2.Configure the Keepalived alert script

[root@KA1 ~]# mkdir  -p /etc/keepalived/scripts
[root@KA2 ~]# mkdir -p /etc/keepalived/scripts

#Write the alert script
[root@KA1+2 ~]# vim /etc/keepalived/scripts/waring.sh
#!/bin/bash
mail_dest='kaitumei@163.com'

mail_send()
{
mail_subj="$HOSTNAME changed to $1 - VIP moved"
mail_mess="`date +%F\ %T`: VRRP transition, $HOSTNAME is now $1"
echo "$mail_mess" | mail -s "$mail_subj" $mail_dest
}
case $1 in
master)
mail_send master
;;
backup)
mail_send backup
;;
fault)
mail_send fault
;;
*)
exit 1
;;
esac


[root@KA1+2 ~]# chmod +x /etc/keepalived/scripts/waring.sh

[root@KA1 ~]# /etc/keepalived/scripts/waring.sh master

#A mail should appear in the target mailbox
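The script's case logic can also be dry-run without sending real mail by putting a fake `mail` command first in PATH. Everything below is a self-contained sketch: the trimmed script copy only mirrors the case structure of waring.sh, and all paths are temporary:

```shell
#!/bin/sh
# Dry-run the notify script without a mail server: a fake `mail` earlier in
# PATH records the subject it was given instead of sending anything.
set -e
work=$(mktemp -d)
log="$work/mail.log"
cat > "$work/mail" <<EOF
#!/bin/sh
echo "subject: \$2" >> "$log"
EOF
chmod +x "$work/mail"
# Trimmed copy of the script under test (same case structure as waring.sh):
cat > "$work/waring.sh" <<'EOF'
#!/bin/sh
mail_send() {
    echo "vrrp transition" | mail -s "$HOSTNAME to be $1" root@localhost
}
case $1 in
    master|backup|fault) mail_send "$1" ;;
    *) exit 1 ;;
esac
EOF
chmod +x "$work/waring.sh"
PATH="$work:$PATH" "$work/waring.sh" master
cat "$log"   # shows the subject the fake `mail` recorded
```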

3.Wire the alerts into Keepalived

#Set the configuration file on both KA1 and KA2
! Configuration File for keepalived

global_defs {
notification_email {
timinglee_zln@163.com
}
notification_email_from timinglee_zln@163.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id KA1
vrrp_skip_check_adv_addr
#vrrp_strict
vrrp_garp_interval 1
vrrp_gna_interval 1
vrrp_mcast_group4 224.0.0.44
enable_script_security
script_user root
}
vrrp_instance WEB_VIP {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
# unicast_src_ip 172.25.254.50
# unicast_peer {
# 172.25.254.60
# }
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
notify_master "/etc/keepalived/scripts/waring.sh master"
notify_backup "/etc/keepalived/scripts/waring.sh backup"
notify_fault "/etc/keepalived/scripts/waring.sh fault"
}


[root@KA1+2 ~]# systemctl restart keepalived.service



#Test
[root@KA1 ~]# systemctl stop keepalived.service #check the mailbox after stopping the service
[root@KA1 ~]# systemctl start keepalived.service #check the mailbox after starting the service


6.Keepalived Dual-Master Mode

#On KA1
[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP { #first virtual router, configured as MASTER
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}

vrrp_instance DB_VIP { #second virtual router, configured as BACKUP
state BACKUP
interface eth0
virtual_router_id 52
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.200/24 dev eth0 label eth0:1
}
}


#On KA2
[root@KA2 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance WEB_VIP {
state BACKUP
interface eth0
virtual_router_id 51
preempt_delay 10
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.100/24 dev eth0 label eth0:0
}
}
vrrp_instance DB_VIP {
state MASTER
interface eth0
virtual_router_id 52
preempt_delay 10
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.254.200/24 dev eth0 label eth0:1
}
}
[root@KA1 ~]# systemctl restart keepalived.service
[root@KA2 ~]# systemctl restart keepalived.service


#Test
[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 38766 bytes 3548249 (3.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 67456 bytes 6209788 (5.9 MiB)
TX errors 0 dropped 2 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 782 bytes 60465 (59.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 782 bytes 60465 (59.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@KA2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.60 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::26df:35e5:539:56bc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)
RX packets 46164 bytes 3559703 (3.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 38170 bytes 3306899 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.200 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 532 bytes 39588 (38.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 532 bytes 39588 (38.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@KA1 ~]# systemctl stop keepalived.service
[root@KA2 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.60 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::26df:35e5:539:56bc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)
RX packets 46204 bytes 3562823 (3.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 38240 bytes 3313319 (3.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.200 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:1e:fd:7a txqueuelen 1000 (Ethernet)


[root@KA2 ~]# systemctl stop keepalived.service
[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 39277 bytes 3653121 (3.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 67902 bytes 6264989 (5.9 MiB)
TX errors 0 dropped 2 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.200 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

7.Dual-Master Mode Proxying Different Services for High Availability

1.Lab environment

#The web service was already set up in the previous lab
#On the RS hosts, add VIP2 172.25.254.200/32 to lo
#Build the database on the RS hosts
[root@rs1+2 ~]# dnf install mariadb-server -y
[root@rs1+2 ~]# systemctl enable --now mariadb
[root@rs1+2 ~]# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE USER 'hua'@'%' identified by 'hua';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL ON *.* TO 'hua'@'%';
Query OK, 0 rows affected (0.001 sec)

#Test
[root@rs1 ~]# mysql -uhua -phua -h172.25.254.10
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> quit

[root@rs1 ~]# mysql -uhua -phua -h172.25.254.20
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> quit

2.Different VIPs proxying different services

#On KA1 and KA2
[root@KA1+2 ~]# vim /etc/keepalived/keepalived.conf
include /etc/keepalived/conf.d/webserver.conf
include /etc/keepalived/conf.d/datebase.conf

[root@KA1+2 ~]# vim /etc/keepalived/conf.d/webserver.conf
virtual_server 172.25.254.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP

real_server 172.25.254.10 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
retry 3
delay_before_retry 1
}
}

real_server 172.25.254.20 80 {
weight 1
TCP_CHECK {
connect_timeout 5
retry 3
delay_before_retry 3
connect_port 80
}
}
}
[root@KA1 ~]# vim /etc/keepalived/conf.d/datebase.conf
virtual_server 172.25.254.200 3306 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP

real_server 172.25.254.10 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
retry 3
delay_before_retry 3
connect_port 3306
}
}

real_server 172.25.254.20 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
retry 3
delay_before_retry 3
connect_port 3306
}
}
}

[root@KA1+2 ~]# systemctl restart keepalived.service


[root@rs1+2 ~]# vim /etc/NetworkManager/system-connections/lo.nmconnection
address3=172.25.254.200/32
[root@rs1 ~]# nmcli connection reload
[root@rs1 ~]# nmcli connection up lo

3.Test

[root@rs2 ~]# mysql -uhua  -phua  -h172.25.254.200
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 89
Server version: 10.5.27-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>



[Administrator.DESKTOP-VJ307M3] ➤ curl 172.25.254.100
RS1 - 172.25.254.10

[Administrator.DESKTOP-VJ307M3] ➤ curl 172.25.254.100
RS2 - 172.25.254.20

8.Using VRRP Script for All-Round High Availability

1.Lab environment

#Install haproxy on KA1 and KA2
[root@KA1+2 ~]# dnf install haproxy-2.4.22-4.el9.x86_64 -y

[root@KA1+2 ~]# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1

[root@KA1+2 ~]# vim /etc/haproxy/haproxy.cfg
listen webserver
bind 172.25.254.100:80
mode http
server web1 172.25.254.10:80 check
server web2 172.25.254.20:80 check

[root@KA1+2 ~]# systemctl enable --now haproxy.service
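One detail worth noting: editing /etc/sysctl.conf alone does not change the running kernel — the file is only read at boot. A short sketch to apply and verify the setting (ip_nonlocal_bind is what lets haproxy bind the VIP 172.25.254.100 on the node that does not currently hold it):

```shell
#!/bin/sh
# Apply /etc/sysctl.conf to the running kernel, then read the live value
# straight from /proc (needs root to actually change it).
command -v sysctl >/dev/null && sysctl -p || true
cat /proc/sys/net/ipv4/ip_nonlocal_bind   # 1 once the setting is applied
```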

2.Understanding vrrp_script through an example

#On host KA1
[root@KA1 ~]# vim /etc/keepalived/scripts/test.sh
#!/bin/bash
[ ! -f "/mnt/lee" ]
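vrrp_script only evaluates the script's exit status: `[ ! -f "/mnt/lee" ]` exits 0 while the file is absent (check passes) and 1 once it exists, at which point `weight -30` drops KA1's priority from 100 to 70, below KA2's 80. A self-contained sketch of that rule (`/tmp/lee_flag` is a hypothetical stand-in for /mnt/lee):

```shell
#!/bin/sh
# Model of the vrrp_script check: exit 0 keeps the full priority,
# exit 1 (after `fall` consecutive failures) subtracts `weight` (30).
check() {
    if [ ! -f "$1" ]; then
        echo "check passed (exit 0) - priority stays at 100"
    else
        echo "check failed (exit 1) - priority drops to 70"
    fi
}
flag=/tmp/lee_flag.$$
rm -f "$flag"
check "$flag"       # file absent  -> check passes
touch "$flag"       # same effect as `touch /mnt/lee` in the test below
check "$flag"       # file present -> check fails
rm -f "$flag"
```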

[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id KA1
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 1
   vrrp_gna_interval 1
   vrrp_mcast_group4 224.0.0.44
}

vrrp_script check_lee {
    script "/etc/keepalived/scripts/test.sh"
    interval 1
    weight -30
    fall 2
    rise 2
    timeout 2
    user root
}

vrrp_instance WEB_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:0
    }
    track_script {
        check_lee
    }
}

[root@KA2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     timinglee_zln@163.com
   }
   notification_email_from timinglee_zln@163.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id KA2
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 1
   vrrp_gna_interval 1
   vrrp_mcast_group4 224.0.0.44
}

vrrp_script check_lee {
    script "/etc/keepalived/scripts/test.sh"
    interval 1
    weight -30
    fall 2
    rise 2
    timeout 2
    user root
}

vrrp_instance WEB_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:0
    }
    track_script {
        check_lee
    }
}




[root@KA1 ~]# systemctl restart keepalived.service
[root@KA2 ~]# systemctl restart keepalived.service


# Test: with /mnt/lee absent the check passes, so KA1 holds the VIP (eth0:0)
[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 98198 bytes 9235557 (8.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 145101 bytes 12247386 (11.6 MiB)
TX errors 0 dropped 9 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 932 bytes 72195 (70.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 932 bytes 72195 (70.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

# Creating /mnt/lee makes the check fail; KA1's priority drops to 70 and the VIP moves away
[root@KA1 ~]# touch /mnt/lee

[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 97968 bytes 9216259 (8.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 144858 bytes 12219108 (11.6 MiB)
TX errors 0 dropped 9 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 932 bytes 72195 (70.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 932 bytes 72195 (70.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

# Removing /mnt/lee restores the check; KA1 (priority 100) preempts and reclaims the VIP
[root@KA1 ~]# rm -fr /mnt/lee

[root@KA1 ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.50 netmask 255.255.255.0 broadcast 172.25.254.255
inet6 fe80::3901:aeea:786a:7227 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)
RX packets 98198 bytes 9235557 (8.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 145101 bytes 12247386 (11.6 MiB)
TX errors 0 dropped 9 overruns 0 carrier 0 collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.25.254.100 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:26:33:d9 txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 932 bytes 72195 (70.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 932 bytes 72195 (70.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
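The failover above follows vrrp_script's exit-code convention: exit 0 means healthy, non-zero means failed, and a failure subtracts `weight` (here 30) from the instance priority, taking KA1 from 100 down to 70, below KA2's 80. A minimal sketch of that convention, using `/tmp/lee` as a stand-in for the lab's `/mnt/lee` flag file:

```shell
# Sketch of the vrrp_script health-check convention: keepalived runs the
# script periodically and treats exit 0 as healthy, non-zero as failed.
# /tmp/lee stands in for the /mnt/lee flag file used in the lab.
check() { [ ! -f /tmp/lee ]; }

rm -f /tmp/lee
check && echo "healthy: KA1 keeps priority 100 and holds the VIP"
touch /tmp/lee
check || echo "failed: priority 100 - 30 = 70 < 80, VIP moves to KA2"
rm -f /tmp/lee
```

Because `fall 2` and `rise 2` are set, keepalived requires two consecutive failures (or successes) before it actually changes the priority, which damps flapping.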

3.keepalived + haproxy

# Health-check script: killall -0 sends no signal at all, it only reports
# via the exit code whether a haproxy process exists
[root@KA1 ~]# vim /etc/keepalived/scripts/haproxy_check.sh
#!/bin/bash
killall -0 haproxy &> /dev/null

[root@KA1 ~]# chmod +x /etc/keepalived/scripts/haproxy_check.sh
[root@KA1 ~]# vim /etc/keepalived/keepalived.conf
vrrp_script haproxy_check {
    script "/etc/keepalived/scripts/haproxy_check.sh"
    interval 1
    weight -30
    fall 2
    rise 2
    timeout 2
    user root
}

vrrp_instance WEB_VIP {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.254.100/24 dev eth0 label eth0:0
    }
    track_script {
        haproxy_check
    }
}

[root@KA1 ~]# systemctl restart keepalived.service


# Test
# Stop and start haproxy and watch whether the VIP migrates
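The "signal 0" trick behind haproxy_check.sh can be sketched with `kill -0`, which behaves the same way as `killall -0` but targets a single PID; a throwaway `sleep` process stands in for haproxy here:

```shell
# kill -0 (like killall -0) delivers no signal; it only reports via the
# exit code whether the target process exists.
sleep 30 &
pid=$!
kill -0 "$pid" && echo "alive: exit 0, keepalived keeps full priority"
kill "$pid"
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "gone: non-zero exit, priority drops and the VIP fails over"
```

This is why stopping haproxy on the MASTER is enough to trigger the VIP migration in the test above.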

2026-1-29

1.Compiling Nginx from Source

1.1.Download the software

[root@Nginx ~]# wget https://nginx.org/download/nginx-1.28.1.tar.gz

1.2.Unpack

[root@Nginx ~]# tar -zxf nginx-1.28.1.tar.gz
[root@Nginx ~]# cd nginx-1.28.1/
[root@Nginx nginx-1.28.1]# ls
auto CHANGES.ru conf contrib html man SECURITY.md
CHANGES CODE_OF_CONDUCT.md configure CONTRIBUTING.md LICENSE README.md src

1.3.Check the build environment

# Install build dependencies
[root@Nginx ~]# dnf install gcc openssl-devel.x86_64 pcre2-devel.x86_64 zlib-devel -y

[root@Nginx nginx-1.28.1]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-stream --with-stream_ssl_module --with-stream_realip_module

1.4.Build

[root@Nginx nginx-1.28.1]# make
[root@Nginx nginx-1.28.1]# make install
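On a multi-core build host the compile can be parallelized with `make -j`; a small sketch using `nproc` (coreutils) to pick one job per core:

```shell
# Optional: run one make job per CPU core to speed up the build.
jobs=$(nproc)
echo "building with $jobs parallel jobs"
# make -j"$jobs" && make install   # requires the unpacked nginx source tree
```

The plain `make` used above produces the same result, just more slowly.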

1.5.Start nginx

# Add the nginx binary directory to PATH
[root@Nginx sbin]# vim ~/.bash_profile
export PATH=$PATH:/usr/local/nginx/sbin

[root@Nginx sbin]# source ~/.bash_profile


[root@Nginx logs]# useradd -s /sbin/nologin -M nginx
[root@Nginx logs]# nginx
[root@Nginx logs]# ps aux | grep nginx
root 44012 0.0 0.1 14688 2356 ? Ss 17:01 0:00 nginx: master process nginx
nginx 44013 0.0 0.2 14888 3892 ? S 17:01 0:00 nginx: worker process
root 44015 0.0 0.1 6636 2176 pts/0 S+ 17:01 0:00 grep --color=auto nginx


# Test
[root@Nginx logs]# echo timinglee > /usr/local/nginx/html/index.html

[root@Nginx logs]# curl 172.25.254.100
timinglee