
Keepalived high-availability proxy service

2019-05-25   |   Views: 138

Dual-master Nginx high availability (Part 1)


Objective: use keepalived to build a dual-master, highly available Nginx load-balancing cluster.


Nginx is not only an excellent web server; its reverse-proxy capability also makes it a powerful load balancer. This article shows how to configure Nginx as a load balancer and combine it with keepalived to build a highly available cluster.

Preparation: 7 hosts

client:

172.18.x.x

Directors: keepalived + nginx, each with a NIC on 172.18.x.x/16

192.168.234.27

192.168.234.37

real_server

192.168.234.47

192.168.234.57

192.168.234.67

192.168.234.77

Environment: two Nginx proxies (dual-master Nginx, each with two NICs: eth0 on the internal network, eth1 on the external network), web servers to load-balance the requests across, and one client to verify the results.


Single-master IPVS example

A typical cluster architecture is:

Results

[root@234c17 ~]# for i in {1..4};do curl www.a.com;curl www.b.com;sleep 1;done
234.57
234.77
234.47
234.67
234.57
234.77
234.47
234.67



Configuring keepalived

Highly available IPVS cluster example: edit the keepalived configuration file

The front end consists of two load balancers (master/backup) with two modes of operation. In the first, the backup stays on standby and takes over the master's work only when the master fails (failover); once the master recovers, the backup returns to standby. In the second, master and backup work simultaneously; when one goes down, the other automatically takes over its work to complete the failover.
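The election logic VRRP uses to decide which node holds the VIP can be sketched as follows. This is a simplified illustrative model, not keepalived's actual implementation; the node names and priorities mirror the configuration used later in this article:

```python
def elect_master(nodes):
    """Pick the MASTER among alive VRRP nodes: highest priority wins;
    ties are broken by the higher address (simplified: string compare)."""
    alive = [n for n in nodes if n["alive"]]
    if not alive:
        return None
    return max(alive, key=lambda n: (n["priority"], n["ip"]))["name"]

nodes = [
    {"name": "kpone", "ip": "192.168.234.27", "priority": 100, "alive": True},
    {"name": "kptwo", "ip": "192.168.234.37", "priority": 80,  "alive": True},
]

print(elect_master(nodes))   # the higher-priority node holds the VIP
nodes[0]["alive"] = False    # the master fails ...
print(elect_master(nodes))   # ... and the backup takes over
nodes[0]["alive"] = True     # master recovers; with preemption it reclaims the VIP
print(elect_master(nodes))
```

With preemption enabled (keepalived's default), the recovered master wins the election again, which is exactly the first mode of operation described above.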

Procedure:

Note: to avoid interfering with the results, disable iptables and SELinux before starting.

I. Differences between HAProxy and Nginx

Edit the keepalived configuration file on host 192.168.234.27

[root@234c27 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost                  // recipient address
    }
    notification_email_from keepalived@localhost  // sender address
    smtp_server 127.0.0.1               // mail server IP
    smtp_connect_timeout 30             // mail connection timeout
    router_id kptwo                     // router id
    vrrp_mcast_group4 234.10.10.10      // multicast address for the VRRP protocol
}

vrrp_instance VI_1 {                    // VRRP instance
    state MASTER                        // this node is the LVS MASTER
    interface ens37                     // interface VRRP runs on
    virtual_router_id 50                // virtual router id
    priority 100                        // priority; higher wins the election
    advert_int 1                        // interval between VRRP multicast advertisements
    authentication {                    // authentication
        auth_type PASS                  // PASS = plain-text password
        auth_pass 1111                  // password
    }
    virtual_ipaddress {                 // keepalived virtual IP (VIP)
        10.0.0.100/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6                        // interval between health checks of the real servers
    lb_algo wrr                         // scheduling algorithm
    lb_kind DR                          // cluster (forwarding) type
    #persistence_timeout 50             // persistent-connection timeout
    protocol TCP                        // service protocol; only TCP is supported
    real_server 192.168.234.47 80 {     // back-end real-server address
        weight 1                        // weight
        HTTP_GET {                      // application-layer health check
            url {
              path /                    // URL to monitor
              status_code 200           // response code considered healthy
            }
            connect_timeout 3           // connection timeout
            nb_get_retry 3              // number of retries
            delay_before_retry 3        // delay before each retry
        }
    }
    real_server 192.168.234.57 80 {
        weight 2
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

In the first approach, the domain name resolves to a single virtual IP (VIP) bound to the primary load balancer. When the primary fails, keepalived automatically moves the VIP to the backup and sends gratuitous ARP (arping) to the gateway to refresh its MAC table, avoiding a single point of failure.

I. First configure the 4 real servers and install httpd for testing

[root@234c47 ~]# curl 192.168.234.47;curl 192.168.234.57;curl 192.168.234.67;curl 192.168.234.77
234.47
234.57
234.67
234.77

Steps:

How HAProxy works: it proxies HTTP and TCP and can front many kinds of services. It is a dedicated proxy server and cannot itself act as a web server.

Edit the keepalived configuration file on host 192.168.234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kptwo
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.234.57 80 {
        weight 2
        HTTP_GET {
            url {
              path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

In the second approach, master and backup each hold a VIP, and the domain name is resolved to both addresses via DNS round-robin. When one host fails, the surviving node takes over the failed node's VIP and sends gratuitous ARP to the gateway to refresh its MAC table, completing the failover.

II. Configure keepalived

Because this is a dual-master model:

I. Configure the IP addresses

How Nginx works: it is both a web server and a proxy, but as a proxy it only fronts web services.

查看keepalived

[root@234c37 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
...
[root@234c37 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
// no IPVS rules yet

The middle tier consists of web servers acting as real servers that handle the requests.

1. Configure keepalived on host 234.27

[root@234c27 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
      root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id kpone
    vrrp_mcast_group4 234.10.10.10
 }
 vrrp_instance VI_1 {
     state MASTER
     interface ens33
     virtual_router_id 50
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         172.18.0.100/16  // this VIP schedules 192.168.234.47/57
     }
 }
vrrp_instance VI_2 {
     state BACKUP
     interface ens33
     virtual_router_id 51
     priority 80
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         172.18.0.200/16  // this VIP schedules 192.168.234.67/77
     }
}

1. Configure host A's IP


Start the service

[root@234c27 keepalived]# systemctl start keepalived.service
[root@234c27 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-08-31 20:30:02 CST; 12s ago
  Process: 9657 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 9658 (keepalived)
...
[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          0
  -> 192.168.234.57:80            Route   2      0          0
// service started; the LVS virtual server is configured

The back end consists of databases and a distributed file system. The database is typically a master/slave pair, and the distributed file system solves data synchronization between the web servers. Some deployments also split the image servers out onto the back end.

2. Configure keepalived on host 234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
      root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id kpone
    vrrp_mcast_group4 234.10.10.10
 }
 vrrp_instance VI_1 {
     state BACKUP
     interface ens33
     virtual_router_id 50
     priority 80
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 1111
     }
     virtual_ipaddress {
         172.18.0.100/16  // this VIP schedules 192.168.234.47/57
     }
 }
vrrp_instance VI_2 {
     state MASTER
     interface ens33
     virtual_router_id 51
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass 2222
     }
     virtual_ipaddress {
         172.18.0.200/16  // this VIP schedules 192.168.234.67/77
     }
}

With that, a basic dual-master setup is complete.

# ip addr add dev eth0 192.168.10.2/24

II. Installation and configuration

Preparing the back-end real servers

Environment used in this article:

3. Configure nginx on hosts 234.27/37

First configure the http block

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    upstream web1 {
        server 192.168.234.47:80;
        server 192.168.234.57:80;
        }
    upstream web2{
        server 192.168.234.67:80;
        server 192.168.234.77:80;
        }

/*
ngx_http_upstream_module
Defines groups of servers that can then be referenced by the
proxy_pass, fastcgi_pass, and similar directives.
1. upstream name { ... }
Defines a back-end server group and introduces a new context.
The default scheduling algorithm is wrr.
Context: http
upstream httpdsrvs {
    server ...
    server ...
    ...
}
*/
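To see what the default wrr scheduling does, here is a sketch of the smooth weighted round-robin algorithm nginx uses for its upstream groups. This is a simplified reimplementation for illustration, not nginx's source; the server addresses and weights reuse the ones from this article:

```python
def smooth_wrr(servers, n):
    """servers: dict name -> weight. Yields n picks using smooth
    weighted round-robin: each round every server's running score
    grows by its weight, the highest scorer is picked, and the
    winner's score is reduced by the total weight."""
    current = {s: 0 for s in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for s, w in servers.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr({"192.168.234.47": 1, "192.168.234.57": 2}, 6)
print(picks)
# the weight-2 server receives twice the requests of the weight-1 server
print(picks.count("192.168.234.57"), picks.count("192.168.234.47"))
```

The smooth variant interleaves the picks (b, a, b, b, a, b) rather than bursting all of one server's share at once, which keeps the back ends evenly loaded within each cycle.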

Then configure the server blocks

    server {
        listen       80 default_server;  // listen on port 80 by default
        server_name  www.a.com;          // domain name
        listen       [::]:80 default_server;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
                proxy_pass http://web1;  // requests arriving on port 80 are served by the web1 group, defined in the http block as 192.168.234.47/57:80
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
    server {
        server_name www.b.com;
        listen 80;
        location / {
                proxy_pass http://web2;  // these requests are served by the web2 group, defined in the http block as 192.168.234.67/77:80
        }
    }
}

Thus a request to www.a.com reaches 192.168.234.47/57:80,

and a request to www.b.com reaches 192.168.234.67/77:80.

Now add www.a.com and www.b.com to the client's hosts file:

172.18.0.100 www.a.com
172.18.0.200 www.b.com

The client resolves www.a.com to 172.18.0.100:

[root@234c17 ~]# ping www.a.com
PING www.a.com (172.18.0.100) 56(84) bytes of data.
64 bytes from www.a.com (172.18.0.100): icmp_seq=1 ttl=64 time=0.358 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=2 ttl=64 time=0.376 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=3 ttl=64 time=0.358 ms
64 bytes from www.a.com (172.18.0.100): icmp_seq=4 ttl=64 time=0.366 ms

The client resolves www.b.com to 172.18.0.200:

[root@234c17 ~]# ping www.b.com
PING www.b.com (172.18.0.200) 56(84) bytes of data.
64 bytes from www.b.com (172.18.0.200): icmp_seq=1 ttl=64 time=0.582 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=2 ttl=64 time=0.339 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=3 ttl=64 time=0.524 ms
64 bytes from www.b.com (172.18.0.200): icmp_seq=4 ttl=64 time=0.337 ms

Result:

[root@234c17 ~]# for i in {1..4};do curl www.a.com;curl www.b.com;sleep 1;done
234.57
234.77
234.47
234.67
234.57
234.77
234.47
234.67

2. Configure host B's IP

1. Install

Add the VIP to an interface and restrict the ARP announce/ignore levels (do this on both rs1 and rs2), and point the default gateway at the router:

ip a a 10.0.0.100/32 dev ens37

echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce

route add default gw 192.168.234.17

Install the httpd service and create the test pages.

CentOS 5.5, 32-bit

Dual-master Nginx high availability (Part 2)


# ip addr add dev eth0 192.168.10.23/24

# yum -y install haproxy

Start the service


nginx:nginx-1.0.11

Now extend the experiment

3. Configure host C's IP

Note: in production, install a version one or two releases behind the latest; an unresolved bug in a brand-new release can be fatal.

Multi-master IPVS example


keepalived:keepalived-1.1.19.tar.gz

Add IP addresses to hosts 192.168.234.47/57

[root@234c47 ~]# ip a a dev ens37 192.168.234.167/24

[root@234c57 ~]# ip a a dev ens37 192.168.234.177/24

# ip addr add dev eth0 192.168.10.3/24

2. Configuration details

Configuring keepalived

Highly available IPVS cluster example: edit the keepalived configuration file

Primary director: 192.168.3.1

Edit the httpd configuration and add an FQDN-based virtual host

[root@234c47 ~]# vim /etc/httpd/conf.d/vhost.conf

<VirtualHost 192.168.234.167:80>
    DocumentRoot /data/web1
    ServerName www.a.com
    <Directory /data/web1>
        Require all granted
    </Directory>
</VirtualHost>

4. Configure host D's IP

************************ Global configuration *****************************

Edit the keepalived configuration file on host 192.168.234.27

[root@234c27 keepalived]# vim /etc/keepalived/keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kpone
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state MASTER
    interface ens37
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens37
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.200/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Backup director: 192.168.3.2

Add the virtual host on the other host as well

[root@234c57 ~]# vim /etc/httpd/conf.d/vhost.conf

<VirtualHost 192.168.234.177:80>
    DocumentRoot /data/web1
    ServerName www.a.com
    <Directory /data/web1>
        Require all granted
    </Directory>
</VirtualHost>

# ip addr add dev eth0 192.168.10.33/24

global
log     127.0.0.1 local2    # global log server
chroot  /var/lib/haproxy    # chroot haproxy's working directory for better security
pidfile /var/run/haproxy.pid  # pid file location
maxconn 4000                # maximum number of connections
user    haproxy             # run-as user (a uid also works)
group   haproxy             # run-as group (a gid also works)
daemon                      # run as a daemon
# turn on stats unix socket  # the unix socket is on by default
stats socket /var/lib/haproxy/stats  # location of the unix socket
node    www.a.com           # name of this node, for HA scenarios where several haproxy processes share one IP
ulimit-n 100                # max open file descriptors per process; computed automatically by default, so changing it is not recommended

Edit the keepalived configuration file on host 192.168.234.37

[root@234c37 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id kptwo
   vrrp_mcast_group4 234.10.10.10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       10.0.0.100/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens37
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        10.0.0.200/24
    }
}
virtual_server 10.0.0.100 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.47 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

VIP 10.0.0.100 is preferentially assigned to 192.168.234.47, with 192.168.234.57 as standby;

VIP 10.0.0.200 is preferentially assigned to 192.168.234.57, with 192.168.234.47 as standby.

real server:192.168.3.4/5/6

Restart the httpd service

Result: accessing www.a.com

[root@234c17 ~]# for i in {1..8};do curl www.a.com;done
234.167
234.177
234.47
234.57
234.167
234.167
234.177
234.47

Accessing www.b.com

[root@234c17 ~]# for i in {1..8};do curl www.b.com;done
234.67
234.67
234.77
234.67
234.77
234.67
234.77
234.77

II. Configure the web service (hosts C and D get the same configuration; just change the IP address in the default page to the local IP, to tell them apart)

To enable `log 127.0.0.1 local2`, note that the default configuration file contains this commented line:

Preparing the back-end real servers

Change the VIP on 192.168.234.57 to 10.0.0.200/32

[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          0
TCP  10.0.0.200:80 wrr
  -> 192.168.234.57:80            Route   1      0          0


Now take down one LVS director

[root@234c27 keepalived]# systemctl stop keepalived.service
[root@234c27 keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn


Service is still available

[root@234c37 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.100:80 wrr
  -> 192.168.234.47:80            Route   1      0          21
TCP  10.0.0.200:80 wrr
  -> 192.168.234.57:80            Route   1      0          39

The latter setup is a modification built on top of the former.

This article uses the first approach, with VIP 192.168.3.253.

1. Install Apache

#local2.*                       /var/log/haproxy.log

Suppose we want to implement a sorry_server

1. Stop the services on all real servers, then install Apache or Nginx on the LVS director.

2. In the keepalived configuration file, modify:

virtual_server 10.0.0.200 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    #sorry_server 127.0.0.1:80  // uncomment and edit this line to serve the page shown when all real servers fail
    real_server 192.168.234.57 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

I. Deploy nginx on the master and backup servers

# yum -y install httpd

Enable it with the following configuration:

1. Download

wget http://nginx.org/download/nginx-1.0.11.tar.gz

2. Create the default page

# touch /var/log/haproxy.log
# vim /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
# service rsyslog restart
# tail -f /var/log/haproxy.log
Oct  6 10:45:22 localhost haproxy[22208]: 172.16.5.200:50332 [06/Oct/2013:10:45:22.852] web static/www.web1.com 6/0/2/4/32 200 45383 - - ---- 3/3/0/1/0 0/0 "GET / HTTP/1.1"

2. Install

yum -y install zlib-devel pcre-devel openssl-devel   # install dependencies
tar -zxvf nginx-1.0.11.tar.gz
cd nginx-1.0.11
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module
make && make install

3. Configure

Configure nginx on the primary director by editing nginx.conf

vi /usr/local/nginx/conf/nginx.conf

http {
    include       mime.types;
    default_type  application/octet-stream;
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
    # Add a pool of real back-end servers, to be referenced by the
    # proxy_pass and fastcgi_pass directives.
    upstream real_server_pool {
      # With dynamic applications in the back end, the ip_hash directive
      # pins a client to one back-end server via a hash of its address,
      # working around session sharing (application-level session
      # sharing is still the recommended solution).
      # ip_hash;
      # server specifies a back-end server's name and parameters.
      # weight: default 1; higher weight receives more clients.
      # max_fails: failed requests allowed within the window.
      # fail_timeout: pause after max_fails failures are reached.
      server  192.168.3.4:80 weight=1 max_fails=2 fail_timeout=30s;
      # down marks a server offline, excluded from load balancing;
      # used with ip_hash. Shown for demonstration; removed in later tests.
      server  192.168.3.5:80 weight=1 max_fails=2 fail_timeout=30s down;
      # backup is used only when the non-backup servers are down or busy
      # (shown for demonstration; removed in later tests).
      server  192.168.3.6:80 weight=1 max_fails=2 fail_timeout=30s backup;
    }
    server {
        listen       192.168.3.1:80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location / {
            #root   html;
            #index  index.html index.htm;
            # Proxy to the upstream group configured above. If a back-end
            # server returns 502/504 or errors out, the request is
            # automatically forwarded to another server in the pool.
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_pass http://real_server_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

(Note: ip_hash is commented out in this configuration. ip_hash guarantees that requests from a given client are always forwarded to the same server, so once it is enabled the weight parameter can no longer be used; the directive appears above only for explanation.)
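The ip_hash behaviour can be sketched as follows. This is an illustrative model, not nginx's code: per the nginx documentation, for IPv4 clients only the first three octets of the address are used as the hashing key, so clients in the same /24 land on the same back end. The hash function here is a toy stand-in:

```python
def ip_hash_pick(client_ip, servers):
    """Pick a back-end server the way nginx's ip_hash conceptually
    works: hash the first three octets of the client IPv4 address,
    so a client (and its /24 neighbours) always maps to the same server."""
    key = ".".join(client_ip.split(".")[:3])
    h = sum((i + 1) * ord(c) for i, c in enumerate(key))  # toy hash, not nginx's
    return servers[h % len(servers)]

servers = ["192.168.3.4", "192.168.3.5", "192.168.3.6"]
a = ip_hash_pick("172.16.5.200", servers)
b = ip_hash_pick("172.16.5.201", servers)   # same /24, so same back end
print(a == b)
```

Because the mapping is deterministic per source network rather than per request, the traffic split depends entirely on the client address distribution, which is why weights cannot be combined with it.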

Configure the backup nginx, changing the listen IP to the backup director's IP

http {
    include       mime.types;
    default_type  application/octet-stream;
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
    upstream real_server_pool {
      #ip_hash;
      server  192.168.3.4:80 weight=1 max_fails=2 fail_timeout=30s;
      server  192.168.3.5:80 weight=1 max_fails=2 fail_timeout=30s;
      server  192.168.3.6:80 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen       192.168.3.2:80;             # listen IP changed to the local address
        server_name  localhost;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;
        location / {
            #root   html;
            #index  index.html index.htm;
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_pass http://real_server_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

Then start nginx on both master and backup:

/usr/local/nginx/sbin/nginx

# vim /var/www/html/index.html

It shows the client IP, real-server hostname, and other information.

II. Deploy keepalived on the master and backup servers

Install

Install dependencies:

yum -y install kernel-devel              # install dependencies

Enable IP forwarding:

vi /etc/sysctl.conf
net.ipv4.ip_forward = 1   # change this parameter to 1
sysctl -p                 # apply the change

First install ipvsadm:

# ipvs needs the kernel source tree, so create a symlink
ln -s /usr/src/kernels/2.6.18-194.el5-i686/ /usr/src/linux
# download
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
tar -zxvf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make
make install

Then install keepalived

# download
wget http://www.keepalived.org/software/keepalived-1.1.19.tar.gz
tar -zxvf keepalived-1.1.19.tar.gz
cd keepalived-1.1.19
./configure --prefix=/ \                                   # install in the default locations (config files, binaries, init scripts)
    --mandir=/usr/local/share/man/ \
    --with-kernel-dir=/usr/src/kernels/2.6.18-194.el5-i686/  # kernel headers are required
make && make install

<h1>192.168.10.3</h1>

********************** Defaults configuration *********************************

Configure keepalived

Edit the primary director's configuration file /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
        [email protected]                  # notification mailbox; add more, one per line
   }
   notification_email_from [email protected]  # sender address
   smtp_server www.linuxzen.com             # mail server
   smtp_connect_timeout 30                  # smtp connection timeout
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER                   # master/backup role; change to BACKUP on the backup node
    interface eth0                 # interface monitored for HA
    virtual_router_id 51           # must be identical on master and backup
    priority 100                   # priority; the master's is usually slightly higher
    advert_int 1                   # VRRP multicast advertisement interval in seconds
    authentication {               # authentication
        auth_type PASS             # authentication type
        auth_pass 1111             # password
    }
    virtual_ipaddress {            # VIPs
        192.168.3.253              # add more, one per line
    }
}
virtual_server 192.168.3.253 80 {
    delay_loop 6             # query real-server state every 6 seconds
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50   # connections from one IP go to the same real server for 50 seconds
    protocol TCP             # probe the real servers over TCP
    real_server 192.168.3.1 80 {
        weight 3                # weight
        TCP_CHECK {
            connect_timeout 10  # time out after 10 seconds without a response
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.3.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Configure keepalived on the backup director: just change state MASTER to state BACKUP and lower the priority value:

global_defs {
   notification_email {
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server www.linuxzen.com
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP                   # BACKUP on the backup node
    interface eth0
    virtual_router_id 51           # must be identical on master and backup
    priority 99                    # backup priority lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.3.253
    }
}
virtual_server 192.168.3.253 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP
    real_server 192.168.3.1 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.3.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Start keepalived on both master and backup:

service keepalived start

III. Testing: deploy the back-end servers

Install nginx on the back-end server. Here only one machine is deployed, with three IP-based virtual hosts created for testing:

Bind the IPs:

ifconfig eth0:1 192.168.3.4/24
ifconfig eth0:2 192.168.3.5/24
ifconfig eth0:3 192.168.3.6/24

After installing nginx, edit the configuration file and add inside the http block:

http {
    server {
        listen  192.168.3.4:80;
        server_name     192.168.3.4;
        location / {
             root html/s1;
             index index.html index.htm;
        }
    }
    server {
        listen  192.168.3.5:80;
        server_name     192.168.3.5;
        location / {
            root html/s2;
            index index.html index.htm;
        }
    }
    server {
        listen 192.168.3.6:80;
        server_name     192.168.3.6;
        location / {
            root html/s3;
            index index.html index.htm;
        }
    }
}

Create the virtual-host root directories, each with a distinct index page:

cd /usr/local/nginx/html/
mkdir s1 s2 s3
echo server1 > s1/index.html
echo server2 > s2/index.html
echo server3 > s3/index.html

Start nginx:

/usr/local/nginx/sbin/nginx

Open a browser and visit the VIP.

Refreshing shows different content in turn: server1, server2, server3 (in production the servers would serve identical content).

Now stop keepalived on the primary director

pkill keepalived

Check the backup director's log:

cat /var/log/messages
Feb 10 16:36:27 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.3.253
Feb 10 16:36:28 cfhost Keepalived_vrrp: Netlink reflector reports IP 192.168.3.253 added
Feb 10 16:36:28 cfhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.3.253 added
Feb 10 16:36:33 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.3.253

Now access the site again: it still works.

As you have seen, the backup's keepalived switches the VIP only when the master's keepalived stops, not when a particular service on a real server (say, HTTP on port 80) dies. So if the nginx process stops while the server itself stays up, no failover happens. We therefore write a script that checks nginx's state and works with keepalived to force failover:

#!/bin/bash
# filename: nsc.sh
# Loop forever: if nginx dies and cannot be restarted, kill keepalived
# so the VIP fails over to the backup.
while true; do
    if ps aux | grep nginx | grep -v grep > /dev/null 2>&1; then  # look for an nginx process
        sleep 5                          # nginx is alive; sleep and re-check
    else
        /usr/local/nginx/sbin/nginx      # try to restart nginx
        if ! ps aux | grep nginx | grep -v grep > /dev/null 2>&1; then
            pkill keepalived             # restart failed: trigger failover
        fi
    fi
done

Then run the script in the background:

nohup sh nsc.sh &

This gives the cluster both high reliability and high availability.

...

3. Start Apache

defaults
mode  http              # proxy http; http is layer 7, tcp is layer 4
log   global            # use the global log settings
option httplog          # log in the http log format
option dontlognull      # do not log health-check probes
######### Health checks matter because once a back-end server goes down, requests stop being sent to it.
option http-server-close  # close the server-side connection after each request; client keep-alive is still supported
option forwardfor  except 127.0.0.0/8  # pass the real client ip to the back end in an http header
option  redispatch   # if the server bound to a serverid dies, force the request to another healthy server
retries  3       # 3 failed connections mark the service unavailable; also tunable per server later
timeout http-request 10s # http request timeout
timeout queue  1m    # queue timeout
timeout connect 10s  # connect timeout
timeout client  1m   # client timeout
timeout server  1m   # server timeout
timeout http-keep-alive 10s # keep-alive timeout
timeout check  10s    # health-check timeout
maxconn    3000   # max connections per process; can also be set in global

# service httpd start

************************ Front-end proxy configuration ******************************

III. Configure the sorry_server (this service runs on the Nginx proxy hosts; both get the same configuration, just change the IP address in the default page to the local IP, to tell them apart)

frontend main *:5000  # front end: define the name and listening port
acl url_static  path_beg -i /static /images /javascript /stylesheets
acl url_static  path_end -i .jpg .gif .png .css .js
use_backend static     if url_static
default_backend       app
This defines the access control: requests matching url_static are proxied to the static back end; everything else goes to the default back end, app.
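The matching performed by those two acl lines can be sketched as follows. This is an illustrative reimplementation of path_beg/path_end, not HAProxy's code:

```python
def is_url_static(path):
    """Mimic the two ACLs above: path_beg checks path prefixes,
    path_end checks suffixes, both case-insensitively (-i)."""
    p = path.lower()
    begs = ("/static", "/images", "/javascript", "/stylesheets")
    ends = (".jpg", ".gif", ".png", ".css", ".js")
    return p.startswith(begs) or p.endswith(ends)

for path in ("/images/logo.PNG", "/static/app.js", "/index.html"):
    backend = "static" if is_url_static(path) else "app"
    print(path, "->", backend)
```

Note that two acl lines with the same name are OR-ed together, which is why either a matching prefix or a matching suffix routes the request to the static back end.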

1. Install Apache

*********************** Back-end server configuration *****************************

# yum -y install httpd

backend static
balance   roundrobin  # load-balancing algorithm
server    static 127.0.0.1:4331 check  # one back-end server, with health checks
backend app
balance   roundrobin
server app1 127.0.0.1:5001 check rise 2 fall 1
server app2 127.0.0.1:5002 check rise 2 fall 1
server app3 127.0.0.1:5003 check rise 2 fall 1
server app4 127.0.0.1:5004 check rise 2 fall 1
# check rise 2 fall 1: health checking; rise is the number of successful checks before a stopped server is considered up again, fall the number of failed checks before a running server is considered down

2. Create the default page


# vim /var/www/html/index.html


<h1>sorry_server:192.168.10.2</h1>

III. Example configuration

3. Change the listening port to 8080 so it does not conflict with the port nginx listens on

Local IP: 172.16.5.16

# vim /etc/httpd/conf/httpd.conf

Enable IP forwarding

Listen 8080

# sysctl -w net.ipv4.ip_forward=1

4. Start the Apache service

Disable the firewall

IV. Configure the proxy (same configuration on both Nginx proxies)

Proxy for back-end IP 172.16.6.1

1. Install nginx

Provide a page on the back-end server and start httpd

# yum -y install nginx

# vim /var/www/html/index.html
<h1>welcome!</h1>
# service httpd start
global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8 header X-Forward-For # record the real client ip in the back-end server's log; remember to adjust the log format on the back-end servers
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend web
bind *:80
default_backend static
This can also be written as
frontend web 172.16.5.16:80
default_backend static
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
server   www.web1.com 172.16.6.1:80 check
stats          enable # enable the statistics page
stats          hide-version # hide version information
stats          realm haproxy\ stats # authentication realm (the backslash escapes the space)
stats          auth admin:admin # authentication user
stats          admin if TRUE # allow management once authenticated
stats          uri /abc # custom uri for the stats page

2. Define the upstream server group, inside the http{} block;

Result (screenshot not preserved)

# vim /etc/nginx/nginx.conf


        http {

Use a dedicated port to serve the stats page.

            upstream websrvs {

global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
listen stats
bind *:1080
stats          enable
stats          hide-version
stats          realm haproxy stats
stats          auth admin:admin
stats          admin if TRUE
stats          uri /abc
frontend web
bind *:80
default_backend static
backend static
server   www.web1.com 172.16.6.1:80 check

                server 192.168.10.3:80;

效果图:

                server 192.168.10.33:80;

澳门新萄京官方网站 10

                server 127.0.0.1:8080 backup;

澳门新萄京官方网站 11

            }


        }
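The upstream block above rotates requests across the listed servers and falls back to the `backup` entry only when every primary is down. A rough Python sketch of that selection logic (this is not nginx's actual implementation; the server addresses are the ones from the sample):

```python
from itertools import cycle

PRIMARIES = ["192.168.10.3:80", "192.168.10.33:80"]
BACKUP = "127.0.0.1:8080"  # the "backup" server from the upstream block

def make_picker(alive):
    """Return a picker: round-robin over live primaries, else the backup."""
    rotation = cycle(PRIMARIES)
    def pick():
        for _ in PRIMARIES:               # try each primary at most once
            server = next(rotation)
            if alive.get(server, False):
                return server
        return BACKUP                      # all primaries are down
    return pick

alive = {"192.168.10.3:80": True, "192.168.10.33:80": True}
pick = make_picker(alive)
print([pick() for _ in range(4)])          # alternates between the two primaries
alive["192.168.10.3:80"] = alive["192.168.10.33:80"] = False
print(pick())                              # 127.0.0.1:8080
```

The same picker keeps working when only one primary is marked dead: it simply skips it and keeps returning the surviving server.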


3. Reference the defined group in a location{} block of a server{}:

# vim /etc/nginx/conf.d/default.conf

server {
    location / {
        proxy_pass http://websrvs;
        index index.html;
    }
}

4. Start the service:

# service nginx start

IV. Load balancing: scheduling algorithms

roundrobin: dynamic; supports weights and runtime weight adjustment, and supports slow start.
static-rr: static; weights cannot be adjusted at runtime, no slow start.
leastconn: least connections; recommended only for very long sessions.
source: hashes the client IP, for dynamic backend servers; similar to nginx's ip_hash.
hash-type map-based: static; the server is chosen by taking the hash of the source IP modulo the number of servers.
hash-type consistent: dynamic; consistent hashing on a hash ring.
weight: dynamic, weight-based scheduling.
uri: balances on the request URI; backed by a hash table with its own hash-type, so once a URI has been served by a server, later requests for the same URI keep going to that server.
url_param: balances on a URL parameter such as a user account, sending the same account to the same server; also supports hash-type.
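The practical difference between hash-type map-based and hash-type consistent shows up when a server is added: modulo hashing remaps most clients, while a hash ring remaps only a small share. A toy illustration (md5 stands in for a hash function here; these are not HAProxy's real hashes, and the server names are made up):

```python
import hashlib
from bisect import bisect

def h(key: str) -> int:
    # Stable toy hash; HAProxy uses its own hash functions internally.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def map_based(ip: str, servers: list) -> str:
    # hash-type map-based: hash(source IP) modulo server count.
    return servers[h(ip) % len(servers)]

def make_ring(servers, vnodes=100):
    # hash-type consistent: each server owns many points on a ring.
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def consistent(ip: str, ring) -> str:
    points = [p for p, _ in ring]
    return ring[bisect(points, h(ip)) % len(ring)][1]

servers = ["srv1", "srv2", "srv3"]
clients = [f"172.16.5.{i}" for i in range(100)]

# How many of 100 clients land on a different server after adding srv4?
moved_map = sum(map_based(c, servers) != map_based(c, servers + ["srv4"]) for c in clients)
ring3, ring4 = make_ring(servers), make_ring(servers + ["srv4"])
moved_ring = sum(consistent(c, ring3) != consistent(c, ring4) for c in clients)
print(moved_map, moved_ring)  # map-based remaps far more clients than the ring
```

This is why consistent hashing is the better choice when backends come and go: existing sessions and caches mostly stay where they are.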

V. Configure keepalived (for the dual-master nginx proxies)

On host A:

1. Install keepalived

# yum -y install keepalived

2. Edit host A's configuration file /etc/keepalived/keepalived.conf as follows:

! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id CentOS6
    vrrp_mcast_group4 224.0.100.39
}
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}
vrrp_script chk_nginx {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
    fall 2
    rise 1
}
vrrp_instance ngx {
    state MASTER
    interface eth1
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass MDQ41fTp
    }
    virtual_ipaddress {
        192.168.20.100/24 dev eth1
    }
    track_script {
        chk_down
        chk_nginx
    }
}
vrrp_instance ngx2 {
    state BACKUP
    interface eth1
    virtual_router_id 15
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass XYZ41fTp
    }
    virtual_ipaddress {
        192.168.20.200/24 dev eth1
    }
    track_script {
        chk_down
        chk_nginx
    }
}

Host B gets the same configuration; only the following needs to change:

vrrp_instance ngx {
    state BACKUP
    priority 98
}
vrrp_instance ngx2 {
    state MASTER
    priority 100
}

One more haproxy scheduler, continuing the list above: hdr(<name>) balances on a request header (request headers only, not response headers). The format is hdr(host), for example hdr(www.a.com); it also supports hash-type.

Consistent-hash load balancing:

global
log     127.0.0.1 local2
chroot   /var/lib/haproxy
pidfile   /var/run/haproxy.pid
maxconn   4000
user    haproxy
group    haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode          http
log           global
option         httplog
option         dontlognull
option http-server-close
option forwardfor    except 127.0.0.0/8
option         redispatch
retries         3
timeout http-request  10s
timeout queue      1m
timeout connect     10s
timeout client     1m
timeout server     1m
timeout http-keep-alive 10s
timeout check      10s
maxconn         3000
listen stats
bind *:1080
stats          enable
stats          hide-version
stats          realm haproxy\ stats
stats          auth admin:admin
stats          admin if TRUE
stats          uri /abc
frontend web
bind *:80
default_backend static
backend static
balance   source
hash-type  consistent
server   www.web1.com 172.16.6.1:80 check weight 3
server   www.web2.com 172.16.6.2:80 check weight 1

V. ACL access control

frontend web
bind *:8080
default_backend static
acl abc src 172.16.5.100
redirect prefix http://172.16.5.16/def if abc

When the client IP is 172.16.5.100, the request is redirected to http://172.16.5.16/def.
acl must be combined with redirect prefix or redirect location.

Official example: redirect the URL after user login to a secure https connection.

acl clear   dst_port 80
acl secure   dst_port 8080
acl login_page url_beg  /login
acl logout   url_beg  /logout
acl uid_given url_reg  /login?userid=[^&]+
acl cookie_set hdr_sub(cookie) SEEN=1
redirect prefix  https://mysite.com set-cookie SEEN=1 if !cookie_set
redirect prefix  https://mysite.com      if login_page !secure
redirect prefix  http://mysite.com drop-query if login_page !uid_given
redirect location http://mysite.com/      if !login_page secure
redirect location / clear-cookie USERID=    if logout

Blocking access:

frontend web
bind *:8080
default_backend static
acl abc src 172.16.5.100
block if abc  # deny matching requests

Modify the original configuration to separate static and dynamic content:

frontend web
bind *:80
acl url_static    path_beg    -i /static /images /javascript /stylesheets
# string form
acl url_static    path_reg    -i ^/static ^/images ^/javascript ^/stylesheets
# regular-expression form
acl url_static    path_end    -i .jpg .jpeg .gif .png .css .js
# string form
acl url_static    path_reg   -i \.jpg$ \.jpeg$ \.gif$ \.png$ \.css$ \.js$
# regular-expression form
# prefer the string forms where possible: they are faster than regular expressions
use_backend static_servers     if url_static
default_backend dynamic_servers
backend static_servers
balance roundrobin
server imgsrv1 172.16.200.7:80 check maxconn 6000
server imgsrv2 172.16.200.8:80 check maxconn 6000
backend dynamic_servers
balance source
server websrv1 172.16.200.7:80 check maxconn 1000
server websrv2 172.16.200.8:80 check maxconn 1000
server websrv3 172.16.200.9:80 check maxconn 1000

haproxy listen configuration example:

listen webfarm
bind 192.168.0.99:80
mode http
stats enable
stats auth someuser:somepassword
balance roundrobin
cookie JSESSIONID prefix
option httpclose
option forwardfor
option httpchk HEAD /check.txt HTTP/1.0
server webA 192.168.0.102:80 cookie A check
server webB 192.168.0.103:80 cookie B check

A more complete haproxy example:

global
pidfile /var/run/haproxy.pid
log 127.0.0.1 local0 info
defaults
mode http
clitimeout   600000
srvtimeout   600000
timeout connect 8000
stats enable
stats auth  admin:admin
stats uri /monitor
stats refresh 5s
option httpchk GET /status
retries 5
option redispatch
errorfile 503 /path/to/503.text.file
balance roundrobin # each server is used in turns, according to assigned weight
frontend http
bind :80
monitor-uri  /haproxy # end point to monitor HAProxy status (returns 200)
acl api1 path_reg ^/api1/?
acl api2 path_reg ^/api2/?
use_backend api1 if api1
use_backend api2 if api2
backend api1
# option httpclose
server srv0 172.16.5.15:80 weight 1 maxconn 100 check inter 4000
server srv1 172.16.5.16:80 weight 1 maxconn 100 check inter 4000
server srv2 172.16.5.16:80 weight 1 maxconn 100 check inter 4000
backend api2
option httpclose
server srv01 172.16.5.18:80 weight 1 maxconn 50 check inter 4000

VI. High-availability proxying with keepalived

Topology: two nodes each run haproxy plus keepalived in a dual-master arrangement in front of the backend real servers. (Topology diagram omitted.)

Plan. Preparation follows the earlier posts: time synchronization, mutual SSH key trust, and hostnames that resolve to each other.

node1:
ip: 172.16.5.15
hostname: www.a.com

node2:
ip: 172.16.5.16
hostname: www.b.com

The backend real servers are assumed to be already set up.

Configure haproxy:

node1: # yum -y install haproxy
node2: # yum -y install haproxy
# cd /etc/haproxy
# mv haproxy.cfg haproxy.bak
# vim haproxy.cfg
global
log         127.0.0.1 local2
chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode                    http
log                     global
option                  httplog
option                  dontlognull
option http-server-close
option forwardfor       except 127.0.0.0/8 header X-Forward-For
option                  redispatch
retries                 3
timeout http-request    10s
timeout queue           1m
timeout connect         10s
timeout client          1m
timeout server          1m
timeout http-keep-alive 10s
timeout check           10s
maxconn                 3000
listen stats # a dedicated port for status management
bind *:1080
stats                   enable
stats                   hide-version
stats                   realm haproxy\ stats
stats                   auth admin:admin
stats                   admin if TRUE
stats                   uri /abc
frontend web
    bind *:80
    acl dynamic path_end -i .php
    acl abc src 172.16.5.100
    block if abc
    use_backend php if dynamic
    default_backend static
backend static
    balance     roundrobin
    server      www.web1.com 172.16.5.16:8080 check rise 2 fall 1 weight 1
    server      www.web2.com 172.16.5.15:8080 check rise 2 fall 1 weight 1
backend php
    balance roundrobin
    server    www.web3.com 172.16.6.1:80 check rise 2 fall 1 weight 1
    server    www.web4.com 172.16.6.2:80 check rise 2 fall 1 weight 1
# scp haproxy.cfg b:/etc/haproxy/

Configure keepalived:

node1:

# yum -y install keepalived
# cd /etc/keepalived/
# vim keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 1
weight 2
}
#vrrp_script chk_mantaince_down {
#   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
#   interval 1
#   weight 2
#}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 5
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
172.16.5.100/16
}
track_script {
chk_mantaince_down
chk_haproxy
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 50
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
172.16.5.101/16
}
track_script {
chk_mantaince_down
chk_haproxy
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}

This configuration accomplishes three things: 1) the two VRRP instances form a dual-master model, mainly so that front-end DNS can balance load across the two VIPs; 2) each single master/backup instance provides high availability for a service, provided that service is started before keepalived and is monitored; 3) keepalived must also react to its own state transitions, which requires the extra script referenced by notify_master "/etc/keepalived/notify.sh master" and the related directives; the script must live at exactly that path.

Write the notification script that keepalived calls on state transitions:

# vim /etc/keepalived/notify.sh
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=172.16.5.100
contact='[email protected]'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
/etc/rc.d/init.d/haproxy start
exit 0
;;
backup)
notify backup
/etc/rc.d/init.d/haproxy restart
exit 0
;;
fault)
notify fault
exit 0
;;
*)
echo "Usage: `basename $0` {master|backup|fault}"
exit 1
;;
esac

Note: the script references a single vip, while this experiment is a dual-master model with two VIPs. One vip is used here for simplicity; for precision, copy the script, change the vip, and point the other instance's notify paths in the configuration file at the copy.

node2 is configured the same way, only with the master/backup states and priorities swapped; this is not repeated here.

With everything configured, start haproxy and keepalived and verify the setup:

# service haproxy start
# service keepalived start
node1
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a5:31:22 brd ff:ff:ff:ff:ff:ff
inet 172.16.5.15/16 brd 172.16.255.255 scope global eth0
inet 172.16.5.101/16 scope global secondary eth0
inet6 fe80::20c:29ff:fea5:3122/64 scope link
valid_lft forever preferred_lft forever
node2
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:cc:55:6d brd ff:ff:ff:ff:ff:ff
inet 172.16.5.16/16 brd 172.16.255.255 scope global eth0
inet 172.16.5.100/16 scope global secondary eth0
inet6 fe80::20c:29ff:fecc:556d/64 scope link
valid_lft forever preferred_lft forever

Each node holds its own VIP (172.16.5.101 on node1, 172.16.5.100 on node2), as expected for the dual-master model.

Verify the effect:

Load balancing through the keepalived dual-master model: (screenshots omitted)

Static/dynamic separation, static page load balancing: (screenshot omitted)
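The vrrp_script health checks above rely on `killall -0 <name>`: signal 0 is never delivered, only the existence/permission check runs, so the exit status tells keepalived whether the process is alive. The same probe expressed in Python (os.kill with signal 0 checks a single PID; checking by process name, as killall does, would additionally require scanning /proc):

```python
import os

def pid_alive(pid: int) -> bool:
    """Return True if a process with this PID exists (signal-0 probe)."""
    try:
        os.kill(pid, 0)       # delivers nothing; only checks existence/permission
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user
    return True

print(pid_alive(os.getpid()))  # True: the current process certainly exists
```

This is also why `killall -0 nginx` makes a cheap health check: it costs no signal handling in the target and returns non-zero as soon as no matching process exists.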


VI. Simulate failures and verify the results

1. Start the keepalived service on both Nginx proxies:

# service keepalived start

2. Access 192.168.20.100; the backend web servers respond to requests in turn. (screenshot omitted)

3. Access 192.168.20.200; the backend web servers likewise respond in turn. (screenshot omitted)

4. Shut down one of the backend web servers and access 192.168.20.100 or 192.168.20.200; only the web server that is still running responds. (screenshot omitted)

5. Shut down both backend web servers; requests to 192.168.20.100 or 192.168.20.200 are now answered by the sorry_server defined under the main server in the Nginx proxy configuration. (screenshot omitted)

6. Stop the nginx service on one Nginx proxy; the backup server adds the VIP to its own interface and continues to serve, so clients accessing 192.168.20.100 or 192.168.20.200 notice nothing. (screenshot omitted)

Further checks on the haproxy setup:

Static/dynamic separation, dynamic page load balancing: (screenshots omitted)

Accessing the dedicated stats page configured earlier: (screenshots omitted)

After changing the blocked IP in the configuration to the client's own IP, the client gets the deny page:

frontend web
bind *:80
default_backend static
acl abc src 172.16.5.200
block if abc

172.16.5.200 is my workstation's IP address. (screenshot omitted)

That is the summary; corrections and suggestions are welcome.

This post is from the "秋风颂" blog; please keep this attribution when reposting.