High-Availability Load Balancing with the RHCS Suite + Nginx
The Red Hat Cluster Suite (RHCS) is an integrated set of software components that can be deployed in different configurations to meet your needs for high availability, load balancing, scalability, file sharing, and cost savings.
RHCS can provide several different types of clusters; the basic concepts are as follows:
Cluster concepts:
1. Node
An independent host running the cluster processes is called a node. Nodes are the core building blocks of an HA cluster; each one runs an operating system and the cluster software services. Within a cluster, nodes have primary and standby/backup roles. Every node has a unique hostname and owns its own set of resources, e.g. disks, file systems, network addresses, and application services. The primary node normally runs one or more application services, while the standby node stays in a monitoring state.
2. Resource
A resource is an entity that a node can control; when a node fails, its resources can be taken over by another node.
3. Event
Events are the things that can happen in a cluster, e.g. a node/system failure, loss of network connectivity, a NIC failure, or an application failure. These events cause a node's resources to fail over, and HA testing is based on triggering them.
4. Action
An action is how HA responds when an event occurs. Actions are driven by shell scripts; for example, when a node fails, the backup node stops or starts services through predefined scripts and then takes over the failed node's resources.
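All of these concepts (nodes, resources, a failover domain, and a script-driven service) ultimately meet in RHCS's central configuration file. As a hedged illustration only, a trimmed /etc/cluster/cluster.conf for the setup built later in this walkthrough might look roughly like the fragment below; the cluster name `mycluster` and `config_version` are placeholders, and the per-node fencing blocks a real cluster needs are omitted for brevity:

```xml
<?xml version="1.0"?>
<cluster config_version="2" name="mycluster">
  <clusternodes>
    <clusternode name="server1" nodeid="1"/>
    <clusternode name="server2" nodeid="2"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <!-- ordered + restricted + nofailback, as configured in luci later -->
      <failoverdomain name="nginxfail" ordered="1" restricted="1" nofailback="1">
        <failoverdomainnode name="server1" priority="1"/>
        <failoverdomainnode name="server2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="172.25.26.100/24" sleeptime="5"/>
      <script file="/etc/init.d/nginx" name="nginx"/>
    </resources>
    <service domain="nginxfail" name="nginx" recovery="relocate">
      <ip ref="172.25.26.100/24"/>
      <script ref="nginx"/>
    </service>
  </rm>
</cluster>
```

In practice you rarely write this file by hand; luci (below) generates and distributes it for you.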
Lab environment:
Physical host: 172.25.26.250
server1: 172.25.26.2
server2: 172.25.26.3
1. Configure the yum repositories on server1 and server2
[root@server1 init.d]# vim /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.26.250/source6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.26.250/source6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.26.250/source6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.26.250/source6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.26.250/source6.5/ScalableFileSystem
gpgcheck=0
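Before moving on, it is worth sanity-checking the repo file: every section should carry a baseurl, and only the base repo has gpgcheck enabled. The sketch below is optional and self-contained; it writes a copy of the file above to a temp path (so it can run anywhere) and counts sections against baseurls:

```shell
# Self-contained copy of the repo file, so the check doesn't touch /etc.
repo=$(mktemp)
cat > "$repo" <<'EOF'
[rhel-source]
baseurl=http://172.25.26.250/source6.5
gpgcheck=1
[HighAvailability]
baseurl=http://172.25.26.250/source6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
baseurl=http://172.25.26.250/source6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
baseurl=http://172.25.26.250/source6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
baseurl=http://172.25.26.250/source6.5/ScalableFileSystem
gpgcheck=0
EOF

# Each [section] header should be paired with one baseurl line.
sections=$(grep -c '^\[' "$repo")
baseurls=$(grep -c '^baseurl=' "$repo")
echo "sections=$sections baseurls=$baseurls"
rm -f "$repo"
```

On the real machines, `yum clean all && yum repolist` is the quickest end-to-end check that all five repos resolve.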
2. Install ricci on both server1 and server2, set its password, and start it
[root@server1 ~]# yum install ricci -y
[root@server1 ~]# passwd ricci
[root@server1 ~]# /etc/init.d/ricci start
[root@server1 ~]# chkconfig ricci on
(run the same commands on server2)
3. Install luci on server1 and start it
[root@server1 ~]# yum install luci -y
[root@server1 ~]# /etc/init.d/luci start
[root@server1 ~]# chkconfig luci on
- Log in to the web interface with server1's root user and password:
https://172.25.26.2:8084
Create the cluster as shown in the screenshots; the password is the ricci password set on server1 and server2. server1 and server2 will reboot during cluster creation.
II. Install the fence system
Physical host: 172.25.26.250
1. Installation
[root@physical ~]# yum install fence-* -y
[root@physical ~]# fence_virtd -c
[root@physical ~]# mkdir /etc/cluster
Generate a random key file, then restart the service:
[root@physical cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
[root@physical cluster]# systemctl restart fence_virtd
Copy the key file to server1 and server2:
[root@physical cluster]# scp /etc/cluster/fence_xvm.key root@172.25.26.2:/etc/cluster/
[root@physical cluster]# scp /etc/cluster/fence_xvm.key root@172.25.26.3:/etc/cluster/
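fence_virtd and the fence_xvm agents on the nodes only work if all copies of the key file are byte-identical. As a self-contained sketch (using a temp directory instead of /etc/cluster, with a local `cp` standing in for the scp step), you can generate the 128-byte key and confirm the copies match by checksum:

```shell
dir=$(mktemp -d)
# Generate a 128-byte random key, exactly as on the physical host.
dd if=/dev/urandom of="$dir/fence_xvm.key" bs=128 count=1 2>/dev/null

# "Distribute" the key (a local copy stands in for scp here) and verify.
cp "$dir/fence_xvm.key" "$dir/fence_xvm.key.server1"
size=$(wc -c < "$dir/fence_xvm.key")
sum1=$(md5sum "$dir/fence_xvm.key" | awk '{print $1}')
sum2=$(md5sum "$dir/fence_xvm.key.server1" | awk '{print $1}')
echo "size=$size match=$( [ "$sum1" = "$sum2" ] && echo yes || echo no )"
rm -rf "$dir"
```

On the real hosts, running `md5sum /etc/cluster/fence_xvm.key` on the physical host, server1, and server2 and comparing the three sums catches a corrupted or stale copy before fencing ever gets tested.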
2. Create the fence device
Open the browser, go to Fence Devices, and click Add
Select server1, click Add Fence Method, and name it fence1
Select server2, click Add Fence Method, and name it fence2
Under fence1, click Add Fence Instance
Enter the virtual machine's UUID as the Domain
Add fence2 the same way
Test:
On server1:
[root@server1 ~]# fence_node server2
[root@server1 ~]# clustat
III. Add the Nginx service on top of the working cluster
1. Deploy nginx on both server1 and server2
[root@server1 ~]# tar zxf nginx-1.14.0.tar.gz
[root@server1 ~]# cd nginx-1.14.0
[root@server1 nginx-1.14.0]# vim src/core/nginx.h    # optional: mask the version string
[root@server1 nginx-1.14.0]# vim auto/cc/gcc         # optional: remove the -g debug flag
Install the build dependencies, then configure, build, and install:
[root@server1 nginx-1.14.0]# yum install -y gcc pcre-devel openssl-devel
[root@server1 nginx-1.14.0]# ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module --with-threads --with-file-aio
[root@server1 nginx-1.14.0]# make
[root@server1 nginx-1.14.0]# make install
Link the nginx binary into the PATH and verify the configuration:
[root@server1 ~]# ln -s /usr/local/nginx/sbin/nginx /sbin
[root@server1 ~]# nginx -t
[root@server1 ~]# lscpu                              # check CPU count before tuning worker settings
[root@server1 ~]# vim /etc/security/limits.conf
[root@server1 ~]# vim /usr/local/nginx/conf/nginx.conf
[root@server1 ~]# useradd -M -d /usr/local/nginx nginx
[root@server1 ~]# id nginx
[root@server1 ~]# nginx
[root@server1 ~]# nginx -s reload
[root@server1 ~]# nginx -s stop
[root@server1 ~]# nginx
Copy the installation to server2:
[root@server1 ~]# scp -r /usr/local/nginx/ root@172.25.26.3:/usr/local
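The two `vim` edits above are the usual cosmetic tweaks: src/core/nginx.h to mask the advertised version, and auto/cc/gcc to drop the `-g` debug flag. As a hedged, non-interactive equivalent, the version-masking edit can be done with sed; the block below runs against a stand-in copy of nginx.h so it is self-contained (the macro names are the real nginx-1.14.0 definitions, but the replacement string "webserver" is just an example choice):

```shell
# Stand-in for src/core/nginx.h so the example can run anywhere.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define NGINX_VERSION      "1.14.0"
#define NGINX_VER          "nginx/" NGINX_VERSION
EOF

# Replace the "nginx/<version>" banner with a neutral string,
# mirroring what the manual vim edit does before ./configure.
sed -i 's#"nginx/" NGINX_VERSION#"webserver"#' "$hdr"
masked=$(grep -c '"webserver"' "$hdr")
echo "masked lines: $masked"
rm -f "$hdr"
```

After rebuilding, `curl -I http://localhost` should show the masked Server header instead of the version.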
On server2:
[root@server2 ~]# ln -s /usr/local/nginx/sbin/nginx /sbin
[root@server2 ~]# nginx -t
[root@server2 ~]# useradd -M -d /usr/local/nginx nginx
[root@server2 ~]# id nginx
2. Configure via the luci web interface
(1) Select Failover Domains and fill in the Name (nginxfail), as shown. The three checkboxes mean: the service fails over to another node when its node dies; the service runs only on the specified nodes; and after a failover, the service does not move back when the original node recovers. Check Member under server1 and server2 so the service can run on either node; a smaller Priority value means a higher priority. Click Create.
(2) Select Resources, click Add, and choose IP Address as shown. The address 172.25.26.100 must be an unused IP, 24 is the netmask length in bits, and 5 is the wait time in seconds. Click Submit.
- Add a Script resource the same way: nginx is the service name and /etc/init.d/nginx is the path to its start script. Click Submit.
Note: place the nginx control script at /etc/init.d/nginx; it is written by hand and is what starts nginx:
[root@server1 ~]# vim /etc/init.d/nginx
#!/bin/bash
# Control script for the source-built nginx under /usr/local/nginx
[ -f /etc/init.d/functions ] && . /etc/init.d/functions
pidfile=/usr/local/nginx/logs/nginx.pid

Start_Nginx(){
    if [ -f $pidfile ];then
        echo "Nginx is running"
    else
        /usr/local/nginx/sbin/nginx &>/dev/null
        action "Nginx is Started" /bin/true
    fi
}
Stop_Nginx(){
    if [ -f $pidfile ];then
        /usr/local/nginx/sbin/nginx -s stop &>/dev/null
        action "Nginx is Stopped" /bin/true
    else
        echo "Nginx is already Stopped"
    fi
}
Reload_Nginx(){
    if [ -f $pidfile ];then
        /usr/local/nginx/sbin/nginx -s reload &>/dev/null
        action "Nginx is Reloaded" /bin/true
    else
        echo "Can't open $pidfile, no such file or directory"
    fi
}
case $1 in
    start)
        Start_Nginx
        RETVAL=$?
        ;;
    stop)
        Stop_Nginx
        RETVAL=$?
        ;;
    restart)
        Stop_Nginx
        sleep 3
        Start_Nginx
        RETVAL=$?
        ;;
    reload)
        Reload_Nginx
        RETVAL=$?
        ;;
    *)
        echo "USAGE: $0 {start|stop|reload|restart}"
        exit 1
esac
exit $RETVAL
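The pidfile test is the heart of this script: nginx writes logs/nginx.pid on startup and removes it on `-s stop`, so the file's existence stands in for "is nginx running". A self-contained sketch of that pattern (using a temp path rather than the real pidfile) shows how the start/stop decisions follow from it:

```shell
pidfile=$(mktemp -u)   # a path only; the file does not exist yet

check() {
    # Mirrors the script's test: "running" iff the pidfile exists.
    if [ -f "$pidfile" ]; then echo running; else echo stopped; fi
}

first=$(check)         # no pidfile yet
touch "$pidfile"       # simulate nginx writing its pid on startup
second=$(check)        # pidfile present
rm -f "$pidfile"       # simulate "nginx -s stop" removing it
third=$(check)
echo "$first $second $third"
```

One caveat of this approach: if nginx crashes without cleaning up its pidfile, the script will wrongly report it as running, which is why rgmanager's periodic `status` checks matter in a real cluster.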
(3) Select Service Groups and click Add, as shown. nginx is the service name; the two checkboxes below it enable automatically starting the service and running it exclusively. Click Add Resource and attach the global IP Address and Script resources:
- Add Resource: select 172.25.26.100
- Add Resource: select the nginx script
Click Submit. When finished, the nginx service group shows as running.
3. Testing
Check the status with clustat, then move the nginx service from server1 to server2:
[root@server1 ~]# clustat                        # view the cluster status
[root@server1 ~]# clusvcadm -r nginx -m server2  # relocate the nginx service group to server2
[root@server1 ~]# clusvcadm -e nginx             # re-enable the nginx service group
[root@server1 ~]# clusvcadm -d nginx             # disable (stop) the nginx service group
All of the cluster configuration lives in /etc/cluster/cluster.conf; if you delete it, everything is gone.
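Because cluster.conf is the single source of truth, taking a dated backup before any change is cheap insurance. A sketch of that habit, demonstrated on a temp file standing in for /etc/cluster/cluster.conf so it can run anywhere:

```shell
# Stand-in for /etc/cluster/cluster.conf.
conf=$(mktemp)
echo '<cluster config_version="2" name="mycluster"/>' > "$conf"

# Timestamped copy, e.g. cluster.conf.bak.20240101120000
stamp=$(date +%Y%m%d%H%M%S)
cp "$conf" "$conf.bak.$stamp"
ok=$( [ -f "$conf.bak.$stamp" ] && cmp -s "$conf" "$conf.bak.$stamp" && echo yes || echo no )
echo "backup ok: $ok"
rm -f "$conf" "$conf.bak.$stamp"
```

On the real cluster, remember that RHCS distributes cluster.conf itself; edit it through luci (or bump config_version and propagate) rather than copying a backup straight over a live node's copy.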