Oracle 11g RAC Setup (VMware Environment)
Environment
Host operating system: Windows 7
Virtual machines: VMware Workstation 12 running two 64-bit Red Hat Enterprise Linux 6 guests
Shared storage: ASM
Since shared ASM storage is used, create the shared disks with VMware's vmware-vdiskmanager. In a cmd window, change into the VMware installation directory:
cd /d F:
cd vmware
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 E:\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 E:\RAC\Sharedisk\fra.vmdk
This creates two preallocated (-t 2) disks: a 20 GB data disk and a 10 GB recovery/backup disk.
Add the following to the .vmx configuration file in the RAC1 virtual machine's directory:
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "E:\RAC\Sharedisk\data.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "E:\RAC\Sharedisk\fra.vmdk"
scsi1:2.deviceType = "plainDisk"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
Add the following to RAC2's .vmx file (RAC2 also needs the same scsi1 controller and scsi1:1/scsi1:2 disk entries as RAC1 so that both nodes see the shared disks):
scsi1.sharedBus = "virtual"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
gui.lastPoweredViewMode = "fullscreen"
checkpoint.vmState = ""
Give each virtual machine two network adapters and assign IP addresses; the network can be configured with system-config-network.
eth0 serves as the public interface and the second adapter carries the private interconnect.
1. Disable the firewall and SELinux (on both nodes)
[root@rac1 ~]# vi /etc/sysconfig/selinux
SELINUX=disabled
(This takes effect after a reboot; setenforce 0 switches SELinux to permissive immediately.)
[root@rac1 ~]# service iptables stop
[root@rac1 ~]# chkconfig iptables off
2. Create the required users, groups, and directories, and grant permissions
[root@rac1 /]# pvcreate /dev/sdd1
Physical volume "/dev/sdd1" successfully created
[root@rac1 /]# vgcreate oravg /dev/sdd1
Volume group "oravg" successfully created
[root@rac1 /]# lvcreate -L 19.99G -n oralv oravg
Rounding up size to full physical extent 19.99 GiB
Logical volume "oralv" created.
[root@rac1 /]# mkfs.ext4 /dev/oravg/oralv
[root@rac1 /]# mkdir /u01
[root@rac1 /]# mount /dev/oravg/oralv /u01
[root@rac1 /]# vi /etc/fstab
Add: /dev/oravg/oralv /u01 ext4 defaults 0 0
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
3. Per-node prerequisite checks
Memory: at least 2.5 GB.
Swap size:
With 2.5 GB to 16 GB of RAM, swap should be at least as large as RAM.
With more than 16 GB of RAM, 16 GB of swap is sufficient.
Check memory and swap sizes:
[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal: 2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 2621436 kB
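The swap sizing rule above can be sketched as a tiny shell helper (illustrative only, assuming whole-GB values):

```shell
# Minimum recommended swap (GB) for a given amount of RAM (GB),
# following the rule above: swap >= RAM up to 16 GB, capped at 16 GB beyond.
required_swap_gb() {
  ram_gb=$1
  if [ "$ram_gb" -le 16 ]; then
    echo "$ram_gb"
  else
    echo 16
  fi
}

required_swap_gb 4     # prints 4
required_swap_gb 32    # prints 16
```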
If swap is too small, it can be extended with a swap file. First work out the dd block count from the desired swap-file size in MB: with bs=1024 (1 KiB blocks), count = size_in_MB * 1024. For example, a 64 MB swap file needs count = 64 * 1024 = 65536.
Then:
# dd if=/dev/zero of=/swapfile bs=1024 count=65536
# mkswap /swapfile
# swapon /swapfile
# vi /etc/fstab
Add: /swapfile swap swap defaults 0 0
# cat /proc/swaps    or    # free -m    (check the swap size)
# swapoff /swapfile    (disable the extended swap again)
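As a quick check of the block arithmetic above (a worked example, not part of the procedure):

```shell
# One dd block is bs=1024 bytes (1 KiB), so:
#   count = swap file size in MB * 1024
mb=64
echo $((mb * 1024))    # prints 65536, the count passed to dd above
```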
4. Kernel parameter settings
vi /etc/sysctl.conf
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
Apply the settings with sysctl -p.
Configure shell limits for the oracle and grid users (also make sure /etc/pam.d/login contains the line "session required pam_limits.so" so the limits are applied at login):
vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Edit /etc/hosts on all nodes, adding the public, private (interconnect), VIP, and SCAN entries for both nodes.
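As a sketch, a two-node layout might look like the following. The host names rac1/rac2, the scan-ip name, and the SCAN address 192.168.248.105 match what appears later in this document; all other addresses and the -vip/-priv names are hypothetical placeholders:

```text
# Public
192.168.248.101   rac1
192.168.248.102   rac2
# Virtual IPs (VIP)
192.168.248.103   rac1-vip
192.168.248.104   rac2-vip
# Private interconnect (hypothetical subnet)
10.10.10.101      rac1-priv
10.10.10.102      rac2-priv
# SCAN (resolved via hosts since there is no DNS here)
192.168.248.105   scan-ip
```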
5. Check whether the required packages are installed
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
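rpm prints a "package ... is not installed" line for anything missing, so the output is easy to filter down to the packages that still need installing; a sketch with hypothetical sample lines:

```shell
# Sample rpm query output (hypothetical), filtered to just the missing packages:
printf '%s\n' \
  'binutils-2.20.51.0.2-5.36.el6 (x86_64)' \
  'package compat-libstdc++-33 is not installed' \
  'package unixODBC-devel is not installed' |
  grep '^package .* is not installed'
```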
Install any missing packages:
rpm -ivh compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
rpm -ivh elfutils-libelf-devel-0.164-2.el6.x86_64.rpm
rpm -ivh libstdc++-devel-4.4.7-17.el6.x86_64.rpm
rpm -ivh gcc-c++-4.4.7-17.el6.x86_64.rpm
rpm -ivh libaio-devel-0.3.107-10.el6.x86_64.rpm
rpm -ivh unixODBC-2.2.14-14.el6.x86_64.rpm
rpm -ivh unixODBC-devel-2.2.14-14.el6.x86_64.rpm
6. Configure user equivalence (passwordless SSH)
su - oracle
Run on both nodes:
ssh-keygen -t rsa
ssh-keygen -t dsa
Then on node 1 only:
[oracle@rac1 ~]$
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/
This must be done for both the grid and oracle users; afterwards verify with ssh rac2 date (and ssh rac1 date from rac2) so that no password or host-key prompt remains.
su - grid
7. Configure the shared disks with udev
The two shared disks back two disk groups: DATA for data files and FRA for archive/backup files.
vi /etc/udev/rules.d/99-x-oracleasm.rules
Add:
KERNEL=="sd*", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a1a58b12519a20af30946bc42", NAME="asm_disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29fc904fc1cfa86695d15e5d054", NAME="asm_disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
Each RESULT value is the disk's WWID, obtained with /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdX. After saving the rules, run start_udev (RHEL 6) on both nodes so that /dev/asm_disk1 and /dev/asm_disk2 appear with grid:asmadmin ownership.
8. Install the cvuqdisk package for Linux
Install the cvuqdisk RPM on both RAC nodes; without it, the Cluster Verification Utility cannot discover shared disks. The RPM is in the rpm directory of the Grid Infrastructure installation media.
Then run the pre-installation check:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Review the CVU report and fix any reported problems.
Installing Grid Infrastructure
The installer only needs to be run on one node; the software is copied to the other nodes automatically. Here it is run on rac1.
Start the graphical installer as the grid user.
1. Both nodes can identify the ASM shared disks; once this is confirmed and root.sh later runs normally, the related installer warning can be ignored.
2. The local virtual machines have no NTP time source, so node 1 is configured as the time server and node 2 as its client.
Run the root scripts on node 1 first, then on node 2.
Issue 1
Running /u01/app/11.2.0/grid/root.sh fails (on RHEL 6 this is typically the missing libcap.so.1 link).
Fix: [root@rac1 lib64]# ln -s libcap.so.2.16 libcap.so.1
Issue 2
Fix: add the following to the local hosts file:
127.0.0.1 localhost
The installation then succeeds.
Checking the error log shows the remaining error occurs only because resolv.conf is not configured (there is no DNS here), so it can be ignored.
Installation complete.
Post-installation resource checks
Check whether the clusterware starts automatically at boot:
rac1:/home/grid$more /etc/oracle/scls_scr/rac1/root/ohasdstr
enable
To disable automatic startup at boot (as root):
#crsctl disable crs
Check the CRS (Cluster Ready Services) status:
rac1:/home/grid$crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources:
rac1:/home/grid$crs_stat -t -v
Check the cluster nodes:
rac1:/home/grid$olsnodes -n
rac1 1
rac2 2
Check the Oracle TNS listener processes on both nodes:
rac1:/home/grid$ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER
Check that the installed Oracle ASM instances are running:
rac1:/home/grid$srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
Creating ASM disk groups
There are two ways:
- asmca, the graphical tool
As the grid user, run asmca to start the disk group creation wizard.
The DATA disk group, used for data files, was already created during the grid installation; add the FRA disk group here.
- SQL against the ASM instance
As practice, the following adds a disk, drops it again, and creates a new disk group.
# su - grid
rac1:/home/grid$sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Wed May 23 04:26:07 2018
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> alter diskgroup DATA add disk '/dev/asm_disk1' rebalance power 5;
Diskgroup altered.
SQL> set linesize 200 pagesize 999
SQL> select * from V$asm_operation;  -- monitor the rebalance; it is complete when EST_MINUTES reaches 0 (no rows selected)
no rows selected
Check the disk groups:
rac1:/home/grid$sqlplus / as sysasm
The disk has been added to the DATA disk group successfully.
Next, drop the just-added disk from DATA, then create the new FRA disk group with it.
The drop succeeds once the rebalance finishes.
Create the new disk group:
rac1:/home/grid$sqlplus / as sysasm
SQL> create diskgroup FRA external redundancy disk '/dev/asm_disk1';
Diskgroup created.
The FRA disk group has been created successfully on node 1.
On node 2, however, the new disk group exists but is not mounted yet, so mount it there:
SQL> alter diskgroup FRA mount;
Mounted successfully.
Installing the Oracle Database software (RAC)
Run the installer on node rac1 only:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd db/database
[oracle@rac1 database]$ ./runInstaller
After the root scripts finish, the installation is complete.
Creating the cluster database
su - oracle
Create the database with dbca.
Since this is a local practice database, the maximum number of datafiles and the redo log file sizes were left at their defaults.
Wait for the creation to finish.
Checking the cluster's health
Check the cluster database status:
[root@rac1 ~]# su - grid
rac1:/home/grid$srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
Check the cluster SCAN VIP configuration:
rac1:/home/grid$srvctl config scan
SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-ip/192.168.248.105
Check the SCAN listener configuration:
rac1:/home/grid$srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Starting and stopping the database across the whole cluster
As the grid user:
[grid@rac1 ~]$ srvctl stop database -d orcl
[grid@rac1 ~]$ srvctl start database -d orcl
Shutting down all nodes, as root:
[root@rac1 bin]# pwd
/u01/app/11.2.0/grid/bin
[root@rac1 bin]# ./crsctl stop cluster -all
Shutting down only the current node:
[root@rac1 bin]# ./crsctl stop crs
The resources on this node then fail over to node 2.