Hadoop 3.1.0 HA Fully Distributed Cluster Installation and Deployment (Detailed Tutorial)

1. Environment Overview

Server environment: five CentOS 7 servers, two acting as NameNodes and three as DataNodes.

 

 

Role layout (1 marks a role running on that node; NN = NameNode, DN = DataNode, ZK = ZooKeeper, ZKFC = ZKFailoverController, JN = JournalNode, RM = ResourceManager, NM = NodeManager):

| Node | IP              | NN | DN | ZK | ZKFC | JN | RM | NM |
|------|-----------------|----|----|----|------|----|----|----|
| h01  | 192.168.163.135 | 1  |    |    | 1    |    | 1  |    |
| h02  | 192.168.163.136 | 1  |    |    | 1    |    | 1  |    |
| h03  | 192.168.163.137 |    | 1  | 1  |      | 1  |    | 1  |
| h04  | 192.168.163.138 |    | 1  | 1  |      | 1  |    | 1  |
| h05  | 192.168.163.139 |    | 1  | 1  |      | 1  |    | 1  |

 

Installation packages:

JDK:       jdk-8u65-linux-x64.tar.gz

ZooKeeper: zookeeper-3.4.9.tar.gz

Hadoop:    hadoop-3.1.0.tar.gz

2. Configure the hosts file, disable the firewall, and set up passwordless SSH (on every node)

2.1 Set the hostname (run on every node, substituting that node's own name, h01 through h05)

hostnamectl --static set-hostname h01

2.2 Add hosts entries

vim /etc/hosts

Append the following:

192.168.163.135 h01

192.168.163.136 h02

192.168.163.137 h03

192.168.163.138 h04

192.168.163.139 h05

Copy the file to the other four servers:

scp /etc/hosts [email protected]:/etc/hosts

scp /etc/hosts [email protected]:/etc/hosts

scp /etc/hosts [email protected]:/etc/hosts

scp /etc/hosts [email protected]:/etc/hosts

2.3 Disable the firewall

Install the iptables services package (so the iptables unit exists to manage):

yum install iptables-services

Stop both firewall services and disable them permanently:

systemctl stop iptables
systemctl disable iptables
systemctl stop firewalld
systemctl disable firewalld
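
Both units should now report inactive and disabled; a quick check:

systemctl status firewalld
systemctl status iptables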

2.4 Set up passwordless SSH (essential; do not skip this)

Generate a key pair on all five servers:

ssh-keygen -t rsa

 

Copy the public key to every server, missing none (run on every node, including to the node itself), e.g.:

ssh-copy-id h02
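
Instead of running ssh-copy-id by hand for all twenty-five node pairs, a short loop can be run on each node (a sketch assuming the h01 to h05 names from /etc/hosts; each copy prompts for the target's root password):

# Push this node's public key to all five servers, itself included.
for host in h01 h02 h03 h04 h05; do
    ssh-copy-id "$host"
done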

Test the connection from any machine:

ssh [email protected]

 

3. Install the JDK

Create the directory:

mkdir -p /usr/local/java

Extract the JDK archive into it:

tar -zxvf jdk-8u65-linux-x64.tar.gz -C /usr/local/java

 

Configure the environment variables:

vim /etc/profile

Append the following:

export JAVA_HOME=/usr/local/java/jdk1.8.0_65

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Apply the changes:

source /etc/profile
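
Verify the JDK is now on the PATH:

java -version

This should report version 1.8.0_65.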

 

Copy the JDK to the other servers (repeat for h02 through h05):

scp -r /usr/local/java/jdk1.8.0_65 h02:/usr/local/java/jdk1.8.0_65

Copy the environment file to the other servers (likewise for each host):

scp /etc/profile h05:/etc

Apply the environment variables on every machine:

source /etc/profile
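
As a sketch, the whole distribution can be done in one pass from h01 (assumes the passwordless SSH from section 2.4 and uses only paths defined above):

# Copy the JDK and /etc/profile to the other four nodes.
for host in h02 h03 h04 h05; do
    ssh "$host" mkdir -p /usr/local/java
    scp -r /usr/local/java/jdk1.8.0_65 "$host":/usr/local/java/
    scp /etc/profile "$host":/etc/profile
done
# /etc/profile is read at login, so log in again on each node
# (or run "source /etc/profile" there) for it to take effect.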

 

4. Install the ZooKeeper cluster

Create the ZooKeeper data directory (on h03, h04, and h05):

mkdir -p /zdata/data

Extract ZooKeeper under /zdata:

cd /zdata
tar -xzvf zookeeper-3.4.9.tar.gz

Create a symlink to the versioned directory:

ln -sf zookeeper-3.4.9 zookeeper

Update the environment variables:

vim /etc/profile

so that the file reads:

export JAVA_HOME=/usr/local/java/jdk1.8.0_65

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export ZOOKEEPER_HOME=/zdata/zookeeper

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$ZOOKEEPER_HOME/bin

Copy the sample configuration and edit it:

cp /zdata/zookeeper/conf/zoo_sample.cfg /zdata/zookeeper/conf/zoo.cfg

Set the contents as follows (in each server.N line, 2888 is the peer communication port and 3888 the leader-election port):

tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zdata/data
# the port at which the clients will connect
clientPort=2181
server.1=h03:2888:3888
server.2=h04:2888:3888
server.3=h05:2888:3888

Copy the install directory and environment file to h04 and h05:

scp /etc/profile h04:/etc
scp /etc/profile h05:/etc
scp -r /zdata/zookeeper-3.4.9 h04:/zdata/zookeeper-3.4.9
scp -r /zdata/zookeeper-3.4.9 h05:/zdata/zookeeper-3.4.9

Recreate the zookeeper symlink on each target (scp does not carry it over), then apply the environment:

ln -sf /zdata/zookeeper-3.4.9 /zdata/zookeeper
source /etc/profile

Generate the ID file in the data directory (/zdata/data); each server's number must match its server.N line in zoo.cfg:

On h03: echo "1" > /zdata/data/myid
On h04: echo "2" > /zdata/data/myid
On h05: echo "3" > /zdata/data/myid

Start the ZooKeeper service from the bin directory (on all three nodes):

./zkServer.sh start

Run jps to check whether it started; a QuorumPeerMain process indicates success.
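
With all three nodes up, each node's role can also be checked from the same bin directory; one node should report leader and the other two follower:

./zkServer.sh status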

 

5. Install the Hadoop cluster

Create the hadoop directory (on all five servers):

mkdir -p /usr/local/hadoop

Extract Hadoop:

tar -zxvf hadoop-3.1.0.tar.gz -C /usr/local/hadoop/

Configure the environment variables (on every node):

export JAVA_HOME=/usr/local/java/jdk1.8.0_65

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.0

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the file with source /etc/profile, as before.

Now edit the Hadoop configuration files, starting with hadoop-env.sh:

vim /usr/local/hadoop/hadoop-3.1.0/etc/hadoop/hadoop-env.sh

Add the following:

export JAVA_HOME=/usr/local/java/jdk1.8.0_65

export HADOOP_HOME=/usr/local/hadoop/hadoop-3.1.0

Edit core-site.xml:

<configuration>
    <!-- Set the HDFS nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1/</value>
    </property>
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hdata/tmp</value>
    </property>
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>h03:2181,h04:2181,h05:2181</value>
    </property>
</configuration>

Edit hdfs-site.xml:

<configuration>
    <!-- The nameservice id; must match fs.defaultFS in core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>h01:9000</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>h01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>h02:9000</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>h02:50070</value>
    </property>
    <!-- Where the NameNode edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://h03:8485;h04:8485;h05:8485/ns1</value>
    </property>
    <!-- Local directory where each JournalNode stores its data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hdata/jdata</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- How clients locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH; point it at the private key -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connect timeout in milliseconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
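
One caveat: the configuration above references /hdata/tmp and /hdata/jdata, which this guide never explicitly creates. Pre-creating them on every node avoids missing-directory or permission surprises (strictly, /hdata/jdata is only needed on the JournalNode hosts h03, h04, and h05):

mkdir -p /hdata/tmp /hdata/jdata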

 

Edit mapred-site.xml:

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Edit yarn-site.xml:

<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical ids of the two RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Host of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>h01</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>h02</value>
    </property>
    <!-- ZooKeeper quorum for RM state -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>h03:2181,h04:2181,h05:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Edit the workers file (in the same etc/hadoop directory); it lists the DataNode/NodeManager hosts, which per the role table are h03 through h05:

vim workers

Contents:

h03
h04
h05

In /usr/local/hadoop/hadoop-3.1.0/sbin, add the following lines at the top of both start-dfs.sh and stop-dfs.sh:

#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root

Likewise, add the following at the top of start-yarn.sh and stop-yarn.sh:

 

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Copy the configured Hadoop directory to the other machines (and do not forget to update /etc/profile on each):

scp -r /usr/local/hadoop/hadoop-3.1.0 h03:/usr/local/hadoop/hadoop-3.1.0

scp -r /usr/local/hadoop/hadoop-3.1.0 h04:/usr/local/hadoop/hadoop-3.1.0

scp -r /usr/local/hadoop/hadoop-3.1.0 h05:/usr/local/hadoop/hadoop-3.1.0

 

Startup sequence

Step 1: start the JournalNodes on h03, h04, and h05, then check with jps that each started:

sbin/hadoop-daemon.sh start journalnode
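
As a side note, Hadoop 3.x deprecates the per-daemon hadoop-daemon.sh script in favor of a --daemon flag; both forms work on 3.1.0, the newer one being:

hdfs --daemon start journalnode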

 

On h01, format HDFS (after formatting, copy the Hadoop directory to the second NameNode, h02):

hdfs namenode -format
scp -r /usr/local/hadoop/hadoop-3.1.0 h02:/usr/local/hadoop/hadoop-3.1.0
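
Note that with hadoop.tmp.dir set to /hdata/tmp, the formatted NameNode metadata lives outside the install directory, so the scp above copies configuration but not metadata. The documented way to seed the standby is to start the NameNode on h01 (hdfs --daemon start namenode) and then run, on h02:

hdfs namenode -bootstrapStandby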

 

On h01, format the ZKFC state in ZooKeeper (this is only done once):

hdfs zkfc -formatZK

 

Start HDFS from h01:

sbin/start-dfs.sh
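
After it comes up, jps on h01 and h02 should show NameNode and DFSZKFailoverController, while h03, h04, and h05 should show DataNode, JournalNode, and QuorumPeerMain. The YARN side is configured above but never started in the original steps; assuming that was the intent, start it from h01 as well (if the rm2 ResourceManager on h02 does not start automatically, run yarn --daemon start resourcemanager there):

sbin/start-yarn.sh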

 

 

6. Verify the cluster

Open the two NameNode web UIs in a browser; one should report itself as active and the other as standby:

http://h01:50070

http://h02:50070
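
The active/standby state can also be checked, and failover exercised, from the command line (nn1 and nn2 are the service ids defined in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

Kill the NameNode process on whichever host reports active (find its pid with jps); the standby should transition to active within the ZKFC timeout. If YARN was started, the ResourceManager web UI listens on its default port 8088, e.g. http://h01:8088.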