Hadoop 2.6.0 Three-Node Cluster Setup (Part 1)

I. Operating System Configuration (perform these steps on all three nodes unless noted otherwise)


(1) Change the hostname
(2) Configure the IP address
(3) Configure the /etc/hosts file
(4) Set up passwordless SSH login
(5) Install the JDK
(6) Disable the firewall
(7) Disable SELinux
(8) Set the vm.swappiness kernel parameter
(9) Create the hadoop user and directories

(1) Change the hostname:


[root@master ~]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master        <=== use master, slave1, or slave2 to match each node
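
The new name is only picked up at the next boot; to apply it to the running session too, a quick supplementary step:

[root@master ~]# hostname master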


(2) Configure the IP address


[root@master ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR=192.168.123.10        <=== .10 on master, .11 on slave1, .12 on slave2
[root@master ~]# service network restart
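
A quick check that the static address took effect:

[root@master ~]# ip addr show eth0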


(3) Configure the /etc/hosts file

[root@master ~]# vi /etc/hosts
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.123.10 master
192.168.123.11 slave1
192.168.123.12 slave2
192.168.123.13 hadoop04  <=== reserved for adding a future Hadoop node; not used yet
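
Every node needs the same /etc/hosts. A minimal sketch for pushing it out and checking name resolution, assuming root SSH access to the slaves:

[root@master ~]# scp /etc/hosts root@slave1:/etc/hosts
[root@master ~]# scp /etc/hosts root@slave2:/etc/hosts
[root@master ~]# ping -c 1 slave1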


(4) Set up passwordless SSH login

As the hadoop user on every node, generate a key pair and authorize it locally:

su - hadoop

ssh-keygen -t rsa -P ''
cat .ssh/id_rsa.pub >> .ssh/authorized_keys

On each slave, copy the public key to master:

scp .ssh/id_rsa.pub hadoop@master:/home/hadoop/id_rsa_03.pub

On master, merge the keys and push the combined file back to the slaves:

cat id_rsa_03.pub >> .ssh/authorized_keys
scp .ssh/authorized_keys hadoop@slave1:/home/hadoop/.ssh/authorized_keys
scp .ssh/authorized_keys hadoop@slave2:/home/hadoop/.ssh/authorized_keys
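
sshd is strict about key file permissions; if logins still prompt for a password, tighten them and test:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh slave1 hostname        <=== should print "slave1" without a password prompt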


(5) Install the JDK


Install JDK 1.7 or later; jdk1.7.0_79 is used here.
Download: http://www.oracle.com/technetwork/java/javase/downloads/index.html


1. Download jdk-7u79-linux-x64.gz and unpack it to /usr/java/jdk1.7.0_79.
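
For example, a minimal sketch assuming the archive sits in the current directory:

mkdir -p /usr/java
tar -xzf jdk-7u79-linux-x64.gz -C /usr/java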


2. Add the following to /root/.bash_profile:

export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH


3. Reload the profile so the variables take effect: # source ~/.bash_profile


4. Verify the installation: # java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)


(6) Disable the firewall


[root@master ~]# chkconfig iptables off
[root@master ~]# /etc/init.d/iptables stop
[root@master ~]# /etc/init.d/iptables status
iptables: Firewall is not running.
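
On CentOS 6 the IPv6 firewall is a separate service; if it is running, it can be disabled the same way:

[root@master ~]# chkconfig ip6tables off
[root@master ~]# /etc/init.d/ip6tables stop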


(7) Disable SELinux


[root@master ~]# vi /etc/selinux/config
SELINUX=disabled


[root@master ~]# setenforce 0  <=== takes effect without a reboot
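
A quick check of the runtime state:

[root@master ~]# getenforce
Permissive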


(8) Set the vm.swappiness kernel parameter


[root@master ~]# vi /etc/sysctl.conf
vm.swappiness = 0
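
The file is only read at boot; to apply the value immediately:

[root@master ~]# sysctl -w vm.swappiness=0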


(9) Create the hadoop user and directories

su - root

useradd hadoop
passwd hadoop
usermod -g root hadoop

su - hadoop
mkdir -p /home/hadoop/dfs/name       <=== adjust the paths to your own layout (note: the hdfs-site.xml below points at /opt/hadoop/dfs instead)
mkdir -p /home/hadoop/dfs/data
mkdir -p /home/hadoop/tmp

II. Install Hadoop


(1) Unpack the tarball: tar -xzvf hadoop-2.6.0.tar.gz


(2) Move it to the target directory


mv hadoop-2.6.0 /opt/
cd /opt
ln -s hadoop-2.6.0 hadoop
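
A few follow-ups are assumed here: give the hadoop user ownership of the tree, create a symlink so the ~/hadoop/... paths used below resolve, and put the Hadoop binaries on the hadoop user's PATH so the hdfs and start-dfs.sh/start-yarn.sh commands used later can be found:

chown -R hadoop:hadoop /opt/hadoop-2.6.0
ln -s /opt/hadoop /home/hadoop/hadoop        <=== makes the ~/hadoop paths below valid

Append to /home/hadoop/.bash_profile, then run source ~/.bash_profile:

export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH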


(3) Edit the configuration files


~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml


1)~/hadoop/etc/hadoop/hadoop-env.sh
# The java implementation to use.

export JAVA_HOME=/usr/java/default        <=== or /usr/java/jdk1.7.0_79 if the "default" symlink does not exist


2)~/hadoop/etc/hadoop/yarn-env.sh
# some Java parameters
export JAVA_HOME=/usr/java/default


3)~/hadoop/etc/hadoop/slaves
slave1
slave2


4)~/hadoop/etc/hadoop/core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
 </property>
 <property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
 </property>
 <property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>


5)~/hadoop/etc/hadoop/hdfs-site.xml
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:9001</value>
 </property>


  <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/opt/hadoop/dfs/name</value>
 </property>


 <property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/opt/hadoop/dfs/data</value>
  </property>


 <property>
  <name>dfs.replication</name>
  <value>2</value>
 </property>


 <property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
 </property>


6)~/hadoop/etc/hadoop/mapred-site.xml
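Note: the 2.6.0 tarball ships only mapred-site.xml.template; if mapred-site.xml is missing, create it first:

cp ~/hadoop/etc/hadoop/mapred-site.xml.template ~/hadoop/etc/hadoop/mapred-site.xml
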
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
 </property>
 <property>
  <name>mapreduce.jobhistory.address</name>
  <value>master:10020</value>
 </property>
 <property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>master:19888</value>
 </property>


7)~/hadoop/etc/hadoop/yarn-site.xml
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
  </property>
  <property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
   <name>yarn.resourcemanager.address</name>
   <value>master:8032</value>
  </property>
  <property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master:8030</value>
  </property>
  <property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master:8035</value>
  </property>
  <property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master:8033</value>
  </property>
  <property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master:8088</value>
  </property>


(4) Copy the Hadoop installation directory to the slave1 and slave2 nodes
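
A minimal sketch of the copy, assuming the hadoop user can write to /opt on the slaves (otherwise run it as root and chown afterwards):

scp -r /opt/hadoop-2.6.0 hadoop@slave1:/opt/
scp -r /opt/hadoop-2.6.0 hadoop@slave2:/opt/
ssh slave1 "ln -s /opt/hadoop-2.6.0 /opt/hadoop"
ssh slave2 "ln -s /opt/hadoop-2.6.0 /opt/hadoop"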

III. Format the NameNode and Start the Cluster


[hadoop@master ~]$ hdfs namenode -format


17/03/16 03:00:29 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/03/16 03:00:29 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/03/16 03:00:29 INFO util.GSet: VM type       = 64-bit
17/03/16 03:00:29 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/03/16 03:00:29 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/03/16 03:00:29 INFO namenode.NNConf: ACLs enabled? false
17/03/16 03:00:29 INFO namenode.NNConf: XAttrs enabled? true
17/03/16 03:00:29 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/03/16 03:00:29 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1977394629-192.168.123.10-1489647629240
17/03/16 03:00:29 INFO common.Storage: Storage directory /opt/hadoop/dfs/name has been successfully formatted.  <=== indicates the format succeeded
17/03/16 03:00:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/03/16 03:00:30 INFO util.ExitUtil: Exiting with status 0
17/03/16 03:00:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.123.10
************************************************************/


[hadoop@master ~]$ start-dfs.sh
17/03/16 04:07:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-master.out
17/03/16 04:08:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@master ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /opt/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /opt/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-slave2.out
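
A quick sanity check with jps shows which daemons came up on each node:

[hadoop@master ~]$ jps        <=== expect NameNode, SecondaryNameNode, ResourceManager
[hadoop@slave1 ~]$ jps        <=== expect DataNode, NodeManager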


[hadoop@master ~]$ hdfs dfsadmin -report
17/03/16 04:12:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 50395455488 (46.93 GB)
Present Capacity: 36794449920 (34.27 GB)
DFS Remaining: 36794400768 (34.27 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0


-------------------------------------------------
Live datanodes (2):


Name: 192.168.123.11:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 25197727744 (23.47 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6842331136 (6.37 GB)
DFS Remaining: 18355372032 (17.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 72.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 16 04:12:46 EDT 2017


Name: 192.168.123.12:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 25197727744 (23.47 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6758674432 (6.29 GB)
DFS Remaining: 18439028736 (17.17 GB)
DFS Used%: 0.00%
DFS Remaining%: 73.18%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu Mar 16 04:12:46 EDT 2017


[hadoop@master ~]$


HDFS NameNode web UI: http://master:50070

(screenshot: HDFS NameNode web UI)

YARN ResourceManager web UI: http://master:8088

(screenshot: YARN ResourceManager web UI)
