Hadoop 3.2 Study - 3. Fully Distributed Installation
1. Edit hadoop-env.sh
# JDK installation path
export JAVA_HOME=/usr/local/jdk1.8.0_161
# Hadoop 3 refuses to start the HDFS daemons as root unless these users are declared
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
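hadoop-env.sh sits in $HADOOP_HOME/etc/hadoop/. After editing, a quick sanity check (assuming Hadoop is unpacked at /opt/hadoop-3.2.0, as in step 6):

grep -E '^export (JAVA_HOME|HDFS_)' /opt/hadoop-3.2.0/etc/hadoop/hadoop-env.sh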
2. Edit core-site.xml
<configuration>
  <property>
    <!-- NameNode RPC address; 9820 is the Hadoop 3 default port -->
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9820</value>
  </property>
  <property>
    <!-- base directory for HDFS data and metadata -->
    <name>hadoop.tmp.dir</name>
    <value>/opt/peseudo</value>
  </property>
</configuration>
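Once hdfs is on the PATH (step 6), hdfs getconf reports the values Hadoop actually picked up, which catches typos early:

hdfs getconf -confKey fs.defaultFS     # expect hdfs://hadoop-master:9820
hdfs getconf -confKey hadoop.tmp.dir   # expect /opt/peseudo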
3. Edit hdfs-site.xml
<configuration>
  <property>
    <!-- number of replicas kept for each block -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <!-- run the SecondaryNameNode on hadoop1; 9868 is the Hadoop 3 default port -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:9868</value>
  </property>
</configuration>
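The same kind of check works for this file:

hdfs getconf -confKey dfs.replication   # expect 2
hdfs getconf -secondaryNameNodes        # expect hadoop1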
4. Edit workers (one DataNode host per line)
hadoop1
hadoop2
hadoop3
hadoop4
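workers also lives in $HADOOP_HOME/etc/hadoop/; start-dfs.sh starts a DataNode on every host listed there. A one-shot way to write it (a sketch; adjust the hostnames to your cluster):

cat > /opt/hadoop-3.2.0/etc/hadoop/workers <<'EOF'
hadoop1
hadoop2
hadoop3
hadoop4
EOF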
5. Distribute hadoop-3.2.0 to the other machines
scp -r hadoop-3.2.0/ hadoop1:`pwd`
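The command above copies to hadoop1 only; a small loop covers all four workers (run from /opt on the master):

for h in hadoop1 hadoop2 hadoop3 hadoop4; do
  scp -r hadoop-3.2.0/ $h:`pwd`
done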
6. Add Hadoop environment variables to /etc/profile
export HADOOP_HOME=/opt/hadoop-3.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
7. Distribute the configuration file
scp /etc/profile hadoop1:/etc/
Make the configuration take effect:
source /etc/profile
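As with the Hadoop directory, the scp above only reached hadoop1, and source only affects the current shell, so copy the file to every node and source it (or log in again) on each one:

for h in hadoop1 hadoop2 hadoop3 hadoop4; do scp /etc/profile $h:/etc/; done

Afterwards, hadoop version is a quick way to confirm the PATH is right on any node.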
8. Format the NameNode (run once, on hadoop-master only)
hdfs namenode -format
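A successful format creates the NameNode metadata directory under hadoop.tmp.dir; the VERSION file there records the newly generated clusterID:

cat /opt/peseudo/dfs/name/current/VERSION

Re-running the format would generate a new clusterID that no longer matches the DataNodes, which is why this step is done only once.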
9. Start HDFS
start-dfs.sh
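start-dfs.sh reaches every node over SSH, so passwordless SSH from the master to each host must already be in place. Once it returns, jps shows which daemons came up: hadoop-master should show NameNode, hadoop1 should show DataNode plus SecondaryNameNode, and hadoop2 through hadoop4 should each show DataNode.

jps   # run on each node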
10. Test
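A minimal smoke test (the /test path and the uploaded file are arbitrary choices): create a directory, upload a file, and read it back.

hdfs dfs -mkdir -p /test
hdfs dfs -put /etc/profile /test
hdfs dfs -ls /test
hdfs dfs -cat /test/profile

hdfs fsck /test -files -blocks should report each block with replication 2, and the NameNode web UI at http://hadoop-master:9870 shows the live DataNodes.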