Hadoop HA Deployment
Bought three machines.
I figured I could get this done in an hour;
in the end I worked into the early morning, the servers kept auto-logging me out, and it still wasn't all finished.
So here is what needs to be learned and understood from this.
Why is there never a final, once-and-for-all solution in programming? As the problems keep changing and getting more complex,
the old solutions can no longer solve them.
Every step has to be understood at the level of principle; without that understanding, all you can do is copy mechanically.
Over the whole process:
1. HA has never actually been used in production here; production has always run in single-point (single-NameNode) mode.
Differences from the earlier (single-node) installation:
1. Batch operations across multiple terminal windows.
Pro: it is fast. Con: tab completion stops working.
I can't imagine how many problems would come up if 30 machines were being configured and operated on at the same time;
at that point getting the configuration right in one pass is almost impossible, and you basically check and
verify step by step. (A looped-ssh alternative is sketched below.)
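As an alternative to broadcasting keystrokes into several windows, here is a minimal sketch of running one command on every node from a single window, assuming passwordless ssh from hadoop001 to all three hosts (the command in cmd is only an illustration):

# run the same command on all three nodes from one window,
# instead of typing it into three terminals in parallel
cmd="hostname && jps"          # any command; hostname/jps is just an example
for host in hadoop001 hadoop002 hadoop003; do
  echo "==== $host ===="
  ssh hadoop@"$host" "$cmd"
done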
2. The nn1/nn2 (two-NameNode) layout.
This was my first contact with the JN (JournalNode) concept; its main job is to synchronize the edit logs between the NameNodes.
The ZKFC's main job is to send the NN's heartbeat to ZooKeeper.
The NameNode's ZKFC is a separate process,
while the ResourceManager's failover controller is a thread inside the RM itself (see the jps sketch below).
What makes multithreaded programming hard is precisely that you cannot see the threads, the way you can see the ZKFC process.
The multithreaded answer is to solve the problem concurrently,
and big data is, at bottom, multithreading.
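A quick way to see that process-versus-thread difference on this cluster (a sketch; run as the hadoop user on a NameNode host such as hadoop001):

# HDFS side: the failover controller is its own JVM, visible in jps
jps
#   xxxx NameNode
#   xxxx DFSZKFailoverController    <- separate ZKFC process
# YARN side: there is no separate controller process; the elector runs as a
# thread inside the ResourceManager JVM, so jps only shows
#   xxxx ResourceManager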
ZooKeeper's main job is to vote on which node gets to be the leader (active).
In production, 7 ZooKeeper nodes are used;
JournalNodes: 3-5 in production. (Which side is active can be checked with the commands below.)
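To check which side actually became active after startup, a minimal sketch, assuming the NameNode IDs are nn1/nn2 and the RM IDs are rm1/rm2 (use whatever dfs.ha.namenodes.* and yarn.resourcemanager.ha.rm-ids are set to in the configs):

# which NameNode is active / standby (nn1, nn2 are assumed IDs)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# which ResourceManager is active (rm1, rm2 are assumed IDs)
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
# which ZooKeeper node is the leader / follower
zkServer.sh status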
Note:
DNs report (heartbeats and block reports) to both NameNodes, active and standby.
The AM only reports back to the active RM.
Did I really not remember this part? It needs another review.
Note: the JournalNodes must be started before hdfs namenode -format can be executed.
Startup order:
Follow the standard sequence. Follow the standard sequence. Follow the standard sequence.
(Important enough to say three times; a sketch of the first-time sequence is below.)
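A minimal sketch of the first-time startup order for this layout (ZooKeeper and JournalNodes on all three hosts, NameNodes on hadoop001/hadoop002; which host runs which one-time step is an assumption read off the transcript below, and the format/bootstrap/formatZK steps are for the very first start only):

# 1. ZooKeeper -- run on hadoop001, hadoop002, hadoop003
zkServer.sh start
# 2. JournalNodes -- run on hadoop001, hadoop002, hadoop003, BEFORE formatting
hadoop-daemon.sh start journalnode
# 3. format and start the first NameNode (hadoop001), first time only
hdfs namenode -format
hadoop-daemon.sh start namenode
# 4. copy the metadata to the second NameNode (hadoop002), first time only
hdfs namenode -bootstrapStandby
# 5. initialize the HA state in ZooKeeper, first time only (once, on hadoop001)
hdfs zkfc -formatZK
# 6. from now on the normal scripts bring everything up
start-dfs.sh                               # NN + DN + JN + ZKFC
start-yarn.sh                              # RM + NMs, run on hadoop001
yarn-daemon.sh start resourcemanager       # standby RM, run on hadoop002
mr-jobhistory-daemon.sh start historyserver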
So:
pay attention to the final verification and to actually solving the problems that turn up,
and pay attention to analyzing the error messages. Analyze, analyze, analyze.
Skipping over the English in error messages is a bad habit;
I used to just throw the raw error text at a search engine.
Pay attention to checking the logs (a small sketch follows).
Working a problem through yourself straightens out your thinking,
and the next time something similar appears you can reason from it by analogy.
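A small sketch of the log-checking habit on this install (the paths come from the transcript; grepping for ERROR/Exception is just one reasonable filter):

# the daemon logs live under $HADOOP_HOME/logs on each node
cd /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs
ls -lrt                                   # most recently written log last
tail -200 hadoop-hadoop-namenode-hadoop001.log
# pull out the lines that actually explain a failed start
grep -nE "ERROR|Exception" hadoop-hadoop-namenode-hadoop001.log | tail -20

The startup transcript from the three windows follows.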
Last login: Mon Nov 26 23:50:07 2018 from 223.78.248.18
Welcome to Alibaba Cloud Elastic Compute Service !
[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop001 bin]$ cd app/zookeeper-3.4.6/bin/
-bash: cd: app/zookeeper-3.4.6/bin/: No such file or directory
[hadoop@hadoop001 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop001 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop001 bin]$ cd ..
[hadoop@hadoop001 zookeeper-3.4.6]$ cd ..
[hadoop@hadoop001 app]$ cd hadoop-2.6.0-cdh5.7.0/sbin/
[hadoop@hadoop001 sbin]$ ./start-dfs.sh
18/11/26 23:58:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop003: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop003.out
hadoop001: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
hadoop002: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop002.out
18/11/26 23:58:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
[hadoop@hadoop001 sbin]$ start-yarn.sh
-bash: start-yarn.sh: command not found
[hadoop@hadoop001 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
[hadoop@hadoop001 sbin]$ ./yarn-daemon.sh start resourcemanager
resourcemanager running as process 5340. Stop it first.
[hadoop@hadoop001 sbin]$ ./$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
-bash: .//home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/mr-jobhistory-daemon.sh: No such file or directory
[hadoop@hadoop001 sbin]$ cd
[hadoop@hadoop001 ~]$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/mapred-hadoop-historyserver-hadoop001.out
[hadoop@hadoop001 ~]$ hdfs dfsadmin -report
18/11/27 00:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 126418354176 (117.74 GB)
Present Capacity: 111909064704 (104.22 GB)
DFS Remaining: 111908978688 (104.22 GB)
DFS Used: 86016 (84 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Live datanodes (3):
Name: 172.26.252.120:50010 (hadoop003)
Hostname: hadoop003
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4835282944 (4.50 GB)
DFS Remaining: 37304139776 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.53%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:09 CST 2018
Name: 172.26.252.118:50010 (hadoop001)
Hostname: hadoop001
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4837081088 (4.50 GB)
DFS Remaining: 37302341632 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:07 CST 2018
Name: 172.26.252.119:50010 (hadoop002)
Hostname: hadoop002
Decommission Status : Normal
Configured Capacity: 42139451392 (39.25 GB)
DFS Used: 28672 (28 KB)
Non DFS Used: 4836925440 (4.50 GB)
DFS Remaining: 37302497280 (34.74 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Nov 27 00:00:09 CST 2018
Last login: Mon Nov 26 23:56:15 2018 from 223.78.248.18
Welcome to Alibaba Cloud Elastic Compute Service !
[root@hadoop003 ~]# su - hadoop
[hadoop@hadoop003 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop003 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop003 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop003 bin]$
Last login: Mon Nov 26 23:54:41 2018 from 223.78.248.18
Welcome to Alibaba Cloud Elastic Compute Service !
[root@hadoop002 ~]# su - hadoop
[hadoop@hadoop002 ~]$ cd app/zookeeper-3.4.6/bin/
[hadoop@hadoop002 bin]$ zkServer.sh stop
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
[hadoop@hadoop002 bin]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop002 bin]$ cd ..
[hadoop@hadoop002 zookeeper-3.4.6]$ cd ..
[hadoop@hadoop002 app]$ cd sb
-bash: cd: sb: No such file or directory
[hadoop@hadoop002 app]$ cd hadoop-2.6.0-cdh5.7.0/sbin/
[hadoop@hadoop002 sbin]$ ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop002.out
[hadoop@hadoop002 sbin]$
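As a final verification of what is running where (a sketch; the per-host lists are read off the startup output above, with the assumption that the second ZooKeeper window was hadoop003, and jps is run as the hadoop user):

# expected on hadoop001: NameNode, DataNode, JournalNode, DFSZKFailoverController,
#                        QuorumPeerMain, ResourceManager, NodeManager, JobHistoryServer
# expected on hadoop002: NameNode, DataNode, JournalNode, DFSZKFailoverController,
#                        QuorumPeerMain, ResourceManager (standby), NodeManager
# expected on hadoop003: DataNode, JournalNode, QuorumPeerMain, NodeManager
for host in hadoop001 hadoop002 hadoop003; do
  echo "==== $host ===="
  ssh hadoop@"$host" jps
done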