HBase 1.2.4 Deployment
1. Upload hbase-1.2.4-bin.tar.gz and extract it
# mkdir /root/Hbase
# cd /root/Hbase
# rz
In the "Open" dialog that pops up, select the previously downloaded hbase-1.2.4-bin.tar.gz on the host machine.
# tar -xzvf hbase-1.2.4-bin.tar.gz
2. Configure /etc/profile
# vim /etc/profile
Append the following at the end of the file:
export HBASE_HOME=/root/Hbase/hbase-1.2.4
export PATH=$PATH:$HBASE_HOME/bin
# source /etc/profile
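A quick sanity check that the profile entries took effect; the two export lines from /etc/profile are repeated here so the snippet runs on its own:

```shell
# Same two lines as added to /etc/profile above
export HBASE_HOME=/root/Hbase/hbase-1.2.4
export PATH=$PATH:$HBASE_HOME/bin

echo "$HBASE_HOME"                       # prints /root/Hbase/hbase-1.2.4
case ":$PATH:" in                        # confirm bin dir is on PATH
    *":$HBASE_HOME/bin:"*) echo "PATH ok" ;;
    *) echo "PATH missing $HBASE_HOME/bin" ;;
esac
```

Once HBase is unpacked, `hbase version` should also resolve from any directory.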
3. Update the lib directory
# cd /root/Hbase/hbase-1.2.4/lib
Run the following commands (Hadoop is installed under ~/Hadoop, matching the paths used in steps 4 and 7):
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/tools/lib/hadoop-auth-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar .
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar .
Fix java.lang.NoClassDefFoundError: org/htrace/Trace:
# cp ~/Hadoop/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.0.4.jar .
Delete the old-version jars:
# rm *-2.5.1.jar
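The cp commands above can be collapsed into one loop. `copy_hadoop_jars` is a hypothetical helper (not part of HBase or Hadoop): it searches the Hadoop share tree for each client jar, copies it into the HBase lib directory, and removes the stale 2.5.1 jars:

```shell
copy_hadoop_jars() {
    local hadoop_share="$1" hbase_lib="$2" jar
    for jar in hadoop-annotations hadoop-auth hadoop-common hadoop-hdfs \
               hadoop-mapreduce-client-app hadoop-mapreduce-client-common \
               hadoop-mapreduce-client-core hadoop-mapreduce-client-jobclient \
               hadoop-mapreduce-client-shuffle hadoop-yarn-api hadoop-yarn-client \
               hadoop-yarn-common hadoop-yarn-server-common htrace-core; do
        # copy the jar from whichever subdirectory holds it (lib/, tools/lib/, ...),
        # skipping the -tests and -sources variants
        find "$hadoop_share" -name "${jar}-*.jar" \
            ! -name '*tests.jar' ! -name '*sources.jar' \
            -exec cp {} "$hbase_lib/" \;
    done
    rm -f "$hbase_lib"/*-2.5.1.jar   # drop the old jars HBase shipped with
}
# usage: copy_hadoop_jars /root/Hadoop/hadoop-2.7.3/share/hadoop /root/Hbase/hbase-1.2.4/lib
```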
4. Configure hbase-env.sh
# vim /root/Hbase/hbase-1.2.4/conf/hbase-env.sh
Edit the following settings:
export JAVA_HOME=/usr/java/jdk1.8.0_112/
export HBASE_MANAGES_ZK=true   # use the ZooKeeper bundled with HBase
export HBASE_CLASSPATH=/root/Hadoop/hadoop-2.7.3/etc/hadoop
Comment out the two lines below (JDK 8 no longer supports the PermGen options):
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
5. Configure hbase-site.xml
# vim /root/Hbase/hbase-1.2.4/conf/hbase-site.xml
Add the following between <configuration> and </configuration>:
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>HDFS location where HBase stores its data</description>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>run in fully distributed mode</description>
</property>
<property>
    <name>hbase.tmp.dir</name>
    <value>/root/Hbase/hbase-1.2.4/tmp</value>
</property>
<property>
    <name>hbase.master</name>
    <value>master:60000</value>
    <description>HBase cluster master node</description>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>slave1,slave2,slave3</value>
    <description>hostnames of the ZooKeeper quorum</description>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/root/Hbase/hbase-1.2.4/zookeeper_data</value>
</property>
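The tmp and ZooKeeper data directories named above are plain local paths and are not guaranteed to exist; creating them up front on every node avoids startup surprises. `make_hbase_dirs` is a small hypothetical helper:

```shell
make_hbase_dirs() {
    # create the hbase.tmp.dir and hbase.zookeeper.property.dataDir paths
    # relative to the HBase install directory passed as $1
    mkdir -p "$1/tmp" "$1/zookeeper_data"
}
# run on each node: make_hbase_dirs /root/Hbase/hbase-1.2.4
```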
6. Configure regionservers
# vim $HBASE_HOME/conf/regionservers
Set the file contents to:
master
slave1
slave2
slave3
Note: master must be listed here, otherwise start-hbase.sh fails.
7. Copy HBase to the other nodes
Put Hadoop's hdfs-site.xml and core-site.xml into hbase/conf:
# cp /root/Hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml /root/Hbase/hbase-1.2.4/conf/
# cp /root/Hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml /root/Hbase/hbase-1.2.4/conf/
# scp -r ~/Hbase/hbase-1.2.4 slave1:~/Hbase
# scp -r ~/Hbase/hbase-1.2.4 slave2:~/Hbase
# scp -r ~/Hbase/hbase-1.2.4 slave3:~/Hbase
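The three scp lines can be written as one loop. `push_hbase` is a hypothetical helper; it assumes passwordless ssh from the master to each slave, as already set up for Hadoop:

```shell
push_hbase() {
    local src="$1"; shift
    local node
    for node in "$@"; do
        scp -r "$src" "$node:~/Hbase"   # copy the whole install tree
    done
}
# usage on master: push_hbase ~/Hbase/hbase-1.2.4 slave1 slave2 slave3
```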
8. Start and verify
Start HBase:
# start-hbase.sh
Open the HMaster web UI in a browser (the master was configured as master:60000 above, so its UI runs on the master host):
http://master:16010
HRegionServer web UIs (one per host listed in regionservers):
http://master:16030
http://slave1:16030
http://slave2:16030
http://slave3:16030
Verify from the shell:
# hbase shell
Verify with list:
hbase(main):001:0> list
Verify by creating a table:
hbase(main):001:0> create 'user','name','sex'
9. Create the bigdata table
hbase(main):017:0> create 'bigdata', {NAME => 'labels', VERSIONS => 2}, {NAME => 'fields', VERSIONS => 2}
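To see the effect of VERSIONS => 2, a quick check from the shell (requires the running cluster from step 8; the row key 'row1' and the values here are made up for illustration):

```
hbase(main):018:0> put 'bigdata', 'row1', 'labels:tag', 'v1'
hbase(main):019:0> put 'bigdata', 'row1', 'labels:tag', 'v2'
hbase(main):020:0> get 'bigdata', 'row1', {COLUMN => 'labels:tag', VERSIONS => 2}
```

The get should return both cell versions, since the labels family keeps two.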
10. Problems encountered and solutions
10.1 Running start-hbase.sh prints the warning Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m
This happens because the JDK in use is 1.8 (here jdk1.8.0_65), which no longer supports the PermGen options.
$ vim $HBASE_HOME/conf/hbase-env.sh
Comment out the following two lines:
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
10.2 Shortly after HBase starts, HMaster and HRegionServer shut themselves down
Cause 1: the firewall was not disabled. See Section 2 of this chapter for the fix.
Cause 2: the system clocks on the nodes are out of sync. Fix: set the time, e.g. # date -s 00:00:00
or run the following commands in order and follow the prompts:
$ tzselect
$ TZ='Asia/Shanghai'; export TZ
$ clock --set --date="08/31/17 14:37"
$ clock --hctosys
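Before restarting, it helps to compare the clocks across nodes, since HBase rejects regionservers whose clocks drift too far from the master's. `show_clocks` is a hypothetical helper (assumes passwordless ssh to each node given as an argument):

```shell
show_clocks() {
    # print the local node first, then each remote node, as epoch seconds
    printf '%-10s %s\n' "$(uname -n)" "$(date +%s)"
    local node
    for node in "$@"; do
        printf '%-10s %s\n' "$node" "$(ssh "$node" date +%s)"
    done
}
# usage on master: show_clocks slave1 slave2 slave3
```

Large differences between the printed values point to the node whose clock needs fixing.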
10.3 Running hbase shell prints the following warning:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/Hbase/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/Hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
The slf4j-log4j binding is present twice. Fix: delete hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar. Note that if you instead delete $HADOOP_HOME/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar, starting and stopping HDFS will print a warning of its own.
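A small helper to list every slf4j-log4j12 binding under a directory, so you can confirm exactly which duplicate jar to delete (`find_slf4j` is a hypothetical name, not a standard tool):

```shell
find_slf4j() {
    # list all slf4j-log4j12 jars under the directory given as $1
    find "$1" -name 'slf4j-log4j12-*.jar' 2>/dev/null
}
# usage: find_slf4j /root/Hbase/hbase-1.2.4/lib
#        find_slf4j /root/Hadoop/hadoop-2.7.3/share/hadoop/common/lib
```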
10.4 After starting HBase, hbase(main):001:0> list fails with org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
Quit the shell, wait a moment, and run hbase shell again; once the master finishes initializing the command succeeds.
10.5 After starting HBase, hbase(main):001:0> list fails with *** Server is not running yet
Fix: HDFS is still in safe mode; leave safe mode and retry:
# stop-hbase.sh
# hadoop dfsadmin -safemode leave
# hbase shell