Spark cluster configuration
1. Configure the Java environment.
2. Configure the SPARK_HOME environment variables:
vi /etc/profile
export PATH=$PATH:/opt/gradle/bin
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
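A quick way to confirm the profile changes took effect is a minimal sketch like the following (it sets the same variables as /etc/profile and assumes Spark is unpacked at /opt/spark):

```shell
# Set the variables as in /etc/profile, then check the Spark dirs are on PATH
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
case ":$PATH:" in
  *":$SPARK_HOME/bin:"*) echo "spark bin on PATH" ;;
  *)                     echo "spark bin missing" ;;
esac
```

In an interactive shell, `source /etc/profile` achieves the same without logging out.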
3. Edit spark-env.sh and add:
SPARK_MASTER_HOST=192.168.0.130
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=9999
SPARK_WORKER_PORT=10001
SPARK_WORKER_WEBUI_PORT=9998
SPARK_WORKER_CORES=2
SPARK_WORKER_INSTANCES=1
SPARK_WORKER_MEMORY=2g
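On a fresh install conf/spark-env.sh may not exist yet (Spark ships only spark-env.sh.template). A sketch of applying all eight settings in one step, assuming you are in /opt/spark/conf:

```shell
# Append the eight settings above to spark-env.sh in the current directory
# (intended to be run inside /opt/spark/conf)
cat >> spark-env.sh <<'EOF'
SPARK_MASTER_HOST=192.168.0.130
SPARK_MASTER_PORT=7077
SPARK_MASTER_WEBUI_PORT=9999
SPARK_WORKER_PORT=10001
SPARK_WORKER_WEBUI_PORT=9998
SPARK_WORKER_CORES=2
SPARK_WORKER_INSTANCES=1
SPARK_WORKER_MEMORY=2g
EOF
```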
4. Extend the classpath with the JDBC driver (skip this step if you do not need to read data from MySQL):
vi spark-defaults.conf
spark.driver.extraClassPath /opt/spark/lib/mysql-connector-java-5.1.27.jar
spark.executor.extraClassPath /opt/spark/lib/mysql-connector-java-5.1.27.jar
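Before restarting the cluster it is worth confirming that the jar actually exists at the path the config points to. A small check (`check_jar` is a hypothetical helper; the path is copied from the config above):

```shell
# check_jar prints whether the jar exists at the given path
check_jar() {
  if [ -f "$1" ]; then echo "jar found: $1"; else echo "jar missing: $1"; fi
}
check_jar /opt/spark/lib/mysql-connector-java-5.1.27.jar
```

As an alternative to editing spark-defaults.conf, the connector can also be supplied per job via spark-submit's `--jars` option.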
5. Edit the conf/slaves file
Add the hostnames of the slave nodes:
slave01
slave02
slave03
slave04
slave05
slave06
slave07
Map each hostname to its IP in /etc/hosts on every node:
vi /etc/hosts
192.168.0.130 master slave01
192.168.0.131 slave02
192.168.0.132 slave03
192.168.0.133 slave04
192.168.0.211 slave05
192.168.0.212 slave06
192.168.0.213 slave07
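spark-env.sh, slaves, and /etc/hosts must be identical on every node. A dry-run sketch of pushing the conf directory to the workers (hostnames taken from the list above; drop the leading `echo` to actually copy, assuming root SSH access):

```shell
# Dry run: print one scp command per worker; remove `echo` to really copy
for host in slave02 slave03 slave04 slave05 slave06 slave07; do
  echo scp -r /opt/spark/conf "root@$host:/opt/spark/"
done
```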
6. Set up passwordless SSH from the master node to the other nodes
1. Run ssh-keygen -t rsa to generate a public/private key pair; press Enter at every prompt to accept the defaults and an empty passphrase.
2. cd /root/.ssh/
cp id_rsa.pub authorized_keys
ssh-copy-id -i 192.168.0.132
Enter the login password of the target host when prompted.
Done! Test with: ssh 192.168.0.132
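The ssh-copy-id step has to be repeated for every worker. A dry-run loop over the worker IPs from /etc/hosts above (drop the leading `echo` to run for real; each host prompts once for its root password):

```shell
# Dry run: print one ssh-copy-id command per worker; remove `echo` to run them
for ip in 192.168.0.131 192.168.0.132 192.168.0.133 \
          192.168.0.211 192.168.0.212 192.168.0.213; do
  echo ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$ip"
done
```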
7. Start the Spark cluster with sbin/start-all.sh. It is best to disable the firewall on all nodes first, or the ports above may be unreachable.
8. Visit the master web UI at http://192.168.0.130:9999/