Spark Exercise - Submitting a Job to the Cluster (submit job via cluster)

Created by Wang, Jerry, last modified on Sep 12, 2015

Start the master with start-master.sh (in the sbin folder).

Then check the process with ps aux:
7334 5.6 0.6 1146992 221652 pts/0 Sl 12:34 0:05 /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/sbin/../conf/:/root/devExpert/spar
Monitor the master node via its web UI: http://10.128.184.131:8080
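Besides the web UI, the standalone master also serves the same status as JSON under /json on the web UI port (true at least for the Spark 1.x line). A minimal Java sketch of a status poller, assuming the master address above; the class name MasterStatusPoller is made up for illustration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Polls the standalone master's JSON status endpoint and prints the raw
// response (workers, cores, memory, running applications, ...).
// Assumption: the master web UI runs on 10.128.184.131:8080 as above.
public class MasterStatusPoller {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://10.128.184.131:8080/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}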
Start two workers:

./spark-class org.apache.spark.deploy.worker.Worker spark://NKGV50849583FV1:7077 (in the bin folder)
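The spark://NKGV50849583FV1:7077 URL that the workers register with is the same master URL an application later connects to. As a side note, a minimal Java sketch (with an illustrative class name) that sets the master programmatically instead of via --master:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Connects to the standalone master directly from application code.
// Assumption: the cluster above (master NKGV50849583FV1:7077) is reachable.
public class ConnectToStandaloneMaster {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("ConnectToStandaloneMaster")
                .setMaster("spark://NKGV50849583FV1:7077"); // same URL the workers use
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Trivial sanity check that the executors granted by the master do some work.
        long n = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
        System.out.println("count = " + n);

        sc.stop();
    }
}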


Submit the job to the cluster:

./spark-submit --class "org.apache.spark.examples.JavaWordCount" --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
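The class submitted here is the standard JavaWordCount example; the following is a simplified sketch of its logic, based on the example shipped with Spark 1.4.x (Java-7-style anonymous classes, matching the jdk1.7.0_79 runtime used on this machine):

import java.util.Arrays;

import scala.Tuple2;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

public final class JavaWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("JavaWordCount");
        JavaSparkContext ctx = new JavaSparkContext(conf);

        // args[0] is the input file, e.g. /root/devExpert/spark-1.4.1/bin/test.txt
        JavaRDD<String> lines = ctx.textFile(args[0], 1);

        // Split each line into words (the 1.x FlatMapFunction returns an Iterable).
        JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public Iterable<String> call(String s) {
                return Arrays.asList(s.split(" "));
            }
        });

        // Classic map -> reduceByKey word count.
        JavaPairRDD<String, Integer> counts = words
            .mapToPair(new PairFunction<String, String, Integer>() {
                @Override
                public Tuple2<String, Integer> call(String s) {
                    return new Tuple2<String, Integer>(s, 1);
                }
            })
            .reduceByKey(new Function2<Integer, Integer, Integer>() {
                @Override
                public Integer call(Integer a, Integer b) {
                    return a + b;
                }
            });

        // Bring the results back to the driver and print them.
        for (Tuple2<String, Integer> tuple : counts.collect()) {
            System.out.println(tuple._1() + ": " + tuple._2());
        }
        ctx.stop();
    }
}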

The job executes successfully. The full console output below also shows the "added by Jerry" lines printed by the author's instrumented launch scripts, which reveal how spark-submit assembles the final JVM command line:

./spark-submit --class "org.apache.spark.examples.JavaWordCount" --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
added by Jerry: loading load-spark-env.sh !!!1
added by Jerry:…
/root/devExpert/spark-1.4.1/conf
added by Jerry, number of Jars: 1
added by Jerry, launch_classpath: /root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar
added by Jerry,RUNNER:/usr/jdk1.7.0_79/bin/java
added by Jerry, printf argument list: org.apache.spark.deploy.SparkSubmit --class org.apache.spark.examples.JavaWordCount --master spark://NKGV50849583FV1:7077 /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
added by Jerry, I am in if-else branch: /usr/jdk1.7.0_79/bin/java -cp /root/devExpert/spark-1.4.1/conf/:/root/devExpert/spark-1.4.1/assembly/target/scala-2.10/spark-assembly-1.4.1-hadoop2.4.0.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-rdbms-3.2.9.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-core-3.2.10.jar:/root/devExpert/spark-1.4.1/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar -Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master spark://NKGV50849583FV1:7077 --class org.apache.spark.examples.JavaWordCount /root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar /root/devExpert/spark-1.4.1/bin/test.txt
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/08/15 14:08:02 INFO SparkContext: Running Spark version 1.4.1
15/08/15 14:08:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/15 14:08:03 WARN Utils: Your hostname, NKGV50849583FV1 resolves to a loopback address: 127.0.0.1; using 10.128.184.131 instead (on interface eth0)
15/08/15 14:08:03 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/08/15 14:08:03 INFO SecurityManager: Changing view acls to: root
15/08/15 14:08:03 INFO SecurityManager: Changing modify acls to: root
15/08/15 14:08:03 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/08/15 14:08:04 INFO Slf4jLogger: Slf4jLogger started
15/08/15 14:08:04 INFO Remoting: Starting remoting
15/08/15 14:08:04 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:44792]
15/08/15 14:08:04 INFO Utils: Successfully started service 'sparkDriver' on port 44792.
15/08/15 14:08:04 INFO SparkEnv: Registering MapOutputTracker
15/08/15 14:08:04 INFO SparkEnv: Registering BlockManagerMaster
15/08/15 14:08:04 INFO DiskBlockManager: Created local directory at /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4/blockmgr-4c660a56-0014-4b1f-81a9-7ac66507b9fa
15/08/15 14:08:04 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/15 14:08:05 INFO HttpFileServer: HTTP File server directory is /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4/httpd-b4344651-dbd8-4ba4-be1a-913ae006d839
15/08/15 14:08:05 INFO HttpServer: Starting HTTP Server
15/08/15 14:08:05 INFO Utils: Successfully started service 'HTTP file server' on port 46256.
15/08/15 14:08:05 INFO SparkEnv: Registering OutputCommitCoordinator
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
15/08/15 14:08:05 WARN QueuedThreadPool: 2 threads could not be stopped
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
15/08/15 14:08:05 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
15/08/15 14:08:06 WARN Utils: Service 'SparkUI' could not bind on port 4043. Attempting port 4044.
15/08/15 14:08:06 WARN Utils: Service 'SparkUI' could not bind on port 4044. Attempting port 4045.
15/08/15 14:08:06 INFO Utils: Successfully started service 'SparkUI' on port 4045.
15/08/15 14:08:06 INFO SparkUI: Started SparkUI at http://10.128.184.131:4045
15/08/15 14:08:06 INFO SparkContext: Added JAR file:/root/devExpert/spark-1.4.1/example-java-build/JavaWordCount/target/JavaWordCount-1.jar at http://10.128.184.131:46256/jars/JavaWordCount-1.jar with timestamp 1439618886415
15/08/15 14:08:06 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@NKGV50849583FV1:7077/user/Master...
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150815140806-0003
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor added: app-20150815140806-0003/0 on worker-20150815125648-10.128.184.131-53710 (10.128.184.131:53710) with 8 cores
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150815140806-0003/0 on hostPort 10.128.184.131:53710 with 8 cores, 512.0 MB RAM
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor added: app-20150815140806-0003/1 on worker-20150815125443-10.128.184.131-34423 (10.128.184.131:34423) with 8 cores
15/08/15 14:08:06 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150815140806-0003/1 on hostPort 10.128.184.131:34423 with 8 cores, 512.0 MB RAM
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/0 is now LOADING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/1 is now LOADING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/0 is now RUNNING
15/08/15 14:08:06 INFO AppClient$ClientActor: Executor updated: app-20150815140806-0003/1 is now RUNNING
[... word-count execution log garbled in the original and elided here; it includes, among other things, Block broadcast_0_piece0 stored as bytes in memory ...]
15/08/15 14:08:16 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/08/15 14:08:16 INFO SparkContext: Successfully stopped SparkContext
15/08/15 14:08:16 INFO Utils: Shutdown hook called
15/08/15 14:08:16 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/08/15 14:08:16 INFO Utils: Deleting directory /tmp/spark-6fc6b901-3ac8-4acd-87aa-352fd22cf8d4
15/08/15 14:08:16 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.


If one worker is shut down, the master notices the lost worker and marks it as DEAD in the web UI, while the cluster keeps running on the remaining worker.


For more of Jerry's original articles, please follow the WeChat public account "汪子熙".