This article takes a detailed look at what happens when a driver program (a Spark application) is started on one of the worker nodes of a Spark standalone cluster. Please feel free to comment or suggest if I missed one or more important points.
At a high level, we will walk through the JVM processes that start when the cluster comes up, the processes that start when a Spark shell application is launched, and what happens when the application exits.
In our example, we start a cluster with one master and two worker nodes using a Docker Compose file. For details on setting up a Spark standalone cluster with Docker, see this page on how to set up a Spark standalone cluster using Docker. Here is what happens when the cluster starts:
On the master node, a Master JVM starts:

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master -h spark-master

On each worker node, a Worker JVM starts and registers with the master at spark://spark-master:7077:

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker spark://spark-master:7077
In the following section, we will look at the processes that start when a Spark shell application is launched.
Let us start a Spark application (the Spark shell) on one of the worker nodes using a command such as the following, and then take a snapshot of all the JVM processes running on each worker node and on the master node. Note that the Spark shell starts in client mode, so the driver runs on the node where the command is issued.
spark-shell --master spark://192.168.99.100:7077
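As a quick sanity check (a minimal sketch, assuming the sc SparkContext that the shell creates by default), you can confirm from inside the shell that it connected to the standalone master in client mode:

// Inside spark-shell: sc is the SparkContext created for the REPL session.
sc.master       // String = spark://192.168.99.100:7077
sc.deployMode   // String = client (spark-shell always runs the driver in client mode)
sc.version      // String = 2.0.0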
As the Spark application starts on the worker node, here is a snapshot of the JVM processes running on the different nodes.

On the master node, the Master process is unchanged:

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master -h spark-master

On the worker node where spark-shell was launched, a SparkSubmit JVM hosts the driver (the REPL class org.apache.spark.repl.Main):

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Dscala.usejavacp=true -Xmx1g org.apache.spark.deploy.SparkSubmit --master spark://192.168.99.100:7077 --class org.apache.spark.repl.Main --name Spark shell spark-shell

On the same worker node, a CoarseGrainedExecutorBackend JVM runs as the executor for this application:

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1024M -Dspark.driver.port=40442 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@172.17.0.3:40442 --executor-id 1 --hostname 172.17.0.3 --cores 2 --app-id app-20170104100629-0000 --worker-url spark://Worker@172.17.0.3:8881
Pay attention to some of the following facts:

- The driver runs inside the SparkSubmit JVM on the worker node where spark-shell was launched (client mode); the main class is org.apache.spark.repl.Main and the application name is "Spark shell".
- The executor is a separate CoarseGrainedExecutorBackend JVM. It receives the driver's endpoint via --driver-url spark://CoarseGrainedScheduler@172.17.0.3:40442 and uses it to register with the driver's scheduler; the port 40442 matches the -Dspark.driver.port setting.
- The executor is assigned --executor-id 1, 2 cores, and the application id app-20170104100629-0000, and it tracks its parent worker via --worker-url.
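From inside the shell, you can read back the same identifiers that appear in these command lines (an illustrative check; the exact values will differ on your cluster):

// These values line up with the executor command line above.
sc.applicationId                      // String = app-20170104100629-0000
sc.defaultParallelism                 // Int = 4 (two executors with 2 cores each)
sc.getConf.get("spark.driver.port")   // e.g. "40442", matching --driver-url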
On the second worker node, only an executor process starts; there is no driver JVM here:

/usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1024M -Dspark.driver.port=40442 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@172.17.0.3:40442 --executor-id 0 --hostname 172.17.0.4 --cores 2 --app-id app-20170104100629-0000 --worker-url spark://Worker@172.17.0.4:8882
Pay attention to some of the following facts:

- The second worker runs only the executor JVM (--executor-id 0); the driver exists only on the node where spark-shell was started.
- This executor points at the same driver URL (spark://CoarseGrainedScheduler@172.17.0.3:40442) and the same application id (app-20170104100629-0000) as the executor on the first worker, but reports its own --hostname 172.17.0.4 and its own --worker-url spark://Worker@172.17.0.4:8882.
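To see both executors doing work, run a small job from the shell (a minimal sketch; the job and numbers are illustrative). The Executors tab of the driver web UI, by default at http://<driver-host>:4040, lists both executor ids seen in the command lines above:

// A tiny job split into 4 partitions; with 2 cores per executor,
// tasks are spread across both CoarseGrainedExecutorBackend JVMs.
val rdd = sc.parallelize(1 to 1000, numSlices = 4)
rdd.map(_ * 2).reduce(_ + _)   // Int = 1001000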
Once the spark-shell program exits, the executor processes on both workers are killed: the master asks each worker process to kill the executor for this application, and each worker then cleans up the local directories it created for that executor.
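Exiting the shell (for example with :quit) stops the SparkContext; calling sc.stop() explicitly triggers the same teardown (a sketch, assuming the shell's default sc):

// Stopping the context unregisters the application with the master,
// which in turn asks each Worker to kill its executor JVM for this app.
sc.stop()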