
What happens when a Spark application starts on a Spark Standalone Cluster?

This article presents a detailed view of what happens when a driver program (Spark application) is started on one of the worker nodes of a Spark standalone cluster. Please feel free to comment or suggest if I have missed one or more important points.

Following are the key points described later in this article:

  • Snapshot of what happens when the Spark Standalone cluster starts
  • Snapshot of what happens when a Spark application (Spark Shell) starts on one of the worker nodes
  • Snapshot of what happens when a Spark application (Spark Shell) stops on the worker node

Snapshot of what happens when the Spark Standalone cluster starts

In our example, we are starting a cluster with one master and two worker nodes using a Docker Compose file. For details on setting up a Spark standalone cluster, access this page on how to set up a Spark standalone cluster using Docker. Following is what happens when the cluster starts (a sketch of the commands used to bring up and inspect the cluster is shown after the list below):

Spark cluster when no spark application is running

  • Master Node: On the master node, a Java process is started using the class “org.apache.spark.deploy.master.Master”.
    /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master -h spark-master
    
  • 2 Worker Nodes: On both of the worker nodes, a Java process is started with the following command. Make a note of the class “org.apache.spark.deploy.worker.Worker” passed to the java command.
    /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker spark://spark-master:7077
    
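For reference, the cluster in this example is brought up with Docker Compose. Following is a minimal sketch of the commands used to start the cluster and to verify the above JVM processes; the container names spark-master, spark-worker1 and spark-worker2 are assumptions based on the commands shown above and may differ in your compose file.

    # Start the cluster (one master, two workers) defined in docker-compose.yml
    docker-compose up -d

    # List the running containers
    docker ps

    # Verify the Master JVM on the master node
    docker exec spark-master ps -ef | grep org.apache.spark.deploy.master.Master

    # Verify the Worker JVM on each of the worker nodes
    docker exec spark-worker1 ps -ef | grep org.apache.spark.deploy.worker.Worker
    docker exec spark-worker2 ps -ef | grep org.apache.spark.deploy.worker.Worker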

In the following section, we will see the processes which start when a Spark Shell application is started.


Snapshot of what happens when a Spark application (Spark Shell) starts on one of the worker nodes

Let us start a Spark application (Spark Shell) on one of the worker nodes using a command such as the following, and take a snapshot of all the JVM processes running on each of the worker nodes and on the master node. Note that the Spark shell gets started in client mode.

spark-shell --master spark://192.168.99.100:7077
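Since the worker nodes in this example run as Docker containers, the shell can also be launched from inside one of the worker containers and the JVM snapshot taken with ps. Following is a rough sketch; the container names are assumptions, and spark-shell is assumed to be on the PATH inside the container.

    # Launch the Spark shell (the driver program) from inside one of the worker containers
    docker exec -it spark-worker1 spark-shell --master spark://192.168.99.100:7077

    # In another terminal, snapshot the JVM processes on each node
    docker exec spark-master ps -ef | grep java
    docker exec spark-worker1 ps -ef | grep java
    docker exec spark-worker2 ps -ef | grep java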

As the Spark application gets started on the worker node, following is the snapshot of all the JVM processes running on the different nodes (a sketch of commands to verify the executor processes follows the list below):

Detailed view of Spark cluster when spark application is started

  • Master Node: On the master node, there is no change. Only one Java process, such as the following, is found.
    /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.master.Master -h spark-master
    
  • Worker Node on which the Spark Shell got started: Following are the three Java processes found to be running:
    • JVM running Worker: This is the process which started when the worker node started.
    • JVM running org.apache.spark.deploy.SparkSubmit: This is the process that got started as a result of invoking the “spark-shell” program on the worker node. Following is how the command looks:
      /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Dscala.usejavacp=true -Xmx1g org.apache.spark.deploy.SparkSubmit --master spark://192.168.99.100:7077  --class org.apache.spark.repl.Main --name Spark shell spark-shell
      
    • JVM running org.apache.spark.executor.CoarseGrainedExecutorBackend: This is the executor process started for the driver program (the spark-shell app) to run its tasks. Following is how the command looks:
       /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1024M -Dspark.driver.port=40442 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@172.17.0.3:40442 --executor-id 1 --hostname 172.17.0.3 --cores 2 --app-id app-20170104100629-0000 --worker-url spark://Worker@172.17.0.3:8881
      

      Pay attention to some of the following facts:

      • The executor process is another Java process (org.apache.spark.executor.CoarseGrainedExecutorBackend).
      • It is registered with the driver using the URL spark://CoarseGrainedScheduler@172.17.0.3:40442.
      • It is running on the host with IP 172.17.0.3.
      • The worker node URL on which the executor process is running is spark://Worker@172.17.0.3:8881.
  • Another Worker Node: Following are the two Java processes found on the other worker node:
    • JVM running Worker: This is the process which started when the worker node started.
    • JVM running org.apache.spark.executor.CoarseGrainedExecutorBackend: This is the executor process started for the driver program (the spark-shell app) to run its tasks. Following is how the command looks:
       /usr/lib/jvm/java-8-oracle/bin/java -cp /conf/:/usr/local/spark-2.0.0-bin-hadoop2.7/jars/* -Xmx1024M -Dspark.driver.port=40442 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@172.17.0.3:40442 --executor-id 0 --hostname 172.17.0.4 --cores 2 --app-id app-20170104100629-0000 --worker-url spark://Worker@172.17.0.4:8882
      

      Pay attention to some of the following facts:

      • The executor process is another Java process (org.apache.spark.executor.CoarseGrainedExecutorBackend).
      • It is registered with the driver using the URL spark://CoarseGrainedScheduler@172.17.0.3:40442.
      • It is running on the host with IP 172.17.0.4.
      • The worker node URL on which the executor process is running is spark://Worker@172.17.0.4:8882.
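
One way to confirm the details above is to list the executor JVMs on each worker and inspect their --driver-url, --executor-id and --worker-url arguments. Following is a sketch; the container names are assumptions.

    # Executor backend on the worker that also runs the driver (172.17.0.3)
    docker exec spark-worker1 ps -ef | grep CoarseGrainedExecutorBackend

    # Executor backend on the other worker (172.17.0.4)
    docker exec spark-worker2 ps -ef | grep CoarseGrainedExecutorBackend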


Snapshot of what happens when a Spark application (Spark Shell) stops on the worker node

Once the spark-shell program exits, the executor processes on both of the workers are killed. The worker processes are asked by the master process to kill the executors, and they then clean up the local directories created for the executors.
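
A quick way to observe this is to exit the shell and then check the worker nodes again. Following is a sketch; the container names are assumptions.

    # Exit the Spark shell (the driver program); this stops the application
    scala> :quit

    # The executor JVMs should no longer be running on either worker
    docker exec spark-worker1 ps -ef | grep CoarseGrainedExecutorBackend
    docker exec spark-worker2 ps -ef | grep CoarseGrainedExecutorBackend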


