
PySpark job submission parameter examples

Author: 高景洋  Date: 2021-02-02 14:30:23  Views: 2017
# Submission (testing, local mode)
# Note: --num-executors only takes effect on YARN; with --master local[2] it is ignored.
# spark-submit --master local[2] --num-executors 2 --executor-memory 1G --jars ./spark-examples_2.11-1.6.0-typesafe-001.jar /home/hadoop/script/test_hbase_dataframe.py

# Package the project source into a zip for --py-files
# zip -r collect_py.zip *
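
# The zip above only bundles the project code shipped via --py-files. The Python
# runtime archive referenced below through spark.yarn.dist.archives
# (python3.7.6.zip / python3.6.12.zip) is built and uploaded separately. A minimal
# sketch, assuming the runtime is a conda env under /data/anaconda2/envs and the
# archive must unpack to hbase_scan/bin/python to match the ./python_env/hbase_scan/...
# paths used below:
# cd /data/anaconda2/envs
# zip -r python3.7.6.zip hbase_scan    # keep the env dir as the archive root
# hdfs dfs -put python3.7.6.zip /user/hadoop/python/env/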

# --------------- Submission (production): for Python 3.7 and Spark 2.3+ -------------------
# spark-submit \
# --name hbase_scan \
# --py-files /home/hadoop/collect_py/collect_py.zip \
# --master yarn \
# --jars /home/hadoop/spark-examples_2.11-1.6.0-typesafe-001.jar \
# --driver-memory 3g \
# --executor-memory 12g \
# --executor-cores 4 \
# --num-executors 14 \
# --conf spark.yarn.dist.archives=hdfs:///user/hadoop/python/env/python3.7.6.zip#python_env \
# --conf spark.pyspark.driver.python=/data/anaconda2/envs/hbase_scan/bin/python \
# --conf spark.pyspark.python=./python_env/hbase_scan/bin/python \
# --conf spark.task.cpus=1 \
# --conf spark.executorEnv.CLASSPATH="/usr/local/service/hive/spark/examples/jars/*:${CLASSPATH}" \
# /home/hadoop/collect_py/data_collect_py/apps/collect/schedule/spark_schedule.py
#-------------------------------------------------------------------------------
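
# How these paths fit together: the #python_env suffix on spark.yarn.dist.archives
# names the directory the archive is unpacked into inside each YARN container, so
# spark.pyspark.python can be a container-relative path, while
# spark.pyspark.driver.python is an absolute local path because with --master yarn
# (deploy-mode defaulting to client) the driver runs on the submitting machine:
# ./python_env/hbase_scan/bin/python           <- executors, relative to container dir
# /data/anaconda2/envs/hbase_scan/bin/python   <- driver, local absolute path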

# --------------- Submission (production): for Python 3.6 and Spark 2.2; use this when submitting from the current jump server ------
# spark-submit \
# --name hbase_scan \
# --py-files /home/hadoop/collect_py/collect_py.zip \
# --master yarn \
# --jars /home/hadoop/spark-examples_2.11-1.6.0-typesafe-001.jar \
# --driver-memory 3g \
# --executor-memory 10g \
# --executor-cores 10 \
# --num-executors 14 \
# --conf spark.yarn.dist.archives=hdfs:///user/hadoop/python/env/python3.6.12.zip#python_env \
# --conf spark.pyspark.driver.python=/data/anaconda2/envs/hbase_scan_36/bin/python \
# --conf spark.pyspark.python=./python_env/hbase_scan_36/bin/python \
# --conf spark.task.cpus=1 \
# --conf spark.executorEnv.CLASSPATH="/usr/local/service/hive/spark/examples/jars/*:${CLASSPATH}" \
# /home/hadoop/collect_py/data_collect_py/apps/collect/schedule/spark_schedule.py
#-------------------------------------------------------------------------------
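
# The Azkaban variant below is identical to the jump-server command above except that
# it drops the spark.executorEnv.CLASSPATH override and instead pins shuffle
# parallelism explicitly via spark.sql.shuffle.partitions and spark.default.parallelism.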

# --------------- Submission (production): for Python 3.6 and Spark 2.2; current Azkaban submission ------
# spark-submit \
# --name hbase_scan \
# --py-files /home/hadoop/collect_py/collect_py.zip \
# --master yarn \
# --jars /home/hadoop/spark-examples_2.11-1.6.0-typesafe-001.jar \
# --driver-memory 3g \
# --executor-memory 10g \
# --executor-cores 10 \
# --num-executors 14 \
# --conf spark.yarn.dist.archives=hdfs:///user/hadoop/python/env/python3.6.12.zip#python_env \
# --conf spark.pyspark.driver.python=/data/anaconda2/envs/hbase_scan_36/bin/python \
# --conf spark.pyspark.python=./python_env/hbase_scan_36/bin/python \
# --conf spark.task.cpus=1 \
# --conf spark.sql.shuffle.partitions=280 \
# --conf spark.default.parallelism=280 \
# /home/hadoop/collect_py/data_collect_py/apps/collect/schedule/spark_schedule.py
#-------------------------------------------------------------------------------
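
# A quick sanity check on the parallelism settings: with --num-executors 14,
# --executor-cores 10 and spark.task.cpus=1, the job has 14 x 10 / 1 = 140 concurrent
# task slots, so spark.sql.shuffle.partitions=280 and spark.default.parallelism=280
# yield exactly two waves of tasks per stage (280 / 140 = 2), in line with the common
# rule of thumb of 2-3 partitions per core.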