
How do I import my own dependency modules when using spark-submit?

The import statement in the Python code:

from spark_learning.utils.default_utils import setDefaultEncoding,initSparkContext,ensureOffset

The spark-submit command:

bin/spark-submit --jars /home/jabo/software/spark-1.5.2-bin-hadoop2.6/lib/spark-streaming-kafka-assembly_2.10-1.5.2.jar \
/home/jabo/spark-by-python/spark_learning/third_day/streaming_kafka_avg.py \
--py-files /home/jabo/spark-by-python/spark_learning/utils/default_utils.py

The official documentation says:

For Python applications, simply pass a .py file in the place of <application-jar> instead of a JAR, and add Python .zip, .egg or .py files to the search path with --py-files.

But it fails with an error saying the imported module cannot be found:

Traceback (most recent call last):
  File "/home/jabo/spark-by-python/spark_learning/third_day/streaming_kafka_avg.py", line 10, in <module>
    import spark_learning.utils.default_utils
ImportError: No module named spark_learning.utils.default_utils

How can this be solved?


Try putting the --py-files option before the script you want to run! We just ran into the same problem and solved it that way.
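
For reference, the reordered command would look roughly like this (same paths as in the question; spark-submit treats everything after the application .py file as arguments to the application itself, so options such as --jars and --py-files must come before it):

bin/spark-submit \
--jars /home/jabo/software/spark-1.5.2-bin-hadoop2.6/lib/spark-streaming-kafka-assembly_2.10-1.5.2.jar \
--py-files /home/jabo/spark-by-python/spark_learning/utils/default_utils.py \
/home/jabo/spark-by-python/spark_learning/third_day/streaming_kafka_avg.py

If the import still fails with the full package path spark_learning.utils.default_utils, note that a single .py file distributed via --py-files lands at the top level of the search path, so it is importable only as default_utils. One option, per the docs quoted above, is to zip the whole spark_learning package (a sketch, assuming each directory contains an __init__.py) and pass that zip to --py-files instead:

cd /home/jabo/spark-by-python
zip -r spark_learning.zip spark_learning
bin/spark-submit --py-files /home/jabo/spark-by-python/spark_learning.zip ... /home/jabo/spark-by-python/spark_learning/third_day/streaming_kafka_avg.py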
