1. Before executing the insert, the following parameters must be set:
spark.sql("set hive.exec.dynamic.partition.mode=nonstrict") spark.sql('''set mapred.output.compress=true''') spark.sql('''set hive.exec.compress.output=true''') spark.sql('''setmapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec''')
insert_sql = '''
insert overwrite table test partition(dt,hour) select * from tmp_view
'''
spark.sql(insert_sql)
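For context, here is a minimal end-to-end sketch of the steps above. The SparkSession setup, the sample schema of tmp_view, and the sample row are assumptions added for illustration; the only requirements from the original are that the Hive table test already exists and that the last columns of tmp_view line up with the partition columns (dt, hour).

from pyspark.sql import SparkSession

# Hive support is required for "insert overwrite table ... partition" (assumed setup)
spark = SparkSession.builder \
    .appName("lzo_dynamic_partition_insert") \
    .enableHiveSupport() \
    .getOrCreate()

# Allow fully dynamic partitions and enable LZO-compressed output
spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("set mapred.output.compress=true")
spark.sql("set hive.exec.compress.output=true")
spark.sql("set mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec")

# Hypothetical source data; the schema is an example, not from the original --
# the trailing columns must match the partition columns (dt, hour)
df = spark.createDataFrame(
    [("a", 1, "2023-01-01", "00")],
    ["key", "value", "dt", "hour"],
)
df.createOrReplaceTempView("tmp_view")

# Dynamic-partition insert; assumes table test was created with partition columns (dt, hour)
spark.sql("""
    insert overwrite table test partition(dt, hour)
    select * from tmp_view
""")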
Note: in PySpark you cannot call Hive the way a plain Python script does through HiveTask, i.e. the following shortcut is not available:
from HiveTask import *

ht = HiveTask()
ht.exec_sql("adm", sql, lzo_path="true")