Inserting data into an LZO-format Hive table from PySpark


1. Before running the insert, the following parameters must be set:

spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")
spark.sql('''set mapred.output.compress=true''')
spark.sql('''set hive.exec.compress.output=true''')
spark.sql('''set mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec''')
insert_sql = '''
insert overwrite table test partition(dt,hour) select * from tmp_view
'''
spark.sql(insert_sql)
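The steps above can be put together as one sketch. This assumes a Hive-enabled SparkSession and that the hadoop-lzo jar (providing `com.hadoop.compression.lzo.LzopCodec`) is on the cluster classpath; the table name `test`, view name `tmp_view`, and partition columns `dt`/`hour` are the placeholders from the post, and `build_insert_sql` is a hypothetical helper added here for illustration:

```python
# Sketch only: the table/view/partition names are the post's placeholders,
# and build_insert_sql is a helper invented for this example.

# The four SET statements from the post (codec line with the space fixed).
LZO_SETTINGS = [
    "set hive.exec.dynamic.partition.mode=nonstrict",
    "set mapred.output.compress=true",
    "set hive.exec.compress.output=true",
    "set mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec",
]


def build_insert_sql(table, view, partition_cols):
    """Build the dynamic-partition insert-overwrite statement from the post."""
    parts = ",".join(partition_cols)
    return f"insert overwrite table {table} partition({parts}) select * from {view}"


if __name__ == "__main__":
    # pyspark import kept inside the guard so the helpers above are usable
    # without a Spark installation.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("lzo_insert")
        .enableHiveSupport()  # required so spark.sql can see Hive tables
        .getOrCreate()
    )
    # Apply the compression settings before the insert, as the post requires.
    for stmt in LZO_SETTINGS:
        spark.sql(stmt)
    spark.sql(build_insert_sql("test", "tmp_view", ["dt", "hour"]))
```

On older Hadoop clusters the `mapred.*` keys shown in the post still work; newer Hadoop versions also accept the `mapreduce.output.fileoutputformat.compress*` equivalents.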

Note: this is unlike plain Python, where Hive can be called directly through a wrapper such as HiveTask and LZO output is enabled with a single argument:

from HiveTask import *
ht = HiveTask()
ht.exec_sql("adm", sql, lzo_path="true")