
Writing HBase data into Hive as a DataFrame with PySpark

Author: Gao Jingyang  Date: 2021-09-03 18:16:00  Views: 1168

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext  # HiveContext lives in pyspark.sql, not the top-level package

conf = SparkConf()
sc = SparkContext(conf=conf)

# Filter the source DataFrame: keep rows whose WebsiteID is 1 or 71, then keep
# only rows where IsDeleted is True. (list_filter_websiteids is the DataFrame
# built earlier from the HBase data.)
df_tmp = list_filter_websiteids.where('WebsiteID in ({})'.format(','.join(['1', '71']))) \
                               .filter(list_filter_websiteids['IsDeleted'] == True)
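To make the two-step filter concrete, here is the same logic in plain Python over a few invented sample rows (the data and values are illustrative only, not from the article): the `where()` string keeps rows whose WebsiteID is 1 or 71, and the `filter()` then keeps only rows where IsDeleted is true.

```python
# Build the IN-clause predicate exactly as the PySpark code does.
website_ids = ['1', '71']
predicate = 'WebsiteID in ({})'.format(','.join(website_ids))
print(predicate)  # WebsiteID in (1,71)

# Sample rows (invented for this sketch) and the equivalent filtering.
rows = [
    {'WebsiteID': 1,  'IsDeleted': True},
    {'WebsiteID': 71, 'IsDeleted': False},
    {'WebsiteID': 99, 'IsDeleted': True},
]
kept = [r for r in rows if r['WebsiteID'] in (1, 71) and r['IsDeleted']]
print(kept)  # [{'WebsiteID': 1, 'IsDeleted': True}]
```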

df_tmp.registerTempTable('test_hive')  # register the DataFrame as a temporary table
hivec = HiveContext(sc)  # create a HiveContext
hivec.sql('create table test.product as select * from test_hive')  # CTAS: write the temp table's data into the Hive table test.product
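The snippet above assumes `list_filter_websiteids` already holds the HBase rows as a DataFrame. One common way to build such a DataFrame in this era of PySpark is `SparkContext.newAPIHadoopRDD` with HBase's `TableInputFormat`. The sketch below is not runnable as-is: the ZooKeeper host and table name are assumptions, it needs a live HBase cluster, and the converter classes come from the Spark examples jar, which must be on the classpath.

```python
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

conf = SparkConf()
sc = SparkContext(conf=conf)
sqlc = SQLContext(sc)

# Assumed connection details -- replace with your own cluster and table.
hbase_conf = {
    'hbase.zookeeper.quorum': 'zk-host',           # assumption: ZooKeeper quorum
    'hbase.mapreduce.inputtable': 'product_table'  # assumption: HBase table name
}

# Read HBase rows as (rowkey, result-string) pairs using the converter
# classes shipped with the Spark examples jar.
rdd = sc.newAPIHadoopRDD(
    'org.apache.hadoop.hbase.mapreduce.TableInputFormat',
    'org.apache.hadoop.hbase.io.ImmutableBytesWritable',
    'org.apache.hadoop.hbase.client.Result',
    keyConverter='org.apache.spark.examples.pythonconverters.'
                 'ImmutableBytesWritableToStringConverter',
    valueConverter='org.apache.spark.examples.pythonconverters.'
                   'HBaseResultToStringConverter',
    conf=hbase_conf)

# How the value string is parsed into columns depends on your row layout;
# after parsing, a DataFrame like list_filter_websiteids can be created, e.g.:
# list_filter_websiteids = sqlc.createDataFrame(parsed_rdd, schema)
```

From that point on, the filtering and the HiveContext write shown in the article apply unchanged.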

Permanent link to this article:
<a href="http://r4.com.cn/art196.aspx">Writing HBase data into Hive as a DataFrame with PySpark</a>