join:
Join queries
RDDs in Spark can be joined in four ways:
1. join
2. leftOuterJoin
3. rightOuterJoin
4. fullOuterJoin
Example code:
from pyspark import SparkContext

sc = SparkContext("local", "rdd_join_demo")  # in the pyspark shell, sc is already provided

def my_join():
    rdd_a = sc.parallelize([('A', 'a1'), ('C', 'c1'), ('D', 'd1'), ('F', 'f1'), ('F', 'f2')])
    rdd_b = sc.parallelize([('A', 'a2'), ('C', 'c2'), ('C', 'c3'), ('E', 'e1')])
    # Inner join: keeps only keys that appear in both RDDs
    result = rdd_a.join(rdd_b)
    print(result.collect())
    # Left outer join: keeps every key of rdd_a; unmatched right values are None
    result_left_join = rdd_a.leftOuterJoin(rdd_b)
    print(result_left_join.collect())
    # Right outer join: keeps every key of rdd_b; unmatched left values are None
    result_right_join = rdd_a.rightOuterJoin(rdd_b)
    print(result_right_join.collect())
    # Full outer join: keeps every key of either RDD
    result_full_join = rdd_a.fullOuterJoin(rdd_b)
    print(result_full_join.collect())

my_join()
Output:
[('A', ('a1', 'a2')), ('C', ('c1', 'c2')), ('C', ('c1', 'c3'))]
[('A', ('a1', 'a2')), ('F', ('f1', None)), ('F', ('f2', None)), ('D', ('d1', None)), ('C', ('c1', 'c2')), ('C', ('c1', 'c3'))]
[('A', ('a1', 'a2')), ('C', ('c1', 'c2')), ('C', ('c1', 'c3')), ('E', (None, 'e1'))]
[('A', ('a1', 'a2')), ('F', ('f1', None)), ('F', ('f2', None)), ('D', ('d1', None)), ('C', ('c1', 'c2')), ('C', ('c1', 'c3')), ('E', (None, 'e1'))]
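The None placeholders mark keys that have no match on one side of an outer join. As a minimal follow-up sketch (assuming the rdd_a and rdd_b defined in the example above; the default string 'missing' is just an illustrative choice), such placeholders can be replaced with a default value via mapValues:

# Hypothetical follow-up, reusing rdd_a and rdd_b from the example above.
# mapValues keeps the key and rewrites the (left, right) value pair,
# substituting 'missing' for the None produced by the outer join.
filled = rdd_a.fullOuterJoin(rdd_b).mapValues(
    lambda pair: (pair[0] if pair[0] is not None else 'missing',
                  pair[1] if pair[1] is not None else 'missing'))
print(filled.collect())
# e.g. [('D', ('d1', 'missing')), ('E', ('missing', 'e1')), ...]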