I: What is a Pair RDD?
Spark provides a number of operations specific to RDDs that contain key-value pairs; such RDDs are called pair RDDs. Pair RDDs are a building block of many programs, because they expose operations that let you act on each key in parallel or regroup data across the nodes.
II: Examples of Pair RDD Operations
1: Creating a pair RDD
There are many ways to create a pair RDD in Spark. Many storage formats that hold key-value data return a pair RDD directly when loaded. In addition, when you need to turn an ordinary RDD into a pair RDD, you can call map with a function that returns key-value pairs.
scala> var lines = sc.parallelize(List("i love you"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[8] at parallelize at <console>:27
scala> val pairs = lines.map(x=>(x,1))
pairs: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[9] at map at <console>:29
scala> pairs.foreach(println)
(i love you,1)
2: Transformations on pair RDDs
Because pair RDDs contain tuples, the functions we pass must operate on tuples rather than on individual elements. Assume the key-value collection is {(1,2),(3,4),(3,6)}.
rdd.reduceByKey(func): merges values that have the same key
scala> val rdd = sc.parallelize(List((1,2),(3,4),(3,6)))
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[12] at parallelize at <console>:27
scala> val result = rdd.reduceByKey((x,y)=>x+y)
result: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[14] at reduceByKey at <console>:29
scala> result.foreach(println)
(1,2)
(3,10)
rdd.groupByKey(): groups values that have the same key [data grouping]
scala> val result = rdd.groupByKey()
result: org.apache.spark.rdd.RDD[(Int, Iterable[Int])] = ShuffledRDD[1] at groupByKey at <console>:29
scala> result.foreach(println)
(3,CompactBuffer(4, 6))
(1,CompactBuffer(2))
rdd.mapValues(func): applies func to each value in the pair RDD without changing the keys
scala> val result = rdd.mapValues(x=>x+1)
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[2] at mapValues at <console>:29
scala> result.foreach(println)
(1,3)
(3,5)
(3,7)
rdd.flatMapValues(func): similar to mapValues, but func returns an iterator; each returned element is paired with the original key
scala> val result = rdd.flatMapValues(x=>(x to 5))
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[3] at flatMapValues at <console>:29
scala> result.foreach(println)
(3,4)
(3,5)
(1,2)
(1,3)
(1,4)
(1,5)
rdd.keys: returns an RDD containing only the keys
scala> val result = rdd.keys
result: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[4] at keys at <console>:29
scala> result.foreach(println)
3
1
3
rdd.values: returns an RDD containing only the values
scala> val result = rdd.values
result: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[6] at values at <console>:29
scala> result.foreach(println)
2
4
6
rdd.sortByKey(): returns an RDD sorted by key
For sorting data, sortByKey accepts an ascending parameter that indicates whether the result should be sorted in ascending order (the default is true).
scala> val result = rdd.sortByKey().collect()
result: Array[(Int, Int)] = Array((1,2), (3,4), (3,6))
scala> result
res8: Array[(Int, Int)] = Array((1,2), (3,4), (3,6))
scala> val result = rdd.sortByKey(ascending=false).collect()
result: Array[(Int, Int)] = Array((3,4), (3,6), (1,2))
3: Transformations on two pair RDDs
| Function | Purpose | Example | Result |
| --- | --- | --- | --- |
| subtractByKey | Remove elements whose key also appears in the other RDD | rdd.subtractByKey(other) | {(1,2)} |
| join | Perform an inner join between the two RDDs | rdd.join(other) | {(3,(4,9)),(3,(6,9))} |
| rightOuterJoin | Join the two RDDs, keeping every key from the other (right) RDD | rdd.rightOuterJoin(other) | {(3,(Some(4),9)),(3,(Some(6),9))} |
| leftOuterJoin | Join the two RDDs, keeping every key from the first (left) RDD | rdd.leftOuterJoin(other) | {(1,(2,None)),(3,(4,Some(9))),(3,(6,Some(9)))} |
| cogroup | Group together data from both RDDs that share the same key | rdd.cogroup(other) | {(1,([2],[])),(3,([4,6],[9]))} |
Assume: rdd = {(1,2),(3,4),(3,6)} and other = {(3,9)}.
rdd.subtractByKey(other): removes elements from rdd whose key also appears in the other RDD
scala> val rdd = sc.parallelize(List((1,2),(3,4),(3,6)))
rdd: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[4] at parallelize at <console>:27
scala> val other = sc.parallelize(List((3,9)))
other: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[5] at parallelize at <console>:27
scala> val result = rdd.subtractByKey(other)
result: org.apache.spark.rdd.RDD[(Int, Int)] = SubtractedRDD[6] at subtractByKey at <console>:31
scala> result.foreach(println)
(1,2)
rdd.join(other): performs an inner join between the two RDDs
scala> val result = rdd.join(other)
result: org.apache.spark.rdd.RDD[(Int, (Int, Int))] = MapPartitionsRDD[12] at join at <console>:31
scala> result.foreach(println)
(3,(4,9))
(3,(6,9))
rdd.rightOuterJoin(other): joins the two RDDs, keeping every key present in the second (right) RDD (right outer join)
scala> val result = rdd.rightOuterJoin(other)
result: org.apache.spark.rdd.RDD[(Int, (Option[Int], Int))] = MapPartitionsRDD[15] at rightOuterJoin at <console>:31
scala> result.foreach(println)
(3,(Some(4),9))
(3,(Some(6),9))
rdd.leftOuterJoin(other): joins the two RDDs, keeping every key present in the first (left) RDD (left outer join)
scala> val result = rdd.leftOuterJoin(other)
result: org.apache.spark.rdd.RDD[(Int, (Int, Option[Int]))] = MapPartitionsRDD[18] at leftOuterJoin at <console>:31
scala> result.foreach(println)
(3,(4,Some(9)))
(3,(6,Some(9)))
(1,(2,None))
rdd.cogroup(other): groups together data from both RDDs that share the same key
scala> val result = rdd.cogroup(other)
result: org.apache.spark.rdd.RDD[(Int, (Iterable[Int], Iterable[Int]))] = MapPartitionsRDD[20] at cogroup at <console>:31
scala> result.foreach(println)
(1,(CompactBuffer(2),CompactBuffer()))
(3,(CompactBuffer(4, 6),CompactBuffer(9)))
4: Filter operations
Here, assume rdd = {(1,2),(3,4),(3,6)}.
Filter on the value, with no condition on the key:
scala> val result = rdd.filter{case(x,y)=>y%3==0}
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[22] at filter at <console>:29
scala> result.foreach(println)
(3,6)
scala> val result = rdd.filter{case(x,y)=>y<=4}
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[23] at filter at <console>:29
scala> result.foreach(println)
(1,2)
(3,4)
Filter on the key, with no condition on the value:
scala> val result = rdd.filter{case(x,y)=>x<3}
result: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[24] at filter at <console>:29
scala> result.foreach(println)
(1,2)
5: Aggregation operations
Use reduceByKey() and mapValues() to compute the average value for each key:
scala> val rdd = sc.parallelize(List(Tuple2("panda",0),Tuple2("pink",3),Tuple2("pirate",3),Tuple2("panda",1),Tuple2("pink",4)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[45] at parallelize at <console>:27
scala> val result = rdd.mapValues(x=>(x,1)).reduceByKey((x,y)=>(x._1+y._1,x._2+y._2))
result: org.apache.spark.rdd.RDD[(String, (Int, Int))] = ShuffledRDD[47] at reduceByKey at <console>:29
scala> result.foreach(println)
(pirate,(3,1))
(panda,(1,2))
(pink,(7,2))
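The result above still holds (sum, count) pairs rather than the averages themselves. A minimal follow-up sketch (assuming the result RDD from the transcript above) uses one more mapValues to divide the sum by the count:

// Sketch: turn (sum, count) pairs into the per-key average
val avg = result.mapValues { case (sum, count) => sum.toDouble / count }
avg.foreach(println)   // e.g. (panda,0.5), (pink,3.5), (pirate,3.0)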
scala> val rdd = sc.parallelize(List("i am thinkgamer, i love cyan")) rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[49] at parallelize at <console>:27 scala> val words = rdd.flatMap(line => line.split(" ")) words: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[50] at flatMap at <console>:29 scala> val result = words.map(x=>(x,1)).reduceByKey((x,y) => x+y) result: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[52] at reduceByKey at <console>:31 scala> result.foreach(println) (cyan,1) (love,1) (thinkgamer,,1) (am,1) (i,2)实现经典的分布式单词计数问题(使用countByValue更快的实现单词计数)
scala> val rdd = sc.parallelize(List("i am thinkgamer, i love cyan")) rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:27 scala> val result = rdd.flatMap(x=>x.split(" ")).countByValue() result: scala.collection.Map[String,Long] = Map(am -> 1, thinkgamer, -> 1, i -> 2, love -> 1, cyan -> 1) scala> result.foreach(println) (am,1) (thinkgamer,,1) (i,2) (love,1) (cyan,1) scala>
Tuning the level of parallelism
Every RDD has a fixed number of partitions, and the number of partitions determines the degree of parallelism when operations run on the RDD. When performing aggregation or grouping operations, you can ask Spark to use a specific number of partitions. Spark always tries to infer a sensible default based on the size of your cluster, but sometimes you will want to tune the level of parallelism for better performance.
Customizing the parallelism of reduceByKey() in Scala:
val data = Seq(("a",3),("b",4),("c",5))
sc.parallelize(data).reduceByKey((x,y)=>x+y)     // default parallelism
sc.parallelize(data).reduceByKey((x,y)=>x+y,10)  // custom parallelism
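Outside of aggregations, you can also inspect or change an RDD's partition count directly. A brief sketch (the variable names are just for illustration), using the standard RDD methods partitions, coalesce() and repartition():

// Sketch: check and adjust the number of partitions
val reduced = sc.parallelize(data).reduceByKey((x,y)=>x+y, 10)
reduced.partitions.size             // how many partitions the RDD has (10 here)
val fewer = reduced.coalesce(5)     // shrink the partition count, avoiding a full shuffle
val more  = reduced.repartition(20) // change the partition count with a full shuffle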
6: Actions on pair RDDs
As with transformations, all the traditional actions available on base RDDs are also available on pair RDDs. In addition, pair RDDs provide some extra actions that let us take advantage of the key-value nature of the data, as shown below.
Take the key-value collection {(1,2),(3,4),(3,6)} as an example.
| Function | Description | Example | Result |
| --- | --- | --- | --- |
| countByKey() | Count the number of elements for each key | rdd.countByKey() | {(1,1),(3,2)} |
| collectAsMap() | Collect the result as a map for easy lookup (duplicate keys keep only one value) | rdd.collectAsMap() | Map{(1,2),(3,6)} |
| lookup(key) | Return all values associated with the given key | rdd.lookup(3) | [4,6] |
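A minimal sketch of these actions on the same {(1,2),(3,4),(3,6)} data (expected outputs are shown as comments; which value survives for the duplicate key in collectAsMap is not guaranteed):

// Sketch: pair RDD actions
val rdd = sc.parallelize(List((1,2),(3,4),(3,6)))
rdd.countByKey()    // Map(1 -> 1, 3 -> 2)
rdd.collectAsMap()  // Map(1 -> 2, 3 -> 6): only one value per key is kept
rdd.lookup(3)       // Seq(4, 6)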
7: Getting an RDD's partitioner
scala> val pairs = sc.parallelize(List((1,1),(2,2),(3,3)))
pairs: org.apache.spark.rdd.RDD[(Int, Int)] = ParallelCollectionRDD[9] at parallelize at <console>:27
scala> pairs.partitioner
res4: Option[org.apache.spark.Partitioner] = None
scala> val partitioned = pairs.partitionBy(new org.apache.spark.HashPartitioner(2))
partitioned: org.apache.spark.rdd.RDD[(Int, Int)] = ShuffledRDD[10] at partitionBy at <console>:29
scala> partitioned.partitioner
res5: Option[org.apache.spark.Partitioner] = Some(org.apache.spark.HashPartitioner@2)
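As a follow-up note (a common practice, not something shown in the transcript above): if the partitioned RDD will be reused by several key-based operations, it is worth persisting it so the partitioning work is not redone each time. A minimal sketch, assuming the same pairs RDD:

import org.apache.spark.HashPartitioner
import org.apache.spark.storage.StorageLevel

// Sketch: keep the partitioned RDD in memory so later key-based operations reuse its partitioning
val partitionedCached = pairs.partitionBy(new HashPartitioner(2)).persist(StorageLevel.MEMORY_ONLY)
partitionedCached.partitioner   // Some(org.apache.spark.HashPartitioner@2)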