Environment: Spark on a YARN cluster
Task: read the three input files and merge their contents into the format: student ID, name, Big Data score, Hadoop score, total score, average score
1. Start the cluster, leave safe mode, and enter spark-shell
[zkpk@master ~]$ start-dfs.sh
[zkpk@master ~]$ start-yarn.sh
[zkpk@master ~]$ xcall.sh    # site-specific helper script (used here to check the daemons on every node)
[zkpk@master ~]$ hdfs dfsadmin -safemode leave
[zkpk@master ~]$ cd spark
[zkpk@master spark]$ bin/spark-shell    # pass --master yarn to attach to the YARN cluster; interactive shells always use the client deploy mode
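Before moving on, it helps to confirm that the daemons are up and safe mode is really off (a quick check; the jps listing varies with the exact setup):
[zkpk@master ~]$ jps
[zkpk@master ~]$ hdfs dfsadmin -safemode get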
2. Prepare the files and upload them to HDFS
[zkpk@master ~]$ hdfs dfs -put /home/zkpk/student1.txt /user/zkpk
Upload syntax: hdfs dfs -put <local source path> <HDFS destination path>
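All three inputs must be on HDFS before the shell can read them; the other two files are uploaded the same way (filenames taken from the reads in step 3, assuming they also sit in /home/zkpk):
[zkpk@master ~]$ hdfs dfs -put /home/zkpk/result_bigdataPaltform1.txt /user/zkpk
[zkpk@master ~]$ hdfs dfs -put /home/zkpk/result_hadoopTraining1.txt /user/zkpk
[zkpk@master ~]$ hdfs dfs -ls /user/zkpk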
3. Create the RDDs (read the files)
scala> val student = sc.textFile("./student1.txt")
scala> val bigdata = sc.textFile("./result_bigdataPaltform1.txt")
scala> val hadoop = sc.textFile("./result_hadoopTraining1.txt")
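The relative paths resolve against the HDFS home directory of the current user (/user/zkpk here), which is where the files were uploaded; a quick sanity check:
scala> student.take(3).foreach(println)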
4. Split each line of the RDDs with map
Extract the student ID and name from student1.txt, and the student ID and score from the other two files:
scala> val m_student = student.map{x=>val line=x.split("\t");(line(0),line(1))}
scala> val m_hadoop = hadoop.map{x=>val line=x.split("\t");(line(0),line(2))}
scala> val m_bigdata = bigdata.map{x=>val line=x.split("\t");(line(0),line(2))}
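If any line might lack the expected number of tab-separated fields, a defensive variant can drop short rows first (a sketch; m_bigdata2 is a name introduced here, and the column count assumes the three-column layout implied by line(2) above):
scala> val m_bigdata2 = bigdata.map(_.split("\t")).filter(_.length >= 3).map(a => (a(0), a(2)))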
5. Join the three pair RDDs, then flatten the nested pairs into a single tuple
To get the fields in the required order (ID, name, Big Data score, Hadoop score), join m_bigdata before m_hadoop:
scala> val result = m_student.join(m_bigdata)
scala> val result1 = result.join(m_hadoop)
scala> val result2 = result1.map(x=>(x._1,x._2._1._1,x._2._1._2,x._2._2))
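The positional accessors are easy to misread; an equivalent flatten using a case pattern makes the nested structure (id, ((name, bigdataScore), hadoopScore)) explicit (result2b is a name introduced here for illustration):
scala> val result2b = result1.map { case (id, ((name, bd), hd)) => (id, name, bd, hd) }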
Result: each element of result2 now has the shape (id, name, bigdataScore, hadoopScore); note the two scores are still strings at this point.
6. Add the total and average scores
scala> val result3 = result2.map(x=>(x._1,x._2,x._3.toInt,x._4.toInt,(x._3.toInt+x._4.toInt)))
scala> val result4 = result3.map(x=>(x._1,x._2,x._3,x._4,x._5,(x._5.toDouble/2)))
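The two maps can also be fused into a single pass with a case pattern (a minimal equivalent sketch; resultAll is a name introduced here):
scala> val resultAll = result2.map { case (id, name, bd, hd) => val t = bd.toInt + hd.toInt; (id, name, bd.toInt, hd.toInt, t, t / 2.0) }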
7. Save the RDD to HDFS and view the first five rows
scala> result4.saveAsTextFile("/user/zkpk/output.txt")
scala> result4.take(5)
Note: saveAsTextFile returns Unit and writes a directory of part files (output.txt here is a directory, despite the name), so take(5) must be called on the RDD itself, not chained after the save.
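To inspect the saved data outside the shell, the part files can be read straight from HDFS (a quick sketch; the part-file names are generated by Spark):
[zkpk@master ~]$ hdfs dfs -ls /user/zkpk/output.txt
[zkpk@master ~]$ hdfs dfs -cat /user/zkpk/output.txt/part-* | head -n 5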