"Winning in the Era of Cloud Computing and Big Data"
Spark Asia Pacific Research Institute 100-Session Public Lecture Series [Session 15: Q&A Highlights]
Q1: What is the relationship between AppClient, the Worker, and the Master?

In Standalone mode, AppClient is the application's representative on the client machine when SparkContext.runJob is invoked; it carries out duties such as registerApplication.
Once the application has been registered, the Master sends a message to the client via Akka to start the Driver.
The Driver then manages Tasks and controls the Executors on the Workers so that they work together.
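The flow above is what happens behind an ordinary standalone submission. A minimal sketch, assuming a hypothetical application file `my_app.py` and a placeholder Master host; the driver-side AppClient registers the application with the Master at the given URL, which then has Workers launch Executors for it:

```shell
# Submit an application to a standalone Master (host/port are placeholders).
# In client deploy mode the Driver runs on this machine, and its AppClient
# performs the registerApplication handshake with the Master.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode client \
  my_app.py
```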
Q2: How big is the difference between Spark's shuffle and Hadoop's shuffle?

Spark's shuffle is a shuffle in a fairly strict sense: within the Lineage of RDD dependencies, the contents of each partition of a parent RDD are handed to multiple partitions of the child RDD.
In Hadoop, shuffle is a comparatively loose concept: once the Mapper phase finishes, handing the data over to the Reducers constitutes the shuffle, and Shuffle is the first of the Reducer's three phases.
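The "each parent partition feeds multiple child partitions" behavior can be sketched without Spark at all. A minimal, library-free illustration (function name and data are made up for the example): records are routed by key hash, so every parent partition contributes elements to several child partitions, which is exactly the wide dependency a Spark shuffle records in the Lineage.

```python
def shuffle(parent_partitions, num_child_partitions):
    """Route every (key, value) record to a child partition by key hash,
    so each parent partition feeds multiple children (a wide dependency)."""
    children = [[] for _ in range(num_child_partitions)]
    for partition in parent_partitions:
        for key, value in partition:
            children[hash(key) % num_child_partitions].append((key, value))
    return children

# Two parent partitions; key "a" appears in both of them.
parents = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
children = shuffle(parents, 2)
# After the shuffle, all records sharing a key sit in the same child partition.
```

Note that which child a key lands in depends on the hash function; the invariant is only that equal keys always meet in the same child partition.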
Q3: How is HA handled in Spark?
In Standalone mode, Worker nodes are automatically HA; for HA of the Master itself, ZooKeeper is generally used. As the Spark documentation puts it:
Utilizing ZooKeeper to provide leader election and some state storage, you can launch multiple Masters in your cluster connected to the same ZooKeeper instance. One will be elected "leader" and the others will remain in standby mode. If the current leader dies, another Master will be elected, recover the old Master's state, and then resume scheduling. The entire recovery process (from the time the first leader goes down) should take between 1 and 2 minutes. Note that this delay only affects scheduling new applications; applications that were already running during Master failover are unaffected.
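Concretely, this recovery mode is switched on through the standalone daemons' Java options. A minimal `spark-env.sh` sketch, assuming placeholder ZooKeeper hosts `zk1`–`zk3`; every Master in the cluster should carry the same settings:

```shell
# spark-env.sh fragment enabling ZooKeeper-based Master HA (standalone mode).
# The ZooKeeper ensemble addresses and znode directory are placeholders.
SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```

Applications then list every Master in their master URL (e.g. `spark://host1:7077,host2:7077`) so they can fail over to whichever Master is the current leader.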
In the YARN and Mesos modes, the ResourceManager likewise generally relies on ZooKeeper for HA.