Problem:
用 spark-submit --master yarn --deploy-mode cluster --driver-memory 2G --num-executors 6 --executor-memory 2G ~~~
When the job was submitted this way, the task on the last executor ran for more than 160 s, the executor timed out and exited, and the task had to be re-executed, making the job take far longer than it should. Details below:
// :: WARN spark.HeartbeatReceiver: Removing executor with no recent heartbeats: ms exceeds timeout ms
// :: ERROR cluster.YarnClusterScheduler: Lost executor on slave10: Executor heartbeat timed out after ms
// :: WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID , slave10): ExecutorLostFailure (executor exited caused by one of the running tasks) Reason: Executor heartbeat timed out after ms
// :: INFO scheduler.DAGScheduler: Executor lost: (epoch )
// :: INFO cluster.YarnClusterSchedulerBackend: Requesting to kill executor(s)
// :: INFO scheduler.TaskSetManager: Starting task 0.1 in stage 0.0 (TID , slave06, partition ,RACK_LOCAL, bytes)
// :: INFO storage.BlockManagerMasterEndpoint: Trying to remove executor from BlockManagerMaster.
// :: INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(, slave10, )
// :: INFO storage.BlockManagerMaster: Removed successfully in removeExecutor
// :: INFO scheduler.DAGScheduler: Host added was in lost list earlier: slave10
// :: INFO yarn.ApplicationMaster$AMEndpoint: Driver requested to kill executor(s) .
// :: INFO scheduler.TaskSetManager: Finished task 0.1 in stage 0.0 (TID ) in ms on slave06 (/)
// :: INFO scheduler.DAGScheduler: ResultStage (saveAsNewAPIHadoopFile at DataFrameFunctions.scala:) finished in 162.495 s
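For context on the removal seen above: each executor sends periodic heartbeats to the driver, and HeartbeatReceiver drops any executor whose most recent heartbeat is older than the network timeout, after which the TaskSetManager reschedules its tasks (task 0.1 on slave06 above). Below is a minimal sketch, not the author's code, of the two settings involved; the values shown are Spark's documented defaults, and raising them would only mask a stalled executor rather than fix it:

import org.apache.spark.SparkConf

object HeartbeatSettings {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      // How often each executor reports a heartbeat to the driver (default 10s).
      .set("spark.executor.heartbeatInterval", "10s")
      // How stale a heartbeat may get before the driver declares the executor
      // lost (default 120s), which is the timeout that fired in the log above.
      .set("spark.network.timeout", "120s")
    println(conf.get("spark.network.timeout"))
  }
}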
My initial guess: the final stage does the heaviest computation, but Spark's off-heap allocation (the YARN container memory overhead) was configured too low. The relevant default is:
spark.yarn.executor.memoryOverhead (default: executorMemory * 0.10, with a minimum of 384 MB)
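With --executor-memory 2G, that formula yields only 384 MB of overhead per executor. A quick sketch of the arithmetic (my own illustration of the documented formula, not code from the job):

object DefaultOverhead {
  // Spark's documented default: max(executorMemory * 0.10, 384), in MB.
  def defaultOverheadMb(executorMemoryMb: Int): Int =
    math.max((executorMemoryMb * 0.10).toInt, 384)

  def main(args: Array[String]): Unit = {
    val executorMemoryMb = 2048 // --executor-memory 2G
    val overheadMb = defaultOverheadMb(executorMemoryMb)
    println(s"default overhead:  $overheadMb MB")                      // 384 MB
    println(s"container request: ${executorMemoryMb + overheadMb} MB") // 2432 MB
  }
}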
So I increased it, as follows:

spark-submit --master yarn --deploy-mode cluster --driver-memory 2G --num-executors 6 --executor-memory 2G --conf spark.yarn.executor.memoryOverhead=512 --conf spark.yarn.driver.memoryOverhead=512

After testing with this configuration, the problem no longer occurred!
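To confirm the override actually reached the application, the effective value can be printed from the driver. A hypothetical check of my own (CheckOverhead is not part of the original job; it assumes submission via spark-submit, which supplies the master):

import org.apache.spark.{SparkConf, SparkContext}

object CheckOverhead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("check-overhead"))
    // Prints Some(512) when submitted with the --conf flag above;
    // None means the 10% / 384 MB default is in force.
    println(sc.getConf.getOption("spark.yarn.executor.memoryOverhead"))
    sc.stop()
  }
}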