Spark Monitoring

1. Configure history server monitoring:

$ vi spark-defaults.conf
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://hadoop000:8020/g6_directory

$ vi spark-env.sh
SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://hadoop000:8020/g6_directory"

# Create the event log directory on HDFS
$ hadoop fs -mkdir hdfs://hadoop000:8020/g6_directory

2. Start the history server daemon

$ ./sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /home/hadoop/apps/spark-2.4.2-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.deploy.history.HistoryServer-1-hadoop000.out

Check the startup status via the log:
$ tail -200f /home/hadoop/apps/spark-2.4.2-bin-2.6.0-cdh5.7.0/logs/spark-hadoop-org.apache.spark.deploy.history.HistoryServer-1-hadoop000.out

19/06/19 19:38:10 INFO Utils: Successfully started service on port 18080.
19/06/19 19:38:11 INFO HistoryServer: Bound HistoryServer to 0.0.0.0, and started at http://hadoop000:18080

3. View the web UI

(Screenshot: the History Server web UI at http://hadoop000:18080 listing completed applications.)

4. Using the REST API

1. Basic usage:

http://hadoop000:18080/api/v1/applications
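The `applications` endpoint returns a JSON array of application summaries. A minimal Python sketch parsing a response of that shape offline (the field names follow the Spark monitoring REST API; the values, including the application id, are illustrative):

```python
import json

# Sample payload shaped like the response of /api/v1/applications
# (field names per the Spark monitoring REST API; values are illustrative).
sample = '''
[
  {
    "id": "local-1560944971495",
    "name": "Spark shell",
    "attempts": [
      {"sparkUser": "hadoop", "completed": true, "duration": 12345}
    ]
  }
]
'''

apps = json.loads(sample)
for app in apps:
    attempt = app["attempts"][0]
    print(app["id"], app["name"], "completed:", attempt["completed"])
```

Against a live History Server, the same JSON can be fetched with `urllib.request.urlopen("http://hadoop000:18080/api/v1/applications")` and fed to `json.loads` in the same way.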

2. Further usage:

http://hadoop000:18080/api/v1/applications/local-1560944971495/jobs
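Per-application endpoints such as `jobs` return a JSON array with one entry per job. A sketch filtering such a response for succeeded jobs (again parsed offline; field names follow the REST API, job names and counts are made up):

```python
import json

# Sample payload shaped like /api/v1/applications/<app-id>/jobs
# (field names per the Spark monitoring REST API; values are illustrative).
sample = '''
[
  {"jobId": 1, "name": "collect", "status": "SUCCEEDED",
   "numTasks": 2, "numCompletedTasks": 2},
  {"jobId": 0, "name": "count", "status": "FAILED",
   "numTasks": 2, "numCompletedTasks": 1}
]
'''

jobs = json.loads(sample)
succeeded = [j["jobId"] for j in jobs if j["status"] == "SUCCEEDED"]
print("succeeded jobs:", succeeded)
```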

3. Reference from the official docs:

(Screenshot: the REST API endpoint table from the official Spark monitoring documentation.)

For more details, see: http://spark.apache.org/docs/2.2.0/monitoring.html#rest-api
