9. Setting Up a Spark Cluster on Linux

Reference:
https://www.cnblogs.com/codingexperience/p/5333202.html

Unpack the tarball, configure the environment variables, and make them take effect:
export SPARK_HOME=/home/hadoop/spark-2.4.5-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
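
A minimal sketch of this step, assuming the downloaded tarball is named spark-2.4.5-bin-hadoop2.7.tgz and the exports go into ~/.bashrc:

tar -zxvf spark-2.4.5-bin-hadoop2.7.tgz -C /home/hadoop/   # unpack
vi ~/.bashrc              # append the two export lines above
source ~/.bashrc          # make them take effect in the current shell
spark-submit --version    # sanity check: should report 2.4.5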

Edit the configuration files (in $SPARK_HOME/conf):
cp slaves.template slaves
vi slaves
storm-01
storm-02
storm-03
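
Note that start-all.sh (used below) reaches every host listed in slaves over SSH, so passwordless login from the master to each worker is assumed to be set up already. A quick check:

ssh storm-02 hostname     # should print storm-02 without a password prompt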

cp spark-env.sh.template spark-env.sh
vi spark-env.sh
Append:
export JAVA_HOME=/home/hadoop/jdk1.8.0_131
export SCALA_HOME=/home/hadoop/scala-2.11.8
export HADOOP_HOME=/home/hadoop/hadoop-2.7.7
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.7.7/etc/hadoop
export SPARK_MASTER_HOST=storm-01
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/home/hadoop/spark-2.4.5-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/home/hadoop/hadoop-2.7.7/bin/hadoop classpath)

For SPARK_DIST_CLASSPATH, see:
https://spark.apache.org/docs/latest/hadoop-provided.html#apache-hadoop
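
To see what the $(...) substitution above expands to, run the command on its own; it prints the colon-separated list of Hadoop jar and configuration paths that Spark will put on its classpath:

/home/hadoop/hadoop-2.7.7/bin/hadoop classpath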

Distribute to the other machines:
-q: do not show the transfer progress meter.
-r: copy the directory recursively.
scp -qr spark-2.4.5-bin-hadoop2.7/ hadoop@storm-03:
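
Every worker needs the same copy, so presumably storm-02 gets one as well, and the environment-variable changes to ~/.bashrc have to be repeated on each machine:

scp -qr spark-2.4.5-bin-hadoop2.7/ hadoop@storm-02: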

Start Spark on the master node:
/home/hadoop/spark-2.4.5-bin-hadoop2.7/sbin/start-all.sh
Stop:
/home/hadoop/spark-2.4.5-bin-hadoop2.7/sbin/stop-all.sh
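
A quick way to confirm the daemons came up is jps on each node: the master should show a Master process and every slave a Worker process.

jps    # storm-01: Master; storm-02/03: Worker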

Access the web UI:
http://storm-01:8080/
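
A headless reachability check from the shell, assuming curl is installed (8080 is the standalone master's default web UI port):

curl -s -o /dev/null -w "%{http_code}\n" http://storm-01:8080/   # expect 200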

Test:
Run spark-shell (add --master spark://storm-01:7077 to attach it to the cluster; without that flag the shell runs in local mode) and do a word count:
sc.textFile("hdfs://storm-01:9000/test/hankang/20200609/README.txt").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect
sc.textFile("hdfs://storm-01:9000/test/hankang/20200609/test.txt").flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect
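
To exercise the whole cluster outside the shell, the examples jar bundled with the distribution can be submitted to the standalone master (7077 is assumed to be the default master port; the jar name matches this distribution's Scala 2.11 build):

spark-submit \
  --master spark://storm-01:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.4.5.jar 100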
