Big Data Development Notes

Big Data Development Components

HDFS

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

http://hadoop102:9870/explorer.html#/
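
With HDFS running, basic file operations can be sanity-checked from the shell. The directory and file names below are arbitrary (Hadoop tarballs ship a README.txt at the install root):

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /input

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -put README.txt /input

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /input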

Yarn

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/stop-yarn.sh

[atguigu@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh

http://hadoop103:8088/cluster
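
Submitting a sample MapReduce job verifies that Yarn can schedule work; the job should then appear in the 8088 UI above (the jar path assumes the stock Hadoop 3.1.3 examples jar):

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10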

Zookeeper

[atguigu@hadoop102 zookeeper-3.5.7]$ zk.sh start
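
zk.sh is a custom cluster script, not part of Zookeeper itself. A minimal sketch, assuming Zookeeper is installed at /opt/module/zookeeper-3.5.7 on hadoop102-104, fans zkServer.sh out over ssh:

#!/bin/bash
# Minimal sketch of a zk.sh-style wrapper (assumed install path);
# non-interactive ssh sessions may additionally need JAVA_HOME exported.
for host in hadoop102 hadoop103 hadoop104
do
    echo "---------- $host ----------"
    ssh $host "/opt/module/zookeeper-3.5.7/bin/zkServer.sh $1"
done

Usage follows zkServer.sh itself: zk.sh start, zk.sh status, zk.sh stop.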

HA

[atguigu@hadoop102 ~]$ stop-dfs.sh

[atguigu@hadoop102 ~]$ zk.sh start

[atguigu@hadoop102 ~]$ start-dfs.sh

[atguigu@hadoop102 ~]$ start-yarn.sh

http://hadoop102:9870/explorer.html#/

http://hadoop104:8088/cluster

http://hadoop102:19888/jobhistory
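
Once everything is up, the active/standby state of each NameNode can be checked with haadmin (nn1/nn2 stand for whatever service IDs hdfs-site.xml defines):

[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn1

[atguigu@hadoop102 ~]$ hdfs haadmin -getServiceState nn2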

Hive

[atguigu@hadoop102 opt]$ mysql -uroot -p

[atguigu@hadoop102 hive]$ bin/hive

hive> show tables;
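
A quick smoke test inside the CLI (the table name here is arbitrary):

hive> create table test(id int);

hive> insert into test values(1);

hive> select * from test;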

[atguigu@hadoop102 hive]$ bin/hive --service hiveserver2

[atguigu@hadoop102 hive]$ bin/beeline -u jdbc:hive2://hadoop102:10000 -n atguigu

[atguigu@hadoop102 hive]$ nohup hive --service metastore 2>&1 &

[atguigu@hadoop102 hive]$ nohup hiveserver2 2>&1 &

[atguigu@hadoop102 hive]$ hiveservices.sh start
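
hiveservices.sh is a custom wrapper script, not shipped with Hive. A minimal sketch that just daemonizes the two services started manually above (the actual course script is more elaborate and also handles status checks):

#!/bin/bash
# Minimal sketch only (assumed behavior of the custom hiveservices.sh)
case $1 in
"start")
    nohup hive --service metastore >/dev/null 2>&1 &
    nohup hive --service hiveserver2 >/dev/null 2>&1 &
;;
"stop")
    # Crude: kill the two Hive server JVMs by class name
    pkill -f HiveMetaStore
    pkill -f HiveServer2
;;
esac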

Connecting to the Hive database from IDEA
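
In IDEA's Database tool window, add an Apache Hive data source using the same connection details as the beeline command above: URL jdbc:hive2://hadoop102:10000, user atguigu. In this setup no password is required, matching beeline's -n atguigu with no -p.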

Flume

[atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/nc-flume-log.conf -Dflume.root.logger=INFO,console

Equivalently, with the short option forms (-c/-n/-f stand for --conf/--name/--conf-file):

[atguigu@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f conf/nc-flume-log.conf -Dflume.root.logger=INFO,console

[atguigu@hadoop102 ~]$ nc localhost 44444
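
The nc-flume-log.conf referenced above is not reproduced in these notes; a minimal version, following the standard netcat-to-logger example from the Flume user guide, would be:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1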

[atguigu@hadoop102 conf]$ vim taildir-flume-hdfs.conf

Add the following content:

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = TAILDIR

a1.sources.r1.filegroups = f1 f2

# Must point to individual files; a regex can be used to match multiple files

a1.sources.r1.filegroups.f1 = /opt/module/flume/files1/.*file.*

a1.sources.r1.filegroups.f2 = /opt/module/flume/files2/.*log.*

# Location of the position file used to resume tailing where it left off; if left unset, a default location still provides resume support

a1.sources.r1.positionFile = /opt/module/flume/taildir_position.json

# Describe the sink

a1.sinks.k1.type = hdfs

a1.sinks.k1.hdfs.path = hdfs://hadoop102:8020/flume/%Y%m%d/%H

# Prefix for uploaded files

a1.sinks.k1.hdfs.filePrefix = log-

# Whether to use the local timestamp

a1.sinks.k1.hdfs.useLocalTimeStamp = true

# Number of events to accumulate before flushing to HDFS

a1.sinks.k1.hdfs.batchSize = 100

# File type; compression is supported

a1.sinks.k1.hdfs.fileType = DataStream

# How often (in seconds) to roll a new file

a1.sinks.k1.hdfs.rollInterval = 30

# Roll the file when it reaches roughly 128 MB

a1.sinks.k1.hdfs.rollSize = 134217700

# Rolling is independent of the number of events

a1.sinks.k1.hdfs.rollCount = 0

# Use a channel which buffers events in memory

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1

[atguigu@hadoop102 flume]$ mkdir files1

[atguigu@hadoop102 flume]$ mkdir files2

[atguigu@hadoop102 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file conf/taildir-flume-hdfs.conf

[atguigu@hadoop102 files1]$ echo hello >> file1.txt

[atguigu@hadoop102 files1]$ echo atguigu >> file2.txt

Check the data on HDFS.
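
For example, recursively list the sink path configured above:

[atguigu@hadoop102 flume]$ hadoop fs -ls -R /flume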

Kafka

Start the Zookeeper cluster first, then start Kafka.

[atguigu@hadoop102 kafka]$ zk.sh start

[atguigu@hadoop102 kafka]$ kf.sh start
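
Like zk.sh, kf.sh is a custom cluster script. A minimal sketch, under the assumption that Kafka is installed at /opt/module/kafka on each broker:

#!/bin/bash
# Minimal sketch of a kf.sh-style wrapper (assumed install path)
case $1 in
"start")
    for host in hadoop102 hadoop103 hadoop104
    do
        ssh $host "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
    done
;;
"stop")
    for host in hadoop102 hadoop103 hadoop104
    do
        ssh $host "/opt/module/kafka/bin/kafka-server-stop.sh"
    done
;;
esac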

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --list

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --create --replication-factor 3 --partitions 1 --topic first

[atguigu@hadoop102 kafka]$ bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic first
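
Messages typed into the producer can be read back with a console consumer in another terminal:

[atguigu@hadoop102 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --from-beginning --topic first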

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --describe --topic first

[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --alter --topic first --partitions 6

HBase

[atguigu@hadoop102 hbase]$ bin/start-hbase.sh

[atguigu@hadoop102 hbase]$ bin/stop-hbase.sh

[atguigu@hadoop102 hbase]$ bin/hbase shell

hbase(main):002:0> create 'student','info'
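
Basic operations against the new table (row key and values here are arbitrary):

hbase(main):003:0> put 'student','1001','info:name','zhangsan'

hbase(main):004:0> scan 'student'

hbase(main):005:0> get 'student','1001'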

http://hadoop102:16010

http://hadoop103:16010

http://hadoop104:16010

Flume Monitoring with Ganglia

[atguigu@hadoop102 flume]$ sudo service httpd start

[atguigu@hadoop102 flume]$ sudo service gmetad start

[atguigu@hadoop102 flume]$ sudo service gmond start

http://hadoop102/ganglia/
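
For an agent's metrics to actually appear in Ganglia, Flume must be started with its Ganglia monitoring properties (8649 is gmond's default port; the config file here is just the netcat example from earlier, substitute whichever agent you want to watch):

[atguigu@hadoop102 flume]$ bin/flume-ng agent -c conf/ -n a1 -f conf/nc-flume-log.conf -Dflume.root.logger=INFO,console -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=hadoop102:8649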

Kafka Monitoring with kafka-eagle

[atguigu@hadoop102 eagle]$ bin/ke.sh start

Note that Zookeeper and Kafka must already be running before starting kafka-eagle.

http://hadoop102:8048/ke/
