A few days ago I deployed a FastDFS file system for my company. From what I looked up, FastDFS is a file system similar to MinIO, though MinIO is the more capable of the two. Even though many setups have since migrated to the more powerful MinIO, there are still scenarios where FastDFS is needed, so I'm writing the steps down here. Without further ado, here is the procedure.
1. Deploy the tracker service
docker run -itd --network=host --name tracker -v /storage/FastDFS/tracker:/var/fdfs -v /storage/FastDFS/conf/tracker.conf:/etc/fdfs/tracker.conf:ro delron/fastdfs tracker
This uses the host network mode, so before running it be sure to check for port conflicts on the host; you can of course change the port instead. The port is configured in the tracker.conf file I mounted above: edit the value after the port= field. I changed it to 19999 (a quick way to verify the port is sketched right after the config dump below). The tracker.conf configuration is as follows:
# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# the tracker server port
port=19999

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store data and log files
base_path=/var/fdfs

# max concurrent connections this server supported
max_connections=256

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
work_threads=4

# min buff size
# default value 8KB
min_buff_size = 8KB

# max buff size
# default value 128KB
max_buff_size = 128KB

# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group2

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path=0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in
# a group <= reserved_storage_space, no file can be uploaded to this group
reserved_storage_space = 10%

# standard log level as syslog, case insensitive
log_level=info

# unix group name to run this program, empty means the group of current user
run_by_group=

# unix username to run this program, empty means current user
run_by_user=

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10

# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create
# the trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# since V5.01
trunk_compress_binlog_min_interval = 0

# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
http.server_port=8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only,
#         do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html
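Because the container shares the host's network namespace, it is worth confirming that 19999 is free before the run and that the tracker is actually listening afterwards. A minimal check, assuming ss is available on the host (netstat -lntp works just as well):

# before starting: nothing should already own the tracker port
ss -lntp | grep 19999
# after starting: the tracker should now show up as listening, and its log should be clean
ss -lntp | grep 19999
docker logs tracker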
2. Deploy the storage service
docker run -itd --network=host --name storage -e TRACKER_SERVER=127.0.0.1:19999 -v /storage/FastDFS/storage:/var/fdfs -v /storage/FastDFS/nginx/conf:/usr/local/nginx/conf -e GROUP_NAME=group1 delron/fastdfs storage
Note ⚠️: the IP address here must be changed to your host machine's IP, and the port is the one you just configured for the tracker. We also map the nginx configuration directory out of the container so that changing the nginx port later is more convenient. I have put my nginx config files on Gitee; download them from there if you need them. Alternatively, you can docker run a container first, copy the configuration out of it, and then start it again with the mount in place.
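If you would rather pull the image's default nginx config than download mine, something like the following should do it; the container name fdfs-tmp is just a throwaway placeholder, and /usr/local/nginx/conf is the same path we mount in the command above:

# create (but never start) a disposable container just to copy files out of the image
# the trailing "tracker" argument only satisfies the image's entrypoint; it never runs
docker create --name fdfs-tmp delron/fastdfs tracker
mkdir -p /storage/FastDFS/nginx
docker cp fdfs-tmp:/usr/local/nginx/conf /storage/FastDFS/nginx/
docker rm fdfs-tmp

Once both containers are up, FastDFS's own monitor tool can confirm that the storage node has registered with the tracker; I'm assuming it sits in /usr/bin like the other fdfs binaries in this image:

docker exec -it storage /usr/bin/fdfs_monitor /etc/fdfs/client.conf
# the group listing should show your host IP with status ACTIVE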
3. Problems
Problem 1
If starting or restarting the storage container fails, delete the fdfs_storaged.pid file under the /var/fdfs/storage/data directory and then run storage again. Note that this pid file is very important!!
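Given the volume mapping from step 2 (/storage/FastDFS/storage mounted at /var/fdfs), the stale pid file normally ends up at the host path below; adjust it if your mapping differs:

# remove the pid file left behind by an unclean shutdown, then restart storage
rm -f /storage/FastDFS/storage/data/fdfs_storaged.pid
docker restart storage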
Problem 2
tail: cannot open '/var/fdfs/logs/storaged.log' for reading: No such file or directory
If you hit this error, manually create a storaged.log file under the mounted directory. Of course, check Problem 1 first; if there is no .pid file there, then create this log file by hand.
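Assuming the same host mapping as in step 2, creating the missing log file looks roughly like this (adjust the path to wherever you mounted /var/fdfs):

# create the logs directory and an empty log file on the host side of the mount
mkdir -p /storage/FastDFS/storage/logs
touch /storage/FastDFS/storage/logs/storaged.log
docker restart storage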
4. Testing
At this point the deployment is basically done. Since FastDFS does not come with a UI the way MinIO does, we can only test whether it works from the command line.
# first, enter the storage container
docker exec -it storage bash
# run the upload command; you need to prepare the test.png yourself
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf test.png
# the command prints the file ID of the uploaded file:
group1/M00/00/00/CgACD1z7SEuAXrIqAA1eBLGVLow043.png
Then open http://ip:8888/group1/M00/00/00/CgACD1z7SEuAXrIqAA1eBLGVLow043.png in a browser (replace ip with your host's IP) and you should see the image.
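If you prefer to verify from the shell, a quick curl against the storage node's built-in nginx should return 200; substitute your host IP and the file ID that fdfs_upload_file returned for you. The delete command is the standard FastDFS client tool and can be used to clean up the test file afterwards (run it inside the storage container):

# expect HTTP/1.1 200 OK
curl -I http://ip:8888/group1/M00/00/00/CgACD1z7SEuAXrIqAA1eBLGVLow043.png
# optional cleanup, from inside the storage container
/usr/bin/fdfs_delete_file /etc/fdfs/client.conf group1/M00/00/00/CgACD1z7SEuAXrIqAA1eBLGVLow043.png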
5. Reference links
For more, follow the WeChat official account 「自在拉基」.