Sometimes you need to run commands on a remote machine, and logging in by hand every time is tedious, so running them from a script is much more convenient. The following shows how to execute remote commands from a shell script.
1. First write the script to be run, run-command.sh, and make it executable: chmod +x run-command.sh
2. Put the script into the corresponding user's home directory on the remote server.
3. On the local machine, run: ssh remote_user@remote_ip "source /etc/profile;~/run-command.sh"
4. Sourcing /etc/profile first means the script will not fail to find its environment variables, since a non-interactive SSH session does not load them by default.
5. A small demo (taken from the web); a sketch with a per-host status check follows after it:
#!/bin/bash
# variable definitions
ip_array=("192.168.1.1" "192.168.1.2" "192.168.1.3")
user="test1"
remote_cmd="/home/test/1.sh"   # script on the remote servers, executed from here over ssh
for ip in "${ip_array[@]}"
do
if [ $ip = "192.168.1.1" ]; then
port="7777"
else
port="22"
fi
ssh -t -p $port $user@$ip "$remote_cmd"
done
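Building on the demo, here is a minimal sketch that combines steps 1-3 with a simple per-host status check, so you can see which host failed. The host list, user and script path are placeholder values, not taken from the demo above:
#!/bin/bash
# distribute a local script to several hosts and run it remotely
hosts=("192.168.1.1" "192.168.1.2")   # placeholder host list
user="test1"                          # placeholder user
local_script="./run-command.sh"       # local copy of the script
for host in "${hosts[@]}"
do
    # step 2: copy the script into the remote user's home directory
    scp "$local_script" "$user@$host:"
    # step 3: run it with the remote environment loaded
    ssh "$user@$host" "source /etc/profile;~/run-command.sh"
    status=$?
    if [ $status -eq 0 ]; then
        echo "$host: ok"
    else
        echo "$host: failed with exit code $status"
    fi
done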
Attached is a script for starting or stopping a Hadoop cluster; pass an argument at run time to choose start or stop:
sh opt-hadoop-all.sh start
The contents of opt-hadoop-all.sh are as follows:
# command parameter: start or stop
command_input=$1
##---------- opt zookeeper -------------------------------
ssh hadoop@- "source /etc/profile;zkServer.sh $command_input"
ssh hadoop@- "source /etc/profile;zkServer.sh $command_input"
ssh hadoop@- "source /etc/profile;zkServer.sh $command_input"
ssh hadoop@- "source /etc/profile;zkServer.sh $command_input"
ssh hadoop@- "source /etc/profile;zkServer.sh $command_input"
##---------- opt zookeeper end --------------------------
##---------- opt hdfs-all -----------------------------------
master_namenode_ip="1421-0002"
ssh hadoop@$master_namenode_ip "source /etc/profile;/home/hadoop/software/cloud/hadoop-2.6.0/sbin/$command_input-all.sh"
##---------- opt hdfs-all end -----------------------------
##---------- opt yarn-all ----------------------------------
primary_rm_ip="1423-0001"
standby_rm_ip="1423-0002"
historyjob_ip="1423-0003"
ssh hadoop@$primary_rm_ip "source /etc/profile;$command_input-yarn.sh"
ssh hadoop@$standby_rm_ip "source /etc/profile;yarn-daemon.sh $command_input resourcemanager"
ssh hadoop@$historyjob_ip "source /etc/profile;mr-jobhistory-daemon.sh $command_input historyserver"
##---------- opt yarn-all end -----------------------------
##---------- opt httpfs --------------------------------------
httpfs_ip="1421-0002"
ssh hadoop@$httpfs_ip "source /etc/profile;httpfs.sh $command_input"
##---------- opt httpfs end -----------------------------
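The script above trusts whatever is passed as the first argument. A small guard like the following (a sketch, not part of the original script) rejects anything other than start or stop before any remote command is issued:
# validate the start/stop argument before touching any remote host
command_input=$1
case "$command_input" in
    start|stop) ;;
    *) echo "Usage: sh opt-hadoop-all.sh start|stop"; exit 1 ;;
esac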