Linux Practical Cases (2): A Worked Example of When and How to Use Symbolic Links

===================================

Use case: use symbolic links to simplify switching between software versions.

Change into the working directory:

cd /opt/modules/

===================================

1. Create the symbolic links
ln -s jdk1.8.0_131 jdk1.8
ln -s spark-2.1.0-bin-hadoop2.7 spark
ln -s hadoop-2.7.3 hadoop
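
The benefit of this layout is that switching to another release only means re-pointing a link. As a minimal sketch, assuming a hypothetical newer build has been unpacked to /opt/modules/jdk1.8.0_202:

# -f overwrites the existing link, -n treats the link name itself as the target instead of following it
ln -sfn jdk1.8.0_202 jdk1.8

Everything that references /opt/modules/jdk1.8 (PATH entries, scripts, service configs) now resolves to the new release without any further edits.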

===================================
2. Remove the symbolic links
rm -rf hadoop
rm -rf spark
rm -rf jdk1.8
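
Without a trailing slash these commands delete only the links themselves; the versioned directories they point to are untouched, and plain rm (or unlink) is enough for a symlink. A quick check, reusing the spark link from this example:

rm spark                              # removes only the link
ls -d spark-2.1.0-bin-hadoop2.7       # the real directory is still there
ln -s spark-2.1.0-bin-hadoop2.7 spark # recreate the link if it is still needed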

===================================

3. Verify the result

Check that the links exist (ll is typically an alias for ls -l):

ll

[root@LexiaofeiN1 modules]# ll
total 16
drwxr-xr-x 9 elasticsearch elasticsearch 4096 Mar 23 16:32 elasticsearch5.2

lrwxrwxrwx 1 root root 12 Apr 24 15:12 hadoop -> hadoop-2.7.3
drwxr-xr-x 9 root root 4096 Apr 24 15:14 hadoop-2.7.3

lrwxrwxrwx 1 root root 25 Apr 24 15:13 jdk1.8 -> /opt/modules/jdk1.8.0_131
drwxr-xr-x 8 uucp 143 4096 Mar 15 16:35 jdk1.8.0_131

lrwxrwxrwx 1 root root 25 Apr 24 15:15 spark -> spark-2.1.0-bin-hadoop2.7
drwxr-xr-x 12 lexiaofei lexiaofei 4096 Dec 16 10:18 spark-2.1.0-bin-hadoop2.7
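
To print only the resolved target of each link, readlink can be used; expected output is shown in the comments:

readlink -f jdk1.8   # /opt/modules/jdk1.8.0_131
readlink -f hadoop   # /opt/modules/hadoop-2.7.3
readlink -f spark    # /opt/modules/spark-2.1.0-bin-hadoop2.7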

===================================

4. Example scenario

4.1 Edit the PATH settings in /etc/profile so they point at the symlinks jdk1.8 and hadoop
vi /etc/profile
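
The exact entries depend on the environment; a sketch of what might be appended to /etc/profile (the variable names JAVA_HOME and HADOOP_HOME are conventional choices, not taken from the original file):

# Point the environment at the version-independent links rather than the versioned directories
export JAVA_HOME=/opt/modules/jdk1.8
export HADOOP_HOME=/opt/modules/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin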

4.2 Apply the configuration change
source /etc/profile

4.3 Check that the change took effect

[root@LxfN1 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)

[root@LxfN1 ~]# javac -version
javac 1.8.0_131

[root@LxfN1 ~]# hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /opt/modules/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
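
Both commands resolve through the links, which can be confirmed explicitly (assuming the PATH entries sketched in 4.1; expected output shown as comments):

which java                    # /opt/modules/jdk1.8/bin/java
readlink -f "$(which java)"   # /opt/modules/jdk1.8.0_131/bin/java

When a new JDK or Hadoop release is installed later, only the link from step 1 needs to be re-pointed; /etc/profile does not have to change.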

===================================
