CHANGING HOSTNAMES IN ORACLE RAC
by Alex Gorbachev, June 11, 2007. Posted in: Technical Track. Tags: DBA Lounge, Oracle

Update: this procedure is for Linux and should work on any UNIX OS. Ron supplied how he did this on Windows in the comment below. Thanks, Ron.
Sometimes there is a desperate need to change hostnames for one or all nodes of an Oracle RAC cluster. However, this operation is not officially supported. From Metalink Note 220970.1 RAC Frequently Asked Questions:
Can I change the public hostname in my Oracle Database 10g Cluster using Oracle Clusterware?
Hostname changes are not supported in Oracle Clusterware (CRS), unless you want to perform a deletenode followed by a new addnode operation.
The hostname is used to store among other things the flag files and CRS stack will not start if hostname is changed.
One way to do it is to remove a node from the cluster, change its hostname, and then add it back to the cluster as a new node. You will need to make sure that ORACLE_HOME is also added to this node, as well as the database instance configuration.
If you are brave enough, there is another way to do this. It's not described anywhere on Metalink, but no major hacks are needed to implement it. The idea is simply to re-run the CRS configuration (including re-formatting the OCR and voting disks) and then re-create the CRS resources. Obviously, this is not an online operation, and the whole cluster is down for the duration of the rename.
I assume that we have an Oracle RAC cluster running, with database(s) running already, optionally including ASM instances.
Capture Resource Definitions
Before doing anything, we should capture resource definitions from the current CRS resources. This is an optional step, but it will simplify configuration later.
A single resource definition can be captured with the command $ORA_CRS_HOME/bin/crs_stat -p. Here is a small shell script that captures the definition of every resource and saves it into a .cap file. As you will see later, these files can be used to easily recreate the resources:
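A minimal sketch of such a capture script (the /opt/oracle/resources directory is an assumption; adjust to taste):

```shell
#!/bin/sh
# Sketch only: dump the profile of every registered CRS resource
# into a .cap file named after the resource.
CAPDIR=/opt/oracle/resources
mkdir -p $CAPDIR
for RES in `$ORA_CRS_HOME/bin/crs_stat | awk -F= '/^NAME=/ {print $2}'`
do
    $ORA_CRS_HOME/bin/crs_stat -p $RES > $CAPDIR/$RES.cap
done
```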
Here is an example list of files created:
```
# ls -l /opt/oracle/resources
total 52
-rw-r--r--  1 root root  813 Feb 27 12:06 ora.A.A1.inst.cap
-rw-r--r--  1 root root  813 Feb 27 12:06 ora.A.A2.inst.cap
-rw-r--r--  1 root root  789 Feb 27 12:06 ora.A.db.cap
-rw-r--r--  1 root root  809 Feb 27 12:06 ora.vs10a.ASM1.asm.cap
-rw-r--r--  1 root root  799 Feb 27 12:06 ora.vs10a.gsd.cap
-rw-r--r--  1 root root  832 Feb 27 12:06 ora.vs10a.LISTENER_VS10A.lsnr.cap
-rw-r--r--  1 root root  804 Feb 27 12:06 ora.vs10a.ons.cap
-rw-r--r--  1 root root  828 Feb 27 12:06 ora.vs10a.vip.cap
-rw-r--r--  1 root root  809 Feb 27 12:06 ora.vs11a.ASM2.asm.cap
-rw-r--r--  1 root root  799 Feb 27 12:06 ora.vs11a.gsd.cap
-rw-r--r--  1 root root  832 Feb 27 12:06 ora.vs11a.LISTENER_VS11A.lsnr.cap
-rw-r--r--  1 root root  804 Feb 27 12:06 ora.vs11a.ons.cap
-rw-r--r--  1 root root  828 Feb 27 12:06 ora.vs11a.vip.cap
```

Sample content from one file:
```
# cat /opt/oracle/resources/ora.A.db.cap
NAME=ora.A.db
TYPE=application
ACTION_SCRIPT=/opt/oracle/A/product/10.2.0/crs/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for the Database
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
HOSTING_MEMBERS=
OPTIONAL_RESOURCES=
PLACEMENT=balanced
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=600
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
```

Stop Clusterware
Now we can stop Oracle Clusterware on all nodes using $ORA_CRS_HOME/bin/crsctl stop crs, and then change the hostnames. Note that this will stop all databases, listeners, and other resources registered with CRS, so this is when the outage starts.
Rename Hosts
I won't discuss changing the hostname itself here; it's a straightforward SA task.
A few things to pay special attention to:
- Make sure that the aliases in /etc/hosts are amended.
- Don't forget to change the aliases for VIPs and private IPs. This is not strictly required, but you are better off following the standard naming convention (-priv and -vip for the interconnect and virtual IP respectively) unless you have a really good reason not to. Note that at this stage you should also be able to change IP addresses. I didn't try it, but it should work.
- Make sure the DNS configuration is changed by your SA as well, if your applications use DNS to resolve hostnames.
Modify $ORA_CRS_HOME/install/rootconfig
$ORA_CRS_HOME/install/rootconfig is called as part of the root.sh script run after Oracle Clusterware installation. We have to modify it so that it uses different host names.
Generally, you would simply change every occurrence of the old hostnames to the new hostnames. If you want to do that in vi, use :%s/old_node/new_node/g. Be careful not to change unrelated parts of the script that happen to match your old hostname. The variables that should be changed are:
CRS_HOST_NAME_LIST
CRS_NODE_NAME_LIST
CRS_PRIVATE_NAME_LIST
CRS_NODELIST
CRS_NODEVIPS
The latter might need modification if you also change IPs.
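The substitution can also be scripted with sed. A sketch, where vs10a/vs11a and vs10/vs11 are hypothetical old and new hostnames:

```shell
# Keep a backup of the original script, then replace the old
# hostnames with the new ones, and review the result.
cp $ORA_CRS_HOME/install/rootconfig $ORA_CRS_HOME/install/rootconfig.orig
sed -i -e 's/vs10a/vs10/g' -e 's/vs11a/vs11/g' $ORA_CRS_HOME/install/rootconfig
diff $ORA_CRS_HOME/install/rootconfig.orig $ORA_CRS_HOME/install/rootconfig
```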
At this stage, you can also change your OCR and voting disks locations. The following lines should be changed:
```
CRS_OCR_LOCATIONS={OCR path},{OCR mirror path}
CRS_VOTING_DISKS={voting disk1 path},{voting disk2 path},{voting disk3 path}
```

You can also change your cluster name via the variable CRS_CLUSTER_NAME.
Cleanup OCR and Voting Disks
You should clear the OCR and voting disks; otherwise the script will refuse to format them (well, it will unless you use a special force option, but that's more hassle). This can be done using dd. In the example below I have a mirrored OCR and 3 voting disks:
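A sketch of such a cleanup, assuming hypothetical raw device paths (two OCR mirrors and three voting disks); double-check the paths before running, since dd is destructive:

```shell
# Zero out the beginning of each OCR and voting device so the
# rootconfig script will agree to reformat them.
# The /dev/raw/raw* paths are placeholders; substitute your own.
for DEV in /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5
do
    dd if=/dev/zero of=$DEV bs=1M count=100
done
```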
“Break” Clusterware Configuration
rootconfig has some protection from idiots (as we say in Russia) built in: it checks whether Clusterware has already been configured and, if it has, it exits without doing any harm. One way to "break" the configuration and make this script run a second time is to delete the file /etc/oracle/ocr.loc. (Note that this is a Linux-specific location; other Unix variants might have a different path. On HP-UX, for example, it's something like /var/opt/oracle/ocr.loc, if I recall correctly.)
Run $ORA_CRS_HOME/install/rootconfig
If everything has gone all right, you should be able to run $ORA_CRS_HOME/install/rootconfig as the root user without any issues. If there are problems, follow the standard CRS troubleshooting procedure: check /var/log/messages, $ORA_CRS_HOME/log/{nodename}, et cetera.
Note that this should be done on every node, one by one, sequentially. On the last node of the cluster, the script will try to configure the VIPs, and there is a known bug here if you use a private-range IP for the VIP. This can be easily fixed by running $ORA_CRS_HOME/bin/vipca manually in graphical mode (i.e., you will need $DISPLAY configured correctly).
Verify Clusterware Configuration and Status
This is a simple check to make sure that all nodes are up and have VIP components configured correctly:
```
[root@vs10 bin]# $ORA_CRS_HOME/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.vs10.gsd   application    ONLINE    ONLINE    vs10
ora.vs10.ons   application    ONLINE    ONLINE    vs10
ora.vs10.vip   application    ONLINE    ONLINE    vs10
ora.vs11.gsd   application    ONLINE    ONLINE    vs11
ora.vs11.ons   application    ONLINE    ONLINE    vs11
ora.vs11.vip   application    ONLINE    ONLINE    vs11
```

Adding Listener Resources to CRS
There are two ways to do this: you can either use netca to configure the listener from scratch (you might need to clean it up from listener.ora first), or you can change the configuration manually and register it with CRS from the command line. I'll show how to do it manually, which is obviously the preferred way when it comes to real environments. ;-)
First of all, we will need to change the $ORACLE_HOME/network/admin/listener.ora file, and you will probably want to change tnsnames.ora at the same time. You need to replace the old node aliases with the new ones, and change the IPs if they are used instead of aliases and you changed them above during the Clusterware reconfiguration.
Note that depending on how your LOCAL_LISTENER and REMOTE_LISTENER init.ora parameters are set, you might need to change them: if they reference connect descriptors from tnsnames.ora, then only the latter should be changed, but if they contain full connect descriptors, they should also be modified.
You should also change the listener names to reflect the new hostnames. Usually, listeners are named LISTENER_{hostname}, and again you should keep this convention unless you have a very good reason not to. Do that on both nodes if you don't have a shared ORACLE_HOME.
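For illustration, a renamed listener.ora entry might look like this (vs10 and the VIP alias vs10-vip are hypothetical names; the port and protocol are assumptions):

```
LISTENER_VS10 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vs10-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = vs10)(PORT = 1521))
    )
  )
```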
Now it's time to get back to the .cap files with the CRS resource definitions we captured when we began. The files we are interested in are in the format ora.{hostname}.LISTENER_{HOSTNAME}.lsnr.cap. In my case, one of them is ora.vs10a.LISTENER_VS10A.lsnr.cap (my old hostname was vs10a). If you changed the listener names above, you need to amend them there as well (NAME=ora.vs10.LISTENER_VS10.lsnr), and rename the file according to the new host name following the same naming convention.
Your VIP name has probably changed, so this line should be modified as well: REQUIRED_RESOURCES=ora.vs10.vip. And finally, the hosting member will change: HOSTING_MEMBERS=vs10. Check the whole file carefully; you should simply change the old hostname to the new one in both lower and upper case.
Now it's time to register the resource; the crs_register command does just that. It takes the resource name to register and the directory where the .cap file is located; the file must be named exactly like the resource name plus a ".cap" extension. Each node's listener can be added from the same node, as long as the content of each .cap file has been modified appropriately. Assuming I have the files ora.vs10.LISTENER_VS10.lsnr.cap and ora.vs11.LISTENER_VS11.lsnr.cap in the directory /opt/oracle/A/resources, I run:
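A sketch of the registration commands, using the file names and directory assumed above:

```shell
# Register each listener resource from its captured profile;
# -dir points at the directory holding the .cap files.
$ORA_CRS_HOME/bin/crs_register ora.vs10.LISTENER_VS10.lsnr -dir /opt/oracle/A/resources
$ORA_CRS_HOME/bin/crs_register ora.vs11.LISTENER_VS11.lsnr -dir /opt/oracle/A/resources
```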
Now crs_stat -t should show the new listener resources registered, with their target and state still OFFLINE.
It’s now time to start the listeners:
```
$ORA_CRS_HOME/bin/srvctl start nodeapps -n vs10
$ORA_CRS_HOME/bin/srvctl start nodeapps -n vs11
```

crs_stat -t should now show the listeners online.
Adding ASM Instances to CRS
This step is optional; if you don't use ASM, skip it.
Unfortunately, we can't simply use the .cap files to register the ASM resources. There are more pieces required, and the only way I could find to register ASM instances is to use srvctl, which is actually the more supported option. This is simple:
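A sketch of the srvctl calls for my two nodes (the instance names +ASM1/+ASM2 and node names are the ones used in this example):

```shell
# Register and start one ASM instance per node.
$ORACLE_HOME/bin/srvctl add asm -n vs10 -i +ASM1 -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl add asm -n vs11 -i +ASM2 -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl start asm -n vs10
$ORACLE_HOME/bin/srvctl start asm -n vs11
```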
There is a catch: sometimes I had to prefix the name of the ASM instance with a "+" (i.e., making it -i +ASM1), and sometimes no plus sign was required. I couldn't figure out why, so if you have a clue, let me know.
crs_stat -t should now show the ASM instances online as well.
Register Databases
For each database, you need to register a database resource, and then, for every instance, an instance resource. So for database A on my two-node cluster, I use:
```
$ORACLE_HOME/bin/srvctl add database -d A -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl add instance -d A -i A1 -n vs10
$ORACLE_HOME/bin/srvctl add instance -d A -i A2 -n vs11
$ORACLE_HOME/bin/srvctl start database -d A
```

Finally, crs_stat -t should show all resources online.
Other Resources
If you had other resources, like services, user VIPs, or user-defined resources, you will probably be fine using the crs_register command to get them back into CRS. I didn't try it, but it should work.
Final Check
To make sure that everything is working, you should at least reboot every node and see if everything comes up.
I don't know if this operation is considered supported. The only slippery bit is modifying the $ORA_CRS_HOME/install/rootconfig file, because it's usually created by the Universal Installer. Another tricky place is the "unusual" registration of the listeners. Otherwise, all the commands are pretty much standard stuff, I think. Good luck!