{"id":573,"date":"2014-03-16T23:58:21","date_gmt":"2014-03-16T21:58:21","guid":{"rendered":"http:\/\/www.ludovicocaldara.net\/dba\/?p=573"},"modified":"2020-08-18T16:44:22","modified_gmt":"2020-08-18T14:44:22","slug":"multinode-rac12c-virtualbox-cloning","status":"publish","type":"post","link":"https:\/\/www.ludovicocaldara.net\/dba\/multinode-rac12c-virtualbox-cloning\/","title":{"rendered":"Multinode RAC 12c cluster on VirtualBox using linked clones"},"content":{"rendered":"<p>Recently I&#8217;ve had to install a four-node RAC cluster on my laptop in order to do some tests. I have found an &#8220;easy&#8221; (well, easy, it depends), fast and space-efficient way to do it so I would like to track it down.<\/p>\n<p><strong>The quick step list<\/strong><\/p>\n<ul>\n<li>Install the OS on the first node<\/li>\n<li>Add the shared disks<\/li>\n<li>Install the clusterware in RAC mode on on the first node only<\/li>\n<li>Remove temporarily the shared disks<\/li>\n<li>Clone the server as linked clone as many times as you want<\/li>\n<li>Reconfigure the new nodes with the new ip and naming<\/li>\n<li>Add back the shared disks on the first node and on all other nodes<\/li>\n<li>Clone the GI + database homes in order to add them to the cluster<\/li>\n<\/ul>\n<p>Using this method the Oracle binaries (the most space consuming portion of the RAC installation) are installed and allocated on the first node only.<\/p>\n<p><strong>The long step list<\/strong><\/p>\n<p>Actually you can follow many instruction steps from the <a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\">RAC Attack 12c book<\/a>.<\/p>\n<ul>\n<li>Review the <a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Hardware_Requirements\">HW requirements<\/a>\u00a0 but let at least 3Gb RAM for each guest + 2Gb more for your host (you may try with less RAM but everything will slow down).<\/li>\n<li>Download all the 
<a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Software_Components\">SW components<\/a> , additionally you may download the latest PSU (12.1.0.1.2) from <a title=\"Speaker and Ninja at Collaborate14 \u2013 #C14LV\" href=\"http:\/\/support.oracle.com\/\">MOS<\/a>.<\/li>\n<li><a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/VirtualBox_Setup\">Prepare the host<\/a> and <a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Create_VirtualBox_VM\">install linux<\/a> on the first node. When configuring the OS, make sure you enter all the required IP addresses for the additional nodes. RAC Attack has two nodes collabn1, collabn2. Add as many nodes as you want to configure. As example, the DNS config may have four nodes<\/li>\n<\/ul>\n<pre class=\"lang:default decode:true\">collabn1        A       192.168.78.51\r\ncollabn2        A       192.168.78.52\r\ncollabn3        A       192.168.78.53\r\ncollabn4        A       192.168.78.54\r\ncollabn1-vip    A       192.168.78.61\r\ncollabn2-vip    A       192.168.78.62\r\ncollabn3-vip    A       192.168.78.63\r\ncollabn4-vip    A       192.168.78.64\r\ncollabn1-priv   A       172.16.100.51\r\ncollabn2-priv   A       172.16.100.52\r\ncollabn3-priv   A       172.16.100.53\r\ncollabn4-priv   A       172.16.100.54\r\ncollabn-cluster-scan     A       192.168.78.251\r\ncollabn-cluster-scan     A       192.168.78.252\r\ncollabn-cluster-scan     A       192.168.78.253<\/pre>\n<ul>\n<li><a href=\"https:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Create_Virtualbox_Shared_Storage\">\u00a0Create the shared disks<\/a> and <a href=\"https:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Configure_Storage_Persistent_Naming\">configure the persistent storage naming<\/a> using 
udev.<\/li>\n<\/ul>\n<p><strong>At this point, the procedure starts differing from the RAC Attack book.<\/strong><\/p>\n<ul>\n<li>Skip the creation of the second host and go directly to the <a href=\"https:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/VNC_Server_Setup\">VNC Server setup<\/a>.<\/li>\n<li><a href=\"https:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Prepare_for_GI_install\">Install the Grid Infrastructure<\/a> and the<a href=\" https:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Install_Database_Software\"> Database software <\/a>using only the first node.<\/li>\n<li>You may want to <a title=\"Some notes about Grid Infrastructure PSU 12.1.0.1.2\" href=\"https:\/\/www.ludovicocaldara.net\/dba\/gi-psu-12-1-0-1-2\/\">install the latest PSU (12.1.0.1.2), use my previous post as guideline<\/a><\/li>\n<li>Once the GI + DB are installed correctly, stop and disable the crs on the first node:<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\" style=\"color: #000000;\"># &lt;GIHOME&gt;\/bin\/crsctl stop crs\r\n# &lt;GIHOME&gt;\/bin\/crsctl disable crs\r\n# shutdown -h now<\/pre>\n<ul>\n<li>\u00a0Go to the VirtualBox VM settings and delete all the shared disks<a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-592\" alt=\"2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager.png\" width=\"657\" height=\"417\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager.png 657w, 
https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager-300x190.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014_03_16_22_05_26_Oracle_VM_VirtualBox_Manager-472x300.png 472w\" sizes=\"auto, (max-width: 657px) 100vw, 657px\" \/><\/a><\/li>\n<li>\u00a0Clone the first server as linked clone (right-click, clone, choose the name, flag &#8220;Linked Clone&#8221; as many times as the number of additional servers you want.<\/li>\n<\/ul>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014-03-16-22_09_42-Clone-Virtual-Machine.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-593\" alt=\"2014-03-16 22_09_42-Clone Virtual Machine\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014-03-16-22_09_42-Clone-Virtual-Machine.png\" width=\"445\" height=\"380\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014-03-16-22_09_42-Clone-Virtual-Machine.png 445w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014-03-16-22_09_42-Clone-Virtual-Machine-300x256.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2014\/03\/2014-03-16-22_09_42-Clone-Virtual-Machine-351x300.png 351w\" sizes=\"auto, (max-width: 445px) 100vw, 445px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<ul>\n<li>By using this method \u00a0the new servers will use the same virtual disk file of the first server and a second file will be used to track the differences. 
This will save a lot of space on the disk.<\/li>\n<li>Add back the shared disks to all the servers.<\/li>\n<li>Start the other nodes and <a href=\"http:\/\/en.wikibooks.org\/wiki\/RAC_Attack_-_Oracle_Cluster_Database_at_Home\/RAC_Attack_12c\/Configure_Second_Linux_VM\">configure them following the RAC Attack instructions<\/a> again.<\/li>\n<li>Once all the nodes are configured, the GI installation has to be cleaned out on all the cloned servers using these guidelines:<\/li>\n<\/ul>\n<pre class=\"lang:sh decode:true\">cd $GI_HOME\r\nrm -rf log\/$(hostname)\r\nrm -rf gpnp\/$(hostname)\r\nfind gpnp -type f -exec rm -f {} \\;\r\nrm -rf cfgtoollogs\/*\r\nrm -rf crs\/init\/*\r\nrm -rf cdata\/*\r\nrm -rf crf\/*\r\nrm -rf network\/admin\/*.ora\r\nrm -rf crs\/install\/crsconfig_params\r\nfind . -name '*.ouibak' -exec rm {} \\;\r\nfind . -name '*.ouibak.1' -exec rm {} \\;\r\nrm -rf root.sh*\r\nrm -rf rdbms\/audit\/*\r\nrm -rf rdbms\/log\/*\r\nrm -rf inventory\/backup\/*\r\nchown -R oracle:oinstall \/u01\/app\r\n\r\nrm -f \/etc\/init.d\/ohasd \r\nrm -rf \/etc\/oracle \r\nrm -rf \/u01\/app\/oraInventory\/*<\/pre>\n<ul>\n<li>Then, on each cloned server, run the perl clone.pl as follows to clone the GI home, but change the LOCAL_NODE accordingly (note: the GI Home name must be identical to the one specified in the original installation!):<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">[oracle@collabn2 bin]$ perl clone.pl -silent ORACLE_BASE=\/u01\/app\/oracle ORACLE_HOME=\/u01\/app\/12.1.0\/grid \\\r\n ORACLE_HOME_NAME=OraGI12Home1 INVENTORY_LOCATION=\/u01\/app\/oraInventory \\\r\n LOCAL_NODE=collabn2 \"CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}\"  CRS=TRUE \r\n\r\n .\/runInstaller -clone -waitForCompletion \"ORACLE_BASE=\/u01\/app\/oracle\" \"ORACLE_HOME=\/u01\/app\/12.1.0\/grid\" \"ORACLE_HOME_NAME=OraGI12Home1\" \"INVENTORY_LOCATION=\/u01\/app\/oraInventory\" \"LOCAL_NODE=collabn2\" \"CLUSTER_NODES={collabn1,collabn2}\" \"CRS=TRUE\" -silent -paramFile 
\/u01\/app\/12.1.0\/grid\/clone\/clone_oraparam.ini \r\nStarting Oracle Universal Installer...\r\n\r\nChecking Temp space: must be greater than 500 MB. Actual 5537 MB Passed \r\nChecking swap space: must be greater than 500 MB. Actual 3012 MB Passed \r\nPreparing to launch Oracle Universal Installer from \/tmp\/OraInstall2014-02-18_03-40-00PM. Please wait ...\r\nYou can find the log of this install session at: \/u01\/app\/oraInventory\/logs\/cloneActions2014-02-18_03-40-00PM.log\r\n .................................................. 5% Done.\r\n .................................................. 10% Done.\r\n .................................................. 15% Done.\r\n .................................................. 20% Done.\r\n .................................................. 25% Done.\r\n .................................................. 30% Done.\r\n .................................................. 35% Done.\r\n .................................................. 40% Done.\r\n .................................................. 45% Done.\r\n .................................................. 50% Done.\r\n .................................................. 55% Done.\r\n .................................................. 60% Done.\r\n .................................................. 65% Done.\r\n .................................................. 70% Done.\r\n .................................................. 75% Done.\r\n .................................................. 80% Done.\r\n .................................................. 85% Done.\r\n .................................................. 90% Done.\r\n .................................................. 
95% Done.\r\n Copy files in progress.\r\n\r\n Copy files successful.\r\n\r\n Link binaries in progress.\r\n\r\n Link binaries successful.\r\n\r\n Setup files in progress.\r\n\r\n Setup files successful.\r\n\r\n Setup Inventory in progress.\r\n\r\n Setup Inventory successful.\r\n\r\n Finish Setup successful.\r\n The cloning of OraGI12Home1 was successful. \r\n Please check '\/u01\/app\/oraInventory\/logs\/cloneActions2014-02-18_03-40-00PM.log' for more details.\r\n\r\n As a root user, execute the following script(s): \r\n \t1. \/u01\/app\/12.1.0\/grid\/root.sh \r\n\r\nExecute \/u01\/app\/12.1.0\/grid\/root.sh on the following nodes: \r\n[collabn2,collabn2,collabn3,collabn4]\r\n.................................................. 100% Done.\r\n[oracle@collabn2 bin]$ su - Password: \r\n[root@collabn2 ~]# \/u01\/app\/oraInventory\/orainstRoot.sh \r\nChanging permissions of \/u01\/app\/oraInventory. \r\nAdding read,write permissions for group. \r\nRemoving read,write,execute permissions for world. \r\nChanging groupname of \/u01\/app\/oraInventory to oinstall. \r\nThe execution of the script is complete. \r\n[root@collabn2 ~]#<\/pre>\n<ul>\n<li>Then, on the first node (which you have started again and where you have re-enabled the clusterware stack with crsctl enable crs \/ crsctl start crs ;-)), run this command to add the new nodes to the cluster definition:<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">[oracle@collabn1 addnode]$ .\/addnode.sh -silent -noCopy ORACLE_HOME=\/u01\/app\/12.1.0\/grid \"CLUSTER_NEW_NODES={collabn2,collabn3,collabn4}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={collabn2-vip,collabn3-vip,collabn4-vip}\"\r\nStarting Oracle Universal Installer...\r\n\r\nChecking Temp space: must be greater than 120 MB.   Actual 6125 MB    Passed\r\nChecking swap space: must be greater than 150 MB.   
Actual 3017 MB    Passed\r\n[WARNING] [INS-13014] Target environment does not meet some optional requirements.\r\n   CAUSE: Some of the optional prerequisites are not met. See logs for details. \/u01\/app\/oraInventory\/logs\/addNodeActions2014-02-18_03-43-22PM.log\r\n   ACTION: Identify the list of failed prerequisite checks from the log: \/u01\/app\/oraInventory\/logs\/addNodeActions2014-02-18_03-43-22PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.\r\n\r\nPrepare Configuration in progress.\r\n\r\nPrepare Configuration successful.\r\n..................................................   40% Done.\r\n\r\nAs a root user, execute the following script(s):\r\n        1. \/u01\/app\/12.1.0\/grid\/root.sh\r\n\r\nExecute \/u01\/app\/12.1.0\/grid\/root.sh on the following nodes:\r\n[collabn2,collabn3,collabn4]\r\n\r\nThe scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.\r\n..................................................   60% Done.\r\n\r\nUpdate Inventory in progress.\r\n..................................................   
100% Done.\r\n\r\nUpdate Inventory successful.\r\nSuccessfully Setup Software.<\/pre>\n<p>&nbsp;<\/p>\n<ul>\n<li>From the first server, copy these files to all the other nodes:<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">scp -rp \/u01\/app\/12.1.0\/grid\/crs\/install\/crsconfig_addparams collabn2:\/u01\/app\/12.1.0\/grid\/crs\/install\/crsconfig_addparams\r\nscp -rp \/u01\/app\/12.1.0\/grid\/crs\/install\/crsconfig_params collabn2:\/u01\/app\/12.1.0\/grid\/crs\/install\/crsconfig_params\r\nscp -rp \/u01\/app\/12.1.0\/grid\/gpnp collabn2:\/u01\/app\/12.1.0\/grid\/gpnp<\/pre>\n<ul>\n<li>Then clone the DB home as well (again, run it on each new server and specify the same DB home name that you have used in the original installation):<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">[oracle@collabn2 bin]$ perl clone.pl -O 'CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}' -O LOCAL_NODE=collabn2 ORACLE_BASE=\/u01\/app\/oracle ORACLE_HOME=\/u01\/app\/oracle\/product\/12.1.0\/dbhome_1 ORACLE_HOME_NAME=OraDB12Home1 -O -noConfig\r\n.\/runInstaller -clone -waitForCompletion   \"CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}\"  \"LOCAL_NODE=collabn2\" \"ORACLE_BASE=\/u01\/app\/oracle\" \"ORACLE_HOME=\/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\" \"ORACLE_HOME_NAME=OraDB12Home1\"  -noConfig  -silent -paramFile \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\/clone\/clone_oraparam.ini\r\nStarting Oracle Universal Installer...\r\n\r\nChecking Temp space: must be greater than 500 MB.   Actual 3896 MB    Passed\r\nChecking swap space: must be greater than 500 MB.   Actual 3005 MB    Passed\r\nPreparing to launch Oracle Universal Installer from \/tmp\/OraInstall2014-02-18_05-22-22PM. Please wait ...You can find the log of this install session at:\r\n \/u01\/app\/oraInventory\/logs\/cloneActions2014-02-18_05-22-22PM.log\r\n..................................................   5% Done.\r\n..................................................   
10% Done.\r\n..................................................   15% Done.\r\n..................................................   20% Done.\r\n..................................................   25% Done.\r\n..................................................   30% Done.\r\n..................................................   35% Done.\r\n..................................................   40% Done.\r\n..................................................   45% Done.\r\n..................................................   50% Done.\r\n..................................................   55% Done.\r\n..................................................   60% Done.\r\n..................................................   65% Done.\r\n..................................................   70% Done.\r\n..................................................   75% Done.\r\n..................................................   80% Done.\r\n..................................................   85% Done.\r\n..................................................   90% Done.\r\n..................................................   95% Done.\r\n\r\nCopy files in progress.\r\n\r\nCopy files successful.\r\n\r\nLink binaries in progress.\r\n\r\nLink binaries successful.\r\n\r\nSetup files in progress.\r\n\r\nSetup files successful.\r\n\r\nSetup Inventory in progress.\r\n\r\nSetup Inventory successful.\r\n\r\nFinish Setup in progress.\r\n\r\nFinish Setup successful.\r\nThe cloning of OraDB12Home1 was successful.\r\nPlease check '\/u01\/app\/oraInventory\/logs\/cloneActions2014-02-18_05-22-22PM.log' for more details.\r\n\r\nAs a root user, execute the following script(s):\r\n        1. \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\/root.sh\r\n\r\nExecute \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\/root.sh on the following nodes:\r\n[collabn2]\r\n\r\n..................................................   
100% Done.<\/pre>\n<ul>\n<li>\u00a0On each new node run also the updatenodelist and the DB root.sh command to update the node list for the DB home:<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">.\/runInstaller -updateNodeList ORACLE_HOME=\/u01\/app\/oracle\/product\/12.1.0\/dbhome_1 -O \"CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}\"<\/pre>\n<pre class=\"lang:plsql decode:true\"># \/u01\/app\/oracle\/product\/12.1.0\/dbhome_1\/root.sh<\/pre>\n<ul>\n<li>\u00a0and finally, run the GI root.sh on each new node to finalize their inclusion in the cluster!! \ud83d\ude42<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">[root@collabn2 grid]# .\/root.sh\r\nPerforming root user operation for Oracle 12c\r\n\r\nThe following environment variables are set as:\r\n    ORACLE_OWNER= oracle\r\n    ORACLE_HOME=  \/u01\/app\/12.1.0\/grid\r\n   Copying dbhome to \/usr\/local\/bin ...\r\n   Copying oraenv to \/usr\/local\/bin ...\r\n   Copying coraenv to \/usr\/local\/bin ...\r\n\r\nEntries will be added to the \/etc\/oratab file as needed by\r\nDatabase Configuration Assistant when a database is created\r\nFinished running generic part of root script.\r\nNow product-specific root actions will be performed.\r\nRelinking oracle with rac_on option\r\nUsing configuration parameter file: \/u01\/app\/12.1.0\/grid\/crs\/install\/crsconfig_p\r\n2014\/02\/18 17:34:00 CLSRSC-363: User ignored prerequisites during installation\r\n\r\nOLR initialization - successful\r\n2014\/02\/18 17:34:47 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd\r\n\r\nCRS-4133: Oracle High Availability Services has been stopped.\r\nCRS-4123: Oracle High Availability Services has been started.\r\nCRS-4133: Oracle High Availability Services has been stopped.\r\nCRS-4123: Oracle High Availability Services has been started.\r\nCRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'collabn2'\r\nCRS-2673: Attempting to stop 'ora.drivers.acfs' on 
'collabn2'\r\nCRS-2677: Stop of 'ora.drivers.acfs' on 'collabn2' succeeded\r\nCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'collabn2' has completed\r\nCRS-4133: Oracle High Availability Services has been stopped.\r\nCRS-4123: Starting Oracle High Availability Services-managed resources\r\nCRS-2672: Attempting to start 'ora.mdnsd' on 'collabn2'\r\nCRS-2672: Attempting to start 'ora.evmd' on 'collabn2'\r\nCRS-2676: Start of 'ora.mdnsd' on 'collabn2' succeeded\r\nCRS-2676: Start of 'ora.evmd' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.gpnpd' on 'collabn2'\r\nCRS-2676: Start of 'ora.gpnpd' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.gipcd' on 'collabn2'\r\nCRS-2676: Start of 'ora.gipcd' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.cssdmonitor' on 'collabn2'\r\nCRS-2676: Start of 'ora.cssdmonitor' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.cssd' on 'collabn2'\r\nCRS-2672: Attempting to start 'ora.diskmon' on 'collabn2'\r\nCRS-2676: Start of 'ora.diskmon' on 'collabn2' succeeded\r\nCRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'collabn2'\r\nCRS-2676: Start of 'ora.cssd' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'collabn2'\r\nCRS-2672: Attempting to start 'ora.ctssd' on 'collabn2'\r\nCRS-2676: Start of 'ora.ctssd' on 'collabn2' succeeded\r\nCRS-2676: Start of 'ora.cluster_interconnect.haip' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'collabn2'\r\nCRS-2676: Start of 'ora.asm' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.storage' on 'collabn2'\r\nCRS-2676: Start of 'ora.storage' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.crf' on 'collabn2'\r\nCRS-2676: Start of 'ora.crf' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.crsd' on 'collabn2'\r\nCRS-2676: Start of 'ora.crsd' on 'collabn2' succeeded\r\nCRS-6017: Processing 
resource auto-start for servers: collabn2\r\nCRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'collabn2'\r\nCRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'collabn1'\r\nCRS-2672: Attempting to start 'ora.ons' on 'collabn2'\r\nCRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'collabn1' succeeded\r\nCRS-2673: Attempting to stop 'ora.scan1.vip' on 'collabn1'\r\nCRS-2677: Stop of 'ora.scan1.vip' on 'collabn1' succeeded\r\nCRS-2672: Attempting to start 'ora.scan1.vip' on 'collabn2'\r\nCRS-2676: Start of 'ora.scan1.vip' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'collabn2'\r\nCRS-2676: Start of 'ora.ons' on 'collabn2' succeeded\r\nCRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'collabn2'\r\nCRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'collabn2' succeeded\r\nCRS-2676: Start of 'ora.asm' on 'collabn2' succeeded\r\nCRS-2672: Attempting to start 'ora.proxy_advm' on 'collabn2'\r\nCRS-2676: Start of 'ora.proxy_advm' on 'collabn2' succeeded\r\nCRS-6016: Resource auto-start has completed for server collabn2\r\nCRS-6024: Completed start of Oracle Cluster Ready Services-managed resources\r\nCRS-4123: Oracle High Availability Services has been started.\r\n2014\/02\/18 17:40:16 CLSRSC-343: Successfully started Oracle clusterware stack\r\n\r\nclscfg: EXISTING configuration version 5 detected.\r\nclscfg: version 5 is 12c Release 1.\r\nSuccessfully accumulated necessary OCR keys.\r\nCreating OCR keys for user 'root', privgrp 'root'..\r\nOperation successful.\r\n2014\/02\/18 17:40:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... 
succeeded<\/pre>\n<p>&nbsp;<\/p>\n<ul>\n<li>As result, you should be able to seen all the cluster resources started correctly on all the nodes.<\/li>\n<\/ul>\n<pre class=\"lang:plsql decode:true\">[oracle@collabn4 ~]$ crsctl stat res -t\r\n--------------------------------------------------------------------------------\r\nName           Target  State        Server                   State details\r\n--------------------------------------------------------------------------------\r\nLocal Resources\r\n--------------------------------------------------------------------------------\r\nora.ASMNET1LSNR_ASM.lsnr\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               OFFLINE OFFLINE      collabn4                     STABLE\r\nora.DATA.dg\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               OFFLINE OFFLINE      collabn4                     STABLE\r\nora.LISTENER.lsnr\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               ONLINE  ONLINE       collabn4                     STABLE\r\nora.net1.network\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               ONLINE  ONLINE       collabn4                     STABLE\r\nora.ons\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       
collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               ONLINE  ONLINE       collabn4                     STABLE\r\nora.proxy_advm\r\n               ONLINE  ONLINE       collabn1                     STABLE\r\n               ONLINE  ONLINE       collabn2                     STABLE\r\n               ONLINE  ONLINE       collabn3                     STABLE\r\n               ONLINE  ONLINE       collabn4                     STABLE\r\n--------------------------------------------------------------------------------\r\nCluster Resources\r\n--------------------------------------------------------------------------------\r\nora.LISTENER_SCAN1.lsnr\r\n      1        ONLINE  ONLINE       collabn2                     STABLE\r\nora.LISTENER_SCAN2.lsnr\r\n      1        ONLINE  ONLINE       collabn3                     STABLE\r\nora.LISTENER_SCAN3.lsnr\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\nora.MGMTLSNR\r\n      1        ONLINE  ONLINE       collabn1                     169.254.159.216 172.\r\n                                                             16.100.51,STABLE\r\nora.asm\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\n      2        ONLINE  ONLINE       collabn2                     STABLE\r\n      3        ONLINE  ONLINE       collabn3                     STABLE\r\nora.cvu\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\nora.mgmtdb\r\n      1        ONLINE  ONLINE       collabn1                     Open,STABLE\r\nora.oc4j\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\nora.collabn1.vip\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\nora.collabn2.vip\r\n      1        ONLINE  ONLINE       collabn2                     STABLE\r\nora.collabn3.vip\r\n      1        ONLINE  ONLINE       collabn3                     STABLE\r\nora.collabn4.vip\r\n    
  1        ONLINE  ONLINE       collabn4                     STABLE\r\nora.scan1.vip\r\n      1        ONLINE  ONLINE       collabn2                     STABLE\r\nora.scan2.vip\r\n      1        ONLINE  ONLINE       collabn3                     STABLE\r\nora.scan3.vip\r\n      1        ONLINE  ONLINE       collabn1                     STABLE\r\n--------------------------------------------------------------------------------<\/pre>\n<p>&nbsp;<\/p>\n<p>I know it seems a little complex, but if you have several nodes it is dramatically faster than the standard installation, and the disk space used is reduced as well. This is good if you have invested in a high-performance but low-capacity SSD disk like I did :-(.<\/p>\n<p>Hope it helps. Here are the official documentation links that I&#8217;ve used to clone the installations; the other steps are my own work.<\/p>\n<p><strong>References<\/strong><\/p>\n<ul>\n<li><a style=\"font-style: normal;\" href=\"http:\/\/docs.oracle.com\/cd\/E16655_01\/rac.121\/e17886\/clonecluster.htm#CWADD92139\">Oracle\u00ae Clusterware Administration and Deployment Guide 12<i>c<\/i>\u00a0Release 1 (12.1)<\/a>\u00a0<a style=\"font-style: normal;\" href=\"http:\/\/docs.oracle.com\/cd\/E16655_01\/rac.121\/e17886\/clonecluster.htm#CWADD92139\">7\u00a0Cloning Oracle Clusterware<\/a><\/li>\n<li><a style=\"font-style: normal;\" href=\"http:\/\/docs.oracle.com\/cd\/E16655_01\/rac.121\/e17887\/cloneracwithoui.htm#RACAD007\">Oracle\u00ae Real Application Clusters Administration and Deployment Guide\u00a012<i>c<\/i>\u00a0Release 1\u00a0\u00a0(12.1)<\/a>\u00a0<a style=\"font-style: normal;\" href=\"http:\/\/docs.oracle.com\/cd\/E16655_01\/rac.121\/e17887\/cloneracwithoui.htm#RACAD007\">9\u00a0Using Cloning to Extend Oracle RAC to Nodes in the Same Cluster<\/a><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Recently I&#8217;ve had to install a four-node RAC cluster on my laptop in order to do some tests. 
I have found an &#8220;easy&#8221; (well, easy, it depends), fast and space-efficient way to do it so I would like to track &hellip; <a href=\"https:\/\/www.ludovicocaldara.net\/dba\/multinode-rac12c-virtualbox-cloning\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[327,326,3,52,330,149,132],"tags":[],"class_list":["post-573","post","type-post","status-publish","format-standard","hentry","category-oracle-maa","category-oracle","category-oracledb","category-12c","category-oracle-inst-upg","category-oracle-rac","category-triblog"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/573","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/comments?post=573"}],"version-history":[{"count":17,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/573\/revisions"}],"predecessor-version":[{"id":599,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/573\/revisions\/599"}],"wp:attachment":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/media?parent=573"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/categories?post=573"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/tags?post=573"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}