{"id":1836,"date":"2019-01-13T20:42:59","date_gmt":"2019-01-13T18:42:59","guid":{"rendered":"http:\/\/www.ludovicocaldara.net\/dba\/?p=1836"},"modified":"2020-08-18T16:04:01","modified_gmt":"2020-08-18T14:04:01","slug":"gi18c-patching-part3","status":"publish","type":"post","link":"https:\/\/www.ludovicocaldara.net\/dba\/gi18c-patching-part3\/","title":{"rendered":"Oracle Grid Infrastructure 18c patching part 3: Executing out-of-place patching with the local-mode automaton"},"content":{"rendered":"\r\n<p>I wish I had more time to blog in the recent weeks. Sorry for the delay in this blog series \ud83d\ude42<\/p>\r\n<p>If you have not read the two previous blog posts, please do it now. I suppose here that you have the Independent Local-Mode Automaton already enabled.<\/p>\r\n<p><strong>What does the Independent Local-mode Automaton?<\/strong><\/p>\r\n<p>The automaton automates the process of moving the active Grid Infrastructure Oracle Home from the current one to a new one. The new one can be either at a higher patch level or at a lower one. Of course, you will probably want to patch your grid infrastructure, going then to a higher level of patching.<\/p>\r\n<p><strong>Preparing the new Grid Infrastructure Oracle Home<\/strong><\/p>\r\n<p>The GI home, starting from 12.2, is just a zip that is extracted directly in the new Oracle Home. 
In this blog post, I assume that you want to patch your Grid Infrastructure from an existing 18.3 to a brand new 18.4 (18.5 will be released very soon).<\/p>\r\n<p>So, if your current OH is \/u01\/app\/grid\/crs1830, you might want to prepare the new home in \/u01\/app\/grid\/crs1840 by unzipping the software and then patching it using the steps <a href=\"https:\/\/www.ludovicocaldara.net\/dba\/gi-18c-gridsetup-goldimage\/\">described here<\/a>.<\/p>\r\n<p>If you already have a golden image with the correct version, you can unzip it directly.<\/p>\r\n<p>Beware of four important things:\u00a0<\/p>\r\n<ol>\r\n<li>You have to register the new Oracle home in the Central Inventory using the SW_ONLY install, as\u00a0 <a href=\"https:\/\/www.ludovicocaldara.net\/dba\/gi-18c-gridsetup-goldimage\/\">described here<\/a>.<\/li>\r\n<li>You must do this on all the nodes in the cluster prior to upgrading.<\/li>\r\n<li>The response file must contain the same groups (DBA, OPER, etc.) as the current active Home, otherwise you will get errors.<\/li>\r\n<li>You must <strong>relink your Oracle binaries by hand with the RAC option:<\/strong><br \/>$ cd \/u01\/app\/grid\/crs1840\/rdbms\/lib<br \/>$ make -f ins_rdbms.mk rac_on ioracle<\/li>\r\n<\/ol>\r\n<p>In fact, after every attach to the Central Inventory the binaries are relinked without the RAC option, so it is important to enable it again to avoid serious problems when the automaton upgrades ASM.<\/p>\r\n<p><strong>Executing the move gihome<\/strong><\/p>\r\n<p>If everything is correct, you should now have both the current and the new Oracle Home correctly registered in the Central Inventory, with the RAC option enabled.<\/p>\r\n<p>You can now do a first\u00a0<strong>eval\u00a0<\/strong>to check if everything looks good:<\/p>\r\n<pre class=\"lang:plsql highlight:0 decode:true \"># [ oracle@server1:\/u01\/app\/oracle\/home [12:01:52] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #\r\n$ rhpctl move gihome -sourcehome \/u01\/app\/grid\/crs1830 -desthome \/u01\/app\/grid\/crs1840 -eval\r\nserver2.cern.ch: Audit ID: 4\r\nserver2.cern.ch: Evaluation in progress for \"move gihome\" ...\r\nserver2.cern.ch: verifying versions of Oracle homes ...\r\nserver2.cern.ch: verifying owners of Oracle homes ...\r\nserver2.cern.ch: verifying groups of Oracle homes ...\r\nserver2.cern.ch: Evaluation finished successfully for \"move gihome\".<\/pre>\r\n<p>My personal suggestion, at least for your first experiences with the automaton, is to move the Oracle Home on\u00a0<strong>one node at a time<\/strong>. This way,\u00a0<strong>YOU<\/strong> control the relocation of the services and resources before doing the actual move operation.<\/p>\r\n<p>Here is the execution for the\u00a0<strong>first node:<\/strong><\/p>\r\n<pre class=\"lang:plsql highlight:0 decode:true \"># [ oracle@server1:\/u01\/app\/oracle\/home [15:17:26] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #\r\n$ rhpctl move gihome -sourcehome \/u01\/app\/grid\/crs1830 -desthome \/u01\/app\/grid\/crs1840 -node server1\r\nserver2.cern.ch: Audit ID: 4\r\nserver2.cern.ch: verifying versions of Oracle homes ...\r\nserver2.cern.ch: verifying owners of Oracle homes ...\r\nserver2.cern.ch: verifying groups of Oracle homes ...\r\nserver2.cern.ch: starting to move the Oracle Grid Infrastructure home from \"\/u01\/app\/grid\/crs1830\" to \"\/u01\/app\/grid\/crs1840\" on server cluster \"CRSTEST-RAC16\"\r\nserver2.cern.ch: Executing prepatch and postpatch on nodes: \"server1\".\r\nserver2.cern.ch: Executing root script on nodes [server1].\r\nserver2.cern.ch: Successfully executed root script on nodes [server1].\r\nserver2.cern.ch: Executing root script on nodes [server1].\r\nUsing configuration parameter file: \/u01\/app\/grid\/crs1840\/crs\/install\/crsconfig_params\r\nThe log of current session can be found at:\r\n  \/u01\/app\/oracle\/crsdata\/server1\/crsconfig\/crs_postpatch_server1_2018-11-14_03-27-43PM.log\r\nOracle Clusterware active version on the cluster is [18.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].\r\nCRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'server1'\r\nCRS-2673: Attempting to stop 'ora.crsd' on 'server1'\r\nCRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'server1'\r\nCRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.mgmt.ghchkpt.acfs' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.helper336.hlp' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.chad' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.chad' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.OCRVOT.dg' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.MGMT.dg' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.helper' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.cvu' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.qosmserver' on 'server1'\r\nCRS-2677: Stop of 'ora.helper336.hlp' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.OCRVOT.dg' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.MGMT.dg' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.asm' on 'server1'\r\nCRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.LISTENER.lsnr' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.scan2.vip' on 'server1'\r\nCRS-2677: Stop of 'ora.helper' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.cvu' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.scan2.vip' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.asm' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'server1'\r\nCRS-2677: Stop of 'ora.mgmt.ghchkpt.acfs' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.MGMT.GHCHKPT.advm' on 'server1'\r\nCRS-2677: Stop of 'ora.MGMT.GHCHKPT.advm' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.proxy_advm' on 
'server1'\r\nCRS-2677: Stop of 'ora.chad' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.chad' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.mgmtdb' on 'server1'\r\nCRS-2677: Stop of 'ora.qosmserver' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.proxy_advm' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.mgmtdb' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'server1'\r\nCRS-2677: Stop of 'ora.MGMTLSNR' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.server1.vip' on 'server1'\r\nCRS-2677: Stop of 'ora.server1.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.MGMTLSNR' on 'server2'\r\nCRS-2672: Attempting to start 'ora.qosmserver' on 'server2'\r\nCRS-2672: Attempting to start 'ora.scan2.vip' on 'server2'\r\nCRS-2672: Attempting to start 'ora.cvu' on 'server2'\r\nCRS-2672: Attempting to start 'ora.server1.vip' on 'server2'\r\nCRS-2676: Start of 'ora.cvu' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.server1.vip' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.MGMTLSNR' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.mgmtdb' on 'server2'\r\nCRS-2676: Start of 'ora.scan2.vip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'server2'\r\nCRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.qosmserver' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.mgmtdb' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.chad' on 'server2'\r\nCRS-2676: Start of 'ora.chad' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.ons' on 'server1'\r\nCRS-2677: Stop of 'ora.ons' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.net1.network' on 'server1'\r\nCRS-2677: Stop of 'ora.net1.network' on 'server1' succeeded\r\nCRS-2792: Shutdown of Cluster Ready Services-managed resources on 'server1' has completed\r\nCRS-2677: Stop 
of 'ora.crsd' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.asm' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.crf' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.drivers.acfs' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.mdnsd' on 'server1'\r\nCRS-2677: Stop of 'ora.drivers.acfs' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.crf' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.mdnsd' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.asm' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'server1'\r\nCRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.ctssd' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.evmd' on 'server1'\r\nCRS-2677: Stop of 'ora.ctssd' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.evmd' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.cssd' on 'server1'\r\nCRS-2677: Stop of 'ora.cssd' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.gipcd' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.gpnpd' on 'server1'\r\nCRS-2677: Stop of 'ora.gipcd' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.gpnpd' on 'server1' succeeded\r\nCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'server1' has completed\r\nCRS-4133: Oracle High Availability Services has been stopped.\r\n2018\/11\/14 15:30:10 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'\r\nCRS-4123: Starting Oracle High Availability Services-managed resources\r\nCRS-2672: Attempting to start 'ora.mdnsd' on 'server1'\r\nCRS-2672: Attempting to start 'ora.evmd' on 'server1'\r\nCRS-2676: Start of 'ora.mdnsd' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.evmd' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.gpnpd' on 'server1'\r\nCRS-2676: Start of 'ora.gpnpd' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.gipcd' on 'server1'\r\nCRS-2676: Start of 'ora.gipcd' on 'server1' 
succeeded\r\nCRS-2672: Attempting to start 'ora.cssdmonitor' on 'server1'\r\nCRS-2672: Attempting to start 'ora.crf' on 'server1'\r\nCRS-2676: Start of 'ora.cssdmonitor' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.cssd' on 'server1'\r\nCRS-2672: Attempting to start 'ora.diskmon' on 'server1'\r\nCRS-2676: Start of 'ora.diskmon' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.crf' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.cssd' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'server1'\r\nCRS-2672: Attempting to start 'ora.ctssd' on 'server1'\r\nCRS-2676: Start of 'ora.ctssd' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.cluster_interconnect.haip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server1'\r\nCRS-2676: Start of 'ora.asm' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.storage' on 'server1'\r\nCRS-2676: Start of 'ora.storage' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.crsd' on 'server1'\r\nCRS-2676: Start of 'ora.crsd' on 'server1' succeeded\r\nCRS-6017: Processing resource auto-start for servers: server1\r\nCRS-2673: Attempting to stop 'ora.server1.vip' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'server2'\r\nCRS-2672: Attempting to start 'ora.ons' on 'server1'\r\nCRS-2672: Attempting to start 'ora.chad' on 'server1'\r\nCRS-2677: Stop of 'ora.server1.vip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.server1.vip' on 'server1'\r\nCRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.scan1.vip' on 'server2'\r\nCRS-2677: Stop of 'ora.scan1.vip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.scan1.vip' on 'server1'\r\nCRS-2676: Start of 'ora.chad' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.server1.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'server1'\r\nCRS-2676: Start of 
'ora.scan1.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'server1'\r\nCRS-2676: Start of 'ora.LISTENER.lsnr' on 'server1' succeeded\r\nCRS-2679: Attempting to clean 'ora.asm' on 'server1'\r\nCRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'server1' succeeded\r\nCRS-2681: Clean of 'ora.asm' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server1'\r\nCRS-2676: Start of 'ora.ons' on 'server1' succeeded\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server1' failed\r\nCRS-2672: Attempting to start 'ora.asm' on 'server1'\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server1' failed\r\nCRS-2679: Attempting to clean 'ora.proxy_advm' on 'server1'\r\nCRS-2681: Clean of 'ora.proxy_advm' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.proxy_advm' on 'server1'\r\nCRS-2676: Start of 'ora.proxy_advm' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server1'\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server1' failed\r\nCRS-2672: Attempting to start 'ora.MGMT.GHCHKPT.advm' on 'server1'\r\nCRS-2676: Start of 'ora.MGMT.GHCHKPT.advm' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.mgmt.ghchkpt.acfs' on 'server1'\r\nCRS-2676: Start of 'ora.mgmt.ghchkpt.acfs' on 'server1' succeeded\r\n===== Summary of resource auto-start failures follows =====\r\nCRS-2807: Resource 'ora.asm' failed to start automatically.\r\nCRS-6016: Resource auto-start has completed for server server1\r\nCRS-6024: Completed start of Oracle Cluster Ready Services-managed resources\r\nCRS-4123: Oracle High Availability Services has been started.\r\nOracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. 
The cluster active patch level is [70732493].\r\n2018\/11\/14 15:35:23 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.\r\n2018\/11\/14 15:37:11 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.\r\n2018\/11\/14 15:37:13 CLSRSC-672: Post-patch steps for patching GI home successfully completed.\r\nserver2.cern.ch: Successfully executed root script on nodes [server1].\r\nserver2.cern.ch: Updating inventory on nodes: server1.\r\n========================================\r\nserver2.cern.ch:\r\nStarting Oracle Universal Installer...\r\n\r\nThe inventory pointer is located at \/etc\/oraInst.loc\r\n'UpdateNodeList' was successful.\r\nserver2.cern.ch: Updated inventory on nodes: server1.\r\nserver2.cern.ch: Updating inventory on nodes: server1.\r\n========================================\r\nserver2.cern.ch:\r\nStarting Oracle Universal Installer...\r\n\r\nThe inventory pointer is located at \/etc\/oraInst.loc\r\n'UpdateNodeList' was successful.\r\nserver2.cern.ch: Updated inventory on nodes: server1.\r\nserver2.cern.ch: Continue by running 'rhpctl move gihome -destwc &lt;workingcopy_name&gt; -continue [-root | -sudouser &lt;sudo_username&gt; -sudopath &lt;path_to_sudo_binary&gt;]'.\r\nserver2.cern.ch: completed the move of Oracle Grid Infrastructure home on server cluster \"CRSTEST-RAC16\"<\/pre>\r\n<p>From this output you can see at line 15 that the cluster status is NORMAL, then the cluster is stopped on node 1 (lines 16 to 100), then the active version is modified in the oracle-ohasd.service file (line 101), then started back with the new version (lines 102 to 171). The cluster status now is ROLLING PATCH (line 172). 
The TFA and the node list are updated.\u00a0<\/p>\r\n<p>Before continuing with the remaining node(s), make sure that all the resources are up &amp; running:<\/p>\r\n<pre class=\"lang:plsql highlight:0 decode:true \"># [ oracle@server1:\/u01\/app\/oracle\/home [15:37:26] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #\r\n$ crss\r\nHA Resource                                   Targets                          States\r\n-----------                                   -----------------------------    ----------------------------------------\r\nora.ASMNET1LSNR_ASM.lsnr                      ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.LISTENER.lsnr                             ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.LISTENER_SCAN1.lsnr                       ONLINE                           ONLINE on server1\r\nora.LISTENER_SCAN2.lsnr                       ONLINE                           ONLINE on server2\r\nora.MGMT.GHCHKPT.advm                         ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.MGMT.dg                                   ONLINE,ONLINE                    OFFLINE,ONLINE on server2\r\nora.MGMTLSNR                                  ONLINE                           ONLINE on server2\r\nora.OCRVOT.dg                                 OFFLINE,ONLINE                   OFFLINE,ONLINE on server2\r\nora.asm                                       ONLINE,ONLINE,OFFLINE            OFFLINE,ONLINE on server2,OFFLINE\r\nora.chad                                      ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.cvu                                       ONLINE                           ONLINE on server2\r\nora.helper                                    ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.helper336.hlp                             ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.server1.vip               
              ONLINE                           ONLINE on server1\r\nora.server2.vip                             ONLINE                           ONLINE on server2\r\nora.mgmt.ghchkpt.acfs                         ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.mgmtdb                                    ONLINE                           ONLINE on server2\r\nora.net1.network                              ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.ons                                       ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.proxy_advm                                ONLINE,ONLINE                    ONLINE on server1,ONLINE on server2\r\nora.qosmserver                                ONLINE                           ONLINE on server2\r\nora.rhpserver                                 ONLINE                           ONLINE on server2\r\nora.scan1.vip                                 ONLINE                           ONLINE on server1\r\nora.LISTENER_LEAF.lsnr\r\nora.scan2.vip                                 ONLINE                           ONLINE on server2\r\n\r\n\r\n\r\n# [ oracle@server1:\/u01\/app\/oracle\/home [15:52:10] [18.4.0.0.0 [GRID] SID=GRID] 1 ] #\r\n$ crsctl query crs releasepatch\r\nOracle Clusterware release patch level is [59717688] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28547619 28655784 28655916 28655963 28656071 ] have been applied on the local node. 
The release patch string is [18.4.0.0.0].<\/pre>\r\n<p>You may also want to manually relocate your resources back to node 1 before continuing with node 2.<\/p>\r\n<p>After that, node 2 can follow the very same procedure:<\/p>\r\n<pre class=\"lang:plsql highlight:0 decode:true \"># [ oracle@server1:\/u01\/app\/oracle\/home [15:54:30] [18.4.0.0.0 [GRID] SID=GRID] 130 ] #\r\n$ rhpctl move gihome -sourcehome \/u01\/app\/grid\/crs1830 -desthome \/u01\/app\/grid\/crs1840 -node server2\r\nserver2.cern.ch: Audit ID: 51\r\nserver2.cern.ch: Executing prepatch and postpatch on nodes: \"server2\".\r\nserver2.cern.ch: Executing root script on nodes [server2].\r\nserver2.cern.ch: Successfully executed root script on nodes [server2].\r\nserver2.cern.ch: Executing root script on nodes [server2].\r\nUsing configuration parameter file: \/u01\/app\/grid\/crs1840\/crs\/install\/crsconfig_params\r\nThe log of current session can be found at:\r\n  \/u01\/app\/oracle\/crsdata\/server2\/crsconfig\/crs_postpatch_server2_2018-11-14_03-58-21PM.log\r\nOracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. 
The cluster active patch level is [70732493].\r\nCRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'server2'\r\nCRS-2673: Attempting to stop 'ora.crsd' on 'server2'\r\nCRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'server2'\r\nCRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.cvu' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.rhpserver' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.OCRVOT.dg' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.MGMT.dg' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.qosmserver' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.chad' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.chad' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.helper336.hlp' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.helper' on 'server2'\r\nCRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.scan2.vip' on 'server2'\r\nCRS-2677: Stop of 'ora.LISTENER.lsnr' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.chad' on 'server1' succeeded\r\nCRS-2677: Stop of 'ora.chad' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.mgmtdb' on 'server2'\r\nCRS-2677: Stop of 'ora.OCRVOT.dg' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.MGMT.dg' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.asm' on 'server2'\r\nCRS-2677: Stop of 'ora.helper336.hlp' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.helper' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.scan2.vip' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.asm' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'server2'\r\nCRS-2677: Stop of 'ora.cvu' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.qosmserver' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'server2' 
succeeded\r\nCRS-2677: Stop of 'ora.mgmtdb' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'server2'\r\nCRS-2677: Stop of 'ora.MGMTLSNR' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.server2.vip' on 'server2'\r\nCRS-2672: Attempting to start 'ora.MGMTLSNR' on 'server1'\r\nCRS-2677: Stop of 'ora.server2.vip' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.MGMTLSNR' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.mgmtdb' on 'server1'\r\nCRS-2676: Start of 'ora.mgmtdb' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.chad' on 'server1'\r\nCRS-2676: Start of 'ora.chad' on 'server1' succeeded\r\nStop JWC\r\nCRS-5014: Agent \"ORAROOTAGENT\" timed out starting process \"\/u01\/app\/grid\/crs1830\/bin\/ghappctl\" for action \"stop\": details at \"(:CLSN00009:)\" in \"\/u01\/app\/oracle\/diag\/crs\/server2\/crs\/trace\/crsd_orarootagent_root.trc\"\r\nCRS-2675: Stop of 'ora.rhpserver' on 'server2' failed\r\nCRS-2679: Attempting to clean 'ora.rhpserver' on 'server2'\r\nCRS-2681: Clean of 'ora.rhpserver' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.mgmt.ghchkpt.acfs' on 'server2'\r\nCRS-2677: Stop of 'ora.mgmt.ghchkpt.acfs' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.MGMT.GHCHKPT.advm' on 'server2'\r\nCRS-2677: Stop of 'ora.MGMT.GHCHKPT.advm' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.proxy_advm' on 'server2'\r\nCRS-2677: Stop of 'ora.proxy_advm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.qosmserver' on 'server1'\r\nCRS-2672: Attempting to start 'ora.scan2.vip' on 'server1'\r\nCRS-2672: Attempting to start 'ora.cvu' on 'server1'\r\nCRS-2672: Attempting to start 'ora.server2.vip' on 'server1'\r\nCRS-2676: Start of 'ora.cvu' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.server2.vip' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.scan2.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 
'server1'\r\nCRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'server1' succeeded\r\nCRS-2676: Start of 'ora.qosmserver' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.ons' on 'server2'\r\nCRS-2677: Stop of 'ora.ons' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.net1.network' on 'server2'\r\nCRS-2677: Stop of 'ora.net1.network' on 'server2' succeeded\r\nCRS-2792: Shutdown of Cluster Ready Services-managed resources on 'server2' has completed\r\nCRS-2677: Stop of 'ora.crsd' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.asm' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.crf' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.drivers.acfs' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.mdnsd' on 'server2'\r\nCRS-2677: Stop of 'ora.drivers.acfs' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.crf' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.mdnsd' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.asm' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'server2'\r\nCRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.ctssd' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.evmd' on 'server2'\r\nCRS-2677: Stop of 'ora.ctssd' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.evmd' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.cssd' on 'server2'\r\nCRS-2677: Stop of 'ora.cssd' on 'server2' succeeded\r\nCRS-2673: Attempting to stop 'ora.gipcd' on 'server2'\r\nCRS-2673: Attempting to stop 'ora.gpnpd' on 'server2'\r\nCRS-2677: Stop of 'ora.gpnpd' on 'server2' succeeded\r\nCRS-2677: Stop of 'ora.gipcd' on 'server2' succeeded\r\nCRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'server2' has completed\r\nCRS-4133: Oracle High Availability Services has been stopped.\r\n2018\/11\/14 16:01:42 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'\r\nCRS-4123: Starting Oracle High 
Availability Services-managed resources\r\nCRS-2672: Attempting to start 'ora.mdnsd' on 'server2'\r\nCRS-2672: Attempting to start 'ora.evmd' on 'server2'\r\nCRS-2676: Start of 'ora.mdnsd' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.evmd' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.gpnpd' on 'server2'\r\nCRS-2676: Start of 'ora.gpnpd' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.gipcd' on 'server2'\r\nCRS-2676: Start of 'ora.gipcd' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.crf' on 'server2'\r\nCRS-2672: Attempting to start 'ora.cssdmonitor' on 'server2'\r\nCRS-2676: Start of 'ora.cssdmonitor' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.cssd' on 'server2'\r\nCRS-2672: Attempting to start 'ora.diskmon' on 'server2'\r\nCRS-2676: Start of 'ora.diskmon' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.crf' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.cssd' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'server2'\r\nCRS-2672: Attempting to start 'ora.ctssd' on 'server2'\r\nCRS-2676: Start of 'ora.ctssd' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.cluster_interconnect.haip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server2'\r\nCRS-2676: Start of 'ora.asm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.storage' on 'server2'\r\nCRS-2676: Start of 'ora.storage' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.crsd' on 'server2'\r\nCRS-2676: Start of 'ora.crsd' on 'server2' succeeded\r\nCRS-6017: Processing resource auto-start for servers: server2\r\nCRS-2673: Attempting to stop 'ora.server2.vip' on 'server1'\r\nCRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'server1'\r\nCRS-2672: Attempting to start 'ora.ons' on 'server2'\r\nCRS-2672: Attempting to start 'ora.chad' on 'server2'\r\nCRS-2677: Stop of 'ora.server2.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 
'ora.server2.vip' on 'server2'\r\nCRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'server1' succeeded\r\nCRS-2673: Attempting to stop 'ora.scan1.vip' on 'server1'\r\nCRS-2677: Stop of 'ora.scan1.vip' on 'server1' succeeded\r\nCRS-2672: Attempting to start 'ora.scan1.vip' on 'server2'\r\nCRS-2676: Start of 'ora.server2.vip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'server2'\r\nCRS-2676: Start of 'ora.chad' on 'server2' succeeded\r\nCRS-2676: Start of 'ora.scan1.vip' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'server2'\r\nCRS-2676: Start of 'ora.LISTENER.lsnr' on 'server2' succeeded\r\nCRS-2679: Attempting to clean 'ora.asm' on 'server2'\r\nCRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'server2' succeeded\r\nCRS-2681: Clean of 'ora.asm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server2'\r\nCRS-2676: Start of 'ora.ons' on 'server2' succeeded\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server2' failed\r\nCRS-2672: Attempting to start 'ora.asm' on 'server2'\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server2' failed\r\nCRS-2679: Attempting to clean 'ora.proxy_advm' on 'server2'\r\nCRS-2681: Clean of 'ora.proxy_advm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.proxy_advm' on 'server2'\r\nCRS-2676: Start of 'ora.proxy_advm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.asm' on 'server2'\r\nORA-15150: instance lock mode 'EXCLUSIVE' conflicts with other ASM instance(s)\r\nCRS-2674: Start of 'ora.asm' on 'server2' failed\r\nCRS-2672: Attempting to start 'ora.MGMT.GHCHKPT.advm' on 'server2'\r\nCRS-2676: Start of 'ora.MGMT.GHCHKPT.advm' on 'server2' succeeded\r\nCRS-2672: Attempting to start 'ora.mgmt.ghchkpt.acfs' on 'server2'\r\nCRS-2676: Start of 'ora.mgmt.ghchkpt.acfs' on 'server2' 
succeeded\r\n===== Summary of resource auto-start failures follows =====\r\nCRS-2807: Resource 'ora.asm' failed to start automatically.\r\nCRS-6016: Resource auto-start has completed for server server2\r\nCRS-6024: Completed start of Oracle Cluster Ready Services-managed resources\r\nCRS-4123: Oracle High Availability Services has been started.\r\nOracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [59717688].\r\n\r\nSQL Patching tool version 18.0.0.0.0 Production on Wed Nov 14 16:09:01 2018\r\nCopyright (c) 2012, 2018, Oracle.  All rights reserved.\r\n\r\nLog file for this invocation: \/u01\/app\/oracle\/cfgtoollogs\/sqlpatch\/sqlpatch_181222_2018_11_14_16_09_01\/sqlpatch_invocation.log\r\n\r\nConnecting to database...OK\r\nGathering database info...done\r\n\r\nNote:  Datapatch will only apply or rollback SQL fixes for PDBs\r\n       that are in an open state, no patches will be applied to closed PDBs.\r\n       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation\r\n       (Doc ID 1585822.1)\r\n\r\nBootstrapping registry and package to current versions...done\r\nDetermining current state...done\r\n\r\nCurrent state of interim SQL patches:\r\nInterim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):\r\n  Binary registry: Installed\r\n  PDB CDB$ROOT: Applied successfully on 13-NOV-18 04.35.06.794463 PM\r\n  PDB GIMR_DSCREP_10: Applied successfully on 13-NOV-18 04.43.16.948526 PM\r\n  PDB PDB$SEED: Applied successfully on 13-NOV-18 04.43.16.948526 PM\r\n\r\nCurrent state of release update SQL patches:\r\n  Binary registry:\r\n    18.4.0.0.0 Release_Update 1809251743: Installed\r\n  PDB CDB$ROOT:\r\n    Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 13-NOV-18 04.35.06.791214 PM\r\n  PDB GIMR_DSCREP_10:\r\n    Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 13-NOV-18 04.43.16.940471 PM\r\n  PDB PDB$SEED:\r\n    Applied 
18.3.0.0.0 Release_Update 1806280943 successfully on 13-NOV-18 04.43.16.940471 PM\r\n\r\nAdding patches to installation queue and performing prereq checks...done\r\nInstallation queue:\r\n  For the following PDBs: CDB$ROOT PDB$SEED GIMR_DSCREP_10\r\n    No interim patches need to be rolled back\r\n    Patch 28655784 (Database Release Update : 18.4.0.0.181016 (28655784)):\r\n      Apply from 18.3.0.0.0 Release_Update 1806280943 to 18.4.0.0.0 Release_Update 1809251743\r\n    No interim patches need to be applied\r\n\r\nInstalling patches...\r\nPatch installation complete.  Total patches installed: 3\r\n\r\nValidating logfiles...done\r\nPatch 28655784 apply (pdb CDB$ROOT): SUCCESS\r\n  logfile: \/u01\/app\/oracle\/cfgtoollogs\/sqlpatch\/28655784\/22509982\/28655784_apply__MGMTDB_CDBROOT_2018Nov14_16_11_00.log (no errors)\r\nPatch 28655784 apply (pdb PDB$SEED): SUCCESS\r\n  logfile: \/u01\/app\/oracle\/cfgtoollogs\/sqlpatch\/28655784\/22509982\/28655784_apply__MGMTDB_PDBSEED_2018Nov14_16_11_51.log (no errors)\r\nPatch 28655784 apply (pdb GIMR_DSCREP_10): SUCCESS\r\n  logfile: \/u01\/app\/oracle\/cfgtoollogs\/sqlpatch\/28655784\/22509982\/28655784_apply__MGMTDB_GIMR_DSCREP_10_2018Nov14_16_11_50.log (no errors)\r\nSQL Patching tool complete on Wed Nov 14 16:12:50 2018\r\n2018\/11\/14 16:13:40 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.\r\n2018\/11\/14 16:15:28 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.\r\n2018\/11\/14 16:17:48 CLSRSC-672: Post-patch steps for patching GI home successfully completed.\r\nserver2.cern.ch: Updating inventory on nodes: server2.\r\n========================================\r\nserver2.cern.ch:\r\nStarting Oracle Universal Installer...\r\n\r\nChecking swap space: must be greater than 500 MB.   
Actual 16367 MB    Passed\r\nThe inventory pointer is located at \/etc\/oraInst.loc\r\n'UpdateNodeList' was successful.\r\nserver2.cern.ch: Updated inventory on nodes: server2.\r\nserver2.cern.ch: Updating inventory on nodes: server2.\r\n========================================\r\nserver2.cern.ch:\r\nStarting Oracle Universal Installer...\r\n\r\nChecking swap space: must be greater than 500 MB.   Actual 16367 MB    Passed\r\nThe inventory pointer is located at \/etc\/oraInst.loc\r\n'UpdateNodeList' was successful.\r\nserver2.cern.ch: Updated inventory on nodes: server2.\r\nserver2.cern.ch: Completed the 'move gihome' operation on server cluster.<\/pre>\r\n<p>As you can see, there are two differences here: this time the second node was the last one, so the cluster status goes back to NORMAL, and the GIMR is patched with datapatch (lines 176-227).<\/p>\r\n<p>At this point, the cluster has been patched. After some testing, you can safely remove the inactive version of Grid Infrastructure using the deinstall binary ($OLD_OH\/deinstall\/deinstall).<\/p>\r\n<p><strong>Quite easy, huh?<\/strong><\/p>\r\n<p>If you combine the Independent Local-mode Automaton with a home-developed solution for the creation and provisioning of Grid Infrastructure Golden Images, you can easily achieve automated Grid Infrastructure patching across a big, multi-cluster environment.<\/p>\r\n<p>Of course, Fleet Patching and Provisioning remains the Rolls-Royce: if you can afford it, GI patching and much more is completely automated and maintained by Oracle, so you will have no headaches when new versions are released. But the local-mode automaton might be enough for your needs.<\/p>\r\n<p>&#8212;\u00a0<\/p>\r\n<p>Ludo<\/p>\r\n\r\n\r\n","protected":false},"excerpt":{"rendered":"<p>I wish I had more time to blog in the recent weeks. Sorry for the delay in this blog series \ud83d\ude42 If you have not read the two previous blog posts, please do it now. 
I suppose here that you &hellip; <a href=\"https:\/\/www.ludovicocaldara.net\/dba\/gi18c-patching-part3\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[321,333,327,326,309,3,308,330,149],"tags":[],"class_list":["post-1836","post","type-post","status-publish","format-standard","hentry","category-aced","category-oracle-fpp","category-oracle-maa","category-oracle","category-oracle-cloud","category-oracledb","category-oracle-database-18c","category-oracle-inst-upg","category-oracle-rac"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/1836","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/comments?post=1836"}],"version-history":[{"count":1,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/1836\/revisions"}],"predecessor-version":[{"id":1842,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/1836\/revisions\/1842"}],"wp:attachment":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/media?parent=1836"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/categories?post=1836"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ludo
vicocaldara.net\/dba\/wp-json\/wp\/v2\/tags?post=1836"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}