Oracle Grid Infrastructure 18c patching part 3: Executing out-of-place patching with the local-mode automaton

I wish I had more time to blog in recent weeks. Sorry for the delay in this blog series 🙂

If you have not read the two previous blog posts, please read them first. I assume here that you already have the Independent Local-mode Automaton enabled.

What does the Independent Local-mode Automaton do?

The automaton automates the process of moving the active Grid Infrastructure Oracle Home from the current one to a new one. The new one can be either at a higher or at a lower patch level. Of course, you will most likely want to patch your Grid Infrastructure, thus moving to a higher patch level.

Preparing the new Grid Infrastructure Oracle Home

Starting from 12.2, the GI home is shipped as a zip file that is extracted directly into the new Oracle Home path. In this blog post I assume that you want to patch your Grid Infrastructure from an existing 18.3 to a brand new 18.4 (18.5 will be released very soon).

So, if your current OH is /u01/app/grid/crs1830, you might want to prepare the new home in /u01/app/grid/crs1840 by unzipping the software and then patching using the steps described here.
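
A minimal sketch of that preparation (the paths come from this example, and the zip file name may differ in your download):

    # as the grid owner, on every node
    mkdir -p /u01/app/grid/crs1840
    cd /u01/app/grid/crs1840
    unzip -q /stage/LINUX.X64_180000_grid_home.zip
    # then patch this still-unconfigured home to 18.4 before registering it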

If you already have a golden image with the correct version, you can unzip it directly.

Beware of four important things:

  1. You have to register the new Oracle Home in the Central Inventory using the SW_ONLY install, as described here (see the sketch after this list).
  2. You must do this on all the nodes in the cluster before starting the move.
  3. The response file must contain the same groups (DBA, OPER, etc.) as the current active home, otherwise you will get errors.
  4. You must relink your Oracle binaries by hand with the RAC option:
    $ cd /u01/app/grid/crs1840/rdbms/lib
    $ make -f ins_rdbms.mk rac_on ioracle
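
For point 1, a hedged sketch of the software-only registration (the response file name is hypothetical, and the parameter names should be checked against the response file template shipped with your grid home):

    # as the grid owner, on each node, from the new (unconfigured) home
    cd /u01/app/grid/crs1840
    ./gridSetup.sh -silent -responseFile /tmp/grid_swonly.rsp
    # /tmp/grid_swonly.rsp contains at least:
    #   oracle.install.option=CRS_SWONLY
    #   ORACLE_BASE=/u01/app/oracle
    #   oracle.install.asm.OSDBA=asmdba     <- must match the active home
    #   oracle.install.asm.OSOPER=asmoper   <- must match the active home
    #   oracle.install.asm.OSASM=asmadmin   <- must match the active home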

In fact, after every attach to the Central Inventory the binaries are relinked without the RAC option, so it is important to activate RAC again to avoid nasty problems when the automaton moves ASM to the new home.

Executing the move gihome

If everything is correct, you should have now the current and new Oracle Homes, correctly registered in the Central Inventory, with the RAC option activated.

You can now do a first eval to check if everything looks good:
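
Something along these lines (a hedged sketch: the paths are the ones from this example, the node name is a placeholder, and the exact option names may vary by version, so check rhpctl move gihome -help):

    # typically run as root, since the GI stack gets restarted
    rhpctl move gihome -sourcehome /u01/app/grid/crs1830 \
                       -destinationhome /u01/app/grid/crs1840 \
                       -node node1 -eval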

My personal suggestion, at least for your first experiences with the automaton, is to move the Oracle Home on one node at a time. This way, YOU control the relocation of the services and resources before doing the actual move operation.

Here is the execution for the first node:
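
That is, the same command without the eval option (again a hedged sketch, with a placeholder node name):

    rhpctl move gihome -sourcehome /u01/app/grid/crs1830 \
                       -destinationhome /u01/app/grid/crs1840 \
                       -node node1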

From this output you can see, at line 15 of the listing, that the cluster status is NORMAL; then the cluster is stopped on node 1 (lines 16 to 100), the active version is modified in the oracle-ohasd.service file (line 101), and the stack is started again with the new version (lines 102 to 171). The cluster status is now ROLLING PATCH (line 172). Finally, TFA and the node list are updated.

Before continuing with the other nodes, make sure that all the resources are up and running:
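
A quick (not exhaustive) check could be, for example:

    # overall resource status across the cluster
    crsctl stat res -t
    # patch level and rolling state of the cluster
    crsctl query crs activeversion -f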

You might also want to relocate your resources manually back to node 1 before continuing on node 2.
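
Depending on your configuration, that could be something like this (database, service and instance names are placeholders):

    # a RAC One Node database
    srvctl relocate database -db mydb -node node1
    # or a single service of an admin-managed RAC database
    srvctl relocate service -db mydb -service myapp -oldinst mydb2 -newinst mydb1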

After that, node 2 can follow the very same procedure:

As you can see, there are two differences here: in this case the second node was the last one, so the cluster status goes back to NORMAL, and the GIMR is patched with datapatch (lines 176-227).

At this point, the cluster has been patched. After some testing, you can safely remove the inactive version of Grid Infrastructure using the deinstall binary ($OLD_OH/deinstall/deinstall).

Quite easy, huh?

If you combine the Independent Local-mode Automaton with a home-developed solution for the creation and the provisioning of Grid Infrastructure Golden Images, you can easily achieve automated Grid Infrastructure patching of a big, multi-cluster environment.

Of course, Fleet Patching and Provisioning remains the Rolls-Royce: if you can afford it, GI patching and much more is completely automated and developed by Oracle, so you will have no headaches when new versions are released. But the local-mode automaton might be enough for your needs.

—

Ludo

Oracle Grid Infrastructure 18c patching part 2: Independent Local-mode Automaton architecture and activation

The first important step before starting to use the new Independent Local-mode Automaton is understanding its components inside a cluster.

Resources

Here’s the list of services that you will find when you install Grid Infrastructure 18c:
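
You can check the same resources on your own cluster with something like:

    crsctl stat res ora.rhpserver ora.helper ora.mgmt.ghchkpt.acfs ora.MGMT.GHCHKPT.advm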

As you can see, there are 4 components that are OFFLINE by default:

Three local resources (that are present on each node):

  • ora.MGMT.GHCHKPT.advm
  • ora.mgmt.ghchkpt.acfs
  • ora.helper

One cluster resource (active on only one server at a time, it can relocate):

  • ora.rhpserver

If you have ever worked with 12c Rapid Home Provisioning, those names should sound familiar.

The GHCHKPT filesystem (and its underlying volume) is used to store some data regarding the ongoing operations across the cluster during the GI home move.

The ora.helper is the process that actually executes the operations. It is a local resource because each node needs it to execute some actions at some point.

The rhpserver is the server process that coordinates the operations and delegates them to the helpers.

All those services compose the independent local-mode automaton, which is the default deployment. The full RHP framework (RHP Server and RHP Client) can be configured instead, with some additional work.

Important note: Just a few weeks ago Oracle changed the name of Rapid Home Provisioning (RHP) to Fleet Patching and Provisioning (FPP). The name is definitely more appealing now, but it generates again some confusion about product names and acronyms, so beware that in this series sometimes I refer to RHP, sometimes to FPP, but actually it is the same thing.

Tomcat?

You might have noticed that Tomcat is now deployed in the GI home, and that there are patches specific to it (here I paste the 18.4 version):

 

Indeed Tomcat is registered in the inventory and patched just like any other product inside the OH:
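
You can verify this yourself, for instance with:

    # list the inventory details and look for Tomcat among the components
    $ORACLE_HOME/OPatch/opatch lsinventory -details | grep -i tomcat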

 

Out of the box, Tomcat is used for Quality of Service Management (the ora.qosmserver resource):

But it is also used by the Independent Local-mode Automaton, once that is started.

Enabling and starting the independent local-mode automaton

The resources are started using the following commands (as root, the order is quite important):
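
A hedged sketch of those commands (the volume and diskgroup names come from the resource names above; the exact srvctl options may differ slightly across versions, so check the corresponding -help output):

    # as root
    srvctl enable volume -volume GHCHKPT -diskgroup MGMT
    srvctl start volume -volume GHCHKPT -diskgroup MGMT
    # the ACFS filesystem resource is addressed by its volume device
    DEV=$(ls /dev/asm/ghchkpt-*)
    srvctl enable filesystem -device $DEV
    srvctl start filesystem -device $DEV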

Before continuing with the rhpserver resource, you might want to check if the filesystem is mounted:
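
For example:

    srvctl status filesystem -device $(ls /dev/asm/ghchkpt-*)
    # or simply
    mount | grep -i ghchkpt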

Now the rhpserver should start without problems, as oracle:
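
That is, as the grid owner:

    srvctl start rhpserver
    srvctl status rhpserver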

Please note that if you do not start the filesystem first, the rhpserver will fail to start.

As you can see, now both rhpserver and the helper are online:

Now all is set to start using it!

We’ll see how to use it in the next posts.

Ludo

 

Oracle Grid Infrastructure 18c patching part 1: Some history

Down the memory lane

Although I sometimes think I have been working with Oracle Grid Infrastructure for as long as it has existed, my memory does not always serve me well. I still like to go through the Oracle RAC family history from time to time:

  • 8i -> no Oracle cluster existed. RAC was leveraging 3rd-party clusters (like Tru Cluster, AIX HACMP, Sun Cluster)…
  • 9i -> if I remember correctly, Oracle hired some Tru Cluster developers after the acquisition of Compaq by HP. Oracle CRS was born, and it was quite similar to Tru Cluster (the commands were almost the same: crs_stat instead of caa_stat, etc.)
  • 10g -> Oracle re-branded CRS to Clusterware
  • 11g -> With the addition of ASM (and other components), Oracle created the concept of "Grid Infrastructure", composed of Clusterware and additional products. All the new versions still use the name Grid Infrastructure, and new products have been added through the years (ACFS, RHP, QoS …)

But some memories are missing. For example, I cannot remember ever having upgraded an Oracle cluster from 9i to 10g or from 10g to 11g. At that time I was working for several customers, and every new release was installed on new hardware.

My first real upgrade (as far as I can remember) was from 11gR2 to 12c, where the upgrade process was a nice, OUI-driven, out-of-place install.

The process was (still is 🙂 ) nice and smooth:

  • The installer copies, prepares and links the binaries on all the nodes in a new Oracle Home
  • The upgrade process is rolling: the first node puts the cluster in upgrade mode
  • The last node does the final steps and takes the cluster out of the upgrade mode.

This is about Upgrading to a new release. But what about patching?

In-place patching

Patching of Grid Infrastructure has always been in-place and, I will not hide it, quite painful.

If you wanted to patch a Grid Infrastructure before release 12cR2, you had to:

  • read the documentation carefully and check for possible conflicts
  • backup the Grid Home
  • copy the patch on the host
  • evacuate all the services and databases from the cluster node that you want to patch
  • patch the binaries (depending on the versions and patches, this might be easy with opatchauto or quite painful with manual unlocking/locking and manual opatch steps)
  • restart/relocate the services back on the node
  • repeat the tasks for every node

The disadvantages of in-place patching are many:

  • Need to stage the patch on every node
  • Need to repeat the patching process for every node
  • No easy rollback (some bad problems might lead to deconfiguring the cluster from one node and then adding it back to the cluster)

Out-of-place patching

Out-of-place patching has proven to be a much better solution. I have been doing it regularly for a while for Oracle Database homes, and I am very satisfied with it. I am implementing it at CERN as well, and it will unlock new levels of server consolidation 🙂

I have written a blog series here, and presented about it a few times.

But out-of-place patching for Grid Infrastructure is VERY recent.

12cR2: opatchauto

Oracle 12cR2 introduced out-of-place patching as a new feature of opatchauto.

This MOS document explains it quite in detail:

Grid Infrastructure Out of Place ( OOP ) Patching using opatchauto (Doc ID 2419319.1)

The process is the following:

  • a preparation process clones the active Oracle Home on the current node and patches it
  • a switch process switches the active Oracle Home from the old one to the prepared clone
  • those two phases are repeated for each node

(Diagram: the 12cR2 out-of-place patching flow with opatchauto)
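
In practice, this boils down to two opatchauto invocations per node (a hedged sketch: the patch location is a placeholder, and the exact options should be double-checked against the MOS note and your opatchauto version):

    # on each node, as root: clone the active GI home and patch the clone
    opatchauto apply /u01/stage/<RU_patch_dir> -prepare-clone
    # later, on each node, as root: switch the stack to the patched clone
    opatchauto apply /u01/stage/<RU_patch_dir> -switch-clone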

The good thing is that the preparation can be done in advance on all the nodes and the switch can be triggered only if all the clones are patched successfully.

However, the staging of the patch, the cloning and patching must still happen on every node, making the concept of golden images quite useless for patching.

It is worth mentioning, at this point, that Grid Infrastructure golden images ARE A THING, and that they were introduced with Rapid Home Provisioning 12cR2, where automatic cluster provisioning was added as a new feature.

These Grid Infrastructure golden images have already been mentioned here and here.

I have discussed Rapid Home Provisioning itself here, but I will add a couple of thoughts in the next paragraph.

18c and the brand new Independent local-mode Automaton

I was an early tester of the Rapid Home Provisioning product when it was released with Oracle 12.1.0.2. I have presented about it at UKOUG and as a RAC SIG webinar.
https://www.youtube.com/watch?v=vaB4RWjYPq0
http://www.ludovicocaldara.net/dba/rhp-presentation/

I liked the product A LOT, despite a few bugs due to the initial release. The concept of out-of-place patching that RHP uses is, in my opinion, the best one to cope with frequent patches and upgrades.

Now, with Oracle 18c, the Rapid Home Provisioning Independent Local-mode Automaton comes into play. There is not much documentation about it, even in the Oracle documentation, but a few things are clear:

  • The Independent local-mode automaton comes without additional licenses as it is not part of the RHP Server/Client infrastructure
  • It is 100% local to the cluster where it is used
  • Its main “job” is to allow moving Grid Infrastructure Homes from a non-patched version to an out-of-place patched one.

I will not disclose more here, as the rest of this blog series is focused on this new product 🙂

Stay tuned for details, examples and feedback from its usage at CERN 😉

Ludo

Port conflict with “Oracle Remote Method Invocation (ORMI)” during Grid Infrastructure install

After years of installing Grid Infrastructures, today I got, for the first time, an error about something new:

Looking at the logs (which I no longer have, as I removed them as part of the failed install cleanup 🙁 ), the error is generated by the Cluster Verification Utility (CVU) on this check:

The components verified by the CVU can be found inside $ORACLE_HOME/cv/cvdata/. In my case, precisely:

This check is critical, so the install fails.

In my case the port was used by mcollectived.
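
If you hit something similar, finding the culprit is a one-liner (the port number is a placeholder, take it from the CVU error):

    # which process is listening on the conflicting port? (run as root to see the process name)
    ss -ltnp | grep ':<port>'
    # or, on older distributions
    netstat -ltnp | grep ':<port>'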

The port has been taken dynamically, and previous runs of CVU did not encounter the problem.

A rare port conflict that might happen when configuring GI šŸ™‚

Ludo

Grid Infrastructure 18c: changes in gridSetup.sh -applyRU and -createGoldImage

Starting with release 12cR2, Grid Infrastructure binaries are no longer shipped as an installer, but as a zip file that is uncompressed directly into the Oracle Home path.
This opened a few new possibilities including patching the software before the Grid Infrastructure configuration.
My former colleague Markus Flechtner wrote an excellent blog post about it, here: https://www.markusdba.net/?p=294

Now, with 18c, there are a couple of things that have changed compared to Markus' blog post.

The -applyRU switch replaces the -applyPSU

While it is possible to apply several sub-patches of a PSU one by one:

it was possible to do it all at once with:

Now the switch is called -applyRU, for consistency with the patch naming.

E.g.:
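
Something like this (the patch directory is a placeholder for the unzipped Release Update):

    cd /u01/app/grid/crs1840      # the new, still unconfigured home
    ./gridSetup.sh -silent -applyRU /u01/stage/<GI_RU_dir>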

There is still no option to avoid running the Setup Wizard, but it is safe to ignore the error, as the patch has been applied successfully.

The -createGoldImage switch no longer works if the home is not attached

I tried to create the golden image as per Markus' post, but I got this error:

To work around the issue, there are two options:

  1. Create a zip file manually, as all the content needed to install the patched version is right there. No need to touch anything as the software is not configured yet.
  2. Configure the software with CRS_SWONLY before creating the gold image:
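
A hedged sketch of option 2 (the response file name is hypothetical, and the gold image destination must already exist):

    # register the home software-only (no configuration), with oracle.install.option=CRS_SWONLY
    ./gridSetup.sh -silent -responseFile /tmp/grid_swonly.rsp
    # then create the gold image from the now-attached home
    ./gridSetup.sh -silent -createGoldImage -destinationLocation /u01/stage/images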

 

HTH

Ludo

Setting Grid Infrastructure 18c Oracle Home name during the install

A colleague had been struggling for some time to set the correct Oracle Home name for Grid Infrastructure 18.3.0 when running gridSetup.sh.

In the graphical Oracle Universal Installer there is no way (as far as we could find) to set the home name. Moreover, we intended to automate the Grid Infrastructure installation.

The complete responsefile ($OH/inventory/response/oracle.crs_Complete.rsp) contains the parameter:

However, when using a response file with this parameter set, gridSetup.sh fails with the error:

After some tries (and an SR), this is what actually works:

  • strip the ORACLE_HOME_NAME parameter from the response file
  • pass it as a double-quoted parameter at the end of the gridSetup.sh command line, as in the sketch below
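
Something like this (the home name and the response file path are placeholders):

    ./gridSetup.sh -silent -responseFile /u01/stage/grid_18.3.rsp \
        "ORACLE_HOME_NAME=OraGI183Home1"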

HTH

Oracle Home Management ā€“ part 3: Strengths and limitations of Rapid Home Provisioning

In the previous post I mentioned that having a central repository storing the golden images would be the best solution for Oracle Home provisioning.

In this context, Oracle provides Rapid Home Provisioning: a product included in Oracle Grid Infrastructure that automates the provisioning and patching of Oracle Database and Grid Infrastructure homes, databases, and also generic software.

Oracle Rapid Home Provisioning tremendously simplifies software provisioning: you can use it to create golden images starting from existing installations and then deploy them locally, across different nodes, on local or remote clusters, standalone servers, etc.

Having a central store with enforced naming conventions ensures software standardization across the whole Oracle farm, and makes patching easier and less risky. It also allows you to patch existing databases, moving them to Oracle Homes with a higher patch level, while taking care of service draining and rolling upgrades when RAC or RAC One Node deployments exist. Multiple databases can be patched in a single batch with one rhpctl command, as sketched below.
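
For example, moving several databases to a patched working copy can look roughly like this (a hedged sketch: the working copy and database names are placeholders, and the exact options should be checked with rhpctl move database -help for your release):

    rhpctl move database -sourcewc wc_db_18_3 -patchedwc wc_db_18_4 \
                         -dbname db1,db2,db3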

I will not explain the technical details of the Rapid Home Provisioning implementation and operation. I already did a webinar about it a couple of years ago for the RAC SIG:

Burt Clouse, the RHP product manager, also did a presentation about Rapid Home Provisioning 12c Release 2, which highlights some new features that the product was missing in the first release:

More details about the new features can be found here:

https://blogs.oracle.com/db_maintenance/whats-new-in-122-for-rapid-home-provisioning-and-maintenance

Close to being the perfect product, but…

If Rapid Home Provisioning is so powerful, what makes it less appealing for most users?

In my opinion (read: very own personal opinion 🙂 ), there are two main factors:

First: the technology stack RHP relies on is quite complex

Although Rapid Home Provisioning 12c Release 2 allows Oracle Home deployments on standalone servers (this was not the case with 12c Release 1), the Rapid Home Provisioning server itself relies on Oracle Grid Infrastructure 12cR2. That means that there must be skills in the company to manage the full stack: Clusterware, ASM, ACFS, NFS, GNS, SCAN, etc., as well as the RHP Server itself.

Second: remote provisioning requires the Lifecycle Management Pack (an extra-cost option) licensed on all the RHP targets

If Oracle Homes are deployed on the same cluster that hosts the RHP Server, the product can be used at no extra cost. But if you have many clusters, or use standalone servers for your Oracle databases, then RHP can become pricey very quickly: the Lifecycle Management Pack costs $12,000 per processor, plus support (price list of April 2018). So, buying this management pack just to introduce Rapid Home Provisioning in your company might be an excessive investment.

Of course, depending on your needs, you can evaluate it, leverage its full potential and achieve a bigger return on investment.

Or you might explore whether it is viable to configure each cluster as a Rapid Home Provisioning Server: in this case it would be free, but it adds the extra complexity layer to all your clusters.

For small companies, simple architectures and especially where Standard Edition is deployed (no Management Pack for Standard Edition!), a self-made, simpler solution might be a better choice.

In the next post, before going into the details of a hypothetical self-made implementation, I will introduce my thoughts about the New Oracle Database Release Model.

 

My own Dbvisit Replicate integration with Grid Infrastructure

I am helping my customer with a PoC of Dbvisit Replicate as a logical replication tool. I will not discuss (at least, not in this post) the capabilities of the tool itself, its configuration or the caveats that you should beware of when you do logical replication. Instead, I will concentrate on how we will likely integrate it in the current environment.

My role in this PoC is to make sure that the tool will be easy to operate from the operational point of view, and the database operations, here, are supported by Oracle Grid Infrastructure and cold failover clusters.

Note: there are official Dbvisit online resources about how to configure Dbvisit Replicate in a cluster. I aim to complement that information, not copy it.

Quick overview

If you know Dbvisit replicate, skip this paragraph.

There are three main components in Dbvisit Replicate: the FETCHER, the MINE and the APPLY processes. The FETCHER gets the redo stream from the source and sends it to the MINE process. The MINE process processes the redo stream and converts it into proprietary transaction log files (named plogs). The APPLY process gets the plog files and applies the transactions on the destination database.

From an architectural point of view, MINE and APPLY do not need to run close to the databases that are part of the configuration. The FETCHER process, by contrast, needs to be local to the source database online log files (and archived logs).

Because the MINE process is the most resource-intensive, it is not convenient to run it where the databases reside, as it might consume precious CPU resources that are licensed for Oracle Database. So, first step in this PoC: the FETCHER processes will run on the cluster, while MINE and APPLY will run on a dedicated virtual machine.

(Diagram: Dbvisit Replicate processes and their placement on the Grid Infrastructure cluster and the dedicated VM)

Clustering considerations

  • the FETCHER does NOT need to run on the server of the source database: having access to the online logs through the ASM instance is enough
  • to avoid SPoF, the fetcher should be a cluster resource that can relocate without problems
  • to simplify the configuration, the FETCHER configuration and the Dbvisit binaries should be on a shared filesystem (the FETCHER does not persist any data, just the logs)
  • the destination database might be literally anywhere: the APPLY connects via SQL*Net, so a correct name resolution and routing to the destination database are enough

So the implementation steps are:

  1. create a shared filesystem
  2. install dbvisit in the shared filesystem
  3. create the Dbvisit Replicate configuration on the dedicated VM
  4. copy the configuration files on the cluster
  5. prepare an action script
  6. configure the resource
  7. test!

Convention over configuration: the importance of a strong naming convention

Before starting the implementation, I decided to put on paper all the caveats related to the FETCHER resource relocation:

  • Where will the configuration files reside? Dbvisit has an important variable: the Configuration Name. All the operations are done by passing a configuration file named /{PATH}/{CONFIG_NAME}/{CONFIG_NAME}-{PROCESS_TYPE}.ddc to the dbvrep binary. So, I decided to put ALL the configuration directories under the same path: given the Configuration Name, I will always be able to get the configuration file path.
  • How will the configuration files relocate from one node to the other? Easy here: they won’t. I will use an ACFS filesystem
  • How can I link the cluster resource with its configuration name? Easy again: I call my resources dbvrep.CONFIGNAME.PROCESS_TYPE. e.g. dbvrep.FROM_A_TO_B.fetcher
  • How will I manage the need to use a new version of dbvisit in the future? Old and new versions must coexist: Instead of using external configuration files, I will just use a custom resource attribute named DBVREP_HOME inside my resource type definition. (see later)
  • What port number should I use? Of course, many fetchers started on different servers should not have conflicts. This is something that might be either planned or made dynamic. I will opt for the first one. But instead of getting the port number inside the Dbvisit configuration, I will use a custom resource attribute: DBVREP_PORT.

Considerations on the FETCHER listen address

This requires a dedicated paragraph. The Dbvisit documentation suggests creating a VIP, binding on the VIP address and creating a dependency between the FETCHER resource and the VIP. Here is where my configuration will differ.

Having a separate VIP per FETCHER resource might potentially lead to dozens of VIPs in the cluster. Everything will depend on the success of the PoC and on how many internal clients will decide to ask for such an implementation. Many VIPs == many interactions with network admins for address reservation, DNS configuration, etc. Long story short, it might slow down the creation and maintenance of new configurations.

Instead, each FETCHER will listen to the local server address, and the action script will take care of:

  • getting the current host name
  • getting the current ASM instance
  • changing the settings of the specific Dbvisit Replicate configuration (ASM instance and FETCHER listen address)
  • starting the FETCHER

Implementation

Now that all the caveats and steps are clear, I can show how I implemented it:

Create a shared filesystem
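
A hedged sketch of how such an ACFS filesystem can be created (diskgroup, size, mount path and the generated volume device suffix are placeholders):

    # as the grid owner: create an ADVM volume
    asmcmd volcreate -G DATA -s 10G DBVREP
    asmcmd volinfo -G DATA DBVREP        # note the Volume Device path

    # as root: format it as ACFS and register it as a clusterware-managed filesystem
    mkfs -t acfs /dev/asm/dbvrep-<nn>
    mkdir -p /u01/app/dbvisit
    srvctl add filesystem -device /dev/asm/dbvrep-<nn> -path /u01/app/dbvisit -user oracle
    srvctl start filesystem -device /dev/asm/dbvrep-<nn>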

Install dbvisit in the shared filesystem

Create the Dbvisit Replicate configuration on the dedicated VM

Copy the configuration files from the Dbvisit VM to the cluster

Prepare an action script
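
The action script follows the usual Clusterware contract: it must handle start, stop, check and clean, and return 0 on success. A minimal, hypothetical skeleton (the real dbvrep invocations and the per-node adjustments go where the comments are; custom resource attributes are exposed to the script by the agent as _CRS_<ATTRIBUTE> environment variables):

    #!/bin/bash
    # Hypothetical skeleton of the FETCHER action script.
    CONF_NAME=FROM_A_TO_B                   # in the real script, derived from the resource name
    DDC=/u01/app/dbvisit/${CONF_NAME}/${CONF_NAME}-FETCHER.ddc
    DBVREP_HOME=${_CRS_DBVREP_HOME}         # custom attribute (see the resource definition below)

    case "$1" in
      start)
        # adjust the ASM instance and the listen address for the local node, then
        # start the FETCHER by passing its ddc file to the dbvrep binary under ${DBVREP_HOME}
        :
        ;;
      stop|clean)
        # stop the FETCHER for this configuration
        :
        ;;
      check)
        # return 0 only if the FETCHER process for this configuration is running
        pgrep -f "${CONF_NAME}-FETCHER" >/dev/null || exit 1
        ;;
    esac
    exit 0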

Configure the resource
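
A hedged sketch of the registration, following the naming convention above (the action script path and attribute values are hypothetical, and the multi-attribute syntax of crsctl add type should be double-checked against the documentation for your version):

    # as root (or a user with the required privileges): a custom type with the custom attributes
    crsctl add type dbvrep.fetcher.type -basetype cluster_resource \
      -attr "ATTRIBUTE=DBVREP_HOME,TYPE=string","ATTRIBUTE=DBVREP_PORT,TYPE=int"

    # the FETCHER resource for the FROM_A_TO_B configuration
    crsctl add resource dbvrep.FROM_A_TO_B.fetcher -type dbvrep.fetcher.type \
      -attr "ACTION_SCRIPT=/u01/app/dbvisit/scripts/dbvrep_fetcher.scr,DBVREP_HOME=/u01/app/dbvisit/replicate,DBVREP_PORT=7901"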

Test!

 

The relocation also worked as expected: when the settings are modified through:

the MINE process gets the change dynamically, so there is no need to restart it.

Last consideration

Adding a hard dependency between the DB and the FETCHER will require stopping the DB with the force option, or always stopping the fetcher before the database. Also, the start of the DB will pull up the FETCHER (pullup:always), and the opposite as well. We will consider further whether we will use this dependency or manage it differently (e.g. through the action script).

A hard dependency declared without the global keyword will always start the fetcher on the server where the database runs. This is not required, but it might be nice to have the fetcher on the same node. Again, a consideration that we will discuss further.
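
For reference, this is roughly how such dependencies can be declared (the database resource name is a placeholder, and the fetcher resource name is the hypothetical one used above):

    crsctl modify resource dbvrep.FROM_A_TO_B.fetcher -attr \
      "START_DEPENDENCIES='hard(ora.mydb.db) pullup:always(ora.mydb.db)',STOP_DEPENDENCIES='hard(ora.mydb.db)'"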

HTH

Ludovico

Get the Most out of Oracle Data Guard – The material

Here we go: as usual, the question that I get after my talks (specifically, after the POUG High Five conference) is whether I will share my demo scripts and material.

Sadly, the demos I am doing for my presentation "Get the most out of Oracle Data Guard" are quite tied to an environment built for the purpose of the demos. So, do not expect scripts that are easy to use as-is, but rather some ideas beyond the demos themselves.

I hope they will help to get the whole picture.

Of course, if you need to implement a cloning strategy based on Data Guard or any other solution that I describe in this post, please feel free to contact me; I will be glad to help you implement it in your environment.

Slides

Demo 1

Video:

Scripts:

 

Demo 2

Video:


Scripts:

 

Demo 3

Video:

Scripts:

Preparation:

snap_acfs.pl

 

snap_databasae.pl

clone_from_snap.pl

Cheers

Ludovico

12.1.0.2 Bundle Patch 170718 breaks Data Guard and Duplicate from active database

Recently my customer patched its 12.1.0.2 databases with Bundle Patch 170718 on the new servers (half of the customer’s environment). The old servers are still on the 161018 Bundle Patch.

We realized that we could no longer move the databases from the old servers to the new ones, because the duplicate from active database was failing with this error:

The last lines show the same error that Franck blogged about some months ago.

Oracle 12.2 introduced an incompatibility with previous releases in remote file transfer via SQL*Net. At least, this is how it seems. According to Oracle, this is due to a bugfix present in Oracle 12.2.

Now, the Bundle Patch that we installed (BP 170718) contains the same bugfix (the patch for bug 18633374).

So, the incompatibility happens now between databases of the same “Major Release” (12.1.0.2).

There are two possible workarounds:

  1. Apply the same patch level on both sides (BP170718 in my case)
  2. Apply just the patch 18633374 on top of your current PSU/DBBP (a merge might be necessary).

We used the second approach, and now we can set up Data Guard again to move our databases without downtime:
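
That is, the usual standby instantiation over the network works again between the two patch levels (a generic sketch, with placeholder connection strings):

    # the auxiliary instance on the destination server is started in NOMOUNT beforehand
    rman target sys@db_source auxiliary sys@db_newserver
    RMAN> duplicate target database for standby from active database nofilenamecheck;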

HTH

Ludovico