I had been installing Grid Infrastructure 18c for a while, then switched to 19c when it became GA.
At the beginning I was overly enthusiastic about the shorter installation time:
Grid Infra 19c install process is MUCH faster than 18c/12cR2. Mean time for 2 node clusters @ CERN (incl. volumes, puppet runs, etc.) lowered from 1h30 to 45mins. No GIMR anymore by default!
— Ludovico Caldara (@ludodba) May 5, 2019
The GIMR is now optional: installing it is the customer's choice, and a customer might want to keep it or not, depending on their practices.
Not having the GIMR by default also means not having the local-mode automaton. This is not a problem either: the default configuration is good for most customers and works really well.
This new simplified configuration reduces some maintenance effort from the start, but I personally use the local-mode automaton a lot for out-of-place patching of Grid Infrastructure (read my blog posts to know why I really love it), so it is something that I definitely need in my clusters.
A choice that makes sense for Oracle and most customers
Oracle's vision for Grid Infrastructure consists of central management of clusters, using the Oracle Domain Services Cluster. In this kind of deployment, the Management Repository, TFA, and many other services are centralized. All the clusters use those services remotely instead of having them configured locally. The local-mode automaton is no exception: the full, enterprise-grade version of Fleet Patching and Provisioning (FPP, formerly Rapid Home Provisioning or RHP) allows much more than just out-of-place patching of Grid Infrastructure, so it makes perfect sense to avoid those local configurations everywhere if you use a Domain Cluster architecture. Read more here.
Again, as I have said many times in the past, out-of-place patching is in my opinion the best approach; but if you keep doing in-place patching, not having the local-mode automaton is not a problem at all, and the 19c default behavior is a good thing for you.
I need the local-mode automaton on 19c. What do I need to do at install time?
If you have many clusters, you are not installing them by hand with the graphical interface (hopefully!). In the responseFile for a 19c Grid Infrastructure installation, this is all you need to change compared to 18c:
$ diff grid_install_template_18.rsp grid_install_template_19.rsp
As you can see, Flex ASM is also not part of the default configuration anymore in 19c.
Once you specify in the responseFile that you want the GIMR, the local-mode automaton is installed as well by default.
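For reference, the GIMR-related parameters in a 19c gridsetup.rsp look roughly like the excerpt below. This is a sketch reconstructed from memory, not a copy of my template: verify the exact parameter names against the response file shipped with your release.

```
# Hypothetical excerpt of gridsetup.rsp -- verify names against your own template
# Install the Grid Infrastructure Management Repository (default in 19c: false)
oracle.install.crs.configureGIMR=true
# Whether the GIMR goes into a separate disk group (false = same DG as OCR/voting)
oracle.install.asm.configureGIMRDataDG=false
```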
I installed GI 19c without the GIMR and the local-mode automaton. How can I add them to my new cluster?
First, recreate the empty MGMTDB CDB by hand:
$ dbca -silent -createDatabase -sid -MGMTDB -createAsContainerDatabase true \
-templateName MGMTSeed_Database.dbc -gdbName _mgmtdb \
-storageType ASM -diskGroupName +MGMT \
-datafileJarLocation $OH/assistants/dbca/templates \
-characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Prepare for db operation
Registering database with Oracle Grid Infrastructure
Copying database files
Creating and starting Oracle instance
Completing Database Creation
Executing Post Configuration Actions
Database creation complete. For details check the logfiles at:
Global Database Name:_mgmtdb
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb/_mgmtdb2.log" for further details.
Then, configure the PDB for the cluster. Pay attention to the -local switch, which is not documented (or at least does not appear in the inline help):
$ mgmtca -local
After that, you can check that you have the PDB for your cluster inside the MGMTDB; I'll skip this step.
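If you want to check it anyway, a quick sketch (note the leading dash in the SID, which must be set exactly; the environment setup may differ on your system):

```
(oracle)$ # ORACLE_HOME must point to the Grid Infrastructure home
(oracle)$ export ORACLE_SID=-MGMTDB
(oracle)$ sqlplus -s / as sysdba <<EOF
show pdbs
EOF
```

Besides PDB$SEED, you should see a PDB named after your cluster.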
Before creating the rhpserver (local-mode automaton resource), we need the volume and filesystem to make it work (read here for more information).
ASMCMD> volcreate -G MGMT -s 1536M --column 8 --width 1024k --redundancy unprotected GHCHKPT
ASMCMD> volinfo --all
Diskgroup Name: MGMT
Volume Name: GHCHKPT
Volume Device: /dev/asm/ghchkpt-303
Size (MB): 1536
Resize Unit (MB): 64
Stripe Columns: 8
Stripe Width (K): 1024
(oracle)$ mkfs -t acfs /dev/asm/ghchkpt-303
(root)# $CRS_HOME/bin/srvctl add filesystem -d /dev/asm/ghchkpt-303 -m /opt/oracle/rhp_images/chkbase -u oracle -fstype ACFS
(root)# $CRS_HOME/bin/srvctl enable filesystem -volume ghchkpt -diskgroup MGMT
(root)# $CRS_HOME/bin/srvctl start filesystem -volume ghchkpt -diskgroup MGMT
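Before moving on, you can verify that the ACFS filesystem resource is online; the -volume and -diskgroup switches mirror the enable/start commands above (a quick sanity check, not a required step):

```
(oracle)$ $CRS_HOME/bin/srvctl status filesystem -volume ghchkpt -diskgroup MGMT
```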
Finally, create the local-mode automaton resource:
(root)# $CRS_HOME/bin/srvctl add rhpserver -local -storage /opt/oracle/rhp_images
Again, note the -local switch that is not documented. Specifying it creates the resource as a local-mode automaton and not as a full FPP Server (or RHP Server; this name change drives me mad when I write blog posts about it 🙂 ).
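To give an idea of what the automaton buys you: once the resource is running, out-of-place patching of the GI home becomes a single move operation. A sketch (the home paths are examples, not from the original post; check rhpctl move gihome -help for the options available in your release):

```
(root)# $CRS_HOME/bin/srvctl start rhpserver
(root)# $CRS_HOME/bin/rhpctl move gihome -sourcehome /u01/app/19.0.0/grid -destinationhome /u01/app/19.7.0/grid
```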
The local-mode automaton is not possible for Oracle Restart, right?
Is there any other way to do local out-of-place patching for Oracle Restart GI other than using “opatchauto apply -outofplace”?
Correct, the local-mode automaton requires the full GI stack. As Oracle Restart has been deprecated for a while, I have not used it in a long time, so I do not know the answer…
I guess there is no other way except in-place patching. I am not sure about “opatchauto apply -outofplace” for Restart either; if you manage to make it work, it would be nice if you commented here 🙂
What is the structure of the MGMTDB in 19.6? I mean, which users must it contain at creation? After creating the MGMTDB I run oclumon and I get a login failure.
I am struggling with the rhpserver resource. It seems there was a “-storage” option to change the storage location, but in v18 it seems to be gone.
The problem is that I can’t remove the rhpserver resource because of a missing DBFS_DG (it was removed), and I want to try changing the storage location before removing the rhpserver resource.
I see that there are two similar Oracle products: Oracle FPP, based on Grid Infrastructure and the rhpctl utility, AND Oracle Fleet Maintenance, based on EM and the emcli utility. Why? Which one will be the major product, and which one will be deprecated? There are no links in the documentation to the competitor product.
Indeed EM Fleet Maintenance and FPP are distinct products with many overlapping features.
EM Fleet Maintenance
– it is developed by the EM development team.
– it has a web console to ease usage
– it is part of DBLM, which has many EM features that go way beyond what FPP does
– it is not validated by the MAA team
– it cannot patch Exadata infrastructure
– its evolution is tied with the evolution of EM
Fleet Patching and Provisioning
– it is developed by the ST development team (RAC, Exadata, etc.)
– it does not have any interface other than the command line and the RESTful APIs
– it is easy to integrate with tools like Ansible, chef, or puppet
– it is particularly good when it comes to RAC patching, and it is MAA compliant
– it can patch Exadata infrastructure
– its evolution is tied with the evolution of the Database and Clusterware
– Windows targets are not supported
One of the major differences is that you can use FPP for free in RAC and RAC One Node environments. This makes it very appealing if you are not looking for any exclusive features of EM Fleet Maintenance.
If you have mainly single instances and already have an EM deployment, EM would probably make more sense.
Thank you for sharing this article.