Script to check Data Guard status from SQL

In a previous blog post I explained how to get the basic configuration from x$drc and display something like:

There are other possibilities, for example by using the DBMS_DRS PL/SQL package.

The package is quite rich. In order to get more details, I use CHECK_CONNECT to check the connectivity to the member databases:

Example:
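A minimal sketch of the three calls (DBMS_DRS is undocumented: the two-argument signature — member name, then instance name — is an assumption based on my tests, and the instance name below is hypothetical):

set serveroutput on
-- 1) member of the configuration, reachable: no exception raised
exec sys.dbms_drs.check_connect('TOOLCDB1_SITE2','TOOLCDB1');
-- 2) database that is not part of the broker configuration: raises an error
exec sys.dbms_drs.check_connect('NOT_IN_CONFIG','TOOLCDB1');
-- 3) member of the configuration but not reachable: TNS-style error
exec sys.dbms_drs.check_connect('TOOLCDB1_SITE3','TOOLCDB1');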

In the first case I get no exception, which means that the database is reachable using the DGConnectIdentifier specified in the configuration (‘TOOLCDB1_SITE2’ is my database name in the configuration; it is NOT a TNS entry, as I use EZConnect in my lab).

In the second case I specify a database that is not in the configuration.

In the third case, it looks like the database is down (no service), or the DGConnectIdentifier is not correct.


GET_PROPERTY_OBJ is useful to get a single property of a database/instance:

Example:
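A sketch of the call (the signature — the object_id from x$drc plus the property name — is assumed, as the package is undocumented; the object_id below is illustrative):

set serveroutput on
declare
  l_value varchar2(4000);
begin
  -- 16842752 is a hypothetical object_id of the primary, taken from x$drc
  l_value := sys.dbms_drs.get_property_obj(16842752, 'TransportLagThreshold');
  dbms_output.put_line('TransportLagThreshold = '||l_value);
end;
/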

Here I get, for the primary (whose object_id comes from x$drc), a TransportLagThreshold of 30 seconds.

DO_CONTROL does a specific check and returns a document with the results:

The problem is… what’s the format for indoc?

To get the correct format, I enabled SQL trace to capture the executions, with bind variables, of the dgmgrl commands. It turns out that the input format is XML and the output format is HTML.

This is how you can get the LogXptStatus, for example:
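A sketch of what it looks like (the indoc content is what I captured in the trace; the exact DO_CONTROL signature is an assumption, as DBMS_DRS is undocumented and may differ between versions):

set serveroutput on
declare
  l_indoc  varchar2(4000);
  l_outdoc varchar2(32767);
begin
  -- the monitor document as captured from the dgmgrl bind variables
  l_indoc  := '<DO_MONITOR version="19.1">LogXptStatus</DO_MONITOR>';
  l_outdoc := sys.dbms_drs.do_control(l_indoc);  -- assumed signature
  dbms_output.put_line(l_outdoc);                -- HTML document with the results
end;
/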


The big script

So I said… why not try to write a comprehensive SQL script that checks a few vital Data Guard statuses?

This is the script that came out:

Of course, it is not perfect (many checks are missing: FSFO readiness, observer checks, etc.), but it is good enough for basic monitoring. It is also faster than a typical shell+dgmgrl script.

Output on a Primary database:

Output on a standby database:

In case of errors (e.g. standby listener stopped), I would get:

So it is easy to spot the error and use a shell wrapper to grep for ^ERROR or similar.
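Such a wrapper can be trivial (the script name and the notification command are placeholders):

#!/bin/bash
# run the Data Guard check script and alert on any ERROR line
sqlplus -s / as sysdba @dg_check.sql | grep '^ERROR' && \
  echo "Data Guard check failed on $(hostname)" | mailx -s "DG alert" dba@example.com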

Be careful, the script is not RAC aware, and it lacks some checks, so you might want to reuse it and extend it to fit your exact configuration.

Hope you like it!

Ludovico

Real-Time Cascade Standby Container Databases without Oracle Managed Files

OK, the title might not be the best… I would just like to add more detail to content you can already find in other blogs (e.g. this nice one from Philippe Fierens: http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

I have this Cascade Standby configuration:

Years ago I wrote this whitepaper about cascaded standbys:
https://fr.slideshare.net/ludovicocaldara/2014-603-caldarappr
While it is still relevant for non-CDBs, things have changed with Multitenant architecture.

In my config, the Oracle Database version is 19.7 and the databases are actually CDBs. No Grid Infrastructure, non-OMF datafiles.
It is important to highlight that a lot of things have changed since 12.1. And because 19c is the LTS version now, it does not make sense to try anything older.

First, I just want to make sure that my standbys are aligned.

Primary:

1st Standby alert log:

2nd Standby alert log:

Then, I create a pluggable database (from PDB$SEED):
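For example (names are illustrative):

create pluggable database PNEW admin user pdbadmin identified by "MySecret";
alter pluggable database PNEW open;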

On the first standby I get:

On the second:

So, yeah, not having OMF might get you some warnings like: WARNING: File being created with same name as in Primary
But it is good to know that the cascade standby deals well with new PDBs.

Of course, this is not of great interest, as I know that the problem with Multitenant comes from CLONING PDBs, either from local or remote PDBs, in read-write mode.

So let’s try a relocate from another CDB:
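The statement is something like this (the PDB name and the db link are illustrative; RELOCATE is documented CREATE PLUGGABLE DATABASE syntax):

create pluggable database MYPDB from MYPDB@cdb2_link relocate;
alter pluggable database MYPDB open;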

This is what I get on the first standby:

and this is on the cascaded standby:

So absolutely the same behavior between the two levels of standby.
According to the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC
I quote what is specified for the parameter STANDBYS={ALL|NONE|…}:
“If you include a PDB in a standby CDB, then during standby recovery the standby CDB will search for the data files for the PDB. If the data files are not found, then standby recovery will stop and you must copy the data files to the correct location before you can restart recovery.”

“Specify ALL to include the new PDB in all standby CDBs. This is the default.”

“Specify NONE to exclude the new PDB from all standby CDBs. When a PDB is excluded from all standby CDBs, the PDB’s data files are unnamed and marked offline on all of the standby CDBs. Standby recovery will not stop if the data files for the PDB are not found on the standby. […]”

So, in order to avoid the MRP crashing, I should have included STANDBYS=NONE.
But the documentation is not up to date, because in my case the PDB is skipped automatically and the recovery process DOES NOT STOP:

However, the recovery is marked ENABLED for the PDB on the standby, while using STANDBYS=NONE it would have been DISABLED.

So, another difference from the documentation, which states:
“You can enable a PDB on a standby CDB after it was excluded on that standby CDB by copying the data files to the correct location, bringing the PDB online, and marking it as enabled for recovery.”

This reflects the findings of Philippe Fierens in his blog (http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

This behavior was probably introduced between 12.2 and 19c, but I could not find exactly when, as it is not explicitly stated in the documentation.
However, I remember well that in 12.1.0.2, the MRP process was crashing.

In my configuration, not on purpose but interesting for this article, the first standby has the very same directory structure, while the cascaded standby does not.

In any case, there is a potentially big problem for all the customers implementing Multitenant on Data Guard:

With the old behavior (MRP crashing), it was easy to spot when a PDB was cloned online into a primary database, because a simple dgmgrl “show configuration” would have displayed a warning because of the increasing lag (following the MRP crash).

With the current behavior, the MRP keeps recovering and the “show configuration” displays “SUCCESS” despite there being a PDB that is not copied to the standby (and thus not protected).

Indeed, this is what I get after the clone:

I can see that the Data Guard Broker is completely silent about the missing PDB. So I might think my PDB is protected while it is not!

I actually have to add a check on the standby DBs to check if I have any missing datafiles:
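A simple check could look for offline or unnamed files (a sketch; adapt the output to your monitoring):

select con_id, file#, name, status
from   v$datafile
where  status = 'OFFLINE'
   or  name like '%UNNAMED%';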

This check should be implemented and put under monitoring (custom metrics in OEM?)

The missing PDB is easy to spot once I know that I have to look for it. However, for each PDB to recover (I might have many!), I have to prepare the renaming of the datafiles and the creation of the directories (do not forget I am using non-OMF here).

Now, the datafile names on the standby got changed to …/UNNAMEDnnnnn.

So I have to get the original ones from the primary database and do the same replace that db_file_name_convert would do:
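On the primary, something like this generates the converted names (the PDB name and the path replacement are illustrative):

select file#,
       replace(name, '/CDB_SITE1/', '/CDB_SITE3/') as standby_name
from   v$datafile
where  con_id = (select con_id from v$pdbs where name = 'MYPDB');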

and put this in an RMAN script (this is for the second standby; the first has the same name, so the same path):
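A sketch of such a script (file numbers, paths, and the service name of the source are illustrative; RESTORE ... FROM SERVICE is documented RMAN syntax since 12c):

run {
  set newname for datafile 25 to '/u02/oradata/CDB_SITE3/MYPDB/mypdb01.dbf';
  set newname for datafile 26 to '/u02/oradata/CDB_SITE3/MYPDB/mypdb02.dbf';
  restore datafile 25, 26 from service primary_cdb;
  switch datafile all;
}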

Then, I need to stop the recovery, start it and stop it again, put the datafiles online, and finally restart the recovery.
These are the same steps used by Philippe in his blog post, just adapted to my taste 🙂

For the second part, I use this HEREDOC to online all offline datafiles:
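A sketch of it (note the escaped $ so that the shell does not expand v$datafile inside the HEREDOC):

sqlplus -s / as sysdba <<EOF
set heading off feedback off pages 0 trimspool on
spool /tmp/online_datafiles.sql
select 'alter database datafile '||file#||' online;' from v\$datafile where status='OFFLINE';
spool off
@/tmp/online_datafiles.sql
EOF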

and finally:

Now, I no longer have any offline datafiles on the standby:

I will not publish the steps for the second standby; they are exactly the same (same output as well).

At the end, for me it is important to highlight that monitoring the OFFLINE datafiles on the standby becomes a crucial point to guarantee the health of Data Guard in Multitenant. Relying on the Broker status or “PDB recovery disabled” is not enough.

On the bright side, it is nice to see that Cascade Standby configurations do not introduce any variation, so cascaded standbys can be treated the same as “direct” standby databases.

HTH

Ludovico

How to get the Data Guard broker configuration from a SQL query?

Everybody knows that you can get the Data Guard configuration in dgmgrl with the command:

but few know that this information is actually available in x$drc:

The table design is not very friendly, because it is a mix of ATTRIBUTE/VALUE pairs (hence many rows per object) and specific columns.

So, in order to get something usable to show the databases and their status, the only solution is to make use of PIVOT().
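A sketch of the approach (the attribute names are a subset of what I found in x$drc, so treat them as illustrative):

select * from (
  select object_id, attribute, value
  from   x$drc
  where  attribute in ('DATABASE','intended_state','connect_string','role','ship_to')
)
pivot (
  max(value) for attribute in (
    'DATABASE'       as database_name,
    'intended_state' as intended_state,
    'connect_string' as connect_string,
    'role'           as role,
    'ship_to'        as ship_to
  )
)
order by object_id;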

To get it in a friendly format, I recommend using SQLcl and setting:
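For example (ansiconsole is one of SQLcl’s built-in output formats):

set sqlformat ansiconsole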

This is what I get on a simple two databases configuration (primary/standby):

and with a real-time cascade standby:

(interesting to note the leading :RTC_ON in the ship_to attribute).

Although it is much easier to get this information from DGMGRL, getting it programmatically through the SQL interface is safer and more flexible: you know exactly what you want to get, no matter how the dgmgrl syntax changes.

Looking forward to having REST APIs in a future version of Data Guard 🙂

Ludovico

Awk to format .ora files (listener.ora, tnsnames.ora, etc.)

Tired of manually formatting tnsnames.ora to make it more readable, I took the nice awk examples from Jeremy Schneider (https://ardentperf.com/2008/11/28/parsing-listenerora-with-awk-and-sed/) and created a function to format any .ora file (they are lisp-like config files).

Example, before:

and after:

The AWK script:
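In case you do not want to fetch the repo, here is a minimal sketch of the idea (the function name fmt_ora is illustrative, and the version in the repo is more complete): it re-indents a .ora file by opening a new line and increasing the indentation at every open parenthesis.

fmt_ora () {
  awk '
  function ind(n) { return substr("                                        ", 1, n) }
  BEGIN { depth = 0 }
  {
    gsub(/^[ \t]+/, ""); gsub(/[ \t]+$/, "")     # trim the line
    out = ""
    for (i = 1; i <= length($0); i++) {
      c = substr($0, i, 1)
      if (c == "(") {
        if (out != "") print out                 # flush before opening a level
        depth++
        out = ind((depth - 1) * 2) "("
      } else if (c == ")") {
        out = out ")"
        depth--
      } else {
        out = out c
      }
    }
    if (out != "") print out
  }' "$1"
}

Usage: fmt_ora $TNS_ADMIN/tnsnames.ora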

I have included the function in the COE github repo. More functions to come (hopefully).

Ludo

Multitenant Pills: Partial PDB cloning (and cleanup)

When consolidating to multitenant, there are several consolidation patterns.

  • Big, complex databases usually have special requirements for which it might be a good choice to go to single-tenant (a single PDB in one CDB)
  • Small, relatively easy databases are the best candidate for consolidation to multitenant
  • Schema-consolidated databases require special attention but, in general, there are several advantages in converting individual schemas (or groups of schemas) to individual PDBs

For the latter, there are some techniques to convert a schema into a PDB.

  • export/import (obviously), possibly with GoldenGate to do it online
  • Transportable tablespaces (if the schemas follow a strict 1-to-1 tablespace separation)
  • partial PDB cloning

We will focus on the last one for this blog post.

Situation

Here we have a PDB with some schemas: each of them has a dedicated tablespace but, accidentally, two of them also have some objects in a common tablespace.

This happens frequently when all the users have a quota on the default database tablespace and do not necessarily have a personal default tablespace.

This is the typical situation where transportable tablespaces become hard to achieve without some upfront segment movement, as tablespaces are not self-contained.

Thankfully, Oracle Multitenant allows us to clone a PDB from a remote one and specify only a subset of tablespaces.
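The core statement is quite simple (the USER_TABLESPACES clause is documented; the names and the db link are illustrative):

-- on the target CDB, pointing to the source CDB through a database link
create pluggable database ABC from PDB1@cdb_source_link
  user_tablespaces = ('ABC','DATA');
alter pluggable database ABC open;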

Here is a full example script with some checks and fancy parameters:

This is an example output:

If the clone process succeeds, at the end we should have the new ABC pluggable database with ABC and DATA tablespaces only.

Yeah!

Any Cleanup needed?

What happened to the users? Actually, they are all still there:

And the segments in the two skipped tablespaces are not there:

So the table definitions are also gone?

Not at all! The tables are still there and reference tablespaces that do not exist anymore. How is that possible?

Actually, the tablespace definitions are still there if we look at v$tablespace:

If we take a look at the DBA_TABLESPACES view definition, there are a few interesting filters:

What is their meaning?

So the first WHERE clause skips all the INVALID tablespaces (when you drop a tablespace, it is still stored in ts$ with this state), the second skips the definitions of TEMPORARY TABLESPACE GROUPS, and the third one is actually our candidate.

Indeed, this is what we get from ts$ for these tablespaces:

So the two tablespaces are filtered out because of this new multitenant flag.

If we try to drop the tablespaces, it succeeds:

But the user GHI, who has no objects anymore, is still there.

So we need to drop it explicitly.

Automate the cleanup

This is an example PL/SQL block that aims to automate the cleanup (a simplified sketch follows the list below).

Actually:

  • Users that had segments in one of the excluded tablespaces but do not have any segments left are just LOCKED (for security reasons, you can guess why).
  • Tablespaces that meet the “excluded PDB” criteria are just dropped
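A simplified sketch of such a block (the criteria here are deliberately naive, so review them before running anything similar):

begin
  -- drop tablespaces that are visible in v$tablespace but filtered
  -- out of DBA_TABLESPACES (the "excluded PDB" flag seen above)
  for t in (select ts.name
            from   v$tablespace ts
            where  ts.name not in (select tablespace_name from dba_tablespaces)) loop
    execute immediate 'drop tablespace "'||t.name||'" including contents and datafiles';
  end loop;
  -- lock non-Oracle accounts that no longer own any segment
  for u in (select username
            from   dba_users
            where  oracle_maintained = 'N'
            and    username not in (select owner from dba_segments)) loop
    execute immediate 'alter user "'||u.username||'" account lock';
  end loop;
end;
/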

This is the output for the clone procedure we have just seen:

The PL/SQL block can be quite slow depending on the segments and tablespaces, so it might be a good idea to have a custom script instead of this automated one.

What about user DEF?

The automated procedure has not locked the account DEF. Why?

Actually, the user DEF still has some segments in the DATA tablespace, hence the procedure cannot be sure what the original intention was: copy the user or not? The clone procedure only allows specifying the tablespaces, so this is the only possible result.

Promo: If you need to migrate to Multitenant and you need consulting/training, just contact me, I can help you 🙂


Multitenant Pills: Pushing your PDB to the Cloud in one step?

The Oracle Multitenant architecture introduces some nice opportunities, including local and remote cloning (see this example on ORACLE_BASE blog).

However, the only available cloning procedure uses the PULL method to get the data from the remote source PDB.

This can be a limitation, especially in environments where the cloning environment does not have direct access to production, or where the clones must be done in the Cloud with no direct access to the on-premises production VLAN.

So, one common approach is to clone/detach locally, put the PDB files in the Object Store and then attach them in the cloud.

Another approach is to use SSH tunnels. If you follow my blog you can see it is something that I use every now and then to do stuff from on-premises to the cloud.

How to set it up?

Actually, it is super-easy: just prepare a script in the cloud that will do the create pluggable database, then trigger it from on-premises.

This is an example:

It takes as parameters: the name of the source PDB, the name of the target PDB and the SQL*Net descriptor to create the temporary database link from the cloud CDB to the on-premises CDB.
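A sketch of what such a script might look like (the script name, link name, and common user are illustrative):

#!/bin/bash
# $1 = source PDB name, $2 = target PDB name,
# $3 = SQL*Net descriptor of the on-premises CDB
SRC_PDB=$1 ; TGT_PDB=$2 ; DESCRIPTOR=$3
sqlplus -s / as sysdba <<EOF
create database link onprem_link
  connect to c##onprem identified by "MySecret" using '${DESCRIPTOR}';
create pluggable database ${TGT_PDB} from ${SRC_PDB}@onprem_link;
alter pluggable database ${TGT_PDB} open;
drop database link onprem_link;
EOF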

The user C##ONPREM must obviously exist on-premises with the following privileges:
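Something along these lines (a sketch; the password is a placeholder):

create user c##onprem identified by "MySecret" container=all;
grant create session, create pluggable database to c##onprem container=all;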

The cloud database should use OMF, so you do not have to take care of file name conversions.

At this point, if you have correctly set up the SSH keys to connect to the cloud server, it is just a matter of running the script remotely using the proper SSH tunnel. Once the remote port binding is established, the cloud server can contact the on-premises listener port using localhost:remote_bind:
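For example (hostnames, ports, and paths are illustrative): the -R option binds port 1523 on the cloud host back to the on-premises listener, so the descriptor passed to the script simply points to localhost:

ssh -R 1523:localhost:1521 opc@cloud-host \
  "~/clone_pdb.sh PDB1 PDB1 'localhost:1523/ONPREMCDB'"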

Of course the timing depends a lot on the size of the database and your connection to the Cloud.

I have tested this procedure with Oracle Database 19.7 on OCI compute instances and on a DBaaS VM instance; it works without any additional work. Of course, it does not work for Autonomous Database 🙂

Ludovico

Understand your Database through graphs

During the last months I have had to deal with highly consolidated databases, where there were no basic rules in place to keep things maintainable. Over many years (some schemas are 30 years old), the number of schemas and their dependencies became a problem, up to the point where basic operations like patching or upgrading were very complex to achieve.

In order to try to split these big databases and move towards Oracle Multitenant, I have created some graphs that helped me and my colleagues to understand the connections between schemas (grants) and databases (db links).


How have I created the graphs?

I used Gephi, an open-source tool to generate graphs. Gephi is very powerful; I feel I have used just 1% of its capabilities.

How to create a graph depends mostly on what you want to achieve and which data you have.

First, some basic terminology: Nodes are the “dots” in the graph, Edges are the lines connecting the dots. Both nodes and edges can have properties (e.g. edges have weight), but you might not need any.

Basic nodes and edges without properties

If you need just to show the dependencies between nodes, a basic edge list with source->target will be enough.

For example, you can have an edge list like this one: gephi_1_edges.csv
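The file is just a two-column CSV with the headers Gephi expects for an edge import (the content below is illustrative):

Source,Target
SCOTT,HR
HR,FINANCE
SCOTT,FINANCE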

Open Gephi, create a New Project, then go to File -> Import Spreadsheet and select the file. If you already have a workspace and you want to add the edges to it, select Append to existing workspace.

This will lead to something very basic:

In the Data Laboratory tab, you might want to copy the value of the ID column to the label column, so that they match:

Now you must take care of two things:

First, re-arranging the nodes. With few nodes this is often not required, but when there are many, the default visualization is not satisfying. In the Overview tab, Layout pane, there are a few algorithms to choose from. For big graphs I prefer Force Atlas. There are a few parameters to tune, especially the attraction/repulsion strengths and the gravity. Speed is also crucial if you have many nodes. For this small example I set Repulsion Strength to 2000 and Attraction Strength to 1. Clicking Run starts the algorithm that rearranges the nodes, which with few edges is almost instantaneous (don’t forget to stop it afterwards).

Here is what I get:

Now that the nodes are in place, in the Preview pane I can adjust the settings, like showing labels and changing colors. Also, in the Appearance pane I can change the scheme to have, for example, colors based on ranking.

In this example, I choose to color based on ranking (nodes with more edges are darker).

I also set the Default Straight preset, Show labels (with a smaller size), and proportional size.

Adding nodes properties

Importing edges from CSV gives only a dumb list of edges, without any information about node properties. Having properties set might be important in order to change how the graph is displayed.

By importing a node list containing the properties of each node, I can add important information. In this example file, I have the columns Id, Label and Sex, which I will use to color the nodes differently: gephi_1_nodes.csv

In the Appearance pane, I have just selected partitioning by Sex with meaningful colors.

Using real metadata to understand schemas or dependencies…

I will take, as an example, the dependencies in a database between objects of type VIEW, MATERIALIZED VIEW and TABLE. The database makes quite heavy use of materialized views, and understanding the relations is not always easy.

This is the query that interests me:
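A sketch of it (the schema filter is illustrative):

select owner, name, type, referenced_owner, referenced_name, referenced_type
from   dba_dependencies
where  type            in ('VIEW','MATERIALIZED VIEW')
and    referenced_type in ('TABLE','VIEW','MATERIALIZED VIEW')
and    owner not in ('SYS','SYSTEM');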

So I need the nodes; for that, I need a UNION to get the nodes from both sides of the dependency. The best tool to achieve this is SQLcl, as it has a native CSV output format:
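For example (a sketch reusing the query above):

set sqlformat csv
spool nodes.csv
select distinct owner||'.'||name as "Id", type as "Type"
from   dba_dependencies
where  type in ('VIEW','MATERIALIZED VIEW') and owner not in ('SYS','SYSTEM')
union
select distinct referenced_owner||'.'||referenced_name, referenced_type
from   dba_dependencies
where  type in ('VIEW','MATERIALIZED VIEW')
and    referenced_type in ('TABLE','VIEW','MATERIALIZED VIEW')
and    owner not in ('SYS','SYSTEM');
spool off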

The edge list:
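And, similarly (same filters as above):

spool edges.csv
select distinct owner||'.'||name as "Source",
       referenced_owner||'.'||referenced_name as "Target"
from   dba_dependencies
where  type in ('VIEW','MATERIALIZED VIEW')
and    referenced_type in ('TABLE','VIEW','MATERIALIZED VIEW')
and    owner not in ('SYS','SYSTEM');
spool off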

Using the very same procedure as above, it is easy to generate the graph.

I am interested in understanding what is a TABLE, what is a VIEW and what is a MATERIALIZED VIEW, so I partition the colors by type. I also set the edge color to source, so each edge has the same color as its source node type.

I am also interested in highlighting which tables have more incoming dependencies, so I rank the node size by In-Degree.


In the graph:

  • All the red dots are MVIEWS
  • All the blue dots are VIEWS
  • All the black dots are TABLES
  • All the red lines are dependencies between a MVIEW and a (TABLE|VIEW).
  • All the blue lines are dependencies between a VIEW and a (TABLE|MVIEW).
  • The bigger the dots, the more incoming dependencies.

With the same approach I can get packages, grants, roles, db_links, client-server dependencies, etc. to better understand the infrastructure.

I hope you like it!

ORA-02002 and ORA-00942 while upgrading OWM to 19c

This is a quick post about a problem that we have had while upgrading a DB to 19c.

At 91% of the upgrade, the OWM (Workspace Manager) upgrade was failing with this error:

Indeed, executing the statement consistently led to this problem:

and we got this result:

So, resuming the autoupgrade job was not a solution.

The view definition is:

but the package wmsys.ltUtil is wrapped, so there was no chance to understand what was happening.

As a quick fix, we have recompiled the binaries with mixed auditing:
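In case you wonder how (a sketch of the standard relink, done with the instance down; the uniaud_off target disables pure unified auditing, leaving mixed mode available):

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_off ioracle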

and put audit_trail=DB in the upgrade pfile (it was NONE in this specific case).

After that, we restarted the DB in upgrade mode using the same pfile.

At that point, the view gave no errors anymore, and we resumed the autoupgrade job.

This is an old troubleshooting method that I call “Database Administration by guess”: I am not sure about the real cause, but the workaround just worked fine for us.

It would be interesting to know if any of you have had the same problem, and what the auditing parameters were in your case…

Ludovico

Duplicating a DB and setting up Data Guard through CMAN and SSH tunnel

I am fascinated by the new Zero Downtime Migration tool that has been available since November 28th. Whilst I am still in the process of testing it, there is one big requirement that might cause some headaches to some customers. It is about network connectivity:

Configuring Connectivity Between the Source and Target Database Servers

The source database server […] can connect to the target database instance over the target SCAN through the respective SCAN port and vice versa.
The SCAN of the target should be resolvable from the source database server, and the SCAN of the source should resolve from the target server.
Having connectivity from both sides, you can synchronize between the source database and target database from either side. […]

If you are taking cloud migrations seriously, you should have either a site-to-site VPN to the cloud or a Fast Connect link. At CERN we are quite lucky to have a high-bandwidth Fast Connect to OCI Frankfurt.

This requirement might be missing for many customers, so what is a possible solution to set up connectivity for database duplicates and Data Guard setups?

In the picture above you can see a classic situation that usually has two problems to solve:

  • the SCAN addresses are private: not accessible from internet
  • there are multiple SCAN addresses, so tunneling through all of them might be complex

Is it possible to configure CMAN in front of the SCAN listener as a single IP entry and tunnel through SSH to this single IP?

I will now show how to achieve this configuration.

For the sake of simplicity, I have put two single instances without SCAN and a CMAN installation on the database servers, but it will work with little modification using SCAN and RAC setups as well. Note that in a Cloud Infrastructure setup, this will require a correct setup of the TDE wallet on both the source and the destination.

Because I put everything on a single host, I have to set up CMAN to listen on another port, but having a separate host for CMAN is better practice when it has to proxy to SCAN listeners.

Installing and configuring CMAN

The most important part of the whole setup is that the CMAN on the standby site must have a public IP address and open SSH port so that we can tunnel through it.

The on-premises CMAN must have open access to the standby CMAN port 22.

For both primary and standby site you can follow the instructions of my blog post: Install and configure CMAN 19c in the Oracle Cloud, step by step.

In my example, because I install CMAN on the same host of the DB, I configure CMAN to run on port 1522.

CMAN primary config:
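Something like this (the hostname is illustrative; as noted below, the wide-open rule should be restricted):

CMAN_ONPREM =
  (CONFIGURATION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = onprem-host)(PORT = 1522))
    (RULE_LIST =
      (RULE = (SRC = *)(DST = *)(SRV = *)(ACT = accept))
    )
  )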

CMAN standby config:

This configuration is not secure at all; you might want to secure it further in order to allow only the services needed for setting up Data Guard.

The registration of database services to CMAN through the the remote_listener parameter is optional, as I will register the entries statically in the listener and use a routed connection through CMAN.

Listener configuration

The listener must have a static entry for the database, so that duplicate and switchover work properly.

On primary add to listener.ora:
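For example (names and ORACLE_HOME path are illustrative):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DB_ONPREM)
      (SID_NAME = DB)
      (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
    )
  )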

On standby:

In a RAC config, all the local listeners must be configured with the correct SID_NAME running on the host. Make sure to reload the listeners 😉

Creating the SSH tunnels

There must be two tunnels open: one that tunnels from on-premises to the cloud and the other that tunnels from the cloud to on-premises.

However, such tunnels can both be created from the on-premises CMAN host that has access to the cloud CMAN host:
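With CMAN listening on port 1522 on both sides, a single invocation from the on-premises host can open both tunnels (the hostname is illustrative):

# -L : on-premises port 1523 -> cloud CMAN port 1522
# -R : cloud port 1523       -> on-premises CMAN port 1522
ssh -f -N \
    -L 1523:localhost:1522 \
    -R 1523:localhost:1522 \
    opc@cman-cloud.example.com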

In my case, the hostnames are:

Important: with CMAN on a host other than the DB server, the CMAN sshd must be configured to have GatewayPorts set to yes:
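That is, in /etc/ssh/sshd_config on the CMAN host (followed by an sshd restart):

GatewayPorts yes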

After the tunnels are open, any connections to the local CMAN server port 1523 will be forwarded to the remote CMAN port 1522.

Configuring the TNSNAMES to hop through CMAN and SSH tunnel

Both servers must now have one entry for the local database pointing to the actual SCAN (or listener for single instances) and one entry for the remote database pointing to the local port 1523 and routing to the remote SCAN.

On-premises tnsnames.ora:
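A sketch (hosts and service names are illustrative; SOURCE_ROUTE=yes is what makes the connection hop through CMAN):

DB_ONPREM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = onprem-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB_ONPREM))
  )

DB_CLOUD =
  (DESCRIPTION =
    (SOURCE_ROUTE = yes)
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1523))
    (ADDRESS = (PROTOCOL = TCP)(HOST = cloud-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB_CLOUD))
  )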

Cloud tnsnames.ora:

After copying the password file and starting the cloud database in nomount, it should be possible from both sides to connect as SYSDBA to both DB_CLOUD and DB_ONPREM.

This configuration is ready for both duplicate from active database and for Data Guard.
I still have to figure out if it works with ZDM, but I think it is a big step towards establishing connectivity between on-premises and the Oracle Cloud when no VPN or Fast Connect is available.

Duplicate from active database

If the setup is correct, this should work:
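Something like this (a sketch; passwords and names are placeholders):

rman target sys/"MyPassword"@DB_ONPREM auxiliary sys/"MyPassword"@DB_CLOUD

RMAN> duplicate target database for standby from active database;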

Setting up Data Guard

  • Configure broker config files
  • Add and clear the standby logs
  • Start the broker
  • Create the configuration:

    The static connect identifier here is better defined using the TNSNAMES resolution, because each database sees the other one differently.
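A sketch of the broker commands (names as in the tnsnames entries above):

DGMGRL> create configuration 'dg_conf' as primary database is 'DB_ONPREM' connect identifier is DB_ONPREM;
DGMGRL> add database 'DB_CLOUD' as connect identifier is DB_CLOUD maintained as physical;
DGMGRL> edit database 'DB_ONPREM' set property StaticConnectIdentifier='DB_ONPREM';
DGMGRL> edit database 'DB_CLOUD' set property StaticConnectIdentifier='DB_CLOUD';
DGMGRL> enable configuration;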

Checking the DG config

A validate first:

Then a switchover, back and forth:

Conclusion

Yes, it is possible to set up Data Guard between two sites that have no connectivity except unidirectional SSH. The SSH tunnels allow SQL*Net communication to a remote endpoint, and CMAN allows proxying through a single endpoint to multiple SCAN addresses.

However, do not forget the ultimate goal: migrating your BUSINESS to the cloud, not just the database. Therefore, having proper communication with the cloud, with proper performance, architecture and security, is crucial. Depending on your target Cloud database, Zero Downtime Migration or MV2ADB should be the correct and supported solutions.

Checking usage of HugePages by Oracle databases in Linux environments

Yesterday several databases on one server started logging errors in the alert log:

That means there is not enough contiguous free memory in the OS. The first thing that I checked was, of course, the memory and the huge page usage:
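The usual commands (standard Linux):

free -g
grep -i huge /proc/meminfo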

The available memory (last column of the free command) was indeed quite low, but there was still plenty of space in the huge pages (86k pages free out of 180k).

The usage by Oracle instances:

You can get the code of mem.sh in this post.

Regarding pure shared memory usage, the situation was what I was expecting:
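Something like this sums the allocated System V shared memory segments (a sketch; column 5 of ipcs -m is the segment size in bytes):

ipcs -m | awk '{ sum += $5 } END { printf "%.0f GB\n", sum/1024/1024/1024 }'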

360G of shared memory usage, much more than what was allocated in the huge pages.

I have compared the situation with the other node in the cluster: it had more memory allocated by the databases (because of more load on it), more huge page usage, and less 4k page consumption overall.

So I was wondering whether all the DBs were properly allocating the SGA in huge pages or not.

This Red Hat page has been quite useful to create a quick snippet to check the huge page memory allocation per process:
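A sketch of such a snippet (it sums, for each PMON process, the sizes of the mappings backed by 2 MB pages in /proc/<pid>/smaps):

for pid in $(pgrep -f ora_pmon); do
  inst=$(ps -p "$pid" -o cmd= | sed 's/.*ora_pmon_//')
  hp_kb=$(awk '/^Size:/                      { size = $2 }
               /^KernelPageSize: *2048 kB/   { total += size }
               END                           { print total + 0 }' /proc/"$pid"/smaps)
  echo "$inst : $hp_kb kB in huge pages"
done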