New in Data Guard 21c and 23c: Automatic preparation of the primary

Oracle Data Guard 21c came with a new command:

The command PREPARE DATABASE FOR DATA GUARD prepares a database to become a primary in a Data Guard configuration: it automatically sets parameters and creates standby redo logs according to best practices.

Among other things, it does the following:

It sets the RMAN archivelog deletion policy, enables Flashback Database and force logging, creates the standby redo logs according to the online redo log configuration, and creates an spfile if the database is running with a pfile.
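The syntax looks roughly like this (a sketch; the database name, paths, and size are placeholders for this example):

DGMGRL> PREPARE DATABASE FOR DATA GUARD
          WITH DB_UNIQUE_NAME IS mydb_site1
          DB_RECOVERY_FILE_DEST IS "/u01/app/oracle/fra"
          DB_RECOVERY_FILE_DEST_SIZE IS "200G"
          BROKER_CONFIG_FILE_1 IS "/u01/app/oracle/dg_broker1.dat"
          BROKER_CONFIG_FILE_2 IS "/u01/app/oracle/dg_broker2.dat";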

If you tried this in 21c, you may have noticed that the database restarts automatically to set all the static parameters. If you weren’t expecting it, that sudden restart could feel a bit brutal.

In 23c, we added an additional keyword “restart” to specify that you are OK with the restart of the database. If you don’t specify it, the broker will complain that it cannot proceed without a restart:

If you specify it, it will proceed with the restart:

Notice that if you already have these static parameters set, the broker will just set the missing dynamic parameters without the need for a restart:

This new command greatly simplifies the preparation of a Data Guard configuration!

Before 21c, you had to do everything by hand.

Ludo

Can a physical standby database receive the redo in SYNC mode if the Far Sync instance fails?

The answer is YES.

In the following configuration, cdgsima_lhr1pq (primary) sends synchronously to cdgsima_farsync1 (far sync), which forwards the redo stream asynchronously to cdgsima_lhr1bm (physical standby):

But if cdgsima_farsync1 is not available, I want the primary to send synchronously to the physical standby database. I accept a performance penalty, but I do not want to compromise my data protection.

I just need to set up the RedoRoutes properties as follows:
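Roughly like this (a sketch using the database names of this configuration; the ALT clause defines the alternate destination, and FALLBACK re-enables the preferred route when the Far Sync comes back):

DGMGRL> EDIT DATABASE 'cdgsima_lhr1pq' SET PROPERTY RedoRoutes =
          '(LOCAL : cdgsima_farsync1 SYNC ALT=(cdgsima_lhr1bm SYNC FALLBACK))';
DGMGRL> EDIT FAR_SYNC 'cdgsima_farsync1' SET PROPERTY RedoRoutes =
          '(cdgsima_lhr1pq : cdgsima_lhr1bm ASYNC)';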

The fallback behavior is defined by the second part of the RedoRoutes rule on the primary (the ALT ... FALLBACK clause).

Let’s test it. If I do a shutdown abort of the Far Sync instance:

I can see the new SYNC destination being opened almost instantaneously (because the old destination fails immediately with ORA-03113):

Indeed, I can see the new NSS process (synchronous redo transport) spawn at that time:

Ludo

Real-Time Cascade Standby Container Databases without Oracle Managed Files

OK, the title might not be the best… I would just like to add more detail to content you can already find in other blogs (e.g., this nice one from Philippe Fierens: http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

I have this Cascade Standby configuration:

Years ago I wrote this whitepaper about cascaded standbys:
https://fr.slideshare.net/ludovicocaldara/2014-603-caldarappr
While it is still relevant for non-CDBs, things have changed with Multitenant architecture.

In my config, the Oracle Database version is 19.7 and the databases are actually CDBs. No Grid Infrastructure, non-OMF datafiles.
It is important to highlight that a lot of things have changed since 12.1. And because 19c is the LTS version now, it does not make sense to try anything older.

First, I just want to make sure that my standbys are aligned.

Primary:

1st Standby alert log:

2nd Standby alert log:

Then, I create a pluggable database (from PDB$SEED):
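The statement is similar to this one (a sketch; the PDB name, password, and paths are placeholders, and FILE_NAME_CONVERT is there because I am not using OMF):

SQL> CREATE PLUGGABLE DATABASE pdb1
       ADMIN USER pdbadmin IDENTIFIED BY "mypassword"
       FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/pdb1/');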

On the first standby I get:

On the second:

So, yeah, not having OMF might get you some warnings like: WARNING: File being created with same name as in Primary
But it is good to know that the cascade standby deals well with new PDBs.

Of course, this is not very interesting, as I know that the problem with Multitenant comes from CLONING PDBs from either local or remote PDBs in read-write mode.

So let’s try a relocate from another CDB:
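The statement is something like this (a sketch; the PDB name and the database link pointing to the source CDB are placeholders):

SQL> CREATE PLUGGABLE DATABASE pdb2 FROM pdb2@cdb2_link RELOCATE
       FILE_NAME_CONVERT = ('/u01/oradata/CDB2/pdb2/', '/u01/oradata/CDB1/pdb2/');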

This is what I get on the first standby:

and this is on the cascaded standby:

So absolutely the same behavior between the two levels of standby.
According to the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC
I quote what is specified for the parameter STANDBYS={ALL|NONE|…}:
“If you include a PDB in a standby CDB, then during standby recovery the standby CDB will search for the data files for the PDB. If the data files are not found, then standby recovery will stop and you must copy the data files to the correct location before you can restart recovery.”

“Specify ALL to include the new PDB in all standby CDBs. This is the default.”

“Specify NONE to exclude the new PDB from all standby CDBs. When a PDB is excluded from all standby CDBs, the PDB’s data files are unnamed and marked offline on all of the standby CDBs. Standby recovery will not stop if the data files for the PDB are not found on the standby. […]”

So, in order to avoid the MRP crashing, I should have included STANDBYS=NONE.
But the documentation is not up to date, because in my case the PDB is skipped automatically and the recovery process DOES NOT STOP:

However, the recovery is marked ENABLED for the PDB on the standby, while using STANDBYS=NONE it would have been DISABLED.

So, another difference from the documentation, which states:
“You can enable a PDB on a standby CDB after it was excluded on that standby CDB by copying the data files to the correct location, bringing the PDB online, and marking it as enabled for recovery.”

This reflects the findings of Philippe Fierens in his blog (http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

This behavior was probably introduced between 12.2 and 19c, but I could not find out exactly when, as it is not explicitly stated in the documentation.
However, I remember well that in 12.1.0.2, the MRP process was crashing.

In my configuration (not on purpose, but interesting for this article), the first standby has the very same directory structure as the primary, while the cascaded standby does not.

In any case, there is a potentially big problem for all the customers implementing Multitenant on Data Guard:

With the old behaviour (MRP crashing), it was easy to spot when a PDB had been cloned online into a primary database, because a simple dgmgrl “show configuration” would have displayed a warning because of the increasing lag (following the MRP crash).

With the current behavior, the MRP keeps recovering and “show configuration” displays “SUCCESS”, even though there is a PDB that has not been copied to the standby (and is thus not protected).

Indeed, this is what I get after the clone:

I can see that the Data Guard Broker is completely silent about the missing PDB. So I might think my PDB is protected while it is not!

I actually have to add a check on the standby databases to see whether I have any missing datafiles:
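A query like this one does the job (a sketch; adapt it to your taste and wrap it in your monitoring of choice):

SQL> SELECT con_id, file#, name, status
       FROM v$datafile
      WHERE name LIKE '%UNNAMED%'
         OR status NOT IN ('ONLINE', 'SYSTEM');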

This check should be implemented and put under monitoring (custom metrics in OEM?)

The missing PDB is easy to spot once I know that I have to look for it. However, for each PDB to recover (I might have many!), I have to prepare the renaming of the datafiles and the creation of the directories (do not forget that I am using non-OMF datafiles here).

Now, the datafile names on the standby got changed to …/UNNAMEDnnnnn.

So I have to get the original ones from the primary database and do the same replace that db_file_name_convert would do:
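On the primary, something like this generates the mapping (a sketch; the paths and the PDB name are placeholders, and the REPLACE mimics what db_file_name_convert would do):

SQL> SELECT 'set newname for datafile ' || file# || ' to '''
            || REPLACE(name, '/u01/oradata/CDB1/', '/u02/oradata/CDB1/') || ''';'
       FROM v$datafile
      WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'PDB2');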

and put this in an RMAN script (this is for the second standby; the first one has the same directory structure as the primary, so the paths stay the same):
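The script could look like this (a sketch under the assumption that the datafiles are restored from the primary over a TNS service; file numbers, paths, and the service name are placeholders):

run {
  set newname for datafile 25 to '/u02/oradata/CDB1/pdb2/system01.dbf';
  set newname for datafile 26 to '/u02/oradata/CDB1/pdb2/sysaux01.dbf';
  set newname for datafile 27 to '/u02/oradata/CDB1/pdb2/undotbs01.dbf';
  restore pluggable database PDB2 from service prim_tns;
  switch datafile all;
}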

Then, I need to stop the recovery, start it and stop it again, put the datafiles online, and finally restart the recovery.
These are the same steps used by Philippe in his blog post, just adapted to my taste 🙂

For the second part, I use this HEREDOC to bring all the offline datafiles online:
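Something like this (a sketch; it generates the ALTER DATABASE DATAFILE ... ONLINE statements and then runs them):

sqlplus -s / as sysdba <<EOF
set pages 0 feedback off trimspool on
spool /tmp/online_datafiles.sql
select 'alter database datafile '||file#||' online;'
  from v\$datafile where status = 'OFFLINE';
spool off
@/tmp/online_datafiles.sql
EOF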

and finally, I restart the managed recovery:
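A sketch of the statement (via the broker, it would be an EDIT DATABASE ... SET STATE='APPLY-ON' instead):

SQL> alter database recover managed standby database disconnect from session;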

Now, I no longer have any offline datafiles on the standby:

I will not publish the steps for the other standby: they are exactly the same (same output as well).

In the end, it is important to highlight that monitoring the OFFLINE datafiles on the standby becomes crucial to guarantee the health of Data Guard with Multitenant. Relying on the Broker status or on “PDB recovery disabled” is not enough.

On the bright side, it is nice to see that Cascade Standby configurations do not introduce any variation, so cascaded standbys can be treated the same as “direct” standby databases.

HTH

Ludovico

Cloning a PDB with ASM and Data Guard (no ADG) without network transfer

Ok, if you’re reading this post, you may also want to read the previous one, which explains the problem in more detail.

Briefly: if you have a CDB running on ASM in a MAA architecture and you do not have Active Data Guard, when you clone a PDB you have to “copy” the datafiles somehow to the standby. The only solution offered by Oracle (in a MOS note, not in the documentation) is to restore the PDB from the primary to the standby site, thus transferring it over the network. But if you have a huge PDB this is a bad solution, because it impacts your network connectivity. (Note: ending up with a huge PDB, IMHO, can only be caused by bad consolidation. I do not recommend consolidating huge databases with Multitenant.)

So I’ve worked out another solution that still has many defects and is almost not viable, but it’s technically interesting because it lets you discover a little more about Multitenant and Data Guard.

The three options

At the primary site, the process is always the same: Oracle copies the datafiles of the source, and it modifies the headers so that they can be used by the new PDB (so it changes CON_ID, DBID, FILE#, and so on).

On the standby site, by contrast, the behavior changes depending on the option you choose:

Option 1: Active Data Guard

If you have ADG, ADG itself takes care of copying the datafiles on the standby site, from the source standby PDB to the destination standby PDB. Once the copy is done, MRP0 continues the recovery. The modification of the header blocks of the destination PDB is done by MRP0 immediately after the copy (at least, this is my understanding).

(Figure: PDB copy with Active Data Guard)

Option 2: No Active Data Guard, but STANDBYS=none

In this case, the copy on the standby site doesn’t happen: the recovery process just adds the entries of the new datafiles in the controlfile, with status OFFLINE and names UNNAMEDxxx. However, the source files cannot be copied anymore, because the MRP0 process expects to have a copy of the destination datafile, not the source datafile. Also, any attempt to restore datafile 28 (in this example) gives an error, because it does not belong to the destination PDB. So the only option is to restore the destination PDB from the primary.
(Figure: PDB copy without Active Data Guard, with STANDBYS=NONE)

Option 3: No Active Data Guard, no STANDBYS=none

This is the case that I actually want to explain. Without the clause STANDBYS=NONE, the MRP0 process expects to change the header of the new datafile, but because the file does not exist yet, the recovery process dies.
We can then copy it manually from the source standby PDB and restart the recovery process, which will change the header. This needs to be repeated for each datafile (that’s why it’s not a viable solution, right now).

(Figure: PDB copy without Active Data Guard, without STANDBYS=NONE)

Let’s try it together:

The Environment

Primary

Standby

The current user PDB (any resemblance to real people is purely coincidental 😉 #haveUSeenMaaz):

Cloning the PDB on the primary

First, make sure that the source PDB is open read-only:
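For example (a sketch, using the PDB shown above):

SQL> alter pluggable database MAAZ close immediate;
SQL> alter pluggable database MAAZ open read only;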

Then, clone the PDB on the primary without the clause STANDBYS=NONE:
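Something like this (a sketch; LUDO is the new PDB used in the rest of the post):

SQL> create pluggable database LUDO from MAAZ;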

Review the clone on the Standby

At this point, on the standby, the alert log shows that the SYSTEM datafile is missing, and the recovery process stops.

One remarkable thing is that in the standby controlfile, ONLY THE SYSTEM DATAFILE exists:

We need to fix the datafiles one by one, but most of the steps can be done once for all the datafiles.

Copy the source PDB from the standby

What do we need to do? Well, the recovery process is stopped, so we can safely copy the datafiles of the source PDB from the standby site, because they have not moved yet. (Meanwhile, we can put the source PDB on the primary back in read-write mode.)

Copy the datafiles:
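For example with RMAN image copies, connected to the standby (a sketch; because of OMF, the format clause only points to the diskgroup and Oracle picks the actual file names):

RMAN> backup as copy pluggable database MAAZ format '+DATA';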

Do the magic

Now comes the interesting part: we need to assign the datafile copies of the MAAZ PDB to LUDO.

Sadly, OMF will create the copies in the wrong location (it’s a copy, so they are created in the same location as the source PDB).

We cannot just uncatalog and recatalog the copies, because they will ALWAYS be assigned to the source PDB. Nor can we use RMAN, because it will never associate the datafile copies with the new PDB. We need to rename the files manually.

It’s better to uncatalog the datafile copies first, so we keep the catalog clean:
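For example (a sketch; replace the name with the actual copy name shown by LIST DATAFILECOPY ALL):

RMAN> list datafilecopy all;
RMAN> change datafilecopy '+DATA/<db_unique_name>/<source_pdb_guid>/DATAFILE/system.263.123456789' uncatalog;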

Then, because we cannot rename files on a standby database when standby file management is set to AUTO, we need to set it temporarily to MANUAL.

standby_file_management is not PDB modifiable, so we need to do it for the whole CDB.
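The sequence is roughly this one (a sketch; the first name is a placeholder for the file name recorded in the standby controlfile for the new PDB, the second one is a placeholder for the image copy taken earlier):

SQL> alter system set standby_file_management=MANUAL;
SQL> alter database rename file '+DATA/<db_unique_name>/<new_pdb_file_as_recorded>'
       to '+DATA/<db_unique_name>/<source_pdb_guid>/DATAFILE/system.263.123456789';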

Then we need to set standby_file_management back to AUTO, or the recovery will not start:

We can now restart the recovery.

The recovery process will:
– change the new datafile by modifying the header for the new PDB
– create the entry for the second datafile in the controlfile
– crash again because the datafile is missing

We already have the SYSAUX datafile, right? So we can alter the name again:

This time all the datafiles have been copied (there is no user datafile in this example) and the recovery process continues!! 🙂 So we can hit ^C and start it in the background.

The Data Guard configuration reflects the success of this operation.

Are we missing anything?

Of course we are!! The datafiles of the new PDB reside in the wrong ASM path. We need to fix them!

 

I know there’s no practical use of this procedure, but it helps a lot in understanding how Multitenant has been implemented.

I expect some improvements in 12.2!!

Cheers

Ludo

 

Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)

In my #OOW14 presentation about MAA and Multitenant, more precisely at slide #59, “PDB Creation from other PDB without ADG*”, I list a few commands that you can use to achieve a “correct” Pluggable Database clone in case you’re not using Active Data Guard.

What’s the problem with cloning a PDB in a MAA environment without ADG? If you’ve attended my session you should know the answer…

If you read the book “Data Guard Concepts and Administration 12c Release 1 (12.1)“, paragraph 3.5 Creating a PDB in a Primary Database, you’ll see that:

If you plan to create a PDB as a clone from a different PDB, then copy the data files that belong to the source PDB over to the standby database. (This step is not necessary in an Active Data Guard environment because the data files are copied automatically when the PDB is created on the standby database.)

But because there is a good chance (99%?) that in a MAA environment you’re using ASM, this step is not so simple: you cannot copy the datafiles exactly where you want, it’s OMF, and the recovery process expects the files to be where the controlfile says they should be.

So, if you clone the PDB, the recovery process on the standby doesn’t find the datafiles at the correct location, thus the recovery process stops and will not start again until you fix it manually. That’s why Oracle has implemented the new syntax “STANDBYS=NONE”, which disables the recovery on the standby for a specific PDB: it lets you disable the recovery temporarily for that PDB while the recovery process continues to apply logs on the remaining PDBs. (Note, however, that this feature is not intended as a generic solution for having PDBs that are not replicated. The recommended solution in that case is having two distinct CDBs, one protected by Data Guard, the other not.)

With ADG, when you clone the PDB on the primary, ADG takes care of the following steps on the standby, whether on ASM or on a filesystem:

  1. recover up to the point where the file# is registered in the controlfile
  2. copy the datafiles from the source DB ON THE STANDBY DATABASE (so no copy over the network)
  3. rename the datafile in the controlfile
  4. continue with the recovery

If you don’t have ADG and you’re on ASM, the Oracle documentation says nothing in enough detail to let you solve the problem. So in August I worked out the “easy” solution that I also included in my slides (#59 and #60):

  1. SQL> create pluggable database DEST from SRC standbys=none;
  2. RMAN> backup as copy pluggable database DEST format '/tmp/dest%f.dbf';
  3. $ scp /tmp/dest*.dbf remote:/tmp
  4. RMAN> catalog start with '/tmp/dest';
  5. RMAN> set newname for pluggable database DEST to new;
  6. RMAN> restore pluggable database DEST;
  7. RMAN> switch pluggable database DEST to copy;
  8. DGMGRL> edit database 'STBY' set state='APPLY-OFF';
  9. SQL> alter pluggable database DEST enable recovery;
  10. DGMGRL> edit database 'STBY' set state='APPLY-ON';

Once at #OOW14, after endless conversations at the Demo Grounds, I’ve discovered that Oracle has worked out the very same solution requiring network transfer and that it has been documented in a new note.

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

This note is very informative and I recommend reading it carefully!

What changes (for the better) in comparison with my first solution is that Oracle suggests using the new feature “restore from service”:
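The idea is something like this (a hedged sketch, not the exact commands of the note; the TNS alias of the primary is a placeholder):

RMAN> run {
        set newname for pluggable database DEST to new;
        restore pluggable database DEST from service prim_tns;
        switch datafile all;
      }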

I’ve questioned the developers at the Demo Grounds about the necessity of the network transfer (I had the chance to speak directly with the developer who wrote this piece of code!! :-)) and they said that they had worked out only this solution. So, if you have a huge PDB to clone, the network transfer from the primary to the standby may severely impact your Data Guard environment and/or your whole infrastructure for the duration of the transfer.

Of course, I have a complex, undocumented solution. I hope I will find the time to document it, so stay tuned if you’re curious! 🙂

Oracle RAC, Oracle Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (OOW14)

Here you can find the material related to my session at Oracle Open World 2014. I’m sorry I’m late in publishing it, but I challenge you to find spare time during Oracle Open World! It’s the busiest week of the year! (Hard Work, Hard Play)

 Slides

 Demo 1 video

Demo 2 video

Demo 1 script

 

Demo 2 script

 

There’s one slide describing the procedure for cloning one PDB using the STANDBYS clause. Oracle released a note while I was preparing my slides (one month ago) and I wasn’t aware of it, so you may also check out this note on MOS:

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

UPDATE: I’ve blogged about it in a more recent post: Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)

UPDATE 2: I’ve written another blog post about these topics: Cloning a PDB with ASM and Data Guard (no ADG) without network transfer

Cheers!

 

Ludovico

Oracle Active Data Guard 12c: Far Sync Instance, Real-Time Cascade Standby, and Other Goodies

Here you can find the content related to my second presentation at Oracle Open World 2014.

 Slides

Demo video1: Real-Time Cascade

Demo video2: Far Sync Instance

Demo 1 Script

 

Demo 2 script

For the demo I’ve used 5 machines running 3 database instances and 2 Far Sync instances. I cannot provide the documentation for creating the demo environment, but the scripts may be useful to understand how the demo works.

Cheers

Ludo

A PDB is cloned while in read-write, Data Guard loses its marbles (12.1.0.2, ORA-19729)

UPDATE: please check my more recent post about this problem and the information I got at the Oracle Demo Grounds during OOW14: http://www.ludovicocaldara.net/dba/demo-grounds-clone-pdb-rw/

I feel the strong need to blog about this very recent problem because I’ve spent a lot of time debugging it… especially because there’s no information about this error on MOS.

Introduction
For a lab, I have prepared two RAC container databases in a physical standby configuration.
Real-time query is configured (real-time apply, standby open in read-only mode).

Following the documentation (http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#CCHDFDDG), I cloned a local pluggable database to a new PDB and, because Active Data Guard is enabled, I was expecting the PDB to be created on the standby and its files to be copied without problems.

BUT! I forgot to put my source PDB in read-only mode on the primary and, strangely:

  • The pluggable database was created on the primary WITHOUT PROBLEMS (even though the documentation explicitly states that the source needs to be read-only).
  • The recovery process on the standby stopped with an error.

 

Now, the primary had all its datafiles (the new PDB has con_id 4):

 

and the standby was missing the datafiles of the new PDB:

 

But, on the standby database, the PDB somehow existed.

 

I tried to play around a little, and finally decided to disable the recovery for the PDB (a feature new in 12.1.0.2).
But to disable the recovery I needed to connect to the PDB, and the PDB was somehow “nonexistent”:

 

So I tried to drop it but, of course, the standby was read-only and I could not drop the PDB:

 

Then I shut down the standby, but one instance hung and I had to do a shutdown abort (I don’t know if it was related to my original problem…).

 

After mounting the standby again, the PDB was also accessible:

 

So I was able to disable the recovery:
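The statements are roughly these (a sketch; the PDB name is a placeholder):

SQL> alter session set container=NEWPDB;
SQL> alter pluggable database disable recovery;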

 

Then, on the primary, I took a fresh backup of the involved datafiles:

 

and I copied and cataloged the copies in the standby controlfile:

 

but the restore was impossible, because the controlfile did not know about these datafiles!!

 

So I RESTARTED the recovery for a few seconds and, because the PDB had recovery disabled, the recovery process added the datafiles and set them offline.

 

Then I was able to restore the datafiles 🙂

 

Finally, I enabled the recovery for the PDB again and restarted the apply process.

 

Lesson learned: if you want to clone a PDB, never, ever forget to put your source PDB in read-only mode, or you’ll have to deal with the consequences!! 🙂

Oracle Database Backup Logging Recovery Appliance – a preview

Please see the disclaimer at the end of the post.

Oracle announced the new Oracle Database Backup Logging Recovery Appliance at the last Open World 2013, but since then it has not been released to the market yet, and very little information is available on the Oracle website.

During the last IOUG Collaborate 14, Larry Carpenter, Oracle master product manager of Data Guard and MAA, unveiled something more about the DBLRA (call it “Debra” to simplify your life 🙂), and I had the chance to discuss it directly with him.

At Trivadis we think that this appliance will be a game changer in the world of backup management.

Why?

Well, if you have ever worked for a big company with many hundreds of databases, you have certainly encountered many of these common problems:

  • Oracle Backup and restore penalized by a shared infrastructure
  • Poor backup or restore performance
  • Tape drives busy when you need them urgently
  • Complex management of backup retentions

That’s not all. As of now, your best recovery point in case of restore is directly related to your backup and archive frequency. Oh yes, you have to lower your archive_lag_target parameter, increase your log switch frequency (and thus the I/O), and still have… 10, 15, 30 minutes of possible data loss. Unless you protect your transactions with Data Guard. But this will cost you money: for the additional server and storage, for the licenses, and for the effort required to put in place a Data Guard instance for every database that you want to protect. You want to protect your transactions from a storage failure, and there’s a price to pay.

The Database Backup Logging Recovery Appliance (wow, I need to copy and paste the name to save time! :-)) overcomes these problems with a simple but brilliant idea: leveraging the existing redo transport processes and shipping the redo stream directly to the backup appliance (the DBLRA, of course) or to its cloud alter ego, hosted by Oracle.

(Figure: DBLRA architecture overview)

As you can infer from the picture, 12c databases will work natively with the appliance, while previous releases will have a plugin that will enable all the capabilities.

Backups can be mirrored selectively to another DBLRA, or copied to the cloud or to a 3rd party (Virtual) Tape Library.

The backup retention is enforced by the appliance and the expiration and deletion is done automatically using the embedded RMAN catalog.

Lightning fast backups and restores are guaranteed by the hardware: DBLRA is based on the same hardware used by Exadata, with High Capacity disks. Optional storage extensions can be added to increase the capacity, but all the data, as I’ve said, can be offloaded to VTLs in order to use a cheaper storage for older backups.

To resume, the key values are:

  • No transaction loss!!
  • Lightning fast backups and restores
  • Integrated, Oracle engineered, scalable solution for hundreds to thousands of databases

Looking forward to seeing it in action!

I cannot cover all the information I have in a single post, but Trivadis is working actively to be ready to implement it at the time of the launch to the market (estimated in 2014), so feel free to contact me if you are interested in boosting your backup environment. 😉

By the way, I expect that the competitors (IBM, Microsoft?) will try to develop a solution with the same characteristics in terms of reliability, or they will lose ground.

Cheers!

Ludovico

Disclaimer: This post is intended to outline Oracle’s general product direction based on the information gathered through public conferences. It is intended for informational purposes only. The development and release of these functionalities and features including the release dates remain at the sole discretion of Oracle and no documentation is available at this time. The features and commands shown may or may not be accurate when the final product release goes GA (General Availability).
Please refer to the Oracle documentation when it becomes available.