Real-Time Cascade Standby Container Databases without Oracle Managed Files

OK, the title might not be the best… I would just like to add more detail to content you can already find in other blogs (e.g. this nice one by Philippe Fierens: http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

I have this Cascade Standby configuration:
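Roughly, it looks like this (the database names are made up for this post): the primary ships redo to a first standby, which cascades it to a second standby.

  CDB1 (primary) --> CDB2 (first standby) --> CDB3 (cascaded standby)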

Years ago I wrote this whitepaper about cascaded standbys:
https://fr.slideshare.net/ludovicocaldara/2014-603-caldarappr
While it is still relevant for non-CDBs, things have changed with Multitenant architecture.

In my config, the Oracle Database version is 19.7 and the databases are actually CDBs. No Grid Infrastructure, non-OMF datafiles.
It is important to highlight that a lot of things have changed since 12.1. And because 19c is the LTS version now, it does not make sense to try anything older.

First, I just want to make sure that my standbys are aligned.

Primary:

1st Standby alert log:

2nd Standby alert log:
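Besides tailing the alert logs, a quick way to compare the last applied sequence is a query like this one (a sketch, run on each standby):

  select thread#, max(sequence#) as last_applied
  from v$archived_log
  where applied = 'YES'
  group by thread#;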

Then, I create a pluggable database (from PDB$SEED):
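The statement is something like this (PDB name, admin user, and paths are made up; without OMF, FILE_NAME_CONVERT does the naming):

  create pluggable database PNEW
    admin user pdbadmin identified by SecretPwd
    file_name_convert = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/PNEW/');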

On the first standby I get:

On the second:

So, yeah, not having OMF might get you some warnings like: WARNING: File being created with same name as in Primary
But it is good to know that the cascade standby deals well with new PDBs.

Of course, this is not of great interest, as I know that the problem with Multitenant comes from CLONING PDBs, either local or remote, in read-write mode.

So let’s try a relocate from another CDB:
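Along these lines (the database link and PDB names are made up):

  create pluggable database PRELOC from PRELOC@cdb2_link
    file_name_convert = ('/u01/oradata/CDB2/PRELOC/', '/u01/oradata/CDB1/PRELOC/')
    relocate;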

This is what I get on the first standby:

and this is on the cascaded standby:

So absolutely the same behavior between the two levels of standby.
According to the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC
I quote what is specified for the parameter STANDBYS={ALL|NONE|…}:
“If you include a PDB in a standby CDB, then during standby recovery the standby CDB will search for the data files for the PDB. If the data files are not found, then standby recovery will stop and you must copy the data files to the correct location before you can restart recovery.”

“Specify ALL to include the new PDB in all standby CDBs. This is the default.”

“Specify NONE to exclude the new PDB from all standby CDBs. When a PDB is excluded from all standby CDBs, the PDB’s data files are unnamed and marked offline on all of the standby CDBs. Standby recovery will not stop if the data files for the PDB are not found on the standby. […]”

So, in order to prevent the MRP from crashing, I should have included STANDBYS=NONE.
But the documentation is not up to date, because in my case the PDB is skipped automatically and the recovery process DOES NOT STOP:

However, the recovery is marked ENABLED for the PDB on the standby, while using STANDBYS=NONE it would have been DISABLED.
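On the standby, this status is visible in v$pdbs:

  select name, open_mode, recovery_status from v$pdbs;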

So, another difference from the documentation, which states:
“You can enable a PDB on a standby CDB after it was excluded on that standby CDB by copying the data files to the correct location, bringing the PDB online, and marking it as enabled for recovery.”

This reflects the findings of Philippe Fierens in his blog (http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).

This behavior was probably introduced between 12.2 and 19c, but I could not find out exactly when, as it is not explicitly stated in the documentation.
However, I remember well that in 12.1.0.2, the MRP process was crashing.

In my configuration (not on purpose, but interesting for this article), the first standby has the very same directory structure as the primary, while the cascaded standby does not.

In any case, there is a potentially big problem for all the customers implementing Multitenant on Data Guard:

With the old behavior (MRP crashing), it was easy to spot when a PDB had been cloned online into a primary database, because a simple dgmgrl “show configuration” would have displayed a warning due to the increasing lag (following the MRP crash).

With the current behavior, the MRP keeps recovering and “show configuration” displays “SUCCESS” even though there is a PDB that has not been copied to the standby (and is thus not protected).

Indeed, this is what I get after the clone:

I can see that the Data Guard Broker is completely silent about the missing PDB. So I might think my PDB is protected while it is not!

I actually have to add a check on the standby databases to detect any missing datafiles:
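Something along these lines (a sketch):

  select con_id, file#, name, status
  from v$datafile
  where status = 'OFFLINE' or name like '%UNNAMED%';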

This check should be implemented and put under monitoring (custom metrics in OEM?)

The missing PDB is easy to spot once I know that I have to look for it. However, for each PDB to recover (I might have many!), I have to prepare the rename of the datafiles and the creation of the directories (do not forget I am using non-OMF here).

Now, the datafile names on the standby got changed to …/UNNAMEDnnnnn.

So I have to get the original ones from the primary database and do the same replace that db_file_name_convert would do:
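On the primary, a query like this can generate the commands (paths and PDB name are examples; adapt the replace to your directory structure):

  select 'set newname for datafile ' || file# || ' to ''' ||
         replace(name, '/u01/oradata/CDB1/', '/u02/oradata/CDB1/') || ''';'
  from v$datafile
  where con_id = (select con_id from v$pdbs where name = 'MYPDB');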

and put this in an RMAN script (this will be for the second standby; the first one has the same directory structure, so the same paths):
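The script follows the restore-from-service approach (a sketch; file numbers, paths, and the service name are examples):

  run {
    set newname for datafile 54 to '/u02/oradata/CDB1/MYPDB/system01.dbf';
    set newname for datafile 55 to '/u02/oradata/CDB1/MYPDB/sysaux01.dbf';
    restore pluggable database MYPDB from service CDB1_PRI;
    switch datafile all;
  }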

Then I need to stop the recovery, start it and stop it again, put the datafiles online, and finally restart the recovery.
These are the same steps used by Philippe in his blog post, just adapted to my taste 🙂

For the second part, I use this HEREDOC to online all offline datafiles:
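Something like this (a sketch; note the escaped $ inside the shell HEREDOC):

  sqlplus -s / as sysdba <<EOF
  set pagesize 0 heading off feedback off
  spool /tmp/online_datafiles.sql
  select 'alter database datafile ' || file# || ' online;'
  from v\$datafile where status = 'OFFLINE';
  spool off
  @/tmp/online_datafiles.sql
  EOF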

and finally:
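That is, restarting the recovery (directly with SQL here; the Broker would do it with APPLY-ON):

  alter database recover managed standby database disconnect from session;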

Now I no longer have any offline datafiles on the standby:

I will not publish the steps for the second standby, they are exactly the same (same output as well).

In the end, it is important to highlight that monitoring the OFFLINE datafiles on the standby becomes crucial to guarantee the health of Data Guard with Multitenant. Relying on the Broker status or on “PDB recovery disabled” is not enough.

On the bright side, it is nice to see that Cascade Standby configurations do not introduce any variation, so cascaded standbys can be treated the same as “direct” standby databases.

HTH

Ludovico

Multitenant Pills: Partial PDB cloning (and cleanup)

When consolidating to multitenant, there are several consolidation patterns.

  • Big, complex databases usually have special requirements for which it might be a good choice to go to single-tenant (a single PDB in one CDB)
  • Small, relatively easy databases are the best candidate for consolidation to multitenant
  • Schema consolidated databases require special attention but, in general, there are several advantages to converting individual schemas (or groups of schemas) to individual PDBs

For the latter, there are some techniques to convert a schema into a PDB.

  • export/import (obviously), possibly with GoldenGate to do it online
  • transportable tablespaces (if the schemas follow a strict 1-to-1 tablespace separation)
  • partial PDB cloning

We will focus on the last one for this blog post.

Situation

Here we have a PDB with some schemas, each of them with a dedicated tablespace but, accidentally, two of them also have some objects in a common tablespace.

This happens frequently when all the users have quota on the default database tablespace and do not necessarily have a personal default tablespace.

This is the typical situation where transportable tablespaces become hard to achieve without some upfront segment movement, as tablespaces are not self-contained.

Thankfully, Oracle Multitenant allows us to clone a PDB from a remote one and specify only a subset of tablespaces.
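The key is the USER_TABLESPACES clause of CREATE PLUGGABLE DATABASE; the core statement looks like this (the link and source PDB names are assumptions; ABC and DATA are the tablespaces to keep, as in the rest of this post):

  create pluggable database ABC from SRCPDB@src_cdb_link
    user_tablespaces = ('ABC','DATA');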

Here is a full example script with some checks and fancy parameters:

This is an example output:

If the clone process succeeds, at the end we should have the new ABC pluggable database with ABC and DATA tablespaces only.

Yeah!

Any Cleanup needed?

What happened to the users? Actually, they are all still there:

And the segments in the two skipped tablespaces are not there:
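A check like this one (a sketch; the schema names match this example) shows it:

  select owner, tablespace_name, count(*) as segments
  from dba_segments
  where owner in ('ABC','DEF','GHI')
  group by owner, tablespace_name;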

So the table definitions are also gone?

Not at all! The tables are still there and reference tablespaces that no longer exist. How is that possible?

Actually, the tablespace definitions are still there if we look at v$tablespace:

If we give a look at the DBA_TABLESPACES view definition, there are a few interesting filters:

What is their meaning?

So the first WHERE clause skips all the INVALID tablespaces (when you drop a tablespace, it is still stored in ts$ with this state), the second skips the definitions of TEMPORARY TABLESPACE GROUPS, and the third one is actually our candidate.

Indeed, this is what we get from ts$ for these tablespaces:

So the two tablespaces are filtered out because of this new multitenant flag.

If we try to drop the tablespaces, it succeeds:

But the user GHI, who no longer owns any objects, is still there.

So we need to drop it explicitly.

Automate the cleanup

This is an example PL/SQL block that aims to automate the cleanup (a sketch follows the list below).

Actually:

  • Users that had segments in one of the excluded tablespaces but do not have any segments left are just LOCKED (for security reasons, you can guess why).
  • Tablespaces that meet the “excluded PDB” criteria are just dropped
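A minimal sketch of such a block (assuming the excluded tablespaces are the ones visible in v$tablespace but filtered out of dba_tablespaces, as seen above; adapt the filters to your environment before trusting it):

  begin
    -- lock non-Oracle-maintained users that no longer own any segment
    for u in (select username from dba_users
              where oracle_maintained = 'N'
                and username not in (select owner from dba_segments)) loop
      execute immediate 'alter user "' || u.username || '" account lock';
    end loop;
    -- drop the tablespaces excluded from the PDB clone:
    -- present in v$tablespace but filtered out of dba_tablespaces
    for t in (select name from v$tablespace
              where name not in (select tablespace_name from dba_tablespaces)) loop
      execute immediate 'drop tablespace "' || t.name || '"';
    end loop;
  end;
  /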

This is the output for the clone procedure we have just seen:

The PL/SQL block can be quite slow depending on the segments and tablespaces, so it might be a good idea to have a custom script instead of this automated one.

What about user DEF?

The automated procedure has not locked the account DEF. Why?

Actually, the user DEF still has some segments in the DATA tablespace. Hence, the procedure cannot infer the original intention: was the user meant to be copied or not? The clone statement only allows specifying the tablespaces, so this is the only possible result.

Promo: If you need to migrate to Multitenant and you need consulting/training, just contact me, I can help you 🙂

 

Multitenant Pills: Pushing your PDB to the Cloud in one step?

The Oracle Multitenant architecture introduces some nice opportunities, including local and remote cloning (see this example on ORACLE_BASE blog).

However, the only available cloning procedure uses the PULL method to get the data from the remote source PDB.

This can be a limitation, especially in environments where the cloning destination does not have direct access to production, or where the clones must be done in the Cloud with no direct access to the on-premises production VLAN.

So, one common approach is to clone/detach locally, put the PDB files in the Object Store and then attach them in the cloud.

Another approach is to use SSH tunnels. If you follow my blog you can see it is something that I use every now and then to do stuff from on-premises to the cloud.

How to set it up?

Actually, it is super-easy: just prepare a script in the cloud that will do the create pluggable database, then trigger it from on-premises.

This is an example:
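A sketch of such a script (all names are assumptions; it creates a temporary database link, clones the PDB, and cleans up):

  #!/bin/bash
  # usage: clone_from_onprem.sh <SOURCE_PDB> <TARGET_PDB> '<SQLNET_DESCRIPTOR>'
  sqlplus -s / as sysdba <<EOF
  create database link onprem_clone
    connect to c##onprem identified by "SecretPwd" using '${3}';
  create pluggable database ${2} from ${1}@onprem_clone;
  alter pluggable database ${2} open;
  drop database link onprem_clone;
  EOF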

It takes as parameters: the name of the source PDB, the name of the target PDB and the SQL*Net descriptor to create the temporary database link from the cloud CDB to the on-premises CDB.

The user C##ONPREM must obviously exist on-premises with the following privileges:
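Something like this should be enough (a sketch; the password is a placeholder):

  create user c##onprem identified by SecretPwd container=all;
  grant create session, create pluggable database to c##onprem container=all;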

The cloud database should use OMF, so you don’t have to take care of file name conversion.

At this point, if you have set up the SSH keys correctly to connect to the cloud server, it is just a matter of running the script remotely through the proper SSH tunnel. Once the remote port binding is established, the cloud server can contact the on-premises listener port using localhost:remote_bind:
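For example (host names, service names, and the remote bind port 11521 are made up):

  ssh -R 11521:localhost:1521 opc@cloud-host \
    "/home/oracle/clone_from_onprem.sh SRCPDB NEWPDB \
     '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=11521))(CONNECT_DATA=(SERVICE_NAME=SRCCDB)))'"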

Of course the timing depends a lot on the size of the database and your connection to the Cloud.

I have tested this procedure with Oracle Database 19.7 on OCI compute instances and on a DBaaS VM instance; it works without any additional work. Of course, it does not work for Autonomous Database 🙂

Ludovico

Cloning a PDB with ASM and Data Guard (no ADG) without network transfer

Ok, if you’re reading this post, you may want to read also the previous one that explains something more about the problem.

Briefly: if you have a CDB running on ASM in a MAA architecture and you do not have Active Data Guard, when you clone a PDB you have to somehow “copy” the datafiles to the standby. The only solution offered by Oracle (in a MOS Note, not in the documentation) is to restore the PDB from the primary to the standby site, thus transferring it over the network. But if you have a huge PDB this is a bad solution, because it impacts your network connectivity. (Note: ending up with a huge PDB IMHO can only be caused by bad consolidation. I do not recommend consolidating huge databases on Multitenant.)

So I’ve worked out another solution that still has many defects and is hardly viable, but it’s technically interesting because it lets you discover a little more about Multitenant and Data Guard.

The three options

At the primary site, the process is always the same: Oracle copies the datafiles of the source, and it modifies the headers so that they can be used by the new PDB (so it changes CON_ID, DBID, FILE#, and so on).

On the standby site, by contrast, what happens depends on the option you choose:

Option 1: Active Data Guard

If you have ADG, the ADG itself will take care of copying the datafile on the standby site, from the source standby PDB to the destination standby PDB. Once the copy is done, the MRP0 will continue the recovery. The modification of the header block of the destination PDB is done by the MRP0 immediately after the copy (at least this is what I understand).

[Diagram: ADG_PDB_copy]

Option 2: No Active Data Guard, but STANDBYS=none

In this case, the copy on the standby site doesn’t happen, and the recovery process just adds the entries of the new datafiles in the controlfile, with status OFFLINE and names like UNNAMEDxxx. However, the source file cannot be copied anymore, because the MRP0 process will expect to have a copy of the destination datafile, not the source datafile. Also, any attempt to restore datafile 28 (in this example) will give an error, because it does not belong to the destination PDB. So the only chance is to restore the destination PDB from the primary.
[Diagram: NOADG_PDB_STANDBYS_NONE_copy]

Option 3: No Active Data Guard, no STANDBYS=none

This is the case that I actually want to explain. Without the flag STANDBYS=none, the MRP0 process will expect to change the header of the new datafile, but because the file does not exist yet, the recovery process dies.
We can then copy it manually from the source standby PDB and restart the recovery process, which will change the header. This needs to be repeated for each datafile (that’s why it’s not a viable solution right now).

[Diagram: NOADG_PDB_copy]

Let’s try it together:

The Environment

Primary

Standby

The current user PDB (any resemblance to real people is purely coincidental 😉 #haveUSeenMaaz):

Cloning the PDB on the primary

First, make sure that the source PDB is open read-only:

Then, clone the PDB on the primary without the clause STANDBYS=NONE:
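In SQL (MAAZ is the source PDB and LUDO the new clone, as in the rest of this post):

  alter pluggable database MAAZ close immediate;
  alter pluggable database MAAZ open read only;
  create pluggable database LUDO from MAAZ;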

Review the clone on the Standby

At this point, on the standby, the alert log shows that the SYSTEM datafile is missing, and the recovery process stops.

One remarkable thing is that in the standby controlfile, ONLY THE SYSTEM DATAFILE exists:

We need to fix the datafiles one by one, but most of the steps can be done once for all the datafiles.

Copy the source PDB from the standby

What do we need to do? Well, the recovery process is stopped, so we can safely copy the datafiles of the source PDB from the standby site, because they have not moved yet. (Meanwhile, we can put the primary source PDB back in read-write mode.)

Copy the datafiles:
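With asmcmd, it would look something like this (the paths are entirely made up; the destination is a non-OMF alias we choose):

  asmcmd cp +DATA/CDBSTB/<PDB_GUID>/DATAFILE/SYSTEM.284.858585858 +DATA/CDBSTB/ludo_system01.dbf
  asmcmd cp +DATA/CDBSTB/<PDB_GUID>/DATAFILE/SYSAUX.285.858585858 +DATA/CDBSTB/ludo_sysaux01.dbf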

Do the magic

Now comes the interesting part: we need to assign the datafile copies of the MAAZ PDB to LUDO.

Sadly, OMF will create the copies in the wrong location (it’s a copy, so they are created in the same location as the source PDB).

We cannot just uncatalog and recatalog the copies, because they will ALWAYS be attributed to the source PDB. Nor can we use RMAN, because it will never associate the datafile copies to the new PDB. We need to rename the files manually.

It’s better to uncatalog the datafile copies beforehand, so we keep the catalog clean:

Then, because we cannot rename files on a standby database with standby file management set to AUTO, we need to set it temporarily to MANUAL.

standby_file_management is not PDB modifiable, so we need to do it for the whole CDB.

Then we need to set standby_file_management back to AUTO or the recovery will not start:
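Putting the three steps together (the file names are examples):

  alter system set standby_file_management='MANUAL';
  alter database rename file '+DATA/CDBSTB/DATAFILE/system.290.860000001'
    to '+DATA/CDBSTB/ludo_system01.dbf';
  alter system set standby_file_management='AUTO';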

We can now restart the recovery.

The recovery process will:
– change the new datafile by modifying the header for the new PDB
– create the entry for the second datafile in the controlfile
– crash again because the datafile is missing

We already have the SYSAUX datafile, right? So we can alter the name again:

This time all the datafiles have been copied (no user datafiles in this example) and the recovery process continues!! 🙂 So we can hit ^C and restart it in the background.

The Data Guard configuration reflects the success of this operation.

Do we miss anything?

Of course, we do!! The datafile names of the new PDB reside in the wrong ASM path. We need to fix them!

 

I know there’s no practical use for this procedure, but it helps a lot in understanding how Multitenant has been implemented.

I expect some improvements in 12.2!!

Cheers

Ludo

 

Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)

In my #OOW14 presentation about MAA and Multitenant, more precisely at slide #59, “PDB Creation from other PDB without ADG*”, I list a few commands that you can use to achieve a “correct” Pluggable Database clone in case you’re not using Active Data Guard.

What’s the problem with cloning a PDB in a MAA environment without ADG? If you’ve attended my session you should know the answer…

If you read the book “Data Guard Concepts and Administration 12c Release 1 (12.1)“, paragraph 3.5 Creating a PDB in a Primary Database, you’ll see that:

If you plan to create a PDB as a clone from a different PDB, then copy the data files that belong to the source PDB over to the standby database. (This step is not necessary in an Active Data Guard environment because the data files are copied automatically when the PDB is created on the standby database.)

But because there’s a good chance (99%?) that in a MAA environment you’re using ASM, this step is not so simple: you cannot copy the datafiles exactly where you want, it’s OMF, and the recovery process expects the files to be where the controlfile says they should be.

So, if you clone the PDB, the recovery process on the standby doesn’t find the datafiles at the correct location, thus the recovery process stops and will not start again until you fix the problem manually. That’s why Oracle has implemented the new syntax “STANDBYS=NONE”, which disables the recovery on the standby for a specific PDB: it lets you disable the recovery temporarily while the recovery process continues to apply logs on the remaining PDBs. (Note, however, that this feature is not intended as a generic solution for having PDBs not replicated. The recommended solution in that case is having two distinct CDBs, one protected by DG, the other not.)

With ADG, when you clone the PDB on the primary, on the standby the ADG takes care of the following steps, no matter if on ASM or FS:

  1. recover up to the point where the file# is registered in the controlfile
  2. copy the datafiles from the source DB ON THE STANDBY DATABASE (so no copy over the network)
  3. rename the datafile in the controlfile
  4. continue with the recovery

If you don’t have ADG and you’re on ASM, the Oracle documentation doesn’t give enough detail to let you solve the problem. So in August I worked out the “easy” solution that I’ve also included in my slides (#59 and #60):

  1. SQL> create pluggable database DEST from SRC standbys=none;
  2. RMAN> backup as copy pluggable database DEST format '/tmp/dest%f.dbf';
  3. $ scp /tmp/dest*.dbf remote:/tmp
  4. RMAN> catalog start with '/tmp/dest';
  5. RMAN> set newname for pluggable database DEST to new;
  6. RMAN> restore pluggable database DEST;
  7. RMAN> switch pluggable database DEST to copy;
  8. DGMGRL> edit database 'STBY' set state='APPLY-OFF';
  9. SQL> alter pluggable database DEST enable recovery;
  10. DGMGRL> edit database 'STBY' set state='APPLY-ON';

Once at #OOW14, after endless conversations at the Demo Grounds, I discovered that Oracle has worked out the very same solution requiring network transfer, and that it has been documented in a new note.

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

This note is very informative and I recommend reading it carefully!

What changes (for the better) compared with my first solution is that Oracle suggests using the new “restore from service” feature:
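The core of the note’s approach is an RMAN restore directly from the primary service, along these lines (a sketch; the service name PRIM is an assumption):

  run {
    set newname for pluggable database DEST to new;
    restore pluggable database DEST from service PRIM;
    switch datafile all;
  }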

I’ve questioned the developers at the Demo Grounds about the necessity of the network transfer (I had the chance to speak directly with the developer who wrote this piece of code!! :-)) and they said they had worked out only this solution. So, if you have a huge PDB to clone, the network transfer from the primary to the standby may severely impact your Data Guard environment and/or your whole infrastructure for the duration of the transfer.

Of course, I have a complex, undocumented solution; I hope I will find the time to document it, so stay tuned if you’re curious! 🙂

Tales from Demo Grounds part 1: Clone PDBs while open READ-WRITE

DISCLAIMER: I’ve got this information by chatting with Oracle developers at the Demo Grounds. The functionality is not documented yet and Oracle may change it at its sole discretion. Please refer to the documentation if/when it will be updated 😉

In one of my previous posts, named “A PDB is cloned while in read-write, Data Guard loose its marbles (12.1.0.2, ORA-19729)”, I blogged about a weird behavior:

The documentation states that you can create a pluggable database from another one only if the source PDB is open read-only.

Indeed, if I try to clone it when the source PDB is MOUNTED, I get error ORA-65036:

The weird behavior is that if you do it when the source is in read-write mode, it works, starting with release 12.1.0.2 (onward?).

I’ve questioned the developers at the DEMO Grounds and they have confirmed that:

  • With 12.1.0.2, they had initially planned to disclose this functionality (cloning PDBs in READ-WRITE).
  • They had problems making it work in an Active Data Guard environment (a-ha! so my post was not completely wrong).
  • Finally, they released it as an undocumented feature.
  • In the next release “they will fix it, maybe” and document it.
  • The process of cloning the PDB freezes the transactions on the source anyway.

I hope that this update helps clarify both the behavior and my previous post about this problem! 🙂

Cheers

Ludo

Oracle RAC, Oracle Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (OOW14)

Here you can find the material related to my session at Oracle Open World 2014. I’m sorry I’m late in publishing them, but I challenge you to find spare time during Oracle Open World! It’s the busiest week of the year! (Hard Work, Hard Play)

 Slides

 Demo 1 video

Demo 2 video

Demo 1 script

 

Demo 2 script

 

There’s one slide describing the procedure for cloning one PDB using the STANDBYS clause. Oracle released a Note while I was preparing my slides (one month ago) and I wasn’t aware of it, so you may also check out this note on MOS:

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)

UPDATE: I’ve blogged about it in a more recent post: Tales from the Demo Grounds part 2: cloning a PDB with ASM and Data Guard (no ADG)

UPDATE 2: I’ve written another blog post about these topics: Cloning a PDB with ASM and Data Guard (no ADG) without network transfer

Cheers!

 

Ludovico

A PDB is cloned while in read-write, Data Guard loose its marbles (12.1.0.2, ORA-19729)

UPDATE: please check my more recent post about this problem and the information I’ve got at the Oracle Demo Grounds during OOW14: http://www.ludovicocaldara.net/dba/demo-grounds-clone-pdb-rw/

I feel the strong need to blog about this very recent problem, because I’ve spent a lot of time debugging it… especially because there’s no information about this error on MOS.

Introduction
For a lab, I have prepared two RAC container databases in a physical standby configuration.
Real-time query is configured (real-time apply, standby open in read-only mode).

Following the documentation (http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#CCHDFDDG), I cloned one local pluggable database to a new PDB and, because Active Data Guard is enabled, I was expecting the PDB to be created on the standby and its files copied without problems.

BUT! I forgot to put my source PDB in read-only mode on the primary and, strangely:

  • The pluggable database has been created on the primary WITHOUT PROBLEMS (despite the documentation explicitly stating that the source needs to be read-only)
  • The recovery process on the standby stopped with error.

 

Now, the primary had all its datafiles (the new PDB has con_id 4):

 

and the standby was missing the datafiles of the new PDB:

 

But, on the standby database, the PDB somehow existed.

 

I’ve tried to play a little, and finally decided to disable the recovery for the PDB (new in 12.1.0.2).
But to disable the recovery I needed to connect to the PDB, and the PDB was somehow “nonexistent”:

 

So I’ve tried to drop it but, of course, the standby was read-only and I could not drop the PDB:

 

Then I shut down the standby, but one instance hung and I had to do a shutdown abort (I don’t know if it was related to my original problem...)

 

After mounting the standby again, the PDB was accessible:

 

So I’ve been able to disable the recovery:
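That is, something like this (the PDB name is made up; the command must be run from within the PDB):

  alter session set container=NEWPDB;
  alter pluggable database disable recovery;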

 

Then, on the primary, I took a fresh backup of the involved datafiles:

 

and I copied the files over and cataloged the copies in the controlfile:

 

but the restore was impossible, because the controlfile did not know about these datafiles!!

 

So I RESTARTED the recovery for a few seconds and, because the PDB had the recovery disabled, the recovery process added the datafiles and set them offline.

 

Then I was able to restore the datafiles 🙂

 

Finally, I enabled the recovery again for the PDB and restarted the apply process.

 

Lesson learned: if you want to clone a PDB, never, ever forget to put your source PDB in read-only mode, or you’ll have to deal with it!! 🙂

Oracle Multitenant and custom DBCA templates

Today I’ve encountered an annoying issue while adapting a few scripts for automatic database creation. I track it here so hopefully it may save someone a few hours of troubleshooting…

I’ve used the DBCA to prepare a new template:

 

Then I continued by customizing other options, including init parameters and datafile/logfile paths. Finally, I saved it as a new template instead of creating the database.

I’ve checked the resulting .dbc and seen that there was, as expected, the parameter “enable_pluggable_database”=”true”.

Then I’ve moved the template file to my $HOME directory and tested the silent database creation with the option “-createAsContainerDatabase true”:
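The command line was along these lines (a sketch; the names are made up):

  dbca -silent -createDatabase \
    -templateName /home/oracle/my_custom_template.dbc \
    -gdbName CDB1 -sid CDB1 \
    -createAsContainerDatabase true \
    -sysPassword *** -systemPassword ***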

The database configuration completed successfully, without errors. I accessed my new container, and I was surprised by seeing:

In fact, there were no pdb$seed datafiles:

After a little investigation, I found these lines in the dbca trace.log:

 

Then I struggled with dbca and templates for a while before finding that, actually, the magic “enable pluggable database” is done by dbca only if the template file name is not customized.

Running the same exact command with the very same template file but renamed to $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc actually works (notice the diff at the first line):

 

 

I’ve also tried to cheat and use a symlink to my previous custom template, and surprisingly, it still works:

In the dbca trace log, the message saying that the DB will be a NON-CDB disappears:

So the problem is really caused by the different filename/location of the template.

IMHO it’s kind of a bug: the decision between CDB and NON-CDB should not be taken by DBCA. Moreover, it’s not based on the content of the template, which would be logical. But today I’m late and lazy, so I will not open an SR for this.

:-/

Playing with Oracle 12c Multitenant Users and Roles

I’ve realized these days that the great list of articles by Oracle Alchemist does not contain any article describing common roles and users a little more in depth.

I’ve found these ones by Pete Finnigan and Bobby Curtis:

http://www.petefinnigan.com/weblog/archives/00001366.htm

http://dbasolved.com/2013/06/29/common-user-vs-local-user-12c-edition/

http://www.petefinnigan.com/weblog/archives/00001370.htm

But I would like to investigate a little more.

My test environment

Just to give you an idea, I have two PDBs (HR and HR2), each containing an HR schema.

Creating the common user

As already pointed out by the existing articles, I can’t create a common user in the root container without the C## prefix, unless I alter the hidden parameter _common_user_prefix.

So I specify the correct prefix, and it works:
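For example (the password is a placeholder):

  create user c##goofy identified by MyPassword container=all;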

The user is common, so it appears in all the containers; I can check it by querying CDB_USERS from the root container.

Creating the local user

Then I create also a local user into the HR PDB.

 

CDB_USERS scope

From the PDB I see only the users in the PDB scope:

If I change to the root, I see the users valid into all the containers:

Creating a common role

Do roles obey the same rules as users?

Yes, they do! So, let’s create a common role with the C## prefix:
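For example (the role name matches the one used later in this post):

  create role c##country_role container=all;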

It works, but if I try to create a common role in the root container only, I get an error:

And if I try to create a local role in the root, I can’t either:

Note that the error ORA-65049 is different from the ORA-65096 that I’ve got before.

My conclusion is that the CONTAINER clause of the CREATE ROLE and CREATE USER statements doesn’t make much sense there, as you can ONLY create common users and roles in the root container, and only local ones in the PDBs.

 Creating a local role

Just as an experiment, I’ve tried to see if I can create a local role with container=ALL. It doesn’t work:

So I create the local role with container=current:

Now, from the PDB I can see the two roles I can access, whereas from the root I can see all the roles I’ve defined so far: the common role is available from all the PDBs, the local role only from the PDB where it has been defined, just like the users.

 Granting object privileges to the roles

From the root I can’t give grants on objects that reside in a PDB since I cannot see them: I need to connect to the PDB and give the grants from there:

Now, if I query CDB_TAB_PRIVS from the root, I see that the grants are given at a local level (con_id=3 and common=N):

Granting common and local roles to common and local users

From a PDB, I can grant local roles to local users or common users:

But I can’t grant a common role to a common user with container=all if I’m in a PDB:

To grant a common role to a common user, I can decide either to:

  •  give the grant locally while connected to the PDB:

  •  give the grant commonly while connected to the root:
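A sketch of the two variants (role and user names as in the examples above):

  -- locally, connected to the PDB:
  alter session set container=HR;
  grant c##country_role to c##goofy container=current;

  -- commonly, connected to the root:
  alter session set container=cdb$root;
  grant c##country_role to c##goofy container=all;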

I can also grant both roles and system privileges locally to common users while connected to the root container: in this case the privileges apply to the root container only. Here the CONTAINER clause finally starts to make sense:

Verifying the grants

Ok, I’ve given the grants and I’ve never verified if they work, so far.

Let’s try with the SELECT ANY TABLE privilege I’ve given in the last snippet. I expect C##GOOFY to select any table from the root container, and only HR.COUNTRIES and HR.REGIONS in the HR PDB, because they have been granted through the two roles.

What’s the mess? When I created the user C##GOOFY, I granted CREATE SESSION and ALTER SESSION without CONTAINER=ALL:

According to the documentation, the grant command uses container=current by default (common=N):

So, I need to give the grants commonly to let c##goofy connect to all the PDBs:

Now I see that the grants give two distinct permissions: one local and the other common.

If I revoke the grants without the CONTAINER clause, actually only the local one is revoked and the user can still log in. To revoke the grants completely, I would need to check and revoke both the local and the common privileges.

After the first revoke statement, I can still connect to HR and verify that my select any table privilege doesn’t apply to the PDB as it’s local to the root container:

After that, I want to check the privileges given through the local and common roles.

I expect both users to select from HR.COUNTRIES and HR.REGIONS, since the privileges have been granted indirectly through the roles.

Let’s try the local user first:

Yeah, it works as expected.

Now let’s try the common user:

It also works, so everything is OK.

Common and local grants, why you must pay attention

During the example, I’ve granted C##COUNTRY_ROLE many times: locally in the PDB, locally in the ROOT, and commonly. The result is that I’ve flooded the grant table with many entries:

Let’s clean things up: for sure, I don’t need the grant local to the root:

Then I can choose between revoking the common privilege or the local one. Let’s try to remove the local one:

I’ve removed the local one, but I still have the common one (I’m connected to the PDB, so the entries from the other containers are not displayed):

I still have access to the tables as expected:

So, you must pay attention to a couple of things:

  • When granting privileges from the root container, keep in mind that CONTAINER=CURRENT is the default, even when the grantee or the granted role is common.
  • When revoking the grants with a Multitenant architecture, keep in mind that there is a scope and you may need more than one statement to actually remove the grant from all the scopes.

As always, I look for opinions and suggestions, feel free to comment!

🙂


Ludovico