Video: Where should I put the Observer in a Fast-Start Failover configuration?

The video explains the best practices for observer placement and the failure scenarios associated with each option. It also shows how to configure high availability for the observer.

Here’s the summary:

  • Always try to put the observer(s) on an external site.
  • If you don’t have an external site, put it where the primary database is, and have one ready on the secondary site to take over after a role transition.
  • Don’t put the observer together with the standby database!
  • Configure multiple observers for high availability, and use the PreferredObserverHosts Data Guard member property to ensure you never run the observer where the standby database is (see the sketch right after this list).
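For reference, a minimal sketch of the broker commands (assuming a primary called panda, a standby called tiger, and two observer hosts obs1 and obs2 located outside the standby site — all hypothetical names):

DGMGRL> EDIT DATABASE 'panda' SET PROPERTY PreferredObserverHosts = 'obs1,obs2';
DGMGRL> EDIT DATABASE 'tiger' SET PROPERTY PreferredObserverHosts = 'obs1,obs2';
DGMGRL> START OBSERVER;

START OBSERVER is then run once from each observer host (obs1 and obs2) so that multiple observers are registered.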

 

Far Sync and Fast-Start Failover Protection modes

Oracle advertises Far Sync as a solution for “Zero Data Loss at any distance”. This is because the primary sends its redo stream synchronously to the Far Sync, which relays it to the remote physical standby.
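As a reminder, this routing is defined with the RedoRoutes property. A sketch with hypothetical member names (panda as primary, fs1 as Far Sync, tiger as remote standby):

DGMGRL> EDIT DATABASE 'panda' SET PROPERTY RedoRoutes = '(LOCAL : fs1 SYNC)';
DGMGRL> EDIT FAR_SYNC 'fs1' SET PROPERTY RedoRoutes = '(panda : tiger ASYNC)';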

There are many reasons why Far Sync is an optimal solution for this use case, but that’s not the topic of this post 🙂

Some customers ask: Can I configure Far Sync to receive the redo stream asynchronously?

Although a direct standby receiving asynchronously would be a better idea, Far Sync can receive asynchronously as well.

One reason to do so might be to send the redo asynchronously to a single Far Sync instance that redistributes it locally to many standby databases.

It is very simple to achieve: just change the RedoRoutes property on the primary.
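For example, with the same hypothetical names as above, switching the first leg to asynchronous transport is just (a sketch):

DGMGRL> EDIT DATABASE 'panda' SET PROPERTY RedoRoutes = '(LOCAL : fs1 ASYNC)';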

This works seamlessly. The v$dataguard_process view shows the asynchronous transport process:
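The query I use for that is something like this sketch (the columns available may vary with the release):

SQL> SELECT name, role, action FROM v$dataguard_process;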

 

What about Fast-Start Failover?

Up to and including 19c, ASYNC transport to Far Sync will not work with Fast-Start Failover (FSFO).

ASYNC redo transport requires the Maximum Performance protection mode, and FSFO supports Maximum Performance in combination with Far Sync only starting with 21c.

Before 21c, trying to enable FSFO with a Far Sync will fail with:

So if you want FSFO with Far Sync in 19c, it has to be MaxAvailability (and SYNC redo transport to the Far Sync).
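In practice, sticking with the hypothetical names used above, the 19c setup would look something like this sketch:

DGMGRL> EDIT DATABASE 'panda' SET PROPERTY RedoRoutes = '(LOCAL : fs1 SYNC)';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
DGMGRL> ENABLE FAST_START FAILOVER;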


If you don’t need FSFO, as we have seen, there is no problem. The only protection mode that will not work with Far Sync is Maximum Protection:

If FSFO is required, and you want Maximum Performance before 21c, or Maximum Protection, you have to remove Far Sync from the redo route.

Ludovico

Can a physical standby database receive the redo SYNC if the Far Sync instance fails?

The answer is YES.

In the following configuration, cdgsima_lhr1pq (primary) sends synchronously to cdgsima_farsync1 (far sync), which forwards the redo stream asynchronously to cdgsima_lhr1bm (physical standby):

But if cdgsima_farsync1 is not available, I want the primary to send synchronously to the physical standby database. I accept a performance penalty, but I do not want to compromise my data protection.

I just need to set up the RedoRoutes as follows:
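The idea is to use the ALT clause of the RedoRoutes property with the FALLBACK keyword. A sketch of what it might look like (the exact syntax and values may differ slightly from my configuration):

DGMGRL> EDIT DATABASE 'cdgsima_lhr1pq' SET PROPERTY RedoRoutes = '(LOCAL : (cdgsima_farsync1 SYNC ALT=(cdgsima_lhr1bm SYNC FALLBACK)))';
DGMGRL> EDIT FAR_SYNC 'cdgsima_farsync1' SET PROPERTY RedoRoutes = '(cdgsima_lhr1pq : cdgsima_lhr1bm ASYNC)';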

The fallback is defined in the second part of the primary’s RedoRoutes rule: the ALT clause with the FALLBACK keyword.

Let’s test it. If I shutdown abort the Far Sync instance:

I can see the new SYNC destination being opened almost instantaneously (because the old destination fails immediately with ORA-03113):

Indeed, I can see the new NSS process (synchronous redo transport) spawn at that time:
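A sketch of the query I would use to spot it (the NSS% name filter is an assumption):

SQL> SELECT name, role, action FROM v$dataguard_process WHERE name LIKE 'NSS%';  -- NSS% filter is an assumption: synchronous transport processes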

Ludo

Can I rename a PDB in a Data Guard configuration?

Someone asked me this question recently.

The answer is: yes!

Let’s see it in action.

On the primary I have:
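To list the PDBs, a query like this sketch will do:

SQL> SELECT con_id, name, open_mode FROM v$pdbs;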

And of course the same PDBs on the standby:

Let’s change the PDB name from RED to TOBY. The PDB rename operation is straightforward (but it requires a brief downtime), and it has to be done on the primary:
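A sketch of the usual steps (adapt the names and the RAC/single-instance details to your environment):

SQL> ALTER PLUGGABLE DATABASE red CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE red OPEN RESTRICTED;
SQL> ALTER SESSION SET CONTAINER = red;
SQL> ALTER PLUGGABLE DATABASE RENAME GLOBAL_NAME TO toby;
SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE OPEN;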

On the standby, I can see that the PDB changed its name:

The PDB name change is propagated transparently through redo apply.

Ludo

rhpctl addnode gihome: specify HUB or LEAF when adding new nodes to a Flex Cluster

I have a customer trying to add a new node to a cluster using Fleet Patching and Provisioning.

The error in the command output is not very friendly:

The “RHPHELP_preNodeAddVal” string might already give an idea of the cause: something related to the “cluvfy stage -pre nodeadd” evaluation that we normally run when adding a node by hand. FPP does not actually run cluvfy, but it calls the same primitives that cluvfy is based on.

In FPP, when the error does not give any useful information, this is the flow to follow:

  • use “rhpctl query audit” to get the date and time of the failing operation
  • open the “rhpserver.log.0” and look for the operation log in that time frame
  • get the UID of the operation, e.g. in the following line it is “1556344143”:

  • Isolate the log for the operation: grep $UID rhpserver.log.0 > $UID.log
  • Locate the trace file of the rhphelper remote execution:

  • Find the root cause in the rhphelper trace:

In this case, the target cluster is a Flex Cluster, so the command must be run specifying the node_role.

The documentation is not clear (we will fix it soon):

node_role must be specified for Flex Clusters, and it must be either HUB or LEAF.
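A sketch of what the corrected command might look like (the working copy and node names are hypothetical; check rhpctl addnode gihome -help for the exact syntax of your version):

$ # hypothetical working copy and node names, HUB role as required for this Flex Cluster node
$ rhpctl addnode gihome -workingcopy WC_gi_19 -newnodes node3:node3-vip:HUB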

After using the correct command line, the command succeeded.

HTH

Ludovico

Changing FPP temporary directory (/tmp in noexec and other issues)

When using FPP, you might experience the following error (PRVF-7546):

This is often related to the /tmp filesystem being mounted with the “noexec” option:
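A quick way to verify it (a sketch using standard Linux commands; look for noexec in the output):

$ findmnt -no OPTIONS /tmp
$ mount | grep ' /tmp '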

Although it is tempting to just remount the filesystem with “exec”, you might be in this situation because your systems are configured to adhere to the STIG recommendations:

The noexec option must be added to the /tmp partition (https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/2016-12-16/finding/V-57569)

FPP 19.9 contains fix 30885598, which allows specifying a different temporary location for FPP operations:

After that, the operation should run smoothly:

HTH

Ludo

Why do PMs ask you to open Service Requests for almost EVERYTHING?

If you attend Oracle-related events or if you are active on Twitter or other social media used by technologists, you might know many of us Product Managers directly. If that is the case, you know that we are in general very easy to reach and always happy to help.

When you contact us directly, however, sometimes we answer “Please open an SR for that”. Somewhat irritating, huh? “We had chats and drinks together at conferences and now this bureaucracy?” This is understandable. Who likes opening SRs, after all? Isn’t it just easier to forward that e-mail internally and get the answer first-hand?

This is something that happened to me as well in the past, when I was not yet working for Oracle, and it still happens now, with me on the other side (the answer coming from me, as a PM).

Why? The first answer is “it depends on the question“. If it is anything that we can answer directly, we will probably do it.

It might be a question about a specific feature: “Does product X support Y?”, “Can you add this feature to your product?”; or a known problem for which the PM already knows the bug (in that case it is just a matter of looking up the bug number); or anything that is relatively easy to answer: “What are the best practices for X?”, “Do you have a paper explaining that?”, “Does this bug have a fix already?”

But there is a plethora of questions for which we need more information.

“I try this, but it does not work”. “I get this error and I think it is a bug”. “I have THIS performance problem”.

This is when I would personally ask you to open an SR most of the time (unless I have a quick answer to give). And there are a few reasons:

Data protection

Oracle takes data protection very seriously. Oracle employees are trained to deal with potentially sensitive data and cannot forward customer information via e-mail: it could be exposed or forwarded to the wrong recipients by mistake. We don’t ask for TFA collections or logs via e-mail (even if sometimes customers send them to us anyway…).

Special privileges are required to access customer SRs; that is the only secure way we provide to transfer logs and protected information. The files uploaded into SRs must be accessed through a specific application, and all checkouts and downloads are tracked. When we need to forward customer information internally, we just specify the SR number and let our colleagues access the information themselves. Sometimes we use SRs just as placeholders to exchange data with customers, without having a support engineer working on them.

This is the single most important point, and it somehow makes the others secondary. Still, the remaining ones are good points.

Important pieces in the discussion do not get lost

The answer does not always come first-hand… it might take 3-4 hops (sometimes more), plus analysis, comments, explanations, and discussions.

E-mail is not a good tool for this. Long threads can split and include just part of the audience (the “don’t reply to all” effect). Attachments are deleted when replying instead of forwarding… and pieces get lost.

This is where you would use Jira, or another trouble-ticketing system. Guess which one Oracle uses for its customers? 🙂

MOS has internal views to dig into TFA logs (that’s why it is a good idea to provide one whenever it might be relevant), and all the attachments, comments, and internal discussions are centralized there. But we need an SR to add information to!

Win-win: knowledge base, feedback, continuous improvement

If you discover something new from a technical discussion, what do you do? Do you share it or do you keep it for yourself? MOS is part of our knowledge base, and it is a good idea to store important discussions in it. Support engineers can find solutions in SRs with similar cases. It is also a good opportunity for the support engineers themselves to be involved in one more interesting discussion, so next time they might have the answer at their fingertips.

To conclude, think about it as a win-win. You give us interesting problems that might help improve the product, and you get a Guardian Angel on your SR for free 😉

Ludo

Oracle Fleet Patching and Provisioning (FPP): My new role as PM and a brand new series of blog posts

It’s been 6 years since I tried FPP (formerly Rapid Home Provisioning, or RHP) for the first time.

Rapid Home Provisioning

FPP was still young and lacking many features at that time, but it already changed the way I worked over the following years. I embraced out-of-place patching, developed some basic scripts to install Oracle Homes, and sought automation and standardization at all costs:

Oracle Home Management – part 7: Putting all together

When 18c came with the FPP local-mode automaton, I implemented it for the Grid Infrastructure patching strategy at CERN:

Oracle Grid Infrastructure 18c patching part 3: Executing out-of-place patching with the local-mode automaton

And I discovered that, in the meantime, FPP had made giant steps, with many new features and fixes for quite a few usability and performance problems.

Last year, when joining the Oracle Database High Availability (HA), Scalability, and Maximum Availability Architecture (MAA) Product Management Team at Oracle, I took (among others) the Product Manager role for FPP.

Becoming an Oracle employee after 20 years of working with Oracle technology is a big leap. It allows me to understand how big the company is, and how collaborative and friendly Oracle employees are (yes, I was used to marketing nonsense, insistent salesmen, and unfriendly license auditors. This is slowly changing with Oracle embracing the Cloud, but it is still a fresh wound for many customers. Expect this to change even more! As for me… I’ll be the same I’ve always been 🙂 ).

Now I have daily meetings with big customers (bigger than any I have worked with in the past), development teams, other product managers, Oracle consultants, and community experts. My primary goal is to make the product better, increase its adoption, and help customers have the best possible experience with it. This includes testing the product myself, writing specs, presentations, and videos, collecting feedback from customers, tracking bugs, and managing escalations.

I am a Product Manager for other products as well, but I have to admit that FPP is the product that takes most of my Product Manager time. Why?

I will give a few reasons in my next blog post(s).

Ludo

The fear of (availability) loss is a path to the dark side.

I have been a DBA/consultant for customers and big production environments for over twenty years. I have explained more or less my career path in this blog post.

Database (and application) high availability has always been one of my favorite areas. Over the years I have become a high availability expert (my many blog posts are there to confirm it) and I have spent a lot of time building, troubleshooting, teaching, presenting, and advocating for these gems of technology that are RAC, Data Guard, Application Continuity, and the many other products that are part of the Oracle Maximum Availability Architecture solution. Customers fear downtime, and I have always been with them on that. But in my case, it looks like Yoda’s famous quote worked for me (in a good way):

I’ll be joining the Oracle Maximum Availability Architecture Product Management team as MAA Product Manager (or rather Cloud MAA, but I will not explain that here ;-)) next November.

(for those who are not familiar with the joke, the “Dark Side” is how we often refer to the Oracle employees in the Oracle Community 😉 )

I remember as if it were yesterday presenting some Data Guard 12c new features in front of a big audience at Collaborate 2014. There I met two incredible people who were part of the MAA Product Management team: Larry Carpenter and Markus Michalewicz. Larry has been a great source of inspiration for improving my maturity and ease when presenting in public, while Markus has become a friend over the years, in addition to being one of the most influential people in my professional network.

Now I have the opportunity to join that team, and it feels like the most natural move in my career.

And because I imagine some of you will have questions, here are answers to the ones I have been asked most frequently so far:

  • MAA PM does not mean becoming a team lead or supervising other colleagues; I’ll be a “regular” PM
  • I will stay in Switzerland and work remotely from here
  • I will stay in “the conference circus” and keep presenting as soon as the COVID-19 situation allows it
  • Yes, I was VERY happy in Trivadis and it will always have a special place in my heart
  • Yep, that means no ACE Director award anymore 😉

Exciting times ahead! 🙂