Ludovico is a member of the Oracle Database High Availability (HA), Scalability & Maximum Availability Architecture (MAA) Product Management team at Oracle.
He focuses on Oracle Data Guard, Flashback technologies, and Cloud MAA.
Traditionally, the DGMGRL command SHOW CONFIGURATION VERBOSE not only retrieved detailed configuration information but also triggered a health check. The health check operation can be resource-intensive and time-consuming, especially when executed repeatedly across multiple database instances or as part of automated workflows.
Starting with Oracle 23.9 (and planned also for a future 19c Release Update), the behavior of SHOW CONFIGURATION VERBOSE changes with the introduction of the following fix:
Bug 37829413 – ‘SHOW CONFIGURATION VERBOSE’ UNNECESSARILY TRIGGERS A FORCED HEALTH CHECK
Previous behavior
Each use of SHOW CONFIGURATION VERBOSE triggered a fresh, full health check before showing configuration details, regardless of whether up-to-date health information was needed.
New behavior
The command now returns comprehensive configuration details and property values without forcing an immediate health check.
Why this change?
This change eliminates unnecessary resource usage and network communication, improving performance especially in automated systems that repeatedly gather configuration info, such as Oracle TFA or custom scripts. The goal is to make monitoring and troubleshooting more efficient.
What’s the impact for me?
When you execute SHOW CONFIGURATION, at the bottom you see when the last health check was executed:
Configuration Status:
SUCCESS (status updated 32 seconds ago)
The health check is scheduled automatically every minute.
When there was a warning, it was common to execute SHOW CONFIGURATION VERBOSE to force a refresh and get the most recent status. This no longer works; you'll have to wait until the next scheduled health check.
In Oracle 23ai, you can still force a health check explicitly with:
SELECT dbms_dg.health_check FROM dual;
Remember, avoid running it unless you are in an emergency!
Oracle Data Guard 23c comes with many nice improvements for observability, which greatly increase the usability of Data Guard in environments with a high level of automation.
For the 23c version, we have the following new views.
V$DG_BROKER_ROLE_CHANGE
This view tracks the last role transitions that occurred in the configuration. Example:
The event might be a Switchover, Failover, or Fast-Start Failover.
In the case of Fast-Start Failover, you will see the reason (typically "Primary Disconnected" if it comes from the observer, or whatever reason you passed to DBMS_DG.INITIATE_FS_FAILOVER).
No more need to analyze the logs to find out which database was primary at any moment in time!
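A quick way to list the recorded transitions from SQL (a sketch; the exact columns and output depend on your release, so SELECT * is used here to avoid guessing column names):

```sql
-- List the role transitions recorded by the broker.
-- Check the view definition in your release for the exact column list.
SELECT * FROM v$dg_broker_role_change;
```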
V$DG_BROKER_PROPERTY
Before 23c, the only possible way to get a broker property from SQL was to use undocumented (unsupported) procedures in the fixed package DBMS_DRS. I’ve blogged about it in the past, before joining Oracle.
Now, it’s as easy as selecting from a view, where you can get the properties per member or per configuration:
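A minimal sketch (again using SELECT * rather than guessing column names, which may vary by release):

```sql
-- Broker properties, exposed per member or per configuration.
-- Add a WHERE clause on the member-name column once you have
-- checked the view definition in your release.
SELECT * FROM v$dg_broker_property;
```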
This gives important additional information about the observers, for example, the last time a specific observer was able to ping the primary or the target (in seconds).
Also, the path of the log file and runtime data file are available, making it easier to find them on the observer host in case of a problem.
Conclusion
These new views should greatly improve the experience when monitoring or diagnosing problems with Data Guard. And they are just some of the many improvements we introduced in 23c. Stay tuned for more 🙂
This command prepares a database to become primary in a Data Guard configuration.
It sets many recommended parameters:
DB_FILES=1024
LOG_BUFFER=256M
DB_BLOCK_CHECKSUM=TYPICAL
DB_LOST_WRITE_PROTECT=TYPICAL
DB_FLASHBACK_RETENTION_TARGET=120
PARALLEL_THREADS_PER_CPU=1
STANDBY_FILE_MANAGEMENT=AUTO
DG_BROKER_START=TRUE
It also sets the RMAN archive deletion policy, enables Flashback Database and force logging, creates the standby redo logs according to the online redo log configuration, and creates an spfile if the database is running with an init file.
If you tried this in 21c, you may have noticed that the database restarts automatically to set all the static parameters. If you weren't expecting it, the sudden restart could feel a bit brutal.
In 23c, we added an additional keyword “restart” to specify that you are OK with the restart of the database. If you don’t specify it, the broker will complain that it cannot proceed without a restart:
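A sketch of the 23c invocation (the database name is a placeholder, and the exact keyword placement may differ in your release; check the DGMGRL command reference):

```
DGMGRL> PREPARE DATABASE FOR DATA GUARD
          WITH DB_UNIQUE_NAME IS chicago
          RESTART;
```

Without the RESTART keyword, the command stops and tells you a restart is needed instead of bouncing the database behind your back.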
First, I have a Data Guard configuration in place. On the primary database, the current incarnation has a single parent (the template from which it has been created):
Just to make room for some undo, I increase the undo_retention. On a PDB, that requires LOCAL UNDO to be configured (I hope it’s the default everywhere nowadays).
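For example, from inside the PDB (the value is an arbitrary placeholder, just to retain more undo):

```sql
-- Requires LOCAL UNDO mode; run inside the PDB.
ALTER SYSTEM SET undo_retention = 3600;  -- seconds (placeholder value)
```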
I love seeing people suggesting Oracle Data Guard Fast-Start Failover for high availability. Nevertheless, there are a few problems with the architecture and steps proposed in the article.
I sent my comments via Disqus on the AWS blogging platform, but after a month, they were rejected, and the blog content hasn't changed.
For this reason, I don’t have other places to post my comment but here…
The steps used to create the database service do not include any HA property, which makes most of the effort useless (see Table 153-6 in the link above).
But, most important, TAF (or Oracle connectivity in general) does NOT require a host IP change! There is no need to change the DNS when using the recommended connection string with multiple address_lists.
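As a sketch, a connect descriptor with multiple address lists looks like this (hostnames, port, service name, and timeout values are placeholders; tune them to your environment):

```
myapp =
 (DESCRIPTION =
   (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 30)(RETRY_DELAY = 3)
   (TRANSPORT_CONNECT_TIMEOUT = 3)
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = primary-scan.example.com)(PORT = 1521)))
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = standby-scan.example.com)(PORT = 1521)))
   (CONNECT_DATA = (SERVICE_NAME = myapp_service.example.com)))
```

Because the client walks through both address lists, it reaches whichever site currently runs the service after a role change, with no DNS or IP change involved.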
If you need to implement a complex architecture using a software solution, pay attention that the practices suggested by the partner/integrator/3rd party match the ones from the software vendor. In the case of Oracle Data Guard, Oracle knows better 😉
The video explains best practices and different failure scenarios for different observer placements. It also shows how to configure high availability for the observer.
Here’s the summary:
Always try to put the observer(s) on an external site.
If you don’t have any, put it where the primary database is, and have one ready on the secondary site after the role transition.
Don’t put the observer together with the standby database!
Configure multiple observers for high availability, and use the PreferredObserverHosts Data Guard member property to ensure you never run the observer where the standby database is.
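For example (the member name and observer hostnames are placeholders):

```
DGMGRL> EDIT DATABASE chicago SET PROPERTY
          PreferredObserverHosts = 'obs1.example.com, obs2.example.com';
```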
Why is Fast-Start Failover a crucial component for mission-critical Data Guard deployments?
The observer lowers the RTO in case of failure, and the Fast-Start Failover protection modes protect the database from split-brain and data loss.
Are you attending OCW, and do you want to find me and know more about how to avoid downtime and data loss? Or how to optimize your application configuration to make the most out of MAA technologies? Or any other database- or technology-related topic?
Maybe you prefer just a chat and discussing life? Over a coffee, or tea? (or maybe beer?)
👇This is where you can find me during OCW.👇
Monday, October 17, 2022
6:30 PM – 10:00 PM – Customer Appreciation Event
Where: Mandalay Bay Shark Reef
This is an invitation-only event. If you are one of the lucky customers who have an invitation, let's meet there! It will be fun to discuss technology, business, and life while watching sharks and enjoying a drink together.
Come together and ask anything about Data Guard, Active Data Guard, RAC, FPP, or High Availability! See some products in action, and get some insights from my colleagues and me. The booth will be open during the whole exhibition time, but I will certainly be there on Tuesday for these two hours.
I will help my colleague Suraj Ramesh run the hands-on lab of this brand-new (actually, still to be released!) service for general-purpose Disaster Recovery in the cloud.
After HOL4089 until 7:00 PM – Welcome Reception
Where: CloudWorld Hub, Database booth DB-01
I will probably join to say hello during the Welcome Reception. Maybe you can spot me there 🙂
I will run this hands-on lab. You will have an Active Data Guard 19c configuration in the cloud at your fingertips, and you will play with role changes, corruption detection and repair, and other features. I will be there to explain insights, hints, and recommendations on how to implement it in your work environment.
It’s almost six months without blogging from my side. What a bad score!
It’s not a coincidence that I’m blogging today during #JoelKallmanDay.
A day that reminds the community how important it is to share. Knowledge, mostly. But also good and bad experiences, emotions…
A bittersweet day, at least for me.
On the bitter side: it reminds me of Joel, Pieter, and other friends who are no longer with us. That as a Product Manager, I have big shoes to fill, and no matter how well I do, I will always feel that it's not good enough for the high expectations I set for myself. Guess what! Being a PM is way more complicated than I expected when I applied for the position two years ago. So many things to do or learn, so many requests, and so many customers! And being a PM at Oracle is probably twice as complicated, because no matter how well I (or we as a team) do, there will always be a portion of the community that picks on Oracle technology for one reason or another.
On the bright side: it reminds me that I am incredibly privileged to have this role, working in a great team and helping the most demanding customers to get the most out of incredible technology. I love sharing, teaching, giving constructive feedback, producing quality content, and improving the customer experience. This is the sweet part of the job, where I am still taking baby steps when comparing myself to the PM legends we have in our organization. They are always glad to explain our products to the community, the customers, and colleagues! And they are all excellent mentors, each with a different style, background, and personal life.
And knowing people personally is, at least for me, the best thing about being part of a community (outside Oracle) and a team (inside Oracle). We all strive for the best technical solutions, performance, developer experience, or uptime for the business. But we are human first of all. And this is what #JoelKallmanDay means to me: trying to be a better human as a goal, so that everything else comes naturally, including being a great colleague, community servant, or friend. ♥
Oracle advertises Far Sync as a solution for “Zero Data Loss at any distance”. This is because the primary sends its redo stream synchronously to the Far Sync, which relays it to the remote physical standby.
There are many reasons why Far Sync is an optimal solution for this use case, but that’s not the topic of this post 🙂
Some customers ask: Can I configure Far Sync to receive the redo stream asynchronously?
Although a direct standby receiving asynchronously would be a better idea, Far Sync can receive asynchronously as well.
And one reason might be to send asynchronously to one Far Sync member that redistributes locally to many standbys.
It is very simple to achieve: just change the RedoRoutes property on the primary.
RedoRoutes='(LOCAL : cdgsima_farsync1 ASYNC)'
This will work seamlessly. The v$dataguard_process view will show the async transport process:
NAME PID TYP ACTION CLIENT_PID CLIENT_ROLE GROUP# RESETLOG_ID THREAD# SEQUENCE# BLOCK#
So if you want FSFO with Far Sync in 19c, it has to be MaxAvailability (and SYNC redo transport to the Far Sync).
If you don’t need FSFO, as we have seen, there is no problem. The only protection mode that will not work with Far Sync is Maximum Protection:
If FSFO is required, and you want Maximum Performance before 21c, or Maximum Protection, you have to remove Far Sync from the redo route.
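For example, to take the Far Sync out of the route and ship directly (and asynchronously, for Maximum Performance) to the standby, the primary's RedoRoutes could be changed like this (both member names are placeholders):

```
DGMGRL> EDIT DATABASE primary_db SET PROPERTY
          RedoRoutes = '(LOCAL : standby_db ASYNC)';
```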
—
Ludovico