Why is Fast-Start Failover a crucial component for mission-critical Data Guard deployments?
The observer lowers the RTO in case of failure, and the Fast-Start Failover protection modes protect the database from split-brain and data loss.
Author Archives: Ludovico
Find Ludovico at Oracle Cloud World 2022!
Are you attending OCW, and do you want to find me and know more about how to avoid downtime and data loss? Or how to optimize your application configuration to make the most out of MAA technologies? Or any database, or technology-related topic?
Maybe you prefer just a chat and discussing life? Over a coffee, or tea? (or maybe beer?)
👇This is where you can find me during OCW.👇
Monday, October 17, 2022
6:30 PM – 10:00 PM – Customer Appreciation Event
Where: Mandalay Bay Shark Reef
This is an invitation-only event. If you are one of the lucky customers that possess an invitation, let’s meet there! It will be fun to discuss technology, business, and life while watching sharks and enjoying a drink together.
Tuesday, October 18, 2022
2:00 PM – 4:30 PM – Oracle Maximum Availability Architecture with Oracle RAC and Active Data Guard
Where: CloudWorld Hub, Database booth DB-01
Come by and ask us anything about Data Guard, Active Data Guard, RAC, FPP, or High Availability! See some products in action, and get some insights from my colleagues and me. The booth will be open during the whole exhibition time, but I will certainly be there on Tuesday for these two hours.
4:00 PM – 5:30 PM – Protect Your Business Using Oracle Full Stack Disaster Recovery Service – Interactive Hands-On-Lab [HOL4089]
Where: Bellini 2003, The Venetian, Level 2
I will help my colleague Suraj Ramesh run the hands-on lab of this brand-new (actually, still to be released!) service for general-purpose Disaster Recovery in the cloud.
After HOL4089 until 7:00 PM – Welcome Reception
Where: CloudWorld Hub, Database booth DB-01
I will probably join to say hello during the Welcome Reception. Maybe you can spot me there 🙂
Wednesday, October 19, 2022
10:00 AM – 12:00 PM – Oracle Maximum Availability Architecture with Oracle RAC and Active Data Guard
Where: CloudWorld Hub, Database booth DB-01
I will be there once again to answer all your questions and show some fancy stuff 🙂
1:15 PM – 2:00 PM – Oracle Data Guard—Active, Autonomous, and Always Protective [LRN3528]
Where: San Polo 3403, The Venetian, Level 3
I will talk about Data Guard, Active Data Guard, and what I consider the most important features today. Come to the session to know more!
3:00 PM – 4:30 PM – Protect Your Data with Oracle Active Data Guard – Interactive Hands-On-Lab [HOL4054]
Where: Bellini 2003, The Venetian, Level 2
I will run this hands-on lab. You will have an Active Data Guard 19c configuration in the cloud at your fingertips, and you will play with role changes, corruption detection and repair, and other features. I will be there to explain insights, hints, and recommendations on how to implement it in your work environment.
Thursday, October 20, 2022
11:40 AM – 12:00 PM – The Least-Known Facts About Oracle Data Guard and Oracle Active Data Guard [LIT4029]
Where: Ascend Lounge, CloudWorld Hub, The Venetian
This will be great! I bet you will discover MANY things that you did not know about Data Guard and Active Data Guard. Come to know more!
See you there!
—
Ludovico
Check, check… Does the mic still work? #JoelKallmanday
Update PHP: ✔
Update WordPress: ✔
New content: ⌛
It’s almost six months without blogging from my side. What a bad score!
It’s not a coincidence that I’m blogging today during #JoelKallmanDay.
A day that reminds the community how important it is to share. Knowledge, mostly. But also good and bad experiences, emotions…
A bittersweet day, at least for me.
On the bitter side: it reminds me of Joel, Pieter, and other friends who are no longer with us. That as a Product Manager, I have big shoes to fill, and no matter how well I do, I will always feel that it is not good enough for the high expectations I set for myself. Guess what! Being a PM is way more complicated than I expected when I applied for the position two years ago. So many things to do or learn, so many requests, and so many customers! And being a PM at Oracle is probably twice as complicated, because no matter how well I (or we as a team) do, there will always be a portion of the community that picks on Oracle technology for one reason or another.
On the bright side: it reminds me that I am incredibly privileged to have this role, working in a great team and helping the most demanding customers to get the most out of incredible technology. I love sharing, teaching, giving constructive feedback, producing quality content, and improving the customer experience. This is the sweet part of the job, where I am still taking baby steps when comparing myself to the PM legends we have in our organization. They are always glad to explain our products to the community, the customers, and colleagues! And they are all excellent mentors, each with a different style, background, and personal life.
And knowing people personally is, at least for me, the best thing about being part of a community (outside Oracle) and a team (inside Oracle). We all strive for the best technical solutions, performance, developer experience, or uptime for the business. But we are human first of all. And this is what #JoelKallmanDay means to me: trying to be a better human as a goal, so that everything else comes naturally, including being a great colleague, community servant, or friend. ♥
Far Sync and Fast-Start Failover Protection modes
Oracle advertises Far Sync as a solution for “Zero Data Loss at any distance”. This is because the primary sends its redo stream synchronously to the Far Sync, which relays it to the remote physical standby.
There are many reasons why Far Sync is an optimal solution for this use case, but that’s not the topic of this post 🙂
Some customers ask: Can I configure Far Sync to receive the redo stream asynchronously?
Although a direct standby receiving asynchronously would be a better idea, Far Sync can receive asynchronously as well.
And one reason might be to send asynchronously to one Far Sync member that redistributes locally to many standbys.
It is very simple to achieve: just change the RedoRoutes property on the primary.
RedoRoutes = '(LOCAL : cdgsima_farsync1 ASYNC)'
This will work seamlessly. The view v$dataguard_process will show the async transport process:
NAME  PID  TYP  ACTION           CLIENT_PID  CLIENT_ROLE  GROUP#  RESETLOG_ID  THREAD#  SEQUENCE#  BLOCK#
TT02  440  KSV  async ORL multi           0  none              2   1098480879        1        146     456
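For the redistribution use case mentioned above (one Far Sync member that receives asynchronously and feeds several local standbys), a hedged sketch of the RedoRoutes setup might look like the following; prim, fs1, stby1, and stby2 are hypothetical member names, not part of this configuration:

```
EDIT DATABASE 'prim' SET PROPERTY 'RedoRoutes' = '(LOCAL : fs1 ASYNC)';
EDIT FAR_SYNC 'fs1' SET PROPERTY 'RedoRoutes' = '(prim : stby1 ASYNC, stby2 ASYNC)';
```

The first rule ships redo asynchronously to the Far Sync; the second one makes the Far Sync redistribute it to both standbys.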
What about Fast-Start Failover?
Up to and including 19c, ASYNC transport to Far Sync will not work with Fast-Start Failover (FSFO).
ASYNC redo transport mandates Maximum Performance protection mode, and FSFO supports that in conjunction with Far Sync only starting with 21c.
Before 21c, trying to enable FSFO with a Far Sync will fail with:
effective redo transport mode is incompatible with the configuration protection mode
DGMGRL> show fast_start failover

Fast-Start Failover: Disabled

  Protection Mode:    MaxPerformance
  Lag Limit:          30 seconds

  Threshold:          30 seconds
  Active Target:      (none)
  Potential Targets:  "cdgsima_lhr1bm"
    cdgsima_lhr1bm invalid
    - effective redo transport mode is incompatible with the configuration protection mode
  Observer:           (none)
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: (none)
  Observer Override:  FALSE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile           NO
    Stuck Archiver                 NO
    Datafile Write Errors          YES

  Oracle Error Conditions:
    (none)
So if you want FSFO with Far Sync in 19c, the protection mode has to be MaxAvailability (with SYNC redo transport to the Far Sync).
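As a hedged sketch (reusing this post's member names; not a full procedure, and the observer setup is omitted), the 19c settings could look like this in DGMGRL:

```
EDIT DATABASE 'cdgsima_lhr1pq' SET PROPERTY 'RedoRoutes' = '(LOCAL : cdgsima_farsync1 SYNC)';
EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
ENABLE FAST_START FAILOVER;
```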
If you don’t need FSFO, as we have seen, there is no problem. The only protection mode that will not work with Far Sync is Maximum Protection.
If FSFO is required, and you want Maximum Performance before 21c, or Maximum Protection, you have to remove Far Sync from the redo route.
—
Ludovico
Can a physical standby database receive the redo SYNC if the Far Sync instance fails?
In the following configuration, cdgsima_lhr1pq (primary) sends synchronously to cdgsima_farsync1 (far sync), which forwards the redo stream asynchronously to cdgsima_lhr1bm (physical standby):
DGMGRL> show configuration verbose

Configuration - cdgsima

  Protection Mode: MaxPerformance
  Members:
  cdgsima_lhr1pq   - Primary database
    cdgsima_farsync1 - Far sync instance
      cdgsima_lhr1bm - Physical standby database
    cdgsima_lhr1bm   - Physical standby database (alternate of cdgsima_farsync1)

  Members Not Receiving Redo:
  cdgsima_farsync2 - Far sync instance
But if cdgsima_farsync1 is not available, I want the primary to send synchronously to the physical standby database. I accept a performance penalty, but I do not want to compromise my data protection.
I just need to set up the RedoRoutes as follows:
-- when the primary is cdgsima_lhr1pq
EDIT DATABASE 'cdgsima_lhr1pq' SET PROPERTY 'RedoRoutes' = '(LOCAL : (cdgsima_farsync1 SYNC PRIORITY=1, cdgsima_lhr1bm SYNC PRIORITY=2))';
EDIT FAR_SYNC 'cdgsima_farsync1' SET PROPERTY 'RedoRoutes' = '(cdgsima_lhr1pq : cdgsima_lhr1bm ASYNC)';

-- when the primary is cdgsima_lhr1bm
EDIT DATABASE 'cdgsima_lhr1bm' SET PROPERTY 'RedoRoutes' = '(LOCAL : (cdgsima_farsync2 SYNC PRIORITY=1, cdgsima_lhr1pq SYNC PRIORITY=2))';
EDIT FAR_SYNC 'cdgsima_farsync2' SET PROPERTY 'RedoRoutes' = '(cdgsima_lhr1bm : cdgsima_lhr1pq ASYNC)';
This is defined by the second part of the RedoRoutes rules:
cdgsima_lhr1bm SYNC PRIORITY=2
Let’s test. If I shut down the far sync instance with ABORT:
$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 26 10:55:31 2022
Version 19.13.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

SQL> shutdown abort
ORACLE instance shut down.
SQL>
I can see the new SYNC destination being open almost instantaneously (because the old destination fails immediately with ORA-03113):
2022-03-26T10:55:35.581460+00:00
LGWR (PID:42101): Attempting LAD:2 network reconnect (3113)
LGWR (PID:42101): LAD:2 network reconnect abandoned
2022-03-26T10:55:35.602542+00:00
Errors in file /u01/app/oracle/diag/rdbms/cdgsima_lhr1pq/cdgsima/trace/cdgsima_lgwr_42101.trc:
ORA-03113: end-of-file on communication channel
LGWR (PID:42101): Error 3113 for LNO:3 to 'dgsima1.dbdgsima.misclabs.oraclevcn.com:1521/cdgsima_farsync1.dbdgsima.misclabs.oraclevcn.com'
2022-03-26T10:55:35.608691+00:00
LGWR (PID:42101): LAD:2 is UNSYNCHRONIZED
2022-03-26T10:55:36.610098+00:00
LGWR (PID:42101): Failed to archive LNO:3 T-1.S-141, error=3113
LGWR (PID:42101): Error 1041 disconnecting from LAD:2 standby host 'dgsima1.dbdgsima.misclabs.oraclevcn.com:1521/cdgsima_farsync1.dbdgsima.misclabs.oraclevcn.com'
2022-03-26T10:55:37.143448+00:00
LGWR (PID:42101): LAD:3 is UNSYNCHRONIZED
2022-03-26T10:55:37.143569+00:00
LGWR (PID:42101): LAD:2 no longer supports SYNCHRONIZATION
Starting background process NSS3
2022-03-26T10:55:37.227954+00:00
NSS3 started with pid=38, OS id=78251
2022-03-26T10:55:40.733905+00:00
Thread 1 advanced to log sequence 142 (LGWR switch), current SCN: 8068734
  Current log# 1 seq# 142 mem# 0: /u03/app/oracle/redo/CDGSIMA_LHR1PQ/onlinelog/o1_mf_1_k251hfvk_.log
2022-03-26T10:55:40.781499+00:00
ARC0 (PID:42266): Archived Log entry 220 added for T-1.S-141 ID 0x9eb046ef LAD:1
2022-03-26T10:55:41.606175+00:00
ALTER SYSTEM SET log_archive_dest_state_3='ENABLE' SCOPE=MEMORY SID='*';
2022-03-26T10:55:43.747483+00:00
LGWR (PID:42101): LAD:3 is SYNCHRONIZED
2022-03-26T10:55:43.816978+00:00
Thread 1 advanced to log sequence 143 (LGWR switch), current SCN: 8068743
  Current log# 2 seq# 143 mem# 0: /u03/app/oracle/redo/CDGSIMA_LHR1PQ/onlinelog/o1_mf_2_k251hfwz_.log
Indeed, I can see the new NSS process (synchronous redo transport) spawned at that time:
SQL> r
  1  select NAME
  2  ,PID
  3  ,TYPE
  4  ,ROLE ACTION
  5  ,CLIENT_PID
  6  ,CLIENT_ROLE
  7  ,GROUP#
  8  ,RESETLOG_ID
  9  ,THREAD#
 10  ,SEQUENCE#
 11  ,BLOCK#
 12* from v$dataguard_process where name like 'NSS%'

NAME  PID   TYP ACTION  CLIENT_PID CLIENT_ROLE GROUP# RESETLOG_ID THREAD# SEQUENCE# BLOCK#
----- ----- --- ------- ---------- ----------- ------ ----------- ------- --------- ------
NSS2  54961 KSB sync             0 none             0           0       0         0      0
NSS3  78251 KSB sync             0 none             0           0       0         0      0

SQL> !ps -eaf | grep ora_nss
oracle   54961     1  0 Mar10 ?        00:00:55 ora_nss2_cdgsima
oracle   78251     1  0 10:55 ?        00:00:00 ora_nss3_cdgsima
—
Ludo
Can I rename a PDB in a Data Guard configuration?
Someone asked me this question recently.
The answer is: yes!
Let’s see it in action.
On the primary I have:
----- PRIMARY
SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 RED                            READ WRITE NO
         4 SAND                           READ WRITE NO
And of course the same PDBs on the standby:
----- STANDBY
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 RED                            MOUNTED
         4 SAND                           MOUNTED
Let’s change the name of the PDB RED to TOBY. The PDB rename operation is straightforward (although it requires a brief downtime), and it must be done on the primary:
SQL> alter pluggable database red close;

Pluggable database altered.

SQL> alter pluggable database red open restricted;

Pluggable database altered.

SQL> alter session set container=red;

Session altered.

SQL> alter pluggable database rename global_name to toby;

Pluggable database altered.

SQL> alter session set container=cdb$root;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TOBY                           READ WRITE YES
         4 SAND                           READ WRITE NO

SQL> alter pluggable database toby close;

Pluggable database altered.

SQL> alter pluggable database toby open;

Pluggable database altered.

SQL>
On the standby, I can see that the PDB changed its name:
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 TOBY                           MOUNTED
         4 SAND                           MOUNTED
SQL>
The PDB name change is propagated transparently with the redo apply.
—
Ludo
rhpctl addnode gihome: specify HUB or LEAF when adding new nodes to a Flex Cluster
I have a customer trying to add a new node to a cluster using Fleet Patching and Provisioning.
The error in the command output is not very friendly:
[grid@fpps ~]$ rhpctl addnode gihome -workingcopy WC_gi19110_FPPC3 \
  -newnodes fppc3:fppc3-vip -cred fppc-cred
fpps: Audit ID: 269
PRCT-1003 : failed to run "rhphelper" on node "fppc2"
PRCT-1014 : Internal error: RHPHELP_preNodeAddVal-05null
The “RHPHELP_preNodeAddVal” might already give an idea of the cause: something related to the “cluvfy stage -pre nodeadd” evaluation that we normally do when adding a node by hand. FPP does not really run cluvfy, but it calls the same primitives cluvfy is based on.
In FPP, when the error does not give any useful information, this is the flow to follow:
- use “rhpctl query audit” to get the date and time of the failing operation
- open the “rhpserver.log.0” and look for the operation log in that time frame
- get the UID of the operation; e.g., in the following line it is “-1556344143”:
[UID:-1556344143] [RMI TCP Connection(153)-192.168.1.151] [ 2021-07-27 00:25:20.741 KST ] [ServerCommon.processParameters:485] before parsing: params = {-methodName=addnodesWorkingCopy, -userName=grid, -version=19.0.0.0.0, -auditId=-1556344143, -auditCli=rhpctl addnode gihome -workingcopy WC_gi19110_FPPC3 -newnodes fppc3:fppc3-vip -cred cred_fppc, -plsnrPort=31605, -noun=gihome, -isSingleNodeProv=FALSE, -nls_lang=AMERICAN_AMERICA.AL32UTF8, -clusterName=fpps-cluster, -plsnrHost=fpps, -SA11204ClusterName=null, -lang=en_US, -clientNode=fpps, -verb=addnode, -ghopuid=-1556344143}
- Isolate the log for the operation:
grep $UID rhpserver.log.0 > $UID.log - Locate the trace file of the rhphelper remote execution:
[UID:-1556344143] [RMI TCP Connection(153)-192.168.1.151] [ 2021-07-27 00:26:07.031 KST ] [RHPHELPERUtil.getTraceEnvs:4386] TraceFileLocEnv is :RHPHELPER_TRACEFILE=/u01/app/grid/crsdata/fppc2/rhp/rhphelp_20210727002603.trc
- Find the root cause in the rhphelper trace:
[main] [ 2021-07-27 00:27:02.600 KST ] [reflect.GeneratedMethodAccessor1.invoke:-1] PRVG-11406 : API with node roles argument must be called for Flex Cluster
In this case, the target cluster is a Flex Cluster, so the command must be run specifying the node_role.
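The log-isolation steps above can be sketched as a small shell script. This is only an illustration built on a synthetic stand-in for rhpserver.log.0 (so it is self-contained); on a real FPP server you would point it at the actual log file, typically under the Grid Infrastructure base:

```shell
# Sketch only: automate the "isolate by UID" flow described above.
# The log content here is synthetic sample data, not a real FPP log.
cat > rhpserver.log.0 <<'EOF'
[UID:-1556344143] [ 2021-07-27 00:25:20.741 KST ] before parsing: params = {-verb=addnode}
[UID:-1556344143] [ 2021-07-27 00:26:07.031 KST ] TraceFileLocEnv is :RHPHELPER_TRACEFILE=/u01/app/grid/crsdata/fppc2/rhp/rhphelp_20210727002603.trc
[UID:42] [ 2021-07-27 01:00:00.000 KST ] unrelated operation
EOF

UID_TAG='-1556344143'

# step 1: isolate every line belonging to this operation
grep "UID:${UID_TAG}" rhpserver.log.0 > "op_${UID_TAG}.log"

# step 2: pull the rhphelper trace file location out of the isolated log
TRACE=$(grep -o 'RHPHELPER_TRACEFILE=[^ ]*' "op_${UID_TAG}.log" | head -1 | cut -d= -f2)
echo "rhphelper trace: $TRACE"
```

The grep-by-UID approach works for any FPP operation, since every log line of an operation is tagged with its UID.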
The documentation is not clear (we will fix it soon):
rhpctl addnode gihome {-workingcopy workingcopy_name | -client cluster_name}
  -newnodes node_name:node_vip[:node_role][,node_name:node_vip[:node_role]...]
node_role must be specified for Flex Clusters, and it must be either HUB or LEAF.
After using the correct command line, the command succeeded.
rhpctl addnode gihome -workingcopy WC_gi19110_FPPC3 \
  -newnodes fppc3:fppc3-vip:HUB -cred fppc-cred
HTH
—
Ludovico
Changing FPP temporary directory (/tmp in noexec and other issues)
When using FPP, you might experience the following error (PRVF-7546):
$ rhpctl add workingcopy -workingcopy WC_db_19_11_FPPC -image db_19_11 \
  -path /u01/app/oracle/product/WC_db_19_11_FPPC -client fppc -oraclebase /u01/app/oracle
fpps01: Audit ID: 121
PRGO-1260 : Cluster Verification checks for database home provisioning failed for the specified working copy WC_db_19_11_FPPC.
PRCR-1178 : Execution of command failed on one or more nodes
PRVF-7546 : The work directory "/tmp/CVU_19.0.0.0.0_oracle/" cannot be used on node "fppc02"
This is often related to the filesystem /tmp that has the “noexec” option:
$ mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
Although it is tempting to just remount the filesystem with “exec”, you might be in this situation because your systems are configured to adhere to the STIG recommendations:
The noexec option must be added to the /tmp partition (https://www.stigviewer.com/stig/red_hat_enterprise_linux_6/2016-12-16/finding/V-57569)
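If you want to check programmatically whether a mount point carries the noexec option, a small sketch like this can parse /proc/mounts-style input. The mount table below is a synthetic excerpt so the example is self-contained:

```shell
# Sketch only: detect the noexec option on a given mount point.
has_noexec() {
  # $1 = mount point; the mount table is read from stdin
  awk -v mp="$1" '$2 == mp {
    n = split($4, opts, ",")
    for (i = 1; i <= n; i++) if (opts[i] == "noexec") { print "noexec"; exit }
  }'
}

# synthetic /proc/mounts excerpt
MOUNTS='tmpfs /tmp tmpfs rw,nosuid,nodev,noexec 0 0
/dev/sda1 /u01 ext4 rw,relatime 0 0'

echo "$MOUNTS" | has_noexec /tmp    # -> noexec
echo "$MOUNTS" | has_noexec /u01    # -> (nothing)
```

On a live system, feed it the real table instead: has_noexec /tmp < /proc/mounts.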
FPP 19.9 contains fix 30885598 that allows specifying the temporary location for FPP operations:
$ srvctl modify rhpserver -tmploc <new_tmp>
After that, the operation should run smoothly:
fppc02: Successfully executed clone operation.
fppc02: Executing root script on nodes ltora401,ltora402.
fppc02: Successfully executed root script on nodes fppc01,fppc02.
fppc02: Working copy creation completed.
fppc02: Oracle home provisioned.
fpps01: Client-side action completed.
HTH
—
Ludo
Why do PMs ask you to open Service Requests for almost EVERYTHING?
If you attend Oracle-related events, or if you are active on Twitter or other social media used by technologists, you might know many of us Product Managers directly. If that is the case, you know that we are generally very easy to reach and always happy to help.
When you contact us directly, however, sometimes we answer “Please open an SR for that“. Somewhat irritating, huh? “We had chats and drinks together at conferences, and now this bureaucracy?” This is understandable. Who likes opening SRs, after all? Isn’t it just easier to forward that e-mail internally and get the answer first hand?
This is something that happened to me as well in the past, when I was not yet working for Oracle, and it still happens now (with the answer coming from me, as a PM).
Why? The first answer is “it depends on the question“. If it is anything that we can answer directly, we will probably do it.
It might be a question about a specific feature: “Does product X support Y?”, “Can you add this feature to your product?”, or a known problem for which the PM already knows the bug (in that case it is just a matter of looking up the bug number), or anything that is relatively easy to answer: “What are the best practices for X?”, “Do you have a paper explaining that?”, “Does this bug have a fix already?”
But there is a plethora of questions for which we need more information.
“I try this, but it does not work“. “I get this error and I think it is a bug“. “I have THIS performance problem“.
This is when I’d personally ask you to open an SR most of the time (unless I have a quick answer to give). And there are a few reasons:
Data protection
Oracle takes data protection very seriously. Oracle employees are trained to deal with potentially sensitive data and cannot forward customer information via e-mail. That could be exposed or forwarded to the wrong recipients by mistake, etc. We don’t ask for TFA collections or logs via e-mail (even if sometimes customers send them to us anyway…).
There are special privileges required to access customer SRs; they are the only secure way we provide to transfer logs and protected information. The files uploaded into the SRs must be accessed through a specific application. All the checkouts and downloads are tracked. When we need to forward customer information internally, we just specify the SR number and let our colleagues access the information themselves. Sometimes we use SRs just as a placeholder to exchange data with customers, without having a support engineer working on them.
This is the single most important point, and it somehow makes the other points irrelevant. But the remaining ones are still good points.
Important pieces in the discussion do not get lost
The answer does not always come first hand… it might take 3-4 hops (sometimes more) plus analysis, comments, explanations, and discussions.
E-mail is not a good tool for this. Long threads can split and include just part of the audience (the “don’t reply to all” effect). Attachments are deleted when replying instead of forwarding… and pieces get lost.
This is where you would use a Jira, or a trouble ticketing system. Guess which is the one that Oracle uses for its customers? 🙂
MOS has internal views to dig into TFA logs (that’s why it is a good idea to provide one whenever it might be relevant), and all the attachments, comments, and internal discussions are centralized there. But we need an SR to add the information to!
Win-win: knowledge base, feedback, continuous improvement
If you discover something new from a technical discussion, what do you do? Do you share it, or do you keep it for yourself? MOS is part of our knowledge base, and it is a good idea to store important discussions in it. Support engineers can find solutions in SRs with similar cases. It is also a good opportunity for the support engineers themselves to be involved in one more interesting discussion, so next time they might have the answer at their fingertips.
To conclude, think about it as a win-win. You give us interesting problems that might help improve the product, and you get a Guardian Angel on your SR for free 😉
—
Ludo
Oracle Fleet Patching and Provisioning (FPP): My new role as PM and a brand new series of blog posts
It’s been 6 years since I tried FPP for the first time (formerly Rapid Home Provisioning, or RHP).
FPP was still young and lacked many features at that time, but it already changed the way I worked in the following years. I embraced out-of-place patching, developed some basic scripts to install Oracle Homes, and sought automation and standardization at all costs.
When 18c came with the FPP local-mode automaton, I implemented it for the Grid Infrastructure patching strategy at CERN. And I discovered that, in the meantime, FPP had taken giant steps, with many new features and fixes for quite a few usability and performance problems.
Last year, when joining the Oracle Database High Availability (HA), Scalability, and Maximum Availability Architecture (MAA) Product Management Team at Oracle, I took (among others) the Product Manager role for FPP.
Becoming an Oracle employee after 20 years of working with Oracle technology is a big leap. It allows me to understand how big the company is, and how collaborative and friendly Oracle employees are. (Yes, I was used to marketing nonsense, insistent salesmen, and unfriendly license auditors. This is slowly changing with Oracle embracing the Cloud, but it is still a fresh wound for many customers. Expect this to change even more! As for me… I’ll be the same I’ve always been 🙂 )
Now I have daily meetings with big customers (bigger than the ones I have ever had in the past), development teams, other product managers, Oracle consultants, and community experts. My primary goal is to make the product better, increase its adoption, and help customers have the best experience with it. This includes testing the product myself, writing specs, presentations, and videos, collecting feedback from customers, tracking bugs, and managing escalations.
I am a Product Manager for other products as well, but I have to admit that FPP is the product that takes most of my Product Manager time. Why?
I will give a few reasons in my next blog post(s).
—
Ludo