About Ludovico

Ludovico is an Oracle ACE Director, frequent speaker and community contributor, working as Senior Database Specialist for Trivadis, Switzerland.

Italian Oracle User Group: from nothing to something

I am pretty sure that every user group has had its own difficulties when starting. Since we started the ITOUG just a couple of years ago, I think it is worth telling the story 🙂

Back in 2013, I was told about a team of Italian DBAs willing to create a new user group. I decided to join the team because I was starting my journey in the Oracle Community.

We came from different cities (Lausanne, Vicenza, Milano, Roma)… the only solution was to meet online, through Google Hangouts.

The first meetings were incredibly boring and inconclusive (sorry ITOUG guys if you read this :-)):

You cannot start something if you do not know what you want to achieve

Of course, everybody agreed that it would have been nice to become the new UKOUG of southern Europe. But realistically, with no budget, no spare time, nothing at all, we needed a starting point. We thought that the starting point was a website. But even something simple like a basic website raises a lot of additional questions:

  • Should it allow publishing content?
  • Should it have a bulletin board?
  • What about user subscription?
  • What should be the content? Articles, webinars?

We created something with the idea of publishing our content, but after a while it was mostly an empty container.

It took me a while to learn a first big lesson:

Democracy does not work for small User Groups

The original founder of the group decided to quit the ITOUG because nothing concrete was happening. Everybody was putting ideas on the table, but nothing was really getting done.

When my friend Björn Rost suggested that I put Milano forward as a candidate for the OTN Tour, I jumped on the train: it was the best way to start something concrete. I somehow “forced” the OTN Tour candidature to happen, telling my peers: “I will do it; if you support me, thanks; otherwise I will do it alone”.

And the response was great!

Do not wait for something to happen. If you want it, take the lead and make it.

This is the biggest lesson I have learned from my involvement with user groups. People sometimes cannot contribute because they do not know what to do or how to do it, or simply do not have the time, because there are a gazillion things more important than user groups: family, work, health… even hobbies sometimes are more important 🙂

If you have an idea, it’s up to YOU to transform it into something concrete.

 

After the acceptance, we had to prepare the event. We proposed a few dates and Björn prepared the calendar taking into account the other user groups.

Organizing an event is not easy but not so complex either

  • Set a reasonable target number of participants
  • Fix a date when most contributors are available
  • Get offers from different venues (pricing for the venue and the catering)
  • Get an idea of the budget
  • Ask different companies to sponsor the event
  • Possibly ask Oracle too 🙂
  • Once the sponsors have committed to pay, book the venue
  • Prepare and publish the Call for Papers
  • Possibly start some advertising
  • Select the papers
  • Prepare the agenda
  • Ask the speakers for confirmation about the proposed date/time
  • Prepare the event registration form
  • Publish the agenda
  • Broadcast it everywhere 🙂 (Social media, contacts, website, Oracle through their channels)
  • Interact with the hotel and the sponsor to have the proper setup, billing addresses, invoices, etc.
  • Host the event
  • Relax

Finding the sponsors is the most difficult part

For the first two events it was easy to find a sponsor (just a database stream, event held in Milano), but it was not the same for the last one.

Our aim was to do a double event (Milano + Roma) with two streams in each location (DB + BI & Analytics). In Roma we were unable to find a sponsor (if you read this AND your company may be interested in sponsoring such an event in Roma, please contact me :-)), so we decided to continue with the event in Milano only.

Finding the speakers is easier than you can imagine

Unless you want non-English sessions held by native speakers, there is a huge community of speakers willing to share their knowledge. For Oracle, the most obvious source of speakers is the ACE Program, and Twitter is probably the best channel for finding them.

We have now organized an event three times, and every time we have been surprised by the good attendance and feedback.

A few images from the last event


Which Oracle Databases use most CPU on my server?

Assumptions

  • You have many (hundreds of) instances and more than a couple of servers
  • One of your servers has a high CPU load
  • You have Enterprise Manager 12c, but the Database Load page does not filter by server
  • You want a historical representation of the user CPU utilization, per instance

Getting the data from the EM Repository

With the following query, connected to the SYSMAN schema of your EM repository, you can get the hourly max() and/or avg() of user CPU by instance and time.
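Something along these lines should do (a sketch: the metric_name/metric_column pair used here for the per-instance CPU, 'instance_throughput'/'cpuusage_ps', is an assumption and may differ in your EM version):

```sql
-- Sketch: hourly MAX/AVG of per-instance CPU usage from the EM repository.
-- MGMT$METRIC_HOURLY is the standard hourly rollup view in the SYSMAN schema;
-- verify the metric_name/metric_column for "CPU Usage (per second)" in your EM version.
SELECT target_name,
       rollup_timestamp,
       MAX(maximum) AS max_cpu_usage,
       AVG(average) AS avg_cpu_usage
  FROM sysman.mgmt$metric_hourly
 WHERE target_type      = 'oracle_database'
   AND metric_name      = 'instance_throughput'   -- assumption
   AND metric_column    = 'cpuusage_ps'           -- assumption
   AND rollup_timestamp >= SYSDATE - 30
 GROUP BY target_name, rollup_timestamp
 ORDER BY rollup_timestamp, target_name;
```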

Suppose you select just the max value: the result will be similar to this:

 

Putting it into Excel

There are a million ways to do something more reusable than Excel (rrdtool scripts, gnuplot, R, you name it), but Excel is just right for most people out there (including me when I feel lazy).

  • Configure an Oracle Client and add the ODBC data source to the EM repository:


  • Open Excel, go to “Data” – “Connections” and add a new connection:
    • Search…
    • New Source
    • DSN ODBC
  • Select your new ODBC data source, user, password
  • Uncheck “Connection to a specific table”
  • Give a name and click Finish
  • On the DSN -> Properties -> Definition, enter the SQL text provided above


The result should be something similar (but much longer :-))

Pivoting the results

Create a new sheet and name it “pivot”, click on “Create Pivot Table”, and select your data and your dimensions:

The result:

Creating the Graph

Now that the data is correctly formatted, it’s easy to add a graph:

Just select the entire pivot table and create a new stacked area graph.

The result will be similar to this:


With such a graph, it is easy to spot which databases consumed the most CPU on the system over a defined period, and to track progress if you start a “performance campaign”.

For example, you can see that the “green” and “red” databases were constantly consuming some CPU up to 17.05.2017, and then some magic solved the CPU problem for those instances.

It is also quite convenient for checking the results of new instance caging settings…

The resulting CPU will not necessarily add up to 100%: the SYS CPU time is not included, nor is the user CPU of all the other processes that either are not databases or are not monitored with Enterprise Manager.

HTH

Ludovico

Another problem with “KSV master wait” and “ASM file metadata operation”

Today my customer tried to do a duplicate on a cluster. When preparing the auxiliary instance, she noticed that the startup nomount was hanging forever: nothing in the alert log, nothing in the trace files.

Because the database and the spfile were stored inside ASM, I was quite suspicious…

The ASM trace files had the following entries:

The ASM instance had the following sessions waiting:
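A query along these lines (a sketch), run against the ASM instance, lists them:

```sql
-- Sketch: sessions waiting on the suspicious events in the ASM instance
SELECT inst_id, sid, serial#, program, event, seconds_in_wait
  FROM gv$session
 WHERE event IN ('KSV master wait', 'ASM file metadata operation')
 ORDER BY seconds_in_wait DESC;
```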

OMS?

Around 12:38:56, another colleague in the office added a disk to one of the disk groups, through Enterprise Manager 12c!

But there were no rebalance operations:
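A quick look at the ASM operation view returned nothing, something like:

```sql
-- No rows returned: no rebalance in progress
SELECT * FROM gv$asm_operation;
```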

It’s not the first time that I have hit this type of problem. Sadly, it sometimes requires a full restart of the cluster or of ASM (because of various bugs).

This time, however, I have tried to kill only the foreground sessions waiting on “ASM file metadata operation”, starting with the one coming from the OMS.
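The kill itself is a plain ALTER SYSTEM KILL SESSION on the ASM instance (the sid/serial# below are placeholders):

```sql
-- Placeholders: use the SID and SERIAL# of the waiting foreground session
ALTER SYSTEM KILL SESSION '1234,56789' IMMEDIATE;
```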

Surprisingly, after killing that session, everything was fine again:

I never add disks via the OMS (I’m a sqlplus guy ;-)), so I wonder what went wrong with it 🙂

Ludovico

RMAN Catalog Housekeeping: how to purge the old incarnations

First, let me apologize because every post in my blog starts with a disclaimer… but sometimes it is really necessary. 😉

Disclaimer: this blog post contains PL/SQL code that deletes incarnations from your RMAN recovery catalog. Please DON’T use it unless you deeply understand what you are doing, as it can compromise your backup and recovery strategy.

Small introduction

You may have a central RMAN catalog that stores all the backup metadata for your databases. If that is the case, you will have a database entry for each of your databases and a new incarnation entry for each duplicate, incomplete recovery or flashback (or whatever).

You should also have a delete strategy that removes the obsolete backups from either your DISK or SBT_TAPE media. If you have old incarnations, however, after some time you will notice that their information never goes away from the catalog, and sooner or later you may end up having to do some housekeeping. But there is nothing more tedious than checking and deleting the incarnations one by one, especially if you have fairly big numbers like this catalog:
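You can get an idea of the size of your own catalog with a quick count as the catalog owner, e.g.:

```sql
SELECT (SELECT COUNT(*) FROM db)    AS databases,
       (SELECT COUNT(*) FROM dbinc) AS incarnations,
       (SELECT COUNT(*) FROM bdf)   AS datafile_backups,
       (SELECT COUNT(*) FROM brl)   AS archivelog_backups
  FROM dual;
```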

Here db, dbinc, bdf and brl contain, respectively, the registered databases, the incarnations, the datafile backups and the archivelog backups.

Different incarnations?

Consider the following query:
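A sketch that gives this kind of hierarchical listing, using the documented RC_DATABASE_INCARNATION view rather than the base tables:

```sql
-- Sketch: incarnations connected hierarchically to their parent incarnation
SELECT db_name,
       db_key,
       dbid,
       dbinc_key,
       current_incarnation,
       LEVEL AS inc_level
  FROM rc_database_incarnation
 START WITH parent_dbinc_key IS NULL
CONNECT BY PRIOR dbinc_key = parent_dbinc_key
 ORDER SIBLINGS BY db_name;
```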

You can run it safely: it returns the list of incarnations hierarchically connected to their parent, by database name, key and level.

Then there are several types of behavior:

  • Normal databases (created once, never restored or flashed back) will have just one or two incarnations (it depends on how they are created):

These are usually the ones that you want to keep in your catalog, unless the database no longer exists: in that case, perhaps you forgot to delete it from the catalog when you dropped the database?

  • Flashed-back databases (flashed back multiple times) will have as many incarnations as the number of flashbacks, all connected to the incarnation prior to the flashback:

Here, although there are several incarnations, they all belong to the same database (same DB_KEY and DBID), so you must also keep it inside the recovery catalog.

  • Non-production databases that are frequently refreshed from the production database (via duplicate) will have several incarnations with different DBIDs and DB_KEY:

This is the most frequent case: here you want to delete the old incarnations, but only if there are no backups attached to them that are still within the recovery window.

  • You may also have orphaned incarnations:

In this case, again, it depends on whether the DBID and DB_KEY are the same as those of the current incarnation or not.

What do you need to delete?

Basically:

  • Incarnations of databases that no longer exist
  • Incarnations of existing databases where the database has a more recent current incarnation, only if there are no backups still in the retention window

How to do it?

In order to be 100% sure that you can delete an incarnation, you have to verify that there are no recent backups (for instance, no backups more recent than the current recovery window for that database). If the database does not have a specified recovery window but rather the default “CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default”, it is a bit more problematic… in that case, let’s assume that we consider “old” an incarnation that has had no backups for one year (365 days), OK?

Getting the last backup of each database

Sadly, there is not a single table where you can verify that. You have to collect the information from several tables. I think bdf, al, cdf, bs would suffice in most cases.

When you delete an incarnation you specify a db_key: you have to get the last backup for each db_key, with queries like this:
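For example, per backup type and against the documented catalog views (a sketch of the idea rather than the exact statement):

```sql
-- Sketch: latest datafile backup per database key
SELECT db_key, MAX(completion_time) AS last_bdf_backup
  FROM rc_backup_datafile
 GROUP BY db_key;
```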

Putting together all the tables:
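A hedged sketch of the combined query, using the RC_ views as stand-ins for bdf, al, cdf and bs:

```sql
-- Sketch: latest backup of any kind per database key
SELECT db_key, MAX(last_backup) AS last_backup
  FROM (SELECT db_key, completion_time AS last_backup FROM rc_backup_datafile
        UNION ALL
        SELECT db_key, completion_time FROM rc_backup_redolog
        UNION ALL
        SELECT db_key, completion_time FROM rc_datafile_copy
        UNION ALL
        SELECT db_key, completion_time FROM rc_backup_set)
 GROUP BY db_key;
```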

Getting the recovery window

The configuration information for each database is stored in the conf table, but the retention information is stored as a VARCHAR2, either ‘TO RECOVERY WINDOW OF % DAYS’ or ‘TO REDUNDANCY %’.

You need to convert it to a number when the retention policy is a recovery window; otherwise, default it to 365 days when redundancy is used. You can add a column and a join to the query:
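Something like this converts the retention string into a number of days (a sketch on the documented RC_RMAN_CONFIGURATION view; the conf base table has an equivalent layout):

```sql
-- Sketch: retention policy converted to days per database key
SELECT db_key,
       CASE
         WHEN value LIKE 'TO RECOVERY WINDOW OF % DAYS'
           THEN TO_NUMBER(REGEXP_SUBSTR(value, '\d+'))
         ELSE 365   -- assumption: treat REDUNDANCY-based policies as 365 days
       END AS retention_days
  FROM rc_rman_configuration
 WHERE name = 'RETENTION POLICY';
```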

and finally, either display whether the incarnation is still in use, or filter by usage:

Delete the incarnations!

You can delete the incarnations with this procedure:

This procedure will raise an exception (-20001, ‘Database not found’) when a database does not exist anymore (either already deleted by this procedure or by another session), so you need to handle it.

Putting it all together:
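A minimal sketch of the idea, assuming the actual deletion is done with DBMS_RCVCAT.UNREGISTERDATABASE (verify the procedure name and argument order in your catalog version) and that the ORA-20001 exception is simply ignored:

```sql
DECLARE
  db_not_found EXCEPTION;
  PRAGMA EXCEPTION_INIT(db_not_found, -20001);
BEGIN
  -- Placeholder cursor: replace with YOUR selection of obsolete db_key/db_id pairs
  FOR r IN (SELECT db_key, db_id FROM db WHERE 1 = 0) LOOP
    BEGIN
      -- Assumption: unregister by (db_key, db_id); check the signature in your catalog
      dbms_rcvcat.unregisterdatabase(r.db_key, r.db_id);
    EXCEPTION
      WHEN db_not_found THEN
        NULL;  -- already deleted by this procedure or by another session
    END;
  END LOOP;
END;
/
```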

I have used this procedure today for the first time and it worked like a charm.

However, if you have any adjustments or suggestions, don’t hesitate to comment 🙂

HTH

Souvenirs from 2016

2016 is ending, at least from the Oracle Community point of view. It has been tiring and exciting at the same time, so I would like to put some good memories together.

This post is mostly for me, sorry 🙂

February: Another nice Tech Event

The Trivadis Tech Event is a great conference, sadly not open to everyone, but still a great one… I gave two (or three?) talks there.


March: a good beer in good company

Near CERN, in Geneva, with a few good friends and big technologists 🙂

March again: That ACE Director tweet

May: The DOAG Datenbank 2016

One talk there… the first of many about “upgrading 300 databases in 300 days”. It was my first time speaking in Germany. 🙂

May again: The Italian leg of the OTN EMEA Tour

The OTN tour was a great starter for the activities of the Italian Oracle User Group (of which I am one of the founders). It was great to discover that the interest in Oracle Database in Italy is still high (we got almost 60 people: that is huge for a first event, IMO).

We had Mark Rittman (before he became famous :-D), Christian Antognini, Frits Hoogland and Mike Dietrich!

September: the ACED briefing, Oracle Open World and three spare days at Yosemite

It was my first time at the ACED briefing (key word: #cloud 😉) and also my first at Oracle HQ. It’s like going to Disney World, but the attractions are a little scarier 😀

Yosemite was also incredible. In a single day of trekking, I scored 42k steps, 31 km and +1200 vertical meters…

October: the great OTN Nordic Tour

That was fun, but incredibly tiring. 4 days in a row, 4 countries, 4 flights, 4 different currencies, 4 ACE Directors and now 4 friends 🙂

I did not know Joel and Martin very well, and I did not know John at all. They are great people and I really enjoyed the time spent with them (and the beers :-D).

Copenhagen

Oslo

Helsinki

Stockholm

Stockholm was the last leg; I did it with John only. I spent the rest of the weekend there (the event was on Friday). I love Stockholm so much! Perhaps my favorite city (for sure in the top 5). I also got a good whisky as a speaker gift 😀

November: the second Italian Oracle User Group Event

We had 60 people again. In November I was also a speaker, along with Christian Antognini, Mauro Pagano, Francesco Renne and Francesco Tisiot.

November again: the DOAG

Definitely the best conference in Europe 🙂 It was my second time there and my first as a speaker.

November again: the Swiss Data Forum

It was a great single-day event in Lausanne, not database-centric but DATA-centric: Data, IoT, Big Data, Data Science, Deep Learning… I gave one talk there.


December: the UKOUG Tech 16

Two final talks at UKOUG in Birmingham. It was fun again, but on the last day I fell sick 🙁 (and somehow I am still recovering).

Plans for 2017

I got accepted for IOUG Collaborate, but because of my many duties and all the recent travel, I have not confirmed my sessions (ouch, it is the first time I have done this; next time I will submit more carefully), so Open World will likely be my only US trip next year.

I look forward to submitting to DOAG events again, speaking at SOUG (already planned: 18th and 19th of May), and organizing at least two more events for the Italian Oracle User Group.

Happy New Year! 🙂

Trivadis sessions at UKOUG Tech16

UKOUG Tech16 will start in less than a week. Trivadis will be there with many speakers and 10 sessions in total 🙂
If you are a delegate, come along and have a chat with us!

Super Sunday

Monday 05/12

Tuesday 06/12

Wednesday 07/12

See you there 🙂

DBMS_QOPATCH, datapatch, rollback, apply force

I am working for a customer on quite a big implementation of Cold Failover Cluster with Oracle Grid Infrastructure on Linux. I hope to have some material to publish about it soon! In this post, however, I will be talking about patching the database in a cold-failover environment.

DISCLAIMER: I make massive use of the scripts provided in this great blog post by Simon Pane:

https://www.pythian.com/blog/oracle-database-12c-patching-dbms_qopatch-opatch_xml_inv-and-datapatch/

Thank you Simon for sharing this 🙂

Intro

We are not yet in the process of doing out-of-place patching; at the moment the customer prefers to do in-place patching:

  • evacuate a node by relocating all the databases to other nodes
  • patch the node binaries
  • move the databases back and patch them with datapatch
  • do the same for the remaining nodes

I beg to differ with this method, being a fan of having many patched golden copies distributed on all servers and patching the databases by just changing the ORACLE_HOME and running datapatch (like Rapid Home Provisioning does). But this is the situation today, and we have to live with it.

Initial situation

  • Server 1, 2 and 3: one-off 20139391 applied
  • New database created

When the DBCA creates a new database in 12.1.0.2, it does not run datapatch by default; thus, the database does not have any patches installed.

However, this specific one-off patch does not modify anything in the database (sql_patch=false)

and datapatch runs without touching the DB:

Next step: I evacuate server 2 and patch it, then I relocate my database onto it.


Now the database is not at the same level as the binaries and needs to be patched:

The column CONSTITUENT is important here because it tells us what the parent patch_id is. This is the column that we have to check when we want to know whether the patch has been applied to the database.

Now the patch is visible inside the dba_registry_sqlpatch:
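The check is a plain query on the view, something like:

```sql
-- Sketch: patches recorded by datapatch in the database
SELECT patch_id, action, status, action_time, description
  FROM dba_registry_sqlpatch
 ORDER BY action_time;
```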

Notice that the child patches are not listed in this view.

Rolling back

Now, one node is patched, but the others are not. What happens if I relocate the patched database to a non-patched node?


The patch is applied inside the database but not in the binaries!

If I run datapatch again, the patch is rolled back:

The patch has been rolled back according to datapatch, and the action is shown in dba_registry_sqlpatch:

But if I look at the logfile, the patch had some errors:

Indeed, the patch still appears to be there:

If I try to run it again, it does nothing, or it fails saying the patch is not there:

What does it say on the patched node?

Whaaat? datapatch there says that the patch IS in the registry and there is nothing to do. Let’s try to force the apply again:

Conclusion

I’m not sure whether it is safe to run the patched database in a non-patched Oracle Home. I guess it is time for a new SR 🙂

Meanwhile, we will try hard not to relocate the databases once they have been patched.

Cheers

Ludo

Getting the Oracle Homes in a server from the oraInventory

The information contained in the oratab should always be up to date, but it is not always reliable. If you want to know what Oracle installations you have on a server, it is better to get them from the Oracle Universal Installer or, if you want a shortcut, do some grep magic inside the inventory with the shell.

The following diagram is a simplified structure of the inventory that shows what entries are present in the central inventory (one per server) and the local inventories (one per Oracle Home).

You can use a simple shell function to get some content out of it, including the edition (that information is one step deeper, in the local inventory).
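A rough equivalent (a sketch, assuming the usual /etc/oraInst.loc location and the oracle_install_db_InstallEdition variable in the local inventory; adjust the paths for your platform):

```bash
#!/bin/bash
# Sketch: list the Oracle Homes registered in the central inventory and,
# for each of them, grep the edition from its local inventory.
get_oracle_homes () {
  # The central inventory location is recorded in oraInst.loc
  local central_inv
  central_inv=$(grep '^inventory_loc=' /etc/oraInst.loc 2>/dev/null | cut -d= -f2)
  [ -z "$central_inv" ] && { echo "central inventory not found" >&2; return 1; }

  # Each HOME element in ContentsXML/inventory.xml carries NAME and LOC attributes
  grep -o 'LOC="[^"]*"' "$central_inv/ContentsXML/inventory.xml" \
    | cut -d'"' -f2 \
    | while read -r oh; do
        # Assumption: the edition is recorded in the local inventory as
        # oracle_install_db_InstallEdition (EE, SE2, ...)
        edition=$(grep -rh 'oracle_install_db_InstallEdition' "$oh/inventory" 2>/dev/null \
                    | grep -o 'VALUE="[^"]*"' | head -1 | cut -d'"' -f2)
        echo "$oh ${edition:-edition-not-found}"
      done
}

get_oracle_homes
```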

HTH

Loading resolved Adaptive Plans in the SQL Plan Management

In my previous post, I showed that loading Adaptive Plans into a SQL Plan Baseline leads to using the original plan. Well, actually, this is true when you capture them via the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter.

Thanks to a tweet by Neil Chandler, I realized that it would be a good idea to also show the case where the plan is loaded manually.

When the adaptive plan switches to the alternative plan, the plan_hash_value also changes, and the plan can be loaded manually into the baseline with DBMS_SPM.
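The manual load is a standard DBMS_SPM call; the sql_id and plan_hash_value below are placeholders for the resolved (alternative) plan as seen in the cursor cache:

```sql
-- Sketch: load the resolved plan from the cursor cache into the baseline
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => 'abcd1234efgh5',   -- placeholder sql_id
               plan_hash_value => 1234567890);       -- placeholder: the alternative plan's hash value
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/
```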

Let’s reset everything and retry quickly to:

  • Capture the plan automatically (this will lead to the original plan)
  • Load the plan manually (I will specify to load the alternative plan, if resolved)
  • Drop the plan captured automatically
  • Use the newly accepted baseline

To recap:

  • The capture process will always load the original plan
  • You can decide to manually load either the original plan or the alternative one (if resolved)
  • Using automatic capture is a bad idea

HTH

Ludo

How Adaptive Plans work with SQL Plan Baselines?

Disclaimer: after writing this post (but before publishing it) I saw that other people had already blogged about it, so I am a bit ashamed to publish it anyway… but that’s a blogger’s life 🙂

On Wednesday I got a nice question after my presentation about Adaptive Features at the DOAG16 conference:

What happens when you load an adaptive plan in a SQL Plan Baseline?
Does it load only the final plan or does it load the whole plan including the inactive operations? Will the plan be evaluated again using the inflection point?

I decided to do some tests in order to give the best possible answer. I did not spend time rethinking how to produce an adaptive plan: Tim Hall already has an excellent test case to create and alter an adaptive plan on his blog, so I have massively reused most of his code. Thanks Tim :-).

I will not post all the code (please find it in Tim’s post); I will go straight to the plans.

First: I have an adaptive plan that resolves to NESTED LOOPS:

Second: I load the plan (lazy way: using baseline capture at session level)
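The lazy capture is simply the session parameter switched on around the statement, something like:

```sql
-- Sketch: capture the plan automatically at session level
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
-- ... run the statement (at least twice, so it is repeatable and gets captured) ...
ALTER SESSION SET optimizer_capture_sql_plan_baselines = FALSE;
```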

Third: re-run the statement and check the plan

It does not look adaptive, but I can also check from the function DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE:
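The call looks like this (the sql_handle is a placeholder):

```sql
-- Sketch: display the plan stored in the baseline
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
               sql_handle => 'SQL_abcdef0123456789',  -- placeholder handle
               format     => 'TYPICAL'));
```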

Again, although the Note section says it is adaptive, it does not look like an adaptive plan.

Can I trust this information? Of course not, so I checked the plan with and without the baseline after changing the rows to force a plan switch to HASH JOIN (again using Tim’s example):

After changing the rows:

  • when I do not use the baseline, the plan resolves to HASH JOIN
  • when I use it, the baseline forces NESTED LOOPS.

So the plan in the baseline is not adaptive and it forces whatever has been loaded. Is it the final plan or the original one? I have to capture it again to see if a new baseline appears:
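Again, a quick look at the baseline view is enough, e.g.:

```sql
-- Sketch: check whether a new baseline has appeared for the statement
SELECT sql_handle, plan_name, origin, enabled, accepted, created
  FROM dba_sql_plan_baselines
 ORDER BY created;
```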

A new baseline does not appear, so it looks like the original plan is considered by the capture process, and not the resolved one! To be 100% sure, let’s drop the existing one and redo the test:

So, despite having an adaptive plan that switches from NL to HJ, only the NESTED LOOPS operations are captured in the baseline: I can infer that only the original plan is loaded as the SQL Plan Baseline.

References: