OOW14… I know I’m late… but it’s worth blogging about, at least I won’t forget!

One month after Open World, I still haven’t found the time to blog about it… The first reason is… well… I’ve been working on new stuff (a couple of new websites are coming), then I’ve had to update old slides with the great content I picked up at the conference (ZDLRA and other topics). Finally, I was also tired of working off-hours; I needed to spend quality time with my family after this summer’s rush (we moved to a new house while I worked on a gazillion different topics).

Open World 2014, a different personal feeling

Open World has once again been THE big conference that no Oracle professional can miss. But this time it felt very different to me compared to 2013.

It wasn’t my first time as an attendee, so it was less surprising. I attended as a speaker (which made me feel part of the conference, and not just a spectator). I also attended as an Oracle ACE (I’ll blog about this in another post, sooner or later). And I was already friends with many people I was eager to see again (I feel the same now, waiting for the next conference).

My sessions

MAA+MT session at OOW14

Surprisingly, from the very beginning (when Oracle accepted the sessions) I felt very relaxed about my presentations. One presentation after another, I’m becoming aware of my limits (especially my bad English! ;-)) but I’m also gaining experience. So I went to Open World without stress, even knowing that the first session, on Sunday, was already fully booked a couple of weeks before the conference began.

Oracle assigned my sessions the VERY FIRST and the VERY LAST slots of the conference, so I had fewer attendees than expected (around 170 at both sessions, out of 230 and 270 registered people respectively). At least I had the chance to open and close the conference by saying “Welcome to Open World” and “Thank you, see you next year!” :-)


I got very little feedback through the official OOW website, just 3 people for each session (with good, though statistically irrelevant, scores: 4.33 and 4.67 out of 5), but I was much more excited by the direct feedback from people who joined me for a chat right after the sessions, and by the feedback on Twitter. I have to say that the interactions on this social network made my week, again.



RAC Attack

This year we organized (or rather, Bobby Curtis and Laura Ramsey did) the RAC Attack at the OTN Lounge. It’s always a great meeting point for all the bloggers and Oracle ACEs, but due to the day (Sunday) and the location (the mezzanine of Moscone South), we got fewer people than expected. Anyway, it’s still my favorite project and workshop. We had T-shirts, bandannas, famous ninjas, and the most famous attendee ever. Who? Try to figure it out yourself:

Networking events

On Saturday I missed a great bike ride, and on Sunday I had to skip the “official” Golden Gate Run organized by Jeff Smith because it overlapped with my session. So the first big event was the ACE Dinner on Sunday, kindly sponsored by the Oracle ACE Program. A dream come true!




On Monday morning, despite very bad weather, I took part in the great Swim in the Bay organized by Oraclenerd. The air was very cold (15°C) and the water was even colder!




On Monday evening there was the Tech Fest at the Oracle Plaza (formerly Howard St.). I was there with friends and colleagues, and then we headed together to the Mikkeller Bar for a beer.


I had planned my Tuesday long ago: when I realized that Royal Blood and the Pixies were playing at the Masonic Center (Nob Hill), I bought a couple of tickets for me and my ex-colleague and friend Franck Pachot.


On Wednesday morning I organized a replay of the Golden Gate Run. Franck and Sten Vesterli joined me. Doing it early in the morning is beautiful!


Wednesday evening was the time of the great Blogger Meetup organized by Pythian and OTN. Nowhere else can you find the same concentration of Oracle bloggers! Sadly, this year the gadget was electronic (an app on the smartphone requiring network connectivity, so almost unusable and much less interactive, IMHO). I finally met my good tweep Kevin Closson, who wasn’t there in 2013.

After the meetup, a team of very good friends and I organized a dinner at Fisherman’s Wharf, far from the crowd heading to the traditional Oracle Appreciation Event. Try to recognize them in the picture below; I couldn’t ask for better company!! :-)

Thursday is always sad: everybody leaves, and after the conference the Oracle Plaza immediately feels so empty! In the afternoon we organized a meeting of the RAC SIG, and in the evening I managed to meet many friends one more time. Thank you guys :-)


My ride on Friday

I ended my week with a long bike ride. I had underestimated the size of San Francisco: after more than four hours, 50 km, and almost 800 m of cumulative ascent, I was completely worn out! But I got the chance to visit most of the attractions for the first time… City Hall, the Painted Ladies, Golden Gate Park, the beach on the ocean side, the Coastal Trail, the Legion of Honor, the Presidio, and Hawk Hill on the other side of the bridge.





To sum up

Listing all the friends I would like to thank for the OOW week is long… Heli, Ovind, Kyle, Martin, Yury, Björn, Franck, Tim, Kellyn, Rene, Leighton, Seth, Laura, Vikki, Jennifer, Mina, Marc, Nelson, Markus, Larry, Edelweiss, Emiliano, Roman, Daniele, Konrad, Chris, Christian, Sten, Hans, Rooq, Andrejs, Paul, Michelle, Kai, Ian, Michael, Alex, Vanessa, Mark, Osama, Bobby, Rick, Gurkan, Laurent, Kevin, Jason, Chet, Carlos, Mauro, Danny, Don, Jan, Vit, Biju, Stacey,  Mike, Henning, Jeff, Christophe, Dominique, Hervé…  I know I’m forgetting too many, you know who you are, thank you!


Oracle RAC, Oracle Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (OOW14)

Here you can find the material related to my session at Oracle Open World 2014. I’m sorry I’m late in publishing them, but I challenge you to find spare time during Oracle Open World! It’s the busiest week of the year! (Hard Work, Hard Play)


 Demo 1 video

Demo 2 video

Demo 1 script


Demo 2 script


There’s one slide describing the procedure for cloning a PDB using the STANDBYS clause. Oracle released a note on MOS while I was preparing my slides (a month ago) and I wasn’t aware of it, so you may also want to check it out:

Making Use of the STANDBYS=NONE Feature with Oracle Multitenant (Doc ID 1916648.1)
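For reference, a minimal sketch of what a clone with that clause could look like (the PDB names here are hypothetical; see the MOS note above for the exact procedure):

```sql
-- On the primary: clone without instantiating the datafiles on the standbys.
-- PDB_SRC and PDB_CLONE are invented names.
CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_src STANDBYS=NONE;

-- Later, on a standby, after restoring the datafiles there,
-- recovery can be re-enabled for the PDB:
ALTER SESSION SET CONTAINER = pdb_clone;
ALTER PLUGGABLE DATABASE ENABLE RECOVERY;
```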



Oracle Active Data Guard 12c: Far Sync Instance, Real-Time Cascade Standby, and Other Goodies

Here you can find the content related to my second presentation at Oracle Open World 2014.


Demo video1: Real-Time Cascade

Demo video2: Far Sync Instance

Demo 1 Script


Demo 2 script

For the demos I used five machines running three database instances and two Far Sync instances. I cannot provide the documentation for building the demo environment, but the scripts may be useful for understanding how the demos work.



My agenda at Oracle Open World 2014

It’s time to prepare my luggage for OOW; it will be my second time in San Francisco and my first as an ACE and speaker. I still need to figure out whether I’ll manage to get my badge on Sunday morning, because I won’t be in the city on Saturday before the registration desks close.

If you want to reach me during the conference, this is my “expected” plan:


I’m particularly looking forward to meeting my many community friends at the ACE dinner (my first as a non-infiltrated guest ;-)), the blogger meetup, and the crazy swim in the bay.

See you on Sunday! :-)

RAC Attack 12c in Switzerland, it’s a wrap!

Last Wednesday, September 17th, we held the first RAC Attack in Switzerland (as far as I know!). I have to say it was a complete success, like all the other RAC Attacks I’ve been involved in.


This time I’ve been particularly happy and proud because I organized it almost entirely alone. Trivadis, my employer, kindly sponsored everything: the venue (the new, cool Trivadis offices in Geneva), the T-shirts (I did the design, very similar to the one I designed for Collaborate 14), beer, and pizza!

For beer lovers, we had the good “Blanche des Neiges” from Belgium, and “La Helles” and “La Rossa” from the San Martino Brewery in Ticino (the Italian-speaking region of Switzerland). People appreciated them :-)


We had four top-class Ninjas and ten people actively installing Oracle RAC (plus a famous blogger who joined for networking); sadly, two people cancelled at the last minute. For the very first time, all the participants had downloaded the Oracle software in advance. When they registered, I reminded them twice that the software was necessary, because we could not provide it due to legal constraints.



People running the lab on Windows laptops reported problems with VirtualBox 4.3.16 (4.3.14 had been skipped altogether because of known problems), so everyone had to fall back to version 4.3.12 (the last stable release, IMO).

The best compliment I got was the presence of a senior DBA who came from Nanterre: 550 km (more than five hours by public transport, door to door) and an overnight stay, just for this event. Can you believe it? :-)

This makes me think seriously about how necessary it is to organize these kinds of events around the world.



Of course, we had a photo session with a lot of jumps ;-) We could not miss this RAC Attack tradition!

We wrapped up around 10:30 pm, after a little more than five hours. We enjoyed ourselves a lot and had a good time together, chatting about Oracle RAC and about our work in general.


Thank you again to all participants!! :-)



Upcoming presentations and workshops (Fall 2014)

My community involvement is entering its busiest season. It starts today with a webinar in Italian for the RAC SIG, followed by three conferences and four user group meetings, for a total of twelve sessions and workshops before the end of the year.

The updated list of upcoming events can be found here.

BTW, this is the list of events from today to December:

Date/Time and Event:

- 4:00 pm - 5:00 pm: RAC SIG webinar in Italian: Gestisci Oracle RAC più facilmente con i Policy-Managed Database
- 5:15 pm - 6:05 pm: Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (TechEvent 09.2014, Mövenpick Hotel, Regensdorf Zurich)
- 11:05 am - 11:55 am: Oracle Database Backup Logging Recovery Appliance: a quick preview (TechEvent 09.2014, Mövenpick Hotel, Regensdorf Zurich)
- 8:00 am - 8:45 am: Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (Oracle Open World 2014, San Francisco, California)
- 9:00 am - 2:00 pm: RAC Attack at OOW14 (Oracle Open World 2014, San Francisco, California)
- 2:30 pm - 3:15 pm: Oracle Active Data Guard 12c: Far Sync Instance, Real-Time Cascade Standby, and Other Goodies (Oracle Open World 2014, San Francisco, California)
- 9:00 am: Oracle Database Backup Logging Recovery Appliance: a quick preview (SOUG SIG 10.2014 Engineered Systems, Infrastruktur, Cloud; ABB Segelhof, Baden, Dättwil AG)
- 12:00 am: Costruire siti con Wordpress (Italian Linux Day 2014, Aosta)
- 9:00 am - 12:30 pm: Oracle Active Data Guard 12c: Far Sync Instance, Real-Time Cascade Standby, and Other Goodies (SOUG-R SIG 11.2014, Continental Hotel, Lausanne)
- 08/12/2014 - 10/12/2014, 12:00 am: RAC Attack at UKOUG TECH14 (UKOUG Tech 2014, ACC Liverpool, Liverpool)
- 9:00 am - 9:50 am: Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (UKOUG Tech 2014, ACC Liverpool, Liverpool)


A PDB cloned while in read-write, Data Guard loses its marbles (ORA-19729)

I feel a strong need to blog about this very recent problem because I’ve spent a lot of time debugging it… especially because there’s no information about this error on MOS.

For a lab, I had prepared two RAC container databases in a physical standby configuration.
Real-time query was configured (real-time apply, standby open read-only).

Following the documentation (http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#CCHDFDDG), I cloned a local pluggable database to a new PDB and, because Active Data Guard was enabled, I expected the PDB to be created on the standby and its files to be copied without problems.

BUT! I forgot to put my source PDB in read-only mode on the primary and, strangely:

  • The pluggable database was created on the primary WITHOUT PROBLEMS (even though the documentation explicitly states that the source needs to be read-only).
  • The recovery process on the standby stopped with an error.
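For the record, the documented procedure is to open the source PDB read-only before cloning. A sketch with hypothetical names, assuming standby_file_management=AUTO so the new files are created on the standby automatically:

```sql
-- On the primary CDB, as SYSDBA: quiesce the source first
ALTER PLUGGABLE DATABASE pdb_src CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb_src OPEN READ ONLY;

-- Clone while the source is read-only
CREATE PLUGGABLE DATABASE pdb_new FROM pdb_src;

-- Put the source back in read-write mode
ALTER PLUGGABLE DATABASE pdb_src CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb_src OPEN;
```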


Now, the primary had all its datafiles (the new PDB has con_id 4):


and the standby was missing the datafiles of the new PDB:
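Queries along these lines, run on both sites, show the difference (column choice is mine, not from the original screenshots):

```sql
-- Datafiles known to each controlfile, per container
SELECT con_id, file#, name FROM v$datafile ORDER BY con_id, file#;

-- PDB status on each site
SELECT con_id, name, open_mode FROM v$pdbs;
```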


But, on the standby database, the PDB somehow existed.


I tried to play around a little and finally decided to disable the recovery for the PDB (a recently introduced capability).
But to disable the recovery I needed to connect to the PDB, and the PDB was somehow “nonexistent”:


So I tried to drop it but, of course, the standby was read-only and I could not drop the PDB:


Then I shut down the standby, but one instance hung and I had to do a shutdown abort (I don’t know if this was related to my original problem…)


After mounting the standby again, the PDB was accessible:


So I was able to disable the recovery:
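The command sequence is roughly the following (PDB_NEW is a hypothetical name for the cloned PDB):

```sql
-- On the mounted standby, as SYSDBA
ALTER SESSION SET CONTAINER = pdb_new;
ALTER PLUGGABLE DATABASE DISABLE RECOVERY;
```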


Then, on the primary, I took a fresh backup of the involved datafiles:


and I transferred the copies to the standby and cataloged them in its controlfile:
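In RMAN terms, the idea is something like this (paths and names are invented for illustration):

```sql
-- On the primary: take image copies of the new PDB's datafiles
BACKUP AS COPY PLUGGABLE DATABASE pdb_new FORMAT '/stage/pdb_new_%U';

-- Transfer the copies to the standby host, then on the standby:
CATALOG START WITH '/stage/pdb_new_';
```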


but the restore was impossible, because the controlfile did not know about these datafiles!!


So I RESTARTED the recovery for a few seconds and, because the PDB had recovery disabled, the recovery process added the datafiles to the controlfile and set them offline.


Then I was able to restore the datafiles :-)


Finally, I enabled the recovery for the PDB again and restarted the apply process.


Lesson learned: if you want to clone a PDB, never, ever forget to put your source PDB in read-only mode first, or you’ll have to deal with the consequences!! :-)

Boost your Oracle RAC manageability with Policy-Managed Databases

Here are the slides of my presentation about policy-managed databases, which I used at Collaborate 14 (#C14LV).

The same abstract was refused by the OOW14 and UKOUG TECH14 selection committees, so it’s time to publish the slides :-)

In-memory Columnar Store hands-on

As I wrote in my previous post, the inmemory_size parameter is static, so you need to restart your instance to activate the in-memory store or to change its size. Let’s try to set it to 600M.
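A minimal sketch of the change; since the parameter is static, it needs SCOPE=SPFILE and a bounce:

```sql
ALTER SYSTEM SET inmemory_size = 600M SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
SHOW PARAMETER inmemory_size
```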


First interesting thing: it was rounded up to 608M, so it seems to work in chunks of 16M (to be verified).

Which views can you select for further information?

V$IM_SEGMENTS gives some information about the segments that have a columnar version, including the segment size, the actual memory allocated, the population status, and other compression indicators.

The other views help understand the various memory chunks and the status for each column in the segment.

Let’s create a table with a few records:

The table is very simple: it’s a Cartesian join of two “all_tables” views.

Let’s also create an index on it:

The table uses 621M and the index 192M.
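A sketch of how such a table can be built (the names are made up, and the row counts and sizes will obviously differ on your system):

```sql
-- Cartesian join of all_tables with itself, just to get many rows
CREATE TABLE t_inmem AS
SELECT t1.* FROM all_tables t1, all_tables t2;

CREATE INDEX t_inmem_idx ON t_inmem (owner, table_name);

-- Check the segment sizes
SELECT segment_name, ROUND(bytes/1024/1024) AS mb
  FROM user_segments
 WHERE segment_name LIKE 'T_INMEM%';
```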

How long does it take to do a full table scan almost from disk?

15 seconds! Ok, this virtual machine is on an external 5400 RPM drive… :-(

Once the table is fully cached in the buffer cache, the query performance progressively improves to ~1 sec.

There is no inmemory segment yet:

You have to specify it at table level:
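For example (the table name is hypothetical, matching the sketch above):

```sql
ALTER TABLE t_inmem INMEMORY;

-- Without a high priority, population may only start
-- once the table is actually scanned:
SELECT /*+ FULL(t) */ COUNT(*) FROM t_inmem t;

-- Then the columnar version shows up:
SELECT segment_name, inmemory_size, bytes, populate_status
  FROM v$im_segments;
```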

The actual creation of the columnar store takes a while, especially if you don’t specify high priority. You may have to query the table before the columnar store appears, and its population will also take some time and increase the overall load on the database (on my VirtualBox VM, the performance overhead of the columnar store population is NOT negligible).

Once the in-memory store is created, the optimizer is ready to use it:

The previous query now takes half the time on the first attempt!

The columnar store for the whole table uses 23M out of 621M, so the compression ratio is very good compared to the uncompressed index created earlier!


This is a very short example. The result here (a 2x improvement) is influenced by several factors, but it is safe to assume that under “normal” production conditions the gain will be much higher in almost all cases.
I just wanted to demonstrate that the in-memory columnar store is space efficient and really provides higher speed out of the box.

Now that you know about it, can you live without it? :-P

Oracle Database 12c in-memory option, a quick overview

Oracle Database 12.1.0.2 is finally out and, as we all knew in advance, it contains the new in-memory option.

I think that, despite its cost ($23k per processor), this is another great improvement! :-)

Considerable savings!

This new feature is not to be confused with TimesTen. In-memory enables a new memory area inside the SGA that holds a columnar-organized copy of segments entirely in memory. Columnar stores organize the data as columns instead of rows; they are ideal for queries that involve a few columns over many rows, e.g. analytic reports, but they also work great for ad-hoc queries that cannot make use of existing indexes.

Columnar stores don’t replace traditional indexes for data integrity or fast single-row look-ups, but they can replace the many additional indexes created for the sole purpose of reporting. So, while on one side it may seem a waste of memory, on the other side using in-memory can lead to considerable memory savings, thanks to all the indexes that no longer have a reason to exist.

Let’s take the example of a table (in RED) with nine indexes (other colors).


If you try to imagine all the blocks in the buffer cache, you may think about something like this:


Now, with the in-memory columnar store, you can get rid of many indexes, because they were created just for reporting and are now superseded by the performance of the new feature:



In this case, you’re not only saving blocks on disk, but also in the buffer cache, making room for the in-memory area. With the columnar store, the compression factor may allow your entire table to fit easily in the same space previously required by a few query-specific indexes. So you’ll have the buffer cache with traditional row-organized blocks (red, yellow, light and dark blue) and a separate in-memory area with a columnar copy of the segment (gray).


The in-memory store doesn’t use undo segments or the redo buffer, so you’re also saving undo block buffers and physical I/O!


The added value

In my opinion, this option will get much more attention from customers than Multitenant, for a very simple reason.

How many customers (in percentage)  would pay to achieve better consolidation of hundreds of databases? A few.

How many would pay, or are already paying, for better performance of their critical applications? Almost all the customers I know!


Internal mechanisms

In-memory is enabled on a per-segment basis: you can specify a table or a partition to be stored in-memory.

Each column is organized in separate chunks of memory called In-Memory Compression Units (IMCUs). The number of IMCUs required for each column may vary.

Each IMCU contains the data of the column and a journal used to guarantee read consistency with the blocks in the buffer cache. The data is not modified in place in the IMCU: instead, the row it refers to is marked as stale in the journal stored inside the IMCU itself. When the amount of stale data grows above a certain threshold, the space efficiency of the columnar store decreases, and the in-memory coordinator process ([imco]) may force a re-population of the store.
Re-population may also occur after manual intervention or at instance startup: because the store is memory-only, the data actually needs to be populated from disk.

Whether the data is populated immediately after startup depends on the priority specified for each segment: the higher the priority, the sooner the segment is populated in-memory. The priority attribute also drives which segments survive in-memory in case of “in-memory pressure”. Sadly, the inmemory_size parameter that sets the size of the in-memory area is static, and an instance restart is required to change it; that’s why you need to plan the size carefully before activating it. There is a compression advisor for in-memory that can help with this.
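Both the priority and the compression level can be specified per segment. A hedged example (the table and partition names are invented):

```sql
-- Populate eagerly at instance startup, compress for query performance:
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY HIGH PRIORITY CRITICAL;

-- Or mark only a single partition, populated lazily on first access,
-- compressed for capacity instead of speed:
ALTER TABLE sales MODIFY PARTITION p_2013
  INMEMORY MEMCOMPRESS FOR CAPACITY HIGH PRIORITY NONE;
```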


In this post you’ve seen a short introduction to in-memory. I hope I can publish another post soon with a few practical examples.