RAC Attack 12c in Switzerland, it’s a wrap!

Last Wednesday, September 17th, we held the first RAC Attack in Switzerland (the first ever here, as far as I know!). I have to say that it was a complete success, like every other RAC Attack I’ve been involved in.

This time I’ve been particularly happy and proud because I organized it almost entirely on my own. Trivadis, my employer, kindly sponsored everything: the venue (the new, cool Trivadis offices in Geneva), the T-shirts (I did the design, very similar to the one I designed for Collaborate 14), beers and pizza!

For beer lovers, we had the good “Blanche des Neiges” from Belgium, and “La Helles” and “La Rossa” from the San Martino Brewery in Ticino (the Italian-speaking region of Switzerland). People appreciated them 🙂

We had 4 top-class Ninjas and 10 people actively installing Oracle RAC (plus a famous blogger who joined for networking); sadly, two people cancelled at the last minute. For the very first time, all the participants had downloaded the Oracle software in advance. When they registered, I reminded them twice that the software was necessary because we cannot provide it due to legal constraints.

People running the lab on Windows laptops reported problems with VirtualBox 4.3.16 (4.3.14 was skipped entirely because of known problems), so everyone had to fall back to version 4.3.12 (the last stable release, IMO).

The best compliment I got was the presence of a Senior DBA coming all the way from Nanterre! 550 km (more than 5 hours by public transport, door to door) and an overnight stay just for this event, can you believe it? 🙂

This makes me think seriously about the real need to organize this kind of event around the world.

Of course, we had a photo session with a lot of jumps 😉 We could not miss this RAC Attack tradition!

We wrapped up around 10:30pm, after a bit more than 5 hours. We enjoyed ourselves a lot and had a good time together chatting about Oracle RAC and about our work in general.

Thank you again to all participants!! 🙂

 

 

A PDB is cloned while in read-write, Data Guard loses its marbles (12.1.0.2, ORA-19729)

UPDATE: please check my more recent post about this problem and the information I got at the Oracle Demo Grounds during OOW14: http://www.ludovicocaldara.net/dba/demo-grounds-clone-pdb-rw/

I feel the strong need to blog about this very recent problem because I’ve spent a lot of time debugging it… especially because there’s no information about this error on MOS.

Introduction
For a lab, I have prepared two RAC container databases in a physical standby configuration.
Real-time query is configured (real-time apply, standby open in read-only mode).

Following the documentation (http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#CCHDFDDG), I cloned a local pluggable database to a new PDB and, because Active Data Guard is enabled, I was expecting the PDB to be created on the standby and its files copied without problems.
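For reference, the clone itself boils down to a single statement on the primary. This is just a sketch with placeholder PDB names (PDB1 as source, PDB2 as clone):

```bash
# minimal sketch (hypothetical PDB names): clone an existing PDB on the primary
sqlplus -s / as sysdba <<'EOF'
-- the source PDB should be opened read-only first: this is exactly the step I skipped
create pluggable database PDB2 from PDB1;
alter pluggable database PDB2 open;
EOF
```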

BUT! I forgot to put my source PDB in read-only mode on the primary and, strangely:

  • The pluggable database was created on the primary WITHOUT PROBLEMS (despite the documentation explicitly stating that the source needs to be read-only)
  • The recovery process on the standby stopped with an error.

 

Now, the primary had all its datafiles (the new PDB has con_id 4):
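A quick way to check this (a sketch, assuming SYSDBA access on each site) is to list the datafiles per container: on the primary all the con_id 4 files show up, while the same query on the standby misses them:

```bash
# sketch: list datafiles per container (run on both primary and standby)
sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col name format a90
select con_id, file#, name from v$datafile order by con_id, file#;
EOF
```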

 

and the standby was missing the datafiles of the new PDB:

 

But on the standby database, the PDB somehow existed:
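Again as a sketch, the containers known to the standby can be checked from the root:

```bash
# sketch: check which PDBs the standby knows about
sqlplus -s / as sysdba <<'EOF'
select con_id, name, open_mode from v$pdbs;
EOF
```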

 

I tried to play around a little and finally decided to disable the recovery for the PDB (a feature new in 12.1.0.2).
But to disable the recovery I needed to connect to the PDB, and the PDB was somehow “non-existent”:

 

So I tried to drop it but, of course, the standby was read-only and I could not drop the PDB:

 

Then I shut down the standby, but one instance hung and I needed to do a shutdown abort (I don’t know if it was related to my original problem…)

 

After mounting the standby again, the PDB was also accessible:

 

So I was able to disable the recovery:
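For reference, this is roughly what it looks like (a sketch: the PDB name is a placeholder, and the statement is run while connected to the PDB on the mounted standby):

```bash
# sketch: disable recovery for the affected PDB on the standby (12.1.0.2 feature)
sqlplus -s / as sysdba <<'EOF'
alter session set container = PDB2;
alter pluggable database disable recovery;
EOF
```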

 

Then, on the primary, I took a fresh backup of the involved datafiles:
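Something along these lines (a sketch, with hypothetical file numbers and staging path):

```bash
# sketch: image copies of the new PDB's datafiles on the primary (file numbers are hypothetical)
rman target / <<'EOF'
backup as copy datafile 15, 16, 17 format '/tmp/staging/%U';
EOF
```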

 

and I copied the files over and cataloged the copies in the standby controlfile:
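Assuming the copies were transferred to a staging directory on the standby host, the cataloging is just:

```bash
# sketch: make the transferred copies known to the standby controlfile
rman target / <<'EOF'
catalog start with '/tmp/staging/' noprompt;
EOF
```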

 

but the restore was impossible, because the controlfile did not know about these datafiles!!

 

So I RESTARTED the recovery for a few seconds and, because the PDB had its recovery disabled, the recovery process added the datafiles and set them offline.
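In practice, a short start/stop of the managed recovery is enough (sketch):

```bash
# sketch: briefly restart and stop managed recovery so it registers the missing files
sqlplus -s / as sysdba <<'EOF'
alter database recover managed standby database using current logfile disconnect from session;
host sleep 30
alter database recover managed standby database cancel;
EOF
```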

 

Then I was able to restore the datafiles 🙂
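With the copies cataloged, the restore itself is straightforward (again, hypothetical file numbers):

```bash
# sketch: restore the new PDB's datafiles from the cataloged copies
rman target / <<'EOF'
restore datafile 15, 16, 17;
EOF
```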

 

Finally, I enabled the recovery for the PDB again and restarted the apply process.
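That is, roughly (same placeholders as above):

```bash
# sketch: re-enable PDB recovery and resume managed recovery
sqlplus -s / as sysdba <<'EOF'
alter session set container = PDB2;
alter pluggable database enable recovery;
alter session set container = CDB$ROOT;
alter database recover managed standby database using current logfile disconnect from session;
EOF
```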

 

Lesson learned: if you want to clone a PDB, never, ever forget to put your source PDB in read-only mode first, or you’ll have to deal with the consequences!! 🙂

Boost your Oracle RAC manageability with Policy-Managed Databases

Here are the slides of my presentation about Policy-Managed Databases. I used them to present at Collaborate 14 (#C14LV).

The same abstract was refused by the OOW14 and UKOUG_TECH14 selection committees, so it’s time to publish them 🙂

RAC Attack 12c arrives in Switzerland in September!

After Oracle Open World, IOUG Collaborate and other major conferences, RAC Attack also comes to Geneva! Set up an Oracle 12c RAC environment on your laptop. Experienced volunteers (ninjas) will help you solve all the related puzzles and guide you through the installation process.

Ninjas
Ludovico Caldara – Oracle ACE, RAC SIG European Chair & RAC Attack co-writer
Luca Canali – OAK Table Member & frequent speaker
Eric Grancher – OAK Table member
Jacques Kostic – OCM 11g & Senior Consultant at Trivadis

Where? The new Trivadis offices, Chemin Château-Bloch 11, CH-1219 Geneva
When? Wednesday, September 17th 2014, from 17h00 onwards
Cost? It is a FREE event! It is a community-based, fun and informal workshop. You just need to bring your laptop and your good mood!
Registration: TVD_LS_ADMIN@trivadis.com

Limited places! Reserve your seat and your T-shirt now: TVD_LS_ADMIN@trivadis.com

Agenda:
17.00 – Welcome
17.30 – RAC Attack 12c – part I
19.30 – Pizza and beers! (sponsored by Trivadis)
20.00 – RAC Attack 12c – part II
22.00 – T-shirt distribution and group photo!!

VERY IMPORTANT: Participation in this event requires bringing your own laptop!
Required specifications:
a) a 64-bit OS that supports Oracle VirtualBox
b) 8GB RAM, 50GB free HDD space.
Due to legal constraints, please download Oracle Database 12c and Grid Infrastructure for Linux x86-64 in advance from https://edelivery.oracle.com/ (more information: http://goo.gl/pqavYh).

RAC Attack comes to Switzerland in September!!

After Oracle Open World, IOUG Collaborate and all major conferences, RAC Attack comes to Geneva! Set up an Oracle 12c RAC environment on your laptop. Experienced volunteers (ninjas) will help you address any related issues and guide you through the setup process.

Ninjas
Ludovico Caldara – Oracle ACE, RAC SIG European Chair & RAC Attack co-writer
Luca Canali – OAK Table Member and frequent speaker
Eric Grancher – OAK Table member
Jacques Kostic – OCM 11g & Senior Consultant at Trivadis

Where? The new Trivadis offices, Chemin Château-Bloch 11, CH-1219 Geneva
When? Wednesday, September 17th 2014, from 17h00 onwards
Cost? It is a FREE event! It is a community based, informal and enjoyable workshop.
You just need to bring your laptop and your desire to have fun!
Registration: TVD_LS_ADMIN@trivadis.com

Limited places! Reserve your seat and T-shirt now: TVD_LS_ADMIN@trivadis.com

Agenda:
17.00 – Welcome
17.30 – RAC Attack 12c part I
19.30 – Pizza and Beers! (kindly sponsored by Trivadis)
20.00 – RAC Attack 12c part II
22.00 – T-shirt distribution and group photo!!

 

VERY IMPORTANT: To participate in the workshop, you need to bring your own laptop.
Required specifications:
a) any 64-bit OS that supports Oracle VirtualBox
b) 8GB RAM, 50GB free HDD space.
Due to legal constraints, please pre-download Oracle Database 12c and Grid Infrastructure for Linux x86-64 from the https://edelivery.oracle.com/ web site (further information here: http://goo.gl/pqavYh).

RAC Attack 12c at Collaborate 14 and advanced labs

I’ve just published on SlideShare an advanced lab that RAC Attack attendees can do at Collaborate this year, instead of just doing the basic 2-node RAC installation on their laptop.

We’ll also offer an advanced lab about Flex Clusters and Flex ASM (written by Maaz Anjum). Moreover, I’m working on an additional lab that implements a multi-node RAC by using VirtualBox linked clones and GI Home clones, like I’ve shown in my previous post.

RAC Attack at #C14LV will be great fun again. We’ll have a few T-shirts for the attendees, designed by me and Chet “Oraclenerd” Justice, kindly sponsored by OTN.

The workshop will end with beers and snacks (again, thank you OTN for sponsoring this :-)).

If you’re planning to attend Collaborate, join us and start your conference week in a good mood 🙂

 

Multinode RAC 12c cluster on VirtualBox using linked clones

Recently I had to install a four-node RAC cluster on my laptop in order to do some tests. I found an “easy” (well, easy… it depends), fast and space-efficient way to do it, so I would like to write it down.

The quick step list

  • Install the OS on the first node
  • Add the shared disks
  • Install the clusterware in RAC mode on the first node only
  • Temporarily remove the shared disks
  • Clone the server as a linked clone as many times as you want
  • Reconfigure the new nodes with their new IP addresses and names
  • Add back the shared disks on the first node and on all other nodes
  • Clone the GI + database homes in order to add them to the cluster

Using this method, the Oracle binaries (the most space-consuming portion of the RAC installation) are installed and allocated on the first node only.

The long step list

Actually, you can follow many of the instruction steps from the RAC Attack 12c book.

  • Review the HW requirements, but leave at least 3GB of RAM for each guest plus 2GB more for your host (you may try with less RAM, but everything will slow down).
  • Download all the SW components; additionally, you may download the latest PSU (12.1.0.1.2) from MOS.
  • Prepare the host and install Linux on the first node. When configuring the OS, make sure you enter all the required IP addresses for the additional nodes. RAC Attack has two nodes, collabn1 and collabn2; add as many nodes as you want to configure. As an example, the DNS config may have four nodes, as sketched below.
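This is just an illustrative naming/addressing sketch (the subnet and exact IPs are hypothetical, pick whatever fits the network you configured for the lab):

```bash
# hypothetical /etc/hosts (or DNS) sketch for a four-node lab cluster
cat >> /etc/hosts <<'EOF'
192.168.78.51  collabn1.racattack  collabn1
192.168.78.52  collabn2.racattack  collabn2
192.168.78.53  collabn3.racattack  collabn3
192.168.78.54  collabn4.racattack  collabn4
192.168.78.61  collabn1-vip.racattack  collabn1-vip
192.168.78.62  collabn2-vip.racattack  collabn2-vip
192.168.78.63  collabn3-vip.racattack  collabn3-vip
192.168.78.64  collabn4-vip.racattack  collabn4-vip
EOF
```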

At this point, the procedure starts differing from the RAC Attack book.

  • Go to the VirtualBox VM settings and delete all the shared disks.
  • Clone the first server as a linked clone (right-click, clone, choose the name, flag “Linked Clone”) as many times as the number of additional servers you want; a command-line equivalent is sketched below.
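If you prefer the command line, VBoxManage can do the same thing (a sketch; linked clones must be taken from a snapshot, and the VM and snapshot names here are hypothetical):

```bash
# sketch: create linked clones of the first node from a snapshot
VBoxManage snapshot collabn1 take "base-for-clones"
for n in 2 3 4; do
  VBoxManage clonevm collabn1 --snapshot "base-for-clones" \
    --options link --name collabn$n --register
done
```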

  • By using this method, the new servers will use the same virtual disk file as the first server, and a second file will be used to track the differences. This saves a lot of space on disk.
  • Add the shared disks back to all the servers.
  • Start the other nodes and configure them following the RAC Attack instructions again.
  • Once all the nodes are configured, the GI installation has to be cleaned out on all the cloned servers using the guidelines from the cloning documentation (see the references below):

  • Then, on each cloned server, run perl clone.pl as follows to clone the GI home, but change the LOCAL_NODE accordingly (note: the GI Home name must be identical to the one specified in the original installation!):
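The invocation looks roughly like this (a sketch: paths, home names and node list are examples, adapt them to your installation; LOCAL_NODE changes on every server):

```bash
# sketch: clone the existing Grid Infrastructure home on a new node (run as the GI owner)
cd /u01/app/12.1.0/grid/clone/bin
perl clone.pl ORACLE_BASE=/u01/app/oracle \
  ORACLE_HOME=/u01/app/12.1.0/grid \
  ORACLE_HOME_NAME=OraGI12Home1 \
  "CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}" \
  "LOCAL_NODE=collabn3" \
  CRS=TRUE
```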

  • Then, on the first node (which you have started, and on which you have reactivated the clusterware stack with crsctl enable crs / crsctl start crs ;-)), run this command to add the new nodes to the definition of the cluster:

 

  • From the first server, copy these files to all the other nodes:

  • Then clone the DB home as well (again, run it on each new server and specify the same DB home name that you used in the original installation):
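Again only as a sketch, with the same caveats as for the GI home (example paths and node names):

```bash
# sketch: clone the database home on a new node
cd /u01/app/oracle/product/12.1.0/dbhome_1/clone/bin
perl clone.pl ORACLE_BASE=/u01/app/oracle \
  ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 \
  ORACLE_HOME_NAME=OraDB12Home1 \
  "CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}" \
  "LOCAL_NODE=collabn3"
```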

  • On each new node, also run the updateNodeList step and the DB home root.sh to update the node list for the DB home:
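Something like this (sketch, same example paths as above):

```bash
# sketch: refresh the inventory node list for the DB home, then run its root.sh
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin/runInstaller -updateNodeList \
  ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 \
  "CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}"
# as root:
/u01/app/oracle/product/12.1.0/dbhome_1/root.sh
```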

  • And finally, run the GI root.sh on each new node to finalize their inclusion in the cluster!! 🙂

 

  • As a result, you should be able to see all the cluster resources started correctly on all the nodes.
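A quick check, from any node:

```bash
# check that all cluster resources are online on every node
crsctl stat res -t
```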

 

I know it seems a little complex, but if you have several nodes this is dramatically faster than the standard installation, and the space used is also reduced. This is good if you have invested in a high-performance but low-capacity SSD disk like I did :-(.

Hope it helps. I paste here the official documentation links that I’ve used to clone the installations; the other steps are my own work.

References

 

Some notes about Grid Infrastructure PSU 12.1.0.1.2

I’m creating a new 12c RAC environment from scratch, and I just want to track a few notes (primarily for my personal use ;-)) about the PSU.

The opatch utility bundled with the GI does not contain emocmrsp, so it is necessary to install the latest OPatch (patch 6880880).
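For reference, a sketch of the two preliminary steps (paths and zip name are examples): unzip the new OPatch into the home and generate the OCM response file that opatchauto will ask for:

```bash
# sketch: refresh OPatch in the GI home and create an OCM response file
unzip -o p6880880_121010_Linux-x86-64.zip -d /u01/app/12.1.0/grid
/u01/app/12.1.0/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
```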

The patching process can patch both the GI and RAC homes at once, but if you don’t have a valid database registered, an error is raised:

So, if it’s a new installation, you need to patch the Oracle Homes individually.

Remember that:

  • The patch must be unzipped by the oracle/grid user in a directory readable by oracle and root (or it will fail with “Argument(s) Error… Patch Location not valid” or other funny errors, such as permission denied errors in the middle of the patch process)
  • It must be applied by the root user
  • It must be applied individually and on every node, one node at a time
  • The opatchauto executable must belong to one of the Oracle Homes you’re patching (so if you patch GI and RAC separately, you have to use the corresponding opatchauto)

Grid:
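A sketch of the GI home patching (the patch staging directory and home paths are examples):

```bash
# sketch: apply the PSU to the Grid Infrastructure home only (run as root, one node at a time)
/u01/app/12.1.0/grid/OPatch/opatchauto apply /stage/GIPSU_12.1.0.1.2 \
  -oh /u01/app/12.1.0/grid -ocmrf /tmp/ocm.rsp
```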

RAC:
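And the same for the database home, using its own opatchauto (again, paths are examples):

```bash
# sketch: apply the PSU to the database home only (run as root, one node at a time)
/u01/app/oracle/product/12.1.0/dbhome_1/OPatch/opatchauto apply /stage/GIPSU_12.1.0.1.2 \
  -oh /u01/app/oracle/product/12.1.0/dbhome_1 -ocmrf /tmp/ocm.rsp
```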

Cheers

Ludovico

Oracle RAC and Policy Managed Databases

 

Some weeks ago I commented on a good post by Martin Bach (@MartinDBA on Twitter, make sure to follow him!):

http://martincarstenbach.wordpress.com/2013/06/17/an-introduction-to-policy-managed-databases-in-11-2-rac/

What I’ve realized is that Policy Managed Databases are not widely used, and there is a lot of misunderstanding about how they work and some concern about implementing them in production.

My current employer, Trivadis (@Trivadis, make sure to call us if your database needs a health check :-)), uses PMDs as a best practice, so it’s worth spending some words on them, isn’t it?

 Why Policy Managed Databases?

PMDs are an efficient way to manage and consolidate several databases and services with the least effort. They rely on Server Pools, which are used to physically partition a big cluster into smaller groups of servers. Each pool has three main properties:

  • A minimum number of servers required to compose the group
  • A maximum number of servers
  • A priority (importance) that makes a server pool more important than others

If the cluster loses a server, the following rules apply:

  • If a pool has fewer than min servers, a server is moved from a pool that has more than min servers, starting with the one with the lowest priority.
  • If a pool has fewer than min servers and no other pool has more than min servers, the server is moved from the pool with the lowest priority.
  • Pools with higher priority may give servers to pools with lower priority as long as their min server property is honored.

This means that if a server pool has the greatest priority, all the other server pools can be shrunk to satisfy its minimum number of servers.

Generally speaking, when creating a policy managed database (it can be an existing one, of course!) it is assigned to a server pool rather than to a single server. The pool is seen as an abstract resource where you can put your workload.

[figure: SRVPOOL_descr]

If you read the definition of Cloud Computing given by NIST (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) you’ll find something similar:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort or service provider interaction

 

There are some major benefits in using policy managed databases (this is solely my opinion):

  1. PMD instances are created/removed automatically. This means that you can add and remove nodes to/from the server pools or the whole cluster, and the underlying databases will be expanded or shrunk to follow the new topology.
  2. Server Pools (which are the basis of PMDs) allow you to give different priorities to different groups of servers. This means that, if correctly configured, you can lose several physical nodes without impacting your most critical applications and without reconfiguring the instances.
  3. PMDs are the basis for Quality of Service Management, an 11gR2 feature that does cluster-wide resource management to achieve predictable performance on critical applications/transactions. QoS is a really advanced topic, so I warn you: do not use it without appropriate knowledge. Again, Trivadis has deep knowledge of it, so you may want to contact us for a consulting service (and why not, perhaps I’ll try to blog about it in the future).
  4. RAC One Node databases (RONDs?) can work beside PMDs to avoid instance proliferation for non-critical applications.
  5. Oracle is pushing it to achieve maximum flexibility for the Cloud, so it’s a trendy technology that’s cool to implement!
  6. I’ll find some other reasons, for sure! 🙂

What changes in real-life DB administration?

Well, the concept of a Server -> Instance relationship disappears, so at the very beginning you’ll have to be prepared for something dynamic (but once configured, things don’t change often).

As Martin pointed out in his blog, you’ll need to configure server pools and think about pools of resources rather than individual configuration items.

The spfile doesn’t contain any information related to specific instances, so the parameters must be database-wide.

The oratab will contain only the db name, not the instance name, and the db name is present in the oratab regardless of whether the server belongs to that database’s server pool or to another one.

Your scripts should take care of this.

Also, when connecting to your database, you should rely on services and access your database remotely rather than trying to figure out where the instances are running. But if you really need to know, you can get it:
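For example (a sketch, assuming a policy-managed database named TST):

```bash
# sketch: find out where the instances of a policy-managed database are running
srvctl status database -d TST
# or, from any instance:
sqlplus -s / as sysdba <<'EOF'
select inst_id, instance_name, host_name from gv$instance;
EOF
```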

An approach for the crontab: every DBA, sooner or later, will need to schedule tasks with crond. Since a RAC has multiple nodes, you don’t want to run the same script many times, but rather choose which node will execute it.

My personal approach (every DBA has his own preference) is to check the instance with cardinality 1 and match it with the current node, e.g. as in the sketch below:
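A minimal sketch of this check (assuming a database named TST; the instance names of a PMD are TST_1, TST_2, … and srvctl tells you on which node each one runs):

```bash
#!/bin/bash
# sketch: return TRUE only on the node currently hosting instance TST_1
DB=TST
ME=$(hostname -s)
NODE1=$(srvctl status database -d $DB | awk -v i="${DB}_1" '$2==i {print $NF}')
if [ "$NODE1" = "$ME" ]; then
  echo TRUE    # this node hosts instance 1: run the scheduled task here
else
  echo FALSE   # another node will take care of it
fi
```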

In the example, TST_1 is running on node1, so the first evaluation returns TRUE. The second evaluation is done on node2, so it returns FALSE.

This trick can be used to have an identical crontab on every server and choose at runtime whether the local server is the preferred one to run tasks for the specified database.

 

A proof of concept with Policy Managed Databases

My good colleague Jacques Kostic has given me access to an enterprise-grade private lab, so I can show you some “live operations”.

Let’s start with the actual topology: it’s an 8-node stretched RAC with ASM diskgroups whose failgroups are on the remote site.

[figure: RAC_ARCH]

This should be enough to show you some capabilities of server pools.

The Generic and Free server pools

After a clean installation, you’ll end up with two default server pools:

[figure: SRVPOOL_empty]

The Generic pool will contain all non-PMDs (if you use only PMDs, it will be empty). The Free pool will own the servers that are “spare”: when all server pools have reached their maximum size, they don’t require more servers.
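You can see them right after the installation (sketch):

```bash
# the two built-in server pools are visible from the clusterware
crsctl stat serverpool
```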

 New server pools

Actually, the cluster I’m working on has two server pools already defined (PMU and TST):

[figure: SRVPOOL_new]

(the node assignment in the graphic is not relevant here).

They have been created with a command like this one:
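Something like this (a sketch: the min/max/importance values are examples consistent with the rest of this post, not the exact original command; 12c full-keyword syntax shown, on 11.2 the equivalent short flags are -g/-l/-u/-i):

```bash
# sketch: create the two server pools (values are illustrative)
srvctl add srvpool -serverpool PMU -min 5 -max 6 -importance 3
srvctl add srvpool -serverpool TST -min 2 -max 2 -importance 2
```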

“srvctl -h” is a good starting point for a quick reference of the syntax.

You can check the status with:

and the configuration:
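In both cases it’s a one-liner (sketch):

```bash
# status of the pools (which servers are active in each one)
srvctl status srvpool
# configuration (min, max, importance)
srvctl config srvpool
```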

 

Modifying the configuration of serverpools

In this scenario, PMU is too big. The sum of the minimum nodes is 2+5=7, so I have only one server that can be used for another server pool without falling below the minimum number of nodes.

I want to make some room for another server pool composed of two or three nodes, so I reduce the server pool PMU:
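That is, lowering its minimum (sketch, 12c syntax):

```bash
# sketch: reduce the minimum size of the PMU pool from 5 to 3
srvctl modify srvpool -serverpool PMU -min 3
```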

Notice that the PMU max size is still 6, so I don’t have free servers yet.

So, if I try to create another server pool, I’m warned that some resources can be taken offline:
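The new pool in this example is created like this (a sketch; the min/max/importance values are taken from the rest of the post, and the clusterware asks for -force because it has to move servers around):

```bash
# sketch: create the new low-importance pool; -force confirms the server reshuffle
srvctl add srvpool -serverpool LUDO -min 2 -max 3 -importance 1 -force
```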

The clusterware proposes to stop 2 instances of the pmu database on the server pool PMU, because it can shrink from 6 to 3 servers, but I have to confirm the operation with the -f flag.

Modifying the serverpool layout can take time if resources have to be started/stopped.

My new server pool ends up composed of two nodes only, because I’ve given it an importance of 1 (PMU wins, as it has an importance of 3).

Inviting RAC One Node databases to the party

Now that I have some room in my new server pool, I can start creating new databases.

With PMD I can add two types of databases: RAC or RACONENODE. Depending on the choice, I’ll have a database running either on ALL NODES OF THE SERVER POOL or on ONE NODE ONLY. This is a kind of limitation in my opinion; I hope Oracle will improve it in the near future: it would be great to be able to specify the cardinality at the database level too.

Creating a RAC One Node DB is as simple as selecting two radio buttons during the “standard” dbca procedure:

[figure: RAC_one]

The Server Pool can be created, or you can specify an existing one (as in this lab):

[figure: RAC_one_pool]

I’ve created two new RAC One Node databases:

  • DB LUDO (service PRISM :-))
  • DB VICO (service CHEERS)

I’ve ended up with something like this:

That can be represented with this picture:

[figure: SRVPOOL_final]

RAC One Node databases can be managed as usual, with online relocation (is it still called Omotion?).
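For example (sketch, using the LUDO database and a hypothetical target node and timeout):

```bash
# sketch: online relocation of a RAC One Node database to another node of its pool
srvctl relocate database -d LUDO -n node8 -w 15
```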

Losing the nodes

With this situation, what happens if I lose (stop) one node?

The node belonged to the pool LUDO; however, I had this situation right after:

A server has been taken from the pool PMU and given to the pool LUDO. This is because PMU had one more server than its minimum server requirement.

 

Now, if I keep losing one node at a time, I’ll have the following situation:

  • 1 node lost: PMU 3, TST 2, LUDO 2
  • 2 nodes lost: PMU 3, TST 2, LUDO 1 (as PMU is already at its min and has higher priority, LUDO is penalized because it has the lowest priority)
  • 3 nodes lost: PMU 3, TST 2, LUDO 0 (as LUDO has the lowest priority)
  • 4 nodes lost: PMU 3, TST 1, LUDO 0
  • 5 nodes lost: PMU 3, TST 0, LUDO 0

So my hyper-super-critical application will still have three nodes, with plenty of resources to run even after multiple physical failures, as it is in the server pool with the highest priority and a minimum required number of 3 servers.

What I would ask Santa for, if I make it onto the Nice List (and if Santa works at Redwood Shores)

Dear Santa, I would like:

  • To be able to create databases with a node cardinality, to have for example 2 instances in a 3-node server pool
  • Server Pools that are aware of the physical location when I use stretched clusters, so I could always end up with “at least one active instance per site”.

Think about it 😉

Ludovico

Oracle Database 12c: Multitenant, Services and Standard Edition RAC

The installation process of a typical Standard Edition RAC does not differ from the Enterprise Edition one. To achieve a successful installation, refer to the nice quick guide made by Yury Velikanov and change the Edition accordingly when installing the DB software.

Standard Edition and Feature availability

The first thing that impressed me is that you’re still able to choose to enable pluggable databases in DBCA, even though the Multitenant option is not available for SE.

So I decided to create a container database, CDB01, using template files, so all the EE options are normally cabled into the new DB. The pluggable database name is PDB01.

As you can see, the initial banner contains “Real Application Clusters and Automatic Storage Management options”.

The Multitenant option is not available. How does SE react to its usage?

First, on the ROOT container, dba_feature_usage_statistics is empty.

This is interesting because all the features are in (remember, it’s created from the generic template), so the feature check is moved from the ROOT to the pluggable databases.

On the local PDB I have:
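The check is the usual one (a sketch), run first in the root and then inside the PDB:

```bash
# sketch: check Multitenant feature usage in the PDB
sqlplus -s / as sysdba <<'EOF'
alter session set container = PDB01;
select name, detected_usages, currently_used
from   dba_feature_usage_statistics
where  name like 'Oracle Multitenant%';
EOF
```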

Having ONE PDB does not trigger the usage of Multitenant (as I was expecting).

What if I try to create a new pluggable database?
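For example (a sketch with a hypothetical PDB name, assuming OMF so no file name conversion is needed):

```bash
# sketch: trying to create a second PDB; on SE this fails, since only one user PDB is allowed
sqlplus -s / as sysdba <<'EOF'
create pluggable database PDB02 admin user pdbadmin identified by "oracle";
EOF
```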

A-AH!! Correctly, I can have a maximum of ONE pluggable database in my container.

However, this still allows:

  • Smooth migration from SE to a Multitenant Architecture
  • Quick upgrade from one release to another

To be sure that I can plug/unplug, I tried it:
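Roughly like this (a sketch; the XML path is hypothetical):

```bash
# sketch: unplug the PDB and plug it back in
sqlplus -s / as sysdba <<'EOF'
alter pluggable database PDB01 close immediate instances=all;
alter pluggable database PDB01 unplug into '/tmp/PDB01.xml';
drop pluggable database PDB01 keep datafiles;
create pluggable database PDB01 using '/tmp/PDB01.xml' nocopy;
alter pluggable database PDB01 open instances=all;
EOF
```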

Other Enterprise-only features, of course, don’t work.

Create a Service on the RAC Standard Edition (just to check if it works)

I just followed the steps to do it on an EE. Keep in mind that I’m using an admin-managed DB (something will come about policy-managed DBs, stay tuned).

As you can see, it works pretty well. Compared to 11g, you have to specify the -pdb parameter:
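A sketch of the command (the service name and instance names are examples):

```bash
# sketch: create a service that targets the PDB on an admin-managed RAC database
srvctl add service -db CDB01 -service APP_SVC -pdb PDB01 \
  -preferred CDB011 -available CDB012
srvctl start service -db CDB01 -service APP_SVC
```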

Then I can access my DB (and the preferred instance) using the service_name I specified.

 

Let me know what you think about SE RAC on 12c. Is it valuable for you?

I’m also on twitter: @ludovicocaldara

Cheers

Ludo