Another successful RAC Attack in Geneva!

Last week I hosted the second Swiss RAC Attack workshop at the Trivadis offices in Geneva. It was a great success, with 21 total participants: 5 Ninjas, 4 alumni and 14 people actively installing or playing with RAC 12c on their laptops.

Last year I was surprised by a participant coming from Nanterre. This year two people came directly from Moscow, just for the workshop!

We had good pizza and special beers: Chimay, Vedett, Duvel, Andechs…

Last but not least, our friend Marc Fielding was visiting Switzerland last week, so he took the opportunity to join us and make the workshop even more interesting! 😀

Looking forward to organizing it again next year! Thank you guys 🙂

Ludovico

How to avoid ORA-02153 when creating database links on 11.2.0.4 (the unsupported way)

Disclaimer (wow, most of my recent posts start with a disclaimer, I guess it’s bad): this post explains an UNSUPPORTED workaround for an error raised by recent Oracle security checks. You should never use it in production! Forewarned is forearmed.

Before Oracle Database 11.2.0.4, it was possible to create a database link using the following syntax:
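A sketch of that syntax (link and user names are illustrative; the quoted string is the password hash taken from an existing link):

    CREATE DATABASE LINK mylink
      CONNECT TO remote_user IDENTIFIED BY VALUES '<password hash>'
      USING 'REMOTE_DB';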

It was possible to get the password hash either by selecting dbms_metadata.get_ddl for the database link or by querying the link$ table directly.
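For example (a hedged sketch; owner and link names are illustrative):

    -- via the metadata API
    SELECT dbms_metadata.get_ddl('DB_LINK', 'MYLINK', 'MYSCHEMA') FROM dual;

    -- or directly from the dictionary (as SYS)
    SELECT name, userid, passwordx FROM sys.link$ WHERE name = 'MYLINK';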

Starting with Oracle 11.2.0.4, Oracle enforces a check that prevents the use of such syntax. Every newly created database link must have its password explicitly set.

This is clearly stated in the MOS note:

ORA-02153: Invalid VALUES Password String When Creating a Database Link Using BY VALUES With Obfuscated Password After Upgrade To 11.2.0.4 (Doc ID 1905221.1)

This is seen as a security enhancement. In my opinion, it also forces you to specify clear-text passwords somewhere in the scripts that create the db links. (You do not create the db links by hand in SQL*Plus every time you need one. Do you?)

The only exception is expdp/impdp. If you expdp a schema, the dumpfile contains the password hash and the statement needed to recreate the database link (… identified by values ‘:1’), but Oracle only allows impdp to use such a statement.

So, simple workaround: create the database links in a dev/staging environment, export them using expdp and then provide your DBA with the dumpfile so he/she can import it and create the db links. Right? Not always.
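A minimal sketch of that round trip (schema, directory and file names are illustrative):

    # on dev/staging: export only the database links of the APP schema
    expdp system/... schemas=APP include=DB_LINK directory=DATA_PUMP_DIR dumpfile=dblinks.dmp

    # on the destination: recreate them, hash included
    impdp system/... schemas=APP include=DB_LINK directory=DATA_PUMP_DIR dumpfile=dblinks.dmp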

There is one case where you really need the old syntax:

  • You don’t know the password

AND

  • You MUST change the database link name.

As you may know, there is no way to change a database link name (even through impdp: there is no remap_dblink or anything like that).

E.g., you need to keep the db link and intend to use it for a check BUT you want to prevent the application from using it with the old name.

Because I believe that no problem exists that cannot be solved by my Trivadis colleagues, I’ve asked internally. A colleague came up with a dead simple (and unsupported) solution:

Insert/update sys.link$, flush the shared_pool.
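A hedged sketch of the idea (UNSUPPORTED; the column list comes from an 11.2 dictionary and may differ in other releases):

    -- copy the row of the old link under the new name
    INSERT INTO sys.link$ (owner#, name, ctime, host, userid, flag, passwordx)
      SELECT owner#, 'NEW_LINK_NAME', sysdate, host, userid, flag, passwordx
        FROM sys.link$
       WHERE name = 'OLD_LINK_NAME';
    COMMIT;

    -- make the change visible
    ALTER SYSTEM FLUSH SHARED_POOL;

The old link can then be dropped with a regular DROP DATABASE LINK.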

Remember, use it at your own risk (or don’t use it at all) 😉

HTH

Ludovico

ORA-01882: timezone region not found while connecting to an EM12c target

After a recent upgrade to Enterprise Manager 12c 12.1.0.5, we noticed that the few Oracle Databases in release 10g were no longer connectable, failing with ORA-01882.

On MOS there are a few notes about this error (ORA-00604: error occurred at recursive SQL level 1, ORA-01882: timezone region not found), the most relevant being Doc ID 1513536.1 and Doc ID 1934470.1.

The problem is due to the time zone table on the target database not containing the timezone requested by the client. But who’s the client? In our case, all the target agents were correctly set to Europe/Zurich, and the timezone table of the target database contained it:
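A quick check like this one (a sketch) confirmed it:

    SELECT DISTINCT tzname FROM v$timezone_names WHERE tzname = 'Europe/Zurich';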

So what was causing the problem?

The upgrade process of the OMSes from 12.1.0.3 to 12.1.0.5, without the presence of a specific TZ environment variable, set the OMS timezone to Europe/Vaduz. I figured it out after searching far and wide, inside the WLS product properties:
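One way to hunt it down (a sketch, assuming the value ends up in a Java system property somewhere inside the middleware home):

    grep -ri "Europe/Vaduz" $MW_HOME 2>/dev/null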

Indeed, that timezone was not present in the timezone table version 4:
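You can verify the missing region and the timezone file version with something like:

    SELECT DISTINCT tzname FROM v$timezone_names WHERE tzname = 'Europe/Vaduz';
    -- no rows selected on this 10g target

    SELECT * FROM v$timezone_file;
    -- shows the timezone file and its version (4 in this case)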

After explicitly setting TZ to Europe/Zurich on the OMS servers and restarting the OMSes, everything was fine again.
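In practice, something along these lines on each OMS server (a sketch; adapt to your shell and paths):

    export TZ=Europe/Zurich
    $OMS_HOME/bin/emctl stop oms -all
    $OMS_HOME/bin/emctl start oms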

HTH

Ludo

My feedback after upgrading EM12c 12.1.0.3 to 12.1.0.5

Today I’ve upgraded EM12c for a customer from the second-to-last version (12.1.0.3) to the latest one (12.1.0.5), and the EM repository from 11.2.0.3 to 12.1.0.2.

The upgrade path was not very easy: EM 12.1.0.3 is not compatible with a 12.1.0.2 repository, and EM 12.1.0.5 requires a mandatory patch for the repository if it is 11.2.0.3 (or an upgrade to 11.2.0.4).

So I’ve done:

  • upgrade of the repository from 11.2.0.3 (in Data Guard configuration) to 11.2.0.4
  • upgrade of the EM from 12.1.0.3 to 12.1.0.5
  • upgrade of the repository from 11.2.0.4 to 12.1.0.2 (in Data Guard configuration), from Solaris to Linux

 

In my case, I was particularly concerned about my customer’s EM topology:

  • two OMS in load balancing
  • console secured with a custom SSL certificate
  • a good amount of targets (more than 800 total targets, more than 500 targets with status)
  • a lot of jobs and custom reports
  • a big, shared central software library
  • many other small customizations: auth, groups, metrics, templates…

I will not bother you with the actual execution steps; every installation may differ, and I strongly recommend reading the upgrade documentation (I know, it’s HUGE 🙁 ).

Just to sum up, the upgrade guide is here: https://docs.oracle.com/cd/E24628_01/upgrade.121/e22625/toc.htm

In my case I had to read carefully chapters 3, 4, 5, 6 and appendixes G and K.

By following every step carefully, I had no problems at all and at the end everything was working correctly: all the targets up, the load balancing working in SSL as expected, the jobs restarted and ran successfully…

It was incredible to see how many operations the OUI performed without raising a single error!!

Ok, it’s not just a Click Next Next Next Next installation, there are a lot of steps to do manually before and afterwards, but still… very good impression.

It took a little more than one hour to upgrade the first OMS (this also upgrades the EM repository) and a little less than 20 minutes to upgrade the second one.

Allow a couple of hours for checking everything beforehand, staging the binaries, taking backups/snapshots and creating restore points… and one hour more for upgrading the central agents and cleaning up the old installations.

About upgrading/moving the repository, check this good post by Maaz Anjum: MIGRATE ENTERPRISE MANAGER 12.1.0.4.0 TO A PDB FROM A NON-CDB. Even if you don’t plan to do it, it’s worth a read.

HTH

Ludo

It’s confirmed. Standard Edition and Standard Edition One are dead.

The first rumors came out on July 3rd, 2015.

After many years of existence, Standard Edition and Standard Edition One will no longer be part of the Oracle Database Edition portfolio.

The short history

Standard Edition has long been the “stepbrother” of Enterprise Edition, with fewer features and no options, but cheaper than EE. I can’t remember when SE was first released. It was before the 2000s, I guess.

In 2003, Oracle released 10gR1. Many new features were released for EE only, but:

– RAC was included as part of Standard Edition

– Standard Edition One was released, with an even lower price and “almost” the same features as Standard Edition.

For a few years, customers had the possibility to get huge savings (but many compromises) by choosing the cheaper editions.

SE One: just two sockets, but with today’s 18-core processors, the possibility to run Oracle on 36 cores (or more?) for less than 12k in licenses.

SE: up to four sockets and the possibility to run on either 72-core servers or RAC clusters totaling 72 cores (max 4 nodes), for less than the price of a 4-core Enterprise Edition deployment.

In 2014, for the first time, Oracle released a new Database version (12.1.0.2) where Standard Edition and SE One were not immediately available.

For months, customers asked: “When will the Oracle 12.1.0.2 SE be available?”

Now the big announcement: SE and SE One will no longer exist. With 12.1.0.2, there’s a new Edition: Oracle Database Standard Edition 2.

You can find more information here:

 

Some highlights

– SE One will no longer exist

– SE is replaced by SE Two, which has a limit of 2 sockets

– SE Two still includes the RAC feature, with a maximum of two single-socket servers.

– Customers with SE on 4 socket nodes (or clusters) will need to migrate to 2 socket nodes (or clusters)

– Customers with SE One should definitely be prepared to spend some money to upgrade to SE Two, which comes at the same price as the old Standard Edition ($17,500 per socket).

– The minimum number of NUP licenses when licensing per named user has been increased to 10 (it was 5 with SE and SE One).

– Each SE2 database can run at most 16 user threads (in RAC, at most 8 per instance); this is limited by the database Resource Manager. It does not prevent customers from using all the cores, in case they want to deploy many databases per server.

 

So, finally, less scalability for the same price tag.

Other bloggers have already written about the behaviour of SE2. The best blog post is, IMO, from Franck Pachot: http://blog.dbi-services.com/oracle-standard-edition-two/

Cheers

Ludo

Block Change Tracking and Duplicate: avoid ORA-19755

If you use Block Change Tracking on your production database and try to duplicate it, there is a good chance that you will encounter this error:
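The failure typically shows up when the duplicate tries to open the auxiliary database, e.g. (path illustrative):

    ORA-19755: could not open change tracking file
    ORA-19750: change tracking file: '/path/on/the/production/server/bct.dbf'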

The problem is caused by the block change tracking file entry that exists in the target controlfile; Oracle can’t find the file because the directory structure on the auxiliary server is different.

After the restore and recovery of the auxiliary database, the duplicate process tries to open the DB, but the BCT file doesn’t exist and the error is thrown.

If you do a quick google search you will find several workarounds:

  • disable the block change tracking after you get the error and manually open the auxiliary instance (this prevents you from getting the duplicate outcome from the RMAN return code)
  • disable the BCT on the target before running the duplicate (this forces your incremental backups to read the whole target database!)
  • Richard Harrison proposed another workaround, you can read more about it here.

There is another workaround that I like more (and that you can also find as comment in Richard’s post):

  • Disable the Block Change Tracking on the auxiliary while it’s doing the restore/recovery (in mounted status)

(This solution doesn’t come from me; as far as I know, the father of this solution is a colleague working at Trivadis.)

You can easily fork a process before running the duplicate command that:

  • loops and checks the auxiliary instance status
  • runs the disable as soon as the auxiliary is mounted

I’ve worked out this script that does the job:
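The original script isn’t reproduced here; a minimal sketch of the idea, assuming the environment points to the auxiliary instance, might look like this:

    #!/bin/bash
    # wait for the auxiliary instance to reach MOUNTED status,
    # then disable BCT before the duplicate tries to open the database
    while true; do
      status=$(echo "select status from v\$instance;" | sqlplus -s / as sysdba \
               | grep -Eo 'STARTED|MOUNTED|OPEN' | head -1)
      if [ "$status" = "MOUNTED" ]; then
        echo "alter database disable block change tracking;" | sqlplus -s / as sysdba
        break
      fi
      sleep 5
    done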

Run it just before the duplicate! E.g. (a sketch, with illustrative file and connection names):
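    # fork the watcher, then start the duplicate
    ./disable_bct_aux.sh &
    rman target sys/...@PROD auxiliary sys/...@AUX cmdfile=duplicate.rman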

HTH

Ludovico

Check the actual ulimits for all the running Oracle instances

I’ve read the recent good post from my friend Rene on Pythian’s blog about how to troubleshoot when user ulimits are different from what is specified in limits.conf:

Quick Tip : Oracle User Ulimit Doesn’t Reflect Value on /etc/security/limits.conf

I would like to add my 2 cents:

Once you fix the problem, you may want to check (and maybe monitor) whether an instance is running with a wrong value (and maybe encounter the famous error message: Linux-x86_64 Error: 23: Too many open files in system).

This single line gives you an overview of all your instances at once:
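The original one-liner isn’t shown here; a sketch in the same spirit (Linux only, reading /proc/<pid>/limits for every pmon process) could be:

    for pid in $(pgrep -f ora_pmon_); do
      echo "== $(ps -p $pid -o cmd=) =="
      grep -E 'Max (open files|processes|stack size)' /proc/$pid/limits
    done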

If you find any wrong values, plan a restart before you encounter any error during peak hours!

Ludo

 

 

RAC Attack! 12c is back to Geneva!

French version here.

After a great success in 2014, RAC Attack! comes back to Geneva!
Set up an Oracle Real Application Clusters 12c environment on your laptop, try advanced configurations or simply take the opportunity to discuss Oracle technology with the best experts in Suisse Romande!
Experienced volunteers (ninjas) will help you address any related issues and guide you through the setup process.

Where? Trivadis office, Chemin Château-Bloch 11, CH1219 Geneva

When? Thursday September 17th, 2015, from 17h00 onwards

Cost? It is a FREE event! It is a community based, informal and enjoyable workshop. You just need to bring your own laptop and your desire to have fun!

Confirmed Ninjas:
Ludovico Caldara
– Oracle ACE, RAC SIG Chair & co-author of RAC Attack
Eric Grancher – OAK Table member & Senior DBA
Jacques Kostic – OCM 11g & Senior Consultant at Trivadis

Limited places! Reserve your seat and T-shirt now!

Agenda:
17.00 – Welcome and T-shirt distribution
17.30 – RAC Attack 12c part I
19.30 – Pizza and Beers! (sponsored by Trivadis)
20.00 – RAC Attack 12c part II
22.00 – Group photo and wrap-up!!

Still undecided? Look at what we did last year!

This event is sold out. No more seats available, sorry! Would you be interested in joining the event next year? Drop me an email!

Smart Bash Prompt for Oracle

If you are an Oracle customer who has several database versions running, you have to deal with scripts that become more and more complex to maintain. Depending on the version or the edition of your database, you may want to run different pieces of code. This forces you to get programmatically more information about your database version and edition (e.g., in order to run either an AWR or a Statspack report, depending on whether your software is Enterprise or Standard Edition).
The most common way to get information about the software is connecting to the database and getting it through a couple of selects. But what if you don’t have any running databases?
The ORACLE_HOME inventory has such information, and you can get it with a short shell function:
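The original function isn’t reproduced here; a minimal sketch of the approach, assuming the usual attribute layout of comps.xml (PATCHSET and COMP entries carrying a VER attribute) and a guessed variable name for the edition, could look like this:

    ohversion() {
      local comps=$ORACLE_HOME/inventory/ContentsXML/comps.xml
      # prefer the PATCHSET entry (pre-11gR2 in-place patchsets),
      # fall back to the base COMP entry
      local ver=$(grep -o 'PATCHSET[^>]*VER="[^"]*"' $comps 2>/dev/null \
                  | head -1 | sed 's/.*VER="\([^"]*\)".*/\1/')
      [ -z "$ver" ] && ver=$(grep -o 'COMP NAME="oracle.server"[^>]*VER="[^"]*"' $comps \
                  | head -1 | sed 's/.*VER="\([^"]*\)".*/\1/')
      # edition: in globalvariables.xml (11g/12c) or context.xml (10g);
      # the variable name searched here is an assumption
      local ed=$(grep -rhio 'InstallEdition[^"]*"[A-Z]*"' $ORACLE_HOME/inventory 2>/dev/null \
                 | head -1 | sed 's/.*"\([A-Z]*\)".*/\1/')
      echo "${ver:-unknown} ${ed}"
    }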

The snippet searches for a patchset entry in comps.xml to get the patch version rather than the base version (for releases prior to 11gR2, where patchsets were applied in place over the base release). If a patchset cannot be found, it looks for the base version. Depending on the major release, the information about the edition is either in globalvariables.xml (11g, 12c) or in context.xml (10g).
When you call this “ohversion” function, you get both the Oracle version and the edition of your current ORACLE_HOME.
If you’re using bash as your user shell, you may want to take it one step further and include this information in a much fancier bash prompt than the default one:
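The exact prompt isn’t shown here; a simplified sketch with the features described below, reusing the hypothetical ohversion function above, might be:

    __prompt_rc() {
      local rc=$?   # exit code of the last command
      # green when zero, red otherwise (readline width caveats ignored here)
      [ $rc -eq 0 ] && echo -e "\033[32m${rc}\033[0m" \
                    || echo -e "\033[31m${rc}\033[0m"
    }
    # note: ohversion runs at every prompt; cache its output if it feels slow
    export PS1='#[\t] $(__prompt_rc) \u@\h:\w [$(ohversion) SID=${ORACLE_SID}]\n# '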

Although this prompt may seem long, it has several advantages that save you a lot of typing:
• The newline character inside the prompt lets you start typing commands on an almost empty line, so you don’t have to worry about how long your command is.
• The full username@host:path can be copied and pasted quickly for scp commands.
• The time inside the square brackets is helpful to track timings.
• The indication of the current environment (version, edition, SID) lets you know which environment you’re working on.
• The leading number is the exit code of the last command ($?). It’s green when the exit code is zero and red for all other exit codes.
• Hash characters before and after the prompt mitigate the risk of copying and pasting the wrong line by mistake inside your session.

Note: this post originally appeared in the IOUG Tips & Best Practices Booklet, 9th edition.

SQL Plan Directives: they’re always good… except when they’re bad!

The new Oracle 12c optimizer adaptive features are just great and work well out of the box in most cases.

Recently, however, I’ve experienced my very first problem with SQL Plan Directives while migrating a database to 12c, so I would like to share it.

Disclaimer 1: this is a specific problem that I found on ONE system. My solution may not fit with your environment, don’t use it if you are not sure about what you’re doing!

Disclaimer 2: despite having had this problem with a single SPD, I like the adaptive features and I encourage you to use them!!

Problem: a query takes less than a second in 11gR2; in 12c it takes 12 seconds or more.

V_TAB_PROP is a very simple view. It just selects a central table “TAB” and then retrieves different properties by joining a property table “TAB_PROP”.

To do that, it does 11 joins on the same property table.
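To make it concrete, a hedged sketch of the view shape (table names from the post, column names illustrative):

    CREATE OR REPLACE VIEW v_tab_prop AS
    SELECT t.tab_id,
           p1.value AS prop1,
           p2.value AS prop2
           -- ... up to prop11
      FROM tab t
      JOIN tab_prop p1 ON p1.tab_prop_id = t.tab_id AND p1.prop_id = 1
      JOIN tab_prop p2 ON p2.tab_prop_id = t.tab_id AND p2.prop_id = 2
      -- ... nine more joins on tab_prop
      ;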

On the property table, TAB_PROP_ID and PROP_ID are unique (together they compose the PK), so nested loops and index unique scans are the best way to get this data.
The table is 1500 MB and the index 1000 MB.

This was the plan in 11g:

In 12c, the plan switches to adaptive, and half of the joins are converted to hash joins / full table scans:

However, the inflection point is never reached. The execution keeps the default plan, which has half of the joins as HJ and the other half as NL.

The problem in this case is the SQL Plan Directive. Why?

There are too many distinct values for TAB_ID and the data is very skewed.

The histogram on that column is OK and it always leads to the correct plan (with the adaptive features disabled).
But there are still some “minor” misestimates, and the optimizer sometimes decides to create a SQL Plan directive:
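You can see the directive and its state with a query like this (a sketch; the object names are the ones of this post):

    SELECT d.directive_id, o.owner, o.object_name, d.type, d.state, d.reason
      FROM dba_sql_plan_directives d
      JOIN dba_sql_plan_dir_objects o ON o.directive_id = d.directive_id
     WHERE o.object_name IN ('TAB', 'TAB_PROP');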

The directive instructs the optimizer to do dynamic sampling, but with such a big and skewed table this is not OK, so the dynamic sampling result is worse than using the histogram. I can check it by simplifying the query to just one join:
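The simplified test might look like this (a sketch, column names illustrative):

    SELECT t.tab_id, p.value
      FROM tab t
      JOIN tab_prop p ON p.tab_prop_id = t.tab_id AND p.prop_id = 1
     WHERE t.tab_id = :b1;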

What’s the fix?

I’ve tried to drop the directive first, but it reappears as soon as there are new misestimates.
The best solution in my case has been to disable the directive, an operation that can be done easily with the DBMS_SPD package:
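A sketch of the call (the directive_id is illustrative):

    BEGIN
      dbms_spd.alter_sql_plan_directive(
        directive_id    => 1234567890123,
        attribute_name  => 'ENABLED',
        attribute_value => 'NO'
      );
    END;
    /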

I did this on a QAS environment.
Because the production system is not migrated to 12c yet, it’s wise to import these disabled directives into production before the optimizer creates and enables them.
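DBMS_SPD provides staging-table routines for exactly this purpose (a sketch; the staging table name and directive_id are illustrative):

    -- on QAS: create a staging table and pack the disabled directive
    EXEC dbms_spd.create_stgtab_directive(table_name => 'SPD_STG', table_owner => 'SYSTEM');

    DECLARE
      n NUMBER;
    BEGIN
      n := dbms_spd.pack_stgtab_directive(table_name   => 'SPD_STG',
                                          table_owner  => 'SYSTEM',
                                          directive_id => 1234567890123);
    END;
    /

    -- move SPD_STG to production (e.g. with Data Pump), then,
    -- once the objects exist there:
    DECLARE
      n NUMBER;
    BEGIN
      n := dbms_spd.unpack_stgtab_directive(table_name  => 'SPD_STG',
                                            table_owner => 'SYSTEM');
    END;
    /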

Of course, the directives can’t be created for objects that do not exist; the import has to be done after the objects are migrated to the 12c version.

Because SQL Plan Directives are tied to specific objects and not specific queries, they can fix many statements at once, but in cases like this one, they can compromise several statements!

Monitoring the creation of new directives is an important task, as it may indicate misestimates or a lack of statistics on one side, or execution plan changes on the other.