In this specific example, it shows the “parse time elapsed”, the “DB time” and the parse/DB time percentage, along with the value of the parameter “optimizer_adaptive_features“. You can use it to check whether changing the parameters related to adaptive dynamic sampling improves the parse time or not.
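A minimal sketch of such a check, assuming access to v$sys_time_model and v$parameter (the column aliases are mine, not the original script's output format):

SELECT p.value AS parse_time_elapsed,
       d.value AS db_time,
       ROUND(100 * p.value / d.value, 2) AS pct_parse_db_time,
       (SELECT value FROM v$parameter
         WHERE name = 'optimizer_adaptive_features') AS optimizer_adaptive_features
  FROM v$sys_time_model p
  JOIN v$sys_time_model d ON d.stat_name = 'DB time'
 WHERE p.stat_name = 'parse time elapsed';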
Losing the Disk Group that contains OCR and voting files has always been a challenge. It requires you to take regular backups of OCR, spfile and diskgroup metadata.
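As a reminder, a minimal sketch of those classic backups could look like this (the diskgroup name GRID and the backup paths are placeholders to adapt):

# As root: take a manual OCR backup and a logical export
ocrconfig -manualbackup
ocrconfig -export /backup/ocr_export.bak

# As the grid owner: locate and back up the ASM spfile, then the diskgroup metadata
asmcmd spget
asmcmd spbackup <spfile_path_from_spget> /backup/spfileASM.bak
asmcmd md_backup /backup/grid_metadata.md -G GRID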
Since Oracle 12cR1, there are a few additional components you must take care of:
– The ASM password file (if you have Flex ASM it can be quite critical)
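For the ASM password file, one possible approach (just a sketch; the backup path is a placeholder) is to locate it with asmcmd and copy it out of the diskgroup:

# As the grid owner: find the ASM password file location and copy it out of ASM
asmcmd pwget --asm
asmcmd pwcopy <asm_password_file_path_from_pwget> /backup/orapwASM.bak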
According to the MOS Note: FAQ: 12c Grid Infrastructure Management Repository (GIMR) (Doc ID 1568402.1), there is no such need for the moment.
Weird, huh? The -MGMTDB itself contains for the moment just the Cluster Health Monitor repository, but expect to see its importance increase with the next versions of Oracle Grid Infrastructure.
If you REALLY want to back it up (even if not fundamental, it is not a bad idea, after all), you can do it.
The -MGMTDB is in noarchivelog mode by default. You need to either put it in archivelog mode (and set a recovery area, etc.) or back it up while it is mounted.
Because the Cluster Health Monitor (ora.crf) depends on it, you have to stop it beforehand:
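What follows is only a sketch of the sequence; the instance name is the 12c default, the backup destination is a placeholder, and the commands should be adapted to your environment (run them with ORACLE_HOME set to the Grid Infrastructure home, on the node hosting -MGMTDB):

# On each node: stop the Cluster Health Monitor
crsctl stop res ora.crf -init

# Stop the management database, then mount it for a consistent backup
srvctl stop mgmtdb
export ORACLE_SID='-MGMTDB'
sqlplus / as sysdba <<EOF
startup mount
exit
EOF
rman target / <<EOF
backup database format '/backup/mgmtdb_%U.bkp';
EOF

# Clean up: stop the instance, restart it through the clusterware, restart ora.crf
sqlplus / as sysdba <<EOF
shutdown immediate
exit
EOF
srvctl start mgmtdb
crsctl start res ora.crf -init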
Now, imagine that you lose the GRID diskgroup (nowadays, with the ASM Filter Driver, it’s more complex to corrupt a device by mistake, but let’s assume that you do it):
Last week I hosted the second Swiss RAC Attack workshop at the Trivadis offices in Geneva. It was a great success, with 21 total participants: 5 Ninjas, 4 alumni and 14 people actively installing or playing with RAC 12c on their laptops.
Last year I was surprised by a participant coming from Nanterre. This year two people came directly from Moscow, just for the workshop!
We had good pizza and special beers: Chimay, Vedett, Duvel, Andechs…
Last but not least, our friend Marc Fielding was visiting Switzerland last week, so he took the opportunity to join us and make the workshop even more interesting! 😀
Looking forward to organizing it again in a year! Thank you guys 🙂
Disclaimer (wow, most of my recent posts start with a disclaimer, I guess it’s bad): this post explains an UNSUPPORTED workaround for a restriction enforced by recent Oracle security checks. You should never use it in production! Forewarned is forearmed.
Before Oracle Database 11.2.0.4, it was possible to create a database link using the following syntax:
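The statement looked roughly like this (the link name, user, TNS alias and the hash string are placeholders):

CREATE DATABASE LINK mylink
  CONNECT TO remote_user IDENTIFIED BY VALUES '<obfuscated_password_hash>'
  USING 'REMOTE_TNS_ALIAS';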
It was possible to get the password hash by either selecting dbms_metadata.get_ddl for the database link or by querying directly the link$ table.
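For example (object and schema names are placeholders, and the link$ column names may differ slightly between releases):

-- DDL of an existing link, including the IDENTIFIED BY VALUES clause
SELECT dbms_metadata.get_ddl('DB_LINK', 'MYLINK', 'MYSCHEMA') FROM dual;

-- Or, as SYS, read the obfuscated password straight from the dictionary
SELECT name, passwordx FROM sys.link$;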
Starting with Oracle 11.2.0.4, Oracle enforces a check that prevents the use of such syntax. Every newly created database link must have the password explicitly set.
This is clearly stated in the MOS note:
ORA-02153: Invalid VALUES Password String When Creating a Database Link Using BY VALUES With Obfuscated Password After Upgrade To 11.2.0.4 (Doc ID 1905221.1)
This is seen as a security enhancement. In my opinion, it also forces you to specify clear-text passwords somewhere in the scripts that create the db links. (You do not create the db links by hand in sql*plus every time you need one, do you?)
The only exception is when using expdp/impdp. If you expdp a schema, the dumpfile contains the password hash and the statement needed to recreate the database link (… identified by values ‘:1’), but Oracle only allows impdp to use such a statement.
So, a simple workaround: just create the database links on a dev/staging environment, export them using expdp and then provide your DBA with the dumpfile so he/she can import it and create the db links.
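A sketch of that workaround, assuming a schema-mode export and the default DATA_PUMP_DIR directory (all names are placeholders):

# On the dev/staging database: export only the database links of the schema
expdp system@srcdb schemas=MYSCHEMA include=DB_LINK directory=DATA_PUMP_DIR dumpfile=dblinks.dmp

# On the destination database: import them; impdp replays the IDENTIFIED BY VALUES statements
impdp system@dstdb directory=DATA_PUMP_DIR dumpfile=dblinks.dmp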
Right? Not always. There is one case where you really need the old syntax.
You don’t know the password
AND
You MUST change the database link name.
As you may know, there is no way to change a database link name (even through impdp, there is no remap_dblink or anything like that).
E.g., you need to keep the db link and intend to use it for a check BUT you want to prevent the application from using it with the old name.
Because I believe that no problem exists that cannot be solved by my Trivadis colleagues, I’ve checked internally. A colleague came up with a dead simple (and unsupported) solution:
After a recent upgrade to Enterprise Manager 12c 12.1.0.5, we noticed that the few Oracle Databases in release 10g were no longer connectable:
On MOS there are a few notes about the error: ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone region not found, the most relevant being Doc ID 1513536.1 and Doc ID 1934470.1.
The problem is due to the time zone table on the target database not containing the timezone requested by the client. But who’s the client? In our case, all the target agents were correctly set to Europe/Zurich, and the timezone table of the target database did contain it:
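A check along these lines confirms it (query only, output omitted):

SELECT tzname, tzabbrev
  FROM v$timezone_names
 WHERE tzname = 'Europe/Zurich';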
The upgrade process of the OMSes from 12.1.0.3 to 12.1.0.5, without the presence of a specific TZ environment variable, set the OMS timezone to Europe/Vaduz. I figured it out after searching far and wide, inside the WLS product properties:
$ pwd
/u01/app/oracle/Middleware_12105/wlserver_10.3
$ cat product.properties
...
USER_TIMEZONE=Europe/Vaduz
...
Indeed, that timezone was not present in the timezone table version 4:
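A quick way to verify this on the 10g target (just a sketch):

-- Version of the timezone file in use
SELECT * FROM v$timezone_file;

-- Europe/Vaduz is not there, hence the ORA-01882 at connect time
SELECT tzname FROM v$timezone_names WHERE tzname = 'Europe/Vaduz';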
Today I’ve upgraded EM12c for a customer from the second-to-last version (12.1.0.3) to the latest one (12.1.0.5), and the EM Repository from 11.2.0.3 to 12.1.0.2.
The upgrade path was not very easy: EM 12.1.0.3 is not compatible with a 12.1.0.2 repository, and EM 12.1.0.5 requires a mandatory patch for the repository if it is 11.2.0.3 (or an upgrade to 11.2.0.4).
So I’ve done:
upgrade of the repository from 11.2.0.3 (in Data Guard configuration) to 11.2.0.4
upgrade of the EM from 12.1.0.3 to 12.1.0.5
upgrade of the repository from 11.2.0.4 to 12.1.0.2 (in Data Guard configuration), from Solaris to Linux
In my case, I was particularly concerned about my customer’s EM topology:
two OMS in load balancing
console secured with a custom SSL certificate
a good amount of targets (more than 800 total targets, more than 500 targets with status)
a lot of jobs and custom reports
a big, shared central software library
many other small customizations: auth, groups, metrics, templates…
I will not bother you with the actual execution steps; every installation may differ, and I strongly recommend reading the upgrade documentation (I know, it’s HUGE 🙁 ).
In my case I had to read carefully chapters 3, 4, 5, 6 and appendixes G and K.
By following every step carefully, I had no problems at all and at the end everything was working correctly: all the targets up, the load balancing working in SSL as expected, the jobs restarted and ran successfully…
It has been incredible to see how many operations the OUI has done without raising a single error!!
Ok, it’s not just a Click Next Next Next Next installation, there are a lot of steps to do manually before and afterwards, but still… very good impression.
It took a little more than one hour to upgrade the first OMS (this also upgrades the EM repository) and a little less than 20 minutes to upgrade the second one.
Allow a couple of hours for checking everything beforehand, staging the binaries, taking backups/snapshots, creating restore points… and one more hour for upgrading the central agents and cleaning up the old installations.
After many years of existence, Standard Edition and Standard Edition One will no longer be part of the Oracle Database Edition portfolio.
The short history
Standard Edition has long been the “stepbrother” of Enterprise Edition, with fewer features and no options, but cheaper than EE. I can’t remember when SE was released; it was before the 2000s, I guess.
In 2003, Oracle released 10gR1. Many new features were released for EE only, but:
– RAC was included as part of Standard Edition
– Standard Edition One was released, with an even lower price and “almost” the same features as Standard Edition.
For a few years, customers had the possibility to get huge savings (but many compromises) by choosing the cheaper editions.
SE ONE: just two sockets, but with today’s 18-core processors, the possibility to run Oracle on 36 cores (or more?) for less than 12k in licenses.
SE: up to four sockets and the possibility to run on either 72-core servers or a RAC composed of a total of 72 cores (max 4 nodes), for less than the price of a 4-core Enterprise Edition deployment.
In 2014, for the first time, Oracle released a new Database version (12.1.0.2) where Standard Edition and SE One were not immediately available.
For months, customers asked: “When will the Oracle 12.1.0.2 SE be available?”
Now the big announcement: SE and SE One will no longer exist. With 12.1.0.2, there’s a new Edition: Oracle Database Standard Edition 2.
– SE is replaced by SE Two, which has a limitation of 2 sockets
– SE Two still includes the RAC feature, with a maximum of two single-socket servers.
– Customers with SE on 4-socket nodes (or clusters) will need to migrate to 2-socket nodes (or clusters)
– Customers with SE One should definitely be prepared to spend some money to upgrade to SE Two, which comes at the same price as the old Standard Edition ($17,500 per socket).
– The minimum number of NUP licenses when licensing per named user has been increased to 10 (it was 5 with SE and SE One).
– Each SE2 database can run a maximum of 16 user threads (in RAC, a maximum of 8 per instance). This is limited by the database Resource Manager. It does not prevent customers from using all the cores, in case they want to deploy many databases per server.
So, finally, less scalability for the same price tag.
If you use Block Change Tracking on your production database and try to duplicate it, there is a good chance that you will encounter this error:
The problem is caused by the block change tracking file entry that exists in the target controlfile: Oracle can’t find the file because the directory structure on the auxiliary server is different.
After the restore and recovery of the auxiliary database, the duplicate process tries to open the DB, but the BCT file doesn’t exist and the error is thrown.
If you do a quick google search you will find several workarounds:
disable the block change tracking after you get the error and manually open the auxiliary instance (this prevents you from getting the duplicate outcome from the RMAN return code); see the sketch after this list
disable the BCT on the target before running the duplicate (this forces your incremental backups to read the entire target database!)
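A minimal sketch of the first workaround, connecting to the auxiliary instance with SQL*Plus after the duplicate fails at the open step:

-- On the auxiliary instance (still mounted after the failed open)
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE OPEN RESETLOGS;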
I’ve read the recent, good post from my friend Rene on Pythian’s Blog about how to troubleshoot when user ulimits are different from what is specified in limits.conf:
Once you fix the problem, you may want to check (and maybe monitor) when an instance is running with a wrong value (and maybe encounter the famous error message: Linux-x86_64 Error: 23: Too many open files in system).
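A quick-and-dirty check on Linux could look like this sketch, reading the effective limits of each pmon process from /proc:

# Show the effective "max open files" limit of every running instance's pmon
for pid in $(pgrep -f ora_pmon); do
  echo "== $(ps -p "$pid" -o args=) =="
  grep "Max open files" /proc/"$pid"/limits
done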
After a great success in 2014, RAC Attack! comes back to Geneva!
Set up an Oracle Real Application Clusters 12c environment on your laptop, try advanced configurations or simply take the opportunity to discuss Oracle technology with the best experts in Suisse Romande!
Experienced volunteers (ninjas) will help you address any related issues and guide you through the setup process.
When? Thursday September 17th, 2015, from 17h00 onwards
Cost? It is a FREE event! It is a community based, informal and enjoyable workshop. You just need to bring your own laptop and your desire to have fun!
Confirmed Ninjas:
Ludovico Caldara – Oracle ACE, RAC SIG Chair & RAC Attack co-author
Eric Grancher – OAK Table member & Senior DBA
Jacques Kostic – OCM 11g & Senior Consultant at Trivadis
Agenda:
17.00 – Welcome and T-shirt distribution
17.30 – RAC Attack 12c part I
19.30 – Pizza and Beers! (sponsored by Trivadis)
20.00 – RAC Attack 12c part II
22.00 – Group photo and wrap-up!!
This event is sold out. No more seats available, sorry! Would you be interested in joining the event next year? Drop me an email!