Where are Extended Data Types stored? (12c)

Oracle has introduced Extended Data Types in 12c. If you still don’t know what they are and how to enable them, refer to the awesome oracle-base blog or the official documentation.

Here, I’ll show where they are stored, and when.

Tables created with extended varchar2 (length >4000)

If the table is created with a varchar2 length above 4000, a LOBSEGMENT and a LOBINDEX segment are also created automatically. But the rows inserted may actually be short enough to be stored inline.
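As a minimal sketch of the kind of test I mean (schema and table names are made up, and max_string_size=EXTENDED is assumed to be already enabled):

sqlplus -s / as sysdba <<'EOF'
-- table with an extended varchar2 column (length > 4000)
create table ludo.ext_test (id number, big_col varchar2(8000));

-- 1000 short rows, well below 4000 bytes
insert into ludo.ext_test
  select rownum, rpad('x', 10, 'x') from dual connect by level <= 1000;
commit;

-- the LOBSEGMENT and LOBINDEX exist right away, even if almost empty
select segment_name, segment_type, blocks
  from dba_segments where owner = 'LUDO';
EOF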

 

As you can see from the previous output, the LOB segments are almost empty after inserting 1000 rows, whereas the table has 24 blocks. Not enough to be proof, though. Let’s try to update the rows to a column size of 1000:
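Something along these lines, using the same hypothetical table:

sqlplus -s / as sysdba <<'EOF'
-- grow every row to ~1000 bytes, still below the inline limit
update ludo.ext_test set big_col = rpad('x', 1000, 'x');
commit;

-- the extra blocks appear in the TABLE segment, not in the LOB segment
select segment_name, segment_type, blocks
  from dba_segments where owner = 'LUDO';
EOF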

 

A-ah! The row size is increasing, but the blocks are actually allocated in the table segment: this proves that the rows are stored inline!

Where’s the break point? When will the rows be migrated to out-of-line storage?
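A rough way to find out, again just a sketch on the hypothetical table:

# grow the rows in small steps and check when the LOB segment starts to grow
for size in 3950 3960 3970 3980; do
  sqlplus -s / as sysdba <<EOF
update ludo.ext_test set big_col = rpad('x', $size, 'x');
commit;
select $size as col_size, segment_name, blocks
  from dba_segments
 where owner = 'LUDO' and segment_type like 'LOB%';
EOF
done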

 

Actually, it’s somewhere between 3960 and 3970 bytes, but it may depend on other factors (I haven’t tested it deeply).

Lesson learned: when you design your table with extended types, you should plan how many rows you expect with a size below 4000, because they can make your row access slower and produce more chained rows than expected.

 

Tables created with standard varchar2 and altered afterward

Let’s try almost the same exercise, but this time starting with a table created with a small length (lower than 4000) and altered afterwards to make use of the extended size.
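A sketch of the setup (again, names are made up):

sqlplus -s / as sysdba <<'EOF'
-- table created with a standard varchar2, well below 4000
create table ludo.std_test (id number primary key, big_col varchar2(100));

insert into ludo.std_test
  select rownum, rpad('x', 10, 'x') from dual connect by level <= 1000;
commit;
EOF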

So far, so good. Let’s try to extend the column:
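For instance:

sqlplus -s / as sysdba <<'EOF'
-- extend the column above 4000
alter table ludo.std_test modify (big_col varchar2(8000));

-- check: no LOBSEGMENT/LOBINDEX are created for this table
select segment_name, segment_type, blocks
  from dba_segments where owner = 'LUDO';
EOF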

The LOB segments have not been created, meaning that the rows are still stored inline. Let’s then try to update all the rows to a size above 4000:
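Again, a sketch:

sqlplus -s / as sysdba <<'EOF'
-- grow every row above 4000 bytes
update ludo.std_test set big_col = rpad('x', 6000, 'x');
commit;

-- the TABLE segment grows and still no LOB segments appear
select segment_name, segment_type, blocks
  from dba_segments where owner = 'LUDO';
EOF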

The blocks have grown in the table segment: the rows are still inline!!

This leads to a huge amount of chained rows!! In this example, 2 blocks per row, but it can be as high as 5 blocks for a single-column varchar2(32767) with db_block_size=8192.

So we need to rebuild the table, right?

WRONG! Even after a rebuild, the extended varchars are stored INLINE if they have been created as standard varchars. You actually need to recreate the table and migrate the data, for example with dbms_redefinition.
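A stripped-down online redefinition sketch (no dependents handled, no checks; the names are the hypothetical ones used above, so review the documentation before doing this for real):

sqlplus -s / as sysdba <<'EOF'
-- the interim table is created DIRECTLY with the extended column,
-- so its varchar2 gets the out-of-line storage from the start
create table ludo.std_test_interim (id number, big_col varchar2(8000));

begin
  dbms_redefinition.start_redef_table('LUDO', 'STD_TEST', 'STD_TEST_INTERIM');
  dbms_redefinition.finish_redef_table('LUDO', 'STD_TEST', 'STD_TEST_INTERIM');
end;
/
EOF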

Migrating to extended data types instead of converting your application to use SecureFiles can be a disaster for the physical layout of your tables, and thus for the performance of your application. Be careful!

 

Update:

Many thanks to Sayan Malakshinov (@xtner) and Franck Pachot (@FranckPachot) for the information 🙂

Oracle Database Backup Logging Recovery Appliance – a preview

Please see the disclaimer at the end of the post.

Oracle announced the new Oracle Database Backup Logging Recovery Appliance at the last Open World 2013, but it has not been released to the market yet, and very little information is available on the Oracle website.

During the last IOUG Collaborate 14, Oracle master product manager of Data Guard and MAA, Larry Carpenter, has unveiled something more about the DBLRA (call it “Debra” to simplify your life 🙂 ), and I’ve had the chance to discuss it directly with Larry.

At Trivadis we think that this appliance will be a game changer in the world of backup management.

Why?

Well, if you have ever worked for a big company with many hundreds of databases, you have certainly encountered many of these common problems:

  • Oracle Backup and restore penalized by a shared infrastructure
  • Poor backup or restore performance
  • Tape drives busy when you need them urgently
  • Complex management of backup retentions

That’s not all. As of now, your best recovery point in case of restore is directly related to how frequently you back up your archived logs. Oh yes, you can lower your archive_lag_target parameter and increase your log switch frequency (and thus the I/O), and still have… 10, 15, 30 minutes of possible data loss? Unless you protect your transactions with Data Guard. But this will cost you money: for the additional server and storage, for the licenses, and for the effort required to put in place a Data Guard instance for every database that you want to protect. You want to protect your transactions from a storage failure, and there’s a price to pay.

The Database Backup Logging Recovery Appliance (wow, I need to copy and paste the name to save time! :-)) overcomes these problems with a simple but brilliant idea: leveraging the existing redo log transport processes and shipping the redo stream directly to the backup appliance (the DBLRA, of course) or to its cloud alter ego, hosted by Oracle.

DBLRA

As you can infer from the picture, 12c databases will work natively with the appliance, while previous releases will have a plugin that will enable all the capabilities.

Backups can be mirrored selectively to another DBLRA, or copied to the cloud or to a 3rd party (Virtual) Tape Library.

The backup retention is enforced by the appliance, and expiration and deletion are done automatically using the embedded RMAN catalog.

Lightning fast backups and restores are guaranteed by the hardware: DBLRA is based on the same hardware used by Exadata, with High Capacity disks. Optional storage extensions can be added to increase the capacity, but all the data, as I’ve said, can be offloaded to VTLs in order to use cheaper storage for older backups.

To sum up, the key values are:

  • No transaction loss!!
  • Lightning fast backups and restores
  • Integrated, Oracle engineered, scalable solution for hundreds to thousands of databases

Looking forward to seeing it in action!

I cannot cover all the information I have in a single post, but Trivadis is working actively to be ready to implement it at the time of the launch to the market (estimated in 2014), so feel free to contact me if you are interested in boosting your backup environment. 😉

By the way, I expect that competitors (IBM, Microsoft?) will try to develop a solution with the same characteristics in terms of reliability, or they will lose ground.

Cheers!

Ludovico

Disclaimer: This post is intended to outline Oracle’s general product direction based on the information gathered through public conferences. It is intended for informational purposes only. The development and release of these functionalities and features including the release dates remain at the sole discretion of Oracle and no documentation is available at this time. The features and commands shown may or may not be accurate when the final product release goes GA (General Availability).
Please refer to the Oracle documentation when it becomes available.

Oracle Multitenant and custom DBCA templates

Today I’ve encountered an annoying issue while adapting a few scripts for automatic database creation. I’m tracking it here in the hope that it may save someone a few hours of troubleshooting…

I’ve used the DBCA to prepare a new template:

 

Then I’ve continued by customizing other options, including init parameters and datafile/logfile paths. Finally, I’ve saved it as a new template instead of creating the database.

I’ve checked the resulting .dbc and seen that there was, as expected, the parameter “enable_pluggable_database”=”true”.

Then I’ve moved the template file to my $HOME directory and tested the silent database creation with the option “-createAsContainerDatabase true”:
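A hedged example of the kind of call I mean (template name, database names and passwords are placeholders):

dbca -silent -createDatabase \
  -templateName /home/oracle/my_cdb_template.dbc \
  -gdbName CDBTEST -sid CDBTEST \
  -createAsContainerDatabase true \
  -sysPassword manager1 -systemPassword manager1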

The database configuration completed successfully, without errors. I’ve accessed my new container, and I’ve been surprised by seeing:

In fact, there were no pdb$seed datafiles:
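The checks were along these lines:

sqlplus -s / as sysdba <<'EOF'
-- CDB = NO: the database has been created as a NON-CDB
select name, cdb from v$database;

-- and, consequently, no PDB$SEED datafiles are listed
select con_id, name from v$datafile order by con_id;
EOF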

After a little investigation, I’ve found these lines in the dbca trace.log:

 

Then I’ve struggled with dbca and templates a few times before finding that, actually, the magic “enable pluggable database” is done by dbca only if the template file name is not customized.

Running the exact same command with the very same template file, but renamed to $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc, actually works (notice the diff at the first line):

 

 

I’ve also tried to cheat and use a symlink to my previous custom template, and surprisingly, it still works:
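Something like this (a guess at the trick, with my hypothetical template name):

cd $ORACLE_HOME/assistants/dbca/templates
# keep a copy of the original template, then point the standard name to the custom one
mv General_Purpose.dbc General_Purpose.dbc.orig
ln -s /home/oracle/my_cdb_template.dbc General_Purpose.dbc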

In the dbca trace log the message saying that the DB will be NON-CDB disappears:

So the problem is really caused by the different filename/location of the template.

IMHO it’s a kind of bug: the decision between a CDB and a non-CDB should not be taken by DBCA. Moreover, it’s not based on the content of the template, which would be logical. But today I’m late and lazy, so I will not open an SR for this.

:-/

RAC Attack 12c at Collaborate 14 and advanced labs

I’ve just published an advanced lab on SlideShare that RAC Attack attendees may do at Collaborate this year, instead of just doing the basic 2-node RAC installation on their laptop.

We’ll also offer an advanced lab about Flex Clusters and Flex ASM (written by Maaz Anjum). Moreover, I’m working on an additional lab that allows you to implement a multi-node RAC by using VirtualBox linked clones and GI Home clones, as I’ve shown in my previous post.

RAC Attack at #C14LV will be fun again. We’ll have a few t-shirts for the attendees, designed by me and Chet “Oraclenerd” Justice, and kindly sponsored by OTN.

The workshop will end with beers and snacks (again, thank you OTN for sponsoring this :-)).


 

If you’re planning to attend Collaborate, join us and start your conference week in a good mood 🙂

 

Multinode RAC 12c cluster on VirtualBox using linked clones

Recently I’ve had to install a four-node RAC cluster on my laptop in order to do some tests. I have found an “easy” (well, easy… it depends), fast and space-efficient way to do it, so I would like to write it down.

The quick step list

  • Install the OS on the first node
  • Add the shared disks
  • Install the clusterware in RAC mode on the first node only
  • Remove temporarily the shared disks
  • Clone the server as linked clone as many times as you want
  • Reconfigure the new nodes with the new IP addresses and names
  • Add back the shared disks on the first node and on all other nodes
  • Clone the GI + database homes in order to add them to the cluster

Using this method the Oracle binaries (the most space consuming portion of the RAC installation) are installed and allocated on the first node only.

The long step list

Actually you can follow many instruction steps from the RAC Attack 12c book.

  • Review the HW requirements, but leave at least 3 GB of RAM for each guest plus 2 GB more for your host (you may try with less RAM, but everything will slow down).
  • Download all the SW components; additionally, you may download the latest PSU (12.1.0.1.2) from MOS.
  • Prepare the host and install Linux on the first node. When configuring the OS, make sure you enter all the required IP addresses for the additional nodes. RAC Attack has two nodes, collabn1 and collabn2; add as many nodes as you want to configure. As an example, the DNS config may have four nodes:
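A purely illustrative /etc/hosts-style fragment for the public network and VIPs (the addresses and domain are made up; adapt them to the RAC Attack network layout):

# public interfaces (illustrative addresses)
192.168.78.51  collabn1.racattack  collabn1
192.168.78.52  collabn2.racattack  collabn2
192.168.78.53  collabn3.racattack  collabn3
192.168.78.54  collabn4.racattack  collabn4
# virtual IPs
192.168.78.61  collabn1-vip.racattack  collabn1-vip
192.168.78.62  collabn2-vip.racattack  collabn2-vip
192.168.78.63  collabn3-vip.racattack  collabn3-vip
192.168.78.64  collabn4-vip.racattack  collabn4-vip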

At this point, the procedure starts differing from the RAC Attack book.

  • Go to the VirtualBox VM settings and delete all the shared disks.
  • Clone the first server as a linked clone (right-click, Clone, choose the name, flag “Linked Clone”) as many times as the number of additional servers you want.


 

  • By using this method, the new servers will use the same virtual disk file as the first server, and a second file will be used to track the differences. This saves a lot of space on disk.
  • Add back the shared disks to all the servers.
  • Start the other nodes and configure them following the RAC Attack instructions again.
  • Once all the nodes are configured, the GI installation has to be cleaned out on all the cloned servers using these guidelines:

  • Then, on each cloned server, run perl clone.pl to clone the GI home (see the sketch after this list), changing LOCAL_NODE accordingly (note: the GI Home name must be identical to the one specified in the original installation!):

  • Then, on the first node (which you have started, and on which you have reactivated the clusterware stack with crsctl enable crs / crsctl start crs ;-)), run this command to add the new nodes to the definition of the cluster:

 

  • From the first server, copy these files to all the other nodes:

  • Then clone also the DB home (again, run it on each new server and specify the same DB home name that you have used in the original installation; the second part of the sketch after this list shows the idea):

  • On each new node, also run the updateNodeList step and the DB root.sh command to update the node list for the DB home:

  •  and finally, run the GI root.sh on each new node to finalize their inclusion in the cluster!! 🙂

 

  • As a result, you should be able to see all the cluster resources started correctly on all the nodes.
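As a hedged sketch of the two clone.pl calls mentioned above (paths, home names and node names are examples; LOCAL_NODE must be adapted on every node, and you should check the cloning documentation for the exact parameter syntax of your version):

# --- on each cloned server, clone the GI home ---
export ORACLE_HOME=/u01/app/12.1.0/grid
perl $ORACLE_HOME/clone/bin/clone.pl \
  ORACLE_BASE=/u01/app/grid \
  ORACLE_HOME=$ORACLE_HOME \
  ORACLE_HOME_NAME=OraGI12Home1 \
  INVENTORY_LOCATION=/u01/app/oraInventory \
  "CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}" \
  "LOCAL_NODE=collabn2" \
  CRS=TRUE

# --- then clone the DB home the same way, with the DB home name and path ---
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
perl $ORACLE_HOME/clone/bin/clone.pl \
  ORACLE_BASE=/u01/app/oracle \
  ORACLE_HOME=$ORACLE_HOME \
  ORACLE_HOME_NAME=OraDB12Home1 \
  "CLUSTER_NODES={collabn1,collabn2,collabn3,collabn4}" \
  "LOCAL_NODE=collabn2"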

 

I know it seems a little complex, but if you have several nodes this is dramatically faster than the standard installation and also the space used is reduced. This is good if you have invested in a high-performance but low-capacity SSD disk like I did :-(.

Hope it helps. I paste here the official documentation links that I’ve used to clone the installations; the other steps are my own work.

References

 

Some notes about Grid Infrastructure PSU 12.1.0.1.2

I’m creating a new 12c RAC environment from scratch, and I just want to track a few notes (primarily for my personal use ;-)) about this PSU.

The OPatch utility bundled with the GI does not contain emocmrsp, so it is necessary to install the latest OPatch (patch 6880880).

The patching process can patch both GI and RAC homes at once, but if you don’t have a valid database registered, an error is raised:

So you need to patch the Oracle Homes individually if it’s a new installation.

Remember that:

  • The patch must be unzipped by the oracle/grid user in a directory readable by oracle and root (or it will fail with “Argument(s) Error… Patch Location not valid” or other funny errors, such as permission denied in the middle of the patch process)
  • Must be applied by the root user
  • Must be applied individually and on every node, one node at a time.
  • The opatchauto executable must belong to one of the Oracle Homes you’re patching (so if you patch GI and RAC separately, you have to use the corresponding opatchauto).

Grid and RAC homes:
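A hedged sketch of patching the two homes separately, run as root, with placeholder home paths and staging directory:

# Grid Infrastructure home only
/u01/app/12.1.0/grid/OPatch/opatchauto apply /u01/stage/<psu_patch_dir> \
  -oh /u01/app/12.1.0/grid

# RAC (database) home only, using its own opatchauto
/u01/app/oracle/product/12.1.0/dbhome_1/OPatch/opatchauto apply /u01/stage/<psu_patch_dir> \
  -oh /u01/app/oracle/product/12.1.0/dbhome_1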

Cheers

Ludovico

Speaker and Ninja at Collaborate14 – #C14LV

COLLABORATE 14 IOUG Forum

This year I will have the honor to present at Collaborate 14, from April 7th to 11th. First of all, many thanks to Trivadis, which has kindly agreed to send me to the conference.

My session (#603):
Oracle Data Guard 12c: Real-Time Cascade, Far Sync Instances and other goodies
has been accepted, so if you plan to attend Collaborate, I will be glad to see you there!
My paper and presentation are ready, but I’ll wait until after the conference before publishing them. Meanwhile, you can get a little sneak peek of my live demo (I’ll cut something, somewhere, but my new SSD disk should reduce the elapsed time; I have to do it again with the new hardware to get correct timings 🙂 ). There’s no audio, since it’s supposed to be my failover demo in case I have problems during my session.

Part I


Part II

I’ve submitted another abstract about Policy Managed Databases, but it has been put on the waiting list, presumably because Data Guard has a lot more users and the interest in the new Data Guard 12c features will be higher than in PMDBs, which are rarely used in production environments (and I’m sad about it; keep in touch if you want to know more about this great technology).

 

RAC Attack 12c!

I’ll be organizing RAC Attack again, along with Seth Miller, Yury Velikanov and Kamran Agayev. Sharing this exciting role with an Oracle ACE and two ACE Directors makes me proud of what I’m doing, but more than that, I’m happy to repeat another exciting experience like the one I had at OOW13.

This year RAC Attack will be an official pre-conference workshop. We have been contacted directly by the IOUG, and we’re making improvements. We’ll install RAC 12c, discuss advanced topics, have a lot of fun, drink a beer together and jump a lot! 🙂

Other mentors at the workshop will be Leighton Nelson, Maaz Anjum, Biju Thomas. You should know them already, so join us!

And don’t forget: register before February 12th, so you can take advantage of the early bird discount!


Ludovico

Removing passwords from Oracle scripts: Wallets and Proxy Users


Very often I encounter customers that include Oracle account passwords in their scripts in order to connect to the database.
For DBAs, things are sometimes easier when they run scripts locally using the oracle account, since “connect / as sysdba” usually does the job, even with some security concerns. But what if we need other users, or we need to connect remotely?
For a long time now, Oracle has supplied secure wallets and proxy authentication. Let’s see what they are and how to use them.

Secure Wallet
Secure wallets are managed through the mkstore tool. They allow you to store a username and password in a secure wallet accessible only by its owner. The wallet is then accessed by the Oracle Client to connect to a remote database, meaning that you DON’T HAVE to specify any username and password!

Let’s see how to implement this the quick way:

Create a directory that will contain your wallet:
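For example (the path is arbitrary):

mkdir -p /home/oracle/wallet
chmod 700 /home/oracle/wallet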

Create the wallet, use an arbitrary complex password to protect it:
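A sketch, assuming the directory above:

# mkstore prompts for the new wallet password
mkstore -wrl /home/oracle/wallet -create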

Imagine that you have a user created with a very complex password:
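For instance (hypothetical user and password):

sqlplus -s / as sysdba <<'EOF'
create user batch identified by "V3ry_c0mpl3x_Passw0rd#2014";
grant create session to batch;
EOF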

Then you need to insert these credentials, including the connect string, into the wallet.
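With mkstore, the credential is added like this (alias, user and password are the hypothetical ones above):

# syntax: mkstore -wrl <wallet_dir> -createCredential <connect_string> <username> <password>
mkstore -wrl /home/oracle/wallet -createCredential PROD batch "V3ry_c0mpl3x_Passw0rd#2014"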

Keep in mind that you can have multiple credentials in the wallet for different remote descriptors (connect strings), but if you want many credentials for the very same connect string you need to create different wallets in different directories.

Now you need to tell your Oracle Client to use that wallet by using the wallet_location parameter in your sqlnet.ora, so you need to have a private TNS_ADMIN:
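A minimal sqlnet.ora for the private TNS_ADMIN might look like this (directories are examples; the PROD alias must also resolve there, e.g. through a tnsnames.ora):

export TNS_ADMIN=/home/oracle/tns_admin
cat > $TNS_ADMIN/sqlnet.ora <<'EOF'
WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /home/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE
EOF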

If everything’s alright, you should be able to connect to the database PROD as the batch user, without specifying any username or password.
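For example:

# no username, no password: the client picks the credentials up from the wallet
sqlplus /@PROD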

Attention: when mkstore modifies the wallet, only clients of the same or a later version will be able to access it, so if you access your wallet with an 11g client you shouldn’t modify the wallet with a 12c version of mkstore. This is not documented by Oracle, but you can infer it from different “not a bug” entries on MOS 🙂

Proxy Users
You can push things a little further and hook your wallet to a proxy user, in order to connect as arbitrary users. That is, a proxy user is entitled to connect to the database on behalf of other users. In this example, we’ll see how, through the batch account, we can connect as OE, SH or HR:
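The required grants are of this form:

sqlplus -s / as sysdba <<'EOF'
alter user oe grant connect through batch;
alter user sh grant connect through batch;
alter user hr grant connect through batch;
EOF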

Now I can specify with which user I want to work on the DB, connect to it through the batch account, without specifying the password thanks to the wallet:
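For example, still letting the wallet provide the batch credentials:

# connect as HR through the BATCH proxy account; the password comes from the wallet
sqlplus '[hr]/@PROD'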

 

See how easy it is? But don’t forget to keep your wallet secure using Unix/Windows permissions!

Oracle Instances and real memory consumption on Linux and Solaris

Is there a way to know the REAL memory usage of an Oracle instance, including all connected processes, using the shell rather than a connection to Oracle?

The short answer is “I think so” 🙂

Summing up the RSS column from the ps output is not reliable, because Linux uses copy-on-write on process forks and also doesn’t correctly take into account shared memory and other shared allocations.

I’ve come across this post on Pythian’s Blog from Marc Billette.

While it seems good, I’ve had discordant results depending on the platform and release.

Instead, I’ve tried to create a shell snippet that still uses pmap but works differently, and it SEEMS to work correctly on both Linux and Solaris.

Basically, using the pmap script I get a lot of information about the different memory areas allocated to the process:

 

Initially I’ve tried to correctly decode the different kinds of memory, the same way other scripts I’ve found online do:

but in the end the ADDRESS is the same across different processes when the memory area is shared, so my script now just takes a unique line for each address and sums up the memory size (not the RSS one!):
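A hedged reconstruction of the idea (the process detection is simplified and the pmap output parsing may need adjusting per platform):

#!/bin/bash
# Sum the SIZE (not RSS) of every DISTINCT mapping address across all processes
# of one instance, so shared areas are counted only once.
ORACLE_SID=$1

# heuristic: background processes are named ora_xxxx_SID, dedicated servers oracleSID
pids=$(ps -eo pid,args | awk -v sid="$ORACLE_SID" \
       '$0 ~ ("ora_.*_" sid) || $0 ~ ("oracle" sid) {print $1}')

for pid in $pids; do
  pmap "$pid" 2>/dev/null
done | awk -v sid="$ORACLE_SID" '
  # pmap lines look like: 0000000060000000  20480K rwxs-  [ shmid=... ]
  $2 ~ /^[0-9]+K$/ {
    if (!($1 in seen)) { seen[$1] = 1; total += $2 + 0 }
  }
  END { printf "Approximate total virtual memory for %s: %d MB\n", sid, total / 1024 }
'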

This should give the total virtual memory allocated by the different Oracle instances.

The results I get are plausible both on Linux and Solaris.

Example:

If you find any error let me know and I’ll fix the script!

Ludovico

Regular TNS-12508 critical alerts in EM12c

Yesterday I came across a small request from a customer.
They were REGULARLY receiving critical alerts in EM12c from some listeners due to error TNS-12508.

The facts:

  • only 10g listeners were affected
  • every day, only one occurrence of the error and always at the same time on a named host
  • no apparent correlations between times on different hosts

I’ve analyzed the log to see the error.

 

Notice that after the error there are two requests: show log_directory and show trc_directory.
So I’ve supposed that it’s an additional request failing on 10g listeners but not on 11g listeners.
The “help” command of the 10g and 11g releases shows that the two releases have some different commands. One of them is “show oracle_home”, which was introduced in 11g.

  • First I’ve searched for scheduled scripts (the customer literally has a huge library of scripts run against the databases to automate maintenance tasks)
  • Then I’ve asked the team that manages an automatic discovery tool that feeds the CMDB
  • Finally, I’ve come across this note on Metalink that explains the error:

Repetitive TNS-12508 Errors logged for a listener target after upgrade to 12.1.0.3 DB plugin or higher (Doc ID 1596633.1)

I’ve applied the change to the metrics in EM12c to ignore the error for 10g listeners.

Ludovico