Bash tips & tricks [ep. 7]: Cleanup on EXIT with a trap

This is the seventh episode of a small series.


Pipes, temporary files, lock files, processes spawned in the background, rows inserted in a status table that need to be updated… everything needs to be cleaned up when the script exits, even when the exit condition is not triggered inside the script.


The worst practice is, of course, to forget to clean up the temp files altogether, leaving the output and temporary directories full of *.tmp, *.pipe, *.lck files, etc. I will not show the code because the list of bad practices is quite long…

Better than forgetting to clean up, but still very bad, is to clean up everything just before triggering the exit command (in the following example, F_check_exit is a function that exits the script if the first argument is non-zero, as defined in the previous episode):
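The pattern looks more or less like this (a sketch: file names and commands are made up, F_check_exit is the function described above):

```bash
tmpfile=/tmp/myscript.$$.tmp
lockfile=/tmp/myscript.lck
touch "$lockfile"

cp /data/export.dmp /staging/ 2> "$tmpfile"
ret=$?
# the same cleanup code is duplicated before every possible exit point
[ $ret -ne 0 ] && rm -f "$tmpfile" "$lockfile"
F_check_exit $ret "copy of export.dmp"

gzip /staging/export.dmp 2> "$tmpfile"
ret=$?
[ $ret -ne 0 ] && rm -f "$tmpfile" "$lockfile"
F_check_exit $ret "compression of export.dmp"
```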

A better approach would be to put all the cleanup tasks in a Cleanup() function and then call this function instead of duplicating the code everywhere:
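For example (the same sketch as above, refactored):

```bash
Cleanup () {
  # remove temporary and lock files, kill background processes, etc.
  rm -f "$tmpfile" "$lockfile"
}

cp /data/export.dmp /staging/ 2> "$tmpfile"
ret=$?
[ $ret -ne 0 ] && Cleanup
F_check_exit $ret "copy of export.dmp"
```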

But still, I need to make sure that I insert this piece of code everywhere. Not optimal yet.

I could call the Cleanup function from inside the F_check_exit function, but then I have two drawbacks:
1 – I need to define the Cleanup function in every script that sources my include file
2 – there will still be exit conditions that are not trapped


The good approach would be to trap the EXIT signal with the Cleanup function:
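In its simplest form, building on the same example (a minimal sketch):

```bash
Cleanup () {
  rm -f "$tmpfile" "$lockfile"
}
# Cleanup runs whenever the script exits, whatever the reason
trap Cleanup EXIT

cp /data/export.dmp /staging/ 2> "$tmpfile"
F_check_exit $? "copy of export.dmp"   # no explicit cleanup needed anymore
```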

Much better! But what if my include script has some logic that also creates some temporary files?

I can create a global F_Cleanup function that executes the local Cleanup function, if one is defined. Let me show this:

Include script:
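A possible sketch of the include file (the logic is the point here, the details are assumptions):

```bash
# temporary file used by the include script itself (just an example)
global_tmpfile=$(mktemp)

# global cleanup function, valid for every script sourcing this file
F_Cleanup () {
  rm -f "$global_tmpfile"
  # run the script-local Cleanup function only if it has been defined
  if declare -F Cleanup > /dev/null ; then
    Cleanup
  fi
}

trap F_Cleanup EXIT
```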

Main script:
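And the corresponding main script (again, a sketch with a hypothetical include path):

```bash
#!/bin/bash
. /u01/app/scripts/include_file.sh

# script-local cleanup, picked up by F_Cleanup at EXIT
Cleanup () {
  rm -f "$my_tmpfile"
}

my_tmpfile=$(mktemp)
# ... the real work of the script goes here ...
```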

The Cleanup function will be executed only if defined.

No Cleanup function? No worries: the F_Cleanup function can still do some global cleanup that is not specific to the main script.

Bash tips & tricks [ep. 6]: Check the exit code

This is the sixth episode of a small series.


Every command in a script may fail due to external reasons. Bash programming is not functional programming! 🙂

After running a command, make sure that you check the exit code and either raise a warning or exit with an error, depending on how a failure can impact the execution of the script.


The worst example is not to check the exit code at all:
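For example (hypothetical commands):

```bash
# if the copy fails, the script keeps going and works on a missing or partial file
cp /data/export.dmp /backup/
gzip /backup/export.dmp
```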

The next one is better, but you may have a lot of additional code to type:
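Something along these lines (a sketch; the helper functions are the ones mentioned just below):

```bash
cp /data/export.dmp /backup/
if [ $? -eq 0 ] ; then
  eok "copy of export.dmp"
else
  eerror "copy of export.dmp failed"
  Log_Close
  exit 1
fi
```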

Again, Log_Close, eok, eerror, etc. are functions defined using the previous Bash tips & tricks in this series.


Define the check functions once, then use them after every command:
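A minimal sketch, consistent with the way F_check_exit is used in the next episode (it exits the script if the first argument is non-zero); the warning variant is my own assumption:

```bash
# log a warning but let the script continue
F_check_warn () {
  if [ "$1" -ne 0 ] ; then
    ewarn "$2 (exit code $1)"
  else
    eok "$2"
  fi
}

# exit the script if the first argument (an exit code) is non-zero
F_check_exit () {
  if [ "$1" -ne 0 ] ; then
    eerror "$2 (exit code $1)"
    Log_Close
    exit "$1"
  else
    eok "$2"
  fi
}

# usage:
cp /data/export.dmp /backup/
F_check_exit $? "copy of export.dmp"
```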


Bash tips & tricks [ep. 5]: Write the output to a logfile

This is the fifth episode of a small series.


Logging the output of the scripts to a file is very important. There are several ways to achieve it; I will just show one of my favorites.


You can log badly from inside the script to a log file:
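For example, by redirecting every single command to a hard-coded file:

```bash
echo "Starting backup" >> /tmp/myscript.log
tar czf /backup/data.tgz /data >> /tmp/myscript.log 2>&1
echo "Backup done" >> /tmp/myscript.log
```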

or by crudely redirecting the standard output of the whole script:
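That is, leaving the burden (and the logfile naming) to whoever runs the script:

```bash
./myscript.sh > /tmp/myscript.log 2>&1
```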


My favorite solution is to automatically open a pipe that receives the standard output and redirects it to the logfile. With this solution, I can programmatically define the logfile name inside the script (based on the script name and input parameters, for example) and forget about redirecting the output every time I run a command.
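A minimal sketch of the idea (the function and variable names here are my own assumptions; einfo comes from the logging functions of episode 4, see the note below):

```bash
# logfile name built from the script name and the current timestamp
logdir=/var/log/scripts                                   # hypothetical location
logfile=${logdir}/$(basename "$0" .sh)_$(date +%Y%m%d_%H%M%S).log

F_open_log () {
  # keep a copy of the original stdout/stderr, then pipe everything
  # through a process substitution that appends to the logfile
  exec 3>&1 4>&2
  exec > >(tee -a "$logfile") 2>&1
  einfo "logging to $logfile"
}

# a -Z flag (see below) can be used to skip the logging setup altogether
[ "$1" != "-Z" ] && F_open_log
```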

(*) the functions edebug, einfo, etc. have to be created using the guidelines from this post: Bash tips & tricks [ep. 4]: Use logging levels

The -Z parameter can be used to intentionally avoid logging.

Again, all this stuff (function definitions and variables) should be put in a global include file.

If I execute it, all the output ends up in the logfile without any extra redirection on the command line.


Bash tips & tricks [ep. 4]: Use logging levels

This is the fourth episode of a small series.


Support different logging levels natively in your scripts so that your code will be more stable and maintainable.




Nothing to invent here: there are already a few blog posts around about best practices for log messages. I personally like the one from Michael Wayne Goodman.

I have reused his code in my scripts with very few modifications to fit my needs:
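The core of it looks roughly like this (a sketch of the idea, not Goodman's verbatim code; the ${ecol_*} colour variables are the shortcuts from episode 3):

```bash
verbosity=3   # 1=error, 2=warning, 3=info, 4=verbose, 5=debug

# print the message only if its level fits within the current verbosity
elog () {
  local level=$1 ; shift
  if [ "$verbosity" -ge "$level" ] ; then
    echo -e "$(date '+%Y-%m-%d %H:%M:%S') $*"
  fi
}

eerror   () { elog 1 "${ecol_red}ERROR:${ecol_none} $*" >&2 ; }
ewarn    () { elog 2 "${ecol_yellow}WARNING:${ecol_none} $*" ; }
einfo    () { elog 3 "$*" ; }
eok      () { elog 3 "${ecol_green}OK:${ecol_none} $*" ; }
everbose () { elog 4 "$*" ; }
edebug   () { elog 5 "${ecol_blue}DEBUG:${ecol_none} $*" ; }
```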

The edumpvar function is handy to dump the status of several variables at once:
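A possible implementation (a sketch):

```bash
# dump name and value of every variable passed by name
edumpvar () {
  local var
  for var in "$@" ; do
    edebug "$var=${!var}"
  done
}

# usage:
edumpvar ORACLE_SID ORACLE_HOME ORACLE_BASE
```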

If you couple the verbosity level with input parameters, you can have something quite clever (e.g. -s for silent, -V for verbose, -G for debug). I’m putting everything into one single snippet just as an example but, as you can imagine, you should really put all the fixed variables and functions inside an external file that you systematically include in your scripts:
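The option-parsing part could look like this (a sketch, to be combined with the functions above):

```bash
verbosity=3                 # default: info
while getopts "sVG" opt ; do
  case $opt in
    s) verbosity=1 ;;       # silent: errors only
    V) verbosity=4 ;;       # verbose
    G) verbosity=5 ;;       # debug
  esac
done
shift $((OPTIND-1))

einfo  "this is displayed by default"
edebug "this is displayed only with -G"
```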





It does not take into account the output file. That will be part of the next tip 🙂

Bash tips & tricks [ep. 3]: Colour your terminal!

This is the third episode of a small series.


The days of monochrome green-on-black screens are over; even in a remote shell terminal you can have something fancier!




Define a series of variables as shortcuts for colour escape codes; there are plenty of examples on the internet.
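For example (just a few of them; the variable names are arbitrary):

```bash
# colour escape codes (echo needs -e to interpret them, see below)
ecol_red="\033[0;31m"
ecol_green="\033[0;32m"
ecol_yellow="\033[0;33m"
ecol_blue="\033[0;34m"
ecol_none="\033[0m"       # back to the default colour

echo -e "${ecol_red}ERROR:${ecol_none} something went ${ecol_yellow}wrong${ecol_none}"
```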

Use them whenever you need to highlight the output of a script, and possibly integrate them in a smart prompt (like the one I’ve blogged about some time ago).

The echo builtin command requires -e in order to make the colours work. When reading files, cat works as-is, less requires -r, and vi may work with some hacking, but it’s not worth spending too much time on it, IMHO.

Bash tips & tricks [ep. 2]: Have a smart environment for personal accounts

This is the second episode of a small series.


The main technical account (oracle here) usually has a smart environment, with aliases, scripts available at your fingertips, correct environment variables and functions.

When working with personal accounts, it may be boring to set up the environment at each login, copy it from a golden copy or reinvent the wheel every time.




Distribute a standard .bash_profile that calls a central profile script valid for all the users:
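For example (the central path is hypothetical):

```bash
# ~/.bash_profile distributed to every personal account
if [ -f /u01/app/oracle/local/common_profile.sh ] ; then
  . /u01/app/oracle/local/common_profile.sh
fi
```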

Make your common environment as smart as possible. If any commands need to be run differently depending on the user (oracle or not oracle), just use a simple if:
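A sketch (the alias and the sudo rule are just examples):

```bash
# inside the common profile: same alias, different behaviour per user
if [ "$(whoami)" = "oracle" ] ; then
  alias sqldba='sqlplus / as sysdba'
else
  alias sqldba='sudo -u oracle sqlplus / as sysdba'
fi
```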

The goal, of course, is to save as much typing as possible and let all your colleagues profit from the smart environment.

How cold incremental recovery saved me once

UPDATE: In the original version I was missing a few keywords: “incremental level 0” for the base backup and “resetlogs” at the database open. Thanks Gregorz for your comments.

Sorry for this “memories” post, but the technical solution at the end is worth the read, I hope 😉

Back in 2010, I was in charge of a quite complex project and faced some difficulties that led me to recover a database in a different manner. A few years have passed, but I have used the same procedure again many times with full satisfaction… I think it’s worth publishing now.

But first, let me introduce the project details and the problem.


Scope of the project

Transport a >1TB RAC database from AIX 5 on P6 to AIX 6 on P7, from a third-party datacenter in southern Italy to our main datacenter in northern Italy.
The database featured >1000 datafiles and a huge table (800GB) partitioned by range and sub-partitioned by list (or the opposite, can’t remember).



For budget containment, the project owner asked to avoid the use of HACMP (and thus the use of shared JFS2). I then decided to take the risk and migrate from JFS2 to ASM.

In order to avoid a few platform-related ASM bugs, I also had to upgrade from Oracle to Oracle.



I had no access to the source database, which was 800 km away from our datacenter, and I was only allowed to ask for RMAN backups.

The accepted service disruption was quite short (<30 minutes) considering the size of the database and the distance, and there was no direct connectivity between the sites (for political reasons).

Overall, the network throughput for sharing files over ftp was very poor.


First solution

This kind of move was very common for me and, because I was not allowed to ask for a temporary Data Guard configuration, the easy solution was to ask for:

1 – one RMAN ONLINE full backup physically sent on disk

2 – many RMAN archive backups sent over network (via ftp)

Then, on my side: restore the full backup, recover the archives as they arrived and, on date X, ask for a final archive backup, ask to close the DB and send the online redo logs, do a complete recovery on my side, then startup open upgrade.



I did a first “dry run” open resetlogs in order to test the procedure and make it faster, and also asked to test the application pointing to the destination database.

The very bad surprise was that the source database was doing a huge amount of nologging inserts, leading to massive index corruption after the recovery on the destination database.

According to the database maintainer, setting force logging on the source database was NOT an option because the SAN was not able to cope with the higher redo rates.



Knowing the Oracle recovery mechanisms, I proposed to the remote maintainer a change in the recovery strategy, even though this solution was not clearly stated in the Oracle documentation:

1 – Take a first online incremental backup from the starting SCN of the base full backup (thank God block change tracking was in place) and send it physically on disk

2 – Take further, smaller online incremental backups, send them over ftp and apply them on the destination with “noredo”

3 – On date X, shut down the source, mount it and take a last incremental backup in mount state

4 – Recover the last incremental with “noredo” and open the database with resetlogs.

According to the documentation, the “cold incremental strategy” applies if you take “cold full backups”. But from a technical point of view, taking a cold incremental and recovering it on top of a fuzzy online backup is 100% equivalent to taking a full consistent backup in mount state.
Because all the blocks are made consistent to a specific SCN, there are no fuzzy datafiles: they are recovered using an incremental backup taken from a mounted database! This allows doing an incremental recovery and opening the database without applying a single archived log, and with only one shutdown of the source database.


Technical steps

First, take a full ONLINE backup on the source:
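Something along these lines (a sketch; backup destinations and tags are hypothetical, channels and compression omitted):

```bash
rman target / <<EOF
# online base backup, level 0 so that incrementals can be applied on top of it
BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/stage/full_%U' TAG 'BASE_L0';
BACKUP CURRENT CONTROLFILE FORMAT '/backup/stage/ctl_%U';
EOF
```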

Then restore it on the destination (with no recovery):
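A sketch of the restore on the destination:

```bash
rman target / <<EOF
STARTUP NOMOUNT;
# the controlfile piece name below is just a placeholder
RESTORE CONTROLFILE FROM '/backup/stage/ctl_piece';
ALTER DATABASE MOUNT;
CATALOG START WITH '/backup/stage/' NOPROMPT;
RESTORE DATABASE;
# no RECOVER and no OPEN here: the datafiles stay fuzzy for now
EOF
```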

Then, run a COLD incremental backup on the source:
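Again a sketch:

```bash
rman target / <<EOF
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
# consistent (cold) incremental on top of the online level 0
BACKUP INCREMENTAL LEVEL 1 DATABASE FORMAT '/backup/stage/cold_incr_%U' TAG 'COLD_L1';
EOF
```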

And run the incremental recovery on the destination (without redo):
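And finally (including the resetlogs mentioned in the update at the top of this post):

```bash
rman target / <<EOF
CATALOG START WITH '/backup/stage/' NOPROMPT;
# apply the cold incremental only: no archived logs at all
RECOVER DATABASE NOREDO;
ALTER DATABASE OPEN RESETLOGS;
EOF
```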

That’s all!

This solution gave me the opportunity to physically move the whole >1TB nologging database from one region to another with minimal service disruption and without touching the source database at all.

I have used it many times since, even for bigger databases and on several platforms (yes, also Windows, sigh), and it works like a charm.



Getting the DBID and Incarnation from the RMAN Catalog

Using the RMAN catalog is optional. There is a long-running discussion among DBAs on whether you should use the catalog or not.

But because I like the RMAN catalog (a lot) and I generally use it, I will assume that most of you use it too 😉

When you want to restore from the RMAN catalog, you need to get the DBID of the database you want to restore and, sometimes, also the incarnation key.

The DBID is used to identify the database you want to restore. The DBID is different for every newly created or duplicated database, but beware: if you duplicate your database manually (using restore/recover), you actually need to change the DBID with the nid tool, otherwise you will end up having more than one database registered in the catalog with the very same DBID. This is evil! The DB_NAME is also something that you may want to keep unique within your database farm.

The incarnation key changes whenever you do an “open resetlogs”, following, for example, a flashback database, an incomplete recovery, or just an “open resetlogs” without any specific need.


Typically, you may want to restore to a point in time either after the open resetlogs (the new incarnation) or before it (the previous incarnation). Depending on which one you need to restore, you may need to use the command RESET DATABASE TO INCARNATION.

If you have a big, dynamic environment, you probably script your restore procedures; that’s why getting the DBID and incarnation key through RMAN commands may be more complex than just querying the catalog with sqlplus.

How do I get the history of my database incarnations?

You can get it easily for all your databases using a handy hierarchical query on the RMAN catalog (DB names and IDs are obfuscated for obvious reasons):
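A sketch of such a query (connection details are placeholders; the columns come from the rc_database_incarnation catalog view):

```bash
sqlplus -s rman/xxx@rcat <<'EOF'
-- one row per incarnation, indented according to its depth in the incarnation tree
SELECT db_key, dbid, name, dbinc_key, resetlogs_time,
       LPAD(' ', 2*(LEVEL-1)) || TO_CHAR(resetlogs_change#) AS resetlogs_scn
  FROM rc_database_incarnation
 START WITH parent_dbinc_key IS NULL
CONNECT BY PRIOR dbinc_key = parent_dbinc_key
 ORDER SIBLINGS BY resetlogs_time;
EOF
```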

What about getting the correct DBID/DBINC_KEY pair for a specific database/time?

You can get the time windows for each incarnation using the lead() analytical function:
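Roughly like this (again a sketch):

```bash
sqlplus -s rman/xxx@rcat <<'EOF'
-- validity window of each incarnation: [resetlogs_time, next resetlogs_time)
SELECT name, dbid, dbinc_key, resetlogs_time,
       LEAD(resetlogs_time) OVER
         (PARTITION BY db_key ORDER BY resetlogs_time) AS next_resetlogs_time
  FROM rc_database_incarnation
 ORDER BY name, resetlogs_time;
EOF
```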

With this query, you can see that every incarnation has a reset time and a “next reset time”.

It’s easy then to get exactly what you need by adding a couple of where clauses:
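For instance, using the database name and target time of the example below:

```bash
sqlplus -s rman/xxx@rcat <<'EOF'
SELECT dbid, dbinc_key
  FROM ( SELECT name, dbid, dbinc_key, resetlogs_time,
                LEAD(resetlogs_time) OVER
                  (PARTITION BY db_key ORDER BY resetlogs_time) AS next_resetlogs_time
           FROM rc_database_incarnation )
 WHERE name = '1465419F'
   AND resetlogs_time <= TO_DATE('2016-01-20 00:00:00','YYYY-MM-DD HH24:MI:SS')
   AND ( next_resetlogs_time IS NULL
      OR next_resetlogs_time > TO_DATE('2016-01-20 00:00:00','YYYY-MM-DD HH24:MI:SS') );
EOF
```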

So, if I need to restore the database 1465419F until time 2016-01-20 00:00:00, I need to set DBID=1048383773 and reset the database to incarnation 1256014297.



2015 in numbers…

I have been having a good time since I moved to Switzerland 3 years ago. 2015 has been as good as 2014. Now that January is ending, I am officially late in publishing this information, but it’s MY blog, so who cares? 😛


A few numbers:

25 blog posts (+1)
~52000 page views (+40%), ~43000 visits (+48%)
Speaker at 2 major conferences (#C15LV, #UKOUG_TECH15)
Speaker at 2 Trivadis internal conferences
Speaker at 1 local user group event
Delegate of EOUC at DOAG 2015
A total of 14 public speeches (same as 2014)
I’ve been elected RAC SIG Vice President
2 RAC Attack workshops organized
1 roundtable as organizer, 1 panel
2 T-shirts designed as gifts for 2 RAC Attack workshops
2 articles published
Launched the RAC SIG website and the new session agenda
Countless new friends and/or contacts

I hope that 2016 will be as good as 2015.