The new era of spam? Technical comments that defeat anti-spam filters

Part of every blogger's job is to keep the anti-spam filters up to date (in my case, the WordPress plugin Akismet), read the comments in quarantine and either approve them, delete them or mark them as spam (so that the anti-spam plugin can enrich its spam detection algorithms).

So far, my main criterion for deciding whether to approve a comment has been to read its content and judge whether it is appropriate for the blog post.

Until today.

Some time ago I wrote this blog post about diagnosing problems when deleting the audit trail in Oracle 12c:

DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL not working on 12c? Here’s why…

Last week I got this comment:

It was technical: it talked about Oracle Audit (which is relevant to my blog post) and contained technical details that only an Oracle professional would know.

So I approved it and read my blog post again to find a proper answer.

Except that my blog post was not talking about Unified Audit, relink of binaries or exports!

Then I realized that the commenter’s “link” was in fact a website selling sex toys… (yeah, writing this here will bring even more spam… I’ll take the risk 🙂 )

So where did it come from? Actually, the comment had been copied from another blog:

https://petesdbablog.wordpress.com/2013/07/20/12c-new-feature-unified-auditing/

A new type of spam (or maybe it has existed for a while? I am not a spam expert).

The spammer (a bot?) copies technical comments from blog posts with similar keywords and just changes the author’s name and link.
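Out of curiosity, here is a minimal sketch (my own illustration, not a feature of Akismet or of any real plugin) of how this kind of copied comment could be detected: fingerprint the normalized comment body and compare it against comment bodies already published elsewhere, deliberately ignoring the author name and link, which are the only parts the spammer changes.

```python
import hashlib

def body_fingerprint(comment_body: str) -> str:
    """Hash of the comment body, normalized so that trivial
    whitespace/case changes do not hide an exact copy."""
    normalized = " ".join(comment_body.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Hypothetical corpus of comment bodies scraped from other blogs
known_bodies = {body_fingerprint("Great post! Note that in Unified Auditing "
                                 "you must relink the binaries first.")}

def looks_copied(comment_body: str) -> bool:
    # Author name and link are ignored on purpose: the spammer only changes those
    return body_fingerprint(comment_body) in known_bodies

# The same text, reposted with different case and spacing, is still caught
print(looks_copied("Great  post! note that in unified auditing "
                   "you must relink the binaries first."))
```

Of course a real spammer could paraphrase the text, but for the copy-paste attack described above an exact fingerprint on the body alone would already be enough.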

Beware, bloggers.

Ludo

Oracle Grid Infrastructure 18c patching part 2: Independent Local-mode Automaton architecture and activation

The first important step before starting to use the new Independent Local-mode Automaton is understanding its components inside a cluster.

Resources

Here’s the list of services that you will find when you install Grid Infrastructure 18c:

# [ oracle@server1:/u01/app/oracle/home [15:14:41] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      server1                STABLE
               OFFLINE OFFLINE      server2                STABLE
ora.MGMT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.OCRVOT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.chad
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.helper
               OFFLINE OFFLINE      server1                IDLE,STABLE
               OFFLINE OFFLINE      server2                STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      server1                STABLE
               OFFLINE OFFLINE      server2                STABLE
ora.net1.network
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.ons
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.proxy_advm
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       server2                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       server1                STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       server1                169.254.17.12 10.30.
                                                             200.73,STABLE
ora.asm
      1        ONLINE  ONLINE       server1                Started,STABLE
      2        ONLINE  ONLINE       server2                Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       server1                STABLE
ora.server1.vip
      1        ONLINE  ONLINE       server1                STABLE
ora.server2.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       server1                Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       server1                STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       server1                STABLE
--------------------------------------------------------------------------------

As you can see, there are 4 components that are OFFLINE by default:

Three local resources (present on each node):

  • ora.MGMT.GHCHKPT.advm
  • ora.mgmt.ghchkpt.acfs
  • ora.helper

One cluster resource (active on only one server at a time, it can relocate):

  • ora.rhpserver

If you have ever worked with 12c Rapid Home Provisioning, those names should sound familiar.

The GHCHKPT filesystem (and its underlying volume) is used to store data about the ongoing operations across the cluster during the GI home move.

The ora.helper is the process that actually performs the operations. It is local because each node needs it to execute some actions at some point.

The rhpserver is the server process that coordinates the operations and delegates them to the helpers.

All those services compose the Independent Local-mode Automaton, which is the default deployment. The full RHP framework (RHP Server and RHP Client) can be configured instead with some additional work.

Important note: just a few weeks ago Oracle changed the name of Rapid Home Provisioning (RHP) to Fleet Patching and Provisioning (FPP). The name is definitely more appealing now, but it again generates some confusion about product names and acronyms, so be aware that in this series I sometimes refer to RHP and sometimes to FPP: it is the same thing.

Tomcat?

You might have noticed that Tomcat is now deployed in the GI home, and there are patches specific to it (here I paste the 18.4 version):

$ opatch lspatches
28655963;DBWLM RELEASE UPDATE 18.4.0.0.0 (28655963)
28655784;Database Release Update : 18.4.0.0.181016 (28655784)
28655916;ACFS RELEASE UPDATE 18.4.0.0.0 (28655916)
28656071;OCW RELEASE UPDATE 18.4.0.0.0 (28656071)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

 

Indeed Tomcat is registered in the inventory and patched just like any other product inside the OH:

<COMP NAME="oracle.tomcat.crs" VER="18.0.0.0.0" BUILD_NUMBER="0" BUILD_TIME="20180207.193003" REP_VER="0.0.0.0.0" RELEASE="Production" INV_LOC="Components/oracle.tomcat.crs/18.0.0.0.0/1/" LANGS="ALL_LANGS" XML_INV_LOC="Components21/oracle.tomcat.crs/18.0.0.0.0/" ACT_INST_VER="12.2.0.4.0" DEINST_VER="11.2.0.0.0" INSTALL_TIME="2018.Nov.05 13:27:32 CET" INST_LOC="/u01/app/grid/crs1840/tomcat">
   <EXT_NAME>Tomcat Container</EXT_NAME>
   <DESC>Packages files from the Tomcat Container.</DESC>
   <DESCID>COMPONENT_DESC</DESCID>
   <STG_INFO OSP_VER="10.2.0.0.0"/>
   <CMP_JAR_INFO>
      <INFO NAME="filemapObj" VAL="Components/oracle/tomcat/crs/v18_0_0_0_0/filemap.xml"/>
      <INFO NAME="helpDir" VAL="Components/oracle/tomcat/crs/v18_0_0_0_0/help/"/>
      <INFO NAME="actionsClass" VAL="Components.oracle.tomcat.crs.v18_0_0_0_0.CompActions"/>
      <INFO NAME="resourceClass" VAL="Components.oracle.tomcat.crs.v18_0_0_0_0.resources.CompRes"/>
      <INFO NAME="identifiersXML" VAL="Components/oracle/tomcat/crs/v18_0_0_0_0/identifiers.xml"/>
      <INFO NAME="contextClass" VAL="Components.oracle.tomcat.crs.v18_0_0_0_0.CompContext"/>
      <INFO NAME="fastCopyLogXML" VAL="Components/oracle/tomcat/crs/v18_0_0_0_0/fastCopyLog.xml"/>
   </CMP_JAR_INFO>
   <LOC_INFO INST_DFN_LOC="../Scripts" JAR_NAME="install1.jar"/>
   <BOOK NAME="oracle.tomcat.crs.hs"/>
   <PRE_REQ DEF="F"/>
   <PROD_HOME DEF="F"/>
   <LANG_IDX_MAP>
      <LANG LIST="en fr ar bn pt_BR bg fr_CA ca hr cs da nl ar_EG en_GB et fi de el iw hu is in it ja ko es lv lt ms es_MX no pl pt ro ru zh_CN sk sl es_ES sv th zh_TW tr uk vi"/>
      <LANGSET IDX="1" BITSET="{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44}"/>
   </LANG_IDX_MAP>
   <PLAT_IDX_MAP>
      <PLAT LIST="46"/>
      <PLATSET IDX="1" BITSET="{0}"/>
   </PLAT_IDX_MAP>
   <DST_IDX_MAP>
      <DST LIST="%ORACLE_HOME% %INVENTORY_LOCATION%"/>
   </DST_IDX_MAP>
   <DEP_GRP_LIST/>
   <DEP_LIST/>
   <REF_LIST>
      <REF NAME="oracle.crs" VER="18.0.0.0.0" HOME_IDX="3"/>
   </REF_LIST>
   <INST_TYPE_LIST>
      <INST_TYPE NAME="Complete" NAME_ID="Maximum" DESC_ID=""/>
   </INST_TYPE_LIST>
   <FILESIZEINFO>
      <DEST VOLUME="%ORACLE_HOME%" SPACE_REQ="3375301"/>
      <DEST VOLUME="%INVENTORY_LOCATION%" SPACE_REQ="2000"/>
   </FILESIZEINFO>
</COMP>

 

# [ oracle@server2:/u01/app/grid/crs1830/inventory/Components21/oracle.tomcat.crs/18.0.0.0.0 [08:56:06] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #
$ vi context.xml

<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2018, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<COMP_CONTEXT>
   <VAR_LIST SIZE="0">
      <VAR NAME="PROD_HOME" TYPE="String" DESC_RES_ID="" SECURE="F" VAL="/u01/app/grid/crs1830/tomcat" ADV="F" CLONABLE="T" USER_INPUT="DEFAULT"/>
   </VAR_LIST>
   <CONST_LIST SIZE="2">
      <CONST NAME="COMPONENT_DESC" PLAT_SP="F" TYPE="String" TRANS="T" VAL="COMPONENT_DESC_ALL"/>
      <CONST NAME="COMPONENT_NAME" PLAT_SP="F" TYPE="String" TRANS="F" VAL="Tomcat Container"/>
   </CONST_LIST>
</COMP_CONTEXT>

Out of the box, Tomcat is used for Quality of Service Management (the ora.qosmserver resource):

$ ps -eaf | grep tomcat
oracle    58746 142151  0 13:10 pts/1    00:00:00 grep --color=auto tomcat
oracle   108610      1  0 Dec04 ?        00:25:33 /CRS/dbs01/crs1830/jdk/bin/java -server -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Djava.util.logging.config.file=/ORA/dbs01/oracle/crsdata/itrac1602/qos/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -DTRACING.ENABLED=false -Djava.rmi.server.hostname=itrac1602.cern.ch -Doracle.http.port=8888 -Doracle.jmx.port=23792 -Doracle.tls.enabled=false -Doracle.jwc.tls.http.enabled=false -Djava.security.manager -Djava.security.policy=/ORA/dbs01/oracle/crsdata/itrac1602/qos/conf/catalina.policy -Djava.security.egd=file:/dev/urandom -Dcatalina.home=/CRS/dbs01/crs1840/tomcat -Dcatalina.base=/ORA/dbs01/oracle/crsdata/itrac1602/qos -Djava.io.tmpdir=/ORA/dbs01/oracle/crsdata/itrac1602/qos/temp -Doracle.home=/CRS/dbs01/crs1840 -classpath /CRS/dbs01/crs1840/tomcat/lib/tomcat-juli.jar:/CRS/dbs01/crs1840/tomcat/lib/bootstrap.jar:/CRS/dbs01/crs1840/jlib/jwc-logging.jar org.apache.catalina.startup.Bootstrap start

But it is used by the Independent Local-mode Automaton as well, once the latter is started.

Enabling and starting the independent local-mode automaton

The resources are started using the following commands (as root, the order is quite important):

# /u01/app/grid/crs1830/bin/srvctl enable volume -volume GHCHKPT  -diskgroup mgmt
# /u01/app/grid/crs1830/bin/srvctl enable filesystem -volume GHCHKPT -diskgroup mgmt
# /u01/app/grid/crs1830/bin/srvctl start filesystem -volume GHCHKPT -diskgroup mgmt

Before continuing with the rhpserver resource, you might want to check if the filesystem is mounted:

$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.MGMT.GHCHKPT.advm
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.MGMT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.OCRVOT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.chad
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.helper
               OFFLINE OFFLINE      server1                IDLE,STABLE
               OFFLINE OFFLINE      server2                STABLE
ora.mgmt.ghchkpt.acfs
               ONLINE  ONLINE       server1                mounted on /opt/orac
                                                             le/rhp_images/chkbas
                                                             e,STABLE
               ONLINE  ONLINE       server2                mounted on /opt/orac
                                                             le/rhp_images/chkbas
                                                             e,STABLE
ora.net1.network
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.ons
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.proxy_advm
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       server2                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       server1                STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       server1                169.254.17.12 10.30.
                                                             200.73,STABLE
ora.asm
      1        ONLINE  ONLINE       server1                Started,STABLE
      2        ONLINE  ONLINE       server2                Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       server1                STABLE
ora.server1.vip
      1        ONLINE  ONLINE       server1                STABLE
ora.server2.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       server1                Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       server1                STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       server1                STABLE
--------------------------------------------------------------------------------


[root@server2 dbs01]# df -k | grep ghchkpt
/dev/asm/ghchkpt-213              1572864   499572   1073292  32% /opt/oracle/rhp_images/chkbase

Now the rhpserver should start without problems, as the oracle user:

# [ oracle@server1:/u01/app/oracle/home [17:00:49] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #
$ srvctl start rhpserver

Please note that if you forget to activate the filesystem first, the rhpserver will fail to start.

As you can see, now both rhpserver and the helper are online:

# [ oracle@server1:/u01/app/oracle/home [17:02:39] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.MGMT.GHCHKPT.advm
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.MGMT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.OCRVOT.dg
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.chad
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.helper
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.mgmt.ghchkpt.acfs
               ONLINE  ONLINE       server1                mounted on /opt/orac
                                                             le/rhp_images/chkbas
                                                             e,STABLE
               ONLINE  ONLINE       server2                mounted on /opt/orac
                                                             le/rhp_images/chkbas
                                                             e,STABLE
ora.net1.network
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.ons
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
ora.proxy_advm
               ONLINE  ONLINE       server1                STABLE
               ONLINE  ONLINE       server2                STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       server2                STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       server1                STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       server1                169.254.17.12 10.30.
                                                             200.73,STABLE
ora.asm
      1        ONLINE  ONLINE       server1                Started,STABLE
      2        ONLINE  ONLINE       server2                Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       server1                STABLE
ora.server1.vip
      1        ONLINE  ONLINE       server1                STABLE
ora.server2.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       server1                Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       server1                STABLE
ora.rhpserver
      1        ONLINE  ONLINE       server2                STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       server2                STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       server1                STABLE
--------------------------------------------------------------------------------
# [ oracle@server2:/u01/app/grid/crs1830 [08:59:43] [18.3.0.0.0 [GRID] SID=GRID] 0 ] #
$ ps -eaf | grep tomca
oracle   132330      1 15 08:48 ?        00:01:39 /u01/app/grid/crs1830/jdk/bin/java -server -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Djava.util.logging.config.file=/u01/app/oracle/crsdata/server2/rhp/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -DTRACING.ENABLED=false -Djava.rmi.server.hostname=server2.cern.ch -Doracle.http.port=8894 -Doracle.jmx.port=23795 -Doracle.tls.enabled=false -Doracle.jwc.tls.http.enabled=true -Doracle.rhp.storagebase=/opt/oracle/rhp_images -Djava.security.egd=file:/dev/urandom -Dcatalina.home=/u01/app/grid/crs1830/tomcat -Dcatalina.base=/u01/app/oracle/crsdata/server2/rhp -Djava.io.tmpdir=/u01/app/oracle/crsdata/server2/rhp/temp -Doracle.home=/u01/app/grid/crs1830 -classpath /u01/app/grid/crs1830/tomcat/lib/tomcat-juli.jar:/u01/app/grid/crs1830/tomcat/lib/bootstrap.jar:/u01/app/grid/crs1830/jlib/jwc-logging.jar org.apache.catalina.startup.Bootstrap start

Now all is set to start using it!

We’ll see how to use it in the next posts.

Ludo

 

The story of ACME and its CRM with serious SQL injection problems

Preface/Disclaimer

This story is real, but I had to mask some names and introduce some minor changes so that real people are not easy to recognize and the whole story does not sound offensive to anyone. This post is not technical, so my non-technical English will be fully exposed. Sorry for the many errors 🙂

ACME, The Company

ACME is a big, global company. It has a huge revenue and there are almost no competitors on the market that get close to it in terms of fame and earnings.

Its core business is heavily supported by its CRM system, which holds all the customers, contracts, prospects, suppliers…

FOOBAR CRM, The CRM system

Although the CRM is not ACME’s core business, the data in it is really, really precious. Without prospect and customer data, the sales force cannot close deals.

The CRM application (let’s call it FOOBAR CRM) runs on a legacy architecture and is as old as the company itself.

The architecture is the “good old style” web application that was common in the early 2000s: browser front-end (OK, you might think that it is not so old, huh?), PHP application backed by Apache, MySQL database.

As you can see, quite old but not so uncommon.

One of the big concerns, as in every application lifecycle, is maintaining good code quality. At the beginning of the PHP era, when PHP was still popular, there was a lack of good frameworks (I’m not even sure whether there are any now; I know Zend Framework was a cool MVC framework, but it came out many years later). The result is that the code maintenance of the application is now literally a pain in the a**.

ACME is a noob in development, so when it was founded and needed a CRM system, the management delegated the development to an external company (let’s call it FOOBAR).

FOOBAR, The software house

The company FOOBAR is as old as ACME itself. The respective founders were relatives: they started the business together, and now that the founders have left, the partnership works so well that FOOBAR is also one of the biggest resellers of ACME products (although its business is only loosely related to ACME’s). FOOBAR is at the same time a partner and a customer, and some members of its board are also part of ACME’s board.

What is important here is that the advice coming from the “common board members” is considered much more important than the advice coming from ACME’s employees, customers and marketing department.

The code maintainability

ACME started small, with a small, oldish CRM system. But some years later ACME experienced a huge increase in customers, product portfolio, employees, revenues, etc.

In order to cope with the increasing workload of the application, they scaled everything up and out: there are now tens of web servers, nicely load balanced, some web cache servers, and they introduced Galera Cluster in conjunction with some replicated servers to scale out the database workload.

ACME’s global business also required opening the FOOBAR CRM application to the internet, exposing it to a wide range of potential attacks.

In order to cope with the increasing needs, FOOBAR proposed a growing number of modules, pieces of code and tools to expand the CRM system. To maximize profits, FOOBAR decided to employ only junior developers: inexperienced, not familiar at all with developing applications on big RDBMS systems, and with a very scarce sense of secure programming.

That’s not all!

In order to develop new features faster, ACME and FOOBAR have an agreement that lets the end users develop their own modules in PHP and plug them into the application, most of the time directly in production (you may think: that’s completely crazy, this should NEVER happen in a serious company! You know what? I agree 100%).

Uh, I forgot to mention: the employees who use the CRM application and have some development skills are VERY, VERY happy to have permission to code on their own, because they can develop features or fix bugs themselves, depending on their needs.

Result: the code is completely out of control: few or no unit tests, no integration tests at all, poor security, tons of bugs.

The big SQL Injection problem

Among the many bugs, SQL injection is the most common. It started with some malicious users playing around with injection techniques, but now the attacks are happening more and more frequently:

  • The attacks come from many hackers (not related to each other)
  • Some hackers try to get money out of it, some just steal data, and some just want to mess things up and damage ACME’s reputation…

Every time an attack is successful, ACME loses more and more contracts (and money).

The fix, up to now, has been to track the hacker’s IP address AFTER the attack and add it to the firewall blacklist (not so clever, huh?).
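To see why this is weak, here is a minimal sketch of the reactive approach (my own illustration in Python; the real setup would use an actual firewall, and the log format and injection signature here are made up): the offending IP gets blocked only after the log review, and any new attacker IP starts with a clean slate.

```python
import re
from urllib.parse import unquote_plus

blacklist: set[str] = set()

def review_access_log(lines: list[str]) -> None:
    """After an attack is noticed, scan the log for a crude injection
    signature and blacklist the offending IPs (reactively)."""
    injection_hint = re.compile(r"union\s+select|'\s*or\s*'1'\s*=\s*'1", re.IGNORECASE)
    for line in lines:
        ip, _, request = line.partition(" ")
        if injection_hint.search(unquote_plus(request)):
            blacklist.add(ip)

# Made-up log lines: one injection attempt, one legitimate request
log = [
    "203.0.113.7 GET /crm.php?name=nobody'+OR+'1'='1",
    "198.51.100.2 GET /crm.php?name=alice",
]
review_access_log(log)

print("203.0.113.7" in blacklist)   # blocked, but only after the attack already ran
print("203.0.113.99" in blacklist)  # a fresh attacker IP is not blocked at all
```

The vulnerable code stays vulnerable; only IPs that have already done damage get blocked.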

Possible Solutions (according to the security experts)

ACME commissioned an external company to do an assessment. The external company proposed a few things:

  • SOLUTION 1: Completely replace the CRM software with something more modern, modular, secure and developed by a company that hires top talent. There are tons of cloud vendors that offer CRM software as a service, and other big companies with proven on-premises CRM solutions.
  • SOLUTION 2: Keep the current solution, but with a few caveats:
    • All the code accessing the database must be reviewed to avoid injections
    • Only experienced developers should have the right to write new code (possibly employees of the software house, which will be accountable for new vulnerabilities)
  • SOLUTION 3: Install content-sensitive firewalls and IDS that detect SQL Injection patterns and block them before they reach the web server and/or the database layer.
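As a side note on SOLUTION 2, here is a minimal sketch of the kind of fix a code review would enforce (illustrated in Python with an in-memory SQLite database for self-containment; the real application is PHP/MySQL, where prepared statements play the same role): replacing string concatenation with bound parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, secret TEXT)")
conn.execute("INSERT INTO customers VALUES ('alice', 'top-secret')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# VULNERABLE: the payload becomes part of the SQL text and matches every row
vulnerable_sql = "SELECT secret FROM customers WHERE name = '" + user_input + "'"
leaked = conn.execute(vulnerable_sql).fetchall()

# SAFE: the payload is passed as a bound parameter, never parsed as SQL
safe = conn.execute("SELECT secret FROM customers WHERE name = ?",
                    (user_input,)).fetchall()

print(leaked)  # [('top-secret',)] -> data leaked
print(safe)    # [] -> no customer is literally named "nobody' OR '1'='1"
```

The review effort is mostly mechanical: find every concatenated query and rewrite it with placeholders.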

What the CRM users think

User ALPHA (the shadow IT guy): “We cannot afford to implement any of the solutions: we, as users, need the agility to develop new things for ourselves! And what if there is a bug? If I have to wait for a fix from the software house, I might lose customers or contracts before the CRM is available again!”

User BRAVO (the skeptical): “SQL injection is a complex problem; you cannot solve it just by fixing the current bugs and revoking non-developers’ rights to write new code”

User CHARLIE (the lawyer): “When I was hired, I was told that I had the right to drink coffee and develop my own modules. I would never work for a company that did not allow me to drink coffee! Drinking coffee and creating vulnerabilities are both rights!”

User DELTA (the average non-sense): “The problem is not the vulnerable code, but all those motherf****** hackers who try to inject malicious code. We should cure the mental illness of geeks so they do not turn into hackers.”

User ECHO (the hacker specialist): “If we ask Stack Overflow to provide the IP addresses of the people who search for SQL injection code examples, we could preemptively block their IP addresses on our external firewall!”

User FOXTROT (the false realist): “Hacker attacks happen, and there’s not much we can do against them. If we fix the code and implement security constraints, there will always be hackers trying to find vulnerabilities. You miss the real problem! We must cure this geek/hacker insanity first!”

User GOLF (the non-sense paragon): “You concentrate on contracts lost because of SQL Injections, but the food in our restaurant sucks, and our sales also lose contracts because they struggle to fight stomach ache”.

User HOTEL (the denier): “I’ve never seen the logs that show the SQL injections; I am sure it is a conspiracy by the no-code organizations meant to sell us some WYSIWYG products”.

User INDIA (the unheard): “Why can’t we just follow what the Security Experts suggest and see if it fixes the problem?”

What the management thinks

“We send thought and prayers to all our sales, you are not alone and you’ll never be. (… and thanks for the amazing party, FOOBAR, the wine was delicious!)”

What ACME did to solve the problem

Absolutely nothing.

Forecast

More SQL Injections.

 

UPDATE 20.02.2018

Many people asked me who the ACME customer with the SQL injection problem was. None. It is an analogy to the US mass shootings that happen more and more frequently; the latest at the time of writing: https://en.wikipedia.org/wiki/Stoneman_Douglas_High_School_shooting

This post is intended to show that, if it is explained as if it were an IT problem, the solution sounds so easy that nobody would have any doubts about the steps that must be taken.

Unfortunately, that is not the case, and the US is condemned to have more and more mass shootings because nobody wants to fix the problem. 🙁

The short story of two ACE Directors, competitors and friends

Well, this is a completely different post from what I usually publish. I like to blog about technology, personal interests and achievements.

This time I really would like to spend a few words to praise a friend.

I met Franck Pachot for the first time back in 2012. It was my first month at Trivadis and, believe it or not, Franck was working there as well. I have the evidence here 😉

It was the first time in years that I had met someone at least as smart as me on the Oracle stack (later I met smarter people many more times, but that’s another story).

A few months later, he left Trivadis to join its sworn enemy, dbi services. But established friendships and like-mindedness don’t disappear: we continued to meet whenever an opportunity came up, and we started almost simultaneously to boost our blogging activities, give public presentations and expand our presence on social media (mostly Twitter).

After I got my Oracle ACE status in 2014, we went together to Oracle Open World. I used to know many folks there, and I can say that I helped Franck meet many smart people inside and outside the ACE Program. A month after OOW, he became an Oracle ACE.

Franck’s energy, passion and devotion to the Oracle Community are endless. What he’s doing, including his latest big effort, is just great, and all the people in the Oracle Community respect him. I can say that now he is far more active than me in the Oracle Community (at least regarding “public” activities ;-))

We both had the goal of becoming Oracle ACE Directors, and I spent a bad month in April when I became an ACE Director while his nomination was still pending.

I said: “If you become ACE Director by the end of April I will write a blog post about you.” And that’s where this post comes from.

Congratulations ACE Director Franck, perfect timing! 🙂


Ludo


In-memory Columnar Store hands-on

As I’ve written in my previous post, the inmemory_size parameter is static, so you need to restart your instance to activate it or change its size. Let’s try to set it at 600M.

SQL> show parameter inmem

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_query                       string      ENABLE
inmemory_size                        big integer 0

SQL> alter system set inmemory_size=600M scope=spfile;

SQL> shutdown

...

SQL> startup

...

SQL> show parameter inmem

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_query                       string      ENABLE
inmemory_size                        big integer 608M

 

First interesting thing: it has been rounded up to 608M, so it seems to work in chunks (granules) of 16M (to be verified).
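Assuming a 16M granule (my inference from the observed 600M → 608M rounding, not a documented value), the rounding can be reproduced with a simple ceiling division:

```python
# Round a requested inmemory_size up to the next multiple of the
# apparent 16M allocation granule. The granule size is an assumption
# based on the observed 600M -> 608M rounding, not documented behavior.
GRANULE_MB = 16

def round_to_granule(requested_mb: int) -> int:
    """Round requested_mb up to the next multiple of GRANULE_MB."""
    return -(-requested_mb // GRANULE_MB) * GRANULE_MB

print(round_to_granule(600))  # -> 608
```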

Which views can you select for further information?

SQL> select view_name from dba_views where view_name like 'V_$IM%';

VIEW_NAME
--------------------------------------------------------------------------------
V_$IM_SEGMENTS_DETAIL
V_$IM_SEGMENTS
V_$IM_USER_SEGMENTS
V_$IM_TBS_EXT_MAP
V_$IM_SEG_EXT_MAP
V_$IM_HEADER
V_$IM_COL_CU
V_$IM_SMU_HEAD
V_$IM_SMU_CHUNK
V_$IM_COLUMN_LEVEL

10 rows selected

V$IM_SEGMENTS gives some information about the segments that have a columnar version, including the segment size, the actual memory allocated, the population status and other compression indicators.

The other views help understand the various memory chunks and the status for each column in the segment.

Let’s create a table with a few records:

SQL> create table ludovico.tinmem
as
select
a.OWNER ,
a.TABLE_NAME ,
b.owner owner2,
b.table_name table_name2,
a.TABLESPACE_NAME ,
a.STATUS ,
a.PCT_FREE ,
a.PCT_USED ,
a.INI_TRANS ,
a.MAX_TRANS ,
a.INITIAL_EXTENT ,
a.NEXT_EXTENT ,
a.MIN_EXTENTS ,
a.MAX_EXTENTS ,
a.PCT_INCREASE ,
a.FREELISTS ,
a.FREELIST_GROUPS ,
a.LOGGING
22 from all_tables a, all_tables b;

Table created.

SQL> select count(*) from ludovico.tinmem;

COUNT(*)
----------
5470921

SQL>

The table is very simple: it’s a cartesian product of two “all_tables” views.
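As a side note, the row count checks out: a cartesian product of a view with itself returns n² rows, so each all_tables view must have contained exactly 2,339 rows at the time:

```python
# A cartesian join of a view with itself returns n*n rows.
# From the observed count of 5,470,921, each ALL_TABLES view
# must have had 2,339 rows, since 2339**2 == 5470921.
import math

total_rows = 5470921
n = math.isqrt(total_rows)
print(n, n * n == total_rows)  # -> 2339 True
```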

Let’s also create an index on it:

SQL> create index ludovico.tinmem_ix1 on ludovico.tinmem (table_name, pct_increase);

 SQL> select segment_name, bytes/1024/1024 from dba_segments where owner='LUDOVICO';

SEGMENT_NAME      BYTES/1024/1024
----------------- ---------------
TINMEM                        621
TINMEM_IX1                    192

The table uses 621M and the index 192M.

How long does it take to do a full table scan almost from disk?

SQL> select distinct tablespace_name from ludovico.tinmem order by 1;

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
USERS

Elapsed: 00:00:15.05

15 seconds! OK, this virtual machine is on an external 5400 RPM drive… 🙁

Once the table is fully cached in the buffer cache, the query performance progressively improves to ~1 sec.

SQL> r
1* select distinct tablespace_name from ludovico.tinmem order by 1

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
USERS


Elapsed: 00:00:01.42
SQL> r
1* select distinct tablespace_name from ludovico.tinmem order by 1

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
USERS


Elapsed: 00:00:00.99

There is no inmemory segment yet:

SQL> SELECT OWNER, SEGMENT_NAME, INMEMORY_PRIORITY, INMEMORY_COMPRESSION
  2  FROM V$IM_SEGMENTS;

no rows selected

You have to specify it at table level:

SQL> alter table ludovico.tinmem inmemory;

Table altered.

SQL> SELECT OWNER, SEGMENT_NAME, INMEMORY_PRIORITY, INMEMORY_COMPRESSION
  2  FROM V$IM_SEGMENTS;

OWNER
--------------------------------------------------------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
INMEMORY INMEMORY_COMPRESS
-------- -----------------
LUDOVICO
TINMEM
HIGH     FOR QUERY

The actual creation of the columnar store takes a while, especially if you don’t create it with high priority. You may have to query the table before the columnar store appears, and its population will also take some time and increase the overall load of the database (on my VirtualBox VM, the performance overhead of the columnar store population is NOT negligible).

Once the in-memory store is created, the optimizer is ready to use it:

SQL> explain plan for select distinct tablespace_name from ludovico.tinmem order by 1;

Explained.


SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1243998285

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    13 | 26132   (2)| 00:00:02 |
|   1 |  SORT UNIQUE                |        |     1 |    13 | 25993   (1)| 00:00:02 |
|   2 |   TABLE ACCESS INMEMORY FULL| TINMEM |  5470K|    67M|    26 (100)| 00:00:01 |
--------------------------------------------------------------------------------------

9 rows selected.

The previous query now takes half the time of the buffer-cached run, even on the first attempt!

SQL> select distinct tablespace_name from ludovico.tinmem order by 1;

TABLESPACE_NAME
------------------------------
SYSAUX
SYSTEM
USERS

Elapsed: 00:00:00.50

The columnar store for the whole table uses 23M out of 621M, so the compression ratio is very good, especially compared with the 192M non-compressed index created earlier!

SQL> select OWNER, SEGMENT_NAME, SEGMENT_TYPE, INMEMORY_SIZE, BYTES, BYTES_NOT_POPULATED, INMEMORY_PRIORITY, INMEMORY_DISTRIBUTE, INMEMORY_COMPRESSION
  2  from V$IM_SEGMENTS;

OWNER    SEGMENT_NAME SEGMENT_TYPE INMEMORY_SIZE BYTES     BYTES_NOT_POPULATED INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS
-------- ------------ ------------ ------------- --------- ------------------- -------- --------------- -----------------
LUDOVICO TINMEM       TABLE        23527424      651165696 0                   HIGH     AUTO DISTRIBUTE FOR QUERY

Elapsed: 00:00:00.07
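From the V$IM_SEGMENTS figures shown above, the ratio of the on-disk segment size (BYTES) to the in-memory footprint (INMEMORY_SIZE) can be computed directly:

```python
# Compression ratio derived from the V$IM_SEGMENTS output above:
# on-disk segment size (BYTES) vs in-memory size (INMEMORY_SIZE).
bytes_on_disk = 651165696
inmemory_size = 23527424

ratio = bytes_on_disk / inmemory_size
print(f"{ratio:.1f}x")  # -> 27.7x
```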

 

This is a very short example. The result here (a 2x improvement) is influenced by several factors. It is reasonable to expect that under “normal” production conditions the gain will be much higher in most cases.
I just wanted to demonstrate that in-memory columnar store is space efficient and really provides higher speed out of the box.

Now that you know about it, can you live without it? 😛

Oracle Instances and real memory consumption on Linux and Solaris

Is there a way to know the REAL memory usage of an Oracle instance, including all connected processes, using the shell rather than a connection to Oracle?

The short answer is “I think so” 🙂

Summing up the RSS column from ps output is not reliable, because Linux uses copy-on-write on process forks, and RSS also doesn’t account correctly for shared memory and other shared allocations.

I’ve come across this post on Pythian’s Blog by Marc Billette.

While it seems good, I’ve had conflicting results depending on the platform and release.

Instead, I’ve tried to create a shell snippet that also uses pmap but works differently, and it SEEMS to work correctly on both Linux and Solaris.

Basically, using pmap I get a lot of information about the different memory areas allocated to a process:

21010:  ora_d000_db1p
0000000000400000     208908K r-x--  /ccv/app/oracle/product/11.2.0.3/bin/oracle
000000000D012000       1536K rw---  /ccv/app/oracle/product/11.2.0.3/bin/oracle
000000000D192000       1040K rw---    [ heap ]
0000000060000000      12288K rwxs-    [ dism shmid=0x4300000e ]
0000000080000000    1036288K rwxs-    [ dism shmid=0x7600000f ]
00000000C0000000         12K rwxs-    [ dism shmid=0x4f000011 ]
FFFFFD7FFC7A0000         64K rwx--    [ anon ]
FFFFFD7FFC7BD000        704K rw---    [ anon ]
FFFFFD7FFC86E000        200K rw---    [ anon ]
FFFFFD7FFC8A0000        312K rw---    [ anon ]
FFFFFD7FFC8EF000       1280K rw---    [ anon ]
FFFFFD7FFCA30000         64K rwx--    [ anon ]
FFFFFD7FFCA4F000        256K rw---    [ anon ]
FFFFFD7FFCA90000         64K rwx--    [ anon ]
FFFFFD7FFCAB0000         36K r-x--  /lib/amd64/libuutil.so.1
...

 

Initially I tried to correctly decode the different kinds of memory, the same way other scripts I’ve found online do:

rwxs- = shared memory
rw--- = private heap
rwx-- = private code stack
r-x-- = shared code stack (?)
etc...

but in the end the ADDRESS is the same across different processes when the memory area is shared, so my script now just keeps a unique line for each address and sums up the memory size (not the RSS one!):

# extract the SID of each running instance from the ora_pmon_<SID> process name
sids=`ps -eaf | grep ora_pmon | grep -v " grep "  | awk '{print substr($NF,10)}'`

username=`whoami`

total=0
for sid in $sids ; do
        # all processes of this instance owned by the current user
        pids=`ps -eaf | grep "$username" | grep $sid | grep -v " grep " | awk '{print $2}'`
        # pmap all of them, keep one line per unique address (shared areas
        # are counted once), strip the trailing "K" and sum up the sizes
        mem=`pmap $pids 2>&1 | grep "K " | sort | awk '{print $1 " " substr($2,1,length($2)-1)}' | uniq | awk ' BEGIN { sum=0 } { sum+=$2} END {print sum}' `

        echo "$sid : $mem"
        total=`expr $total + $mem`
done

echo "total :  $total"

This should give the total virtual memory allocated by the different Oracle instances.
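To illustrate the dedup-and-sum idea in isolation, here is a small sketch over pmap-style lines; the sample lines are made up for illustration and are not part of the original script:

```python
# Sum memory sizes from pmap-style output, counting each address
# range only once so that shared areas (same address seen in several
# processes) are not double-counted. The sample lines are made up.
sample_pmap_output = """
0000000000400000 208908K r-x-- /u01/app/oracle/bin/oracle
0000000060000000 12288K rwxs- [ dism shmid=0x4300000e ]
0000000000400000 208908K r-x-- /u01/app/oracle/bin/oracle
000000000D192000 1040K rw--- [ heap ]
"""

def total_kb(pmap_output: str) -> int:
    seen = {}
    for line in pmap_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1].endswith("K"):
            # keyed by address: duplicate (shared) mappings overwrite
            seen[parts[0]] = int(parts[1].rstrip("K"))
    return sum(seen.values())

print(total_kb(sample_pmap_output))  # -> 222236
```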

The results I get are plausible both on Linux and Solaris.

Example:

$ ./test_mem.ksh
db1p: 3334852
db2p: 2052048
db3p: 6765280
db4p: 2687928
db5p: 4385616
total :  19225724
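As a quick sanity check, the per-instance figures printed by the script add up exactly to the reported total:

```python
# Verify that the per-SID values from the sample run sum to the total.
per_sid = [3334852, 2052048, 6765280, 2687928, 4385616]
print(sum(per_sid))  # -> 19225724
```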

If you find any error let me know and I’ll fix the script!

Ludovico

OOW13: How a week in the bay changed the way I think about my job

In my previous post I’ve talked about some news from the Open World. Now it’s time to talk about MY week and why I’ve loved it…

 

It’s all about community involvement

The most exciting thing about the OOW is the possibility to interact with great people, bloggers, speakers, experts, and now I can also say friends. In the last decade, I’ve worked hard for customers and dealt with almost all the problems and architectures that an Oracle DBA can face, but I’ve always put little effort into the community, sometimes because I had no time, sometimes because I perceived it as time-expensive and gainless. I was wrong. It’s time-consuming, yes, but it makes me feel richer.

 

The first good news

When I landed in the USA, the first mail I read was the following:

Congratulations Bjoern. Bobby, Gokhan and Ludovico on being elected as RAC SIG Office bearers!!!

Vice President: Bjoern Rost

US Conference Events Chair: Bobby Curtis

Website Chair : Gokhan Atil

Regional Chair Europe: Ludovico Cadara

We look forward for you all to take us to the next level. I will include you on our board member email group.

Congratulations again!!

K.P.Singh
Current President, RAC SIG

We’ve made an official announcement on twitter recently:

So I’ve attended my first RAC SIG meeting at OOW13 at the OTN lounge.

RAC SIG meeting at OOW13

Now I’m in weekly (or even daily) contact with the board by e-mail; tomorrow we’ll have a conference call to lay the groundwork for the next year. Wish us good luck! 🙂

 

RAC Attack 12c

Well, I’ve talked about RAC Attack in my previous posts. The event has been a success: we had many attendees, curious people and friends coming to do the lab or get in touch. It has been a great opportunity to collaborate with (and be praised by) several community experts, including Oracle ACEs and ACE Directors:

The Ninjas:
Seth Miller (@Seth_M_Miller)
Bjoern Rost (@brost)
Yury Velikanov (@yvelik)
Leighton Nelson (@leight0nn)
Bertrand Drouvot (@BertrandDrouvot)
Kamran Agayev (@KamranAgayev)
Martin Nash (@mpnsh)
Maaz Anjum (@maaz_anjum)
Hans Forbrich (@HansForbric)
Tim Hall ‏ (@oraclebase)
Ludovico Caldara (@ludovicocaldara)  that’s me! 🙂

Other contributors and great people that worked on the project:
Laura Ramsey (@OracleDBDev)
Jeremy Schneider (@jer_s)
Osama Mustafa (@OsamaOracle)
Bobby Curtis (@dbasolved)
Alex Gorbachev (@alexgorbachev)

(If someone is missing please raise the hand!)

Visit the official facebook page for more information.

Also note that the RAC Attack will be present at the UKOUG Tech 2013 and RMOUG 2014, so don’t miss it if you plan to attend these conferences.

The Blogger Meetup

This meeting, organized by Pythian and OTN, allows you to meet the most famous and active bloggers in the Oracle Community. This GREAT VIDEO from Björn Rost should give you an idea (I appear at 19”):

Here’s another one, also from Björn:

The Swim in the bay

Literally, a meetup of crazy guys willing to freeze themselves in the bay in the early morning!
Chet Justice organized this, and he managed to get sponsorship for swim caps and T-shirts 🙂 I’m the third from the left; you may recognize many famous people here 🙂

See the facebook page for more information.

There is a video for this event too; actually, I close it with a dive 🙂

The Golden Gate Run

Jeff Smith organized this run on Sunday morning: a good way to start the week, and my first opportunity to meet many new friends!

Once again, see the Facebook page!

 

Other events

Well, I also took part in the Friends of Pythian Party, the IOUG Party and the Oracle Swiss Partner Lounge, and had many nice evenings.

No, I didn’t attend the appreciation event: I had the ticket, but I decided to skip it in favor of a dinner with many great people.

Networking and fun

See some pictures I’ve loaded on Flickr:

You can see the whole set here.

So why everything has changed?

Now I’m part of a community: I’m involved, I learn from others and I let others learn from me… where applicable! 🙂 Interacting with people from all around the world makes me feel stronger and more open to new challenges. And I know that if I have a dream, it is up to me to chase it and make it come true.

 

PS: This is an incomplete list of the people I’ve met  (other than the RAC Attack contributors).

Make sure to follow them!

Vit Spinka ‏ (@vitspinka)
Vanessa Simmons ‏(@pythiansimmons)
Riyaj Shamsudeen ‏ (@riyajshamsudeen)
Øyvind Isene ‏ (@OyvindIsene)
Carlos Sierra ‏ (@csierra_usa)
Gregory Guillou‏ (@ArKZoYd)
Heli Helskyaho ‏ (@HeliFromFinland)
Marc Fielding ‏ (@mfild)
Jonathan Lewis ‏ (@JLOracle)
Fuad Arshad ‏ (@fuadar)
Chet Justice ‏ (@oraclenerd)
Tim Gorman ‏ (@timothyjgorman)
Mark W. Farnham ‏ (@pudge1954)
Kerry Osborne ‏ (@KerryOracleGuy)
Arjen Visser ‏ (@dbvisit)
Kyle Hailey ‏ (@kylehhailey)
Steve Karam ‏ (@OracleAlchemist)
Eric Grancher ‏ (@EricGrancher)
Arup Nanda ‏ (@arupnanda)
Kent Graziano ‏ (@KentGraziano)
Andrey Goryunov ‏ (@goryunov)
James Morle ‏ (@JamesMorle)
Christo Kutrovsky ‏ (@kutrovsky)
Brian Fitzgerald ‏ (@ExaGridDba)
Kellyn Pot’Vin ‏ (@DBAKevlar)
Karl Arao ‏ (@karlarao)
Jason Arneil‏ (@jarneil)
Gustavo Rene Antunez ‏ (@grantunez)
Frits Hoogland ‏ (@fritshoogland)
Luca Canali ‏ (@LucaCanaliDB)
Chris Buckel ‏ (@flashdba)
Cary Millsap ‏ (@CaryMillsap)
Paul Vallee ‏ (@paulvallee)
Tanel Poder ‏ (@TanelPoder)
Connor McDonald ‏ (@connor_mc_d)
Gwen (Chen) Shapira ‏ (@gwenshap)
Christian Antognini ‏ (@ChrisAntognini)
Mauro Pagano ‏ (@Mautro)
Jérôme Françoisse ‏ (@JeromeFr)
Ittichai ‏ (@ittichai)
Jeff Smith ‏ (@thatjeffsmith)
Michelle Malcher ‏ (@malcherm)
Debra Lilley ‏ (@debralilley)
Doug Burns (@orcldoug)


Planning to be at Oracle Open World? Attend RAC Attack, and become a real RAC NINJA

You’re going to head to OOW this year. Your countdown has started and you feel excited about it.

Have you planned your AGENDA carefully? If you’re a RAC geek and you’re not going to attend #RACAttack, the answer is NO!! 🙂 This year the RAC Attack will be mentored by real Ninjas; just come along and become one of them!

Reserve from one to three slots on your agenda and go to the OTN Lounge (@ the lobby of Moscone South) 

  1. Launch RAC Attack at OOW 2013 (NINJAS Presentation)
    On Sunday, from 4PM to 6PM, OTN will open its lounge. At 4:45PM our team will introduce itself, along with the project, in just 15 minutes. OTN will provide food and drinks (that’s what I’ve heard 😉 ) so don’t miss it for any reason.
  2. RAC Attack at OOW 2013 Day 1
    On Tuesday, from 10AM to 2PM, our fearless team of Ninjas will assist you during the installation from scratch of a real RAC environment on your laptop.
  3. RAC Attack at OOW 2013 Day 2
    There will be a second four-hour slot on Wednesday, from 10AM to 2PM, to give you a second chance to attend (and finalize your installation, start a new one, or just have fun together discussing complex RAC topologies, for real RAC geeks!).

What differs this year?

The new book covers the installation of a brand new Oracle RAC release 12c on Oracle VirtualBox.  Still not enough?

Among the volunteers we’ll have 5 Oracle ACEs or ACE Directors, some Certified Masters and a great representative of the RAC SIG board. You’ll recognize us by our bandanas… Don’t worry, we are not an elite, we’re just good DBAs eager to share our experience, hear from you, and have a good time. Sounds good?

If you attend, you’ll earn a special “RAC Attack Ninja” ribbon, so you can go back to work and boast with your colleagues (and your boss!).

And why not propose a new RAC environment at your company and get it done with a little more confidence? For what it’s worth, that’s the real objective: practice with the RAC technology and get ready for a real scenario.

Maximize your experience @ RAC Attack

You can cheat and start reading the new version of the book (still in development): http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c

Make sure that you accomplish some steps before starting your journey to OOW:

  1. Make sure you have a recent laptop to install the full RAC stack. The new 12c is a little more demanding than 11g: each RAC node requires 4GB of RAM, but if you have 8GB of RAM on your laptop we guarantee that it’s enough.
  2. The WI-FI connection at OOW is sub-optimal for heavy downloads (to put it euphemistically), so make sure you get your own copy of the Oracle software (Grid Infrastructure and Database) by following the download instructions from: http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c/Software_Components
    If you come without your own copy of the software, we won’t give it to you because of legal concerns, but hopefully we’ll have some spare laptops available to allow you to complete your lab successfully. Just make sure to confirm your presence on the Facebook event page and we’ll try to do our best (but we cannot guarantee a laptop for everyone!).

Give us your feedback and spread the word!

If you plan to attend (good choice! 🙂), take a business card with you and get in touch with us. It’s a great networking opportunity. Take a lot of pictures, upload them on the social networks (the #RACAttack hashtag on Twitter, the Facebook page, the RAC SIG group on LinkedIn) and give us your feedback. Other RAC Attack events will be planned, so we can improve thanks to your suggestions.

Hope to see you there.

Ludovico

Oracle Database 12c: Multitenant, Services and Standard Edition RAC

The installation process of a typical Standard Edition RAC does not differ from the Enterprise Edition. To achieve a successful installation, refer to the nice quick guide made by Yury Velikanov and change the Edition accordingly when installing the DB software.

Standard Edition and Feature availability

The first thing that impressed me is that you’re still able to choose to enable pluggable databases in DBCA, even though the Multitenant option is not available in SE.

So I decided to create a container database CDB01 using template files, so all EE options are wired into the new DB as usual. The Pluggable Database name is PDB01.

[oracle@se12c01 ~]$ sqlplus

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jul 3 14:21:47 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

As you can see,  the initial banner contains “Real Application Clusters and Automatic Storage Management options“.

The Multitenant option is not available. How does SE react to its usage?

First, on the ROOT db, dba_feature_usage_statistics is empty.

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select * from dba_feature_usage_statistics;

no rows selected

SQL>

This is interesting: because all features are in (remember, it’s created from the generic template), the feature check is moved from the ROOT to the pluggable databases.

On the local PDB I have:

SQL> alter session set container=PDB01;

Session altered.

SQL> select * from dba_feature_usage_statistics where lower(name) like '%multitenant%';

NAME               VERSION    DETECTED_USAGES   TOTAL_SAMPLES   CURRE
----------------------------- ----------------- --------------- -----
Oracle Multitenant 12.1.0.1.0                 0               0 FALSE

Having ONE PDB does not trigger the usage of Multitenant (as I expected).

What if I try to create a new pluggable database?

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> create pluggable database PDB02 admin user pdb02admin identified by pdb02admin;
create pluggable database PDB02 admin user pdb02admin identified by pdb02admin
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

SQL>

A-AH!! As expected, I can have a maximum of ONE pluggable database in my container.

However, this still allows:

  • Smooth migration from SE to a Multitenant Architecture
  • Quick upgrade from one release to another

To be sure that I can plug/unplug, I’ve tried it:

SQL> alter pluggable database pdb01 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb01 unplug into '/u01/app/oracle/unplugged/pdb01/pdb01.xml';

Pluggable database altered.

SQL> drop pluggable database pdb01 keep datafiles;

Pluggable database dropped.

SQL> create pluggable database PDB02 admin user pdb02admin identified by pdb02admin;

Pluggable database created.

SQL> alter pluggable database pdb02 open;

Pluggable database altered.

SQL> select * from dba_feature_usage_statistics where name like '%Multite%';

NAME               VERSION    DETECTED_USAGES   TOTAL_SAMPLES   CURRE
----------------------------- ----------------- --------------- -----
Oracle Multitenant 12.1.0.1.0                 0               0 FALSE

Other Enterprise features, of course, don’t work:

SQL> alter index SYS_C009851 rebuild online tablespace users;
alter index SYS_C009851 rebuild online tablespace users
*
ERROR at line 1:
ORA-00439: feature not enabled: Online Index Build

SQL> alter database move datafile 'DATA/CDB01/E09CA0E26A726D60E043A138A8C0E475/DATAFILE/users.284.819821651';
alter database move datafile 'DATA/CDB01/E09CA0E26A726D60E043A138A8C0E475/DATAFILE/users.284.819821651'
*
ERROR at line 1:
ORA-00439: feature not enabled: online move datafile

Create a Service on the RAC Standard Edition (just to check if it works)

I’ve just followed the steps to do it on an EE. Keep in mind that I’m using an admin-managed DB (something about policy-managed DBs will come later, stay tuned).

As you can see, it works pretty well. Compared to 11g, you have to specify the -pdb parameter:

[oracle@se12c01 admin]$ srvctl add service -db CDB01 -service testpdb02 -preferred CDB012 -pdb PDB02
[oracle@se12c01 admin]$ srvctl start service -db cdb01 -s testpdb02

[oracle@se12c01 admin]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       se12c01                  STABLE
               ONLINE  ONLINE       se12c02                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       se12c01                  STABLE
               ONLINE  ONLINE       se12c02                  STABLE
ora.asm
               ONLINE  ONLINE       se12c01                  Started,STABLE
               ONLINE  ONLINE       se12c02                  Started,STABLE
ora.net1.network
               ONLINE  ONLINE       se12c01                  STABLE
               ONLINE  ONLINE       se12c02                  STABLE
ora.ons
               ONLINE  ONLINE       se12c01                  STABLE
               ONLINE  ONLINE       se12c02                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       se12c01                  STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.cdb01.db
      1        ONLINE  ONLINE       se12c01                  Open,STABLE
      2        ONLINE  ONLINE       se12c02                  Open,STABLE
ora.cdb01.testpdb02.svc  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.cvu
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       se12c01                  STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       se12c02                  STABLE
ora.se12c01.vip
      1        ONLINE  ONLINE       se12c01                  STABLE
ora.se12c02.vip
      1        ONLINE  ONLINE       se12c02                  STABLE
--------------------------------------------------------------------------------

Then I can access my DB (and preferred instance) using the service_name I specified.

[oracle@se12c02 admin]$ $ORACLE_HOME/bin/sqlplus pdb02admin/pdb02admin@testpdb02

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jul 3 16:46:06 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Last Successful login time: Wed Jul 03 2013 16:46:01 +02:00

Connected to:
Oracle Database 12c Release 12.1.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show con_name

CON_NAME
------------------------------
PDB02
SQL>

 

Let me know what you think about SE RAC on 12c. Is it valuable for you?

I’m also on twitter: @ludovicocaldara

Cheers

Ludo

Oracle Database 12c: RMAN recover at table level

Oracle Database 12c comes with a new feature named “RMAN table level recovery”.

After a quick try, it’s easy to understand that we are talking about Tablespace Point-in-Time Recovery (TSPITR) with some automation that makes it near-transparent.

 

How to launch it

The syntax is quite simple. Suppose you’ve dropped a table ludovico.reco and then purged it (damn!): you can’t flash it back to before the drop, and you don’t want to flash back the entire database.

SQL> create table reco (field1 varchar2(50) primary key);

Table created.

SQL> insert into reco values ('test');

1 row created.

SQL> insert into reco values ('foo');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from reco;

FIELD1
--------------------------------------------------
foo 
test

SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
803916

SQL> drop table reco;

Table dropped.

SQL> purge table reco;

Table purged.

SQL> flashback table reco to before drop;
flashback table reco to before drop
*
ERROR at line 1:
ORA-38305: object not in RECYCLE BIN

 

You can recover the table with:

RMAN> recover table ludovico.reco until scn 803916 auxiliary destination '/tmp/recover';

 

You identify the schema.table:partition to restore; optionally you can pass the pluggable database containing the table to recover, the point-in-time definition as usual (SCN, sequence# or timestamp) and an auxiliary destination.

The auxiliary destination, as is well known, is mandatory for TSPITR. You can pass other options such as table renaming or tablespace remapping.

Of course, the database must be open read-write, in archivelog mode, and at least one successful backup must have been taken.

How it works

Oracle prepares an auxiliary instance by restoring the SYSTEM, UNDO and SYSAUX tablespaces.

Starting recover at 10-JUN-13
using channel ORA_DISK_1 
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM 
Tablespace UNDOTBS1

Creating automatic instance, with SID='kCoq'

initialization parameters used for automatic instance:
db_name=CLASSIC 
db_unique_name=kCoq_pitr_CLASSIC 
compatible=12.0.0.0.0 
db_block_size=8192 
db_files=200 
sga_target=1G 
processes=80 
diagnostic_dest=/u01/app/oracle 
db_create_file_dest=/tmp/recover 
log_archive_dest_1='location=/tmp/recover' 
#No auxiliary parameter file used

starting up automatic instance CLASSIC

Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2295952 bytes
Variable Size 281020272 bytes
Database Buffers 780140544 bytes
Redo Buffers 5480448 bytes
Automatic instance created

contents of Memory Script:
{ 
# set requested point in time
set until scn 803916; 
# restore the controlfile 
restore clone controlfile; 
# mount the controlfile 
sql clone 'alter database mount clone database';
# archive current online log 
sql 'alter system archive log current'; 
} 
executing Memory Script

executing command: SET until clause

Starting restore at 10-JUN-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file 
channel ORA_AUX_DISK_1: reading from backup piece +FRA/CLASSIC/BACKUPSET/2013_06_10/ncsnf0_tag20130610t165249_0.270.817750463
channel ORA_AUX_DISK_1: piece handle=+FRA/CLASSIC/BACKUPSET/2013_06_10/ncsnf0_tag20130610t165249_0.270.817750463 tag=TAG20130610T165249
channel ORA_AUX_DISK_1: restored backup piece 1 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07 
output file name=/tmp/recover/CLASSIC/controlfile/o1_mf_8vctyxcy_.ctl 
Finished restore at 10-JUN-13

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{ 
# set requested point in time
set until scn 803916; 
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new; 
set newname for clone datafile 3 to new; 
set newname for clone datafile 2 to new; 
set newname for clone tempfile 1 to new; 
# switch all tempfiles 
switch clone tempfile all; 
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 3, 2; 
switch clone datafile all; 
} 
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/recover/CLASSIC/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 10-JUN-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/recover/CLASSIC/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/recover/CLASSIC/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/recover/CLASSIC/datafile/o1_mf_sysaux_%u_.dbf 
channel ORA_AUX_DISK_1: reading from backup piece +FRA/CLASSIC/BACKUPSET/2013_06_10/nnndf0_tag20130610t165249_0.269.817750371
channel ORA_AUX_DISK_1: piece handle=+FRA/CLASSIC/BACKUPSET/2013_06_10/nnndf0_tag20130610t165249_0.269.817750371 tag=TAG20130610T165249
channel ORA_AUX_DISK_1: restored backup piece 1 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:56 
Finished restore at 10-JUN-13

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=817751486 file name=/tmp/recover/CLASSIC/datafile/o1_mf_system_8vctzf86_.dbf
datafile 3 switched to datafile copy 
input datafile copy RECID=5 STAMP=817751486 file name=/tmp/recover/CLASSIC/datafile/o1_mf_undotbs1_8vctzfc9_.dbf
datafile 2 switched to datafile copy 
input datafile copy RECID=6 STAMP=817751487 file name=/tmp/recover/CLASSIC/datafile/o1_mf_sysaux_8vctzf32_.dbf

RMAN then opens the partial database in READ-ONLY mode.

contents of Memory Script:
{ 
# set requested point in time
set until scn 803916; 
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
# recover and open database read only 
recover clone database tablespace "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only'; 
} 
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 1 online

sql statement: alter database datafile 3 online

sql statement: alter database datafile 2 online

Starting recover at 10-JUN-13
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=54 
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=55 
channel ORA_AUX_DISK_1: reading from backup piece +FRA/CLASSIC/BACKUPSET/2013_06_10/annnf0_tag20130610t170210_0.277.817750931
channel ORA_AUX_DISK_1: piece handle=+FRA/CLASSIC/BACKUPSET/2013_06_10/annnf0_tag20130610t170210_0.277.817750931 tag=TAG20130610T170210
channel ORA_AUX_DISK_1: restored backup piece 1 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 
archived log file name=/tmp/recover/1_54_814717091.dbf thread=1 sequence=54 
archived log file name=/tmp/recover/1_55_814717091.dbf thread=1 sequence=55 
media recovery complete, elapsed time: 00:00:10 
Finished recover at 10-JUN-13

sql statement: alter database open read only

 

RMAN then queries the read-only dictionary to identify the tablespace that contained the table before the data loss. This tablespace (USERS in my example) is restored and recovered, and the database is opened.
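Conceptually, this dictionary lookup is the same one you could run yourself against the read-only auxiliary instance to find which tablespace hosts the table (a sketch; owner and table name are the ones from this example):

```sql
-- Which tablespace contains the table to recover?
-- Run against the read-only auxiliary instance.
SELECT tablespace_name
  FROM dba_tables
 WHERE owner = 'LUDOVICO'
   AND table_name = 'RECO';
```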

contents of Memory Script:

{ 
sql clone "create spfile from memory";
shutdown clone immediate; 
startup clone nomount; 
sql clone "alter system set control_files = 
''/tmp/recover/CLASSIC/controlfile/o1_mf_8vctyxcy_.ctl'' comment=
''RMAN set'' scope=spfile"; 
shutdown clone immediate; 
startup clone nomount; 
# mount database 
sql clone 'alter database mount clone database'; 
} 
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2295952 bytes
Variable Size 285214576 bytes
Database Buffers 775946240 bytes
Redo Buffers 5480448 bytes

sql statement: alter system set control_files = ''/tmp/recover/CLASSIC/controlfile/o1_mf_8vctyxcy_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2295952 bytes
Variable Size 285214576 bytes
Database Buffers 775946240 bytes
Redo Buffers 5480448 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{ 
# set requested point in time
set until scn 803916; 
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile 4 to new; 
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 4; 
switch clone datafile all; 
} 
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 10-JUN-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=26 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/recover/KCOQ_PITR_CLASSIC/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece +FRA/CLASSIC/BACKUPSET/2013_06_10/nnndf0_tag20130610t165249_0.269.817750371
channel ORA_AUX_DISK_1: piece handle=+FRA/CLASSIC/BACKUPSET/2013_06_10/nnndf0_tag20130610t165249_0.269.817750371 tag=TAG20130610T165249
channel ORA_AUX_DISK_1: restored backup piece 1 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 
Finished restore at 10-JUN-13

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=817751583 file name=/tmp/recover/KCOQ_PITR_CLASSIC/datafile/o1_mf_users_8vcv7wh0_.dbf

contents of Memory Script:
{ 
# set requested point in time
set until scn 803916; 
# online the datafiles restored or switched
sql clone "alter database datafile 4 online";
# recover and open resetlogs 
recover clone database tablespace "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs; 
} 
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 4 online

Starting recover at 10-JUN-13
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=54 
channel ORA_AUX_DISK_1: restoring archived log 
archived log thread=1 sequence=55 
channel ORA_AUX_DISK_1: reading from backup piece +FRA/CLASSIC/BACKUPSET/2013_06_10/annnf0_tag20130610t170210_0.277.817750931
channel ORA_AUX_DISK_1: piece handle=+FRA/CLASSIC/BACKUPSET/2013_06_10/annnf0_tag20130610t170210_0.277.817750931 tag=TAG20130610T170210
channel ORA_AUX_DISK_1: restored backup piece 1 
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 
archived log file name=/tmp/recover/1_54_814717091.dbf thread=1 sequence=54 
channel clone_default: deleting archived log(s) 
archived log file name=/tmp/recover/1_54_814717091.dbf RECID=4 STAMP=817751585 
archived log file name=/tmp/recover/1_55_814717091.dbf thread=1 sequence=55 
channel clone_default: deleting archived log(s) 
archived log file name=/tmp/recover/1_55_814717091.dbf RECID=5 STAMP=817751587 
media recovery complete, elapsed time: 00:00:01 
Finished recover at 10-JUN-13

database opened

 

At this point, RMAN runs a Data Pump export/import to move the table from the auxiliary instance back to the target database:
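If you prefer to control this Data Pump phase yourself, RECOVER TABLE accepts clauses for it. A hedged sketch (the DATAPUMP DESTINATION, DUMP FILE, NOTABLEIMPORT and REMAP TABLE clauses are part of the 12c syntax; the dump file name is hypothetical, the other values come from this example):

```sql
-- Sketch: keep the dump file and skip the automatic import...
RMAN> RECOVER TABLE ludovico.reco
      UNTIL SCN 803916
      AUXILIARY DESTINATION '/tmp/recover'
      DATAPUMP DESTINATION '/tmp/recover'
      DUMP FILE 'reco_recovered.dmp'
      NOTABLEIMPORT;

-- ...or import the table under a different name alongside the current one:
-- REMAP TABLE 'LUDOVICO'.'RECO':'RECO_PREV'
```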

contents of Memory Script:
{ 
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/recover''"; 
# create directory for datapump export 
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/recover''"; 
} 
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/recover''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/recover''

Performing export of tables...
EXPDP> Starting "SYS"."TSPITR_EXP_kCoq_fqic": 
EXPDP> Estimate in progress using BLOCKS method...
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 64 KB 
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE 
EXPDP> Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
EXPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS 
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER 
EXPDP> . . exported "LUDOVICO"."RECO" 5.054 KB 2 rows
EXPDP> Master table "SYS"."TSPITR_EXP_kCoq_fqic" successfully loaded/unloaded 
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_kCoq_fqic is: 
EXPDP> /tmp/recover/tspitr_kCoq_82218.dmp 
EXPDP> Job "SYS"."TSPITR_EXP_kCoq_fqic" successfully completed at Mon Jun 10 17:14:44 2013 elapsed 0 00:00:41
Export completed

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
IMPDP> Master table "SYS"."TSPITR_IMP_kCoq_Fbti" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_kCoq_Fbti":
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported "LUDOVICO"."RECO" 5.054 KB 2 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/IDENTITY_COLUMN
IMPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job "SYS"."TSPITR_IMP_kCoq_Fbti" successfully completed at Mon Jun 10 17:15:12 2013 elapsed 0 00:00:07
Import completed

 

Finally, the auxiliary instance is removed and its files are deleted:

Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/recover/CLASSIC/datafile/o1_mf_temp_8vcv5mrt_.tmp deleted
auxiliary instance file /tmp/recover/KCOQ_PITR_CLASSIC/onlinelog/o1_mf_3_8vcv8nfh_.log deleted
auxiliary instance file /tmp/recover/KCOQ_PITR_CLASSIC/onlinelog/o1_mf_2_8vcv8fqb_.log deleted
auxiliary instance file /tmp/recover/KCOQ_PITR_CLASSIC/onlinelog/o1_mf_1_8vcv86fv_.log deleted
auxiliary instance file /tmp/recover/KCOQ_PITR_CLASSIC/datafile/o1_mf_users_8vcv7wh0_.dbf deleted
auxiliary instance file /tmp/recover/CLASSIC/datafile/o1_mf_sysaux_8vctzf32_.dbf deleted
auxiliary instance file /tmp/recover/CLASSIC/datafile/o1_mf_undotbs1_8vctzfc9_.dbf deleted
auxiliary instance file /tmp/recover/CLASSIC/datafile/o1_mf_system_8vctzf86_.dbf deleted
auxiliary instance file /tmp/recover/CLASSIC/controlfile/o1_mf_8vctyxcy_.ctl deleted
auxiliary instance file tspitr_kCoq_82218.dmp deleted
Finished recover at 10-JUN-13

 

We can check that our table is back:

RMAN> select * from ludovico.reco;

FIELD1
--------------------------------------------------
foo
test

RMAN>

 

Oh, and yes, now we can select directly from RMAN! 🙂
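For reference, the whole sequence shown above is the output of a single command. Reconstructing it from the values visible in the log (table LUDOVICO.RECO, SCN 803916, auxiliary destination /tmp/recover), the invocation was something like:

```sql
RMAN> RECOVER TABLE ludovico.reco
      UNTIL SCN 803916
      AUXILIARY DESTINATION '/tmp/recover';
```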

 

My opinion

  • It still needs enough space to recover the auxiliary instance (SYSTEM, SYSAUX, UNDO, TEMP and the user tablespace containing the missing data), so it has all the drawbacks of a typical TSPITR, but it’s automatic, which is a real improvement in everyday use.
  • Restoring the user tablespace separately from the system tablespaces can be an issue if your backup sets are stored on tape: you can end up reading the same backup set twice when it could have been read once.

Cheers

Ludovico