Oracle Database Backup Logging Recovery Appliance – a preview

Please see the disclaimer at the end of the post.

Oracle announced the new Oracle Database Backup Logging Recovery Appliance at Open World 2013, but it has not been released to the market yet, and very little information is available on the Oracle website.

During the last IOUG Collaborate 14, Larry Carpenter, Oracle Master Product Manager for Data Guard and MAA, unveiled something more about the DBLRA (call it “Debra” to simplify your life 🙂 ), and I had the chance to discuss it directly with him.

At Trivadis we think that this appliance will be a game changer in the world of backup management.

Why?

Well, if you have ever worked for a big company with many hundreds of databases, you have certainly encountered many of these common problems:

  • Oracle Backup and restore penalized by a shared infrastructure
  • Poor backup or restore performance
  • Tape drives busy when you need them urgently
  • Complex management of backup retentions

That’s not all. As of now, your best recovery point in case of a restore is directly tied to your archive and backup frequency. Oh yes, you can lower your archive_lag_target parameter and increase your log switch frequency (and thus the I/O), and still have… 10, 15, 30 minutes of possible data loss? Unless you protect your transactions with Data Guard. But this will cost you money. For the additional server and storage. For the licenses. And for the effort required to put in place a Data Guard instance for every database that you want to protect. If you want to protect your transactions from a storage failure, there’s a price to pay.
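
To give an idea, the only knob you really have today is how often the redo gets archived; a minimal sketch (the 900-second value is just an example, not a recommendation):

    -- Force a log switch at least every 15 minutes (900 seconds), shrinking
    -- the worst-case data-loss window of archive-based backups at the cost
    -- of more frequent log switches and extra I/O.
    ALTER SYSTEM SET archive_lag_target = 900 SCOPE = BOTH;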

The Database Backup Logging Recovery Appliance (wow, I need to copy and paste the name to save time! :-)) overcomes these problems with a simple but brilliant idea: leveraging the existing redo transport mechanism and shipping the redo stream directly to the backup appliance (the DBLRA, of course) or to its Cloud alter ego, hosted by Oracle.
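
There is no documentation yet, so the following is pure speculation on my side: if the appliance really reuses the standard redo transport syntax, the setup on a protected database might look something like this sketch (the dblra_svc service name is made up):

    -- Hypothetical example: ship redo asynchronously to the appliance,
    -- assuming it is reachable through a regular Oracle Net service.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=dblra_svc ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
      SCOPE = BOTH;
    ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE = BOTH;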

(Picture: DBLRA architecture overview)

As you can infer from the picture, 12c databases will work natively with the appliance, while previous releases will have a plugin that will enable all the capabilities.

Backups can be mirrored selectively to another DBLRA, or copied to the cloud or to a 3rd party (Virtual) Tape Library.

The backup retention is enforced by the appliance, and expiration and deletion are done automatically using the embedded RMAN catalog.
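
For comparison, this is roughly what every DBA has to configure and schedule on each database today (the 14-day window is only an example); the appliance is expected to centralize this policy and handle the expiration itself through its embedded catalog:

    # Per-database retention as it is typically handled today: keep enough
    # backups to restore to any point within the last 14 days, and
    # periodically purge everything that falls outside the window.
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
    DELETE NOPROMPT OBSOLETE;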

Lightning-fast backups and restores are guaranteed by the hardware: the DBLRA is based on the same hardware used by Exadata, with High Capacity disks. Optional storage extensions can be added to increase the capacity, but all the data, as I’ve said, can be offloaded to VTLs in order to use cheaper storage for older backups.

To sum up, the key values are:

  • No transaction loss!!
  • Lightning fast backups and restores
  • Integrated, Oracle engineered, scalable solution for hundreds to thousands of databases

Looking forward to seeing it in action!

I cannot cover all the information I have in a single post, but Trivadis is actively working to be ready to implement it at the time of its launch to the market (estimated in 2014), so feel free to contact me if you are interested in boosting your backup environment. 😉

By the way, I expect that the competitors (IBM, Microsoft?) will try to develop a solution with the same characteristics in terms of reliability, or they will lose ground.

Cheers!

Ludovico

Disclaimer: This post is intended to outline Oracle’s general product direction based on information gathered at public conferences. It is intended for informational purposes only. The development and release of these functionalities and features, including the release dates, remain at the sole discretion of Oracle, and no documentation is available at this time. The features and commands shown may or may not be accurate when the final product reaches GA (General Availability).
Please refer to the Oracle documentation when it becomes available.

Exciting News from Oracle Open World 2013

I’m back at work now, safely, after the week in San Francisco.

It’s time to sit down and try to pull together some thoughts about what I’ve experienced and done.

I’ll start with the new announcements, which are what matters most to most people, and leave my personal experience for my next post.


In-memory Database Option

Oracle has announced the In-Memory option for the Oracle Database. This feature will store the data simultaneously in the traditional row-based format and in a new in-memory columnar format, to optimally serve both analytics and OLTP workloads AT THE SAME TIME. Because the column-based copy is redundant, it will work without a logging mechanism, so the overhead will be minimal. The marketing message claims “ungodly speed”: 100x faster queries for analytics and 2x faster queries in OLTP environments.

By serving analytics and OLTP with different storage formats, the indexes on the row-based version of the table can be reduced to make transactions faster: the analytical indexes can be dropped, thanks to the columnar format that is already optimized for that kind of workload. The activation of the option will be transparent to the applications.

How will it be activated?
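
Few details have been given so far. Purely as a speculative sketch, assuming it boils down to sizing a dedicated memory area and flagging tables, it might look like this (parameter name, clause and sizes are guesses of mine):

    -- Speculative sketch only: parameter name, clause and sizes are assumptions.
    ALTER SYSTEM SET inmemory_size = 8G SCOPE = SPFILE;  -- reserve a columnar area in memory
    ALTER TABLE sales INMEMORY;                          -- flag a table for the columnar format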

Now my considerations:

  • [evil] Will this option make your database faster than putting it on an actual Exadata?
  • It will be an option, so it will cost extra money on top of the Enterprise Edition
  • [I guess] it will be released with 12cR2, because such a big change cannot be introduced simply with a patch set. So I think we will not see it before the end of 2014
  • And, uh, Maria Colgan has given up the Product Management of the Cost Based Optimizer to become the Product Manager of the In-Memory option. Tom Kyte will take ownership of the CBO.


M6-32 Big Memory Machine

I’ve paid much less attention to this announcement. The new big super hyper machine engineered by Oracle will have:

  • 1024 DIMMS
  • 32TB of DRAM
  • 12 cores per processor
  • 96 threads per processor

This huge memory machine can be connected through InfiniBand to an Exadata to rely on its storage cells.

But it will cost $3M, so it’s not really intended for SMBs or for the average DBA; that’s why I don’t care too much about it…


Oracle Database Backup, Logging, Recovery Appliance

Only 8 minutes in the keynote to introduce this appliance, which is really hot, IMHO. This… oh my… let’s call it ODBLRA, is a backup appliance (based on the same HW as Exadata) capable of receiving the redo stream over SQL*Net, the same way as it’s done with Data Guard, except that instead of having a standby database, you’ll have an appliance capable of storing the redo stream of your entire DB farm and keeping a real-time backup of your transactions. That’s it: no transactions lost between two backups, and no need to have hundreds of Data Guard setups or network filesystems as secondary destinations in order to make your redo stream safer.

I guess that it will host an RMAN-aware engine that can create incrementally updated backups, so that you can almost forget about full backups. You can leverage an existing tape infrastructure to offload the appliance if it starts getting full.
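
For reference, this is what a classic incrementally updated backup looks like in plain RMAN today (the tag is made up); presumably the appliance would run something equivalent on its side, so that the databases never need another full backup:

    # Roll the previous level-1 increments into the image copy,
    # then take a fresh level-1 incremental for the next cycle.
    RUN {
      RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
      BACKUP INCREMENTAL LEVEL 1
        FOR RECOVER OF COPY WITH TAG 'incr_upd'
        DATABASE;
    }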

Your ODBLRA can also replicate your backups to another appliance hosted on the Oracle Cloud: ODBLRAaaS!  🙂

To conclude, Oracle is pushing for bigger, dedicated, specialized SPARC machines instead of relying on commodity hardware…


Oracle Multi-tenant Self-Service Provisioning

There’s a new APEX application, now in beta, that can be downloaded from the Oracle Multitenant page and provides self-service provisioning of databases in a Multitenant architecture. It’s worth a try… if you plan to introduce the Multitenant option in your environment!
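
Under the hood, this kind of self-service provisioning boils down to the new 12c Multitenant statements, something like the sketch below (PDB name, paths and password are all made up):

    -- Illustrative only: clone a new pluggable database from the seed.
    CREATE PLUGGABLE DATABASE sales_pdb
      ADMIN USER pdbadmin IDENTIFIED BY "Change_me_1"
      FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/sales_pdb/');
    ALTER PLUGGABLE DATABASE sales_pdb OPEN;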


All products in the Cloud

Oracle now offers (as a preview) its Database, Middleware and Applications as a Service, in its public cloud. For a DBA, the following can be of interest:

The Storage aaS uses Java and REST APIs (OpenStack Swift) for access to the object storage.

The Computing aaS allows you to scale the computing power to follow your needs.

The Database aaS is the standard, full-featured Oracle Database (in the cloud!), 11gR2 or 12c, in all editions (SE, SE1, EE). You can choose among five different sizes, up to 17 cores and 256GB of RAM, and among the following formulas:

  • Single Schema (3 sizes: 5, 20 or 50GB, with prices from $175/month to $2,000/month)
  • Basic Database (user-managed, single-instance preconfigured databases with only a local EM)
  • Managed Database (single-instance with managed backups & PITR, and managed quarterly application of critical patches)
  • Premium Managed Database (fully managed RAC, with optional DG or Active DG, PDBs, and upgrades)

My considerations:

  • Oracle releases this cloud offering with a significant delay compared to its competitors
  • It’s still in preview and there’s no information about the billing model. Depending on that, it can be more or less attractive.
  • As with other cloud services, performance will be acceptable only when the whole stack (WebLogic, DB, etc.) is placed in the same cloud


Oracle on Azure

Microsoft is starting to offer preconfigured Oracle platforms, Database and WebLogic, on Azure, on both Linux and Windows systems. I haven’t seen the price list yet, but IMHO Azure has been around for a long time now, and it appears to be a reliable and settled alternative compared to Oracle Cloud. Nice move, Microsoft; I think it deserves special attention.


Keynotes recordings

You can see the full keynote recordings here:

Oracle OpenWorld Keynote Highlights

Larry Ellison — Oracle OpenWorld Keynote 9-22-2013

Oracle OpenWorld General Session 2013: Database

Kurian and Fowler — Oracle OpenWorld Keynote 9-24-2013


Will these announcements change your life? Let me know…

…and stay tuned, I’ll come back soon with a new post about my “real” week at the Open World and why I loved it.

Ludovico