However, the only available cloning procedure uses the PULL method to get the data from the remote source PDB.
This can be a limitation, especially in environments where the cloning host has no direct access to production, or where the clones must be created in the Cloud with no direct access to the on-premises production VLAN.
So, one common approach is to clone/detach locally, put the PDB files in the Object Store and then attach them in the cloud.
Another approach is to use SSH tunnels. If you follow my blog, you know they are something I use every now and then to reach the cloud from on-premises.
How to set it up?
Actually, it is super-easy: just prepare a script in the cloud that will run the CREATE PLUGGABLE DATABASE statement, then trigger it from on-premises.
The cloud database should use OMF so you don’t have to worry about file name conversions.
At this point, if you have correctly set up the SSH keys to connect to the cloud server, it is just a matter of running the script remotely through the proper SSH tunnel. Once the remote port binding is established, the cloud server can reach the on-premises listener port via localhost:remote_bind:
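To sketch the whole setup (all host names, ports, credentials and PDB names below are hypothetical placeholders, and the exact clone options depend on your release and source PDB state):

```shell
# --- clone_pdb.sh, prepared on the cloud server ---
# The database link points to localhost:1522, i.e. back through the
# SSH tunnel to the on-premises listener. With OMF in place on the
# cloud CDB (db_create_file_dest set), no FILE_NAME_CONVERT is needed.
sqlplus -s / as sysdba <<'EOF'
CREATE DATABASE LINK onprem_link
  CONNECT TO c##clone_user IDENTIFIED BY "secret"
  USING '//localhost:1522/SRCCDB';
CREATE PLUGGABLE DATABASE clonepdb FROM srcpdb@onprem_link;
ALTER PLUGGABLE DATABASE clonepdb OPEN;
EOF

# --- from on-premises: open the reverse tunnel and trigger it ---
# -R 1522:localhost:1521 binds port 1522 on the cloud server and
# forwards it back to the local listener on port 1521.
ssh -R 1522:localhost:1521 oracle@cloudsrv 'sh clone_pdb.sh'
```

The common-user name and privileges required on the source side vary by version; the point is only that the link resolves through the tunnel, so the cloud side never needs direct network access to the production VLAN.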
Cloud. What a wonderful word. Wonderful and gray.
If you are involved in the Oracle Community, blogs and conferences, you certainly care about it and have perhaps your own conception of it or ideas about how to implement it.
My Collaborate 2015 RAC SIG experience
During the last Collaborate Conference, I’ve “tried” to animate the traditional RAC SIG Round-Table with this topic:
In the last few years, cloud computing and infrastructure optimization have been the leading topics that guided the IT innovation. What’s the role of Oracle RAC in this context?
During this meeting leading RAC specialists, product managers, RAC SIG representatives and RAC Attack Ninjas will come together and discuss with you about the new Oracle RAC 12c features for the private cloud and the manageability of RAC environments.
Join us for the great discussion. This is your chance to have a great networking session!
Because it’s the RAC SIG meeting, most of the participants DO HAVE a RAC environment to manage, and are looking for best practices and ideas to improve it, or maybe they want to share their experiences.
I’ve started the session by asking how many people are currently operating a private cloud and how many would like to implement it.
To my great surprise (so great that I immediately felt uncomfortable), nobody raised their hand except one single person.
I spent a very bad minute, almost speechless, asking myself: “is my conception of private cloud wrong?”. Then my good friend Yury came to my aid and we started discussing the RAC features that enable private cloud capabilities. During those 30 minutes, almost no users intervened. Then the Oracle Product Managers (RAC, ASM, QoS, Cloud) started explaining their point of view, and I suddenly realized that
when talking about Private Cloud, there is a huge gap between the Oracle Private Cloud implementation best practices and smaller customers’ skills and budgets.
When Oracle product managers talk about Private Cloud, they target big companies and advise planning the infrastructure using:
The full pack of options, for a total of 131k per CPU:
Enterprise Edition (47.5k)
Real Application Clusters (23k)
Diagnostic Pack (7.5k)
Tuning Pack (5k)
Lifecycle Management Pack (12k)
Cloud Management Pack (7.5k)
Policy Managed Databases
Quality of Services Management
Rapid Home provisioning
Enterprise Manager and DBaaS Self Service portal
The CapEx needed for such a stack is definitely a showstopper for most small and medium companies. And it’s not only about the cost: when I gave my presentation about Policy Managed Databases at Collaborate in 2014, and later about Multitenant and MAA at Open World, it was clear that “almost” nobody (let’s say less than 5%, just to give an idea) uses these new technologies. Many of them are new and, in some cases, not stable. Notably, Multitenant and QoS do not work together as of now: QoS will work with the new PDB-level resource manager only in release 12.2 (and even that is not guaranteed).
For the average company (or the average DBA), there is more than enough to be scared about, so private cloud is not seen as easy to implement.
So there’s no private cloud solution for SMBs?
It really depends on what you want to achieve, and at which level.
Based on my experience at Trivadis, I can say that you can achieve Private Cloud for less. Much less.
What should a Private Cloud guarantee? According to the NIST definition, five things:
On-demand self-service.
Broad network access.
Resource pooling.
Rapid elasticity.
Measured service.
Number 5 is a clear field of EM, and the new AWR Warehouse feature may be of great help, for free (but you can still do a lot on your own with Statspack and some scripting, if you are crazy enough to do it without the Diagnostic Pack).
Numbers 3 and 4 are a peculiarity of RAC, and they are included in the EE+RAC license. By leveraging OVM, there are very good savings opportunities if the initial sizing of the solution is a problem: with OVM you can start as small as you want.
Number 1 depends on standards and automation already in place at your company. Generally speaking, nowadays scripting automatic provisioning with DBCA and APEX is very simple. If you’re not comfortable with coding, tools like the Trivadis Toolbox make this task easier. Moreover, nobody said that the self-service provisioning must be done through a web interface by the final user. It might be (and usually is) triggered by an event, like the creation of a service request, so you can keep web development outside of your cloud.
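To make the idea concrete, here is a minimal provisioning sketch: a service-request handler only needs to invoke DBCA in silent mode with the requested name. The template name, credentials and sizing below are hypothetical placeholders.

```shell
# Minimal self-service provisioning sketch: DBCA in silent mode
# creates a database from a template, so the "self-service" part
# reduces to calling this from whatever triggers the request.
dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbName "${REQUESTED_DB_NAME}" \
     -sysPassword "${SYS_PWD}" \
     -systemPassword "${SYS_PWD}" \
     -storageType ASM -diskGroupName DATA \
     -totalMemory 4096
```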
Putting all together
You can create a basic Private Cloud that fits perfectly your needs without spending or changing too much in your RAC environment.
Automation doesn’t have to mean cost: you can do it on your own and keep it simple. If you need advice, ideas or some help, just drop me an email (firstname.lastname@example.org); it would be great to discuss your needs for private cloud!
Things can be less complex than what we often think. Our imagination is the new limit!
I’m back at work now, safely, after the week in San Francisco.
It’s time to sit down, and try to pull out some thought about what I’ve experienced and done.
I’ll start from the new announcements, what is most important for most people, and leave my personal experience for my next post.
In-memory Database Option
Oracle has announced the In-Memory option for the Oracle Database. This feature will store the data simultaneously in the traditional row-based format and in a new in-memory columnar format, to serve both analytics and OLTP workloads optimally AT THE SAME TIME. Because the columnar copy is redundant, it will work without a logging mechanism, so the overhead will be minimal. The marketing message claims “ungodly speed”: 100x faster queries for analytics and 2x faster queries in OLTP environments.
By separating Analytics and OLTP with different storage formats, the indexes on the row-based version of the table can be reduced to make the transactions faster, getting rid of the analytical indexes thanks to the columnar format, which is already optimized for that kind of workload. The activation of the option will be transparent to the applications.
How will it be activated?
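The activation mechanism was not public at announcement time; for reference, here is a sketch using the declarative syntax that eventually shipped with 12.1.0.2 (the size and table name are purely illustrative):

```shell
sqlplus -s / as sysdba <<'EOF'
-- carve out the in-memory column store (takes effect after restart)
ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;
-- opt individual objects in; applications need no change
ALTER TABLE sales INMEMORY;
EOF
```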
Now my considerations:
[evil] Will this option make your database faster than putting it on an actual Exadata?
It will be an option, so it will cost extra money on top of the Enterprise Edition
[I guess] it will be released with 12cR2, because such a big change cannot be introduced simply with a patch set. So I think we won’t see it before the end of 2014
And, uh, Maria Colgan has given up Product Management of the Cost Based Optimizer to become the Product Manager of the In-Memory option. Tom Kyte will take ownership of the CBO.
M6-32 Big Memory Machine
I’ve paid much less attention to this new announcement. The new big super hyper machine engineered by Oracle will have:
32TB of DRAM
12 cores per processor
96 threads per processor
This huge memory machine can be connected through InfiniBand to an Exadata to rely on its storage cells.
But it will cost $3M, so it’s not really intended for SMBs or for the average DBA; that’s why I don’t care too much about it…
Only 8 minutes of the keynote were spent introducing this appliance, which is really hot, IMHO. This… oh my… let’s call it ODBLRA, is a backup appliance (based on the same HW as Exadata) capable of receiving the stream of redo logging over SQL*Net, the same way as it’s done with Data Guard, except that instead of having a standby database, you’ll have an appliance capable of storing the redo stream of your entire DB farm and keeping a real-time backup of your transactions. That’s it: no transactions lost between two backups and no need for hundreds of Data Guard setups or network filesystems as secondary destinations to make your redo stream safer.
I guess that it will host an RMAN-aware engine that can create incrementally updated backups, so that you can almost forget about full backups. You can leverage an existing tape infrastructure to offload the appliance if it starts getting full.
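The incrementally-updated backup pattern such an engine would presumably automate already exists in plain RMAN; a standard sketch (the tag name is arbitrary):

```shell
# Standard RMAN incremental-merge pattern: each run first applies the
# previous level-1 backup to the datafile copies, then takes a new
# level 1 — so a full backup is needed only once, at the start.
rman target / <<'EOF'
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY
    WITH TAG 'incr_merge' DATABASE;
}
EOF
```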
Your ODBLRA can also replicate your backups to another appliance hosted in the Oracle Cloud: ODBLRAaaS! 🙂
To conclude, Oracle is pushing for bigger, dedicated, specialized SPARC machines instead of relying on commodity hardware…
Oracle Multi-tenant Self-Service Provisioning
There’s a new APEX application, now in BETA, that can be downloaded from the Oracle Multitenant Page that provides self-service provisioning of databases in a Multitenant architecture. It’s worth a try… if you plan to introduce the Multitenant option in your environment!
All products in the Cloud
Oracle now offers (as a preview) its Database, Middleware and Applications as a Service, in its public cloud. For a DBA, the following can be of interest:
The Storage aaS uses Java & REST APIs (OpenStack Swift) for block-level access to the storage.
The Computing aaS allows you to scale the computing power to follow your computing needs.
The Database aaS is the standard, full-featured Oracle Database (in the cloud!), 11gR2 or 12c, in all editions (SE, SE1, EE). You can choose among five different sizes, up to 17 cores and 256 GB of RAM, and among 3 different formulas:
Single Schema (3 sizes: 5, 20 or 50 GB, with prices from $175/month to $2,000/month)
Basic Database (user-managed, single-instance preconfigured databases only with a local EM)
Managed Database (single-instance with managed backups & PITR, managed quarterly apply of critical patches)
Premium Managed Database (fully managed RAC, with optional DG or Active DG, PDB and upgrades)
Oracle releases this cloud offering with a significant delay compared to its competitors
It’s still in preview and there’s no information about the billing scheme yet. Depending on that, it can be more or less attractive.
As with other cloud services, the performance will be acceptable only when putting the whole stack into the same cloud (WebLogic, DB, etc.)
Oracle on Azure
Microsoft is starting to offer preconfigured Oracle platforms, Database and WebLogic, on Azure, on both Linux and Windows systems. I haven’t seen the price list yet, but IMHO Azure has been around for a long time now, and it appears to be a reliable, settled alternative compared to the Oracle Cloud. Nice move, Microsoft; I think it deserves special attention.
Will these announcements change your life? Let me know…
…and stay tuned, I’ll come back soon with a new post about my “real” week at the Open World and why I’ve loved it.