Check, check… Does the mic still work? #JoelKallmanDay

Update PHP:
Update WordPress:
New content:

It has been almost six months since my last blog post. What a bad score!
It’s not a coincidence that I’m blogging today during #JoelKallmanDay.
A day that reminds the community how important it is to share. Knowledge, mostly. But also good and bad experiences, emotions…

A bittersweet day, at least for me.
On the bitter side: it reminds me of Joel, Pieter, and other friends who are not there anymore. It reminds me that, as a Product Manager, I have big shoes to fill, and that no matter how well I do, I will always feel it is not good enough for the high expectations I set for myself. Guess what! Being a PM is way more complicated than I expected when I applied for the position two years ago. So many things to do or learn, so many requests, and so many customers! And being a PM at Oracle is probably twice as complicated, because no matter how well I (or we as a team) do, there will always be a portion of the community that picks on Oracle technology for one reason or another.

On the bright side: it reminds me that I am incredibly privileged to have this role, working in a great team and helping the most demanding customers to get the most out of incredible technology. I love sharing, teaching, giving constructive feedback, producing quality content, and improving the customer experience. This is the sweet part of the job, where I am still taking baby steps when comparing myself to the PM legends we have in our organization. They are always glad to explain our products to the community, the customers, and colleagues! And they are all excellent mentors, each with a different style, background, and personal life.

And knowing people personally is, at least for me, the best thing about being part of a community (outside Oracle) and a team (inside Oracle). We all strive for the best technical solutions, performance, developer experience, or uptime for the business. But we are human first of all. And this is what #JoelKallmanDay means to me: trying to be a better human as a goal, so that everything else comes naturally, including being a great colleague, community servant, or friend.

Parallel Oracle Catalog/Catproc creation with catpcat.sql

With 19c, Oracle has released a new script, annotated for parallel execution, to create the CATALOG and CATPROC components in parallel at database creation.

I have a customer who is in the process of migrating massively to Multitenant using many CDBs, so I decided to give it a try and check how much time they could save when creating their CDBs.

I have run the tests on my laptop, on a VirtualBox VM with 4 vCPUs.

Test 1: catalog.sql + catproc.sql

In this test, I use the classic way (which is also what happens when DBCA generates the creation scripts):

The catalog is created first on CDB$ROOT and PDB$SEED. Then the catproc is created.
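
For reference, a minimal sketch of what the classic, serial run looks like when driven through catcon.pl (paths, log locations, and log base names are hypothetical; DBCA-generated scripts do something equivalent):

    # Sketch: run catalog.sql, then catproc.sql, serially through catcon.pl
    cd $ORACLE_HOME/rdbms/admin
    $ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -l /tmp/catlogs -b catalog catalog.sql
    $ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -l /tmp/catlogs -b catproc catproc.sql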

Looking at the very first occurrence of BEGIN_RUNNING (start of catalog for CDB$ROOT) and the very last of END_RUNNING in the log (end of catproc in PDB$SEED), I can see that it took ~ 44 minutes to complete:
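
To give an idea of how I measured it, this is roughly the kind of check run against the catcon logs (file names are hypothetical and depend on the log base names used):

    # first BEGIN_RUNNING (start of catalog on CDB$ROOT)
    grep "BEGIN_RUNNING" catalog*.log | head -1
    # last END_RUNNING (end of catproc on PDB$SEED)
    grep "END_RUNNING" catproc*.log | tail -1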

 

Test 2: catpcat.sql

In this test, I use the new catpcat.sql, run through catctl.pl with a parallelism of 4 processes:

This creates catalog and catproc first on CDB$ROOT, then creates them on PDB$SEED. So, the same steps, but in a different order.
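
For reference, the invocation looks roughly like this (treat it as a sketch: the exact catctl.pl options may vary slightly between releases):

    # Sketch: run catpcat.sql through catctl.pl with 4 parallel processes
    cd $ORACLE_HOME/rdbms/admin
    $ORACLE_HOME/perl/bin/perl catctl.pl -n 4 catpcat.sql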

By running vmstat in the background, I noticed that most of the time the creation was running serially, and when there was some parallelism, it was short and offset by a lot of process synchronization (waits, sleeps) done by catctl.pl.

In the end, the process took ~ 45 minutes to complete.

So the answer is no: it is not faster to run catpcat.sql with parallel degree 4 compared to running catalog.sql and catproc.sql serially.

HTH

Ludo

Duplicating a DB and setting up Data Guard through CMAN and SSH tunnel

I am fascinated by the new Zero Downtime Migration tool that has been available since November 28th. While I am still in the process of testing it, there is one big requirement that might cause headaches for some customers. It is about network connectivity:

Configuring Connectivity Between the Source and Target Database Servers

The source database server […] can connect to target database instance over target SCAN through the respective scan port and vice versa.
The SCAN of the target should be resolvable from the source database server, and the SCAN of the source should resolve from the target server.
Having connectivity from both sides, you can synchronize between the source database and target database from either side. […]

If you are taking cloud migrations seriously, you should have either a site-to-site VPN to the cloud or a FastConnect link. At CERN we are quite lucky to have a high-bandwidth FastConnect to OCI Frankfurt.

Many customers might not meet this requirement, so what is a possible solution to set up connectivity for database duplicates and Data Guard setups?

In the picture above you can see a classic situation that usually presents two problems to solve:

  • the SCAN addresses are private: not accessible from the internet
  • there are multiple SCAN addresses, so tunneling through all of them might be complex

Is it possible to configure CMAN in front of the SCAN listener as a single IP entry and tunnel through SSH to this single IP?

I will show now how to achieve this configuration.

For the sake of simplicity, I have put two single instances without SCAN and a CMAN installation on the database servers, but it will work with little modification using SCAN and RAC setups as well. Note that in a Cloud Infrastructure setup, this will require a correct setup of the TDE wallet on both the source and the destination.

Because I put everything on a single host, I have to set up CMAN to listen on another port, but having a separate host for CMAN is a better practice when it has to proxy to SCAN listeners.

Installing and configuring CMAN

The most important part of the whole setup is that the CMAN on the standby site must have a public IP address and open SSH port so that we can tunnel through it.

The on-premises CMAN must have open access to the standby CMAN port 22.

For both the primary and standby sites, you can follow the instructions in my blog post: Install and configure CMAN 19c in the Oracle Cloud, step by step.

In my example, because I install CMAN on the same host as the DB, I configure CMAN to run on port 1522.

CMAN primary config:

CMAN standby config:
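
I am not reproducing my exact files here, but a minimal cman.ora on each site might look roughly like this (host name and alias are hypothetical; the standby side is identical apart from its own host name):

    # Sketch of a minimal cman.ora with a permissive rule list
    cat >> $ORACLE_HOME/network/admin/cman.ora <<'EOF'
    cman1 =
      (configuration =
        (address = (protocol = tcp)(host = cman-onprem.example.com)(port = 1522))
        (rule_list =
          (rule = (src = *)(dst = *)(srv = *)(act = accept))
        )
      )
    EOF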

This configuration is not secure at all; you might want to harden it further in order to allow only the services needed for setting up Data Guard.

The registration of database services to CMAN through the remote_listener parameter is optional, as I will register the entries statically in the listener and use a routed connection through CMAN.

Listener configuration

The listener must have a static entry for the database, so that duplicate and switchover work properly.

On the primary, add to listener.ora:

On the standby:
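
As a sketch, the static entry on each side might look like this (SID, service name, and Oracle Home are hypothetical; the standby side uses its own SID and DB_CLOUD as the global name):

    # Sketch: static registration in listener.ora on the primary
    cat >> $ORACLE_HOME/network/admin/listener.ora <<'EOF'
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = DB_ONPREM)
          (SID_NAME = ONPREM1)
          (ORACLE_HOME = /u01/app/oracle/product/19.0.0/dbhome_1)
        )
      )
    EOF
    lsnrctl reload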

In a RAC config, all the local listeners must be configured with the correct SID_NAME running on the host. Make sure to reload the listeners 😉

Creating the SSH tunnels

There must be two tunnels open: one that tunnels from on-premises to the cloud and the other that tunnels from the cloud to on-premises.

However, such tunnels can both be created from the on-premises CMAN host that has access to the cloud CMAN host:

In my case, the hostnames are:

Important: with CMAN on a host other than the DB server, the CMAN sshd must be configured to have GatewayPorts set to yes:

After the tunnels are open, any connections to the local CMAN server port 1523 will be forwarded to the remote CMAN port 1522.
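
As an illustration only (host names are hypothetical, and the exact bind options depend on where CMAN runs relative to the DB servers), the two tunnels opened from the on-premises CMAN host might look like this:

    # 1) local forward: on-prem port 1523 -> cloud CMAN port 1522
    #    (-g lets other hosts connect to the forwarded port)
    ssh -f -N -g -L 1523:localhost:1522 opc@cman-cloud.example.com

    # 2) remote forward: cloud port 1523 -> on-prem CMAN port 1522
    #    (requires GatewayPorts yes in the cloud host's sshd_config)
    ssh -f -N -R '*:1523:localhost:1522' opc@cman-cloud.example.com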

Configuring the TNSNAMES to hop through CMAN and SSH tunnel

Both servers must now have one entry for the local database pointing to the actual SCAN (or listener for single instances), and one entry for the remote database pointing to local port 1523 and routing to the remote SCAN.

On-premises tnsnames.ora:

Cloud tnsnames.ora:
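
The remote entries use Oracle Net source routing (SOURCE_ROUTE = yes), so the connection hops through CMAN first. As a hypothetical example (host and service names are made up), the on-premises entry for the cloud database might look like this:

    # Sketch: on-premises tnsnames.ora entry for the remote (cloud) database.
    # First hop = local tunnel endpoint on port 1523, second hop = remote listener.
    cat >> $ORACLE_HOME/network/admin/tnsnames.ora <<'EOF'
    DB_CLOUD =
      (DESCRIPTION =
        (SOURCE_ROUTE = yes)
        (ADDRESS = (PROTOCOL = TCP)(HOST = cman-onprem.example.com)(PORT = 1523))
        (ADDRESS = (PROTOCOL = TCP)(HOST = db-cloud.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = DB_CLOUD))
      )
    EOF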

After copying the password file and starting the cloud database in NOMOUNT, it should be possible from both sides to connect as SYSDBA to both DB_CLOUD and DB_ONPREM.
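
A quick way to verify it, as a sketch (the net service names match the hypothetical entries above):

    # from the on-premises server
    sqlplus sys@DB_CLOUD as sysdba
    # from the cloud server
    sqlplus sys@DB_ONPREM as sysdba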

This configuration is ready for both duplicate from active database and for Data Guard.
I still have to figure out if it works with ZDM, but I think it is a big step towards establishing connectivity between on-premises and the Oracle Cloud when no VPN or FastConnect is available.

Duplicate from active database

If the setup is correct, this should work:
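
For illustration, the duplicate might be launched roughly like this (connection strings use the hypothetical net service names defined earlier):

    rman target sys@DB_ONPREM auxiliary sys@DB_CLOUD

    RMAN> duplicate target database for standby from active database;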

Setting up Data Guard

  • Configure broker config files
  • Add and clear the standby logs
  • Start the broker
  • Create the configuration:

    The static connect identifier here is better if it uses the TNSNAMES resolution, because each database resolves the other one differently.
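
As a sketch (database and configuration names are hypothetical), the broker setup might look like this:

    dgmgrl sys@DB_ONPREM

    DGMGRL> create configuration 'DG_HYBRID' as
            primary database is 'DB_ONPREM' connect identifier is 'DB_ONPREM';
    DGMGRL> add database 'DB_CLOUD' as connect identifier is 'DB_CLOUD';
    DGMGRL> enable configuration;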

Checking the DG config

A validate first:

Then a switchover, back and forth:
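
A hedged sketch of these checks in DGMGRL (names as above):

    DGMGRL> validate database 'DB_CLOUD';

    DGMGRL> switchover to 'DB_CLOUD';
    DGMGRL> switchover to 'DB_ONPREM';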

Conclusion

Yes, it is possible to set up Data Guard between two sites that have no connectivity except one-directional SSH. The SSH tunnels allow SQL*Net communication to a remote endpoint. CMAN makes it possible to proxy through a single endpoint to multiple SCAN addresses.

However, do not forget the ultimate goal, which is to migrate your BUSINESS to the cloud, not just the database. Therefore, having proper communication to the cloud, with adequate performance, architecture, and security, is crucial. Depending on your target Cloud database, Zero Downtime Migration or MV2ADB should be the correct and supported solutions.

The new era of spam? Technical-related comments that defeat anti-spam filters

Part of the job of every blogger is to update the anti-spam filters (a WordPress plugin in my case, Akismet), read the comments in quarantine, and either approve them, delete them, or mark them as spam (so that the anti-spam plugin can enrich its spam detection algorithms).

So far, my main criterion for approving a comment or not has been to read its content and decide whether it is appropriate for the blog post.

Until today.

Some time ago I wrote this blog post about diagnosing problems in deleting the Audit Trail in Oracle 12c:

DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL not working on 12c? Here’s why…

Last week I got this comment:

Technical. Talking about Oracle Audit (which is relevant to my blog post) and with technical details that only an Oracle professional might know.

So I approved it and read my blog post again to find a proper answer.

Except that my blog post did not talk about Unified Audit, relinking of binaries, or exports!

I then realized that the blogger’s “link” was in fact a website selling sex toys… (yeah, writing this here will bring even more spam… I take the risk 🙂 )

So where did it come from? Actually, the comment had been copied from another blog:

https://petesdbablog.wordpress.com/2013/07/20/12c-new-feature-unified-auditing/

A new type of spam (maybe it has existed for a while? I am not a spam expert).

The spammer (bot?) copies technical comments from blog posts with similar keywords and just changes the name and the link of the author.

Beware, bloggers.

Ludo

Oracle Grid Infrastructure 18c patching part 2: Independent Local-mode Automaton architecture and activation

The first important step before starting to use the new Independent Local-mode Automaton is understanding its components inside a cluster.

Resources

Here’s the list of services that you will find when you install Grid Infrastructure 18c:
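
(The listing itself is the output of a standard resource status query, something like the following.)

    # List all cluster resources and their state
    crsctl stat res -t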

As you can see, there are 4 components that are OFFLINE by default:

Three local resources (that are present on each node):

  • ora.MGMT.GHCHKPT.advm
  • ora.mgmt.ghchkpt.acfs
  • ora.helper

One cluster resource (active on only one server at a time; it can relocate):

  • ora.rhpserver

If you have ever worked with 12c Rapid Home Provisioning, those names should sound familiar.

The GHCHKPT filesystem (and its underlying volume) is used to store data about the ongoing operations across the cluster during the GI home move.

The ora.helper is the process that actually does the operations. It is local because each node needs it to execute some actions at some point.

The rhpserver is the server process that coordinates the operations and delegates them to the helpers.

All those services compose the Independent Local-mode Automaton, which is the default deployment. The full RHP framework (RHP Server and RHP Client) can be configured instead with some additional work.

Important note: Just a few weeks ago Oracle changed the name of Rapid Home Provisioning (RHP) to Fleet Patching and Provisioning (FPP). The name is definitely more appealing now, but it generates again some confusion about product names and acronyms, so beware that in this series sometimes I refer to RHP, sometimes to FPP, but actually it is the same thing.

Tomcat?

You might have noticed that Tomcat is now deployed in the GI home, as there are patches specific to it (here I paste the 18.4 version):

 

Indeed Tomcat is registered in the inventory and patched just like any other product inside the OH:
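
A quick way to check that, as a sketch (the exact inventory wording may differ between versions):

    # Check whether Tomcat appears among the patched components in the GI home inventory
    $ORACLE_HOME/OPatch/opatch lsinventory -details | grep -i tomcat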

 

Out of the box, Tomcat is used for Quality of Service Management (the ora.qosmserver resource):

But it is also used for the Independent Local-mode Automaton, when it is started.

Enabling and starting the independent local-mode automaton

The resources are started using the following commands (as root, the order is quite important):

Before continuing with the rhpserver resource, you might want to check if the filesystem is mounted:

Now the rhpserver should start without problems, as oracle:

Please note that if you do not activate the filesystem first, the rhpserver will fail to start.
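
I am not pasting my exact session here, but the sequence looks roughly like this (a sketch: the volume and diskgroup names are taken from the listing above, and the exact srvctl options vary slightly by version):

    # As root: enable and start the GHCHKPT volume and its ACFS filesystem
    srvctl enable volume     -volume GHCHKPT -diskgroup MGMT
    srvctl start  volume     -volume GHCHKPT -diskgroup MGMT
    srvctl enable filesystem -volume GHCHKPT -diskgroup MGMT
    srvctl start  filesystem -volume GHCHKPT -diskgroup MGMT

    # Check that the filesystem is mounted
    srvctl status filesystem -volume GHCHKPT -diskgroup MGMT

    # As oracle: enable and start rhpserver (the helper comes online with it)
    srvctl enable rhpserver
    srvctl start  rhpserver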

As you can see, now both rhpserver and the helper are online:

Now all is set to start using it!

We’ll see how to use it in the next posts.

Ludo

 

The story of ACME and its CRM with serious SQL injections problems

Preface/Disclaimer

This story is real, but I had to mask some names and introduce some minor changes so that real people are not easy to recognize and the whole story does not sound offensive to anyone. This post is not technical, so my non-technical English will be fully exposed. Sorry for the many errors 🙂

ACME, The Company

ACME is a big, global company. It has a huge revenue and there are almost no competitors on the market that get close to it in terms of fame and earnings.

Its core business is heavily supported by its CRM system, which holds all the customers, contracts, prospects, suppliers…

FOOBAR CRM, The CRM system

Although the CRM is not ACME’s core business, the data in there is really, really precious. Without prospects and customer data, the sales cannot close the deals.

The CRM application (let’s call it FOOBAR CRM) runs on a legacy architecture and it is as old as the company itself.

The architecture is the “good old style” web application that was common in the early 2000s: browser front-end (OK, you might think that it is not so old, huh?), PHP application backed by Apache, MySQL database.

As you can see, quite old but not so uncommon.

One of the big concerns, as in every application lifecycle, is to maintain good code quality. At the beginning of the PHP era, when PHP was still popular, there was a lack of good frameworks (I’m not even sure there are any now; I’m sure Zend Framework was a cool MVC framework, but it came out many years later). The result is that now the code maintenance of the application is literally a pain in the a**.

The customer is a noob in development, so when it was founded and needed a CRM system, the management delegated the development to an external company (let’s call it FOOBAR).

FOOBAR, The software house

The company FOOBAR is as old as the ACME company. Their respective founders were relatives: they started the business together, and now that the founders have left, the partnership is working so well that FOOBAR is also one of the biggest resellers of ACME products (even though its business is only loosely related to ACME’s). FOOBAR is at the same time a partner and a customer, and some members of its board are also part of ACME’s board.

What is important here is that the advice coming from the “common board members” is considered much more important than the advice coming from ACME’s employees, customers, and marketing department.

The code maintainability

ACME started small, with a small, oldish CRM system. But some years later ACME experienced a huge increase in customers, product portfolio, employees, revenues, etc.

In order to cope with the increasing workload of the application, they scaled everything up/out: there are now tens of web servers nicely load balanced, some webcache servers, and they introduced Galera cluster in conjunction with some replicated servers to scale out the database workload.

The global business of ACME also required opening the FOOBAR CRM application to the internet, exposing it to a wide range of potential attacks.

In order to cope with increasing needs, FOOBAR proposed an increasing number of modules, pieces of code, and tools to expand the CRM system. To maximize the profits, FOOBAR decided to employ only junior developers: inexperienced, not familiar at all with developing applications on top of big RDBMS systems, and with a very scarce sense of secure programming.

That’s not all!

In order to develop new features faster, ACME and FOOBAR have an agreement that lets the end users develop their own modules in PHP code and plug them into the application, most of the time directly in production (you may think: that’s completely crazy, this should NEVER happen in a serious company! You know what? I agree 100%).

Uh, I forgot to mention, the employees who use the CRM application and have some development skills are VERY, VERY happy to have the permission to code on their own, because they can develop features or fix bugs by themselves, depending on their needs.

Result: the code is completely out of control: few or no unit tests, no integration tests at all, poor security, tons of bugs.

The big SQL Injection problem

Among the many bugs, SQL injection is the most common. It started with some malicious users trying to play around with injection techniques, but now the attacks are happening more and more frequently:

  • The attacks come from many hackers (not related to each other)
  • Some hackers try to get money for it, some others just steal data, some others just want to mess things up and lower ACME’s reputation…

Every time an attack is successful, ACME loses more and more contracts (and money).

The fix, up to now, has been to track the hacker’s IP address AFTER the attack and add it to the firewall blacklist (not so clever, huh?).

Possible Solutions (according to the security experts)

ACME commissioned an external company to do an assessment. The external company proposed a few things:

  • SOLUTION 1: Completely change the CRM software and use something more modern, modular, secure, and developed by a company that hires top talent. There are tons of cloud vendors that offer CRM Software as a Service, and other big companies with proven on-premises CRM solutions.
  • SOLUTION 2: Keep the current solution, but with a few caveats:
    • All the code accessing the database must be reviewed to avoid injections
    • Only the experienced developers should have the right to write new code (possibly employees of the software house, who will be accountable for new vulnerabilities)
  • SOLUTION 3: Install content-sensitive firewalls and IDS that detect SQL Injection patterns and block them before they reach the web server and/or the database layer.

What the CRM users think

User ALPHA (the shadow IT guy): “We cannot afford to implement any of the solutions: we, as users, need the agility to develop new things for ourselves! And what if there is a bug? If I have to wait for a fix from the software house, I might lose customers or contracts before the CRM is available again!”

User BRAVO (the skeptical): “SQL Injection is a complex problem; you cannot solve it just by fixing the current bugs and revoking the right to develop new code from the non-developers.”

User CHARLIE (the lawyer): “When I’ve been hired, I’ve been told that I had the right to drink coffee and develop my own modules. I would never work for a company that would not allow me to drink coffee! Drinking coffee and creating vulnerabilities, are both rights!”

User DELTA (the average non-sense): “The problem is not the vulnerable code, but all those motherf****** of hackers that try to inject malicious code. We should cure the mental illness of geeks so they do not transform themselves into hackers.”

User ECHO (the hacker specialist): “If we ask stackoverflow to provide the IP addresses of the people that search for SQL injection code examples, we might preventively block their IP addresses on our external firewall!”

User FOXTROT (the false realist): “Hacker attacks happen, and there’s not much we can do against them. If we fix the code and implement security constraints, there will always be hackers trying to find vulnerabilities. You miss the real problem! We must cure this geek/hacker insanity first!”

User GOLF (the non-sense paragon): “You concentrate on contracts lost because of SQL Injections, but the food in our restaurant sucks, and our sales also lose contracts because they struggle to fight stomach ache”.

User HOTEL (the denier): “I’ve never seen the logs that show the SQL Injections; I am sure it is a conspiracy by the no-code organizations meant to sell us some WYSIWYG products.”

User INDIA (the unheard): “Why can’t we just follow what the Security Experts suggest and see if it fixes the problem?”

What the management thinks

“We send thought and prayers to all our sales, you are not alone and you’ll never be. (… and thanks for the amazing party, FOOBAR, the wine was delicious!)”

What ACME did to solve the problem

Absolutely nothing.

Forecast

More SQL Injections.

 

UPDATE 20.02.2018

Many people asked me who was the ACME customer that had the SQL injection problem. None. It is an analogy to the US mass shootings that happen more and more frequently, the last one at the time of writing: https://en.wikipedia.org/wiki/Stoneman_Douglas_High_School_shooting

This post is intended to show that, if explained as if it were an IT problem, the solution would sound so easy that nobody would have any doubts about the steps that must be taken.

Unfortunately, it is not the case, and the US is condemned to have more and more mass shootings because nobody wants to fix the problem. 🙁

The short story of two ACE Directors, competitors and friends

Well, this is a completely different post from what I usually publish. I like to blog about technology, personal interests and achievements.

This time I really would like to spend a few words to praise a friend.

I met Franck Pachot for the first time back in 2012; it was my first month at Trivadis and, believe it or not, Franck was working there as well. I have the evidence here 😉

It was the first time in years that I had met someone at least as smart as me on the Oracle stack (later, it happened many more times to meet smarter people, but that’s another story).

A few months later, he left Trivadis to join its sworn enemy dbi services. But established friendships and like-mindedness don’t disappear: we continued to meet whenever an opportunity came up, and we started almost simultaneously to boost our blogging activities, do public presentations, and expand our presence on social media (mostly Twitter).

After I got my Oracle ACE status in 2014, we went together to Oracle Open World. I used to know many folks there, and I can say that I helped Franck meet many smart people inside and outside the ACE Program. A month after the OOW, he became an Oracle ACE.

Franck’s energy, passion, and devotion for the Oracle Community are endless. What he’s doing, including his latest big effort, is just great, and all the people in the Oracle Community respect him. I can say that now he is far more active than me in the Oracle Community (at least regarding “public” activities ;-))

We both had the goal of becoming Oracle ACE Directors, and I spent a bad month in April, when I had become an ACE Director and his nomination was still pending.

I said: “If you become ACE Director by the end of April I will write a blog post about you.” And that’s where this post comes from.

Congratulations ACE Director Franck, perfect timing! 🙂


Ludo

 

 

In-memory Columnar Store hands-on

As I’ve written in my previous post, the inmemory_size parameter is static, so you need to restart your instance to activate it or change its size. Let’s try to set it to 600M.
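
A minimal sketch of the change (the restart is needed because the parameter is static):

    sqlplus / as sysdba <<'EOF'
    alter system set inmemory_size = 600M scope = spfile;
    shutdown immediate
    startup
    show parameter inmemory_size
    EOF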

 

First interesting thing: it has been rounded to 608M, so it seems to work in chunks of 16M (to be verified).

Which views can you query for further information?

V$IM_SEGMENTS gives some information about the segments that have a columnar version, including the segment size, the actual memory allocated, the population status, and other compression indicators.

The other views help understand the various memory chunks and the status for each column in the segment.

Let’s create a table with a few records:

The table is very simple; it’s a Cartesian product of two “all_tables” views.

Let’s also create an index on it:
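
The two statements looked more or less like this (table and index names are hypothetical):

    sqlplus / as sysdba <<'EOF'
    -- Cartesian product of two "all_tables" views to get a reasonably big table
    create table ludo_im as
      select a.* from all_tables a, all_tables b;

    -- a regular b-tree index for comparison
    create index ludo_im_idx on ludo_im (table_name);
    EOF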

The table uses 621M and the index 192M.

How long does it take to do a full table scan almost from disk?

15 seconds! OK, this virtual machine is on an external 5400 RPM drive… 🙁

Once the table is fully cached in the buffer cache, the query performance progressively improves to ~1 sec.

There is no inmemory segment yet:

You have to specify it at table level:
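
A sketch of the two steps, checking V$IM_SEGMENTS and enabling the in-memory attribute on the hypothetical table created above:

    sqlplus / as sysdba <<'EOF'
    -- no rows yet: nothing is populated in the columnar store
    select segment_name, inmemory_size, bytes, populate_status from v$im_segments;

    -- enable the in-memory attribute at table level
    alter table ludo_im inmemory;
    EOF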

The actual creation of the columnar store takes a while, especially if you don’t give it high priority. You may have to query the table before seeing the columnar store, and its population will also take some time and increase the overall load on the database (on my VBox VM, the performance overhead of columnar store population is NOT negligible).

Once the in-memory store is created, the optimizer is ready to use it:

The previous query now takes half the time on the first attempt!

The columnar store for the whole table uses 23M out of 621M, so the compression ratio is very good compared to the non-compressed index previously created!

 

This is a very short example. The result here (2x improvement) is influenced by several factors. It is safe to think that with “normal” production conditions the gain will be much higher in almost all the cases.
I just wanted to demonstrate that in-memory columnar store is space efficient and really provides higher speed out of the box.

Now that you know about it, can you live without it? 😛

Oracle Instances and real memory consumption on Linux and Solaris

Is there a way to know the REAL memory usage of an Oracle instance, including all connected processes, using the shell rather than a connection to Oracle?

The short answer is “I think so” 🙂

Summing up the RSS column from ps output is not reliable, because Linux uses copy-on-write on process forks and also doesn’t correctly take into account shared memory and other shared allocations.

I’ve come across this post by Marc Billette on Pythian’s blog.

While it seems good, I’ve had discordant results depending on platform and release.

Instead, I’ve tried to create a shell snippet that still uses pmap but works differently, and it SEEMS to work correctly on both Linux and Solaris.

Basically, using pmap I get a lot of information about the different memory areas allocated to the process:

 

Initially I tried to correctly decode the different kinds of memory, the same way other scripts I’ve found online do:

but in the end the ADDRESS is the same across different processes when the memory area is shared, so my script now just takes a unique line for each address and sums up the memory sizes (not the RSS ones!):
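
I am not pasting the original snippet here, but a rough reconstruction of the idea looks like this (the SID and the process matching are hypothetical and simplistic, and the pmap output format differs slightly between Linux and Solaris):

    #!/bin/bash
    # Rough sketch: sum the sizes of the UNIQUE memory mappings (deduplicated
    # by address) of all processes belonging to one instance (hypothetical SID).
    SID=ORCL

    for pid in $(pgrep -f "ora_.*_${SID}|oracle${SID}"); do
      pmap "$pid" 2>/dev/null
    done | awk '
      # keep only mapping lines: a hex address followed by a size ending in K
      $1 ~ /^[0-9a-fA-F]+$/ && $2 ~ /K$/ {
        if (!seen[$1]++) { sub(/K/, "", $2); total += $2 }
      }
      END { printf("Approx. total virtual memory: %.1f MB\n", total / 1024) }'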

This should give the total virtual memory allocated by the different Oracle instances.

The results I get are plausible both on Linux and Solaris.

Example:

If you find any error let me know and I’ll fix the script!

Ludovico

Planning to be at Oracle Open World? Attend RAC Attack, and become a real RAC NINJA

You’re going to head to OOW this year. Your countdown has started and you feel excited about it.

Have you carefully planned your AGENDA? If you’re a RAC geek and you’re not going to attend #RACAttack, the answer is NO!! 🙂 This year RAC Attack will be mentored by real Ninjas, so just come along and become one of them!

Reserve from one to three slots on your agenda and go to the OTN Lounge (@ the lobby of Moscone South) 

  1. Launch RAC Attack at OOW 2013 (NINJAS Presentation)
    On Sunday, from 4PM to 6PM, the OTN will open its lounge. At 4:45PM our team will introduce itself and the project in just 15 minutes. OTN will provide food and drinks (that’s what I’ve heard 😉 ) so don’t miss it for any reason.
  2. RAC Attack at OOW 2013 Day 1
    On Tuesday, from 10AM to 2PM, our fearless team of Ninjas will assist you during the installation from scratch of a real RAC environment on your laptop.
  3. RAC Attack at OOW 2013 Day 2
    There will be a second slot of 4 hours, on Wednesday from 10AM to 2PM, to give you a second chance to attend (and finalize your installation, start a new one, or just have fun together and discuss complex RAC topologies, for real RAC geeks!).

What differs this year?

The new book covers the installation of a brand new Oracle RAC in release 12c on Oracle VirtualBox. Still not enough?

Among the volunteers we’ll have 5 Oracle ACEs or ACE Directors, some Certified Masters, and a great representative of the RAC SIG board. You’ll recognize us because of our bandanas… Don’t worry, we are not an elite; we’re just good DBAs eager to share our experience, hear from you, and have a good time together. Sounds good?

If you attend, you’ll earn a special “RAC Attack Ninja” ribbon, so you can go back to work and boast with your colleagues (and your boss!).

And why not propose a new RAC environment at your company and get it done with a little more confidence? For what it’s worth, that’s the real objective: practicing with the RAC technology and getting ready for a real scenario.

Maximize your experience @ RAC Attack

You can cheat and start reading the new version of the book (still in development): http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c

Make sure that you accomplish some steps before starting your journey to OOW:

  1. Make sure you have a recent laptop to install the full RAC stack. The new 12c is a little more demanding than 11g: each RAC node would require 4GB of RAM, but if you have 8GB of RAM on your laptop we guarantee that it’s enough.
  2. The WI-FI connection at OOW is sub-optimal for heavy downloads (I’m euphemistic), so make sure you get your own copy of the Oracle software (Grid Infrastructure and Database) by following the download instruction from: http://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c/Software_Components
    If you come without your own copy of the software, we won’t be able to give it to you because of legal concerns, but hopefully we’ll have some spare laptops available to allow you to complete your lab successfully. Just make sure to confirm your presence on the Facebook event page and we’ll try to do our best (but we cannot guarantee a laptop for everyone!).

Give us your feedback and spread the word!

If you plan to attend (good choice! :-)), take a business card with you and get in touch with us. It’s a great networking opportunity. Take a lot of pictures, upload them to the social networks (the #RACAttack hashtag on Twitter, the Facebook page, the RAC SIG group on LinkedIn) and give us your feedback. Other RAC Attack events will be planned, so we can improve thanks to your suggestions.

Hope to see you there.

Ludovico