echo"#### IT MAY TAKE SOME MINUTES BEFORE EVERYTHING START WORKING ####"
read-p""
dgmgrl-echosys/manager<<EOF
show configuration
EOF
For the demo I've used 5 machines running 3 database instances and 2 Far Sync instances. I cannot provide the documentation for creating the demo environment, but the scripts may be useful for understanding how the demo works.
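For context, "show configuration" against a Far Sync setup prints something along these lines (a sketch with hypothetical member names; the exact layout varies with the broker version):

DGMGRL> show configuration

Configuration - demo

  Protection Mode: MaxAvailability
  Databases:
    cdb01  - Primary database
      cdbfs1 - Far sync instance
    cdb02  - Physical standby database

Configuration Status:
SUCCESS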
Last Wednesday, September 17th, we held the first RAC Attack in Switzerland (as far as I know!). I have to say that it was a complete success, like every other RAC Attack I've been involved in.
This time I've been particularly happy and proud because I organized it almost entirely on my own. Trivadis, my employer, has kindly sponsored everything: the venue (the new, cool Trivadis offices in Geneva), the T-shirts (I did the design, very similar to the one I designed for Collaborate 14), beers and pizza!
For beer lovers, we had the good “Blanche des Neiges” from Belgium, plus “La Helles” and “La Rossa” from the San Martino Brewery in Ticino (the Italian-speaking region of Switzerland). People appreciated them 🙂
We had 4 top-class Ninjas and 10 people actively installing Oracle RAC (plus a famous blogger who joined for networking); sadly, two people cancelled at the last minute. For the very first time, all the participants had downloaded the Oracle software in advance. When they registered, I reminded them twice that the software was necessary because we cannot provide it due to legal constraints.
People running the lab on Windows laptops reported problems with VirtualBox 4.3.16 (4.3.14 was skipped entirely because of known problems), so everyone had to fall back to version 4.3.12 (the last stable release, IMO).
The best compliment I got was the presence of a Senior DBA who came from Nanterre: 550 km (more than 5 hours door-to-door by public transport) and an overnight stay just for this event, can you believe it? 🙂
This makes me think seriously about the real need to organize this kind of event around the world.
Of course, we had a photo session with a lot of jumps 😉 We could not miss this RAC Attack tradition!
We wrapped everything up around 10:30 pm, after a bit more than 5 hours of event. We enjoyed it a lot and had a good time together, chatting about Oracle RAC and about our work in general.
I feel the strong need to blog about this very recent problem because I've spent a lot of time debugging it... especially because there's no information about this error on MOS.
Introduction
For a lab, I have prepared two RAC container databases in a physical standby configuration.
Real-time query is configured (real-time apply, with the standby open in read-only mode).
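When managed without the broker, this is typically achieved with something like the following (a sketch, not taken from the lab scripts):

SQL> alter database open read only;

SQL> alter database recover managed standby database
  2  using current logfile disconnect from session;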
But on the standby database, the PDB somehow did exist:
16:20:58 SYS@CDBGVA_1> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
MAAZ
LUDO
I tried to play around a little, and finally decided to disable recovery for the PDB (a new feature in 12.1.0.2).
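The intended commands look roughly like this (a sketch of the 12.1.0.2 syntax, using the PDB name from the listing above):

SQL> alter session set container = LUDO;

SQL> alter pluggable database disable recovery;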
But to disable recovery I needed to connect to the PDB first, and the PDB was somehow “nonexistent”.
Then I shut down the standby, but one instance hung and I had to do a shutdown abort (I don't know whether it was related to my original problem...).
So I RESTARTED the recovery for a few seconds, and because recovery was disabled for the PDB, the recovery process added the datafiles and set them offline.
As I've written in my previous post, the inmemory_size parameter is static, so you need to restart your instance to activate the feature or change its size. Let's try to set it to 600M.
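Something along these lines should do it (a sketch; since the parameter is static, it must go to the spfile and the instance must be bounced):

SQL> alter system set inmemory_size = 600M scope = spfile;

System altered.

SQL> shutdown immediate
SQL> startup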
V$IM_SEGMENTS gives some information about the segments that have a columnar version, including the segment size, the actual memory allocated, the population status and other compression indicators.
The other views help in understanding the various memory chunks and the status of each column in the segment.
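For example, a quick check of the population status might look like this (a sketch; the column list is abbreviated):

SQL> select segment_name, inmemory_size, bytes, populate_status
  2  from v$im_segments;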
Let’s create a table with a few records:
SQL> create table ludovico.tinmem
as
select
   a.OWNER,
   a.TABLE_NAME,
   b.owner owner2,
   b.table_name table_name2,
   a.TABLESPACE_NAME,
   a.STATUS,
   a.PCT_FREE,
   a.PCT_USED,
   a.INI_TRANS,
   a.MAX_TRANS,
   a.INITIAL_EXTENT,
   a.NEXT_EXTENT,
   a.MIN_EXTENTS,
   a.MAX_EXTENTS,
   a.PCT_INCREASE,
   a.FREELISTS,
   a.FREELIST_GROUPS,
   a.LOGGING
from all_tables a, all_tables b;

Table created.

SQL> select count(*) from ludovico.tinmem;

  COUNT(*)
----------
   5470921

SQL>
The table is very simple: it's a Cartesian product of ALL_TABLES with itself.
The actual creation of the columnar store takes a while, especially if you don't specify that it should be created with high priority. You may have to query the table before the columnar store appears, and its population will also take some time and increase the overall load of the database (on my VirtualBox VM, the performance overhead of the columnar store population is NOT negligible).
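Enabling the store for the table looks roughly like this (a sketch of the per-segment syntax; the priority clause is optional):

SQL> alter table ludovico.tinmem inmemory priority high;

Table altered.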
Once the in-memory store is created, the optimizer is ready to use it:
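For instance, a full scan should now show the in-memory access path in the execution plan (a sketch; the exact plan output depends on the environment):

SQL> select count(*) from ludovico.tinmem where status = 'VALID';

SQL> select * from table(dbms_xplan.display_cursor);

-----------------------------------------------------
| Id  | Operation                   | Name   | Rows  |
-----------------------------------------------------
|   0 | SELECT STATEMENT            |        |       |
|   1 |  SORT AGGREGATE             |        |     1 |
|   2 |   TABLE ACCESS INMEMORY FULL| TINMEM |  5470K|
-----------------------------------------------------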
The columnar store for the whole table uses 23M out of 621M, so the compression ratio is very good compared to the non-compressed index created previously!
This is a very short example. The result here (a 2x improvement) is influenced by several factors, but it is safe to think that under “normal” production conditions the gain will be much higher in almost all cases.
I just wanted to demonstrate that the in-memory columnar store is space efficient and really provides higher speed out of the box.
Now that you know about it, can you live without it? 😛
Oracle Database 12.1.0.2 is finally out, and as we all knew in advance, it contains the new in-memory option.
I think that, despite its cost ($23k per processor), this is another great improvement! 🙂
Considerable savings!
This new feature is not to be confused with TimesTen. In-Memory is a feature that enables a new memory area inside the SGA, used to contain a columnar-organized copy of segments entirely in memory. Columnar stores organize the data as columns instead of rows; they are ideal for queries that involve few columns on many rows, e.g. analytic reports, but they also work great for ad-hoc queries that cannot make use of existing indexes.
Columnar stores don't replace traditional indexes for data integrity or fast single-row look-ups, but they can replace many additional indexes created for the sole purpose of reporting. Hence, while on one hand it may seem a waste of memory, on the other hand using in-memory can lead to considerable memory savings, thanks to all the indexes that no longer have a reason to exist.
Let’s take an example of a table (in RED) with nine indexes (other colors).
If you try to imagine all the blocks in the buffer cache, you may think about something like this:
Now, with the in-memory columnar store, you can get rid of many indexes, because they were created just for reporting and they are now superseded by the performance of the new feature:
In this case, you're not only saving blocks on disk, but also in the buffer cache, making room for the in-memory area. With the columnar store, the compression factor may allow your entire table to fit easily in the same space that was previously required for a few query-specific indexes. So you'll have the buffer cache with traditional row-organized blocks (red, yellow, light and dark blue) and the separate in-memory area with a columnar copy of the segment (gray).
The in-memory store doesn’t make use of undo segments and redo buffer, so you’re also saving undo block buffers and physical I/O!
The added value
In my opinion this option will get much more attention from customers than Multitenant, for a very simple reason.
How many customers (in percentage) would pay to achieve better consolidation of hundreds of databases? A few.
How many would pay or are already paying for having better performance for critical applications? Almost all the customers I know!
Internal mechanisms
In-memory is enabled on a per-segment basis: you can specify a table or a partition to be stored in-memory.
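As a rough illustration of the syntax (table and partition names here are hypothetical, not from the examples above):

SQL> alter table sales inmemory memcompress for query low priority high;

SQL> alter table orders modify partition p_2014 inmemory;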
Each column is organized in separate chunks of memory called In Memory Compression Units (IMCU). The number of IMCUs required for each column may vary.
Each IMCU contains the data of the column and a journal used to guarantee read consistency with the blocks in the buffer cache. The data is not modified on the fly in the IMCU; instead, the row it refers to is marked as stale in a journal stored inside the IMCU itself. When the stale data grows above a certain threshold, the space efficiency of the columnar store decreases, and the in-memory coordinator process ([imco]) may force a re-population of the store.
Re-population may also occur after manual intervention or at instance startup: because the store is memory-only, the data actually needs to be populated into it from disk.
Whether the data is populated immediately after startup or not depends on the priority specified for the specific segment: the higher the priority, the sooner the segment will be populated in memory. The priority attribute also drives which segments survive in memory in case of “in-memory pressure”. Sadly, the parameter inmemory_size that specifies the size of the in-memory area is static, and an instance restart is required in order to change it; that's why you need to plan the size carefully before activating it. There is a compression advisor for in-memory that can help with this.
Conclusion
In this post you've seen a short introduction to in-memory. I hope to publish another post very soon with a few practical examples.
After Oracle Open World, IOUG Collaborate and all major conferences, RAC Attack comes to Geneva! Set up an Oracle 12c RAC environment on your laptop. Experienced volunteers (ninjas) will help you address any related issues and guide you through the setup process.
Ninjas:
Ludovico Caldara – Oracle ACE, RAC SIG European Chair & RAC Attack co-writer
Luca Canali – OAK Table Member and frequent speaker
Eric Grancher – OAK Table member
Jacques Kostic – OCM 11g & Senior Consultant at Trivadis
Where? The new Trivadis offices, Chemin Château-Bloch 11, CH-1219 Geneva
When? Wednesday, September 17th 2014, from 17h00 onwards
Cost? It is a FREE event! It is a community-based, informal and enjoyable workshop. You just need to bring your laptop and your desire to have fun!
Registration: TVD_LS_ADMIN@trivadis.com
Limited places! Reserve your seat and T-shirt now: TVD_LS_ADMIN@trivadis.com
Agenda:
17.00 – Welcome
17.30 – RAC Attack 12c part I
19.30 – Pizza and Beers! (kindly sponsored by Trivadis)
20.00 – RAC Attack 12c part II
22.00 – T-shirt distribution and group photo!!
VERY IMPORTANT: To participate in the workshop, you need to bring your own laptop.
Required specifications:
a) any 64-bit OS that supports Oracle VirtualBox
b) 8GB RAM, 50GB free HDD space.
Due to legal constraints, please pre-download Oracle Database 12c and Grid Infrastructure for Linux x86-64 from https://edelivery.oracle.com/ (further information here: http://goo.gl/pqavYh).
Tables created with extended varchar2 (length >4000)
If the table is created with a varchar2 length above 4000, a LOBSEGMENT and a LOBINDEX segment are also created automatically. But the rows inserted may actually be short enough to be stored inline.
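A sketch of the kind of test meant here (table and column names are illustrative, and max_string_size=EXTENDED is assumed to be already configured):

SQL> create table ludovico.textended (id number, txt varchar2(9000));

Table created.

SQL> select segment_name, segment_type, blocks
  2  from dba_segments
  3  where owner = 'LUDOVICO';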
As you can see from the previous output, the LOB segments are almost empty after 1000 rows have been inserted, whereas the table has 24 blocks. Not enough to prove the point. Let's try to update the rows with a column size of 1000:
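The update meant here would be something like this (again with illustrative names):

SQL> update ludovico.textended set txt = rpad('x', 1000, 'x');

SQL> commit;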
Actually, the inline threshold is somewhere between 3960 and 3970 bytes, but it may depend on other factors (I haven't tested it deeply).
Lesson learned: when you design your table with extended types, you should plan how many rows you expect with a size below 4000, because those rows will be stored inline: they can make your row access slower and the number of chained rows higher than expected.
Tables created with standard varchar2 and altered afterward
Let's try almost the same exercise, but this time starting with a table created with a small length (lower than 4000) and altered afterward to make use of the extended size.
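Roughly like this (a sketch with illustrative names):

SQL> create table ludovico.tstandard (id number, txt varchar2(1000));

Table created.

SQL> alter table ludovico.tstandard modify (txt varchar2(9000));

Table altered.

SQL> update ludovico.tstandard set txt = rpad('x', 9000, 'x');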
This leads to a huge amount of chained rows!! In this example, 2 blocks per row, but it can be as high as 5 blocks for a single-column varchar2(32767) with db_block_size=8192.
You may think a simple rebuild would fix it. WRONG! Even after a rebuild, the extended varchars are stored INLINE if they were created as standard varchars. You actually need to recreate the table and migrate the data, e.g. with dbms_redefinition.
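A hypothetical sketch of such a redefinition (TINT being an interim table created beforehand with the extended definition):

SQL> exec dbms_redefinition.start_redef_table('LUDOVICO', 'TSTANDARD', 'TINT');

SQL> exec dbms_redefinition.finish_redef_table('LUDOVICO', 'TSTANDARD', 'TINT');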
Migrating to extended data types instead of converting your application to use SecureFiles can be a disaster for the physical layout of your tables, and thus for the performance of your application. Be careful!
There is a hidden parameter, _scalar_type_lob_storage_threshold, that controls the declared varchar2 size above which a segment for the LOBs is created (default 4000). See this post: http://orasql.org/2013/07/13/oracle-12c-extended-varchars/
Oracle announced the new Oracle Database Backup Logging Recovery Appliance at Open World 2013, but it has not been released to the market yet, and very little information is available on the Oracle website.
During the last IOUG Collaborate 14, Larry Carpenter, Oracle master product manager of Data Guard and MAA, unveiled something more about the DBLRA (call it “Debra” to simplify your life 🙂 ), and I had the chance to discuss it directly with him.
At Trivadis we think that this appliance will be a game changer in the world of backup management.
Why?
Well, if you have ever worked for a big company with many hundreds of databases, you have certainly encountered many of these common problems:
Backup and restore penalized by a shared infrastructure
Poor backup or restore performance
Tape drives busy when you need them urgently
Complex management of backup retentions
That's not all. As of now, your best recovery point in case of restore is directly related to how frequently you back up your archived logs. Oh yes, you can lower your archive_lag_target parameter and increase your log switch frequency (and thus the I/O), and still have... 10, 15, 30 minutes of possible data loss? Unless you protect your transactions with Data Guard. But this will cost you money: for the additional server and storage, for the licenses, and for the effort required to put in place a Data Guard instance for every database that you want to protect. You want to protect your transactions from a storage failure, and there's a price to pay.
The Database Backup Logging Recovery Appliance (wow, I need to copy and paste the name to save time! :-)) overcomes these problems with a simple but brilliant idea: leveraging the existing redo transport mechanism and shipping the redo stream directly to the backup appliance (the DBLRA, of course) or to its cloud alter ego, hosted by Oracle.
As you can infer from the picture, 12c databases will work natively with the appliance, while previous releases will have a plugin that will enable all the capabilities.
Backups can be mirrored selectively to another DBLRA, or copied to the cloud or to a 3rd party (Virtual) Tape Library.
The backup retention is enforced by the appliance and the expiration and deletion is done automatically using the embedded RMAN catalog.
Lightning-fast backups and restores are guaranteed by the hardware: the DBLRA is based on the same hardware as Exadata, with High Capacity disks. Optional storage extensions can be added to increase the capacity, but all the data, as I've said, can be offloaded to VTLs in order to use cheaper storage for older backups.
To sum up, the key values are:
No transaction loss!!
Lightning fast backups and restores
Integrated, Oracle engineered, scalable solution for hundreds to thousands of databases
Looking forward to seeing it in action!
I cannot cover all the information I have in a single post, but Trivadis is actively working to be ready to implement it at launch time (estimated in 2014), so feel free to contact me if you are interested in boosting your backup environment. 😉
By the way, I expect that the competitors (IBM, Microsoft?) will try to develop solutions with the same reliability characteristics, or they will lose ground.
Cheers!
Ludovico
Disclaimer: This post is intended to outline Oracle's general product direction based on information gathered through public conferences. It is intended for informational purposes only. The development and release of these functionalities and features, including the release dates, remain at the sole discretion of Oracle, and no documentation is available at this time. The features and commands shown may or may not be accurate when the final product release goes GA (General Availability). Please refer to the Oracle documentation when it becomes available.