Recently my customer patched its 12.1.0.2 databases with Bundle Patch 170718 on the new servers (half of the customer’s environment). The old servers are still on the 161018 Bundle Patch.
We realized that we could no longer move databases from the old servers to the new ones, because the duplicate from active database was failing with this error:
Oracle 12.2 introduced an incompatibility with previous releases in remote file transfer via SQL*Net. At least, this is what it seems. According to Oracle, it is due to a bugfix present in Oracle 12.2.
Now, the Bundle Patch that we installed (BP 170718) contains the same bugfix (the patch for bug 18633374).
So the incompatibility now happens between databases of the same “Major Release” (12.1.0.2).
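If you are not sure which bugfixes are installed on each side, a quick and generic way to compare the two homes is to query dba_registry_sqlpatch on both databases. This is just a sketch, not part of the original troubleshooting:

-- Run on both the source and the destination to compare the installed patches
SELECT patch_id, action, status, action_time, description
  FROM dba_registry_sqlpatch
 ORDER BY action_time;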
There are two possible workarounds:
Apply the same patch level on both sides (BP170718 in my case)
Apply just the patch 18633374 on top of your current PSU/DBBP (a merge might be necessary).
We used the second approach, and now we can set up Data Guard again to move our databases without downtime:
This year Trivadis will be again at Oracle Open World (and Oak Table World!) in San Francisco, with a few sessions (including mine!)
If you are going to Oracle Open World and you want to say hello to the Trivadis speakers, make sure you attend them!
Get the Most Out of Oracle Data Guard Ludovico Caldara – ACE Director, Senior Consultant – Trivadis
When: Sunday, Oct 01, 12:45 PM
Where: Marriott Marquis (Yerba Buena Level) – Nob Hill A/B
EOUC Database ACES Share Their Favorite Database Things Christian Antognini – ACE Director, OAK Table Member, Senior Principal Consultant, Partner – Trivadis
When: Sunday, Oct 01, 10:45 AM
Where: Marriott Marquis (Golden Gate Level) – Golden Gate C1/C2
Application Containers: Multitenancy for Database Applications Markus Flechtner – Principal Consultant – Trivadis
When: Sunday, Oct 01, 2:45 PM
Where: Marriott Marquis (Yerba Buena Level) – Nob Hill A/B
TBA Christian Antognini – ACE Director, OAK Table Member, Senior Principal Consultant, Partner – Trivadis
When: Monday Oct 02, 1:00 PM
Where: Oak Table World, Children Creativity Museum
Apache Kafka: Scalable Message Processing and More Guido Schmutz – ACE Director, Senior Principal Consultant, Partner – Trivadis
When: Monday Oct 02, 4:30 PM
Where: Moscone West – Room 2004
You can find Trivadis’s sessions in the session catalog here.
A blog post series would not be complete without a final post about vacuumlo.
In the previous post we have seen that large objects are split into chunks of 2048 bytes each, and that each chunk behaves in the very same way as a regular tuple.
What distinguishes large objects?
NOTE: in PostgreSQL, IT IS possible to store a large amount of data along with the table, thanks to the TOAST technology. Read about TOAST here.
Large objects are not inserted in application tables, but are treated in a different way. The application using large objects usually has a table with columns of type OID. When the application creates a new large object, a new OID number is assigned to it, and this number is inserted into the application table.
Now, a common mistake for people who come from other RDBMSs (e.g. Oracle) is to think that a large object is unlinked automatically when the row that references it is deleted. It is not: we need to unlink it explicitly from the application.
Let’s see it with a simple example, starting with an empty pg_largeobject table:
If we want to get rid of the LOB, we have to unlink it, either explicitly or by using triggers that unlink the LOB when a record in the application table is deleted, as in the sketch below.
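Here is a minimal sketch of the trigger-based approach. The table and column names (images, image_oid) are hypothetical and not taken from the original example:

-- Hypothetical application table: only the OID of the large object is stored in the row
CREATE TABLE images (id int PRIMARY KEY, image_oid oid);

-- Trigger function that unlinks the large object when the referencing row is deleted
CREATE OR REPLACE FUNCTION images_unlink_lob() RETURNS trigger AS $$
BEGIN
  IF OLD.image_oid IS NOT NULL THEN
    PERFORM lo_unlink(OLD.image_oid);   -- removes the LOB from pg_largeobject
  END IF;
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_images_unlink_lob
  BEFORE DELETE ON images
  FOR EACH ROW EXECUTE PROCEDURE images_unlink_lob();

The contrib extension lo also ships a ready-made lo_manage() trigger function that serves the same purpose.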
Another way is to use the vacuumlo binary included in PostgreSQL.
It scans pg_largeobject_metadata and searches through the tables that have OID columns to check whether there are any references to the LOBs. The LOBs that are not referenced are unlinked.
ATTENTION: this means that if you use ways to reference LOBs other than OID columns, vacuumlo might unlink LOBs that are still needed!
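For illustration only, the core of what vacuumlo does can be approximated in plain SQL. This sketch assumes the hypothetical images.image_oid column above is the only place where LOBs are referenced:

-- Large objects that exist in the database but are not referenced by any OID column
SELECT m.oid AS orphan_lob
  FROM pg_largeobject_metadata m
 WHERE NOT EXISTS (SELECT 1 FROM images i WHERE i.image_oid = m.oid);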
As expected, because a random sequence of characters cannot be compressed, the size increased again by 171 blocks (see my previous post for the explanation).
If you read this nice series of blog posts by Frits Hoogland, you should know about the pageinspect extension and the t_infomask 16-bit mask.
Let’s install it and check the content of the pg_largeobject pages:
The space is still used and the tuples are still there.
However, we can check that the tuples are no longer used by checking the validity of their t_xmax. In fact, according to the documentation, if the XMAX is invalid the row is at its latest version:
[…] a tuple is the latest version of its row iff XMAX is invalid or t_ctid points to itself (in which case, if XMAX is valid, the tuple is either locked or deleted). […]
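As a hedged sketch (not the exact statements from the post), the pageinspect functions let us look at t_xmax and at the HEAP_XMAX_INVALID bit (0x0800) of t_infomask for the first page of pg_largeobject:

CREATE EXTENSION IF NOT EXISTS pageinspect;

SELECT lp, t_xmin, t_xmax,
       (t_infomask & 2048) <> 0 AS xmax_invalid   -- 2048 = 0x0800 = HEAP_XMAX_INVALID
  FROM heap_page_items(get_raw_page('pg_largeobject', 0));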
PostgreSQL uses a nice, non-standard mechanism for big columns called TOAST (hopefully I will blog about it in the future) that can be compared to extended data types in Oracle (TOAST rows, by the way, can be much bigger). But traditional large objects exist and are still used by many customers.
If you are new to large objects in PostgreSQL, read here. For TOAST, read here.
Inside the application tables, the columns for large objects are defined as OIDs that point to data chunks inside the pg_largeobject table.
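To get a feeling for that layout, here is a small illustrative query (not from the original post) that shows the chunks; loid identifies the large object and pageno the chunk number within it:

SELECT loid, pageno, octet_length(data) AS chunk_bytes
  FROM pg_largeobject
 ORDER BY loid, pageno
 LIMIT 5;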
Because large objects are created independently from the table columns that reference them, when you delete a row from the table that points to a large object, the large object itself is not deleted.
Moreover, pg_largeobject stores by design all the large objects that exist in the database.
This makes housekeeping and maintenance of this table crucial for database administration (we will see this in a future post).
How is space organized for large objects?
We will see this through examples. Let’s start with an empty database with an empty pg_largeobject:
First evidence: the file is empty, meaning that the first block is not created physically until there’s some data in the table (like deferred segment creation in Oracle, except that the file exists).
Now, let’s create two 1 MB files for our tests, one zero-padded and the other filled with random data:
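The original post creates the files with operating system tools and then imports them as large objects. Purely as an illustrative, SQL-only alternative, equivalent large objects can be built directly on the server; the compression behaviour discussed below should be the same:

-- 1 MB zero-filled large object (highly compressible)
SELECT lo_from_bytea(0, decode(repeat('00', 1024*1024), 'hex')) AS zero_lob;

-- 1 MB of pseudo-random bytes (practically incompressible); each md5() hex string decodes to 16 bytes
SELECT lo_from_bytea(0, decode(string_agg(md5(i::text || random()::text), ''), 'hex')) AS random_lob
  FROM generate_series(1, 1024*1024/16) AS i;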
Just 40k! This means that the chunks are compressed (like the TOAST pages). PostgreSQL uses the pglz_compress function; its algorithm is well explained in the source file src/common/pg_lzcompress.c.
What happens when we insert the random-padded file?
The segment increased by much more than 1 MB! Precisely, 1441792 - 40960 = 1400832 bytes. Why?
The large object is again split into 512 data chunks of 2048 bytes each, and again PostgreSQL tries to compress them. But because a random string cannot be compressed, the pieces are still (on average) 2048 bytes big.
Now, a database block size is 8192 bytes. If we subtract the size of the block header, there is not enough space for 4 chunks of 2048 bytes, so every block will contain just 3 non-compressed chunks.
So the 512 chunks will be distributed over 171 blocks (CEIL(512/3.0)), which gives:
lob_test=# select ceil(1024*1024/2048/3.0)*8192;
 ?column?
----------
  1400832
(1 row)

1400832 bytes!
Depending on the compression rate that we can apply to our large objects, we might expect much more or much less space used inside the pg_largeobject table.
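A quick, generic way to keep an eye on this (not from the original post) is to check the total size of pg_largeobject, including its index, from time to time:

SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS lob_total_size;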
Since its creation (9 years ago), this blog has been almost exclusively Oracle-oriented. But during my career I have worked a lot with other RDBMS technologies… SQL Server, MySQL (and its forks), Sybase, PostgreSQL, Progress. Some posts in this blog prove it.
Especially in the last two years, I have worked a lot with PostgreSQL. In the last few months I have seen many friends and technologists become increasingly curious about this product. So I think that I will, gently, start blogging also about my experiences with PostgreSQL.
I am pretty sure that every user group has had its own difficulties when starting. Since we started ITOUG just a couple of years ago, I think it is worth telling the story 🙂
Back in 2013, I was told about a team of Italian DBAs willing to create a new user group. I decided to join the team because I was starting my journey in the Oracle community.
We were coming from different cities (Lausanne, Vicenza, Milano, Roma)… the only solution was to meet up online, through Google Hangouts.
The first meetings were incredibly boring and inconclusive (sorry ITOUG guys if you read this :-)):
You cannot start something if you do not know what you want to achieve
Of course, everybody agreed that it would have been nice to become the new UKOUG of southern Europe. But let’s be realistic: no budget, no spare time, nothing at all; we needed a starting point. We thought that the starting point was a website. But even something as easy as a basic website raises a lot of additional questions:
Should it allow publishing content?
Should it have a bulletin board?
What about user subscription?
What should be the content? Articles, webinars?
We created something with the idea to publish our content, but after a while it was mostly an empty container.
It took me a while to learn a first big lesson:
Democracy does not work for small User Groups
The original founder of the group decided to quit ITOUG because nothing concrete was happening. Everybody was putting ideas on the table, but nothing was really moving forward.
When my friend Björn Rost proposed that I put Milano forward as a candidate for the OTN Tour, I jumped on the train: it was the best way to start something concrete. I somehow “forced” the OTN Tour candidature to happen, telling my peers: “I will do it; if you support me, thanks; otherwise I will do it alone”.
And the response was great!
Do not wait for something to happen. If you want it, take the lead and make it.
This is the biggest lesson I have learned from my involvement with user groups. Sometimes people cannot contribute because they do not know what to do or how to do it, or simply because they do not have time: there are a gazillion things more important than user groups, such as family, work, health… even hobbies are sometimes more important 🙂
If you have an idea, it’s up to YOU to transform it into something concrete.
After the acceptance, we had to prepare the event. We proposed a few dates and Björn prepared the calendar taking into account the other user groups.
Organizing an event is not easy but not so complex either
Set a reasonable target number of participants
Fix a date when most contributors are available
Get offers from different venues (pricing for the venue and the catering)
Get an idea of the budget
Ask different companies to sponsor the event
Possibly ask Oracle 🙂
Once the sponsors are committed to pay, block the venue
Prepare and publish the Call for Paper
Possibly start some advertising
Select the papers
Prepare the agenda
Ask the speakers for confirmation about the proposed date/time
Prepare the event registration form
Publish the agenda
Broadcast it everywhere 🙂 (Social media, contacts, website, Oracle through their channels)
Interact with the hotel and the sponsor to have the proper setup, billing addresses, invoices, etc.
Host the event
Relax
Finding the sponsors is the most difficult part
It was easy to find a sponsor for the first two events (just a database stream, events held in Milano), but it was not the same for the last one.
Our aim was to do a double event (Milano + Roma) with two streams in each location (DB + BI & Analytics). In Roma we were unable to find a sponsor (if you read this AND your company might be interested in sponsoring such an event in Roma, please contact me :-)), so we decided to go ahead with the event in Milano only.
Finding the speakers is easier than you can imagine
Unless you want non-English sessions held by native speakers, there is a huge community of speakers willing to share their knowledge. For Oracle, the most obvious source of speakers is the ACE Program, and Twitter is probably the best channel for finding them.
This was the third time we organized an event, and every time we have been surprised by the good attendance and feedback.
You have many (hundreds of) instances and more than a couple of servers
One of your servers has a high CPU load
You have Enterprise Manager 12c but the Database Load does not filter by server
You want a historical representation of the user CPU utilization, per instance
Getting the data from the EM Repository
With the following query, connected to the SYSMAN schema of your EM repository, you can get the hourly max() and/or avg() of user CPU by instance and time.
SELECT entity_name,
       ROUND(collection_time, 'HH') AS colltime,
       ROUND(avg_value, 2)/16*100 AS avgv,   -- 16 is my number of CPUs
       ROUND(max_value, 2)/16*100 AS maxv    -- same here
  FROM gc$metric_values_hourly mv
  JOIN em_targets t
    ON (t.target_name = mv.entity_name)
 WHERE t.host_name = 'myserver1'                    -- myserver1 is the server that has high CPU usage
   AND mv.metric_column_name = 'user_cpu_time_cnt'  -- let's get the user CPU time
   AND collection_time > sysdate - 14               -- for the last 14 days
 ORDER BY entity_name,
          ROUND(collection_time, 'HH');
Suppose you select just the max value: the result will be similar to this:
ENTITY_  COLLTIME          MAXV
-------  ----------------  ------
mydbone  10.05.2017 16:00  0.3125
mydbone  10.05.2017 17:00  0.1875
mydbone  10.05.2017 18:00  0.1875
mydbone  10.05.2017 19:00  0.1875
mydbone  10.05.2017 20:00  0.25
mydbone  10.05.2017 21:00  0.125
mydbone  10.05.2017 22:00  0.125
mydbone  10.05.2017 23:00  0.125
mydbone  11.05.2017 00:00  0.1875
mydbone  11.05.2017 01:00  0.125
mydbone  11.05.2017 02:00  0.1875
mydbone  11.05.2017 03:00  0.1875
....
mydbone  23.05.2017 20:00  0.125
mydbone  23.05.2017 21:00  0.125
mydbone  23.05.2017 22:00  0.125
mydbone  23.05.2017 23:00  0.0625
mydbtwo  10.05.2017 16:00  0.3125
mydbtwo  10.05.2017 17:00  0.25
mydbtwo  10.05.2017 18:00  0.1875
mydbtwo  10.05.2017 19:00  0.1875
mydbtwo  10.05.2017 20:00  0.3125
mydbtwo  10.05.2017 21:00  0.125
mydbtwo  10.05.2017 22:00  0.125
mydbtwo  10.05.2017 23:00  0.125
.....
mydbtwo  14.05.2017 19:00  0.125
mydbtwo  14.05.2017 20:00  0.125
mydbtwo  14.05.2017 21:00  0.125
mydbtwo  14.05.2017 22:00  0.125
mydbtwo  14.05.2017 23:00  0.125
dbthree  10.05.2017 16:00  1.1875
dbthree  10.05.2017 17:00  0.6875
dbthree  10.05.2017 18:00  0.625
dbthree  10.05.2017 19:00  0.5625
dbthree  10.05.2017 20:00  0.8125
dbthree  10.05.2017 21:00  0.5
dbthree  10.05.2017 22:00  0.4375
dbthree  10.05.2017 23:00  0.4375
...
Putting it into Excel
There are a million ways to do something more reusable than Excel (rrdtool scripts, gnuplot, R, you name it), but Excel is just right for most people out there (including me when I feel lazy).
Configure an Oracle Client and add the ODBC data source to the EM repository:
Open Excel, go to “Data” – “Connections” and add a new connection:
Search…
New Source
DSN ODBC
Select your new ODBC data source, user, password
Uncheck “Connection to a specific table”
Give a name and click Finish
On the DSN -> Properties -> Definition, enter the SQL text I have provided previously
The result should be something similar to this (but much longer :-)):
Pivoting the results
Create a new sheet and name it “pivot”, click on “Create Pivot Table”, then select your data and your dimensions:
The result:
Creating the Graph
Now that the data is correctly formatted, it’s easy to add a graph:
just select the entire pivot table and create a new stacked area chart.
The result will be similar to this:
With such a graph, it is easy to spot which databases consumed the most CPU on the system in a given period, and to track progress if you start a “performance campaign”.
For example, you can see that the “green” and “red” databases were constantly consuming some CPU up to 17.05.2017, and then some magic solved the CPU problem for those instances.
It is also quite convenient for checking the results of new instance caging settings…
The resulting CPU will not necessarily add up to 100%: the SYS CPU time is not included, nor is the user CPU of all the other processes that are either not databases or not monitored by Enterprise Manager.
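As a side note, if you want to chart other metrics as well, a simple exploration query against the same repository view (just a sketch; 'mydbone' is the example instance name used above) lists what is collected hourly for a given instance:

-- List the metric columns collected hourly for one instance, to see what else could be charted
SELECT DISTINCT mv.metric_column_name
  FROM gc$metric_values_hourly mv
 WHERE mv.entity_name = 'mydbone'
 ORDER BY 1;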
Today my customer tried to do a duplicate on a cluster. When preparing the auxiliary instance, she noticed that the STARTUP NOMOUNT was hanging forever: nothing in the alert log, nothing in the trace files.
Because the database and the spfile were stored inside ASM, I became quite suspicious…
Around 12:38:56, another colleague in the office added a disk to one of the disk groups, through Enterprise Manager 12c!
But there were no rebalance operations:
SQL> select * from gv$asm_operation;

no rows selected
It’s not the first time that I have hit this type of problem. Sadly, it sometimes requires a full restart of the cluster or of ASM (because of different bugs).
This time, however, I tried to kill only the foreground sessions waiting on “ASM file metadata operation”, starting with the one coming from the OMS (see the sketch below).
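For reference, this is only a generic sketch of how such sessions can be located and killed; always double-check the SID and SERIAL# before killing anything:

-- Find the sessions waiting on the ASM metadata event, across all instances
SELECT inst_id, sid, serial#, username, program, event
  FROM gv$session
 WHERE event = 'ASM file metadata operation';

-- Then kill the offending one (sid, serial# and inst_id are placeholders)
-- ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id' IMMEDIATE;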
Surprisingly, after killing that session, everything was fine again:
First, let me apologize because every post in my blog starts with a disclaimer… but sometimes it is really necessary. 😉
Disclaimer: this blog post contains PL/SQL code that deletes incarnations from your RMAN recovery catalog. Please DON’T use it unless you deeply understand what you are doing, as it can compromise your backup and recovery strategy.
Small introduction
You may have a central RMAN catalog that stores all the backup metadata for your databases. If that is the case, you will have a database entry for each of your databases and a new incarnation entry for each duplicate, incomplete recovery or flashback (or whatever).
You should also have a delete strategy that deletes the obsolete backups from your DISK or SBT_TAPE media. If you have old incarnations, however, after some time you will notice that their information never goes away from your catalog, and sooner or later you will end up having to do some housekeeping. But there is nothing more tedious than checking and deleting the incarnations one by one, especially if you have big numbers like this catalog:
SQL> SELECT count(*) FROM db;

  COUNT(*)
----------
      1843

SQL> SELECT count(*) FROM dbinc;

  COUNT(*)
----------
      3870

SQL> SELECT count(*) FROM bdf;

  COUNT(*)
----------
   4130959

SQL> SELECT count(*) FROM brl;

  COUNT(*)
----------
  14876291
Here db, dbinc, bdf and brl contain, respectively, the registered databases, the incarnations, the datafile backups and the archivelog backups.
Different incarnations?
Consider the following query:
col dbinc_key_ for a60
set pages 100 lines 200

SELECT lpad(' ', 2*(level-1))
       || TO_CHAR(DBINC_KEY) AS DBINC_KEY_,
       db_key,
       db_name,
       TO_CHAR(reset_time, 'YYYY-MM-DD HH24:MI:SS'),
       dbinc_status
  FROM rman.dbinc
 START WITH PARENT_DBINC_KEY IS NULL
CONNECT BY prior DBINC_KEY = PARENT_DBINC_KEY
 ORDER BY db_name, db_key, level;
You can run it safely: it returns the list of incarnations hierarchically connected to their parent, by database name, key and level.
You will then see several types of behavior:
Normal databases (created once, never restored or flashed back) will have just one or two incarnations (it depends on how they are created):
104546535 104546534 VxxxxxxP 2012-01-18 09:31:01 CURRENT
These are usually the ones that you want to keep in your catalog, unless the database no longer exists: in that case, perhaps you forgot to delete it from the catalog when you dropped the database?
Flashed back databases (flashed back multiple times) will have as many incarnations as the number of flashbacks, all connected to the incarnation prior to the flashback:
1437822754 1164696336 VxxxxxxD 2016-12-16 12:07:46 CURRENT
Here, despite having several incarnations, they all belong to the same database (same DB_KEY and DBID), so you must also keep it inside the recovery catalog.
Non-production databases that are frequently refreshed from the production database (via duplicate) will have several incarnations with different DBIDs and DB_KEYs:
This is usually the most frequent case: here you want to delete the old incarnations, but only if there are no backups attached to them that are still inside the recovery window.
In this case, again, it depends whether the DBID and DB_KEY are the same as those of the current incarnation or not.
What do you need to delete?
Basically:
Incarnations of databases that no longer exist
Incarnations of existing databases that have a more recent current incarnation, but only if there are no backups still inside the retention window
How to do it?
To be 100% sure that you can delete an incarnation, you have to verify that there are no recent backups attached to it (for instance, no backups more recent than the current recovery window for that database). If the database does not have a recovery window specified but rather the default “CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default”, it is a bit more problematic… in this case, let’s assume that we consider “old” an incarnation that has had no backups for one year (365 days), OK?
Getting the last backup of each database
Sadly, there is not a single table where you can verify that. You have to collect the information from several tables. I think bdf, al, cdf and bs would suffice in most cases.
When you delete an incarnation you specify a db_key: you have to get the last backup for each db_key, with queries like this:
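The original queries are not reproduced here; as a hedged sketch of the idea, the documented catalog view rc_backup_set can give the completion time of the most recent backup set per db_key:

-- Last backup set completion time for each registered database (one row per db_key)
SELECT db_key, MAX(completion_time) AS last_backup
  FROM rc_backup_set
 GROUP BY db_key;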
The configuration information for each database is stored inside the conf table, but the retention information is stored in a VARCHAR2, either ‘TO RECOVERY WINDOW OF % DAYS’ or ‘TO REDUNDANCY %’.
You need to convert it to a number when the retention policy is a recovery window, and default it to 365 days when redundancy is used. You can add a column and a join to the query:
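Again, the original query is not shown here; an illustrative sketch of the retention logic, based on the documented views rc_database and rc_rman_configuration, could look like this:

-- Retention in days per database: parsed from the recovery window, 365 by default
SELECT d.db_key,
       NVL(MAX(CASE
                 WHEN c.value LIKE 'TO RECOVERY WINDOW OF%'
                 THEN TO_NUMBER(regexp_substr(c.value, '[0-9]+'))
               END), 365) AS retention_days
  FROM rc_database d
  LEFT JOIN rc_rman_configuration c
    ON (c.db_key = d.db_key AND c.name = 'RETENTION POLICY')
 GROUP BY d.db_key;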
This procedure will raise an exception (-20001, ‘Database not found’) when a database no longer exists (either already deleted by this procedure or by another session), so you need to handle it.
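One generic way to handle it (just a PL/SQL pattern sketch; the actual deletion call is omitted here because the original procedure is not shown) is to map the error code to a named exception:

DECLARE
  db_not_found EXCEPTION;
  PRAGMA EXCEPTION_INIT(db_not_found, -20001);
BEGIN
  NULL;  -- call the deletion procedure for the current incarnation here
EXCEPTION
  WHEN db_not_found THEN
    NULL;  -- the database is already gone: skip it and continue with the next one
END;
/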
I have used this procedure today for the first time and it worked like a charm.
However, if you have any adjustments or suggestions, don’t hesitate to leave a comment 🙂
HTH