The new Oracle Database 12c can move datafiles ONLINE, while they are being used. This greatly improves availability when relocating files, compared to the previous approach.
Controlfiles cannot be moved online yet. The other kinds of files (tempfiles and redo logs) can of course be moved easily by creating new ones and deleting the old ones, as in pre-12c releases.
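For illustration, a minimal sketch of the new online move (file name and diskgroup are hypothetical):

ALTER DATABASE MOVE DATAFILE '/u01/oradata/CLASSIC/users01.dbf'
  TO '+DATA/CLASSIC/DATAFILE/users01.dbf';

The statement returns when the move is complete; sessions keep reading and writing the file in the meantime.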
Historically, Oracle instances on Unix/Linux servers have been composed of separate server processes to make the database multi-user, in contrast with Windows, which has always been multithreaded (Oracle 7 on MS-DOS was a single-user process, but that's prehistory…). The number of background processes has kept increasing to support new features, up to this new Oracle 12c release. On a plain database installation you may be surprised to get this output from a ps command (38 processes):
# ps -eaf | grep CLASSIC | grep -v grep
oracle    3582     1  0 21:59 ?        00:00:00 ora_pmon_CLASSIC
oracle    3584     1  0 21:59 ?        00:00:00 ora_psp0_CLASSIC
oracle    3590     1  4 21:59 ?        00:00:51 ora_vktm_CLASSIC
oracle    3596     1  0 21:59 ?        00:00:00 ora_gen0_CLASSIC
oracle    3599     1  0 21:59 ?        00:00:00 ora_mman_CLASSIC
oracle    3608     1  0 21:59 ?        00:00:00 ora_diag_CLASSIC
oracle    3612     1  0 21:59 ?        00:00:00 ora_dbrm_CLASSIC
oracle    3616     1  0 21:59 ?        00:00:00 ora_dia0_CLASSIC
oracle    3620     1  0 21:59 ?        00:00:00 ora_dbw0_CLASSIC
oracle    3624     1  0 21:59 ?        00:00:04 ora_lgwr_CLASSIC
oracle    3628     1  0 21:59 ?        00:00:00 ora_ckpt_CLASSIC
oracle    3632     1  0 21:59 ?        00:00:00 ora_smon_CLASSIC
oracle    3636     1  0 21:59 ?        00:00:00 ora_reco_CLASSIC
oracle    3640     1  0 21:59 ?        00:00:00 ora_lreg_CLASSIC
oracle    3644     1  0 21:59 ?        00:00:00 ora_rbal_CLASSIC
oracle    3648     1  0 21:59 ?        00:00:00 ora_asmb_CLASSIC
oracle    3652     1  0 21:59 ?        00:00:01 ora_mmon_CLASSIC
oracle    3659     1  0 21:59 ?        00:00:00 ora_mmnl_CLASSIC
oracle    3664     1  0 21:59 ?        00:00:00 ora_d000_CLASSIC
oracle    3667     1  0 21:59 ?        00:00:00 ora_s000_CLASSIC
oracle    3672     1  0 21:59 ?        00:00:00 ora_mark_CLASSIC
oracle    3707     1  0 21:59 ?        00:00:01 ora_o000_CLASSIC
oracle    3717     1  0 21:59 ?        00:00:01 ora_o001_CLASSIC
oracle    3725     1  0 21:59 ?        00:00:00 ora_tmon_CLASSIC
oracle    3729     1  0 21:59 ?        00:00:00 ora_tt00_CLASSIC
oracle    3736     1  0 21:59 ?        00:00:00 ora_smco_CLASSIC
oracle    3738     1  0 22:00 ?        00:00:00 ora_w000_CLASSIC
oracle    3749     1  0 22:00 ?        00:00:00 ora_fbda_CLASSIC
oracle    3751     1  0 22:00 ?        00:00:00 ora_aqpc_CLASSIC
oracle    3757     1  0 22:00 ?        00:00:00 ora_qm02_CLASSIC
oracle    3759     1  0 22:00 ?        00:00:00 ora_p000_CLASSIC
oracle    3763     1  0 22:00 ?        00:00:00 ora_p001_CLASSIC
oracle    3765     1  0 22:00 ?        00:00:00 ora_q002_CLASSIC
oracle    3767     1  0 22:00 ?        00:00:00 ora_p002_CLASSIC
oracle    3769     1  0 22:00 ?        00:00:00 ora_q003_CLASSIC
oracle    3771     1  0 22:00 ?        00:00:00 ora_p003_CLASSIC
oracle    3774     1  0 22:00 ?        00:00:00 ora_cjq0_CLASSIC
oracle    3801     1  0 22:00 ?        00:00:02 ora_vkrm_CLASSIC
If you have consolidated many databases without the pluggable database feature, you'll end up with several hundred processes even with no users connected. But Oracle 12c now introduces the possibility to start an instance using multithreading instead of the traditional processes. This could lead to some optimizations thanks to the shared process memory and the reduced context-switch overhead, I presume (I still need to test it).
Enabling the Multithreaded Execution
By default this feature is not enabled, so you have to set it explicitly:
SQL> alter system set threaded_execution=true scope=spfile;

System altered.

SQL>
And in parallel, you’ll need to add this line to the listener.ora:
DEDICATED_THROUGH_BROKER_listener=on
After a restart, the instance will show only a handful of processes:
So we have the Process Monitor (pmon), the Process Spawner (psp0), the Virtual Keeper of Time (vktm), the Database Writer (dbw0) and two new multithreaded processes (u004 and u005). Does the "U" stand for User, or Unified?
Where can I find the information on the other processes?
They still exist in the v$process view, which leads to some confusion when talking about Oracle processes with your sysadmins… The new EXECUTION_TYPE column shows whether the Oracle process runs as a thread or as an OS process, and the SPID lets us know which OS process actually executes it.
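The listing below comes from a query along these lines (a sketch: the select list is inferred from the output, the ordering is assumed):

SQL> select pid, spid, pname, execution_type
  2  from v$process;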
PID  SPID  PNAME EXECUTION_
---- ----- ----- ----------
2    4792  PMON  PROCESS
3    4794  PSP0  PROCESS
4    4800  VKTM  PROCESS
5    4804  GEN0  THREAD
6    4804  SCMN  THREAD
18   4804  LREG  THREAD
19   4804  RBAL  THREAD
20   4804  ASMB  THREAD
11   4804  DBRM  THREAD
14   4804  LGWR  THREAD
15   4804  CKPT  THREAD
16   4804  SMON  THREAD
7    4804  MMAN  THREAD
17   4810  RECO  THREAD
12   4810  DIA0  THREAD
10   4810  SCMN  THREAD
9    4810  DIAG  THREAD
25   4810  N000  THREAD
50   4810  Q002  THREAD
49   4810  W004  THREAD
21   4810  MMON  THREAD
22   4810  MMNL  THREAD
23   4810  D000  THREAD
24   4810  S000  THREAD
51   4810  Q003  THREAD
26   4810  MARK  THREAD
27   4810  W001  THREAD
28   4810        THREAD
29   4810        THREAD
30   4810  TMON  THREAD
31   4810  TT00  THREAD
32   4810  SMCO  THREAD
33   4810  FBDA  THREAD
34   4810  W000  THREAD
35   4810  AQPC  THREAD
36   4810  CJQ0  THREAD
37   4810  P000  THREAD
38   4810  P001  THREAD
39   4810  P002  THREAD
40   4810  P003  THREAD
41   4810  VKRM  THREAD
42   4810        THREAD
43   4810  O000  THREAD
45   4810  W002  THREAD
46   4810  QM02  THREAD
47   4810  W003  THREAD
13   4818  DBW0  PROCESS
8    4884        PROCESS
1                NONE
What about the User processes?
Well, I’ve spawned 200 user processes with sqlplus, and got 200 threads:
SQL> select background, execution_type, count(*)
  2  from v$process group by background, execution_type;

B EXECUTION_   COUNT(*)
- ---------- ----------
1 PROCESS             4
1 THREAD             34
  PROCESS             1
  NONE                1
  THREAD            200
On the OS side, I've noticed one additional process spawned to distribute the load of the new user sessions. Damn, I'm starting to get confused myself using the term "process" o_O
With multithreaded execution enabled, operating system authentication no longer works:
[oracle@luc12c01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri May 10 01:14:17 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

Enter user-name:
Unless Oracle revises its authentication mechanism in a future patchset, you'll need to rely on the password file and provide the password to connect to the instance as sysdba, even locally.
What about performance?
In theory, threads should be faster and have a lower footprint:
The main benefit of threads (as compared to multiple processes) is that the context switches are much cheaper than those required to change current processes. Sun reports that a fork() takes 30 times as long as an unbound thread creation and 5 times as long as a bound thread creation.
In some operating systems running on some hardware, switching between threads belonging to the same process is much faster than switching to a thread from different process (because it requires more complicated process context switch). http://en.wikipedia.org/wiki/Thread_switching_latency
In practice, I’ll do some tests and let you know! 🙂
What about the good old OS kill command to terminate processes?
Good question! So far I haven't found any reference to an orakill command (which exists on Windows). I hope it will arrive soon!
After a long, long wait, Oracle has finally announced the availability of its new-generation database. Looking at the new features, I think it will take several months before I learn them all. The impressive number of changes brings me back to release 10gR1, and I'm not surprised that Oracle has waited so long; I still bet we'll find a huge number of bugs in the first release. As always, we'll certainly need to wait for the first patchset before going to production.
Does ‘c’ stand for cloud?
While Oracle has developed this release with the cloud in mind, the first word that comes to my mind is "consolidation". The new headline feature Pluggable Database (aka Oracle Multitenant) will be the dream of every datacenter manager, along with CloneDB (well, it was somehow already available in 11.2.0.2) and ASM thin-provisioned diskgroups.
But yes, it's definitely great for clouds, too.
Other features like Flex ASM, Flex Cluster, several new security features, cross-platform backups… let you imagine how far we can go in building private, multi-tenant clouds.
First steps: what changes with a typical installation
The process for a traditional standalone DB+ASM installation is the same as on the old 11gR2: you need to install the Grid Infrastructure first (and take advantage of the Oracle Restart feature), then run the database installation.
The installation documentation is complete as always, and it's getting quite huge as the Grid Infrastructure capabilities grow.
To meet most installation prerequisites, Oracle has once again prepared an RPM that does the dirty work:
Oracle suggests using Ksplice and also explicitly recommends the deadline I/O scheduler (this has long been a best practice, but I can't remember it being officially documented before).
The splash screen has become more "red", giving a colorful experience during the installation process. 😉
Once the GI is installed, the database installation asks for several new OS groups: OSBACKUPDBA, OSDGDBA, OSKMDBA. This gives you more possibilities to split administration duties; not specifying them leads to the "old behavior".
You can decide to use an ACFS filesystem for both the installation AND the database files (with some exceptions, e.g. Windows servers). So you can take advantage of the snapshot features of ACFS for your data, provided that the performance is acceptable (I'll try to test and blog more about this). You can use the Copy-On-Write feature to provide writable snapshot copies, by embedding a special syntax directly inside the "create pluggable database" command. Unfortunately, Oracle has decided to deliver pluggable databases as an extra-cost option. :-/
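For illustration, the special syntax mentioned above is a single clause (PDB names are hypothetical):

CREATE PLUGGABLE DATABASE pdb2 FROM pdb1 SNAPSHOT COPY;

The clone is thin: the new PDB shares the unchanged blocks of the source snapshot and stores only its own modifications.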
Database creation with DBCA is even easier: there is an option for a very default installation and, as you can guess, it uses templates with all options installed by default.
But the hot topic is that you can create it as a "Container Database". This is done by appending the keywords "enable pluggable database;" at the end of the create database command. The process then puts all the required bricks in place (creation of the pdb$seed database and so on). I'll cover the topic in separate posts because it's really the biggest new feature.
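As a sketch, the relevant clause looks like this (paths are hypothetical and most of the create database clauses are omitted):

CREATE DATABASE CDB1
   ...
   ENABLE PLUGGABLE DATABASE
     SEED FILE_NAME_CONVERT = ('/u01/oradata/CDB1/', '/u01/oradata/CDB1/pdbseed/');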
You can still use the advanced mode to get the "old style" database creation, where you can customize your database.
If you choose to generate only the scripts and run them manually (that's my habit), you'll notice that the SQL scripts are no longer run directly within the opened SQL*Plus session; they're run from a perl script that basically suppresses all output to the terminal, giving the impression of a cleaner installation. IMO that's better only as long as everything runs fine.
I’ve taken my MySQL 5.0 OCP certification in 2010. A little late because this certification has been around for longtime, but surprisingly, it’s still the only version for MySQL OCP certification.
The 5.0 branch started in 2003 and it went GA in 2005:
“MySQL 5.0.0, a new version of the popular Open Source / Free Software Database Management System, has been released.”
Michael Widenius, December 24 2003
“It is my pleasure to announce the production release of MySQL 5.0, which is hereby GA (Generally Available).”
Kaj Arnö, October 24th 2005
To make a comparison: it's as if the current Oracle certification release were still 10gR1.
Yesterday I’ve received an email from Oracle University claiming new courses on MySQL 5.6 to add certifications to the resume:
“The MySQL for Database Administrators course is for DBAs and other database professionals who want to install and configure MySQL Server, set up replication and security, perform database backups and performance tuning, and protect MySQL databases. This course has been updated to cover commercial features of MySQL as well as enhanced replication capabilities in MySQL 5.6.”
Oracle University, May 20th 2013
I’ve started thinking that a new certification path on 5.6 was available, but on the Oracle site I see that the certification is always the same. They have just added a sentence in the preparation notes:
“This exam has been validated through MySQL 5.5. The recommended course is based on MySQL 5.6. Candidates wishing to use this course to prepare may wish to consult product documentation to ensure they understand the differences between MySQL 5.5 and MySQL 5.6”
In my previous post I’ve shown how to collect data and insert it into a database table using PowerShell. Now it’s time to get some information from that data, and I’ve used TSQL for this purpose.
The backup exceptions
Every environment has some backup rules and backup exceptions. For example, you don’t want to check for failures on the model, northwind, adventureworks or distribution databases.
I’ve got the rid of this problem by using the “exception table” created in the previous post. The rules are defined by pattern matching. First we need to define a generic rule for our default backup schedules:
INSERT INTO [Tools].[dbo].[DB_Backup_Exceptions]
       ([InstanceName],[DatabaseName],[LastFullHours]
       ,[LastLogHours],[Description],[BestBefore])
VALUES
       ('%','%',36,12,'Default delays for all databases',NULL)
GO
With the previous statement, we'll check that all databases ('%') on all instances (again '%') have been backed up in the last 36 hours, and that a log backup has occurred in the last 12 hours. The description is useful to remember why the rule exists.
The "BestBefore" column lets you define a time limit for the rule. For example, if you do some maintenance and are skipping some schedules, you can safely insert a rule that expires after X days: you avoid alerts in the meantime, without the risk of forgetting to delete the rule afterwards.
INSERT INTO [Tools].[dbo].[DB_Backup_Exceptions]
       ([InstanceName],[DatabaseName],[LastFullHours]
       ,[LastLogHours],[Description],[BestBefore])
VALUES
       ('SERVER1','%',1000000,1000000,'Maintenance until 12.05.2013','2013-05-12 00:00:00')
GO
The previous rule will skip backup reports on SERVER1 until May 12th.
INSERT INTO [Tools].[dbo].[DB_Backup_Exceptions]
       ([InstanceName],[DatabaseName],[LastFullHours]
       ,[LastLogHours],[Description],[BestBefore])
VALUES
       ('%','Northwind',1000000,1000000,'Don''t care about northwind',NULL)
GO
The previous rule will skip all reports on all Northwind databases.
Important: If multiple rules apply to the same database, the rule with a higher time threshold wins.
The queries
The following query lists the databases whose last full backup is older than the defined threshold:
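The query itself didn't survive the formatting of this page, so here is a minimal sketch of it (table and column names are assumed from the collector shown in the previous post):

SELECT s.instanceName, s.databaseName, s.lastFull
FROM dbo.db_status s
JOIN dbo.DB_Backup_Exceptions e
  ON s.instanceName LIKE e.InstanceName
 AND s.databaseName LIKE e.DatabaseName
 AND (e.BestBefore IS NULL OR e.BestBefore > GETDATE())
GROUP BY s.instanceName, s.databaseName, s.lastFull
-- MAX() implements "the rule with the higher threshold wins"
HAVING DATEDIFF(hour, s.lastFull, GETDATE()) > MAX(e.LastFullHours)
ORDER BY s.lastFull;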
Checking database backups has always been one of the main concerns of DBAs. With Oracle it's quite easy thanks to a central RMAN catalog, but with other databases doing it with little effort can be a great challenge.
Some years ago I developed a little framework to check all SQL Server databases. It was based on Linux (strange but true!), bash, freetds, sqsh and flat configuration files. It's still doing its job well, but not every SQL Server DBA can deal with complex bash scripting, so a customer of mine asked me if I could rewrite it in a more Microsoft-like language.
So I decided to go for a PowerShell script, in conjunction with a couple of tables for the configuration and the data, plus a simple TSQL script to provide HTML reporting. I have to say I'm not an expert in PowerShell, but it seems far from being as flexible as other scripting languages (damn, compared to perl, python or php, about the only thing they have in common is the initial 'P'). However, I managed to do something usable.
The principle
It's quite simple: the PowerShell script looks up the list of instances in a reference table, then connects to each one sequentially and retrieves:
recovery mode
status
creation time
last full backup
last log backup
This data is merged into a table on the central repository. Finally, a TSQL script does some reporting.
Custom classes in PowerShell
One of the big annoyances with PowerShell is the lack of a way to define custom classes, which is especially odd considering that PowerShell is highly object-oriented. To define your own classes to work with, you have to define them in another language (C# in this example):
Add-Type @'
using System;
public class DatabaseBackup
{
public string instanceName;
public string databaseName;
public string recoveryMode;
public string status;
public DateTime creationTime;
public DateTime lastFull;
public DateTime lastLog;
private TimeSpan diff;
public double lastFullTotalHours () {
diff = DateTime.Now - lastFull;
return Math.Round(diff.TotalHours,2);
}
public double lastLogTotalHours () {
diff = DateTime.Now - lastLog;
return Math.Round(diff.TotalHours,2);
}
}
'@
For better code reading, I’ve put this definition in a separate file (DatabaseBackup.ps1).
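The file is then loaded with dot-sourcing at the top of the main script (the relative path is assumed):

# load the C# class definition (hypothetical relative path)
. .\DatabaseBackup.ps1

# quick smoke test: instantiate and fill one object
$b = New-Object DatabaseBackup
$b.instanceName = "SERVER1"
$b.lastFull = (Get-Date).AddHours(-20)
$b.lastFullTotalHours()    # returns roughly 20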
where type ='L' and backup_finish_date <= getdate()
group by database_name
) c
on d.name = c.database_name
where d.name <> 'Tempdb'
order by [LastFull]";
I’ve also put this snippet in a separate file queries.ps1 to improve readability.
The tables
The first table (DB_Servers) can be as simple as a single column containing the instances to check. It could also be any other kind of source, like a corporate CMDB or similar.
The second table will contain the collected data. Of course it can be expanded!
The third table will contain some rules for managing exceptions. Such exceptions can be useful if you have situations like “all databases named northwind should not be checked”. I’ll show some examples in the next post.
$instances = Invoke-Sqlcmd -Query "select [name]=db_instance from db_servers" -ServerInstance $serverName -Database $databaseName
Finally, for each instance we have to check, we trigger the function that collects the data and insert the results into the central repository (I'm using a merge to update the existing records).
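A minimal sketch of such a merge (the central table and its columns are assumed; in the real script the values come from the collected objects):

MERGE dbo.db_status AS t
USING (SELECT @instanceName AS instanceName, @databaseName AS databaseName,
              @recoveryMode AS recoveryMode, @status AS status,
              @creationTime AS creationTime, @lastFull AS lastFull, @lastLog AS lastLog) AS s
   ON t.instanceName = s.instanceName AND t.databaseName = s.databaseName
WHEN MATCHED THEN
   UPDATE SET recoveryMode = s.recoveryMode, status = s.status,
              creationTime = s.creationTime, lastFull = s.lastFull, lastLog = s.lastLog
WHEN NOT MATCHED THEN
   INSERT (instanceName, databaseName, recoveryMode, status, creationTime, lastFull, lastLog)
   VALUES (s.instanceName, s.databaseName, s.recoveryMode, s.status,
           s.creationTime, s.lastFull, s.lastLog);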
You can’t use the internal powershell of SQLServer because it’s not full compatible with powershell 2.0.
Check that the table db_status is getting populated
Limitations
The script uses Windows authentication, assuming you work with a centralized domain user. If you want to use SQL authentication (for example if you are a multi-tenant managed services provider) you need to store your passwords somewhere…
This script is intended for single instances. It should work on clusters, but I haven't tested it.
Check the backup chain up to the tape library: relying only on the information contained in the msdb is not a reliable monitoring solution!!
In my next post we’ll see how to generate HTML reports via email and manage exceptions.
Hope you’ll find it useful.
Again PLEASE, if you improve it, kindly send me back a copy or blog it and post the link in the comments!
Windows Performance Monitor is an invaluable tool when you don’t have external enterprise monitoring tools and you need to face performance problems, whether you have a web/application server, a mail server or a database server.
But what I don’t personally like of it is what you get in terms of graphing. If you schedule and collect a big amount of performance metrics you will likely get lost in adding/removing such metrics from the graphical interface.
What I’ve done long time ago (and I’ve done again recently after my old laptop has been stolen 🙁 ) is to prepare a PHP script that parse the resulting CSV file and generate automatically one graph for each metric that could be found.
Unfortunately, most of the Windows sysadmins among you will disagree with the fact that I've done this on a Linux box. But I guess you can use my script if you install php inside cygwin. The other tool you need is rrdtool; again, I use it massively to satisfy my graphing needs.
How to collect your data
Basically you need to create any Data Collector within Performance Monitor that generates a log file. You can specify a CSV file directly (Log format: Comma separated) or generate a BLG file and convert it later (Log format: Binary). System dumps are not used, so if you use the standard Performance template, you can delete the dump from your collection.
Remember that the more counters you collect, the longer the graph generation takes. The script does not run in parallel, so it uses only one core. Generally, the total time is proportional to the number of counters and samples, divided by a speed factor that depends on both CPU speed and disk speed, because of the huge number of syncs required to update several thousands of files. I've tried to reduce the number of rrdupdate calls by queuing several update values in a single command line, and I've noticed an important performance increase, but I know it's not enough. The queuing is sketched below.
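The trick simply exploits the fact that rrdupdate accepts several timestamp:value pairs in a single call (file name and values below are just examples):

rrdtool update Processor_Total_Processor_Time.rrd \
  1366721762:12.5 1366721767:15.0 1366721772:9.8 1366721777:11.2

Each PNG is then produced by a generated rrdtool graph command like the following: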
rrdtool graph /root/temp/LUDO/IPv4/Datagrams_Received_Unknown_Protocol.png --start "1366721762" --end "1366724017" --width 453 DEF:ds0=/root/temp/LUDO/IPv4/Datagrams_Received_Unknown_Protocol.rrd:value:LAST:step=5 LINE1:ds0#0000FF:"IPv4\Datagrams Received Unknown Protocol" VDEF:ds0max=ds0,MAXIMUM VDEF:ds0avg=ds0,AVERAGE VDEF:ds0min=ds0,MINIMUM COMMENT:" " COMMENT:" Maximum " GPRINT:ds0max:"%6.2lf" COMMENT:" Average " GPRINT:ds0avg:"%6.2lf" COMMENT:" Minimum " GPRINT:ds0min:"%6.2lf"
...
Now it’s done!
The script generates a folder with the name of the server (LUDO in my example) and a subfolder for each class of counters (as you see them in Performance Monitor).
Inside each folder you will have a PNG (and an rrd) for each metric.
Important: the RRDs are generated with a single round-robin archive whose size equals the number of samples. If you want the rrd files to store historical data, you'll need to modify the script. Also, the width of each graph equals the number of samples (for best readability), but it is capped at 1000 to avoid huge images.
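For reference, here is a sketch of how such an RRD can be created, consistent with the graph command above (a 5-second step and 2000 samples are assumed):

rrdtool create Datagrams_Received_Unknown_Protocol.rrd \
  --start 1366721757 --step 5 \
  DS:value:GAUGE:10:U:U \
  RRA:LAST:0.5:1:2000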
Future Improvements
It would be nice to have a prepared set of standard graphs combining multiple metrics (e.g. CPU user, system and idle together) and additional lines like regressions…
Download the script: process_l_php.txt and rename it with a .php extension.
Ok, ok: as an "Oracle on Linux Certified Expert", I've never been a great fan of SQL Server (I shouldn't say this, I've been working on SQL Server since release 6.5…) and I've always hated the DOS command prompt.
However, things have been changing fast since Microsoft released PowerShell some years ago. It's surprising: Windows PowerShell now supports best-of-breed features like aliases and pipelines. 😀
Today Microsoft itself recommends Windows Core installations instead of full ones, and SQL Server 2012 also comes with a lot of new cmdlets to manage your server.
So I’ve decided to move my first steps in the Powershell world and I’ve created a script for a customer that installs and configure a SQL2008 with a single Powershell script.
The very first line accepts named parameters. I've tried to reduce their number, but I've preferred to take, as an example, different disks for different tasks.
Then I've added a little interaction in case some parameters are missing. In fact, I can launch the script without inline parameters and specify everything when prompted.
The commented command retrieves the installed features after the installation. There's no real need to display it, but it works really well.
Dynamically prepare a configuration file
The unattended installation needs some parameters prepared in a configuration file.
This is likely where you will change most of your stuff depending on your standards:
Components, paths, service accounts: you can change everything, or modify the script to accept these variables as parameters too.
The full documentation about filling the configuration file is on the MSDN:
Off course you’ll need an installation media downloaded from the Microsoft customers site with the correct License Keys and mounted somewhere. (remember the $sourceDir parameter?) I’ve decided to change the path in the directory containing the media and then change it back.
The Service Pack installation has been a little more painful. Normally it would be simple, but the PowerShell prompt is actually returned immediately after firing the command. So, to wait for it, I had to figure out the name of the process (it's the executable name without the .exe file extension), get its process id and wait for that process:
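Something along these lines (the executable name below is hypothetical: use your service pack file name without the .exe extension):

# launch the SP setup, then wait for the spawned installer process
$spProcess = "SQLServer2008SP1-KB968369-x64-ENU"   # hypothetical name: the SP exe without ".exe"
Start-Sleep -Seconds 5                             # give the installer time to spawn
$p = Get-Process -Name $spProcess -ErrorAction SilentlyContinue
if ($p) { Wait-Process -Id $p.Id }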
By default SQL Server starts listening on a dynamic port. If you have a default instance and want to configure the port without opening the Configuration Manager, you can do it with this snippet that I've copied from sirSql (thank you for sharing it).
###############################
# change the TCP port at the end of the installation
        "Success: SQL set to listen on TCP/IP port $port. Please restart the SQL service for changes to take effect."
    }
    Catch { Write-Warning "Unable to enable TCP/IP & set SQL to listen on port $port" }
}

#####################
# Changing TCP port #
#####################
"Changing TCP port to $port..."
changePort $hostName $instance $port
Adding your stuff at the end
Having the installation completed is only half the job. After the installation you may want to add tempfiles, modify your model database, or add default accounts.
That’s up to you. If your scripts are identical you can execute them with sqlcmd.If you want to take benefit of the variables already set in the script you can execute them directly:
Well, I’ll never paste again all the content here, you can download the script HERE. Just change the file extension from .txt to .ps1.
I know it’s not a rock-solid procedure but it works well for my purposes, feel free to comment or review my script. Just, if you do some improvement on it, please share it and back-link this post!
My recent move to Switzerland has kept me a little busy. Last August I applied for a consulting position at Trivadis, in their Lausanne location. I can't hide that I'm quite excited to be part of a great company, but now I need to "ride the wave" and come back with some new posts, especially now that MySQL and Oracle are out with new releases. (Well, Oracle Database 12c will be out SOON, I hope!)