System triggers, stats$user_log and side effects

Sometimes people get advice from the Internet: from Metalink or from well-known consulting sites.
When they need a fix or a feature, they tend to trust that advice.

Last week a colleague told me about a 10g RAC database with performance problems and, since I never sit idle, I probed both AWR and ADDM. I immediately noticed heavy enqueues and physical reads
on a segment named STATS$USER_LOG. “Strange”, I said, “I can't recall this name among either the perfstat or the catalog segments”.
Then I searched the Internet and Metalink and found the same thing on BOTH metalink.oracle.com and www.dba-oracle.com: a trick to trace logons and logoffs into a table using system triggers.

Look at this code:

create or replace trigger
logon_audit_trigger
AFTER LOGON ON DATABASE
BEGIN
insert into stats$user_log values(
user,
sys_context('USERENV','SESSIONID'),
sys_context('USERENV','HOST'),
sysdate,
to_char(sysdate, 'hh24:mi:ss'),
[...]
);
COMMIT;
END;
/

Cool, every single access is recorded in stats$user_log.
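The published scripts assume a stats$user_log table to write into. A minimal layout that would satisfy the insert above and the updates in the logoff trigger below could look like this (column names are inferred from the trigger bodies, so take it as a sketch, not the original DDL):

create table stats$user_log (
   user_id          varchar2(30),
   session_id       number,
   host             varchar2(64),
   logon_day        date,
   logon_time       varchar2(10),
   logoff_day       date,
   logoff_time      varchar2(10),
   elapsed_minutes  number,
   last_action      varchar2(64),  -- touched by the logoff trigger
   last_program     varchar2(64)   -- touched by the logoff trigger
);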

Let’s see the logoff trigger:

create or replace trigger
logoff_audit_trigger
BEFORE LOGOFF ON DATABASE
BEGIN
-- ***************************************************
-- Update the last action accessed
-- ***************************************************
update stats$user_log [...]
-- ***************************************************
-- Update the last program accessed
-- ***************************************************
update stats$user_log [...]
-- ***************************************************
[ ... many, many updates ...]
-- ***************************************************
update stats$user_log [...]
-- ***************************************************
-- Compute the elapsed minutes
-- ***************************************************
update stats$user_log set elapsed_minutes =
round((logoff_day - logon_day)*1440)
where
sys_context('USERENV','SESSIONID') = session_id;
COMMIT;
END;
/

That's all. It inserts a row when someone logs on. It runs MANY updates when someone logs off.
There is nothing that matches the inserted record with the updated records except the session_id.
And there are neither indexes nor constraints.
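A check like this against the data dictionary confirms it (both queries come back empty here):

select index_name
  from dba_indexes
 where table_name = 'STATS$USER_LOG';

select constraint_name, constraint_type
  from dba_constraints
 where table_name = 'STATS$USER_LOG';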

What’s the matter?

What happens if we have many logons?

SQL> select num_rows from dba_tables where table_name='STATS$USER_LOG';

  NUM_ROWS
----------
   3053931

What happens if the execution plan does a full scan?

SQL> explain plan for update stats$user_log […]

Explained.

SQL> @?/rdbms/admin/utlxpls
---------------------------------------------
| Id | Operation | Name |
---------------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | STATS$USER_LOG |
| 2 | TABLE ACCESS FULL| STATS$USER_LOG |
---------------------------------------------

How many reads should it take?

SQL> select bytes/1024/1024 Mb from dba_Segments where segment_name='STATS$USER_LOG';

        MB
----------
       237

Every update in the logoff trigger scans the whole 237 MB segment, at every single logoff, on a table that keeps growing with every logon: the database performance will degrade constantly, slowly but surely...
Remember: never trust a solution blindly if it involves a change to the system.

Plot Oracle historical statistics within SQL*Plus

Quite often I'm asked to investigate “what happened yesterday, when the performance problems appeared”.

Sometimes Enterprise Manager DB Console is licensed, sometimes it is not. Sometimes I have direct SQL*Net access to the database, which I can use to produce custom reports with my self-developed LAMP application. But it may happen that only ssh access to the db server is granted.

That's why I started to develop some small scripts to plot the trends of the database timed statistics.

Let’s see this one:

SQL> @sysstat.sql
Enter a sysstat to search for: physical reads


    STAT_ID STAT_NAME
----------- ------------------------------------------
 2263124246 physical reads
 4171507801 physical reads cache
  297908839 physical reads cache prefetch
 2589616721 physical reads direct
 2564935310 physical reads direct (lob)
 2663793346 physical reads direct temporary tablespace
  473165409 physical reads for flashback new
 3102888545 physical reads prefetch warmup
  531193461 physical reads retry corrupt

9 rows selected.

Enter the desired stat_id: 2263124246
Enter the start date (YYYYMMDD) [defaults today] : 20080922
Enter the end date date (YYYYMMDD) [defaults today] : 20080922

STAT_NAME        START    END
---------------- -------- --------
physical reads   20080922 20080922

BEGIN_INTERVAL_TIME           VALORE PLOTTED_VALUE
------------------------- ---------- -------------------------
22-SEP-08 12.00.12.122 AM          0
22-SEP-08 01.00.28.253 AM     120092
22-SEP-08 02.00.05.039 AM      35780
22-SEP-08 03.00.55.595 AM       4792
22-SEP-08 04.00.43.725 AM       4905
22-SEP-08 05.00.31.855 AM       7300
22-SEP-08 06.00.17.017 AM     234596
22-SEP-08 07.00.08.132 AM      24651
22-SEP-08 08.00.50.936 AM     481884
22-SEP-08 09.00.33.488 AM     130201
22-SEP-08 10.00.03.805 AM    1300306 **
22-SEP-08 11.00.07.764 AM     491857
22-SEP-08 12.00.31.548 PM     304702
22-SEP-08 01.01.04.880 PM    1023664 *
22-SEP-08 02.00.17.822 PM    8588180 ************
22-SEP-08 03.00.36.969 PM    2201615 ***
22-SEP-08 04.01.01.397 PM   17237098 *************************
22-SEP-08 05.00.39.262 PM    1606300 **
22-SEP-08 06.00.03.829 PM     451568
22-SEP-08 07.00.31.461 PM     137684
22-SEP-08 08.00.05.966 PM     203803
22-SEP-08 09.00.24.829 PM     536394
22-SEP-08 10.00.12.945 PM   10209783 **************
22-SEP-08 11.00.35.123 PM    6151663 *********

24 rows selected.

Oh! At 4.00 PM we had a lot of physical reads. Nice.

The code is a short SQL*Plus script that reads the AWR history (DBA_HIST_SYSSTAT joined to DBA_HIST_SNAPSHOT).
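A minimal sketch of such a script, assuming DBA_HIST_SYSSTAT and DBA_HIST_SNAPSHOT as the source and RPAD to draw the bars (the prompts and the VALORE / PLOTTED_VALUE aliases match the sample run above; the scaling and the date handling are my approximation):

-- sysstat.sql (sketch): plot the delta of an AWR timed statistic per snapshot
-- Reads the DBA_HIST_* views, so it requires the Diagnostics Pack license.
set verify off
set linesize 132
set pagesize 100

accept stat_search char prompt 'Enter a sysstat to search for: '

col stat_name format a42
select distinct stat_id, stat_name
  from dba_hist_sysstat
 where lower(stat_name) like lower('%&stat_search%')
 order by stat_name;

accept stat_id number prompt 'Enter the desired stat_id: '

-- default both dates to today
col today new_value today noprint
select to_char(sysdate,'YYYYMMDD') today from dual;
accept start_date char default &today prompt 'Enter the start date (YYYYMMDD) [defaults today] : '
accept end_date   char default &today prompt 'Enter the end date date (YYYYMMDD) [defaults today] : '

col stat_name format a16
select stat_name, '&start_date' "START", '&end_date' "END"
  from dba_hist_sysstat
 where stat_id = &stat_id
   and rownum = 1;

-- VALORE is the delta of the cumulative statistic between consecutive
-- snapshots; PLOTTED_VALUE is a bar of up to 25 stars, scaled to the
-- largest delta found in the chosen interval.
col plotted_value format a25
select begin_interval_time,
       valore,
       rpad('*', round(valore / nullif(max(valore) over (), 0) * 25), '*') plotted_value
  from (select s.begin_interval_time,
               st.value - lag(st.value) over (partition by st.instance_number
                                              order by st.snap_id) valore
          from dba_hist_sysstat st, dba_hist_snapshot s
         where s.snap_id         = st.snap_id
           and s.dbid            = st.dbid
           and s.instance_number = st.instance_number
           and st.stat_id        = &stat_id
           and s.begin_interval_time >= to_date('&start_date','YYYYMMDD')
           and s.begin_interval_time <  to_date('&end_date','YYYYMMDD') + 1)
 order by begin_interval_time;

The bars are plain RPAD strings: the biggest delta in the interval gets 25 stars and everything else is scaled against it, which is what makes the 4 PM spike stand out at a glance.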

Ciao

Ludovico