System triggers, stats$user_log and side effects

Sometimes people get advice from the Internet: from Metalink or from well-known consulting sites.
When they need a fix or a feature, they tend to trust that advice.

Last week a colleague told me about a 10g RAC database with performance problems and, since I never sit still, I probed both AWR and ADDM. I immediately noticed heavy enqueues and physical reads
on a segment named STATS$USER_LOG. “Strange,” I said, “I don’t remember this name among either the PERFSTAT or the catalog segments.”
Then I searched the Internet and Metalink and found the same thing on BOTH metalink.oracle.com and www.dba-oracle.com: a trick to trace logons and logoffs into a table using system triggers.

Look at this code:

create or replace trigger logon_audit_trigger
AFTER LOGON ON DATABASE
BEGIN
  insert into stats$user_log values(
    user,
    sys_context('USERENV','SESSIONID'),
    sys_context('USERENV','HOST'),
    sysdate,
    to_char(sysdate, 'hh24:mi:ss'),
    [...]
  );
  COMMIT;
END;
/

Cool: every single access is recorded in stats$user_log.

Let’s see the logoff trigger:

create or replace trigger logoff_audit_trigger
BEFORE LOGOFF ON DATABASE
BEGIN
  -- ***************************************************
  -- Update the last action accessed
  -- ***************************************************
  update stats$user_log [...]
  -- ***************************************************
  -- Update the last program accessed
  -- ***************************************************
  update stats$user_log [...]
  -- ***************************************************
  [ ... many, many updates ...]
  -- ***************************************************
  update stats$user_log [...]
  -- ***************************************************
  -- Compute the elapsed minutes
  -- ***************************************************
  update stats$user_log set elapsed_minutes =
    round((logoff_day - logon_day)*1440)
  where
    sys_context('USERENV','SESSIONID') = session_id;
  COMMIT;
END;
/
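
As an aside, Oracle date arithmetic returns differences in days, which is why the trigger multiplies by 1440 (the minutes in a day). A quick sanity check, runnable on any Oracle instance:

```sql
-- The difference between two DATEs is expressed in days;
-- one hour is 1/24 of a day, so the result below is 60.
select round((sysdate - (sysdate - 1/24)) * 1440) as elapsed_minutes
from dual;
```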

That’s all. It inserts one row when someone logs on. It runs MANY updates when someone logs off.
The only match between the row inserted and the rows updated is session_id.
And there are neither indexes nor constraints.
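
If the table really had to stay, a minimal mitigation would be an index on the one column the logoff trigger uses to find its rows (the index name here is mine, purely illustrative):

```sql
-- session_id is the only link between the row inserted at logon
-- and the rows updated at logoff, so index it
create index stats$user_log_sid_idx on stats$user_log (session_id);
```

With such an index the logoff updates could use an index range scan instead of reading the whole segment; a periodic purge of old rows would help too. But keep reading to see why the table is a problem in the first place.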

What’s the matter?

What happens if we have many logons?

SQL> select num_rows from dba_tables where table_name='STATS$USER_LOG';

NUM_ROWS
----------
3053931

What happens if the execution plan does a full scan?

SQL> explain plan for update stats$user_log [...]

Explained.

SQL> @?/rdbms/admin/utlxpls
---------------------------------------------
| Id | Operation | Name |
---------------------------------------------
| 0 | UPDATE STATEMENT | |
| 1 | UPDATE | STATS$USER_LOG |
| 2 | TABLE ACCESS FULL| STATS$USER_LOG |
---------------------------------------------

How many reads should it take?

SQL> select bytes/1024/1024 mb from dba_segments where segment_name='STATS$USER_LOG';

MB
----------
237
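
With the default 8 KB block size, that is roughly 30,000 blocks read by each full scan, for every single update in the logoff trigger. A back-of-the-envelope check:

```sql
-- 237 MB segment / 8 KB blocks = blocks read by one full scan
select round(237 * 1024 / 8) as blocks_per_scan from dual;
-- returns 30336
```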

Database performance will degrade, constantly and slowly…
Remember: never trust a solution if it involves a change to the system.

Sometimes they come back

A colleague called me yesterday.
Colleague: “Hi, I cannot access a customer database, and it’s urgent. Can you help me?”
Me: “Have you ever connected to that database?”
Colleague: “Of course, but last Monday the netadmins reconfigured a router: that’s when the problem started.”
Me: “Reconfigured what?”
Colleague: “A router: the router lost its config.”

I tried connecting to the database myself: network timeout.

Me: “Can the customer connect from his own network?”
Colleague: “Yes, it fails only from our network.”
Me: “Is there a NAT?”
Colleague: “Yes, the real address is different.”
Me: “Is the db server running Windows?”
Colleague: “I forgot! It’s a Windows NT box.”
Me: “Mmh, I should ask the netadmins. Perhaps you can work around it by setting USE_SHARED_SOCKET in the registry… Is the release at least 9i?”
Colleague: “It’s a 7.3.”
Me: “@#”&%!?”