More often than not, I’m asked to investigate “what happened yesterday when performance problems appeared”.
Sometimes I have the Enterprise Manager DB Console licensed, sometimes not. Sometimes I have direct SQL*Net access to the database, which I can use to produce custom reports with my self-developed LAMP application. But it may happen that only ssh access to the database server is granted.
That’s why I started developing some small scripts to plot the trends of database timed statistics.
Let’s see this one:
SQL> @sysstat.sql
Enter a sysstat to search for: physical reads
STAT_ID STAT_NAME
----------- ------------------------------------------
2263124246 physical reads
4171507801 physical reads cache
297908839 physical reads cache prefetch
2589616721 physical reads direct
2564935310 physical reads direct (lob)
2663793346 physical reads direct temporary tablespace
473165409 physical reads for flashback new
3102888545 physical reads prefetch warmup
531193461 physical reads retry corrupt
9 rows selected.
Enter the desired stat_id: 2263124246
Enter the start date (YYYYMMDD) [defaults today] :
20080922
Enter the end date (YYYYMMDD) [defaults today] :
20080922
STAT_NAME START END
---------------- -------- --------
physical reads 20080922 20080922
BEGIN_INTERVAL_TIME VALORE PLOTTED_VALORE
------------------------- ---------- -------------------------
22-SEP-08 12.00.12.122 AM 0
22-SEP-08 01.00.28.253 AM 120092
22-SEP-08 02.00.05.039 AM 35780
22-SEP-08 03.00.55.595 AM 4792
22-SEP-08 04.00.43.725 AM 4905
22-SEP-08 05.00.31.855 AM 7300
22-SEP-08 06.00.17.017 AM 234596
22-SEP-08 07.00.08.132 AM 24651
22-SEP-08 08.00.50.936 AM 481884
22-SEP-08 09.00.33.488 AM 130201
22-SEP-08 10.00.03.805 AM 1300306 **
22-SEP-08 11.00.07.764 AM 491857
22-SEP-08 12.00.31.548 PM 304702
22-SEP-08 01.01.04.880 PM 1023664 *
22-SEP-08 02.00.17.822 PM 8588180 ************
22-SEP-08 03.00.36.969 PM 2201615 ***
22-SEP-08 04.01.01.397 PM 17237098 *************************
22-SEP-08 05.00.39.262 PM 1606300 **
22-SEP-08 06.00.03.829 PM 451568
22-SEP-08 07.00.31.461 PM 137684
22-SEP-08 08.00.05.966 PM 203803
22-SEP-08 09.00.24.829 PM 536394
22-SEP-08 10.00.12.945 PM 10209783 **************
22-SEP-08 11.00.35.123 PM 6151663 *********
24 rows selected.
Oh! At 4.00 PM we had a lot of physical reads. Nice.
This is the code:
-- display given statistics from DBA_HIST_SYSSTAT
col BEGIN_INTERVAL_TIME for a25
set pages 100 lines 130
set verify off term on

accept sysstat prompt 'Enter a sysstat to search for: '

select STAT_ID, STAT_NAME
from DBA_HIST_STAT_NAME
where lower(STAT_NAME) like lower('%&sysstat%')
order by stat_name;

accept stat_id prompt 'Enter the desired stat_id: '
accept startdate prompt 'Enter the start date (YYYYMMDD) [defaults today] : '
accept enddate prompt 'Enter the end date (YYYYMMDD) [defaults today] : '

select STAT_NAME,
       nvl('&startdate',to_char(sysdate,'YYYYMMDD')) as "START",
       nvl('&enddate',to_char(sysdate,'YYYYMMDD')) as "END"
from DBA_HIST_STAT_NAME
where STAT_ID = &stat_id;

select BEGIN_INTERVAL_TIME, VALORE,
       substr( rpad('*',40*round( VALORE/max(VALORE)over(),2),'*'),1,40) PLOTTED_VALORE
from (
  select s.BEGIN_INTERVAL_TIME BEGIN_INTERVAL_TIME,
         nvl(decode(greatest(VALUE, nvl(lag(VALUE) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),0)),
             VALUE,
             VALUE - lag(VALUE) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),
             VALUE), 0) VALORE
  from DBA_HIST_SNAPSHOT s,
       DBA_HIST_SYSSTAT g,
       v$instance i
  where s.SNAP_ID=g.SNAP_ID
    and g.STAT_ID='&stat_id'
    and s.BEGIN_INTERVAL_TIME >= trunc(to_timestamp(nvl('&startdate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD'))
    and s.BEGIN_INTERVAL_TIME <= trunc(to_timestamp(nvl('&enddate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD')+1)
    and s.instance_number=i.instance_number
    and s.instance_number=g.instance_number
  order by 1
);
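The heart of the script is the lag() analytic function: it turns the cumulative counters stored in AWR into per-snapshot deltas, and the decode(greatest(...)) guard falls back to the raw value when the counter went backwards (e.g. after an instance restart). A minimal illustration of the same arithmetic, using a hypothetical inline table with made-up values:

```sql
-- Illustrative only: lag() turns cumulative counters into per-interval deltas.
with t (snap_id, value) as (
  select 1, 100 from dual union all
  select 2, 250 from dual union all
  select 3,  30 from dual  -- counter went backwards: instance restarted
)
select snap_id, value,
       -- same guard as the script: if value < previous value, keep value as-is
       nvl(decode(greatest(value, nvl(lag(value) over (order by snap_id), 0)),
                  value, value - lag(value) over (order by snap_id),
                  value), 0) delta
from t;
-- deltas: 0, 150, 30
```

The first delta is 0 because there is no previous snapshot to subtract, which is also why the first row of the transcript above shows 0.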
Ciao
—
Ludovico
Thanks, it’s great.
What if we want just a daily sum rather than a per-snapshot reading?
So I just want to see per day and not per snap ID.
Well, in that case you’ll need some aggregation per day…
As professors like to say: “the solution of this problem is trivial and is left as an exercise for the reader” 🙂
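For what it’s worth, one possible (untested) shape of that exercise: keep the script’s inner delta query as-is, then wrap it and group by day instead of listing each snapshot. The alias daily_valore is mine, not from the original script.

```sql
-- Hedged sketch, not part of the original script: sum the per-snapshot
-- deltas per calendar day.
select trunc(begin_interval_time) snap_day,
       sum(valore) daily_valore
from (
  select s.begin_interval_time,
         nvl(decode(greatest(value, nvl(lag(value) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),0)),
             value,
             value - lag(value) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),
             value), 0) valore
  from dba_hist_snapshot s, dba_hist_sysstat g, v$instance i
  where s.snap_id = g.snap_id
    and g.stat_id = '&stat_id'
    and s.instance_number = i.instance_number
    and s.instance_number = g.instance_number
)
group by trunc(begin_interval_time)
order by 1;
```

Note that the deltas must be computed per snapshot first (inside the subquery) and only then summed; taking the delta of daily maxima would silently swallow any counter reset that happened during the day.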
Thank you for sharing… very useful post.
Hi Ludovico,
Does the output show cumulative values or delta values?
Thank you
Hi Satish, the delta is shown, thanks to the “lag() over” analytic function.
Ludovico,
Nice work!
Could you please validate whether the SQL below can be used as a RAC-aware script?
select instance_number, SNAP_ID, BEGIN_INTERVAL_TIME, VALORE,
       substr( rpad('*',40*round( VALORE/max(VALORE)over(),2),'*'),1,40) PLOTTED_VALORE
from (
  select s.instance_number instance_number, s.SNAP_ID, s.BEGIN_INTERVAL_TIME BEGIN_INTERVAL_TIME,
         nvl(decode(greatest(VALUE, nvl(lag(VALUE) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),0)),
             VALUE,
             VALUE - lag(VALUE) over
             (partition by s.dbid, s.instance_number, g.stat_name order by s.snap_id),
             VALUE), 0) VALORE
  from DBA_HIST_SNAPSHOT s,
       DBA_HIST_SYSSTAT g,
       gv$instance i
  where s.SNAP_ID=g.SNAP_ID
    and g.STAT_ID='&stat_id'
    and s.BEGIN_INTERVAL_TIME >=
        trunc(to_timestamp(nvl('&startdate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD'))
    and s.BEGIN_INTERVAL_TIME <=
        trunc(to_timestamp(nvl('&enddate',to_char(sysdate,'YYYYMMDD')),'YYYYMMDD')+1)
    and s.instance_number=i.instance_number
    and s.instance_number=g.instance_number
  order by 1
);
Thanks
Yousuf
Hi Yousuf, it should be RAC-aware; it will just give you one series per instance.
Pingback: Querying DBA_HIST_SNAPSHOT and DBA_HIST_SYSSTAT « flashdba
eMPi, you’re right, I fixed the post.
Line
substr( rpad('*',40*round( VALUE/max(VALORE)over(),2),'*'),1,40) PLOTTED_VALORE
should be:
substr( rpad('*',40*round( VALORE/max(VALORE)over(),2),'*'),1,40) PLOTTED_VALORE
Otherwise, very good post! Thanks
Keep up the good work.