RRDize everything, chapter 1
If you are managing some Application Server deployments, you have probably wondered how to check and collect performance data.
As stated in the documentation, you can gather performance metrics with the dmstool utility.
AFAIK, this can be done from release 9.0.2 onwards, but I suspect DMS will not work on WebLogic.
Basically, you need an external server that acts as a collector (it could also be a server in the Oracle AS farm): copy the dms.jar library from an Oracle AS installation to your collector and use it just as you would use dmstool:
java -jar dms.jar [dmstool options]
There are three basic ways to get data:
Get all metrics at once:
java -jar dms.jar -dump -a "youraddress://..." [format=xml]
Get only the interesting metrics:
java -jar dms.jar -a "youraddress://..." metric metric ...
Get the metrics included in specific DMS tables:
java -jar dms.jar -a "youraddress://..." -table table table ...
What youraddress:// looks like depends on the component you are connecting to:
opmn://asserver:6003
http://asserver:7200/dms0/Spy
ajp13://asserver:3301/dmsoc4j/Spy
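Putting it together, a complete call to dump, say, the ohs_server and JVM tables from the OPMN endpoint above might look like this (asserver and port 6003 are just the placeholders used throughout this post):

java -jar dms.jar -a "opmn://asserver:6003" -table ohs_server JVM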
If you are connecting to the OHS (Apache), be careful to allow remote access from the collector by editing the dms.conf file.
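For reference, on my installations the relevant part of dms.conf is a standard Apache Location block similar to the sketch below (hostnames are placeholders); adding the collector host to the Allow directive should be enough:

<Location /dms0>
    SetHandler dms-handler
    Order deny,allow
    Deny from all
    # add your collector here
    Allow from localhost collector.mydomain.com
</Location>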
Now that you can query DMS data, you need to store it somewhere.
Personally, my first attempt used dmstool -dump format=xml: I wrote a parser in PHP with the SimpleXML extension and did a lot of inserts into a MySQL database. After a few months, the data collected from tens of servers had become too much to maintain…
To avoid maintaining a DWH-grade database I investigated and found RRDTool. Now I wonder how I ever lived without it!
I then wrote an awk parser that reads the output of the dms.jar call and invokes an rrdtool update command.
I always use the dms.jar -table command, whose output always has the same format:
###SOF Mon Mar 02 17:01:19 CET 2009

---------------
TABLE1_Name
---------------

record1_metric1.name:  value  units
record1_metric2.name:  value  units
....

record2_metric1.name:  value  units
record2_metric2.name:  value  units
....

---
TABLE2_Name
---

record1_metric1.name:  value  units
record1_metric2.name:  value  units
....

record2_metric1.name:  value  units
record2_metric2.name:  value  units
....

##EOF
So I wrote an awk script that works for me.
Use it this way:
java -jar dms.jar ... | awk -f parse_output.awk
####################
# parse_output.awk #
####################

# pl() replaces every character outside [[:alnum:]_-] with an underscore
function pl(input) {
    return gensub("[^[:alnum:]_-]", "_", "G", input);
}

# get_rrd_path() returns the path where the rrd file should be placed.
# I should write a dedicated path for each dms table... I'll skip many of them
function get_rrd_path() {
    if (table == "mod_oc4j_destination_metrics")
        return sprintf("%s/%s/%s/%s.rrd", record["Host"], pl(table),
            pl(record["Name.value"]), pl(var));
    if (table == "mod_oc4j_mount_pt_metrics")
        return sprintf("%s/%s/%s/%s/%s.rrd", record["Host"], pl(table),
            pl(record["Destination.value"]), pl(record["Name.value"]), pl(var));
    if (table == "ohs_server")
        return sprintf("%s/%s/%s.rrd", record["Host"], pl(table), pl(var));
    if (table == "JVM")
        return sprintf("%s/%s/%s/%s.rrd", record["Host"], pl(table),
            pl(record["Process"]), pl(var));
    if (table == "opmn_process")
        return sprintf("%s/%s/%s/%s/%s/%s/%s/%s.rrd", record["Host"], pl(table),
            pl(record["iasInstance.value"]), pl(record["opmn_ias_component"]),
            pl(record["opmn_process_type"]), pl(record["opmn_process_set"]),
            pl(record["Name"]), pl(var));
    # default: one directory per table
    return sprintf("%s/%s/%s.rrd", record["Host"], pl(table), pl(var));
}

# process_record() actually does the dirty work of invoking the update script
function process_record() {
    # every record has a timeStamp.ts metric in milliseconds:
    # the first 10 digits are the unix timestamp in seconds
    ts = substr(record["timeStamp.ts"], 1, 10);
    for (var in record) {
        if (var != "timeStamp.ts" && record[var] ~ /^[[:digit:]]+$/) {
            # ever-increasing counters become DERIVE, everything else GAUGE
            if (var ~ /\.(count|completed|time)$/ || var == "responseSize.value")
                dstype = "DERIVE";
            else
                dstype = "GAUGE";
            rrdFile = sprintf("/path_to_data/%s", get_rrd_path());
            # update_metric_rrd is the shell script listed below!
            cmd = sprintf("/path_to_scripts/update_metric_rrd %s %s %d %d",
                rrdFile, dstype, ts, record[var]);
            system(cmd);
        }
    }
}

# parse_record() populates a hash array with all the metrics
# belonging to the current table record
function parse_record() {
    delete record
    while (! /^$/) {
        # as long as we are in this loop we are still inside the record:
        # the hash key is the dms metric name, i.e. $1 without the trailing ":"
        key = substr($1, 1, length($1) - 1)
        record[key] = $2
        getline
    }
    # process the record we have just parsed
    process_record();
}

BEGIN {
    # as long as started is 0, we have not reached the first table yet
    started = 0
}

# MAIN
{
    # jump over the header lines until the first table is reached
    if (started == 0) {
        while (! /^---/)
            getline
        started = 1
    }
    # looking for the next occurrence of a table; all tables start with:
    # ----------
    # table_name
    # ----------
    if (/^---/) {
        # the next row is the table name, then another dashed line follows
        getline table
        getline trash
        print " TABLE " table
        next
    }
    # a non-empty line at this point starts a new record
    # (a new table has already been handled by the previous "if")
    if (! /^$/)
        parse_record()
}

END { }
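To collect data continuously, the whole pipeline can be scheduled from cron on the collector. A hypothetical crontab entry, matching the 1800-second step used by update_metric_rrd below, could be:

# every 30 minutes, same step as the rrd files
0,30 * * * * java -jar /path_to_scripts/dms.jar -a "opmn://asserver:6003" -table ohs_server JVM | awk -f /path_to_scripts/parse_output.awk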
And this is the code for update_metric_rrd:
#!/bin/bash
RRDFILE=$1
DSTYPE=$2
TS=$3
VALUE=$4

rrdtool update $RRDFILE ${TS}:${VALUE}

# if the update fails, the rrd file probably does not exist yet:
# create it (starting one month in the past) and retry the update
if [ $? -ne 0 ] ; then
    DIR=`dirname $RRDFILE`
    [ -d $DIR ] || mkdir -p $DIR
    [ -f $RRDFILE ] || rrdtool create $RRDFILE -b "now-1month" -s 1800 \
        DS:metric:${DSTYPE}:7200:0:U \
        RRA:AVERAGE:0.5:1:672 \
        RRA:AVERAGE:0.5:4:1080 \
        RRA:AVERAGE:0.5:12:1460 \
        RRA:AVERAGE:0.5:48:1095 \
        RRA:MAX:0.5:4:1080 \
        RRA:MAX:0.5:12:1460 \
        RRA:MAX:0.5:48:1095 \
        RRA:LAST:0.5:1:672
    rrdtool update $RRDFILE ${TS}:${VALUE}
fi
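A quick sanity check on those archives: with a 1800-second step, RRA:AVERAGE:0.5:1:672 keeps two weeks of raw half-hour samples (672 × 30 min = 14 days); the 4-step RRAs keep 90 days at 2-hour resolution (1080 × 2 h); the 12-step ones keep a year at 6-hour resolution (1460 × 6 h); and the 48-step ones keep three years of daily values (1095 × 24 h). The heartbeat of 7200 seconds tolerates roughly three missed collections before RRDTool starts storing UNKNOWN.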
Once you have all your rrd files populated, it's easy to script automatic reporting. For example, you will probably want a graph of the request count served by your Apache cluster, along with its linear regression:
rrdtool graph - -s "end-${hours}hours" -e $end \
  -v "Requests Completed/sec" \
  -w 640 -h 240 --slope-mode \
  -t "HTTP Requests for www.ludovicocaldara.net" \
  DEF:1request_completed=/data/wwwserver1/ohs_server/request_completed.rrd:metric:AVERAGE \
  DEF:2request_completed=/data/wwwserver2/ohs_server/request_completed.rrd:metric:AVERAGE \
  CDEF:request_completed=1request_completed,2request_completed,+ \
  VDEF:slope=request_completed,LSLSLOPE \
  VDEF:lslint=request_completed,LSLINT \
  CDEF:reg=request_completed,POP,slope,COUNT,*,lslint,+ \
  LINE1:reg#666666:"Regression" \
  AREA:1request_completed#4040AA:"wwwserver1" \
  AREA:2request_completed#6666FF:"wwwserver2":STACK \
  > mygraph.png
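If you are wondering about the regression part: LSLSLOPE and LSLINT are rrdtool's built-in least-squares slope and intercept over the graphed interval, and the CDEF for reg rebuilds the fitted line point by point as slope × COUNT + intercept (POP just discards the metric value, which is only on the stack to keep the series aligned).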
This is the result:
[Graph: stacked request rate of wwwserver1 and wwwserver2 with the linear regression line]
OHHHHHHHHHHHH!!!! COOL!!!!
That’s all for DMS capacity planning. Stay tuned, more about rrdtool is coming!