Saving you 15 minutes: Dynamically Generating a PIVOT clause using SYS_CONNECT_BY_PATH

After looking at MGMT$ views and PIVOT queries I wondered if it was possible to dynamically generate the PIVOT clause between two dates. It turns out it is, with a bit of SQL.

As always I will layer this up, but if you can't be bothered to read it all then go to the end of the post, copy the SQL and give it a try.

The part of the PIVOT clause I want to generate is shown below. I only want to supply the start and end dates; the SQL needs to generate every date between the two dates specified, on a single line.

AVG (average) FOR RT IN ('2014-01' AS JAN_14 ,'2014-06' AS JUN_14 ,'2014-12' AS DEC_14)

Step 1:

First let's create our list of dates. The query simply returns every date between a start and end date:

SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL
      FROM
        (SELECT level - 1 lvl
        FROM dual
          CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
        )

04-JAN-06
05-JAN-06
06-JAN-06
07-JAN-06
08-JAN-06
09-JAN-06
10-JAN-06

Step 2:

Now that we have a set of dates, the next goal is to get them all onto a single line, and for that I will need a hierarchical query.  To help generate that hierarchy I add two extra columns, rownum and rownum+1, i.e. a parent/child chain of the dates starting at 04-JAN-06.

SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL,
        rownum col_num,
        rownum+1 col_num1
      FROM
        (SELECT level - 1 lvl
        FROM dual
          CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
        )

04-JAN-06		1	2
05-JAN-06		2	3
06-JAN-06		3	4
07-JAN-06		4	5
08-JAN-06		5	6

Step 3:

So we have a date and a hierarchy, but the PIVOT clause has the format 'date' AS alias, so I need to build that string for each row, including the AS keyword and the single quotes.  For the quotes I have used chr(039) from the ASCII table; this is easier than embedding ' in the query itself, because you end up adding so many quotes that it becomes unreadable after a while.  You will also see XXXX in the query: this is a placeholder for the ' AS ' text, because when Oracle generates the hierarchy I cannot have any whitespace in the column value (whitespace is used as the path separator later on).

SELECT date_col,
      chr(039)
      ||date_col
      ||chr(039)
      ||'XXXX'
      ||date_col TEST,
      rownum col_num,
      rownum+1 col_num1
    FROM
      (SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL,
        rownum col_num,
        rownum+1 col_num1
      FROM
        (SELECT level - 1 lvl
        FROM dual
          CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
        )
      )

04-JAN-06		'04-JAN-06'XXXX04-JAN-06	1	2
05-JAN-06		'05-JAN-06'XXXX05-JAN-06	2	3
06-JAN-06		'06-JAN-06'XXXX06-JAN-06	3	4
07-JAN-06		'07-JAN-06'XXXX07-JAN-06	4	5
08-JAN-06		'08-JAN-06'XXXX08-JAN-06	5	6
09-JAN-06		'09-JAN-06'XXXX09-JAN-06	6	7
10-JAN-06		'10-JAN-06'XXXX10-JAN-06	7	8

Step 4:

Now we add in the hierarchy query:

SYS_CONNECT_BY_PATH is used on the column TEST, which is our concatenated date string, and a REPLACE outside the SYS_CONNECT_BY_PATH swaps the XXXX placeholder for ' AS '.  At the bottom of the query I start with the start date and build the hierarchy using the two rownum columns.

SELECT date_col,
    TEST,
    col_num,
    col_num1,
    REPLACE(sys_connect_by_path(TEST,' '),'XXXX',' AS ') "PATH"
  FROM
    (SELECT date_col,
      chr(039)
      ||date_col
      ||chr(039)
      ||'XXXX'
      ||date_col TEST,
      rownum col_num,
      rownum+1 col_num1
    FROM
      (SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL,
        rownum col_num,
        rownum+1 col_num1
      FROM
        (SELECT level - 1 lvl
        FROM dual
          CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
        )
      )
    )
    START WITH date_col      ='04-JAN-06'
    CONNECT BY prior col_num1=col_num

For readability I have only put the PATH column below. If you execute this query in full you will see all the columns.

'04-JAN-06' AS 04-JAN-06
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06 '07-JAN-06' AS 07-JAN-06
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06 '07-JAN-06' AS 07-JAN-06 '08-JAN-06' AS 08-JAN-06 ...
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06 '07-JAN-06' AS 07-JAN-06 '08-JAN-06' AS 08-JAN-06 ...
'04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06 '07-JAN-06' AS 07-JAN-06 '08-JAN-06' AS 08-JAN-06 ...

Step 5:

Finally I add the starting piece of my PIVOT clause and concatenate it with the PATH column, then add a WHERE clause at the end to select the highest number in the hierarchy, or for simplicity's sake the number of days between the two dates.  This query can then be used to generate any date-related PIVOT clause; all you need to do is edit the date values in the original query.

The output is (the full query follows below):

AVG (average) FOR RT IN  '04-JAN-06' AS 04-JAN-06 '05-JAN-06' AS 05-JAN-06 '06-JAN-06' AS 06-JAN-06 '07-JAN-06' AS 07-JAN-06 '08-JAN-06' AS 08-JAN-06 '09-JAN-06' AS 09-JAN-06 '10-JAN-06' AS 10-JAN-06

SELECT 'AVG (average) FOR RT IN '
    ||PATH
  FROM
    (SELECT date_col,
      TEST,
      col_num,
      col_num1,
      REPLACE(sys_connect_by_path(TEST,' '),'XXXX',' AS ') "PATH"
    FROM
      (SELECT date_col,
        chr(039)
        ||date_col
        ||chr(039)
        ||'XXXX'
        ||date_col TEST,
        rownum col_num,
        rownum+1 col_num1
      FROM
        (SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL,
          rownum col_num,
          rownum+1 col_num1
        FROM
          (SELECT level - 1 lvl
          FROM dual
            CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
          )
        )
      )
      START WITH date_col      ='04-JAN-06'
      CONNECT BY prior col_num1=col_num
    )
  WHERE col_num=7
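One small note if you want to paste the generated string straight into a real PIVOT: the target clause at the top of this post uses a comma-separated IN list wrapped in parentheses, whereas the space separator passed to SYS_CONNECT_BY_PATH above produces a space-separated list. The variant below is my sketch rather than part of the original post; it switches to a comma separator (the column value contains no commas, so SYS_CONNECT_BY_PATH allows it), trims the leading separator and adds the parentheses. For a real PIVOT the alias after AS also needs to be a valid identifier (like JAN_14 in the example at the top) or be double quoted.

SELECT 'AVG (average) FOR RT IN ('
    ||LTRIM(REPLACE(sys_connect_by_path(TEST,','),'XXXX',' AS '),',')
    ||')' PIVOT_CLAUSE
  FROM
    (SELECT date_col,
      chr(039)
      ||date_col
      ||chr(039)
      ||'XXXX'
      ||date_col TEST,
      rownum col_num,
      rownum+1 col_num1
    FROM
      (SELECT to_date('04-01-2006','dd-mm-yyyy')+lvl DATE_COL,
        rownum col_num,
        rownum+1 col_num1
      FROM
        (SELECT level - 1 lvl
        FROM dual
          CONNECT BY level <= (to_date('10-01-2006','dd-mm-yyyy') - to_date('04-01-2006','dd-mm-yyyy'))+ 1
        )
      )
    )
  -- keep only the last, complete row of the hierarchy
  WHERE col_num = 7
  -- starting from col_num = 1 avoids relying on the session NLS_DATE_FORMAT
  START WITH col_num = 1
  CONNECT BY prior col_num1 = col_num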

Saving you 15 minutes: What a lovely view:- MGMT$DAILY_METRICS Part 2

So to start with, let's look at a query to check database growth over a period of time.

Here is our starting query:

SELECT target_name, key_value, TO_CHAR(ROLLUP_TIMESTAMP,'YYYY-MM') RT, round(average,0) average
FROM MGMT$METRIC_DAILY 
WHERE 
METRIC_NAME='tbspAllocation' AND
METRIC_COLUMN='spaceAllocated' AND
TARGET_NAME='DB1'

This will tell you, for each day, the average allocated space of each tablespace.  Note that the TO_CHAR format rolls the date up to the month; why will become clear shortly.
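As a quick illustration (not part of the original post), every day in the same month ends up with the same RT value:

SELECT TO_CHAR(DATE '2014-01-17','YYYY-MM') RT FROM dual
-- returns '2014-01', the same value as any other day in January 2014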

SELECT target_name, 
key_value TABLESPACE_NAME, 
round(JAN_14,0) JAN_14, 
round(JUN_14,0) JUN_14, 
round(DEC_14,0) DEC_14
             FROM (
             SELECT target_name, key_value, TO_CHAR(ROLLUP_TIMESTAMP,'YYYY-MM') RT, round(average,0) average
             FROM MGMT$METRIC_DAILY 
             WHERE 
             METRIC_NAME='tbspAllocation' AND
             METRIC_COLUMN='spaceAllocated' AND
             TARGET_NAME='DB1'
                  )
PIVOT
(
AVG (average) FOR RT IN ('2014-01' AS JAN_14 ,'2014-06' AS JUN_14 ,'2014-12' AS DEC_14)
)

The next query introduces a pivot.  This does two things: it displays things nicely, but more importantly it averages out the averages.  For any given month a tablespace may have a number of different recorded sizes, but we really just want the average for that month, not for each day.  The pivot is specified on the date column for three intervals: January, June and December 2014.

FOR RT IN ('2014-01' AS JAN_14 ,'2014-06' AS JUN_14 ,'2014-12' AS DEC_14)

This bit can be as long or as short as you like, but you need to edit it as you see fit; it's not possible to generate it dynamically within the query itself (although it might be with PIVOT XML).

The final query is:

SELECT 
target_name, 
SUM(JAN_14)/1024 JAN_TOTAL,
SUM(JUN_14)/1024 JUN_TOTAL,
SUM(DEC_14)/1024 DEC_TOTAL
         FROM (
                SELECT target_name, 
                key_value TABLESPACE_NAME, 
                round(JAN_14,0) JAN_14, 
                round(JUN_14,0) JUN_14, 
                round(DEC_14,0) DEC_14
                   FROM (
                        SELECT target_name, key_value, TO_CHAR(ROLLUP_TIMESTAMP,'YYYY-MM') RT, round(average,0) average
                        FROM MGMT$METRIC_DAILY 
                        WHERE 
                        METRIC_NAME='tbspAllocation' AND
                        METRIC_COLUMN='spaceAllocated' AND
                        TARGET_NAME='DB1'
                        )
PIVOT
(
AVG (average) FOR RT IN ('2014-01' AS JAN_14 ,'2014-06' AS JUN_14 ,'2014-12' AS DEC_14)
))
GROUP BY target_name

This just takes the monthly tablespace information for each tablespace, summarises it, and divides it by 1024 to put the totals in MB rather than KB.  The output is as follows:

TARGET_NAME        JAN_TOTAL    JUN_TOTAL    DEC_TOTAL
DB1                    11302        12060        12306

The great thing of course is that you can do this for any target across the estate (as long as you have the data going back to that time, otherwise you will get a NULL in the column).
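As a sketch of that idea (my variation, not from the post above): drop the TARGET_NAME filter and group by target, and you get one row per database across the estate.

SELECT target_name,
       SUM(JAN_14)/1024 JAN_TOTAL,
       SUM(JUN_14)/1024 JUN_TOTAL,
       SUM(DEC_14)/1024 DEC_TOTAL
  FROM (
        SELECT target_name, key_value, TO_CHAR(ROLLUP_TIMESTAMP,'YYYY-MM') RT, round(average,0) average
        FROM MGMT$METRIC_DAILY
        WHERE
        METRIC_NAME='tbspAllocation' AND
        METRIC_COLUMN='spaceAllocated'
       )
PIVOT
(
AVG (average) FOR RT IN ('2014-01' AS JAN_14 ,'2014-06' AS JUN_14 ,'2014-12' AS DEC_14)
)
GROUP BY target_name
ORDER BY target_name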


Saving you 15 minutes: What a lovely view; MGMT$DAILY_METRICS Part 1

I have been in a couple of meetings recently where people have been talking about capacity and storage management.  If you are running OEM then MGMT$DAILY_METRICS should capture everything you would want to know about any database 'trends'.  A quick query against MGMT$DAILY_METRICS shows the number of different 'space' related metrics which can be gathered.  If we combine this with a bit of thought, analysis and maybe a dash of analytics, we could hopefully develop a set of scripts which cover all types of storage and capacity queries.

Next part coming soon.
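For reference, a list like the one below can be pulled with a query along these lines (a sketch, assuming the TARGET_TYPE, METRIC_LABEL and COLUMN_LABEL columns on MGMT$METRIC_DAILY, and that the unit appears in the column label):

-- Sketch only: list the 'space' related daily metrics available per target type
SELECT DISTINCT target_type, metric_label, column_label
FROM MGMT$METRIC_DAILY
WHERE column_label LIKE '%(GB)%'
OR column_label LIKE '%(KB)%'
ORDER BY target_type, metric_label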

TARGET_TYPE           METRIC                      METRIC COLUMN
Host                  Storage Summary             ASM Storage Overhead (GB)
Host                  Storage Summary             Databases Storage Used (GB)
Host                  Storage Summary             Disk Storage Allocated (GB)
Host                  Storage Summary             Total Storage Used (GB)
Host                  Storage Summary             ASM Storage Unallocated (GB)
host                  Storage Summary             Total Storage Allocated (GB)
host                  Storage Summary             Volumes Storage Allocated (GB)
host                  Storage Summary             Volumes Storage Unallocated (GB)
host                  Storage Summary             ASM Storage Allocated (GB)
host                  Storage Summary             Local File Systems Storage Free (GB)
host                  Storage Summary             Total Storage Free (GB)
host                  Storage Summary             Writeable NFS Storage Used (GB)
host                  Storage Summary             Databases Storage Free (GB)
host                  Storage Summary             Volumes Storage Overhead (GB)
host                  Storage Summary             Disk Storage Unallocated (GB)
host                  Storage Summary             Local File Systems Storage Used (GB)
host                  Storage Summary             Total Storage Unallocated (GB)
host                  Storage Summary             Writeable NFS Storage Free (GB)
host                  Storage Summary             Total Storage Overhead (GB)
oracle_database       Database Size               Allocated Space(GB)
oracle_database       Database Size               Used Space(GB)
oracle_database       Archive Area                Total Archive Area (KB)
oracle_database       Archive Area                Free Archive Area (KB)
oracle_database       Dump Area                   Total Dump Area (KB)
oracle_database       Archive Area                Archive Area Used (KB)
oracle_database       Dump Area                   Dump Area Used (KB)
oracle_database       Dump Area                   Free Dump Area (KB)
osm_cluster           ASM Volumes                 Disk Group Allocated Space (GB)
osm_cluster           ASM Volumes                 Size (GB)
osm_cluster           ASM Cluster File System     Size (GB)
osm_cluster           ASM Cluster File System     Disk Group Allocated Space (GB)
osm_cluster           ASM Cluster File System     Used (GB)
osm_cluster           ASM Cluster File System     Free (GB)
osm_instance          ASM Cluster File System     Size (GB)
osm_instance          ASM Cluster File System     Free (GB)
osm_instance          ASM Volumes                 Disk Group Allocated Space (GB)
osm_instance          ASM Cluster File System     Used (GB)
osm_instance          ASM Volumes                 Size (GB)
osm_instance          ASM Cluster File System     Disk Group Allocated Space (GB)
rac_database          Database Size               Used Space(GB)
rac_database          Database Size               Allocated Space(GB)
rac_database          Archive Area                Free Archive Area (KB)
rac_database          Archive Area                Archive Area Used (KB)
oracle_emd            Agent Process Statistics    Agent Virtual Memory Utilization (KB)
oracle_emd            Agent Process Statistics    Agent Resident Memory Utilization (KB)
oracle_emd            Agent Process Statistics    Virtual Memory Utilization (KB)
weblogic_j2eeserver   JVM Memory Pools            Memory Pool - Peak Usage (KB)
weblogic_j2eeserver   JVM Memory Usage            JVM Memory - Non-Heap Memory Usage (KB)
weblogic_j2eeserver   JVM Memory Usage            JVM Memory - Heap Memory Usage (KB)
oracle_emd            Agent Process Statistics    Resident Memory Utilization (KB)
rac_database          Archive Area                Total Archive Area (KB)


Saving you 15 minutes: LCMP Compliance ‘Required Data Available’ and Forced Configuration Refresh

I think this may be a bug and unless you’re really into the details of the Life Cycle Management Pack you may want to look away now.

If you go into your compliance results you will see a little flag, Required Data Available.  This tells you whether your targets have the relevant configuration extensions deployed to allow their compliance score to be fully evaluated.  Ever since I upgraded my Agents from 12.1.0.3 to 12.1.0.4 this has been set to No, and I think I have found out why.

[Screenshot: SCORES1]

If you are using any Oracle-supplied rules you will need to ensure the oracle_security SNAPSHOT_TYPE is deployed to your targets; this is normally done via monitoring templates and 'other collected items'.  From 12.1.0.1 to 12.1.0.3 the SNAPSHOT_TYPE is oracle_security; from 12.1.0.4 it is called oracle_security_inst2.  You can see all your SNAPSHOT_TYPEs in MGMT$ECM_CURRENT_SNAPSHOTS.
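A quick way to check which snapshot type each target currently has is a query along these lines (a sketch: it assumes the TARGET_NAME and SNAPSHOT_TYPE columns on MGMT$ECM_CURRENT_SNAPSHOTS):

-- Sketch only: which security snapshot type does each target currently have?
SELECT target_name, snapshot_type
FROM MGMT$ECM_CURRENT_SNAPSHOTS
WHERE snapshot_type IN ('oracle_security','oracle_security_inst2')
ORDER BY target_name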

I am guessing that when you upgrade your agent it should automatically deploy the oracle_security_inst2 SNAPSHOT_TYPE.  However it doesn’t.

If you go into ANY target which has the Required Data Available flag set to No and open its Configuration, you will see the date that the configuration was last refreshed.  Looking at this, it seems the configuration is still being collected at regular daily intervals.

[Screenshot: SCORES2]

It seems that the only way to resolve this is to manually refresh the target configuration using the 'Refresh' button on the target configuration page.  Once this has been done you can look in MGMT$ECM_CURRENT_SNAPSHOTS and see that oracle_security_inst2 has been deployed and, more importantly, your Required Data Available flag is set to Yes.

Unfortunately it doesn’t seem you can use emcli to script a manual target refresh across your estate.

Saving you 15 minutes: Identifying Legacy SAP Users OPS$ Accounts with OEM

It seems that creating users is much easier than removing them!!

If you are running SAP you will probably have multiple SAP tiers running the different SAP products.  SAP environments require OPS$ accounts to be created, and those accounts are given high levels of database privileges.  Like any ERP application, it is likely you will be cloning these environments between the tiers, so you can find yourself with a lot of OPS$ accounts with high levels of database privileges which should be removed.  Luckily for the DBAs, the OPS$ accounts have a naming convention in line with the SAP database, therefore all you need to do is find where the OPS$ username doesn't match the DB name.

If you are using OEM you can identify them with the following query:

select target_name, username
from mgmt$db_users 
where username like 'OPS$%' 
and instr(username,target_name,1,1) = 0
and length(target_name)=3

This query works on the following assumptions:

  • Your SAP database names are 3 characters long.
  • INSTR searches for the target_name within the username column; if it can't find that pattern it returns 0 and the row is reported, i.e. we have an OPS$ account in a database which doesn't match the convention, hence it shouldn't be there (see the worked example below).
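As a worked example of that INSTR test (the account names are hypothetical):

-- In database DB1, a cloned account OPS$PRDADM does not contain the string 'DB1',
-- so INSTR returns 0 and the row is reported; OPS$DB1ADM does contain it, so the
-- row is filtered out.
SELECT INSTR('OPS$PRDADM','DB1',1,1) cloned_account,   -- returns 0
       INSTR('OPS$DB1ADM','DB1',1,1) matching_account  -- returns 5
FROM dual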