10 Useful DBA Commands & Shortcuts

Here are some useful DBA commands and shortcuts that may be unfamiliar to a budding DBA but are well worth knowing.

1. When you see this:

Users-MacBook-Pro:~ User$ ssh -i ./MEAppsMgr.pem root@54.172.55.230

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
c8:08:96:e5:2f:50:57:08:40:20:b3:3c:b5:9a:e1:67. Please contact
your system administrator. Add correct host key in /Users/User/.ssh/known_hosts
to get rid of this message. Offending RSA key in /Users/User/.ssh/known_hosts:124
RSA host key for 54.172.55.230 has changed and you have requested strict
checking. Host key verification failed.

Why not do this?

Users-MacBook-Pro:~ User$ vi +124 /Users/User/.ssh/known_hosts

Go straight to the line to edit/remove.
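Alternatively, a couple of one-liners do the same job without opening an editor at all (a sketch assuming OpenSSH's ssh-keygen and GNU sed; on macOS the in-place flag is `sed -i ''`):

```shell
# Remove every cached key for the host; ssh-keygen writes a backup
# to known_hosts.old before editing:
ssh-keygen -R 54.172.55.230

# Or delete just the offending line (124 in the warning above):
sed -i '124d' ~/.ssh/known_hosts
```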

2. Find files for the last few minutes:

find . -name '*.log' -mmin -10 -exec ls -l {} \;

-mmin – modified in the last x minutes
-cmin – file status (inode data) changed in the last x minutes
-amin – accessed in the last x minutes
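As a quick sketch (using a throwaway directory), a freshly created log file shows up immediately:

```shell
# Work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"
touch app.log

# Anything matching *.log modified within the last 10 minutes is listed
find . -name '*.log' -mmin -10 -exec ls -l {} \;
```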

3. Instead of typing:

SQL> @$ORACLE_HOME/rdbms/admin/

why not use the SQL*Plus shorthand, where ? expands to $ORACLE_HOME…

SQL> @?/rdbms/admin/

4. Use EZConnect to connect to databases – the general form is user[/password]@host[:port]/service_name:

Users-MacBook-Pro:~ User$ sqlplus user@gc12c-db.e-dba.com:1521/oem.e-dba.com

5. Remove jobs from DBA_JOBS even if you're not the owner:

Use DBMS_IJOB.REMOVE.

N.B. – Please don't run this in production, or anywhere else you don't want jobs removed – the loop below deletes every job!

BEGIN
  FOR e IN (SELECT * FROM dba_jobs)
  LOOP
    SYS.DBMS_IJOB.REMOVE(e.job);
  END LOOP;
END;
/

6. Miscellaneous loops in shell:

[root@host]# for s in {1..10}
> do
> echo $s
> done
1
2
3
4
5
6
7
8
9
10

Or

[root@host]# END=5
[root@host]# for L in `seq 1 $END`
> do
> echo $L
> done
1
2
3
4
5
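Worth noting: `{1..10}` is a bash-ism and `seq` is a GNU tool, so neither is guaranteed in a bare POSIX shell. A portable sketch of the same loop:

```shell
# POSIX while loop; $((...)) arithmetic works in any POSIX shell
i=1
while [ "$i" -le 5 ]; do
  echo "$i"
  i=$((i + 1))
done
```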

7. Send attachments from shell scripts:

uuencode weeklyreport_${DDMM}.html weeklyreport_${DDMM}.html | mailx -s "Previous Week's Report" user@emailaddress.com

8. Run host commands from SQLPLUS:

select 'host grep -i '''||username||''' /etc/passwd' from dba_users
where username = 'JOHN'
SQL> /

'HOSTGREP-I'''||USERNAME||'''/ETC/PASSWD'
---------------------------------------------------------
host grep -i 'JOHN' /etc/passwd

SQL> host grep -i 'JOHN' /etc/passwd
john:x:2468:2468:John Smith:/home/john:/bin/bash

SQL>

9. Turn numbering on in vi:

While vi’ing a file:

:se number

Turn it off:

:se nonumber

10. Search contents of JAR files:

Looking in JAR files for the case-insensitive presence of 'oracle':

[oracle@hostname ~]$ find /home/oracle/ -type f -name '*.jar' -print0 | xargs -n1 -0i sh -c 'jar tf "{}" | grep -qi oracle && echo "{}"'
/home/oracle/osbws_install.jar
[oracle@hpsmprd ~]$
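The same `find -print0 | xargs -0` pattern works for any per-file test (note that `-I{}` is the modern spelling of the deprecated `-i`). Here's a sketch on plain-text files in a throwaway directory, with plain `grep` standing in for `jar tf | grep`:

```shell
# Hypothetical demo files; only the file containing "oracle" is echoed
demo=$(mktemp -d)
echo "Oracle client config" > "$demo/a.txt"
echo "nothing here"         > "$demo/b.txt"

find "$demo" -type f -name '*.txt' -print0 |
  xargs -0 -I{} sh -c 'grep -qi oracle "{}" && echo "{}"'
```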

Saving you 15 minutes: Action Required!! Drilling into automated upgrade jobs in 12c OEM

Here is hopefully a handy tip for anyone running automated database upgrades via 12c OEM.

When you are running an automated upgrade job you may need to action something manually (ironic, I know) – for example, running some root commands. Depending on which view of the automated upgrade job you are in, the thing you are trying to action can be easier or harder to find.

If you are watching your job you will be in this screen, which is the screen that the job defaults to after you have launched it:

AutoPro1

Now when you see "Action Required" and you click on the link, it takes you to another screen, where you click on another link, and so on and so on until you reach the action required. This makes the process feel quite cumbersome. What's happening is that you're drilling down into the individual job steps.

AutoPro2

Here you can see that you click through 4 different pages.

AutoPro3

You will finally get here:

AutoPro4

However, once you have launched your provisioning job, if you come out of that screen and go to the main summary screen, i.e.

Enterprise > Provisioning > Procedure Activity, you will see this:

AutoPro5

Here you see all the jobs and their summary status. When 'Action Required' is highlighted at this level, clicking on it takes you straight to the screen you needed, rather than through endless drill-downs.

Strange but true…


Upgrading a Single PDB from 12.1.0.1 to 12.1.0.2 – with Apex involved

I was giving a seminar recently on 12c and discussing multi-tenant with some customers, and wanted to clear up a few of the issues around patching.

One of the benefits of the 12c Multi-tenant option (and in my mind possibly the biggest benefit) is that of patching: being able to consolidate a bunch of databases down into a CDB and then patch them one by one. This differs substantially from the "grid" architecture we talked about in 10g and 11g, under which a whole bunch of databases were consolidated into a RAC cluster or via schema-based consolidation.

What I’m going to show here is how easy it is to migrate a single PDB from 12.1.0.1 to 12.1.0.2 using the unplug/plug method and along the way show how we can quickly fail back as well.

I think i’ve covered multi-tenant in enough detail elsewhere to explain the concepts there, so I’ll assume some familiarity at this point.

In my example 12.1.0.1 has been running (and patched) on the server (non-RAC but using ASM) and a new 12.1.0.2 container database has been created on the same server. I've also uninstalled Apex at the container level and instead have it running individually in the PDB, allowing each PDB to have a different Apex version if required (there's a reason I did this... read on).

At container level


SQL> show con_name
CON_NAME
——————————
CDB$ROOT

SQL> SELECT VERSION_NO FROM APEX_RELEASE;
SELECT VERSION_NO FROM APEX_RELEASE
*
ERROR at line 1:
ORA-00942: table or view does not exist



So we have no Apex installed at CDB level, let’s check what we have at PDB level


SQL> alter session set container = PDB1;


Session altered.

 

SQL> show con_name
CON_NAME
——————————
PDB1


SQL> SELECT VERSION_NO FROM APEX_RELEASE;
VERSION_NO
————————————————————————
4.0.2.00.07



So in this PDB I’ve got an (admittedly old) version of Apex running. If I look at the listener I can see that this PDB is being served from the listener just fine (output truncated for brevity)


[oracle@upgradelab ~]$ lsnrctl status

LSNRCTL for Linux: Version 12.1.0.1.0 – Production on 24-MAR-2015 08:32:29

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
————————

Services Summary…
Service "UPGRD.e-dba.com" has 1 instance(s).
  Instance "UPGRD", status READY, has 1 handler(s) for this service...
Service "UPGRDXDB.e-dba.com" has 1 instance(s).
  Instance "UPGRD", status READY, has 1 handler(s) for this service...
Service "pdb1.e-dba.com" has 1 instance(s).
  Instance "UPGRD", status READY, has 1 handler(s) for this service...
The command completed successfully

At this stage I now have a new CDB (with a SID of NEW) running on the server, and I've created a PDB called DUMMY in it (it's also patched with the Jan 2015 patch set), and a listener on a new port (again output truncated for brevity)

[oracle@upgradelab trace]$ lsnrctl status
LSNRCTL for Linux: Version 12.1.0.2.0 – Production on 24-MAR-2015 09:56:04
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1522)))
Services Summary…
Service "NEW.e-dba.com" has 1 instance(s).
  Instance "NEW", status READY, has 1 handler(s) for this service...
Service "NEWXDB.e-dba.com" has 1 instance(s).
  Instance "NEW", status READY, has 1 handler(s) for this service...
Service "dummy.e-dba.com" has 1 instance(s).
  Instance "NEW", status READY, has 1 handler(s) for this service...
The command completed successfully

And just confirming the new CDB has Apex uninstalled…


 SQL> show con_name
 CON_NAME
 ——————————
 CDB$ROOT
 SQL> SELECT VERSION_NO FROM APEX_RELEASE;
 SELECT VERSION_NO FROM APEX_RELEASE
 *
ERROR at line 1:
ORA-00942: table or view does not exist



Now let's start the process of migrating the PDB to the new container. First I'm going to connect to the 12.1.0.1 instance and close the PDB, then I'm going to unplug it, which creates a small XML file describing the PDB. I'm then going to drop it and check that it has been deregistered from the listener.

 

[oracle@upgradelab ords]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Tue Mar 24 10:26:21 2015
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


SQL> show pdbs
    CON_ID CON_NAME   OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED   READ ONLY  NO
3 PDB1   READ WRITE NO


SQL> alter pluggable database pdb1 close;
Pluggable database altered.


SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';
Pluggable database altered.


SQL> drop pluggable database pdb1;
Pluggable database dropped.


SQL> show pdbs
    CON_ID CON_NAME   OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED   READ ONLY  NO


NOTE: I didn't use the "including datafiles" clause, so I still have a back-out option for my upgrade... which is simply to plug the existing database and datafiles back into the 12.1.0.1 instance.

Now to plug the database back into the new 12.1.0.2 instance… here I’m going to use the COPY keyword (even though it’s the default) so I get new copies of the datafiles. This means I’m bound by my IO subsystem, but the original datafiles are unaffected, leaving me a back out option.

SQL> create pluggable database pdb2 using '/home/oracle/pdb1.xml'
  2  COPY file_name_convert = ('UPGRD', 'NEW');
Pluggable database created.
SQL> show pdbs
    CON_ID CON_NAME   OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED   READ ONLY  NO
3 DUMMY   READ WRITE NO
4 PDB2   MOUNTED


Now that it's been created, let's try to open it...


SQL> alter pluggable database pdb2 open;


Warning: PDB altered with errors.

SQL> show pdbs
    CON_ID CON_NAME   OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED   READ ONLY  NO
3 DUMMY   READ WRITE NO
4 PDB2   MIGRATE    YES

There you go, the first thing I wanted to clear up. When you migrate a single PDB using the unplug/plug method you still need to run the upgrade script for that PDB as below…


[oracle@upgradelab admin]$ pwd
/u00/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin
[oracle@upgradelab admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c 'PDB2' catupgrd.sql
Argument list for [catctl.pl]
SQL Process Count     n = 0
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
…Output truncated…

Serial   Phase #:72 Files: 1     Time: 7s    PDB2
Serial   Phase #:73 Files: 1     Time: 0s    PDB2
——————————————————
Phases [0-73]         End Time:[2015_03_24 11:07:07]
Container Lists Inclusion:[PDB2] Exclusion:[NONE]
——————————————————

Grand Total Time: 1985s PDB2

LOG FILES: (catupgrdpdb2*.log)

Upgrade Summary Report Located in:
/u00/app/oracle/product/12.1.0.2/dbhome_1/cfgtoollogs/NEW/upgrade/upg_summary.log

Total Upgrade Time:          [0d:0h:33m:5s]

     Time: 1992s For PDB(s)

Grand Total Time: 1992s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:33m:12s]

That's 33 minutes to upgrade my PDB! Anyone who's done a 12.1.0.2 upgrade (or, for that matter, a 12.1.0.1 upgrade from 11.2) will know this seems quite a long time (I was only on a 2 vCPU VM). So why did it take so long?

SQL> alter pluggable database pdb2 open;
Pluggable database altered.
SQL> show pdbs
    CON_ID CON_NAME   OPEN MODE  RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED   READ ONLY  NO
3 DUMMY   READ WRITE NO
4 PDB2   READ WRITE NO
SQL> SELECT VERSION_NO FROM APEX_RELEASE;
VERSION_NO
------------------
4.2.5.00.08


So whilst the upgrade is painless, the same rules apply as with a non-CDB upgrade: the more components you have, the longer it's going to take. In this case having Apex there meant my Apex got upgraded too (a little unexpectedly), and that took 16 of the 33 minutes when I looked back at the timings. If you're interested, it's actually the script $ORACLE_HOME/rdbms/admin/cmpupmsc.sql that does the upgrade, with this section...

Rem =====================================================================
Rem Upgrade Application Express
Rem =====================================================================

Rem Set identifier to APEX for errorlogging
Rem SET ERRORLOGGING ON TABLE SYS.REGISTRY$ERROR IDENTIFIER 'APEX';

SELECT dbms_registry_sys.time_stamp_display('APEX') AS timestamp FROM DUAL;
SELECT dbms_registry_sys.dbupg_script('APEX') AS dbmig_name FROM DUAL;
@&dbmig_file
SELECT dbms_registry_sys.time_stamp('APEX') AS timestamp FROM DUAL;


Before anyone comments: yes, I know that 12.1 only supports Apex 4.2.5 and above, so perhaps I should have expected the Apex upgrade, but the separation of "user" and "system" parts of the database in multi-tenant appears to hold true only for genuine user objects, not for Oracle components such as Apex.
Anyway, the point is not that Apex got upgraded (though that's worth pointing out, as it once again reinforces the need to drop the components you aren't using before an upgrade), but to show how easy it is to upgrade a single PDB inside multi-tenant, and that the back-out option of simply doing a "create pluggable database XXX from YYYY" back in your original instance exists at any point.





A VirtualBox OVM Model

I have a task coming up at a customer site to install a version of Oracle Virtual Machine which is relatively new, and which I haven’t installed before. OVM, as it’s more often known within fashionable circles, is Oracle’s virtualisation product, based on the tried and trusted Xen hypervisor. It normally involves installing software across at least two different bare metal machines (hosts for VMs), and a third machine which is often established as a virtual, used to run the management software. The two (or more) VM hosts are typically connected to a storage array.

I like to be as prepared as I possibly can be before visiting a customer to carry out any piece of work and my preference for a new product or version would be to try it out on a testbed first, to be forewarned of any new gotchas and work through any wrinkles (and because it’s always nice to practise). However, I don’t usually have two spare bare metal servers handy to transform into a test OVM 3.2.2 cluster, or indeed a SAN.

Fortunately there’s an answer, in the form of a different Oracle virtualisation product – VirtualBox. This is a very popular and useful piece of software that’s typically used to allow one or two virtual machines – perhaps running different operating systems, a different architecture or a particular set of software – to be run on a desktop PC or laptop. It turns out that with a bit of effort and enough memory – my company laptop has recently been furnished with 16G of RAM for this very purpose – it’s quite possible to set up a modest OVM cluster and manager, all installed as virtual machines. I also set up a small Ubuntu VM as an iSCSI target with LUNs presented to the OVM hosts; a virtual SAN.

It’s an effective way to rehearse an installation, and to provide a testbed for testing operations like server pool migrations. I have done this for two versions of OVM now, and each time it has proved very useful. Admittedly I did have to cheat a little – I don’t quite have enough memory to go round for the minimum requirements specified for each of the necessary components, but I found that my test systems worked well enough for the purposes they were assigned to.

Obviously a system like this, realised entirely in VirtualBox, could only be valuable as a model. I like to call it ‘meta-virtualisation’.

 

Saving you 15 minutes: 12c OEM BI Violation Reports Custom and Standard Rules

I’m developing some 12c OEM LCMP BI reports; for these reports I would like people to be able to select a target and see:

  • Target Name
  • Description of the Compliance Rule
  • Severity of the Rule
  • Violation Cause

When you're drilling into violations the OEM dashboards are good, and they do let you see the details. However, you can only see a sub-section of the details at any one time, and you have to keep drilling in; you don't get to see everything all at once. It's great for one-off queries, but if you want a list of rules and violations you need something else.

To get around this I wanted to create a BI compliance report to show all of this, and in this blog I will go through writing the underlying query.

For this query we need four different views:

  • MGMT$TARGET (obviously)
  • MGMT$COMPLIANCE_STANDARD_RULE (Rule descriptions)
  • MGMT$CSR_CURRENT_VIOLATION (Violation Information)
  • MGMT$CSR_VIOLATION_CONTEXT (Violation Details)

The challenge is that MGMT$CSR_VIOLATION_CONTEXT contains useful data and metadata.  If we take the standard rule:

Access to DBA_SYS_PRIVS

If that rule has hit a violation, MGMT$CSR_VIOLATION_CONTEXT will record the privilege and the grantee, for example DMSYS and SELECT. I don't really care about which privilege that user has; I just want to know which user.

In the table you will see:

COLUMN_STR_VALUE       COLUMN_NAME
DMSYS                  grantee
SELECT                 privilege

For custom compliance rules it stores ATTR, VALUE and INFO for each violation.  So for a custom rule which looks at the status of the SYSTEM account and ensures it is locked, the information that is stored is:

COLUMN_STR_VALUE       COLUMN_NAME
ACCOUNT_STATUS         ATTR
OPEN                   VALUE
attrvalue              INFO

Here is an example:

select 
a.target_name target_name, 
b.description description,
b.severity,
d.column_str_value
from
MGMT$TARGET a,
MGMT$COMPLIANCE_STANDARD_RULE b,
MGMT$CSR_CURRENT_VIOLATION c,
MGMT$CSR_VIOLATION_CONTEXT d
where
a.target_guid=c.target_guid and
b.rule_guid=c.rule_guid and
c.violation_guid=d.violation_guid and
a.target_type='oracle_database' and
a.target_name= 'TPB1'

BI1

So to get around this you need to filter out certain COLUMN_NAME values from the MGMT$CSR_VIOLATION_CONTEXT view. This also reinforces the point that any configuration extensions you create really need to return a single-column row value, as that reduces the amount of filtering you have to do here.

select 
a.target_name target_name, 
b.description description,
b.severity,
d.column_str_value
from
MGMT$TARGET a,
MGMT$COMPLIANCE_STANDARD_RULE b,
MGMT$CSR_CURRENT_VIOLATION c,
MGMT$CSR_VIOLATION_CONTEXT d
where
a.target_guid=c.target_guid and
b.rule_guid=c.rule_guid and
c.violation_guid=d.violation_guid and
a.target_type='oracle_database' and
d.column_name not in ('ATTR','INFO','privilege','limit') and
a.target_name= 'TPB1'