Saving you 15 minutes: Convert your SYSADMINs to using OEM

I am currently assisting a client with a bit of OS housekeeping, and one of the jobs is to double-check which OS users have been granted access to the OS dba group.  Now the estate is pretty large, 400+ servers, and there isn't any centralised OS management solution like LDAP, so users have to be manually provisioned and removed.  Obviously this means things get missed.

To execute my job I just ran a simple command via an OS Job: grep dba /etc/group (note the file is /etc/group, not /etc/groups).  The job executed, I went through the results and gave a list to the SYSADMINs.  Isn't this a good case for allowing / persuading SYSADMINs to use OEM?
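As a minimal sketch of the check itself (a made-up /etc/group line stands in for the real file here, so the commands can be run anywhere):

```shell
# Sample group entry; on a real host you would read /etc/group directly.
sample='dba:x:54322:oracle,appuser,olduser'

# Field 4 of a group entry is the comma-separated member list;
# print one member per line so the output is easy to eyeball or diff.
echo "$sample" | awk -F: '/^dba:/ {print $4}' | tr ',' '\n'
```

Run against the real file, the output is exactly the list you would hand to the SYSADMINs for review.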

Converting a SYSADMIN to using a ‘database tool’ is never going to be easy; bringing up the subject is difficult enough.  However, for me there is one simple and very effective benefit.  Any OS job you execute through OEM is repeatable AND, more importantly, the results of that job are SAVED in a database.  Therefore not only do you have a snapshot of that configuration, but you can also query and manipulate that information with SQL (not that SYSADMINs know it…YET…).  Suddenly SYSADMINs have the benefit of storing information in a database… before you know it you will give them access to their biscuit drawer and there will be peace and harmony.

Exadata’s onecommand fails to validate NTP servers on storage servers

This will be a short and simple post on an issue I had recently. I got the following error while running the first step of onecommand – Validate Configuration File:

2015-07-01 12:31:03,712 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine verified successfully
2015-07-01 12:31:03,713 [INFO  ][    main][     ValidationUtils:761] SUCCESS: NTP servers on machine verified successfully
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:778] Following errors were found...
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host:
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host:
2015-07-01 12:31:03,714 [INFO  ][    main][     ValidationUtils:783] ERROR: Encountered error while running NTP validation error on host:

Right, so my NTP servers were accessible from the db nodes but not from the cells. When I queried the NTP server from the cells I got the following error:

# ntpdate -dv ntpserver1
1 Jul 09:00:09 ntpdate[22116]: ntpdate 4.2.6p5@1.2349-o Fri Feb 27 14:50:33 UTC 2015 (1)
Looking for host ntpserver1 and service ntp
host found :
transmit( Server dropped: no data
server, port 123

Perhaps I should have mentioned that the cells have their own firewall (cellwall) which will only allow certain inbound/outbound traffic. During boot the cellwall script builds all the rules dynamically and applies them. The above error occurred for two reasons:

A) The NTP servers were specified using hostname instead of IP addresses in OEDA
B) The management network was NOT available after the initial config (applyElasticConfig) was applied

Because of that, cellwall was not able to resolve the NTP servers' IP addresses, and thus they were omitted from the firewall configuration. The solution is simply to restart the cell firewall: /etc/init.d/cellwall restart
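Before restarting cellwall it's worth confirming the names now resolve; a quick sketch (here "localhost" stands in for a real NTP server hostname such as ntpserver1):

```shell
# Check that each configured NTP server name resolves from the cell.
# If it doesn't, cellwall will silently omit it from the firewall rules.
for ntp in localhost; do
    if getent hosts "$ntp" > /dev/null; then
        echo "$ntp resolves"
    else
        echo "$ntp does NOT resolve"
    fi
done
```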

Saving you 15 minutes: Hybrid Cloud Blog Blitz Part 5

Configuring HA for Cloud Gateway Agents.  It took about 15 minutes to read the doc, run the commands, get an error, read the docs and then go ahhhhhhhhh.

So concepts:

  • Hybrid Cloud Agent
    • This is an agent in your Oracle Public Cloud
  • Hybrid Cloud Gateway Agent
    • This is an agent ‘on-prem’: just a regular agent, but it handles the communication to the Hybrid Cloud Agent. This is the bit that you ‘HA’.

So when you are configuring HA for the Hybrid Cloud Agent, you are really enabling HA of the Hybrid Cloud Gateway Agent (not the Cloud Agent), i.e. you have two connections to your Hybrid Cloud rather than one.  To do this you need two on-prem agents; the first agent, which you register using the command:

./emcli register_hybridgateway_agent -hybridgateway_agent_list=''

becomes the Master Gateway Agent. To enable HA you then just run the same command again against another agent to register it as a Gateway Agent too. Now you have registered two agents, both of which are on-prem.

./emcli register_hybridgateway_agent -hybridgateway_agent_list=''

The last bit is then to add the additional registered Hybrid Cloud Gateway Agent to the Hybrid Cloud Agent so that it has a master and a slave gateway.  When I look at the emcli command now it seems obvious, but I was putting the master gateway agent in as the hybrid agent name and the slave in the hybrid gateway agent list. Obviously you need to have an agent in the cloud before you do this.

Hybrid Cloud Agent (Agent in the Cloud):
Master Hybrid Cloud Gateway Agent (On-Prem):
Slave Hybrid Cloud Gateway Agent (On-Prem):

Do this:

./emcli add_hybridgateway_for_hybrid_agent -hybrid_agent_name='' -hybridgateway_agent_list=''

Not this:

./emcli add_hybridgateway_for_hybrid_agent -hybrid_agent_name='' -hybridgateway_agent_list=''

So once that was done I killed the master agent, and the Hybrid Cloud targets were still available.

OEM Cloud Control Silent Install

I was recently asked to come up with a process for a silent install for EM12c.

My initial thought was: that's going to be difficult!

But now that I’ve actually done it (several times!) I’m happy to say how easy it is.

Why might one want to do a silent install? Well, working on customer sites there's not always the option of opening up an X-Windows or VNC session. Also, working through install screens can be time-consuming and can potentially result in a seat-to-keyboard interface error. Scripting a silent install also means you have an easily repeatable process.

Cloud Control was released on June 16th 2015. It’s worth pointing out this is the terminal release of EM12c and contains a number of new features particularly around Hybrid Cloud Management. On that subject I’d heartily recommend checking out the blog blitz my colleague Phil is doing on this.

Anyway back to the silent install, let’s assume that you’ve covered off all of the prereqs and you have a database ready to host the OEM repository.

The Cloud Control software should be downloaded and staged on the server. All the zipfiles must be unzipped to the same directory.

The next stage would be to run the EMPrereqKit tool to confirm the database meets all of the prerequisites for hosting the EM repository. The tool is run as part of the install but it’s better to correct any issues up front.

To run the tool execute the following (logfiles will be generated in your current directory):

<STAGEDIR>/install/requisites/bin/emprereqkit -executionType install -prerequisiteXMLRootDir <STAGEDIR>/install/requisites/list  -connectString “(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<REPOSITORY_DB_HOST>)(PORT=<PORT>)))(CONNECT_DATA=(SID=<SID>)))” -dbUser SYS -dbPassword <PASSWORD> -dbRole sysdba -reposUser SYSMAN -runPrerequisites -runCorrectiveActions

Some failures can be corrected via the runCorrectiveActions flag.

For more details on any failures consult ./prerequisiteResults/log/LATEST/emprereqkit.out

All failures should be addressed prior to commencing the installation of Cloud Control.

So on to the silent install. There are template response files contained within the software directories in <STAGEDIR>/response. The response file template we need is new_install.rsp

If you were going for a silent upgrade then have a look at upgrade.rsp

We’re also going to use staticports.ini to configure the ports as we want them.

The default response file contains a number of comments, instructions etc., but I'm going to cut these out (via an egrep -v '^(#|$)' new_install.rsp) for ease of reading. This gives me, with my amended parameters:
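To show what that filter does, here's the same egrep run against a throwaway sample file (the parameter names below are placeholders for illustration, not my actual response file contents):

```shell
# Build a small sample response file: one comment, one blank line, two parameters.
cat > /tmp/sample.rsp <<'EOF'
# A comment line
SOME_PARAMETER=value1

ANOTHER_PARAMETER=value2
EOF

# Strip lines that start with '#' or are empty, leaving only the parameters.
egrep -v '^(#|$)' /tmp/sample.rsp
```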


You would of course need to double check all of these parameters tie in with your environment. I’m also only installing the default plugins here but you can add additional ones in the PLUGIN_SELECTION variable as needed.

And not forgetting our staticports.ini:

Admin Server Http SSL Port=7101
Managed Server Http Port=7201
Managed Server Http SSL Port=7301
Enterprise Manager Upload Http Port=4889
Enterprise Manager Upload Http SSL Port=1159
Enterprise Manager Central Console Http Port=7788
Enterprise Manager Central Console Http SSL Port=7799
Node Manager Http SSL Port=7401
Oracle Management Agent Port=3872
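One extra sanity check worth doing before the install: confirm the ports you've picked in staticports.ini aren't already in use. A quick sketch using bash's built-in /dev/tcp (a failed connection means the port is free):

```shell
# Probe a port from staticports.ini; requires bash (/dev/tcp is a bash feature).
port=4889
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is already in use"
else
    echo "port $port is free"
fi
```

Repeat for each port in the file, or wrap it in a loop.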

So now we can run the silent install. Passwords can be put in the response file but I prefer not to do this for security reasons. Instead I will pass all of the password variables when executing the runInstaller command.

A couple of preflight checks to do before execution:

echo $ORACLE_HOME – Should be null
echo $PATH – Should not include any Oracle Home locations

Then the runInstaller can be run with the -silent option as the oracle software owner as follows:

<SW_STAGING_LOC>/install/runInstaller -silent \
-responseFile /home/oracle/my_new_install.rsp \

On execution this will run in the background but continue to stream output to stdout, as well as logfiles in /tmp and your oraInventory location.

Once completed you will need to run the script as root.

And that’s it!

Well almost…now you would want to backup your emkey and emconfig, configure BI Publisher integration and apply any recommended patches.

And of course start adding your targets! :)




The basic paradigm of a server CPU attached to a chunk of backing storage hasn’t really changed in decades, but the actual method used to present what looks to a server operating system like a set of disks is often very different than it would have been twenty or thirty years ago. In larger enterprises, data storage for large servers is usually provided from a Storage Area Network, or SAN. This is a shared block storage array which provides disk to servers through a dedicated high speed network, usually using multiple pathways for resiliency, to allow the illusion of locally attached disks.

Two protocols dominate the SAN landscape – Fibre Channel and iSCSI. The former uses dedicated adaptors to connect servers to storage, whereas the latter has the advantage that it can use regular ethernet NICs. iSCSI is in essence a means for implementing the time-honoured SCSI protocol over IP networks.

Another advantage is that an iSCSI target or server, a dedicated SAN of sorts in other words, can be set up very inexpensively using a regular Linux host, ordinary networking, and open source software.

I set up an iSCSI target myself on a simple Ubuntu server, for training and familiarisation purposes. Here’s what I did.

Firstly, install the necessary packages:

# apt-get install iscsitarget iscsitarget-dkms

Then edit /etc/default/iscsitarget to set ISCSITARGET_ENABLE=true

I then added an additional disk to my system to be used for iSCSI storage, and set up a single partition on it (/dev/sdb1). Obviously any spare partition on your system will do.

Next, configure the partition as a target: edit /etc/iet/ietd.conf and append:

Target iqn.2015-06.local.jg:storage.sys0
        Lun 0 Path=/dev/sdb1,Type=fileio,ScsiId=lun0,ScsiSN=lun0

You can call your target whatever you like; the convention is to follow the iqn. prefix with a year and month, then your domain name in reverse, then a colon and a label of your choosing.
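So a name following that convention for the current month can be generated like this (local.jg is the example domain used above):

```shell
# Build an IQN of the form iqn.YYYY-MM.<reversed-domain>:<label>
# using the current year and month.
printf 'iqn.%s.local.jg:storage.sys0\n' "$(date +%Y-%m)"
```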

Next, restart the iSCSI target service to pick up the changes:

# service iscsitarget restart

.. and you should be done!

You can test (or use) your new ersatz SAN from another Linux box quite easily. In my case I connected to it from another Ubuntu machine, so to set this up as an initiator (client), I installed the open-iscsi package like this:

$ sudo apt-get install open-iscsi

Then to ‘discover’ the storage, I did:

# iscsiadm -m discovery -t st -p

.. the IP address shown being that of the target machine. At that point you can simply:

# iscsiadm -m node --login

.. to logon (connect) to the iSCSI storage. In my case this replied with:,1 iqn.2015-06.local.jg:storage.sys0

This will make the disk device attach to the initiator (you can see this by typing dmesg).

Obviously in a simple setup like this I don’t have the advantage of multiple paths, RAID or any of the other nice-to-haves that you’d associate with a self-respecting, serious SAN of any distinction, but it’s useful for tinkering and experimentation, and fun to set up. It can be used to share storage in a domestic environment quite usefully, as well.