Saving you 15 minutes: Hybrid Cloud Blog Blitz Part 5

Configuring HA for Cloud Gateway Agents. It took me about 15 minutes to read the doc, run the commands, get an error, re-read the doc and then go ahhhhhhhhh.

So concepts:

  • Hybrid Cloud Agent
    • This is an agent in your Oracle Public Cloud
  • Hybrid Cloud Gateway Agent
    • This is an agent ‘on-prem’. It’s just a regular agent, but it handles the communication to the Hybrid Cloud Agent; this is the bit that you ‘HA’.

So when you configure ‘HA for the Hybrid Cloud Agent’ you are actually enabling HA of the Hybrid Cloud Gateway Agent (not the Cloud Agent), i.e. you have two connections to your Hybrid Cloud rather than one. To do this you need two on-prem agents; the first agent, which you register using the command below, becomes the Master Gateway agent:

./emcli register_hybridgateway_agent -hybridgateway_agent_list='gc12beta.redstk.com:3872'

To enable HA you then just run the same command again for another agent to register it as a second Gateway agent. Now you have two registered agents, both of which are on-prem:

./emcli register_hybridgateway_agent -hybridgateway_agent_list='10.200.132.36:3870'

The last bit is then to add the additional registered Hybrid Cloud Gateway Agent to the Hybrid Cloud Agent so that it has a master and a slave gateway. When I look at the emcli command now it seems obvious, but I was putting the master gateway agent in as the hybrid agent name and then the slave in the hybrid gateway agent list. Obviously you need to have an Agent in the cloud before you do this.

E.g:
Hybrid Cloud Agent (Agent in the Cloud): 129.152.130.39
Master Hybrid Cloud Gateway Agent (On-Prem): gc12beta.redstk.com
Slave Hybrid Cloud Gateway Agent (On-Prem): 10.200.132.36

Do this:

./emcli add_hybridgateway_for_hybrid_agent -hybrid_agent_name='129.152.130.39:3872' -hybridgateway_agent_list='10.200.132.36:3870'

Not this:

./emcli add_hybridgateway_for_hybrid_agent -hybrid_agent_name='gc12beta.redstk.com:3872' -hybridgateway_agent_list='10.200.132.36:3870'

So once that was done I killed the master agent and the Hybrid Cloud targets were still available.
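
If you want to repeat the failover test, stopping the master gateway agent cleanly works just as well as killing it. A minimal sketch, with the agent instance home assumed (adjust for your install):

# on the master gateway host
/u01/app/oracle/agent/agent_inst/bin/emctl stop agent
# confirm the Hybrid Cloud targets are still up in the console, then bring it back
/u01/app/oracle/agent/agent_inst/bin/emctl start agent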

OEM Cloud Control 12.1.0.5 Silent Install

I was recently asked to come up with a process for a silent install for EM12c.

My initial thought was: that’s going to be difficult!

But now that I’ve actually done it (several times!) I’m happy to say how easy it is.

Why might one want to do a silent install? Well, working on customer sites there’s not always the option of opening up an X-Windows or VNC session. Also, working through install screens can be time-consuming and can potentially result in a seat-to-keyboard interface error. Scripting a silent install also means you have an easily repeatable process.

Cloud Control 12.1.0.5 was released on June 16th 2015. It’s worth pointing out this is the terminal release of EM12c and contains a number of new features particularly around Hybrid Cloud Management. On that subject I’d heartily recommend checking out the blog blitz my colleague Phil is doing on this.

Anyway, back to the silent install. Let’s assume that you’ve covered off all of the prereqs and you have a database ready to host the OEM repository.

The Cloud Control software should be downloaded and staged on the server. All the zipfiles must be unzipped to the same directory.

The next stage would be to run the EMPrereqKit tool to confirm the database meets all of the prerequisites for hosting the EM repository. The tool is run as part of the install but it’s better to correct any issues up front.

To run the tool execute the following (logfiles will be generated in your current directory):

<STAGEDIR>/install/requisites/bin/emprereqkit -executionType install -prerequisiteXMLRootDir <STAGEDIR>/install/requisites/list -connectString "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<REPOSITORY_DB_HOST>)(PORT=<PORT>)))(CONNECT_DATA=(SID=<SID>)))" -dbUser SYS -dbPassword <PASSWORD> -dbRole sysdba -reposUser SYSMAN -runPrerequisites -runCorrectiveActions

Some failures can be corrected automatically via the -runCorrectiveActions flag.

For more details on any failures consult ./prerequisiteResults/log/LATEST/emprereqkit.out

All failures should be addressed prior to commencing the installation of Cloud Control.

So on to the silent install. There are template response files contained within the software directories in <STAGEDIR>/response. The response file template we need is new_install.rsp

If you were going for a silent upgrade then have a look at upgrade.rsp

We’re also going to use staticports.ini to configure the ports as we want them.

The default response file contains a number of comments, instructions etc. but I’m going to strip those out into a working copy for ease of reading; something like this does the trick:
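
egrep -v '^(#|$)' new_install.rsp > /home/oracle/my_new_install.rsp

Editing that copy (the filename is my choice; it’s the one passed to -responseFile later) gives me, with my amended parameters: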

RESPONSEFILE_VERSION=2.2.1.0.0
UNIX_GROUP_NAME="oinstall"
INVENTORY_LOCATION="/u01/app/oraInventory"
SECURITY_UPDATES_VIA_MYORACLESUPPORT=FALSE
DECLINE_SECURITY_UPDATES=TRUE
INSTALL_UPDATES_SELECTION="skip"
ORACLE_MIDDLEWARE_HOME_LOCATION="/u01/app/oracle/middleware"
ORACLE_HOSTNAME="oem.localdomain"
AGENT_BASE_DIR="/u01/app/oracle/agent"
ORACLE_INSTANCE_HOME_LOCATION="/u01/app/oracle/gc_inst"
CONFIGURE_ORACLE_SOFTWARE_LIBRARY=true
SOFTWARE_LIBRARY_LOCATION="/u01/app/oracle/swlib"
DATABASE_HOSTNAME="oem.localdomain"
LISTENER_PORT="1521"
SERVICENAME_OR_SID="EMREP"
DEPLOYMENT_SIZE="SMALL"
MANAGEMENT_TABLESPACE_LOCATION="/oradata/emrep/mgmt.dbf"
CONFIGURATION_DATA_TABLESPACE_LOCATION="/oradata/emrep/mgmt_ecm_depot1.dbf"
JVM_DIAGNOSTICS_TABLESPACE_LOCATION="/oradata/emrep/mgmt_deepdive.dbf"
STATIC_PORTS_FILE="/home/oracle/staticports.ini"
FROM_LOCATION="../oms/Disk1/stage/products.xml"
DEINSTALL_LIST={"oracle.sysman.top.oms","12.1.0.5.0"}
b_upgrade=false
EM_INSTALL_TYPE="NOSEED"
CONFIGURATION_TYPE="ADVANCED"
TOPLEVEL_COMPONENT={"oracle.sysman.top.oms","12.1.0.5.0"}

You would of course need to double-check that all of these parameters tie in with your environment. I’m also only installing the default plugins here, but you can add additional ones via the PLUGIN_SELECTION variable as needed.

And not forgetting our staticports.ini:

Admin Server Http SSL Port=7101
Managed Server Http Port=7201
Managed Server Http SSL Port=7301
Enterprise Manager Upload Http Port=4889
Enterprise Manager Upload Http SSL Port=1159
Enterprise Manager Central Console Http Port=7788
Enterprise Manager Central Console Http SSL Port=7799
Node Manager Http SSL Port=7401
Oracle Management Agent Port=3872

So now we can run the silent install. Passwords can be put in the response file but I prefer not to do this for security reasons. Instead I will pass all of the password variables when executing the runInstaller command.

A couple of preflight checks to do before execution:

echo $ORACLE_HOME – Should be null
echo $PATH – Should not include any Oracle Home locations
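
If either check fails, it’s easy enough to clean the session up first; a quick sketch:

unset ORACLE_HOME
echo "$PATH" | tr ':' '\n' | grep -i oracle   # should return nothing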

Then the runInstaller can be run with the -silent option as the oracle software owner as follows:

<SW_STAGING_LOC>/install/runInstaller -silent \
-responseFile /home/oracle/my_new_install.rsp \
WLS_ADMIN_SERVER_PASSWORD="<PASSWORD>" \
WLS_ADMIN_SERVER_CONFIRM_PASSWORD="<PASSWORD>" \
NODE_MANAGER_PASSWORD="<PASSWORD>" \
NODE_MANAGER_CONFIRM_PASSWORD="<PASSWORD>" \
SYS_PASSWORD="<PASSWORD>" \
SYSMAN_PASSWORD="<PASSWORD>" \
SYSMAN_CONFIRM_PASSWORD="<PASSWORD>" \
AGENT_REGISTRATION_PASSWORD="<PASSWORD>" \
AGENT_REGISTRATION_CONFIRM_PASSWORD="<PASSWORD>"

On execution this will run in the background but continue to stream output to stdout, as well as writing logfiles to /tmp and your oraInventory location.

Once completed you will need to run the allroot.sh script as root.
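
The installer prints the exact location at the end of the run; with the directory layout from my response file it would be something like:

/u01/app/oracle/middleware/oms/allroot.sh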

And that’s it!

Well, almost… now you would want to back up your emkey and emconfig, configure BI Publisher integration and apply any recommended patches.
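
For the emkey and config backups, the usual emctl verbs do the job. A sketch, with the OMS home from my response file and output paths of my choosing:

/u01/app/oracle/middleware/oms/bin/emctl config emkey -copy_to_file_from_credstore -emkey_file /home/oracle/emkey.ora
/u01/app/oracle/middleware/oms/bin/emctl exportconfig oms -dir /home/oracle/oms_backup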

And of course start adding your targets! :)

iSCSI

The basic paradigm of a server CPU attached to a chunk of backing storage hasn’t really changed in decades, but the actual method used to present what looks to a server operating system like a set of disks is often very different from what it would have been twenty or thirty years ago. In larger enterprises, data storage for large servers is usually provided by a Storage Area Network, or SAN. This is a shared block storage array which provides disk to servers through a dedicated high-speed network, usually with multiple pathways for resiliency, to create the illusion of locally attached disks.

Two protocols dominate the SAN landscape – Fibre Channel and iSCSI. The former uses dedicated adaptors to connect servers to storage, whereas the latter has the advantage that it can use regular ethernet NICs. iSCSI is in essence a means for implementing the time-honoured SCSI protocol over IP networks.

Another advantage is that an iSCSI target or server, a dedicated SAN of sorts in other words, can be set up very inexpensively using a regular Linux host, ordinary networking, and open source software.

I set up an iSCSI target myself on a simple Ubuntu server, for training and familiarisation purposes. Here’s what I did.

Firstly, install the necessary packages:

# apt-get install iscsitarget iscsitarget-dkms

Then edit /etc/default/iscsitarget to set ISCSITARGET_ENABLE=true
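
Or do the same non-interactively, assuming the shipped default of ISCSITARGET_ENABLE=false:

# sed -i 's/ISCSITARGET_ENABLE=false/ISCSITARGET_ENABLE=true/' /etc/default/iscsitarget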

I then added an additional disk to my system to be used for iSCSI storage, and set up a single partition on it (/dev/sdb1). Obviously any spare partition on your system will do.

Next, configure the partition as a target: edit /etc/iet/ietd.conf and append:

Target iqn.2015-06.local.jg:storage.sys0
        Lun 0 Path=/dev/sdb1,Type=fileio,ScsiId=lun0,ScsiSN=lun0

You can call your target whatever you like; the convention after the iqn. prefix is a year-month date, followed by your domain name reversed, then a colon and an identifier of your choice (as in iqn.2015-06.local.jg:storage.sys0 above).

Next, restart the iSCSI target service to pick up the changes:

# service iscsitarget restart

.. and you should be done!

You can test (or use) your new ersatz SAN from another Linux box quite easily. In my case I connected to it from another Ubuntu machine, so to set this up as an initiator (client), I installed the open-iscsi package like this:

$ sudo apt-get install open-iscsi

Then to ‘discover’ the storage, I did:

# iscsiadm -m discovery -t st -p 192.168.0.200

.. the IP address shown being that of the target machine. At that point you can simply:

# iscsiadm -m node --login

.. to logon (connect) to the iSCSI storage. In my case this replied with:

192.168.0.200:3260,1 iqn.2015-06.local.jg:storage.sys0

This attaches the disk device to the initiator (you can see it arrive by typing dmesg).
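
From there the LUN behaves like any local disk. A minimal sketch, assuming the new device came up as /dev/sdc (check dmesg for the actual name):

# mkfs.ext4 /dev/sdc
# mkdir -p /mnt/iscsi
# mount /dev/sdc /mnt/iscsi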

Obviously in a simple setup like this I don’t have the advantage of multiple paths, RAID or any of the other nice-to-haves that you’d associate with a serious, self-respecting SAN, but it’s useful for tinkering and experimentation, and fun to set up. It can also be used quite usefully to share storage in a domestic environment.

AWR Repository and SYSAUX Size – utlsyxsz.sql

Working on a Production DB recently I noticed that AWR retention was still set to the measly default of 8 days.

I asked if we could increase this to 32 days to which I was asked how much more space we would need.

Before I started digging into v$sysaux_occupants and multiplying by 4 + Fudge_Factor, I (vaguely) remembered that there is actually an Oracle-provided routine for calculating this:

$ORACLE_HOME/rdbms/admin/utlsyxsz.sql

This was from 10g upgrade days when you wanted to estimate size for the new SYSAUX tablespace.

Anyway, the script will show the current usage and then prompt for parameters (retention and interval settings, active sessions, number of tables, amongst other things) to estimate the new sizing requirements.
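
Running it is straightforward; assuming you’re on the database host with ORACLE_HOME set:

sqlplus / as sysdba @$ORACLE_HOME/rdbms/admin/utlsyxsz.sql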

Sounds great but does it work?

Well in my environment not so much.

The current SYSAUX size with an 8 day retention was 550MB with AWR data consuming 190MB of that.

The script estimated that changing retention to 32 days would cause SYSAUX to increase in total to 750MB and AWR to 450MB.

This was even with exaggerating the amount of activity there was likely to be.

I was sceptical at the time and ultimately proved correct.
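
For reference, the retention change itself is a one-liner, with the retention given in minutes. A sketch:

sqlplus / as sysdba <<'EOF'
-- 32 days expressed in minutes: 32 * 24 * 60 = 46080
exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 46080)
EOF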

32 days later the actual usage figures are:

~~~~~~~~~~~~~~~~~~~~
Current SYSAUX usage
~~~~~~~~~~~~~~~~~~~~
| Total SYSAUX size:                  1,189.5 MB
|
| Total size of SM/AWR                  806.5 MB ( 67.8% of SYSAUX )

So in reality I would have been better served with my original 4*Current AWR Size + Fudge Factor calculation :)

However at least I now have a way of easily summarizing the current usage of SYSAUX.
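
If you just want the occupant breakdown without running the full script, v$sysaux_occupants gives it directly; a quick sketch:

sqlplus / as sysdba <<'EOF'
SELECT occupant_name,
       ROUND(space_usage_kbytes/1024) AS size_mb
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
EOF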

Let me know if you ever used utlsyxsz.sql and got accurate results!

Saving you 15 minutes: Hybrid Cloud Blog Blitz Part 4

In this blog I am going to clone back from the cloud to the on-premises environment. During a demo I did yesterday I was asked what happens if the Oracle Home PSU versions don’t match between the source and the target. Taking a look at the documentation, the restrictions on pluggable database cloning are fairly light!

https://docs.oracle.com/cd/E24628_01/em.121/e27046/prov_create_pdb.htm#EMLCM93263

For example there is nothing mentioned about character sets, yet if you look at the Oracle documentation regarding character sets you will see some ‘considerations’ for the multitenant architecture. Therefore it must come into play in cloning? (Another blog, I guess.)

So in terms of PSU nothing is mentioned in the documentation, and from a very quick look at MOS I couldn’t find anything either. So there are two possible outcomes:

  1. PSU variance doesn’t matter as long as you run ./datapatch post-clone (and OEM does that without an issue); see the sketch after this list
  2. PSU variance does matter and OEM will protect you from your own (my) stupidity
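
For reference, the post-clone step in outcome 1 would look something like this, run from the cloned target’s Oracle Home:

cd $ORACLE_HOME/OPatch
./datapatch -verbose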

The other thing I am keen to see is the length of time for the operations. Pushing to the cloud (I assume it’s a push) takes about 30 minutes.

So here is the initial clone with the source and target now reversed:

[Screenshot: HC26]

The clone this time took 8 minutes.

[Screenshot: HC27]

All good!