Keep Your Secrets Safe with EncFS

Most readers will be familiar with services like KeePass and Passpack, which allow passwords, passphrases and chunks of text to be held in an encrypted store. For enterprises that need to share passwords and other secrets they’re ideal, but for data you only need to access yourself, there are simpler ways to keep your files secure that don’t involve a remote service.

It’s quite possible to encrypt an entire disk, of course, and that’s a very useful way to guard against leaving your laptop in the back of a cab along with a gigabyte or two of sensitive information – provided the laptop is switched off at the time it’s examined. For desktop machines like my own, which are switched on 24/7, full disk encryption is not such a good idea, because the data I want to protect is only opaque while the system is powered off or the disk unmounted.

Fortunately, for Linux users there’s a simple way to encrypt data selectively within a filesystem in such a way that it can be opened and closed easily, like a safe. EncFS is an open source, user-space encrypted filesystem that’s provided with most Linux distros. Once installed, setting up a stash for your secret stuff is as easy as, for example:

$ mkdir $HOME/vault $HOME/.vault_encfs
$ encfs $HOME/.vault_encfs $HOME/vault

The first time you issue the encfs command, you’ll be prompted for a password twice and the encrypted filesystem will be created. At this point, you can create or copy files into $HOME/vault/ and you’ll see them turn up, encrypted with bizarre and unusual filenames, in $HOME/.vault_encfs/, which is the underlying location (~/vault being as it were a ‘decrypted view’). Subsequent invocations of the command in exactly the same form will mount a pre-existing stash.

As soon as you unmount the ~/vault directory, like this:

$ fusermount -u $HOME/vault

… the decrypted view will disappear, leaving an empty mount point, and your data will be safe from prying eyes.

A useful tip if you need to share your encrypted data between a number of computers is to use a folder synchronisation service like Dropbox or Insync as the location of your stash, so in the above example this might be $HOME/.vault_encfs.
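As a sketch of that setup – assuming Dropbox syncs $HOME/Dropbox, and reusing the directory names from above – only the encrypted stash lives in the synced folder, while the mount point stays outside it:

```shell
# Assumption: Dropbox (or any sync service) mirrors $HOME/Dropbox.
STASH="$HOME/Dropbox/.vault_encfs"   # holds only ciphertext -- safe to sync
MOUNT="$HOME/vault"                  # decrypted view -- never synced
mkdir -p "$STASH" "$MOUNT"
# Then mount and unmount exactly as before:
#   encfs "$STASH" "$MOUNT"
#   fusermount -u "$MOUNT"
```

Because the stash directory only ever contains ciphertext, the sync service never sees your plaintext files.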

A tool called gnome-encfs-manager is available for some Linux desktop environments. This provides a panel applet to create, mount and unmount encrypted folders quickly and easily, so you don’t need to get your hands dirty with the command line.

Saving you 15 minutes: Provisioning 12c RDBMS and GI via 12c OEM (OS Credentials / Potential Bug)

I’m deploying 12c via OEM and I think I may have come across an anomaly. When you run the automated provisioning you need a ‘normal’ user and a ‘privileged’ user. Now, if you have a user which has already been created and has the power of SUDO, then that single user is essentially both.
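For reference, a user like that would typically carry a sudoers grant along these lines (the username “oracle” is just an illustration, not something from the deployment itself):

```
# /etc/sudoers.d/oracle -- hypothetical grant giving the deployment user full sudo
oracle ALL=(ALL) NOPASSWD: ALL
```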

When you create that user as a named credential all you do is specify the ‘run’ privilege.

Here is a screenshot:

[Screenshot: c1]

So when it came to setting the preferred credentials, I set the normal and privileged credentials to the same named credential, since that single credential is both normal and privileged.

[Screenshot: c2]

However, when I came to run the 12c OEM deployment, I hit this error:

The output of the directive is:
comparing version 12 1 0 2 0 with 11 2 0 0 0
12 1 0 2 0 is greater than 11 2 0 0 0
comparing version 12 1 0 2 0 with 12 0 0 0 0
12 1 0 2 0 is greater than 12 0 0 0 0
comparing version 12 1 0 2 0 with 12 0 0 0 0
12 1 0 2 0 is greater than 12 0 0 0 0
Command to execute : /oracle/stage/SIDB-1432728585899/gi/stage_racp/racp_prereqs/cvupack/bin/cluvfy comp sys -p ha -fixupnoexec -r 12.1 -verbose -osdba dba -orainv oinstall
Exit code is : 1
You must NOT be logged in as root (uid=0) when running /oracle/stage/SIDB-1432728585899/gi/stage_racp/racp_prereqs/cvupack/bin/cluvfy.

Having previously got this to work, I tracked back and noticed that the only difference was the preferred credentials settings. Previously I had created two named credentials for the same user – one without the SUDO privilege and one with it – and set those as the normal and privileged preferred credentials. This feels like it could be a small bug: when I switched back to two separate preferred credentials, the deployment worked fine.


Saving you 15 minutes: 12c S&M Flashback Database (12.1.0.1 Booo, 12.1.0.2 Hooray)

Just a quick one: I was testing Flashback Database in 12.1.0.1 – nothing special, just a standard operational test – and hit an issue. This was flashing back a whole single-tenant database.
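For context, the restore point flashed back to below would have been created beforehand with something like this (a sketch – I’m assuming a guaranteed restore point, which is typical for this kind of test):

```sql
SQL> create restore point BEFORE_INSERT guarantee flashback database;

Restore point created.
```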

RMAN> flashback database to restore point 'BEFORE_INSERT';
Starting flashback at 18-MAY-15
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:04

Finished flashback at 18-MAY-15
RMAN> exit

SQL> alter database open resetlogs;
Database altered.

SQL> select con_id, name, open_mode from v$containers;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 PB101C                         MOUNTED

SQL> alter pluggable database pb101c open;
Warning: PDB altered with errors.

When I hit this issue I went to the alert log, which showed this:

***************************************************************
WARNING: Pluggable Database PB101C with pdb id - 3 is
altered with errors or warnings. Please look into
PDB_PLUG_IN_VIOLATIONS view for more details.
***************************************************************

So here are five minutes saved if you ever need to find out what issues you have with your pluggable database:

SQL> select time, message from PDB_PLUG_IN_VIOLATIONS;

TIME                            MESSAGE
18-MAY-15 10.30.48.614584       Database option CONTEXT mismatch: PDB installed version 12.1.0.1.0. CDB installed version NULL.

18-MAY-15 10.30.48.615477       Database option OWM mismatch: PDB installed version 12.1.0.1.0. CDB installed version NULL.

18-MAY-15 10.30.48.616230       Database option XOQ mismatch: PDB installed version 12.1.0.1.0. CDB installed version NULL.

Seeing those errors on an empty database suggests a pretty fundamental problem. The 12.1.0.2 patch set fixes a lot of flashback-related bugs; I couldn’t find this one specifically, but the patch set documentation doesn’t necessarily cover every bug. The dba_registry also looks a bit screwy:

Oracle Database Vault                    INVALID
Oracle Application Express               VALID
Oracle Label Security                    INVALID
Spatial                                  INVALID
Oracle Multimedia                        VALID
Oracle Text                              REMOVED
Oracle XML Database                      INVALID
Oracle Database Catalog Views            VALID
Oracle Database Packages and Types       INVALID
JServer JAVA Virtual Machine             VALID
Oracle XDK                               VALID
Oracle Database Java Packages            VALID
OLAP Analytic Workspace                  VALID
Oracle Real Application Clusters         OPTION OFF

When I tried exactly the same thing on 12.1.0.2, it worked fine. When we want a quick ‘play’ with something, there is a tendency just to download the base release from OTN and have a go; but given the step change in architecture, it makes sense to spend a little time installing the latest patch set and deploying a PSU or two.

Speaking at UKOUG Systems Event and BGOUG

I’m pleased to say that I will be speaking at the UKOUG Systems Event 2015, held at the Cavendish Conference Center in London on 20 May 2015. My session, “Oracle Exadata Meets Elastic Configurations”, starts at 10:15 in the Portland Suite. Here is the agenda of the UKOUG Systems Event.

In a month’s time I’ll also be speaking at the Spring Conference of the Bulgarian Oracle User Group. The conference will be held from 12th to 14th June 2015 at the Novotel hotel in Plovdiv, Bulgaria. I’ve got the conference opening slot at 11:00 in the Moskva hall, and my session topic is “Oracle Data Guard Fast-Start Failover: Live demo”. Here is the agenda of the conference. I would like to thank EDBA for making this happen!

applyElasticConfig.sh fails with “Unable to locate any IB switches”

With the release of Exadata X5, Oracle introduced elastic configurations and changed how the initial configuration is performed. Previously you ran applyconfig.sh, which would go across the nodes and change all the settings according to your config. This script has now evolved into applyElasticConfig.sh, which is part of OEDA (onecommand). During a recent deployment I ran into the problem below:

[root@node8 linux-x64]# ./applyElasticConfig.sh -cf Customer-exa01.xml

Applying Elastic Config...
Applying Elastic configuration...
Searching Subnet 172.16.2.x..........
5 live IPs in 172.16.2.x.............
Exadata node found 172.16.2.46.
Collecting diagnostics...
Errors occurred. Send /opt/oracle.SupportTools/onecommand/linux-x64/WorkDir/Diag-150512_160716.zip to Oracle to receive assistance.
Exception in thread "main" java.lang.NullPointerException
    at oracle.onecommand.commandexec.utils.CommonUtils.getStackFromException(CommonUtils.java:1579)
    at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.doDaApply(ApplyElasticConfig.java:105)
    at oracle.onecommand.deploy.cliXml.ApplyElasticConfig.main(ApplyElasticConfig.java:48)

Going through the logs we can see the following message:

2015-05-12 16:07:16,404 [FINE ][ main][ OcmdException:139] OcmdException from node node8.my.company.com return code = 2 output string: Unable to locate any IB switches... stack trace = java.lang.Throwable

The problem was caused by the IB switch names in my OEDA XML file being different from the ones actually in the rack – in fact, the IB switch hostnames were missing from the switch’s hosts file. So if you ever run into this problem, make sure the hosts file (/etc/hosts) on each IB switch has the correct hostname in the proper format:

#IP                 FQDN                      ALIAS
192.168.1.100       exa01ib01.local.net       exa01ib01

Also make sure to reboot the IB switch after any change to the hosts file.
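As a quick sanity check – this is just a sketch, the helper name is mine, and the exa01ib01/local.net names come from the example above – you can grep an /etc/hosts-style file for the expected “IP FQDN ALIAS” line before re-running applyElasticConfig.sh:

```shell
# check_ib_hosts_entry FILE ALIAS DOMAIN -- return 0 if FILE contains a line of
# the form "IP  ALIAS.DOMAIN  ALIAS", the format expected for an IB switch.
check_ib_hosts_entry() {
  grep -Eq "^[0-9.]+[[:space:]]+$2\.$3[[:space:]]+$2([[:space:]]|\$)" "$1"
}

# Example against the entry shown above:
printf '192.168.1.100\texa01ib01.local.net\texa01ib01\n' > /tmp/ib_hosts_demo
check_ib_hosts_entry /tmp/ib_hosts_demo exa01ib01 local.net && echo "entry looks good"
```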