Top 60 Oracle Blogs


July 2012

Friday Philosophy – Whatever Happened to Run Books?

I realised recently that it is many years since I saw what used to be called a Run Book or System Log Book. This was a file – as in a plastic binder – with sheets of paper or printouts in it about a given system. Yes, this was a while back. It would often also have diagrams {occasionally drawn by hand on scraps of paper – so that would be the database ERD then}, hand-written notes and often the printed stuff would have scribbles against it.

{BTW I asked a colleague if he remembered these and when he said he did, what he used to call them – “err, documentation???”. Lol}

Exporting DBFS via NFS

Anybody who has thought about exporting DBFS via NFS has probably stumbled upon the fact that Oracle says it cannot be done:

DBFS does not support exporting NFS or SAMBA exports

What's wrong with DBFS?

There is nothing wrong with DBFS itself. The problem originates from the fact that FUSE did not implement the interfaces the kernel needs in order to export a filesystem. Newer versions of the Linux kernel fully support exporting FUSE filesystems. I know that OEL 6.x works for sure, as I have done DBFS exports myself over both NFS and Samba. The minimum kernel version commonly cited across the internet seems to be 2.6.27, but I haven't had a chance to check whether that's true.

Older Kernels

The fact of the matter is that it was always possible to export FUSE via NFS, even on older kernels.
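As a rough illustration only (the mount point and client subnet here are hypothetical, and the exact options depend on your kernel), exporting a FUSE-backed mount such as DBFS typically needs the `fsid` option in `/etc/exports`, because FUSE filesystems have no stable device number for the NFS server to derive an export identifier from:

```shell
# /etc/exports -- hypothetical DBFS mount point and client subnet;
# fsid= gives the export a stable identifier, which FUSE cannot provide
/mnt/dbfs  192.168.1.0/24(rw,sync,fsid=101,no_subtree_check)
```

After editing the file, `exportfs -ra` re-reads it without restarting the NFS server.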

Enkitec Extreme Exadata Expo

I will be hanging around at E4; it's going to be a really cool and geeky event. See you all there!

When host credentials attack…

I was using Grid Control to do a DB recovery test yesterday and something rather unusual happened which I cannot remember seeing before.

I turned the DB off and moved all the files (spfile, controlfiles, datafiles, FRA contents, etc.) to a new location for safekeeping. One of the system administrators restored the most recent backups from tape to the disk-based backup location. I then started the recovery…

One of the first steps in the recovery process (using GC) is to enter the host credentials for the DB server where you are doing the recovery. Every time I tried I was told they weren’t correct, even though I could use the same credentials to log in to the server using SSH… Strange…

To make a long story short, if I was logged into the GC as the SYSMAN user, the credentials were recognized. If I was logged in as my own user (a super administrator) credentials for the “oracle” user on the DB server were reported as incorrect.

Why ESXi 5 has become my new standard hypervisor

OK, so I have to admit that I was very sceptical at first about any non-paravirtualised hypervisor, mainly because my knowledge of virtualisation had become a little dated. I thought that anything para-virtualised would clearly outrun anything else, and based on my experience with my older hardware that was actually true.

However, a couple of months ago I bought a new lab server, and because VMware kindly provided the licenses for ESXi 5 through the guru licensing scheme, I gave it a try. I had a few initial problems: ESXi 5 doesn't have the same shell as its predecessor, and it uses the GPT partition format, which didn't work well with my Oracle Linux 6 installation using grub 0.97, a boot loader that cannot deal with the protective MBR on a GPT disk.

Questions – 1

A recent question on the OTN Database forum:

If the block size of the database is 8K and the average row length is 2K, and if we select all columns of the table, the I/O shows that it reads more blocks compared to selecting a specific column of the same table. Why is this?

Secondly, if Oracle brings a complete block into the db buffer cache when reading a block from disk, then why is there a difference in block count between the two queries? This difference shows up when I check the EXPLAIN PLAN for two different queries against the same table, where one selects all columns and the other selects a specific column.

Kindly help me in clearing this confusion.

I can’t see anything subtle or complex in the problem as stated, so why doesn’t the OP give a clear explanation of what’s puzzling them?
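A back-of-the-envelope sketch of the arithmetic behind the question (the block size and row length come from the post; the row count is made up): with 8K blocks and a 2K average row, a full table scan has to visit roughly one block per four rows, whereas a query that can be satisfied from a small index on a single column touches far fewer blocks.

```python
block_size = 8192      # 8K blocks, as stated in the question
avg_row_len = 2048     # 2K average row length
num_rows = 100_000     # hypothetical row count for illustration

# Ignoring block overhead and PCTFREE, about 4 rows fit in each block,
# so "select all columns" via a full table scan reads roughly 25,000 blocks.
rows_per_block = block_size // avg_row_len
full_scan_blocks = num_rows // rows_per_block

print(rows_per_block, full_scan_blocks)   # 4 25000
```

The point is that the block counts in the two plans differ because the access paths differ, not because Oracle reads partial blocks.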


We started on an interesting mad scientist kind of project a couple of days ago.

One of our long-time customers bought an Exadata last month. They went live with one system last week and are in the process of migrating several others. The Exadata has an interesting configuration. The sizing exercise done prior to the purchase indicated a need for 3 compute nodes, but the data volume was relatively small. In the end, a half rack was purchased and all four compute nodes were licensed, but 4 of the 7 storage servers were not licensed. So it’s basically a half rack with only 3 storage servers.

OTN Tour of Latin America: Wrap-up…

The OTN Tour of Latin America is over for me. Several brave souls continue on to the second leg in about a week. For those playing catch-up on my little adventure, you can read the posts here:

Oracle GoldenGate Integrated Capture

Oracle GoldenGate 11.2 release notes contain an interesting new feature:

Extract can now be used in integrated capture mode with an Oracle database. Extract integrates with an Oracle database log mining server to receive change data from that server in the form of logical change records (LCR).
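For context, an Extract is put into integrated capture mode through GGSCI; a minimal sketch follows (the Extract name and credentials are hypothetical, and the exact syntax should be checked against the 11.2 reference):

```
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ***
GGSCI> REGISTER EXTRACT ext1 DATABASE
GGSCI> ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW
```

The REGISTER step is what attaches the Extract to the database log mining server mentioned in the release notes.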

All of that just rings too many bells, so I've decided to find out what exactly had happened. This feature requires database patches to be installed (described in Note:1411356.1).

Stack dumps

The stack dump already reveals a lot of interesting information (I've kept only the relevant frames):

#10 0x00002b08f2ba21b7 in knxoutReceiveLCR () from /u01/app/oracle/ggs/
#11 0x00002b08f2ae1048 in OCIXStreamOutLCRReceive () from /u01/app/oracle/ggs/
#12 0x0000000000721a96 in IXAsyncReader::ProcessBatchNonCallbackArray() ()

Using udev on RHEL 6 / OL 6 to change disk permissions for ASM

When you use Oracle ASM (Automatic Storage Management) for your database, the permissions on the block devices used by ASM need to be changed at the operating-system layer. To be more precise, the owner and group need to be set to ‘oracle’ and ‘dba’ (per the Oracle documentation) in my case.

I used to do this in a very lazy way, with a simple ‘/bin/chown oracle.dba /dev/sdb’ in /etc/rc.local. This worked for me with RHEL/OL version 5, but it no longer does with RHEL/OL 6, because the System V startup system has been replaced by ‘upstart’. Also, in OL6 the disk devices revert their ownership if you set it to oracle.dba by hand.
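For illustration, a udev rule of roughly this shape sets the ownership persistently (the device name is hypothetical; production rules usually match on a stable identifier such as the SCSI serial rather than the kernel name, which can change across reboots):

```shell
# /etc/udev/rules.d/99-oracle-asmdevices.rules -- hypothetical example:
# give the ASM candidate partition the owner/group/mode ASM expects
KERNEL=="sdb1", SUBSYSTEM=="block", OWNER="oracle", GROUP="dba", MODE="0660"
```

After adding the rule, `udevadm control --reload-rules` followed by `udevadm trigger` applies it without a reboot.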