Oakies Blog Aggregator

Hotsos 2009

This was my sixth Hotsos Symposium as an attendee, my second as a speaker, and this gathering has become something of a springtime ritual for me. Others may look at the temperature, the dogwoods or the daffodils, but Hotsos is how I know spring is here, even if it did snow in Texas last year. I can divide my life as a DBA into three distinct phases. Phase one: I was given a server (DEC Alpha 2100

J2EE and traditional MVC (Part 1)

A short note to new visitors: this blog documents my vision on how to build database web applications. Normally I do this by presenting a two-hour presentation known as "A Database Centric Approach to J2EE Application Development", first given at Oracle OpenWorld 2002. You can find the original paper here (it's the one titled "A First Jdeveloper Project"). Since the Mayday Miracle gathering in

Generating Histograms

Today I have been trying to generate a histogram for a table column as part of a larger statistics project. In fact it is quite a straightforward process although some of the syntax is a bit tricky.

The table I am working with is called GP.CAR and contains a list of all cars that have raced in Formula 1 since 1961. I am trying to build a frequency histogram based on the DRIVER_KEY column, which is a CHAR(4). The histogram will contain the following data:

DRIVER_KEY   Frequency
----------   ---------
MSCH                91
APRO                51
ASEN                41
NMAN                31
JSTE                27
NLAU                25
JCLA                25
NPIQ                23
FALO                22
DHIL                22
MHAK                20

The histogram can be built using the following code:

DECLARE
   l_statrec  dbms_stats.statrec;
   l_charvals dbms_stats.chararray;
   l_bkvals   dbms_stats.numarray;
BEGIN
   l_charvals := dbms_stats.chararray ();
   l_charvals.extend (11);

   l_bkvals := dbms_stats.numarray ();
   l_bkvals.extend (11);

   l_charvals(1)  := 'MSCH'; l_bkvals(1)  := 91;
   l_charvals(2)  := 'APRO'; l_bkvals(2)  := 51;
   l_charvals(3)  := 'ASEN'; l_bkvals(3)  := 41;
   l_charvals(4)  := 'NMAN'; l_bkvals(4)  := 31;
   l_charvals(5)  := 'JSTE'; l_bkvals(5)  := 27;
   l_charvals(6)  := 'NLAU'; l_bkvals(6)  := 25;
   l_charvals(7)  := 'JCLA'; l_bkvals(7)  := 25;
   l_charvals(8)  := 'NPIQ'; l_bkvals(8)  := 23;
   l_charvals(9)  := 'FALO'; l_bkvals(9)  := 22;
   l_charvals(10) := 'DHIL'; l_bkvals(10) := 22;
   l_charvals(11) := 'MHAK'; l_bkvals(11) := 20;

   l_statrec.epc    := 11;
   l_statrec.bkvals := l_bkvals;
   l_statrec.eavs   := 0;

   dbms_stats.prepare_column_values (l_statrec, l_charvals);

   dbms_stats.set_column_stats (
      ownname => 'GP',
      tabname => 'CAR',
      colname => 'DRIVER_KEY',
      distcnt => 11,
      density => 0.00210084,
      nullcnt => 0,
      srec    => l_statrec,
      avgclen => 4);
END;
/

I have not yet worked out how to calculate the density. In the above example I used the value in DBA_TAB_COLUMNS generated by a previous GATHER_TABLE_STATS operation.

In order to test that the histogram had been correctly generated, I used the following code:

DECLARE
   l_column_name     VARCHAR2(30);
   l_endpoint_number NUMBER;
   l_endpoint_value  NUMBER;
   l_cardinality     NUMBER;

   CURSOR c1 IS
      SELECT column_name
      FROM   dba_tab_columns
      WHERE  owner = 'GP'
      AND    table_name = 'CAR'
      AND    histogram = 'FREQUENCY'
      ORDER BY column_id;

   CURSOR c2 (p_column_name VARCHAR2) IS
      SELECT endpoint_value, endpoint_number,
             endpoint_number -
                NVL (LAG (endpoint_number,1)
                        OVER (ORDER BY endpoint_number), 0)
      FROM   dba_histograms
      WHERE  owner = 'GP'
      AND    table_name = 'CAR'
      AND    column_name = p_column_name
      ORDER BY endpoint_number;
BEGIN
   OPEN c1;
   FETCH c1 INTO l_column_name;
   CLOSE c1;

   DBMS_OUTPUT.PUT_LINE ('Column Name: '||l_column_name);

   DBMS_OUTPUT.PUT ('Endpoint Value ');
   DBMS_OUTPUT.PUT ('Endpoint Number');
   DBMS_OUTPUT.PUT_LINE ('Cardinality');

   OPEN c2 (l_column_name);
   LOOP
      FETCH c2 INTO l_endpoint_value, l_endpoint_number, l_cardinality;
      EXIT WHEN c2%NOTFOUND;
      DBMS_OUTPUT.PUT (LPAD (TO_CHAR(l_endpoint_value),32,' '));
      DBMS_OUTPUT.PUT (LPAD (TO_CHAR(l_endpoint_number),12,' '));
      DBMS_OUTPUT.PUT_LINE (LPAD (TO_CHAR(l_cardinality),12,' '));
   END LOOP;
   CLOSE c2;
END;
/

The values in the ENDPOINT_NUMBER column are cumulative. Therefore I have used a LAG analytic function to calculate the cardinality of each value. The above script generates output similar to the following:

Column Name: DRIVER_KEY

Endpoint Value Endpoint Number Cardinality
-------------- --------------- -----------
40149562867409400000000000000000 91 91
33912840973169800000000000000000 142 51
33918822668491100000000000000000 183 41
40656607447559200000000000000000 214 31
38592008404879700000000000000000 241 27
40654579423238300000000000000000 266 25
38559493043189900000000000000000 291 25
40662755645815800000000000000000 314 23
36478518251134200000000000000000 336 22
35454232704855500000000000000000 358 22
40127236464058400000000000000000 378 20
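The cumulative-to-per-bucket arithmetic that the LAG expression performs can be sketched outside the database. This small Python sketch (not part of the original post, and using the bucket counts from the l_bkvals array above) shows how the cumulative ENDPOINT_NUMBER values are built up and how differencing recovers each bucket's cardinality:

```python
# ENDPOINT_NUMBER in DBA_HISTOGRAMS is a running total over the buckets,
# so each bucket's cardinality is the difference from the previous endpoint,
# mirroring: endpoint_number - NVL(LAG(endpoint_number) OVER (...), 0)

# Bucket frequencies from the l_bkvals array above.
counts = [91, 51, 41, 31, 27, 25, 25, 23, 22, 22, 20]

# What Oracle stores: cumulative endpoint numbers.
endpoint_numbers = []
total = 0
for c in counts:
    total += c
    endpoint_numbers.append(total)

# What the LAG expression recovers: per-bucket cardinality.
cardinalities = [curr - prev
                 for prev, curr in zip([0] + endpoint_numbers[:-1],
                                       endpoint_numbers)]

print(endpoint_numbers)         # [91, 142, 183, ..., 378]
print(cardinalities == counts)  # True
```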


This week I have been writing a lengthy SQL*Plus script to extract details about object statistics from the data dictionary. Some of the join statements have not been performing well, but I did not have time to investigate them, so I rewrote them using PL/SQL cursors and output the results using DBMS_OUTPUT.

The problem I have always had with DBMS_OUTPUT is outputting blank lines. For example:

dbms_output.put_line ('Line One');
dbms_output.put_line ('');
dbms_output.put_line ('Line Two');

returns the following output

Line One
Line Two

I have also tried using PUT_LINE with a space character. For example:

dbms_output.put_line ('Line One');
dbms_output.put_line (' ');
dbms_output.put_line ('Line Two');

which returns the same output:

Line One
Line Two

In other words, I cannot output a blank line to improve readability.

For a long time (since Oracle 6.0) I have been aware of the CHR() built-in function and this offers one solution:

dbms_output.put_line ('Line One');
dbms_output.put_line (CHR(10));
dbms_output.put_line ('Line Two');

which returns the required output:

Line One

Line Two

However, a much more elegant solution is available, at least in Oracle 10.2 and possibly in earlier versions, which modifies the SET SERVEROUTPUT ON statement itself:

SET SERVEROUTPUT ON FORMAT WRAPPED

dbms_output.put_line ('Line One');
dbms_output.put_line (' ');
dbms_output.put_line ('Line Two');

which also returns the required output:

Line One

Line Two

Configuring VMWare on a Laptop

I have been using VMWare Server Linux VMs on my Windows XP laptop for over two years, with both single-instance and RAC configurations across numerous VMs. Normally my laptop is connected to my home network (192.168.1.x), so I have been assigning VMs IP addresses in the same subnet and configuring bridged networks. This worked OK while I was at home, but I always had a problem when I was on the road and could not find a suitable network connection. If all my adapters were reporting "media disconnected", I could not get a network connection between my host and the VMs.

I finally solved this problem (with help from Simon Haslam of Verition) last week. I have installed a Microsoft Loopback Adapter and configured a network address on this. The network address works even if the laptop is not connected to any networks.

The Microsoft Loopback Adapter can be installed using Control Panel -> Add Hardware which launches the Add Hardware Wizard

On the first page select "Yes, I have already connected the hardware"

On the next page select "Add a new hardware device"

On the next page select "Install the hardware that I manually select from a list (Advanced)"

On the next page select "Network adapters"

On the next page in the Manufacturer drop down select "Microsoft" and then in the Network Adapter drop down select "Microsoft Loopback Adapter"

The Loopback Adapter will be installed.

You can then specify an IP address for the loopback adapter using Settings -> Network Connections. I used for my host; is a new subnet in my network.

I found I needed to restart the laptop at this point for VMWare to see the Loopback Adapter. I then started VMWare Server, selected Host -> Virtual Network Settings and then selected the Host Virtual Network Mapping tab. For VMnet0 I selected the Microsoft Loopback Adapter.

I then started the virtual operating system (Linux) and set the IP address for the VM to in /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/hosts. This seems to be sufficient to use putty, pscp etc to communicate between host and VM.
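For anyone recreating this, a minimal static ifcfg-eth0 along those lines might look like the following sketch. The addresses are purely illustrative (the post's actual values were not preserved) and would need to sit in the same subnet as the loopback adapter:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- illustrative addresses only
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.2.100
NETMASK=255.255.255.0
ONBOOT=yes
```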

I have not yet tried this configuration with RAC VMs mainly because I no longer have space on my hard drive for two VMs and the shared storage. Time to upgrade my laptop...

Exadata front and center

Just in case you were like me and did not tune in for Oracle’s quarterly earnings concall, there were some interesting highlights. As many of you (well, there aren’t that many of you that read this, but…) know, I’ve been very interested in Exadata since its announcement at Oracle OpenWorld 2008 in October. While some observed that Larry’s introduction keynote was rather brief, I didn’t take it as a sign of disinterest at all. According to the concall earlier this week, quite the opposite.

Here are some choice excerpts from the transcript that I find telling about the future of Exadata:

Larry Ellison:

“So, that’s looking back. Now looking forward, I think the most exciting product we’ve had in many, many years is our Exadata Database Server.”

“Exadata is 100% innovation on top of our very large and very strong database business. And the early results have been remarkable. Charles Phillips will go into a lot of detail but I’ll just throw a couple of numbers out there.

One of our customers, and Charles will describe this customer, one of our customers saw a 28x performance improvement over an existing Oracle database. Another customer saw a monthly aggregation drop from 4.5 hours just to 3 minutes.

When compared to Teradata, a competitive database machine that’s been in the market for a very, very long time, another customer saw that we were 6x faster than their existing Teradata application, when using Exadata versus Teradata.

Another customer saw a batch process fall from 8 hours to 30 minutes. Charles will go into more detail on all this, he will repeat those numbers, because I think they’re worth mentioning twice.”

Charles Phillips:

“So now just a few comments by area. On databases, Larry mentioned, we’re very excited about how the HP Oracle database machine is performing. The increases have just been stunning and so we are getting great feedback from our customers and the pipeline is the largest build I’ve ever seen in terms of a new product.

And as he mentioned, the numbers are just stunning. The major European retailer who reduced the batch processing time from 8 hours to 30 minutes did not believe the process had completed. We had to convince him that’s actually how it’s done.

And so, as Larry mentioned, this is the reminder that this is an internally developed technology in the midst of all the discussion of acquisitions. People forget that we’re actually spending $3.0 billion a year on research and development and this is why we do it.”

From these snippets, you can see that the top executives at Oracle are excited about Exadata. If you’re a techie (if you’re not, how’d you get to this blog?), you’ll probably already know about Kevin Closson’s popular blog on all things related to Oracle and storage. Kevin is giving a webcast next week on Exadata where we expect he’ll discuss some of the technical workings of the product–deeper than the overview information many of us have heard before. If you’re interested, I strongly encourage you to sign up for the event and attend. There is no better authority on Exadata than Kevin and this is a great opportunity!

Congratulations New Oracle ACE, Jeremy Schneider!

I’ll be the first to offer a large congratulations to Jeremy Schneider on being the most recent appointment to the Oracle ACE program. He certainly deserves it (I nominated him, so I suppose I would think so) and I continue to look for great things to come.

Jeremy is the main creator of the IOUG RAC Attack! event that was held for the first time back in August 2008. He (with help from others) will also be putting it on as a half-day session at Collaborate 09. It’s a University Seminar on Thursday morning. All hands-on, all RAC, all the time. I’m looking forward to the event (I’m volunteering as a staffer). You should sign up now before it’s full! I can almost guarantee you’ll learn something.

Besides his work on this hands on lab/class for RAC, Jeremy has many other community contributions. His blog is full of excellent technical bits that always seem to come from a significant amount of research. He contributes occasionally to the Oracle-L mailing list. He also contributed some code to OCFS (v1) several years ago, so you can guess he understands a thing or two about programming and Linux, too.

His ACE Profile isn’t posted yet, but look for it to arrive soon. In the meantime, read some of the good stuff he wrote on his blog and look for him (and me too) at Collaborate in early May!

Congratulations, Jeremy!

small truths just when you need them ...

Ever have one of those meetings? The ones where you *think* everyone is on the same page and is ready to bust out in the right direction to do great things, but there's a big old wet blanket that derails the whole thing? I had one of those last week and I returned to my desk utterly frustrated and questioning the role I'm expected to play these days. To get my mental state back on track, I

The Helsinki declaration: observation 4

So here is the last observation while looking back at 20+ years of (web) database application development. The fourth observation is about the required developer knowledge investment: how much time do you, as a developer, need to invest in learning and taking on the tools with which you can build database (web) applications? Similar to the DBMS (observation 1) it was real simple to learn the


SAP is a huge, mysterious, expensive animal.

In my very private opinion it is probably the worst ERP system you can buy today. Hence, most whiteshirts will choose it.

To compensate for the fact that it's old and silly technology, it's also exceedingly expensive. Introducing SAP to your company is the only reliable way to tell whether your company is so financially strong that it almost resembles a monopoly.

But what I really hate about SAP is that it removes people from the Oracle database field. I think most of us have experienced the following scenario:

A colleague or a bunch of colleagues are selected to help implement SAP. Until then they've been ordinary DBA's, fixing stuff, running databases and leading normal family lives.

Then they go away for EXTENSIVE training over a LONG period of time. In between the 42 week-long classes they have to take (per year, of course), they usually rest with their families and might show up for short, social functions among their (still) colleagues. But they have these mysterious, far-away eyes... you can't quite reach them.

Then you get the famous message by mouth or email stating:

"We're now almost ready to , so for the next transition period of , I'll be working half of my time with my old stuff and half of my time with SAP before moving to full-time SAP obligations."


I think, but I could be wrong, that they're sucked into a place and time in space that the rest of us can't see or in other ways sense.

A Harry Potter-like parallel universe.

From which, mind you, they never return.

Numerous are the good Oracle DBA's who disappear from the real Oracle world this way.

Now it appears that in their parallel universe (the SAPPU, it could be called) they're slowly corrupted into thinking about Oracle databases the way the real (the few, the remaining) Oracle DBA's thought about databases in the 80's.

They're forced to unlearn all the right things they had learned. A kind of communist re-schooling or indoctrination. So sad.

Here follows a real mail thread from some friends of mine who know more about Oracle than most. They shall, of course, remain nameless - we still have no idea about the powers and general abilities of our disappeared friends in the SAPPU, and so a certain degree of fear of the unknown makes us cautious....

Oracle Person One said:

I find myself once again embroiled in an SAP/Oracle Issue.

It seems I broke something in the production database by applying security patches, even though the 5 development and production servers have no such issues. (No, we can't afford a full system to test patches on - sigh...)

There are a number of ORA-7445 and ORA-600 errors being issued when DML is attempted, along with the requisite trace files. (Yes, there is an active SR already.)

The SAP Basis team sees ORA-3113 and ORA-3114 errors.

I think I finally have them convinced to stop trying to 'fix' the ORA-3113...

Which brings me to the point: The usual method of troubleshooting SAP problems, as practiced by the SAP team, and nearly every SAP person I have worked with (not a terribly large sample - maybe I am just really lucky), goes something like this:

1. Search the SAP support site for every note containing ORA-600, ORA-7445. There are a few as you can well imagine.

2. Call a meeting to discuss which of the actions in these notes should be taken. Whether or not the contents of the note actually match the problem at hand seems to be irrelevant.

3. Ask the DBA to run a script (recommended by SAP support) to check for some problem or another in the data. This script will launch full table scans for every table in the database... Using a for/next loop. Fortunately for me, the script is broken as is. ... and I can't seem to find the problem with it.

4. Rinse and repeat - effectiveness is unimportant, only looking busy is important.

To memorialize this method, I have created a link to the following short but accurate portrayal of this troubleshooting methodology:
To that, Person Two commented:

Make that, 'development and test'
Person Three needed to get this off his chest:

So, regarding recent security patches that might cause havoc:

The one that prevents old client retries in clear text when the encrypted handshake is rejected affects some connection attempts.

“Repairing” the permissions of the archive log destination can ultimately get the archiver stuck far enough behind to toss a 7445 (I think) I didn’t look it up and since my databases never have problems I get rusty on the error messages. (tongue firmly in cheek). If memory serves (see previous sentence) one of the security patches suggests repairing the permissions on the archive log directory without telling you to make sure the ownership is correct.

What are your vintages? I’ve only done SAP stuff ONCE (and walked away quickly and quietly having proved that a certain physical reordering solved all their stated performance issues on the load testing system only to be informed that any manipulation of the data outside of SAP was not allowed.)
Person One felt he finally had someone to talk to who understood him:

Yup, that is SAP SOP.

So far I have refused to do that particular operation, except in a couple cases where it retrieved a lot of empty space due to archival of data.

The 'make work' analogy from "The Longest Yard" (the old one with Burt Reynolds and Eddie Albert) was used in reference to the "SAP Reorg" mentality when they last asked me to do a db reorg.

They didn't get it.

For those of you that don't already know it, the 'make work' for the prison workers in the facility where Reynolds' character was incarcerated consisted of the following:

Morning: shovel mud out of the swamp.
Afternoon: shovel mud back into the swamp.

By the classic definition of work, nothing was accomplished in the end.
But the inmates were still sweaty, tired, thirsty and hungry.
Person Five then finally could say this to a friendly crowd:

I wholeheartedly agree with both of you.

I spent almost 4 years as part of a team that supported SAP. There were 3 dba's on the team, and both of the others received their primary dba training from SAP. Their method was to manage EVERY database as if it were an SAP database.

I volunteered to do every non-SAP upgrade and rearranged everything back to Oracle standards when they weren't looking. :) It took about 2 years to get to them all but it was well worth the trouble.

As for the SAP databases, patch application and upgrades always had different results on dev, test or prod. It was utterly baffling.

Ah well, based on all this, it is really no wonder that SAP is so wildly popular and has won the whole upper(ERP market) over less complicated, cheaper, more technologically advanced - and way more agile - competitors.

If you can sit in the local CEO club and claim: "We actually managed to pay for the WHOLE SAP implementation with our own money and we're still functioning in several departments..." your fellow CEO's will know that you have more money than God or AIG's dealers.

You will have the uttermost respect from them, as they scramble to try and explain why their predecessors in their respective companies chose to implement something different from SAP. Cheaper, of course.

But one day, when they have gained enough financial strength....

Here's one final salute to all those lost colleagues from the Oracle database space. We'll always remember you. You'll never be forgotten. Long live your memory.

Wherever you are. In whatever lifeform.