
InSync11 – Day 2

InSync11 day 2 was very much “the day after the night before” for me. I didn’t sleep very well at all. I think I spent most of the night winding myself up about my dodgy demo earlier in the day.

My next presentation was in the last slot on day 2, so I got to see the following presentations before I was up:

Jeremy Ashley & Chris Muir: What’s next for Oracle’s Application User Experiences?

On our first night in Sydney, Chris Muir and I were out with Jeremy Ashley and Mark Drake from Oracle. In addition to the regular geek talk, Jeremy waxed lyrical about end-to-end user experience. It’s quite an interesting subject, with a lot more to it than first springs to mind.

Graham Wood: Ashes of DBTime

I’ve seen Graham speak on this subject a number of times, but it’s always worth checking it out again. I’ll probably end up watching it again in OpenWorld in a few weeks. :)

Marcelle Kratochvil: Why you should be storing unstructured data in the Oracle database.

I have some experience of storing images and documents in the database, so I can appreciate some of the issues Marcelle was highlighting in her presentation. She’s very passionate about the subject and constantly pushing the boundaries of what Oracle is capable of in the multimedia field.

Connor McDonald: Managing optimizer statistics – A better way.

Management of database statistics has got to be one of the most contentious issues. Everyone seems to have a slightly different opinion and I always come away both agreeing and disagreeing with many of the points.

Connor always has great content and is renowned as an excellent presenter. This talk was no different in that respect. In fact, I would go as far as to say this is the best presentation I’ve ever seen him do, which probably means it’s one of the best Oracle presentations I’ve ever seen. If you get the chance to see Connor present, you really should take the opportunity. Of course, if you are a presenter yourself, you may want to avoid it, as it will probably make you paranoid about how bad you are in comparison.

Me: Edition-Based Redefinition.

Rule of thumb: Don’t present a database talk in the same time slot as Tom Kyte because everyone will go to his session, not yours. To be fair, if I wasn’t presenting I would have been in his session too. :)

Even so, the talk went ok and my demo worked as planned, so I slept well that night! :)


Pretty much everyone I spoke to said they were very impressed with the conference this year. The standard of the presentations was high and the location was cool.



Friday Philosophy – Dyslexia Defence League

NB This post has nothing to do with Oracle or even technology really. It’s just some thoughts about one aspect of my life.

I know I’ve mentioned this once before, though it was in an early blog post when I had a readership of about 8, but I am mildly dyslexic. If you want to know how I found out I was dyslexic then check out the original post. I’m quite fond of that post, as a non-technical one, though almost no one read it.

The thing is, I now cringe slightly when I say I am Dyslexic. I’ve sat on this post for weeks, wondering if I should post it. You see, it seems to me that dyslexia, along with some other oddities of perception, has over the last few years almost become a thing to be proud of. A banner to wave to show how great you are. “Hey, look at me, I am this good even though I have Dyslexia” or even “I am great because I have dyslexia”. Maybe I am just a little sensitive about it but it seems to me that more and more people make a thing about it. If I am being candid, I feel a little proud that I did OK academically despite it {I should point out there is no proven link between dyslexia and IQ but in exams you get marked down for spelling and slow reading speed means it takes longer to, well, read stuff!} and in the past I have been very open about mentioning it. Hey, this is my second blog on dyslexia!

However, I’ve had it suggested to me in the past that I use it as a defence for being lazy – Can I prove I am dyslexic? Does it really impact me that much? Well, actually no I cannot prove it and has it impacted me? Not a great deal I guess as I can read pretty much anything {I did say it was mild. Scientific papers and anything with very long words can be a challenge, but isn’t that true of everyone?}. My reading speed is about 120-150 words a minute. Average is about 250wpm. My wife seems to read at about 500wpm :-)

Also, don’t get me wrong, I fully appreciate that looking at a challenge you have and taking the benefits from it that you can is a very healthy attitude. If I remember right it was Oliver Sacks in one of his books (“The Man Who Mistook His Wife for a Hat”, maybe) who describes a man with severe Tourette’s syndrome {which is more often all about physical tics and uncontrolled motions rather than the famous “swearing” aspect of it} who could somehow take advantage of his physical manifestations in his jazz drumming. He could just make it flow for him. But when he took treatment to control the physical issues, his jazz drumming suffered. He really wanted the benefit of the drugs for day-to-day life but keep the Tourette’s for jazz. So he took the drugs during the week and came off just before the weekends when he played. Neat.

Does Dyslexia help me? I think I am more of a diagrams and pictures person than a text person because of my dyslexia and I think I maybe look at things a little differently to most people at times – because of the differences in how I perceive. That can help me see things that maybe others have missed? Maybe an advantage. I’ll take that.

Also, in my case at least, dyslexia is not an issue for me comprehending or constructing written prose. I think I write some good stuff at times.

But I don’t want to be dyslexic. Frankly, it p122es me off.

I’ll give you an example. I did a blog post a few weeks back and it had some script examples in it. I had nearly finished it when I realised I had constantly spelt one word utterly wrong. The spell checker picked it up. But just before I posted it, I realised I had also got my column aliases utterly wrong. I have a little set of rules for generating table and column aliases; it is not complex, but in my head the leading letters of a word are not always, well, the leading letters. I had to alter my scripts and then re-run them all, as I knew if I tried to unpick the spelling mistakes manually I would mess it up; I’ve been there before. It took me hours. I can really do without wasting that time. {Update: since originally drafting this post the same situation has occurred with another technical post}. Then there is the embarrassment of doing something like spelling the name of a column wrong when you design and build a database. I did that in a V8 database when renaming columns was still not a simple task {was it really Oracle 9 release 2 before column rename was introduced?}. The database went live and accrued a lot of data before anyone made an issue of it. It then kept getting mentioned and I had to keep explaining.

I don’t see Dyslexia as a badge of honour and every time I see someone being proud of it (or to my odd mind it seems they are proud of it) or suggesting they are better than average for overcoming it (again, maybe it is just my perception), I just feel uncomfortable. I think each and every one of us has something we have had to overcome to be “normal”.

Yet, on reading that above paragraph back, it is simply insulting to people who have fought and striven to overcome severe dyslexia or other issues with perception or communication. I certainly do not mean that (and I apologise unreservedly to anyone who is now fuming at me because of my callousness).

Maybe that is my issue with the whole topic – I am not uncomfortable with the notion of being proud to have overcome something like dyslexia and I admire people who cope with other conditions which make it harder for them to get by in our culture, but I just can’t see why you would be proud of the condition or want to use it as a bragging right.

I guess I want to be able to just acknowledge my dyslexia, point out it is no big deal in my case but it is why I spell like a 10-year-old. It is as significant as the fact I’m scared of heights. I guess I cringe a little when I say it as I don’t want to be seen to be making excuses, and I certainly do not feel that, in my case at least, I have won through against the odds. Maybe I’ve been a little hard-done-by occasionally but haven’t we all?

Creating Optimizer Trace Files

Many Oracle DBAs are probably familiar with what Optimizer trace files are and likely know how to create them. When I say “Optimizer trace”, more than likely you think of event 10053, right? Then SQL code like this is probably familiar:

alter session set tracefile_identifier='MY_10053';
alter session set events '10053 trace name context forever';
select /* hard parse comment */ * from emp where ename = 'SCOTT';
alter session set events '10053 trace name context off';

In 11g, a new diagnostic events infrastructure was implemented, and there are various levels of debug output that you can control for SQL compilation. ORADEBUG shows us the hierarchy.
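As a sketch of the new infrastructure, the 10053 example above can be expressed with the 11g trace[] event syntax; the component names below (SQL_Compiler, SQL_Optimizer) are the ones ORADEBUG reports in the hierarchy, but it is worth confirming them on your own release with oradebug doc:

-- As sysdba, list the diagnostic component tree; SQL_Compiler
-- contains SQL_Optimizer, which corresponds to the old 10053 trace
-- oradebug doc component SQL_Compiler

alter session set tracefile_identifier='MY_SQL_OPTIMIZER';

-- Enable optimizer tracing via the 11g diagnostic events syntax
alter session set events 'trace[SQL_Optimizer.*]';
select /* hard parse comment */ * from emp where ename = 'SCOTT';
alter session set events 'trace[SQL_Optimizer.*] off';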


In the comments to a recent blog posting about log file syncs, Tony Hasler has produced a stunning observation about Oracle and ACID, in particular the D (durability) bit of transactions.

The observation can best be described with a demonstration (which I have run on versions from 8.1 to 11.2) involving three sessions, one of which has to be connected with sysdba privileges.

Session 1

create table t1(n1 number);
insert into t1 values(1);

session 2 (sysdba)

oradebug setorapid NNN  -- where NNN is the Oracle pid of lgwr (from v$process)
oradebug suspend  -- suspend lgwr processing
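To find the NNN value for the setorapid call (the Oracle pid of the log writer), a join like this works across the versions mentioned; the views and columns are standard, but check them on your release:

-- Oracle pid of lgwr, for use in: oradebug setorapid NNN
select p.pid
from   v$process   p,
       v$bgprocess b
where  b.paddr = p.addr
and    b.name  = 'LGWR';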

session 1

update t1 set n1 = 2;
commit;    -- the session hangs, waiting for log file sync

session 3

select n1 from t1;

Is session 3 going to report the value 1 or 2 ?

Since lgwr has not been able to write the redo generated by session 1 you might think that the answer would be 1; but the session has already done all the work normally associated with a commit (despite any indications you might see in the manual that the data is not committed until it is written), and the actual answer to my question is 2.

Now go back to session 2 and issue:

shutdown abort

When the database starts up, is a query against t1 going to return 1 or 2 ?

It ought to return 2, because after instance recovery you ought to be able to repeat the query of session 3 and get the same answer. But you will see 1 – lgwr didn’t write the transaction’s commit record before the crash, so even though session 3 saw the result of the transaction, the transaction wasn’t recoverable. (In passing, session 3 could have been an incoming query from a remote database.)

When I first saw this behaviour my first thought was that you would have to be very unlucky to see a real problem because the window of opportunity for an error was very small, but after making a few comments about the behaviour at the NoCOUG conference earlier on today I found myself at lunchtime sitting next to someone who described a problem that they had had a few years earlier when their archive destination had become full and a complex piece of synchronisation between two databases had failed in a strange way.

If lgwr has to stop for any reason, that doesn’t stop sessions running on for a little while until the log buffer is full – and if those sessions commit (each one will hang after it commits, of course) you could end up with the results of a number of committed, but unrecoverable, transactions being captured by a remote database and then you’re in trouble if the local database crashes – especially if the local programs decided to re-run the programs to generate the missing data … and that’s the problem my lunchtime companion may have been seeing. (I think I may soon be doing a few experiments to see if I can reproduce the problem – one of the other ideas I’ve assumed to be true is that you can’t get more data into the log buffer if the log writer has initiated a log file switch that has not yet completed, maybe that’s wrong as well.)

Footnote: See comment 2 – the problem is arguably one of isolation rather than durability (though I have finally come down on the side of the problem being durability).

Update (Sunday 10:30 am – GMT/UTC + 1)

There have been some interesting comments and suggestions in response to this posting, and I think there is room for some elucidation about how things work and speculation about how they could be changed.

When you modify data blocks you generate redo change vectors (for the blocks you want to modify, and for undo blocks) and these redo change vectors are combined to form a redo change record. Ignoring a small complexity introduced in 10g, the changes are applied to the blocks in memory as your session copies a redo record into the log buffer; the order (ignoring 10g again) is: copy record into buffer, apply change to undo block, apply change to target block. So changes you make are immediately applied to the current blocks (again ignoring 10g).

When you commit a transaction you update the transaction table slot that identifies your transaction to show that the transaction is committed – this is a change to an undo segment header block so it generates a redo change vector. The handling of this vector is completely standard – your session creates a redo record from it and puts the record into the log buffer – which means it is applied to the undo block at that moment, which means everyone can now see the effects of your transaction. AFTER the record is in the redo buffer your session will post lgwr to write the log buffer to the log file and goes into a “log file sync” wait. But this means (a) your transaction is visible to everyone else before it is recoverable (durable) and (b) you’re the only session that thinks the transaction hasn’t yet “really committed”.

Ideally your session needs to do something that puts the “commit record” into log buffer without letting anyone else see the undo segment header change, and only lets them see the change after your session has been posted by lgwr (and even that leaves a window for inconsistent recovery if the instance crashes after the write has happened but before your session has been posted – see my reply to Martin Berger). A change of this type, though, would probably require a massive rewrite of the logging mechanism because it would make the commit record handling different from every other record. There’s also the side effect this might have on the “group commit” mechanism and consequent scalability.

Prompted by an observation by Sokrates about “other” sessions waiting on log file sync in 11.2, and comments from Mark Farnham about a “write guarantee”, I’ve been trying to think through the consequences of a possible strategy for doing a “two-phase” commit with some “delayed logging” on the undo segment headers. The steps are as follows:

  • Session commits and updates the undo segment header with ‘commit waiting on lgwr’ status rather than ‘committed’, and posts lgwr.
  • Any session looking at a block affected by that transaction now has to deal with indeterminate data – so waits on the transaction slot (with enq: TX in mode 4) rather than assuming a rollback or commit. This session, however, could be allowed to check where the log writer had got to and recognise that the relevant write had been completed (and this is analogous to existing code that runs when a session times out on a log file sync). Note that the underlying strategy here is one that is already in place to cater for distributed transactions caught between the PREPARE and COMMIT phases. The waiting session could also post pmon if it thought that the driving session had failed and was unable to receive the post back from lgwr (and this is analogous to the current behaviour when a process is trying to get a latch that seems to have been held too long).
  • When the original session is posted by lgwr to continue it could modify the transaction table slot without generating redo – or possibly generate it but not issue a log file sync for it – thus allowing the second session to release its TX (mode 4) and continue. (This would require some special handling during recovery for transactions that had been written to lgwr but not confirmed back to their sessions.)
  • There is a consistency problem, though, because commits are sequenced by SCN, and at present lgwr simply posts all sessions that it can see should be cleared by its most recent write – they need not resume in exactly the right order, which means transactions could go from “waiting on lgwr” to “committed” in the wrong order. This means we need another two-step approach: lgwr to write process ids to a linked list in commit (i.e. log buffer) order, with a new background process created to walk the list and post each waiter in turn, waiting (or watching) for it to acknowledge before going on to the next entry in the list.

Feel free to poke holes.


A note to anyone who was at the NoCOUG conference today and who attended my presentation on “The Beginner’s Guide to becoming an Expert.” At the time I couldn’t figure out why the number of redo records in the 10g demonstration was twice the size that it should have been. The answer came to me a few minutes after the end of the session – I had enabled auditing (audit_trail = db) a few days earlier and forgotten about it.
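If you want to rule out the same trap before running redo-counting demonstrations of your own, a quick look at the parameter is enough (this is just a convenience check, not part of the original demo):

-- Check whether auditing is enabled (DB, OS, NONE, ...)
select value
from   v$parameter
where  name = 'audit_trail';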

There is an odd bug in (and possibly other 10.2 releases) which results in Oracle locking the row (redo entry 1) before updating the row (redo entry 2). If I had looked at the script more carefully I would have noticed that I had documented the issue a few months ago.

UCAS Statistics

Those of you who were here earlier will have seen a post questioning a story on the BBC by Pallab Gosh, though its main thrust was to question the blind use of ratios. I used as my source the 2003-2010 figures available at which seem to show Physics had 4300 or so applicants in 2010. Pallab [...]

Oracle Database Time Model Viewer in Excel 6

August 18, 2011 (Back to the Previous Post in the Series) It has been roughly five months since the last installment in this blog article series.  Hopefully, several people have found this series helpful and have adapted the solution to better fit their specific needs.  By the end of the last article the Excel based [...]

I hate Ratios

Well, maybe hate is a strong word, but they sure can and do mislead. The BBC has been running with an interesting story, but it can’t help but try and add a more dramatic ratio into the mix. The story is here, and was featured heavily on the Today programme this morning. In it we [...]

Free Online Classes from Stanford University

If you are looking to know more about the insides of databases (relational algebra, relational design theory, etc.) or machine learning or AI you may want to check out these free online classes from Stanford University. Class begins October 10!

Oracle OpenWorld 2011: Sessions By OakTable Members

It’s that time again — time to figure out what sessions you will be attending at Oracle OpenWorld 2011. In my slightly biased opinion, session by members of the OakTable Network generally have great technical content and give you the most value for your time. To aid you with your scheduling, I’ve compiled a list of sessions by OakTable members here. Enjoy!