Oracle Linux and MySQL: Progress

A few months ago I wrote about some MySQL on Oracle Linux migrations we were working through. It’s been a long time coming, but last weekend was the go-live for this batch of migrations. So far so good! :)

Most of the elapsed time since my last post on this subject has been spent with the developers and users testing the migrations.

It was 12 years ago today…

It was exactly 12 years ago today that I first presented my Index Internals – Rebuilding The Truth presentation at a local ACT Oracle User Group event. And so began my association with Oracle indexes. It would be interesting to know how many people have subsequently read the presentation :) It would no doubt result in […]

KeePass 2.27 Released

I just noticed that KeePass 2.27 has been released.

I was introduced to KeePass at my current job and now I use it for everything at home too. You can read how I use KeePass here.

Happy upgrading…

Cheers

Tim…



Avoiding Data Doomsday: succeed now or pay later

“Virtualized data is fast becoming the new norm. No one can justify using storage like before.” – Tim Gorman

Recurring Conversations: AWR Intervals (Part 1)

I've seen plenty of blog posts and discussions over the years about the need to increase the AWR retention period beyond its default of 8 days. Experienced Oracle folk understand how useful it is to have a history of performance metrics long enough to cover an entire workload period so that we can, for example, compare the acceptable performance of the last month-end batch processes to the living hell of the current month-end. You'll often hear a suggested minimum of 35-42 days, and I could make good arguments for keeping even more history for trending and capacity management.
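For reference, both the retention and the snapshot interval are set in minutes through the documented DBMS_WORKLOAD_REPOSITORY package; a minimal sketch, assuming a 42-day target:

BEGIN
  -- Extend AWR retention to 42 days; both settings are in minutes.
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 42 * 24 * 60,   -- 60480 minutes = 42 days
    interval  => 60              -- keep hourly snapshots (the default)
  );
END;
/

-- Check the settings now in force.
SELECT snap_interval, retention FROM dba_hist_wr_control;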

Why is P1 the only parameter populated in cell smart table scan?

Anyone who has looked at Exadata might ask this question, and I did too. After all, cell smart table scan is in the wait class User I/O, so there should be more parameters populated, right? This is what you find for a smart scan:

NAME                           PARAMETER1           PARAMETER2           PARAMETER3                     WAIT_CLASS
------------------------------ -------------------- -------------------- ------------------------------ ---------------
cell smart table scan          cellhash#                                                                User I/O
cell smart index scan          cellhash#                                                                User I/O

Compare this to the traditional IO request:
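The listing above looks like v$event_name output, and the contrast is easy to reproduce. A minimal sketch, querying the same view for the smart scan and for the classic single-block read, whose three parameters (file#, block# and blocks) are all populated:

SELECT name, parameter1, parameter2, parameter3, wait_class
FROM   v$event_name
WHERE  name IN ('cell smart table scan', 'db file sequential read')
ORDER  BY name;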

SQL Plan Baselines

Here’s a thread from Oracle-L that reminded me of an important reason why you still sometimes have to hint SQL (rather than following the mantra “if you can hint it, baseline it”).

I have a query that takes 77 seconds to optimize (it’s not a production query, fortunately, but one I engineered to make a point). I can enable SQL plan baseline capture and create a baseline for it, and given the nature of the query I can be confident that the resulting plan will always be exactly the plan I want. If I have to re-optimize the query at any time (because it runs once per hour, say, and is constantly being flushed from the library cache), how much time will the SQL plan baseline save me?

The answer is NONE.

The first thing that the optimizer does for a query with a stored SQL plan baseline is to optimize it as if the baseline did not exist.
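For reference, a sketch of one way to create such a baseline, loading the cached plan for a single statement (the sql_id value is a placeholder):

DECLARE
  l_plans  PLS_INTEGER;
BEGIN
  -- Load the plan currently in the cursor cache as an accepted
  -- SQL plan baseline; the function returns the number of plans loaded.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
    sql_id => '1a2b3c4d5e6f7'   -- placeholder sql_id
  );
END;
/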

No Hakan factor for IOT

Sadly, there seems to be no concept of the Hakan factor for an IOT.

I have an application which merges into an IOT, the merge incrementally populating a swag of initially null columns and hence growing the rows in size. Some simple benchmarking shows the overhead of this versus merging into a table with pre-populated values:
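The benchmark itself is truncated in this excerpt, but a sketch of the two structures being compared might look like this (table and column names are hypothetical):

-- The IOT: rows start sparse and are fattened up by later merges.
CREATE TABLE t_iot (
  pk    NUMBER PRIMARY KEY,
  col1  VARCHAR2(100),          -- initially null
  col2  VARCHAR2(100)           -- initially null
) ORGANIZATION INDEX;

-- The heap-table equivalent. After loading a representative sample
-- of full-sized rows, a heap table can have its rows per block
-- capped via the Hakan factor:
--   ALTER TABLE t_heap MINIMIZE RECORDS_PER_BLOCK;
-- There is no such control for the IOT, so rows that grow during
-- the merge cause extra leaf block splits.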

Adjusting Histograms

This is a quick response to a question on an old blog post asking how you can adjust the high value if you’ve already got a height-balanced histogram in place. It’s possible that someone will come up with a tidier method, but this was just a quick sample I created and tested on 11.2.0.4 in a few minutes. (Note – this is specifically for height-balanced histograms; it’s not appropriate for 12c, which has introduced hybrid histograms that will require me to modify my “histogram faking” code a little.)
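For context, this kind of adjustment hand-edits column statistics through the documented DBMS_STATS read/modify/write cycle; a minimal sketch, with hypothetical table and column names:

DECLARE
  l_distcnt  NUMBER;
  l_density  NUMBER;
  l_nullcnt  NUMBER;
  l_avgclen  NUMBER;
  l_srec     DBMS_STATS.STATREC;
BEGIN
  -- Read the existing column statistics, including the histogram
  -- endpoints held in the STATREC record.
  DBMS_STATS.GET_COLUMN_STATS(
    ownname => user, tabname => 'T1', colname => 'N1',
    distcnt => l_distcnt, density => l_density,
    nullcnt => l_nullcnt, srec    => l_srec,
    avgclen => l_avgclen
  );

  -- Adjust the endpoint values in l_srec here (e.g. raise the high
  -- value); the exact manipulation depends on the column data type.

  -- Write the modified statistics back.
  DBMS_STATS.SET_COLUMN_STATS(
    ownname => user, tabname => 'T1', colname => 'N1',
    distcnt => l_distcnt, density => l_density,
    nullcnt => l_nullcnt, srec    => l_srec,
    avgclen => l_avgclen
  );
END;
/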

Philosophy 22

Make sure you agree on the meaning of the jargon.

If you had to vote, would you say that the expressions “more selective” and “higher selectivity” are different ways of expressing the same idea, or exact opposites of each other? I think I can safely say that I have seen people waste a ludicrous amount of time arguing past each other and confusing each other because they didn’t clarify their terms (and one, or both, parties actually misunderstood the terms anyway).

Selectivity is a value between 0 and 1 that represents the fraction of data that will be selected – the higher the selectivity, the more data you select.

If a test is “more selective” then it is a harsher, more stringent test and returns less data (e.g. Oxford University is more selective than Rutland College of Further Education): more selective means lower selectivity.
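A quick worked example with hypothetical numbers: against a 1,000-row table, a predicate that returns 10 rows has selectivity 10/1000 = 0.01, while one that returns 500 rows has selectivity 0.5. The first predicate is the more selective of the two precisely because its selectivity is the lower value.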