December 2009

The CPU Costing Model: A Few Thoughts Part I

In the coming days, I'll post a series of short entries, each highlighting a specific point about the use of system statistics and the CPU costing model. In my previous post, we looked at how the cost of an FTS is calculated using the CPU costing model and how this generally results in an increase in the associated FTS [...]

C. J. Date: All Systems Go

Enrollment is open for the course taught by Christopher J. Date that we'll host 26–28 January in the Dallas/Fort Worth area. For many of us, this will be a once-in-a-lifetime opportunity to sit in a classroom for three days with one of the pioneers who created the field we live in each day.

I'm looking forward to this course myself. It is so easy to use Oracle in non-relational ways, but not understanding how to use SQL relationally leads to countless troubles and unnecessary complexities. Chris's focus in this course will be the discipline of using Oracle in a truly relational way, the mastery of which will make your applications faster, easier to prove, and more fun to write, maintain, and enhance.

What is the purpose of SYS_OP_C2C internal function

Recently I was involved in a problem with the CBO's selectivity estimates on an NVARCHAR2 column. What I spotted in the predicate information was the use of the internal Oracle function SYS_OP_C2C.

Here is an example of the run-time execution plan using bind variables:

SQL_ID 1fj17ram77n5w, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ *
from x1 where cn >= to_nchar(:a1) and cn <= to_nchar(:a2)

Plan hash value: 2189453339

-------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------
|* 1 | FILTER | | 1 | | 9999 |00:00:00.08 | 102 |
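SYS_OP_C2C performs an implicit character-set conversion: it tends to show up when Oracle has to convert one operand of a comparison between the database character set and the national character set, for example when an NVARCHAR2 column meets a plain VARCHAR2 bind. A minimal sketch that typically reproduces it (the table and column names are invented for illustration):

```sql
-- Hypothetical table with an NVARCHAR2 column
CREATE TABLE x1 (cn NVARCHAR2(20));

-- Comparing the NVARCHAR2 column against plain VARCHAR2 binds
-- (no explicit TO_NCHAR) forces an implicit character-set
-- conversion; the predicate section of the plan then contains
-- SYS_OP_C2C(...) wrapped around the converted operand.
SELECT * FROM x1 WHERE cn >= :a1 AND cn <= :a2;
```

Using TO_NCHAR explicitly on the binds, as in the query above, makes the conversion visible in the SQL text instead of leaving it to the internal function.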

A fairly unique opportunity...

CJ Date will be giving a seminar in Dallas, Texas next month. I wish I could get to it myself, but I'm already fully committed that week.

Maybe we can finally get the answer on using NULLs. Or at least some more input :)

OakTable Book "Oracle Expert Practices"

I haven't had much time recently to write interesting blog posts, mainly because I was very busy during the last couple of months, in particular contributing to the latest OakTable book, "Oracle Expert Practices: Oracle Database Administration from the OakTable" from Apress. It has been a very interesting experience: I co-authored two chapters about Performance Optimization Methods together with another OakTable member, Charles Hooper.

This was a real collaborative work, a joint effort if you will. We exchanged the chapter contents rather frequently over the Internet, and I think this approach worked out quite well. I also have to thank Charles for spending a lot of time rewording most of my "German" English into something consistent with his style.

It actually worked so well that what was originally planned as a single chapter grew so fast that it was eventually decided to split it into two, so we ended up with two chapters, each co-authored by both of us.

Obviously, something as complex as Performance Optimization Methods can't be covered to its full extent in a chapter (or even two); sometimes we could only scratch the surface, and countless omissions were necessary. Still, I hope the two chapters provide a good overview of the available performance monitoring and optimization methods.

I admit these two chapters are not an easy read; we have packed a lot of detail into them, but they should really be worth the time spent digging through. We have also prepared numerous scripts, available for download from the Apress website, that reproduce the described methods.

For full coverage of the performance optimization area, Christian Antognini's "Troubleshooting Oracle Performance" is, to me personally, still the benchmark: a very remarkable book.

I really hope that the same will be true for the "Oracle Expert Practices" book - it is supposed to be shipping real soon now.

By the way, I know it is still a bit early, but Charles and I plan to give a joint presentation about our book chapters at the Michigan OakTable Symposium (MOTS), which takes place right before OOW 2010, on the 16th and 17th of September 2010. So if you're looking for a "technical" conference rather than the more marketing-oriented content at OOW, this might be interesting for you.

We have some very good ideas for this presentation: it will probably be more or less "zero-slide" and consist largely of demonstrations, but obviously it's too early to reveal too much.

notes from UKOUG

I am back from my third visit to UKOUG and as always, the trip was wonderful, the conference was excellent and if I could have stayed, I would have. Of course, after dealing with trains all morning and then being stuck in an airplane seat watching movies for hours on end, sleeping in my own bed last night was absolute heaven. But the UK must be a wonderful place to live: the people are lovely,

V$SQL_MONITOR and V$SQL_PLAN_MONITOR

In my recent presentation at UKOUG 2009 in Birmingham I also mentioned a new feature of Oracle 11g Release 1 that is a neat solution for monitoring long-running SQL statements: it captures statistics about SQL execution every second.

For parallel execution, every process involved gets its own entries in V$SQL_MONITOR and V$SQL_PLAN_MONITOR.

It is enabled by default for long-running statements if the parameter CONTROL_MANAGEMENT_PACK_ACCESS is set to “DIAGNOSTIC+TUNING” and STATISTICS_LEVEL is set to TYPICAL or ALL.

It can also be enabled at statement level with the /*+ MONITOR */ hint, or disabled with the /*+ NO_MONITOR */ hint.

There are some defaults which can be altered by setting hidden parameters:
_sqlmon_max_plan - maximum number of plan entries that can be monitored (defaults to 20 per CPU)
_sqlmon_max_planlines - number of plan lines beyond which a plan cannot be monitored (default 300)
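As a sketch of how the feature is used (the monitored query and the column selection are illustrative; the views require the Tuning Pack), one can force monitoring with the hint and then inspect the views:

```sql
-- Force monitoring of a statement regardless of its runtime
SELECT /*+ MONITOR */ COUNT(*) FROM all_objects;

-- Statement-level statistics (one row per process for parallel execution)
SELECT sql_id, status, elapsed_time, cpu_time, buffer_gets
FROM   v$sql_monitor
ORDER  BY sql_exec_start DESC;

-- Per-plan-line statistics for a given statement
SELECT plan_line_id, plan_operation, output_rows
FROM   v$sql_plan_monitor
WHERE  sql_id = '&sql_id'
ORDER  BY plan_line_id;
```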

UKOUG 2009

The UKOUG 2009 conference takes place next week in Birmingham. I will speak on Wednesday, December 2nd about "Execution Plan Stability in Oracle11g". As the audience at the UKOUG conference is very technical, I will add more technical content to my presentation this week.

Here is the link for the conference agenda.

Global Temporary Tables Share Statistics Across Sessions

In another blog posting, I asserted that statistics collected by one session on a Global Temporary Table (GTT) will be used by other sessions that reference that table, even though each session has its own physical instance of the table. I thought I should demonstrate that behaviour, so here is a simple test.

We will need two database sessions. I will create a test Global Temporary table with a unique index.

TRUNCATE TABLE t; -- clear this session's rows first, so the GTT can be dropped
DROP TABLE t PURGE;

CREATE GLOBAL TEMPORARY TABLE t
(a NUMBER,b VARCHAR2(1000))
ON COMMIT PRESERVE ROWS;

CREATE UNIQUE INDEX t ON t(a);
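The test can then proceed along these lines (a sketch; the row count and padding are illustrative): populate the table and gather statistics in the first session, then observe from the second session that the dictionary statistics are shared, even though that session's instance of the table is empty.

```sql
-- Session 1: populate the GTT and gather statistics on it
INSERT INTO t SELECT ROWNUM, RPAD('x', 1000) FROM dual CONNECT BY LEVEL <= 1000;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 't');

-- Session 2: this session's instance of T holds no rows,
-- yet the dictionary reports the statistics from session 1
SELECT num_rows FROM user_tables WHERE table_name = 'T';
```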

Partition Maintenance with Global Indexes

(updated 3.12.2009 to deal with UPDATE GLOBAL INDEXES option)
When I decide to partition a table, I also need to consider whether to partition the indexes, and if so, how. The easiest option is to partition the indexes locally. I try to avoid globally partitioned indexes because they can become invalid when you do partition maintenance. However, where an index leads on a column other than the partitioning key, you might have to scan all partitions of a locally partitioned index if you do not query by the partitioning column. Sometimes it is necessary to partition an index differently from the table, or not at all. However, I have found that there are some partition management operations that do not invalidate global indexes.
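As a sketch (the table and partition names are invented for illustration), the UPDATE GLOBAL INDEXES clause mentioned in the update maintains global indexes during partition maintenance rather than leaving them unusable:

```sql
-- Without UPDATE GLOBAL INDEXES, dropping a partition would
-- mark global indexes on the table UNUSABLE
ALTER TABLE sales DROP PARTITION sales_q1_2009
  UPDATE GLOBAL INDEXES;

-- Verify the index status afterwards
SELECT index_name, status
FROM   user_indexes
WHERE  table_name = 'SALES';
```

The trade-off is that the maintenance operation itself takes longer, since the global index rows for the dropped partition must be cleaned up as part of the DDL.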