May 2011

SQL timings for ASH II


Here are the max, min, and average execution times for SQL executions in ASH data, along with the execution count and a histogram of the execution times.

(tomorrow I'll look at showing the execution date and time of the longest elapsed executions)

col 1 for 99999
col 2 for 99999
col 3 for 9999
col 4 for 9999
col 5 for 9999
col av for 99999.9
with pivot_data as (
   select
      sql_id,
      ct,
      mxdelta mx,
      mndelta mn,
      avdelta av,
      width_bucket(delta,0,mxdelta+.1,5) as bucket
   from (
      select
         sql_id,
         delta,
         count(*) OVER (PARTITION BY sql_id) ct,
         max(delta) OVER (PARTITION BY sql_id) mxdelta,
         min(delta) OVER (PARTITION BY sql_id) mndelta,
         avg(delta) OVER (PARTITION BY sql_id) avdelta
      from (
         select
            sql_id,
            sql_exec_id,
            max(delta) delta
         from ( select
                  sql_id,
                  sql_exec_id,
                  ((cast(sample_time    as date)) -
                   (cast(sql_exec_start as date))) * (3600*24) delta
                from v$active_session_history
                where sql_exec_id is not null )
         group by sql_id,sql_exec_id
      )
   )
)
select * from pivot_data
PIVOT ( count(*) FOR bucket IN (1,2,3,4,5))
order by mx,av
SQL_ID                CT         MX         MN       AV      1      2     3     4     5
------------- ---------- ---------- ---------- -------- ------ ------ ----- ----- -----
5k7vccwjr5ahd       2653       1963          0     33.4   2623     15     8     4     3
ds8cz0fb8w147        161       2531         13    273.8    136     18     5     1     1
bzyny95313u12        114       2599          0     46.5    113      0     0     0     1
0hbv80w9ypy0n        161       4089          0   1183.9     27    116     9     6     3
71fwb4n6a92fv         49       4481         30    676.9     38      6     2     2     1
0bujgc94rg3fj        604       4929          0     24.7    601      1     1     0     1
64dqhdkkw63fd       1083       7147          0      7.2   1082      0     0     0     1
990m08w8xav7s        591       7681          0     51.8    587      0     0     2     2
2spgk3k0f7quz        251      29607          0    546.1    247      2     0     0     2
497wh6n7hu14f         49      69438          0   5498.2     44      1     0     1     3
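The bucket columns 1 through 5 above come from width_bucket, which splits each SQL's execution times over the range [0, mx+0.1) into five equal-width buckets. Here is a minimal Python sketch of the same bucketing (the function name mirrors the SQL built-in, and the sample deltas are made up for illustration):

```python
def width_bucket(value, low, high, num_buckets):
    """Mimic Oracle's WIDTH_BUCKET: return the 1-based bucket for value
    in [low, high) split into num_buckets equal-width buckets; 0 and
    num_buckets+1 are the underflow and overflow buckets."""
    if value < low:
        return 0
    if value >= high:
        return num_buckets + 1
    return int((value - low) / (high - low) * num_buckets) + 1

# Bucket some illustrative per-execution elapsed times against one sql_id's
# max (mx = 1963s, as in the first output row); the upper bound is mx + .1
# so the max itself lands in bucket 5 rather than the overflow bucket.
mx = 1963
deltas = [0, 33, 400, 1200, 1963]
buckets = [width_bucket(d, 0, mx + 0.1, 5) for d in deltas]
print(buckets)  # → [1, 1, 2, 4, 5]
```

Note how heavily the first bucket dominates for most of the SQL above: most executions finish quickly, with a handful of outliers in buckets 4 and 5.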

Coming soon to Dublin, London and Helsinki...

I'll be doing some visits to Europe coming up soon.

On May 16th, I'll be doing a half day session in Dublin, Ireland. The information for this visit can be found here.
On May 17th, I'll be doing a day of "Real World Performance". You can read the writeup for that in Computer World.
And lastly, on May 19th, I'll be participating in the EMEA Harmony Conference in Helsinki, Finland. Chris Date will be there presenting as well! Should be interesting.

EMC World 2011? What A Waste Of Time!

EMC World 2011

Waste of time? Absolutely not! It won't be a waste of time, certainly not for me. I am going and looking forward to it. I'm motivated to learn more about cloud computing infrastructure and to see the interesting things going on with VMware, two technology trends I'm following closely. The way I look at virtualization is two-fold: a) it solves a lot of problems, and b) pretty much everything will eventually run virtualized, so I want to be ahead of the curve. There's no need to be in denial.

Along those lines, I invite folks to read the following paper:

Oracle E-Business Suite on EMC Unified Storage with VMware Virtualization

I believe there is a session on this at EMC World. If so, I plan to attend. One of the things that impresses me most about the project behind this paper is the proof positive that EMC IT actually "eats its own dog food," as the cliché goes. I'm tired of the "Do as I say, not as I do" crowd.

Perhaps some of you are going to EMC World as well? If so it’ll be a pleasure to meet up.

Filed under: oracle

What is the overhead of enabling Block Change Tracking?

I’ve been wondering what kind of impact BCT has on databases and had the opportunity to talk to Vinay Srihari.*

Kyle: Vinay, what kind of impact does BCT have on the performance of an Oracle database?

Vinay: After it was introduced in 10gR1, BCT was put through TPC-C-style OLTP benchmarks internally at Oracle. There was no measurable performance overhead on OLTP. However, it took a release to stabilize the feature and fix several fundamental bugs: 10.2 is all right, not so sure about 10.1.

There was some concern because BCT updates to the in-memory bitmap are done in the critical redo generation path by every foreground process. The BCT logging turned out to be negligible in terms of CPU (instruction count) because redo generation was an order of magnitude more work. The main issue was bugs that caused the BCT background process (CTWR) to get stuck or slow down, to the point where no more in-memory bitmap buffers were available and foreground activity was impacted.

BCT for the Physical Standby was added in 11.1, as part of the Active Data Guard licensed option. There were some critical bugs with this feature that were fully resolved only in 11.2, although patches are available for 11.1.

I have seen BCT deployed successfully at customers with > 20MB/s (50 GB/hr); Oracle's internal testing was ~50MB/s.

Bottom line, stability of the feature and not performance is the concern. On the primary:,, 11.2 are solid. On the standby: (+ BCT patches), (+ BCT patches), are best.

* Vinay is currently an architect at Delphix and spent the past 12 years at Oracle, where he was involved in building High Availability services for the Oracle Database kernel. He was a member of early development teams that delivered RAC Cache Fusion, multi-threaded redo generation, and high-performance replication. Since 2005 he managed the High Availability infrastructure engineering team responsible for the development of the recoverability features of the core database and of high-revenue products like Active Data Guard Standby, RAC, and Exadata. Prior to that, Vinay also had stints in Oracle's technical support and server performance organizations.


Starting with Oracle 11gR1, the Oracle JVM includes a Just-in-Time (JIT) compiler. Its goal is to convert JVM bytecode into platform-specific native code based on performance data gathered at run time. The documentation states that JIT "enables much faster execution," and this is, in general, true. There are ways to control the behavior of JIT: one is described in MOS Doc ID 1309858.1, and another one here.

SQL execution times from ASH


Something I've been looking forward to exploiting for a while is the SQL_EXEC_ID field in ASH. Thanks to Karl Arao for pushing me to look into this. Karl sent me an example query he had written using SQL_EXEC_ID, and then we started discussing how to harness it (see Karl's blog here, and some good examples of SQL_EXEC_ID here).
One of the several cool things about ASH in Oracle 11g is the new SQL_EXEC_ID column. What good is SQL_EXEC_ID? Without it, as in Oracle 10g, there was no way to look at ASH data and distinguish between a SQL statement that had been re-executed multiple times in a row, showing up in consecutive ASH samples, and a single long-running execution of that statement showing up in consecutive samples. In 11g, with SQL_EXEC_ID, we can make that distinction. (The distinction has actually been possible since Oracle version 7 with SASH, simulated ASH.) With SQL_EXEC_ID we can not only distinguish long-running queries from re-executed queries, we can also start looking at average and maximum execution times and see the variation. These timings are only accurate for longer-running queries, though: a query that finishes in under a second will appear at most once per execution in ASH, so no deltas can be calculated.
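The per-execution timing logic just described can be sketched in Python (the ASH-style rows below are fabricated for illustration): for each (sql_id, sql_exec_id) pair, the estimated elapsed time is the maximum of sample_time minus sql_exec_start over that execution's samples, and those per-execution times are then rolled up per sql_id:

```python
from collections import defaultdict

# Fabricated ASH-style rows: (sql_id, sql_exec_id, sample_time, sql_exec_start),
# with times in seconds since an arbitrary epoch.
samples = [
    ("abc", 1, 100, 100),  # short execution: sampled once, at its start
    ("abc", 2, 205, 200),  # longer execution sampled at +5s ...
    ("abc", 2, 210, 200),  # ... and again at +10s
    ("xyz", 7, 303, 300),
]

# Estimated elapsed time per execution = max(sample_time - sql_exec_start),
# i.e. the "group by sql_id, sql_exec_id" step of the SQL.
elapsed = defaultdict(float)
for sql_id, exec_id, sample_time, exec_start in samples:
    key = (sql_id, exec_id)
    elapsed[key] = max(elapsed[key], sample_time - exec_start)

# Then roll up per sql_id: execution count, max, min, and avg elapsed time.
per_sql = defaultdict(list)
for (sql_id, _), tm in elapsed.items():
    per_sql[sql_id].append(tm)

for sql_id, tms in sorted(per_sql.items()):
    print(sql_id, len(tms), max(tms), min(tms), sum(tms) / len(tms))
```

Execution ("abc", 1) was sampled only at its start, so its estimated elapsed time comes out as 0; that is exactly the sub-second inaccuracy mentioned above.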

Here is an example SQL execution timing data mining query (ashsqlt.sql):

col mx for 999999
col mn for 999999
col av for 999999.9

select
       sql_id,
       count(*),
       max(tm) mx,
       avg(tm) av,
       min(tm) min
from (
   select
      sql_id,
      sql_exec_id,
      max(tm) tm
   from ( select
            sql_id,
            sql_exec_id,
            ((cast(sample_time  as date)) -
             (cast(sql_exec_start as date))) * (3600*24) tm
          from v$active_session_history
          where sql_exec_id is not null )
   group by sql_id,sql_exec_id
)
group by sql_id
having count(*) > 10
order by mx,av

and the output looks like

SQL_ID          COUNT(*)      MX        AV        MIN
------------- ---------- ------- --------- ----------
2h83mk46003q4         18       1        .2          0
77m98gu3uzb59         15       1        .2          0
6dnvpqs0zdxhv         30       1        .3          0
40r79s5rxabwr         18       1        .3          0
26ugyh75gmnt4         35       1        .3          0
1sjkhrtg3qa48         14       1        .3          0
4h6kn70hsgz9u         20       1        .3          0
b6s2h63d07t91         13       1        .3          0
a18wua1f703xh         29       1        .3          0
6az55y9y8u86c         12       1        .3          0
7ujfmgb1j83a3         21       1        .3          0

Above is the beginning of the output; below is the end:

SQL_ID          COUNT(*)      MX        AV        MIN
------------- ---------- ------- --------- ----------
0fvrpk7476b7y         26    3068     133.1          0
1pjp66rxcj6tg         15    3106     767.7         57
8r5wuxk1dprhr         39    3510     841.0         24
0w5uu5kngyyty         21    3652     442.3          0
0hbv80w9ypy0n        161    4089    1183.9          0
71fwb4n6a92fv         49    4481     676.9         30
0bujgc94rg3fj        604    4929      24.7          0
64dqhdkkw63fd       1083    7147       7.2          0
990m08w8xav7s        591    7681      51.8          0
2n5369dsuvn5a         16   10472    5726.8        303
2spgk3k0f7quz        251   29607     546.1          0
36pd759xym9tc         12   37934   23861.9       1391
497wh6n7hu14f         49   69438    5498.2          0

Extreme Performance with Oracle Exadata Executive Dinner

I spoke at an Oracle marketing event in San Antonio last night. Here is a link to the promotional page for the event (it will probably disappear soon). I promised to make a copy of my slides available so here it is:

How to Collect Statistics

May 4, 2011. In a previous article I shared a couple of the slides that I put together for an Oracle Database presentation for a specific ERP group, a presentation that, as designed, was far too long to fit into the permitted time slot. I thought that in today's blog article I would share a couple [...]

Getting fit, hopefully…

The other day I joked about the gym when I was talking about Thor.

I was very impressed by the change in Lewis Cunningham when I last saw him and I’ve been following Debra Lilley’s success with interest. Reading Chet’s post Tired of the Fat Jokes today has inspired me to out myself. I am on a bit of a health trip at the moment. Pretty much since I got into the Oracle ACE program I’ve piled on the weight. I could blame the disruption to my routine and foreign travel, but really I just got into avoiding exercise and eating crap food. Nobody to blame but myself.

About 2 months back I decided to sort my diet out. It’s been a little up and down, but so far I’ve lost 21 lbs. It feels a little like polishing a turd, as there is still a long way to go, but it’s a start. I ventured to the gym last week to start trying to get back into exercise again. I’m more interested in being fit, than being thin. It wasn’t very inspiring, so I decided to join a new place, hoping that the money would be a motivator. Today I had my first personal training session. Part of me hates the idea, because I know what I have to do, but I thought, in for a penny, in for a pound.

The guy was less than half my age and looked like he only stopped exercising to go to the toilet. He asked me what my goals were and after deciding that “to look like Brad Pitt in Fight Club” was not really an achievable goal I just said, “to get fit”. I would like to say I impressed him, but the reality was after 2 minutes I was considering feigning a heart attack so I could stop. After 15 minutes it took all my mental effort to stop myself from crying. I know interval training is great for getting fit, but it is so darn difficult, which is why it’s so easy not to bother with it when you are training on your own. After that I spent about 15 mins on a bike silently debating the pros and cons of throwing up. Next the guy took me to the mats to show me some stretching stuff. After the shame of the previous 30 minutes it was nice to do something I can do well. I’m very flexible and if I’m honest I’m a bit of a stretching snob. I managed to keep my gob shut (I really must have been knackered) and let him do his job. I’ve booked in a session with him again next week. I’m going to try and get to the gym every day between now and then. Hopefully next time I will last 16 minutes of intervals before I want to cry. :)

With a bit of luck, adding the exercise into the mix will speed things up and keep me motivated.

Thanks to all those mentioned above, and countless other bloggers, whose posts serve as encouragement when the thought of surfing the net and eating pizza seems infinitely more appealing than vegetables and treadmills. :)



16GFC Fibre Channel is 16-Fold Better Than 4GFC? Well, All Things Considered, Yes. Part I.

16GFC == 16X
If someone walked up to you on the street and said, "Hey, guess what, 16GFC is twice as fast as 8GFC! It's even 8-fold faster than what we commonly used in 2007," you'd yawn and walk away. In complex (e.g., database) systems there's more to it than line rate. Much more.

EMC’s press release about 16GFC support effectively means 16-fold improvement over 2007 technology. Allow me to explain (with the tiniest amount of hand-waving).

When I joined the Oracle Exadata development organization in 2007  to focus on performance architecture, the state of the art in enterprise storage for Fibre Channel was 4GFC. However, all too many data centers of that era were slogging along with 2GFC connectivity (more on that in Part II). With HBAs plugged into FSB-challenged systems via PCIe it was uncommon to see a Linux commodity system configured to handle more than about 400 MB/s (e.g., 2 x 2GFC or a single active 4GFC path). I know more was possible, but for database servers that was pretty close to the norm.

We no longer have front-side bus systems holding us back*. Now we have QPI-based systems with large, fast main memory, PCIe 2.0, and lots of slots.

Today I'm happy to see that 16GFC is quickly becoming a reality, and I think balance will be easy to achieve with modern ingredients (i.e., switches, HBAs). Even 2U systems can handily process data flowing in via several paths of 16GFC (1600 MB/s each). In fact, I see no reason to shy away from plumbing 4 paths of 16GFC to two-socket Xeon systems for low-end DW/BI. That's 6400 MB/s, and that is 16X better than where we were even as recently as 2007.
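The back-of-envelope arithmetic behind that 16X claim, as a quick sketch (numbers taken from the text, in MB/s):

```python
# 2007 baseline: ~2 x 2GFC or a single active 4GFC path per commodity server
baseline_mb_s = 400

# Modern: 4 paths of 16GFC, each good for roughly 1600 MB/s
paths, per_path_mb_s = 4, 1600
modern_mb_s = paths * per_path_mb_s

print(modern_mb_s)                   # → 6400
print(modern_mb_s // baseline_mb_s)  # → 16
```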

Be that as it may, I’m still an Ethernet sort of guy.  I’m also still an NFS sort of guy but no offense to Manly Men intended.

A Fresh Perspective
The following are words I’ve waited several years to put into my blog, “Let’s let customers choose.” There, that felt good.

In closing, I’ll remind folks that regardless of how your disks connect to your system, you need to know this:

Hard Drives Are Arcane Technology. So Why Can’t I Realize Their Full Bandwidth Potential?

* I do, of course, know that AMD Opteron-based servers were never bottlenecked by Front Side Bus. I’m trying to make a short blog entry. You can easily google “kevin +opteron” to see related content.

Filed under: oracle