The Myth of Oracle Fusion…

I read a post this morning by Grant Ronald talking about Fusion Apps. In Grant’s post he mentioned things that people have been saying about Fusion over the years. Middleware and Apps are not my specialist field, but I get to hear a lot about them from the conferences and ACE Director meetings, so I have been witness to the Oracle Fusion myth from the beginning.

Cast your mind back several years and the whole concept of Fusion was launched at OOW. We were told that the middleware stack was going to become a single coherent product, rather than the buggy rag-tag bunch of technologies we had in 9iAS and AS10g. Sounds good so far, but then all the existing stuff got rebranded as Fusion Middleware when the products it was made up of hadn’t significantly changed. That’s confusing.

Fast forward a bit and we were expecting something like real Fusion Middleware to appear, then the BEA buyout was announced and WebLogic became the core of Fusion Middleware. Oh. So this wonderful coherent product that Oracle had been developing and we were expecting soon was swapped for a best-of-breed app server from an acquisition. Strange and a little disconcerting, but at least we have a better app server now, except that some of the existing features still required you to install the old AS10g stuff. Still the name Fusion is plastered everywhere.

Fast forward a bit more and we have got to a point where applying the term “Fusion” to the middleware stack is less insulting, but if anyone experienced Fusion along the way they would probably have been left with a bad feeling about what Fusion actually means. It’s very hard to overcome a bad first impression. Are Oracle really surprised that the term “Fusion” is associated with myth and confusion?

OK. That’s the Middleware. What about Fusion Apps? Well, the name includes the word “Fusion”, so it takes on all the bad connotations associated with the infancy of Fusion Middleware. Added to that, since the original announcement of Fusion Apps there have been numerous acquisitions, all of which have no doubt added to the confusion about what Fusion Apps actually is. Then we are told there is no natural upgrade from eBusiness Suite to Fusion Apps. It’s a new product and we have to migrate data to it as we would any new ERP. Next we are told that the initial release will only be a subset of the modules we require, so we will have to run it alongside eBusiness Suite. Wow. This is really confusing. That sounds like a half-finished ERP built on a half-finished middleware stack. Once again, are Oracle really surprised people react like this?

Now I’m not saying Fusion Middleware is bad. It’s come a long way. I’m also not saying Fusion Apps are bad. I’ve seen the demos and they look amazing. I’ve also talked to people in that field who are genuinely impressed and excited by it. I believe it will be a big eye-opener and possibly a game-changer for a lot of people. What I’m saying is I can totally understand why people on the outside of our little goldfish bowl have a really bad and confused impression of anything containing the term “Fusion”, because it does have a very long and sordid history.

In my opinion the term Fusion needs to be scrapped and replaced, then perhaps we can forget the history and focus on the now. Kinda like they did with Beehive. :)

Cheers

Tim…

Redo over multiple weeks

I’ve always wanted some sort of calendar view of load where I could see patterns across the same days of the week and same hours of the day and then be able to pick different periods and diff them:

The first step of such a display would be selecting the data in a way that can drive the graphic. A real graphic would be harder to produce than a similar, though less powerful, ASCII representation.

So here is a quick sketch of looking at redo generation over time. Right now I have to monitor how well redo can be archived, which depends on how fast it is generated. (Typically what I’m more interested in is general performance, which is what the graphic above addresses.)

The output below shows the average redo generation per hour (picking the maximum hour within each bucket, which is 4 hours long below) and the maximum generation at the minute level. I may not be able to keep up with archiving at the minute level, but I do want to make sure I can keep up with each hour.
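For reference, all of these numbers come straight from DBA_HIST_SYSMETRIC_SUMMARY. A minimal sketch of the raw, unbucketed data (same metric and &&DBID substitution as the full script further down; the column formatting is just illustrative) looks something like this:

set pagesize 100
col avg_mb_s for 999.9
col max_mb_s for 999.9

-- raw snapshot-interval values for the redo rate metric:
--   average = average redo bytes/sec over the interval
--   maxval  = the highest value the metric hit within the interval (roughly the minute-level peak)
select begin_time,
       round(average/1024/1024,1) avg_mb_s,
       round(maxval/1024/1024,1)  max_mb_s
from   dba_hist_sysmetric_summary
where  dbid = &&DBID
and    metric_name = 'Redo Generated Per Sec'
and    begin_time > sysdate - 31
order  by begin_time
/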

DAY    HOUR  MAX(MB)    w1     w2      w3       w4       w5
SAT     0       9               5       4       4       3
SAT     4       12              4       5       4       4
SAT     8       1               0       0       0       0
SAT     12      0               0       0       0       0
SAT     16      1               0       0       0       0
SAT     20      4               1       0       0       1
SUN     0       10              3       3       1       3
SUN     4       13              6       5       3       5
SUN     8       7               6       6       0       6
SUN     12      7               0       3       0       4
SUN     16      1               0       0       1       0
SUN     20      8               3       2       0       3
MON     0       8               3       2       3       2
MON     4       7               2       3       2       1
MON     8       5               1       0       2       0
MON     12      1               0       0       1       0
MON     16      1               0       0       0       0
MON     20      7               2       2       0       2
TUE     0       14              6       5       7       4
TUE     4       7               1       1       1       2
TUE     8       3               0       0       0       0
TUE     12      1       1       0       0       0       0
TUE     16      1       1       0       0       0       0
TUE     20      3       1       1       1       1       1
WED     0       8       3       2       3       2       2
WED     4       7       2       1       3       2       2
WED     8       8       1       0       0       2       3
WED     12      7       1       0       0       1       1
WED     16      1       1       0       0       0       1
WED     20      4       1       1       1       1       1
THU     0       15      7       8       6       6       6
THU     4       8       2       1       1       1       1
THU     8       1       1       0       0       0       1
THU     12      16      1       11      0       0       1
THU     16      1       1       0       0       0       1
THU     20      4       1       1       1       1       1
FRI     0       11      2       2       2       2       2
FRI     4       8       3       1       1       1       1
FRI     8       4       1       0       0       0       0
set heading off
set feedback off
set pagesize 0

with pivot_data as (
   select
          WW pivot_col
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from
      ( select distinct
               begin_time,
               4*trunc(to_char(begin_time,'HH24')/4)     HH,
               to_char(begin_time,'DY')       DY,
               mod(to_char(begin_time,'D'),7)  D,
               mod(to_char(begin_time,'WW'),5)       WW,
               average/(1024*1024) average,
               maxval/(1024*1024) maxval,
               snap_id
        from   DBA_HIST_SYSMETRIC_SUMMARY
        where  dbid=&&DBID and
               metric_name='Redo Generated Per Sec'
	       and begin_time > sysdate - 31
       )
   group by HH,D,DY,WW
)
select DY ||'	'||
       HH ||'	'||
       round(max(mv),0)    ||'	'||
       round(max(w1),0)  ||'	'||
       round(max(w2),0)  ||'	'||
       round(max(w3),0)  ||'	'||
       round(max(w4),0)  ||'	'||
       round(max(w5),0)
from (
   select *
   from  pivot_data
   pivot ( avg(av)
           -- mod(to_char(begin_time,'WW'),5) yields 0-4, so map 0-4 to w1-w5
           -- (matching the DECODE version of the query for 10g below)
           for pivot_col in ( 0 as w1, 1 as w2, 2 as w3, 3 as w4, 4 as w5 )
         )
)
group by DY,D,HH
order by D,HH
/

set heading on
set feedback on
set pagesize 30



PS: the PIVOT syntax above is 11g-only; for 10g, here is the same query written without PIVOT:


set pagesize 100
col DY for A4
col HH for 99
col mx for 99
col w1 for 99
col w2 for 99
col w3 for 99
col w4 for 99
col w5 for 99

with pivot_data as (
   select
          WW
        , DY
        , D
        , HH
        , max(maxval) mv
        , max(average) av
   from
      ( select distinct
               begin_time,
               4*trunc(to_char(begin_time,'HH24')/4)     HH,
               to_char(begin_time,'DY')       DY,
               mod(to_char(begin_time,'D'),7)  D,
               mod(to_char(begin_time,'WW'),5)       WW,
               average/(1024*1024) average,
               maxval/(1024*1024) maxval,
               snap_id
        from   DBA_HIST_SYSMETRIC_SUMMARY
        where
               metric_name='Redo Generated Per Sec'
           and begin_time > sysdate - 31
       )
   group by HH,D,DY,WW
)
select DY, HH,
      round(max( mv ),0) mx,
      round(max( decode( WW, 0 , av, null ) ),0) w1,
      round(max( decode( WW, 1 , av, null ) ),0) w2,
      round(max( decode( WW, 2 , av, null ) ),0) w3,
      round(max( decode( WW, 3 , av, null ) ),0) w4,
      round(max( decode( WW, 4 , av, null ) ),0) w5
   from  pivot_data
group by DY,D,HH
order by D,HH
/

Exadata Book 2

I have been getting quite a few questions about our upcoming Exadata Book lately so I thought I would post a quick update. We are working feverishly on it so please give us a break!

Just kidding!

I am actually feeling like we can see the light at the end of the tunnel now. We are well past the halfway mark and I am feeling confident about the content. Well, more than confident actually. I think it’s going to be awesome! In large part that’s due to the fact that I feel like we have the Dream Team working on the project. Tanel Poder has signed on as a co-author. Kevin Closson is the Official Technical Reviewer (and we’re tentatively planning on including a number of his comments in the book – in little “Kevin Says” sidebars). As Kevin was one of the main architects of Exadata, this should provide some interesting perspective. Arup Nanda has volunteered as an unofficial technical reviewer as well. I have to say that Arup has been a great help. And I really appreciate him providing another perspective on what we’re writing about. All three of these guys are fellow Oak Table brethren, by the way. Randy Johnson is the other co-author, and although he generally prefers to keep a low profile, he is extremely knowledgeable on things that the rest of us don’t deal with that much on a day-to-day basis, namely backup and recovery and storage configuration. He has a great RAC and ASM background as well. I also have to say that a guy none of you has ever heard of (Andy Colvin) has been a huge help as well. He is our in-house Exadata patching guru. Without him I’m not sure we would have been able to do the necessary testing to complete the book.

I must say that I feel honored to be involved in a project with such an accomplished group of guys. And by the way, we have had numerous offers of help with various aspects of the project from people that I have a lot of respect for. I want to thank all of you for those offers, even if we haven’t taken you up on all of them (our little brains can only absorb so much feedback at any one time). The book is actually available for pre-order on Amazon already (so someone must think we are actually going to finish it pretty soon). I think we’re right on track for a late spring delivery. :-)

Philosophy – 13

If you see a comment like “X is a bad idea” this does not mean “some mechanism that is vaguely ‘not X’ is a good idea”.

If, for example, I say:

    “Histograms will not work well on character strings that are more than 32 bytes long and generally similar in the first 32 bytes”

that is absolutely not the same as saying

    “It’s a good idea to create histograms on character strings that are less than 32 bytes long.”

If this were a purely mathematical world we could invoke symbolic logic and point out:

(A => B) <=> (¬B => ¬A)

which means my statement is equivalent to:

    “If you have a histogram that is working well, then the data is not character strings of more than 32 bytes with generally similar values in the first 32 bytes.”

Of course, being Oracle, you may find that someone, somewhere, has exactly such a histogram that appears to work brilliantly for them – but that will be because the optimizer has messed up the arithmetic so much that they are getting a great execution plan for completely the wrong reason … so they need to watch out for the next upgrade or patch release in case the optimizer gets enhanced.
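For anyone who wants to check their own data against this, here is a hedged sketch (the table T and column PADDED_COL are invented for illustration) that creates strings sharing a long common prefix, gathers a histogram, and then looks at what the dictionary actually recorded:

-- 10,000 strings that all share the same first 40 bytes
create table t as
select rpad('similar_prefix_', 40, 'x') || rownum padded_col
from   dual
connect by level <= 10000;

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'T',
    method_opt => 'for columns padded_col size 254'
  );
end;
/

-- the stored endpoints only reflect the leading bytes of each value,
-- so the buckets cannot tell these strings apart
select endpoint_number, endpoint_value, endpoint_actual_value
from   user_tab_histograms
where  table_name  = 'T'
and    column_name = 'PADDED_COL'
order  by endpoint_number;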

[The Philosophy Series]

AAS on AWR

I’m starting this post as an ongoing discussion of research.
Today I received an AWR export and wanted to get a feel for the load on the system. I imported the AWR data and then wanted a quick overview, so I put together a script to show AAS over time, broken down into CPU, WAIT and IO. A current drawback is that the script uses SYSDATE, which might not be appropriate if the AWR data is from many days ago.
The query seems a bit slow on a month of data, so I will be looking at optimizations in the future.
+ = load demanding CPU
o = load waiting for IO to complete
- = load waiting for something like a lock, latch or other resource
The “8” in the middle of the line marks the number of CPU cores. A system that was 100% maxed out on CPU and not waiting for IO or other resources would have pluses (“+”) right up to the number of cores, which is 8 in this case. If the pluses go past the number of cores, there is contention for CPU resources.
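The arithmetic behind all of this is straightforward: DBA_HIST_ACTIVE_SESS_HISTORY keeps roughly one sample every 10 seconds for each active session, so each row represents about 10 seconds of DB time, and AAS for a bucket is simply 10 * samples / bucket_seconds. Here is a stripped-down sketch of just that calculation (hourly buckets, counting background sessions too, no CPU/IO/WAIT breakdown), ahead of the full script further down:

-- each DBA_HIST_ACTIVE_SESS_HISTORY row ~ 10 seconds of active session time,
-- so AAS for an hour = 10 * (samples in that hour) / 3600
select trunc(cast(sample_time as date),'HH24') tm,
       round(10*count(*)/3600,1)               aas
from   dba_hist_active_sess_history
where  dbid = &&DBID
group  by trunc(cast(sample_time as date),'HH24')
order  by 1
/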

TM                  AAS    CPU     IO   WAIT GRAPH
---------------- ------ ------ ------ ------ --------------------------------------------------------------------------------
03-FEB  04:00:00    3.6    1.8    1.8     .1 +++++++++ooooooooo                      8
03-FEB  05:00:00    4.1    1.9    2.1     .1 ++++++++++oooooooooo-                   8
03-FEB  06:00:00    5.1    2.8    1.9     .3 ++++++++++++++oooooooooo--              8
03-FEB  07:00:00    5.2    2.6    2.2     .3 +++++++++++++ooooooooooo--              8
03-FEB  08:00:00    5.9    3.0    2.6     .2 +++++++++++++++ooooooooooooo-           8
03-FEB  09:00:00    6.3    3.2    2.8     .3 ++++++++++++++++oooooooooooooo--        8
03-FEB  10:00:00    6.3    3.0    2.9     .3 +++++++++++++++ooooooooooooooo--        8
03-FEB  11:00:00    5.5    2.9    2.5     .2 ++++++++++++++oooooooooooo-             8
03-FEB  12:00:00    5.3    2.6    2.5     .2 +++++++++++++ooooooooooooo-             8
03-FEB  13:00:00    5.4    2.7    2.4     .3 +++++++++++++oooooooooooo-              8
03-FEB  14:00:00    5.5    2.8    2.4     .3 ++++++++++++++oooooooooooo-             8
03-FEB  15:00:00    6.1    2.8    3.1     .2 ++++++++++++++ooooooooooooooo-          8
03-FEB  16:00:00    6.3    3.1    3.0     .2 +++++++++++++++ooooooooooooooo-         8
03-FEB  17:00:00    5.6    2.7    2.6     .2 ++++++++++++++ooooooooooooo-            8
03-FEB  18:00:00    3.4    2.1    1.1     .1 +++++++++++oooooo-                      8
03-FEB  19:00:00    4.7    2.5    1.6     .6 +++++++++++++oooooooo---                8
03-FEB  20:00:00    4.7    2.8    1.8     .1 ++++++++++++++ooooooooo                 8
03-FEB  21:00:00    7.3    3.6    3.3     .4 ++++++++++++++++++ooooooooooooooooo--   8
03-FEB  22:00:00   19.2    6.4   11.5    1.2 ++++++++++++++++++++++++++++++++oooooooo8ooooooooooooooooooooooooooooooooooooooo
03-FEB  23:00:00   12.7    5.2    7.1     .5 ++++++++++++++++++++++++++oooooooooooooo8oooooooooooooooooooooo--
04-FEB  00:00:00   11.1    4.5    6.3     .3 ++++++++++++++++++++++oooooooooooooooooo8oooooooooooooo--
04-FEB  01:00:00   12.8    4.3    5.9    2.6 ++++++++++++++++++++++oooooooooooooooooo8oooooooooooo-------------
04-FEB  02:00:00    4.2    2.5    1.6     .1 ++++++++++++oooooooo                    8
04-FEB  03:00:00    2.1    1.3     .7     .1 +++++++ooo                              8
04-FEB  04:00:00    2.1    1.3     .9     .0 ++++++oooo                              8
04-FEB  05:00:00    2.3    1.2    1.0     .1 ++++++ooooo                             8
04-FEB  06:00:00    3.7    2.5    1.0     .2 +++++++++++++ooooo-                     8

Def v_days=10   -- amount of days to cover in report
Def v_secs=3600 -- size of bucket in seconds, ie one row represents avg over this interval
Def v_bars=5    -- size of one AAS in characters wide
Def v_graph=80  -- width of graph in characters

undef DBID
col graph format a&v_graph
col aas format 999.9
col total format 99999
col npts format 99999
col wait for 999.9
col cpu for 999.9
col io for 999.9

/* dba_hist_active_sess_history */

select
        to_char(to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS'),'DD-MON  HH24:MI:SS') tm,
        --samples npts,
        round(total/&v_secs,1) aas,
        round(cpu/&v_secs,1) cpu,
        round(io/&v_secs,1) io,
        round(waits/&v_secs,1) wait,
	-- substr, ie trunc, the whole graph to make sure it doesn't overflow
        substr(
	   -- substr, ie trunc, the graph below the # of CPU cores line
           -- draw the whole graph and trunc at # of cores line
	   substr(
	     rpad('+',round((cpu*&v_bars)/&v_secs),'+') ||
             rpad('o',round((io*&v_bars)/&v_secs),'o')  ||
             rpad('-',round((waits*&v_bars)/&v_secs),'-')  ||
             rpad(' ',p.value * &v_bars,' '),0,(p.value * &v_bars)) ||
        p.value  ||
	   -- draw the whole graph, then cut off the amount we drew before the # of cores
           substr(
	     rpad('+',round((cpu*&v_bars)/&v_secs),'+') ||
             rpad('o',round((io*&v_bars)/&v_secs),'o')  ||
             rpad('-',round((waits*&v_bars)/&v_secs),'-')  ||
             rpad(' ',p.value * &v_bars,' '),(p.value * &v_bars),( &v_graph-&v_bars*p.value) )
        ,0,&v_graph)
        graph
from (
   select
       to_char(sample_time,'YYMMDD')                   tday
     , trunc(to_char(sample_time,'SSSSS')/&v_secs) tmod
     , (max(sample_id) - min(sample_id) + 1 )      samples
     , sum(decode(session_state,'ON CPU',10,decode(session_type,'BACKGROUND',0,10)))  total
     , sum(decode(session_state,'ON CPU' ,10,0))    cpu
     , sum(decode(session_state,'WAITING',10,0)) -
       sum(decode(session_type,'BACKGROUND',decode(session_state,'WAITING',10,0)))    -
       sum(decode(event,'db file sequential read',10,
                        'db file scattered read',10,
                        'db file parallel read',10,
                        'direct path read',10,
                        'direct path read temp',10,
                        'direct path write',10,
                        'direct path write temp',10, 0)) waits
     , sum(decode(session_type,'FOREGROUND',
		 decode(event,'db file sequential read',10,
                                  'db file scattered read',10,
                                  'db file parallel read',10,
                                  'direct path read',10,
                                  'direct path read temp',10,
                                  'direct path write',10,
                                  'direct path write temp',10, 0))) IO
       /* for waits I want to subtract out the BACKGROUND
          but for CPU I want to count everyone */
   from
      dba_hist_active_sess_history
   where sample_time > sysdate - &v_days
   and dbid=&&DBID
   and sample_time < (select min(sample_time) from v$active_session_history)
   group by  trunc(to_char(sample_time,'SSSSS')/&v_secs),
             to_char(sample_time,'YYMMDD')
) ash,
  ( select  value from dba_hist_parameter where DBID=&&DBID and parameter_name='cpu_count' and rownum < 2 ) p
order by to_date(tday||' '||tmod*&v_secs,'YYMMDD SSSSS')
/
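As for the SYSDATE drawback mentioned at the top, one option would be to take the reporting window from the imported data itself rather than from the current clock. A small sketch (assuming the export also populated DBA_HIST_SNAPSHOT for the same &&DBID) that finds the end of the imported range, which could then stand in for SYSDATE in the sample_time predicate above:

-- latest snapshot time in the imported AWR data;
-- "sample_time > sysdate - &v_days" could become
-- "sample_time > (select cast(max(end_interval_time) as date) - &v_days
--                 from dba_hist_snapshot where dbid = &&DBID)"
select cast(max(end_interval_time) as date) - &v_days window_start
from   dba_hist_snapshot
where  dbid = &&DBID
/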

About Database Security...

I'll be doing a webcast on Thursday Feb. 24th at 9:30am (PST) regarding "How Secure is your Enterprise Data?". It'll be just about 30 minutes long.

I won't be the only speaker - I'm on second at 9:30am (PST). Anyone can "attend" (since it is virtual after all).
Hope to "see" you there...

Using a Windows Authenticated Proxy Server with EM11g

Over the years I have had a number of issues with proxy configuration in Enterprise Manager Grid Control. The product is definitely improving in this regard; however, EM 11g still does not correctly integrate with enterprise proxy servers that automatically authenticate users using NTLM – a fairly common configuration given the ubiquity of Active Directory in the marketplace [...]

Links

A quick and temporary note about a couple of links that I’ve just found:

Update

I’ve just popped this note to the top of the stack again to mention a useful article from Craig Shallahamer on “Important Statistical Distributions”. Every DBA and developer should read it – as an introduction to the patterns you find in real-world data it’s short and sweet; it’s also very important. I often see test data which is “the wrong kind of random”: nearly everyone thinks a uniform distribution is okay for testing their systems when lots of data sets exhibit “normal”, “log normal” or “Poisson” distributions – and it makes a huge difference to the validity of performance tests if you get it wrong.
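For anyone who wants to generate something better than uniform test data, here is a hedged sketch (table names invented for illustration) using DBMS_RANDOM to produce a uniform column and a roughly log-normal one, so you can compare the shapes side by side:

-- uniform: every value between 1 and 999 is roughly equally likely
create table t_uniform as
select trunc(dbms_random.value(1,1000)) n
from   dual
connect by level <= 100000;

-- roughly log-normal: exponentiate a standard normal variate,
-- giving a handful of popular values and a long tail
create table t_lognormal as
select trunc(exp(dbms_random.normal + 5)) n
from   dual
connect by level <= 100000;

-- compare the shapes
select n, count(*) cnt from t_uniform   group by n order by cnt desc;
select n, count(*) cnt from t_lognormal group by n order by cnt desc;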

Update 2

Richard Feynman is one of my “heroes” – so I had to pop this post to the top of the stack when I found this online archive video of him talking about Quantum Electrodynamics for the Douglas Robb Memorial Lectures at the University of Auckland some years ago.

Most Expensive SQL Statement Ever

You know the cost calculation that the cost based optimizer (CBO) uses to determine which execution plan to choose for a SQL statement, right? If you don’t, you should immediately stop reading this and pick up a good novel instead. Ah, you’re still here? Well I got an interesting email today from one of my co-workers saying he had to kill a query yesterday. Actually that’s a big part of his current job. Killing runaway queries – apparently that job takes most of his time between 8 and 5. Anyway, he sent me this execution plan today, no comments, “just have a look at this”, he said.

---------------------------------------------------------------------------------------------------
| Id  | Operation              | Name             | Rows  | Bytes|TempSpc| Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |                  |       |       |      |    65P(100)|          |
|   1 |  SORT ORDER BY         |                  |    18E|    15E|   15E|    65P (78)|999:59:59 |
|   2 |   COUNT                |                  |       |       |      |            |          |
|*  3 |    FILTER              |                  |       |       |      |            |          |
|   4 |     NESTED LOOPS       |                  |    18E|    15E|      |    14P  (3)|999:59:59 |
|   5 |      NESTED LOOPS      |                  |   984G|   216T|      |    14G  (3)|999:59:59 |
|   6 |       TABLE ACCESS FULL| CAT_6000_6001TBL |  7270K|  1074M|      |   176K  (3)| 00:15:46 |
|   7 |       TABLE ACCESS FULL| CAT_6000TBL      |   135K|    11M|      |  1950   (3)| 00:00:11 |
|   8 |      INDEX FULL SCAN   | PK_OBJECTS       |    32M|   306M|      | 15207   (3)| 00:01:22 |
---------------------------------------------------------------------------------------------------

So I had a look. Yes – that’s a 65P in the cost column. I’ve seen worse (but not in a production system). Cost is not always a good indication of run time, by the way. It’s just a sort of normalized estimate, after all. But the estimates for the number of rows and bytes (18E and 15E) are very impressive as well. This query ran for several hours before my buddy finally killed it. As you might expect, the query was missing a join condition between a couple of large tables (7M and 32M rows).

Here’s a test I worked up to see how big a number I could get.

SYS@LAB1024> !cat dplan.sql
set lines 150
select * from table(dbms_xplan.display_cursor('&sql_id','&child_no','typical'))
/
 
SYS@LAB1024> @dplan
Enter value for sql_id: gf5nnx0pyfqq2
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  gf5nnx0pyfqq2, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b group by a.col2
 
Plan hash value: 321450672
 
-----------------------------------------------------------------------------------
| Id  | Operation               | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |         |       |       |   689G(100)|          |
|   1 |  HASH GROUP BY          |         |     1 |    16 |   689G (84)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   3 |    TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   4 |    BUFFER SORT          |         |    32M|       |   689G (84)|999:59:59 |
|   5 |     INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
-----------------------------------------------------------------------------------
 
 
17 rows selected.
 
SYS@LAB1024> @dplan
Enter value for sql_id: 12p7fuydx3dd5
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  12p7fuydx3dd5, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b, kso.skew c group by
a.col2
 
Plan hash value: 175710540
 
------------------------------------------------------------------------------------
| Id  | Operation                | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |         |       |       |    18E(100)|          |
|   1 |  HASH GROUP BY           |         |     1 |    16 |    18E (81)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN   |         |    18E|    15E|  4670P (22)|999:59:59 |
|   3 |    MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   4 |     TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   5 |     BUFFER SORT          |         |    32M|       |   145G (22)|999:59:59 |
|   6 |      INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|   7 |    BUFFER SORT           |         |    32M|       |    18E (81)|999:59:59 |
|   8 |     INDEX FAST FULL SCAN | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
------------------------------------------------------------------------------------
 
 
21 rows selected.
 
SYS@LAB1024> @dplan
Enter value for sql_id: 7b53dxh6w6mpj
Enter value for child_no: 
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7b53dxh6w6mpj, child number 0
-------------------------------------
select a.col2, sum(a.col1) from kso.skew a, kso.skew b, kso.skew c, kso.skew
d group by a.col2
 
Plan hash value: 3965951819
 
-------------------------------------------------------------------------------------
| Id  | Operation                 | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |         |       |       |    18E(100)|          |
|   1 |  HASH GROUP BY            |         |     1 |    16 |    18E  (0)|999:59:59 |
|   2 |   MERGE JOIN CARTESIAN    |         |    18E|    15E|    18E  (0)|999:59:59 |
|   3 |    MERGE JOIN CARTESIAN   |         |    18E|    15E|  4670P (22)|999:59:59 |
|   4 |     MERGE JOIN CARTESIAN  |         |  1024T|    14P|   145G (22)|999:59:59 |
|   5 |      TABLE ACCESS FULL    | SKEW    |    32M|   488M| 10032  (18)| 00:01:21 |
|   6 |      BUFFER SORT          |         |    32M|       |   145G (22)|999:59:59 |
|   7 |       INDEX FAST FULL SCAN| SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|   8 |     BUFFER SORT           |         |    32M|       |  4670P (22)|999:59:59 |
|   9 |      INDEX FAST FULL SCAN | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
|  10 |    BUFFER SORT            |         |    32M|       |    18E  (0)|999:59:59 |
|  11 |     INDEX FAST FULL SCAN  | SKEW_PK |    32M|       |  4558  (22)| 00:00:37 |
-------------------------------------------------------------------------------------
 
 
24 rows selected.

So it looks like the cost tops out at 18E, as does the estimated number of rows – 18E is suspiciously close to 2^64 (about 1.8 × 10^19), so presumably the optimizer is simply hitting the largest number it can represent. Oddly, the number of bytes appears to top out at 15E. So the production query had maxed out the rows and bytes estimates, although the cost was significantly under the max. Still, 65P is the biggest cost I’ve seen in a production system. Anyone seen a bigger one?
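For anyone who wants to go hunting for a bigger one, a quick sketch against the shared pool (V$SQL_PLAN only covers statements currently cached; DBA_HIST_SQL_PLAN could be substituted for AWR history) might look like this:

-- top 10 cached plans by optimizer cost
select * from (
  select sql_id, plan_hash_value, max(cost) max_cost
  from   v$sql_plan
  where  cost is not null
  group  by sql_id, plan_hash_value
  order  by max(cost) desc
)
where rownum <= 10
/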

P.S. I have two categories for SQL related posts. “Developer Tricks” and “Wall of Shame”. This one gets both tags.

Hereafter…

I went to see Hereafter this evening. On the surface the subject matter sounded pretty grim, what with it being about death and the afterlife, but Clint Eastwood directed it, so I thought I’d give it a go.

The overriding thing I can say about the film is that it was very gentle. If this were standard Hollywood crap there would have been X-Files type revelations or flashy ghostly effects. As it was, there was none of that. It was just the characters and how death and the afterlife had affected their lives. Very understated, told at a gentle pace and in no way challenging. I want to say words like “nice” and “pleasant”, and although they sound a bit insulting, I don’t mean them to be.

If you like spooky psychic stuff, this is definitely *not* the film for you.

Matt Damon and Bryce Dallas Howard worked really well together. She had quite a small part in the film, but her character was definitely the highlight for me. The other actors were a bit out of their league in comparison, except for the cameo of Derek Jacobi playing himself. If he had got that wrong it would have been a bit sad. :)

Cheers

Tim…