Oakies Blog Aggregator

DBMS_PARALLEL_EXECUTE

Here is a recent thread from the OTN forums. The question raised is (WTF?) whether a hint in the examples section of the DBMS_PARALLEL_EXECUTE documentation has to be used or not, and why it is there. What do you think about it? Have a look at the documentation and mark your choice in a poll: PS. In [...]
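For context, here is a minimal sketch of the documented DBMS_PARALLEL_EXECUTE pattern the thread refers to (the task and table names are hypothetical; the question is about the hint inside the UPDATE statement):

begin
  dbms_parallel_execute.create_task('demo_task');
  dbms_parallel_execute.create_chunks_by_rowid('demo_task', user, 'EMPLOYEES', true, 100);
  dbms_parallel_execute.run_task(
    task_name      => 'demo_task',
    sql_stmt       => 'update /*+ rowid(e) */ employees e
                          set e.salary = e.salary * 1.1
                        where rowid between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 4);
  dbms_parallel_execute.drop_task('demo_task');
end;
/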

Sign up for AMIS Query: “An Evening with Doug Burns”

Doug Burns. On Thursday 17 June, from 18:00 onwards, Oracle Database Expert and Oracle ACE Director Doug Burns (Scotland) will fill an evening of knowledge sharing with live demos based on the Oracle Enterprise Manager Diagnostics and Tuning Pack.

This special evening, entirely without slides, titled “How I Learned to Love Pictures – Oracle 10g/11g Performance Analysis Using OEM”, will demonstrate all the ins and outs of the Oracle Enterprise Manager Diagnostics and Tuning Pack, its pitfalls and its hidden gems, using Swingbench among other tools. For more information about Doug Burns or the content of this presentation, see the following blog posts and URLs:

If you would like to attend, you can sign up for this AMIS Query via the following URL:

8-)

Index Rebuilds

A question came up on OTN a little while ago about an oddity during an online index rebuild. The original question describes how a relatively small index (500MB) takes several minutes to build, with a few minutes where users who are trying to update the table seem to be waiting for locks on the table [...]
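For reference, the operation under discussion is the online rebuild command, which in its simplest form looks like this (the index name is hypothetical):

alter index t1_i1 rebuild online;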

Applying the rules ...

Developing software has many things in common with aircraft development, depending, of course, on how you look at it. In both cases, getting technical innovations to market as quickly as possible is key to success, and while our users may not fall from the sky if our software fails, there are many software products that have an enormous dollar and human cost when they don't work as they should. Even

Enabling and Disabling Database Options

One of those small items that is easily overlooked (at least I overlooked it...) and that, I think, arrived with Oracle Database version 11.2, at least on Windows/Linux...

The Oracle 11gR2 Database Installation Guide for Windows describes a tool, new at least to me, that enables or disables database features on Windows. The manual describes it as follows...

When you install Oracle Database, certain options are enabled and others are disabled. If you must enable or disable a particular database feature for an Oracle home, then shut down the database and use the chopt tool. See Example 5-1.

The chopt tool is a command-line utility that is located in the ORACLE_HOME\bin directory. The syntax for chopt is as follows:

chopt [ enable | disable] db_option

The possible values for db_option are described in the following table.



Value          Description
dm             Oracle Data Mining RDBMS Files
dv             Oracle Database Vault
lbac           Oracle Label Security
olap           Oracle OLAP
partitioning   Oracle Partitioning
rat            Oracle Real Application Testing
ode_net_2      Oracle Database Extensions for .NET 2.0

Example 5-1 Complete Example of Running the Chopt Tool

To enable the Oracle Label Security option in your Oracle binary files:

  1. Shut down the database with srvctl or SQL*Plus:
    srvctl stop database -d myDb
  2. Stop the database service, OracleServiceSID, using the Services program in Control Panel.
  3. Run the following commands:
    cd %ORACLE_HOME%/bin
    chopt enable lbac
  4. Start the database service, OracleServiceSID, using the Services program in Control Panel.
  5. Start up the database:
    srvctl start database -d myDb

On Linux, running the chopt command without arguments shows the following usage output:

[oracle@localhost ~]$ chopt
 
usage:
 
chopt [enable|disable] {option}
 
options:
                  dm = Oracle Data Mining RDBMS Files
                  dv = Oracle Database Vault option
                lbac = Oracle Label Security
                olap = Oracle OLAP
        partitioning = Oracle Partitioning
                 rat = Oracle Real Application Testing
 
e.g. chopt enable rat
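To see which options the binary currently reports as enabled, a quick check against V$OPTION works (a minimal sketch; the option name below is just an example):

select parameter, value
  from v$option
 where parameter = 'Oracle Label Security';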

Useful information / good to know... it's that easy in 11.2 to enable or disable database options.

HTH

Kelly Johnson's 14 Rules of Management

Kelly Johnson's 14 Rules of Management, in their original form (highlighting added by me):

1. The Skunk Works manager must be delegated practically complete control of his program in all aspects. He should report to a division president or higher.
2. Strong but small project offices must be provided both by the military and industry.
3

Identifying Execution Plan Problems

(original at http://sites.google.com/site/embtdbo/tuner/oracle-tcf-query-and-lios-per-row)
This is based on the idea by Wolfgang Breitling that we can identify cost and cardinality errors made by Oracle's optimizer by looking at the discrepancies between the ESTIMATED rows and the ACTUAL rows in each line of the execution plan.
I also added a calculation to see how many LIOs we are doing per row returned.
In order to get much out of this script you have to have run the query with the hint

/*+ gather_plan_statistics */
or
set statistics_level=all
or
sql_trace=true (the only way on 9i, other than an underscore parameter)
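For example (a minimal sketch; the query and table name are hypothetical), with the hint in place you can eyeball estimated versus actual rows straight from DBMS_XPLAN before turning to the script below:

select /*+ gather_plan_statistics */ count(*) from my_table;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));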


col cn format 99
col ratio format 99
col ratio1 format A6
--set pagesize 1000
set linesize 140
break on sql_id on cn
col lio_rw format 999
col "operation" format a60
col a_rows for 999,999,999
col e_rows for 999,999,999
col elapsed for 999,999,999

Def v_sql_id=&SQL_ID

select
-- sql_id,
--hv,
childn cn,
--ptime, stime,
case when stime - nvl(ptime ,0) > 0 then
stime - nvl(ptime ,0)
else 0 end as elapsed,
nvl(trunc((lio-nvl(plio,0))/nullif(a_rows,0)),0) lio_rw,
--id,
--parent_id,
--starts,
--nvl(ratio,0) ratio,
' '||case when ratio > 0 then
rpad('-',ratio,'-')
else
rpad('+',ratio*-1 ,'+')
end as ratio1,
starts*cardinality e_rows,
a_rows,
--nvl(lio,0) lio, nvl(plio,0) parent_lio,
"operation"
from (
SELECT
stats.LAST_ELAPSED_TIME stime,
p.elapsed ptime,
stats.sql_id sql_id
, stats.HASH_VALUE hv
, stats.CHILD_NUMBER childn
, to_char(stats.id,'990')
||decode(stats.access_predicates,null,null,'A')
||decode(stats.filter_predicates,null,null,'F') id
, stats.parent_id
, stats.CARDINALITY cardinality
, LPAD(' ',depth)||stats.OPERATION||' '||
stats.OPTIONS||' '||
stats.OBJECT_NAME||
DECODE(stats.PARTITION_START,NULL,' ',':')||
TRANSLATE(stats.PARTITION_START,'(NRUMBE','(NR')||
DECODE(stats.PARTITION_STOP,NULL,' ','-')||
TRANSLATE(stats.PARTITION_STOP,'(NRUMBE','(NR') "operation",
stats.last_starts starts,
stats.last_output_rows a_rows,
(stats.last_cu_buffer_gets+stats.last_cr_buffer_gets) lio,
p.lio plio,
trunc(log(10,nullif
(stats.last_starts*stats.cardinality/
nullif(stats.last_output_rows,0),0))) ratio
FROM
v$sql_plan_statistics_all stats
, (select sum(last_cu_buffer_gets + last_cr_buffer_gets) lio,
sum(LAST_ELAPSED_TIME) elapsed,
child_number,
parent_id,
sql_id
from v$sql_plan_statistics_all
group by child_number,sql_id, parent_id) p
WHERE
stats.sql_id='&v_sql_id' and
p.sql_id(+) = stats.sql_id and
p.child_number(+) = stats.child_number and
p.parent_id(+)=stats.id
)
order by sql_id, childn , id
/
clear breaks

The output looks like this:


Enter value for sql_id: g2w9n4gksyys6
old 59: stats.sql_id='&v_sql_id' and
new 59: stats.sql_id='g2w9n4gksyys6' and

ELAPSED LIO_RW RATIO1 E_ROWS A_ROWS operation
---------- ------ ------ ------- ------ ---------------------------------
0 0 1 SELECT STATEMENT
5,720,456 0 1 1 HASH GROUP BY
29,711 0 1,909 NESTED LOOPS
0 0 +++ 1 1,909 NESTED LOOPS
1,969,304 0 +++ 1 1,909 NESTED LOOPS
0 0 +++ 1 2,027 NESTED LOOPS
7,939,649 0 +++ 1 1,656 NESTED LOOPS
716,054 0 +++ 1 1,657 NESTED LOOPS
270,201 0 ++ 39 23,171 HASH JOIN
23 0 5 1 JOIN FILTER CREATE :BF00
31 1 5 1 TABLE ACCESS BY INDEX R
14 2 5 1 INDEX RANGE SCAN PS0PA
141,467 0 18,503 23,171 VIEW VW_SQ_1
3,032,120 0 18,503 23,171 HASH GROUP BY
152,564 0 163,420 33,020 JOIN FILTER USE :BF000
407,746 0 163,420 33,020 MERGE JOIN
55 0 5 1 SORT JOIN
12 2 5 1 INDEX RANGE SCAN PS
79,435 0 40,000 33,020 SORT JOIN
119,852 0 40,000 40,000 INDEX FAST FULL SCA
2,959,031 13 - 23,171 1,657 TABLE ACCESS BY INDEX ROW
944,887 1 23,171 23,174 INDEX RANGE SCAN WB_JOB
102,650 0 1,657 1,656 VIEW PUSHED PREDICATE VW_
73,769 0 1,657 1,657 SORT AGGREGATE
25,617 0 1,657 1,657 FIRST ROW
225,497 1 1,657 1,657 INDEX RANGE SCAN (MIN/M
357,872 0 3,312 2,027 TABLE ACCESS BY INDEX ROWID
3,655,774 1 3,312 2,027 INDEX RANGE SCAN WB_RETROP
199,884 0 2,027 1,909 TABLE ACCESS BY INDEX ROWID
317,793 1 2,027 1,909 INDEX RANGE SCAN PS_RETROPA
71,534 0 1,909 1,909 INDEX RANGE SCAN PS#RETROPAYP
18,396 0 1,909 1,909 TABLE ACCESS BY INDEX ROWID PS

I can see that there are 5 lines where the optimizer expected only 1 row and the actual results were over 1,000 rows, i.e. 3 orders of magnitude difference.
These are the five lines marked with
+++
There is one line marked with
-
where the actual row count was an order of magnitude smaller than estimated. On that same line we see that it is one of the slower lines, almost 3 seconds, and that there were 13 LIOs per row returned, which is a sign of inefficiency.
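For reference, the +/- markers come from the ratio column in the script above; a worked example for one of the +++ lines (estimated 1 row, actual 1,909 rows):

-- e_rows = starts * cardinality = 1, a_rows = 1909
-- ratio  = trunc(log(10, 1/1909)) = trunc(-3.28) = -3, so rpad('+', 3, '+') gives '+++'
select trunc(log(10, 1/1909)) as ratio, rpad('+', 3, '+') as marker from dual;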

Oracle Video Demos

Just found this one

video demos of Oracle features - pretty cool. For example, one on managed baseline metrics in OEM:
On the subject of OEM 10g and adaptive thresholds, Doug Burns' blog has by far the best explanation of setting up and using them, see here:
I never worked on the specifics of adaptive baselines, that was JB's specialty, but I did work on some UI mockups:
Here's what it actually looks like in OEM 10g:

SQLPLUS -prelim

There is a way to connect to Oracle even when sessions are maxed out or there is a hang, etc.:
I knew about this a while back, but it's just one of those things that's important to remember when it's needed! Thus, so I can always know where to find it, I'm reposting:

http://oraclue.com/2008/09/25/sqlplus-preliminary-connection/


1) sqlplus -prelim / as sysdba

2) sqlplus /nolog
set _prelim on
conn / as sysdba
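A preliminary connection is typically followed by oradebug to pull diagnostics from the hung instance; a minimal sketch (the dump levels shown are just examples):

oradebug setmypid
oradebug hanganalyze 3
oradebug dump systemstate 266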

see more from Tanel Poder

Seeing Exadata in action



Last week I was able to attend a 3-day Exadata workshop conducted by performance engineers from Singapore. It was supposed to be a 5-day workshop with hands-on labs, but everything was squeezed (unfortunately no hands-on) and just the critical sections were tackled. The workshop highlight was the demos, which were as follows:

– loading a large amount of data from an external table
– cell offloading on a typical data warehouse query
– creation of a 400GB tablespace
– linear scalability of storage cells

Although all of these demos were done on Exadata Version 1, and just 13 of the storage servers were running, I can say that the performance is still stunning!!!
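As an aside on the cell offloading demo: on 11.2 you can get a rough feel for how much of a statement's I/O was eligible for offload from V$SQL (a minimal sketch using the 11.2 offload columns):

select sql_id,
       io_cell_offload_eligible_bytes,
       io_interconnect_bytes
  from v$sql
 where io_cell_offload_eligible_bytes > 0;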

But the real highlight for me is having my scripts run on the database machine.