Oakies Blog Aggregator

New Oracle Security book - Oracle Incident Response and Forensics

I have been quiet on here for a while due to a large workload and also, in the last weeks, writing a new book - "Oracle Incident Response and Forensics" - to be published by Apress. The book is complete as....[Read More]

Posted by Pete On 03/10/17 At 08:52 AM

Direct path insert and IOTs

(Please tell me that I’m not the only one who thinks "Index Organized Table" instead of "Internet Of Things" when hearing IOT…)

This post is inspired by Connor McDonald and his blog post from a year ago about direct mode operations and IOTs.
You can read it here: https://connor-mcdonald.com/2016/07/04/direct-mode-operations-on-iots/amp/

While writing a redo parser for V00D00 I had to investigate this subject very closely from a redo log perspective. And this will be the subject of my 10-minute lightning talk at Oak Table World 2017 at Oracle Open World!

Before we start – in the last blog post (rollback internals) I introduced a new tool for parsing redo log dumps. It was called logfile_dump_parser but since that was a boring name, I decided to call it apud (archivelog parser using dumpfile). If you want to know why it is a less boring name, reverse the name apud and find the meaning in a Polish-English dictionary.

markhot

How can a single piece of SQL text – checked very carefully – end up with multiple SQL_IDs? There are probably quite a lot of people who know the answer to this question but won’t think of it until they’re reminded; thanks to a question that came up on the forum formerly known as OTN a couple of days ago, I was reminded about it and rediscovered an article I had drafted on the topic a few years ago.

The specific problem on the forum was about having a huge number of child cursors for a single parent thanks to a frequently executed update statement that updated all 50 columns of a table, using bind variables to do so. The reason why a single statement could produce so many child cursors seemed to be the variation in the lengths of the bind variables supplied – which could well be the consequence of an internal library mechanism rather than an explicit design mechanism written into the client code. One of the comments to my 2007 article suggested event 10503 as a damage limitation mechanism, but there was some problem with it not working as expected at the time. A quick check on MoS now reports bug 10274265: “EVENT 10503 NOT WORKING ON THE SESSION LEVEL” as fixed in 12.1, with lots of backport patches to various versions of 11.2.

Moving on from the back-story, the case that prompted my draft note in 2014 was the simpler one of having lots of sessions constantly executing exactly the same SQL statement and child cursor, so rather than having a latch/mutex type of problem because of a large number of child cursors I was seeing a problem purely because of the level of concurrent access to the same child cursor.  The solution in the version of Oracle the client was using at the time was to tell Oracle to mark the SQL statement as “hot” by setting a hidden parameter; but the mechanism is now officially exposed in a procedure called dbms_shared_pool.markhot() that I learned about a few months ago when I was at a client who had a similar problem with highly concurrent execution of a small set of statements – with the extra twist that the table referenced in the critical statements was a partitioned table which suffered a fairly regular partition exchange.

When a statement (through its “full_hash_value”) is marked as hot, an extra value, visible as the property column in v$db_object_cache, is set to a value that seems to be dependent on the process id of the session attempting to execute the statement, and this value is used as an extra component in calculating a new full_hash_value (which leads to a new hash_value and sql_id). With a different full_hash_value the same text generates a new parent cursor which is (probably) going to be associated with a new library cache hash bucket and latch. The property value for the original parent cursor is set to “HOT”, and the extra copies become “HOTCOPY1”, “HOTCOPY2” and so on. Interestingly, once an object is marked as HOT and the first HOTCOPYn has appeared, the original version may disappear from v$sql while still existing in v$db_object_cache.

The number of “HOTCOPYn” versions of the statement is limited by the hidden parameter “_kgl_hot_object_copies” which (according to my notes) defaults to either cpu_count or cpu_count/2. On my most recent test on 11.2.0.4 it seemed to be the latter.
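If you want to check the value on your own system, something like the following should work (a sketch only – the x$ structures can only be queried when connected as SYS):

select
        i.ksppinm       parameter_name,
        v.ksppstvl      parameter_value
from
        x$ksppi  i,
        x$ksppsv v
where
        i.indx = v.indx
and     i.ksppinm = '_kgl_hot_object_copies'
;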

Marking a cursor hot

There are three options:

  • set a hidden parameter in the startup file
  • execute an “alter system” command to set the hidden parameter
  • from 11.2.0.3 onwards (possibly earlier in 11.2) call dbms_shared_pool.markhot()

Examples:

Startup file:

_kgl_debug="hash='cc7d5ecdcc9e7c0767456468efe922ea' namespace=0 debug=33554432"

Alter system call with multiple targets:

alter system set "_kgl_debug" =
        "hash='cc7d5ecdcc9e7c0767456468efe922ea' namespace=0 debug=33554432",
        "hash='59a1f6575a5006600792ee802558305b' namespace=0 debug=33554432"
;

markhot() procedure:

begin
        dbms_shared_pool.markhot(
                hash            =>'71fc6ccf9a3265368492ec9fc05b785b',
                namespace       =>0,
                global          =>true
        );
end;
/

The namespace identifies the object as a SQL Cursor (you can mark other types of object as hot if you need to), and for those of a mathematical bent you’ll work out that the debug value is power(2,25).
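A quick check if you don’t want to do the arithmetic yourself:

select power(2,25) from dual;

which returns 33554432, the debug value used in the examples above.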

The value supplied as the hash is the full_hash_value and you can find this in v$db_object_cache either by searching on some string that easily identifies your statement, or by searching v$sql on a string to get the (short) hash value of the statement and using that to search v$db_object_cache on the hash_value column.


select
        hash_value,
        full_hash_value,
        namespace,
        child_latch,
        property        hot_flag,
        executions,
        invalidations
from
        v$db_object_cache
where
        name like '{some part of your critical SQL statement}'
;
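Alternatively, here is a sketch of the second route – find the short hash value from v$sql first, then use it against v$db_object_cache:

select
        hash_value,
        sql_id
from
        v$sql
where
        sql_text like '{some part of your critical SQL statement}'
;

select
        full_hash_value,
        property        hot_flag
from
        v$db_object_cache
where
        hash_value = {hash_value returned by the query above}
;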

I ran into two problems using the markhot() approach. The first is not terribly serious – the second is fatal, except that I’m not going to do it again, and I wouldn’t have run into it if I hadn’t been impatient working around the first.

First: if you’ve already got lots of sessions executing the statement and holding cursors open in some way before you call markhot(), then it may be some time before all those sessions release the hot parent and child and acquire a “cool” parent and child. Unfortunately you can’t call markhot() until at least one session has opened the relevant cursor – a problem that isn’t relevant if you’ve got the hidden parameter set.

Secondly: although eventually your hot cursor(s) will drop out of use, if you try to get rid of them early by a cunning call to dbms_shared_pool.purge() you may find that you don’t manage to purge them; if you decide to try again, and again (as I did) you may find that your session goes into an infinite CPU spin and no-one can get at the hot cursor.  Be patient, once you’ve marked a cursor as hot your application will (probably) end up spreading itself over the copies.

One last detail – if, for any reason, you decide that a cursor no longer needs to be marked hot there is a procedure dbms_shared_pool.unmarkhot() that takes the same three parameters to clear the property and allow the copies to disappear.
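For example, mirroring the markhot() call shown earlier:

begin
        dbms_shared_pool.unmarkhot(
                hash            =>'71fc6ccf9a3265368492ec9fc05b785b',
                namespace       =>0,
                global          =>true
        );
end;
/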

Footnote

The OTN problem that prompted me to write this note wasn’t about high concurrency levels, it was about mutex contention while searching for the right child cursor. The markhot() procedure doesn’t really look as if it’s designed to address this issue but, as a side-effect of having multiple parent cursors for the same statement text, there should be fewer sessions searching each child-cursor chain at any one moment and this may be enough to reduce the contention. Statistically, of course, every child chain is likely to end up the same length so the amount of shared pool memory used by the SQL statement will eventually grow by a factor matching the number of hot copies produced – but if the problem is contention it may be better (e.g.) to have 16 times the memory used so that 100 concurrent sessions can be spread across 16 different chains rather than having 100 sessions all trying to search the same chain at the same time.

 

SystemTap and Oracle RDBMS: I/O and Network Traffic

Now that I am able to aggregate SystemTap probes by Oracle database, let’s focus on I/O and network traffic.

For this purpose a new SystemTap script (traffic_per_db.stp) has been created and added into this github repository.

traffic_per_db

This script tracks the I/O and network traffic per database, and also groups the traffic that is not related to any database.

Usage:

$> stap -g ./traffic_per_db.stp <uid> <refresh time in ms> <io|network|...>

Output example (I/O only):

$> stap -g ./traffic_per_db.stp 54321 5000 io

-------------------------------------------------------------------------------------------------------------
|                                                          I/O                                              |
-------------------------------------------------------------------------------------------------------------
                                  READS                     |                     WRITES                    |
                                                            |                                               |
                       VFS                    BLOCK         |          VFS                    BLOCK         |
            | NB                 KB | NB                 KB | NB                 KB | NB                 KB |
            | --                 -- | --                 -- | --                 -- | --                 -- |
BDTS        | 6203            49280 | 0                   0 | 11                 64 | 12                128 |
NOT_A_DB    | 45                  3 | 0                   0 | 15                  2 | 0                   0 |
-------------------------------------------------------------------------------------------------------------

For this example the database files are located on a file system. In this output, we can see that the read I/Os are served by the file system cache: there are VFS reads but no BLOCK reads.

Output example (Network only):

Example 1: The database files are located on Kernel NFS (kNFS)

$> stap -g ./traffic_per_db.stp 54321 5000 network
-------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                Network                                                                    |
-------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              RECV                                  |                                 SENT                                  |
                                                                                    |                                                                       |
                       TCP                     UDP                     NFS          |          TCP                     UDP                     NFS          |
            | NB                 KB | NB                 KB | NB                 KB | NB                 KB | NB                 KB | NB                 KB |
            | --                 -- | --                 -- | --                 -- | --                 -- | --                 -- | --                 -- |
NOT_A_DB    | 4                  32 | 0                   0 | 61                706 | 1943              252 | 0                   0 | 5                  80 |
BVDB        | 0                   0 | 0                   0 | 1623            16825 | 113                13 | 0                   0 | 170              1437 |
-------------------------------------------------------------------------------------------------------------------------------------------------------------

As expected, we can observe NFS traffic at the database level.

Example 2: The database files are located on Direct NFS (dNFS)

$> stap -g ./traffic_per_db.stp 54321 5000 network

-------------------------------------------------------------------------------------------------------------------------------------------------------------
|                                                                                Network                                                                    |
-------------------------------------------------------------------------------------------------------------------------------------------------------------
                                              RECV                                  |                                 SENT                                  |
                                                                                    |                                                                       |
                       TCP                     UDP                     NFS          |          TCP                     UDP                     NFS          |
            | NB                 KB | NB                 KB | NB                 KB | NB                 KB | NB                 KB | NB                 KB |
            | --                 -- | --                 -- | --                 -- | --                 -- | --                 -- | --                 -- |
BVDB        | 3810            18934 | 0                   0 | 0                   0 | 2059             1787 | 0                   0 | 0                   0 |
NOT_A_DB    | 3                  24 | 0                   0 | 0                   0 | 4                   2 | 0                   0 | 0                   0 |
-------------------------------------------------------------------------------------------------------------------------------------------------------------

As you can see, the NFS traffic has been replaced by TCP traffic at the database level.

Remarks

  • In this post the word database stands for “all the foreground and background processes linked to an Oracle database”.
  • In a consolidated environment, having a view per database can be very useful.
  • This script displays the traffic per database and also reports the traffic that is not related to any database (“NOT_A_DB”).
  • The probes documentation can be found here.

Conclusion

We are able to track the I/O and the Network traffic at the database level.

Oaktable World 2017 @ Oracle Open World

 

http://www.oaktable.net/blog/oak-table-world-2017-oracle-open-world

The Oak Table members will be discussing their latest technical obsessions and research on Monday and Tuesday (Oct. 2nd and 3rd, 2017).  The truth is, folks – the Oak Table experts are an AWESOME group (if I do say so myself! :)) and we could have easily done another day of incredible sessions, but alas, two days is all we have available for this year’s event.

 


“_suppress_identifiers_on_dupkey” – the SAP workaround for bad design

In SQL, ‘upsert’ is a conditional insert or update: if the row is there, you update it, but if it is not there, you insert it. In Oracle, you should use a MERGE statement for that. You are clearly doing it wrong if you code something like:

begin
insert...
exception
when dup_val_on_index then update...
end;


But it seems that there are many applications with this bad design, and Oracle has introduced an underscore parameter for them: “_suppress_identifiers_on_dupkey”. You won’t be surprised that this one is part of the long list of parameters required for SAP.

Let’s investigate this.

Insert – Exception – Update

So the idea is to try the insert first, rely on the unique constraint (primary key) to get an exception if the row exists, and in that case update the existing row. There are several flaws with that.

The first problem is that it is not as easy as it looks. If a concurrent session deletes the row between your insert and your update, then the update will fail. You have to manage this, and the failed insert cannot leave a lock on the row that was not inserted.
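Just to illustrate what “managing this” involves, here is a minimal sketch, with a hypothetical table T having a unique key K and a column VAL:

begin
  loop
    begin
      insert into t (k, val) values (:k, :val);
      exit;
    exception
      when dup_val_on_index then
        update t set val = :val where k = :k;
        -- if a concurrent delete removed the row between the insert
        -- and the update, rowcount is 0 and we have to retry the insert
        exit when sql%rowcount = 1;
    end;
  end loop;
end;
/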

The second problem is that the SQL engine is optimized for transactions which commit. When the ‘dup_val_on_index’ exception occurs, you have already inserted the table row, updated some indexes, etc., and all of that has to be rolled back when the exception occurs. This generates unnecessary contention on the index leaf block, and unnecessary redo.

Then the third problem, and probably the worst one, is that an exception is an error, and error management has a lot of work to do, such as looking into the dictionary for the violated constraint name in order to give you a nice error message.

I’ve created the following table:

create table demo as select * from dual;
create unique index demo on demo(dummy);

And I’ve run 10 million inserts on it, all with duplicates:

exec for i in 1..1e7 loop begin insert into demo values('x'); exception when others then null; end; end loop;

Here are some extracts from the AWR report, based on manual snapshots taken before and after.

Elapsed: 20.69 (mins)
DB Time: 20.69 (mins)

This has run for 20 minutes.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 33.34 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 92.31 % Non-Parse CPU: 94.90
Flash Cache Hit %: 0.00

The ‘Execute to Parse %’ shows that 2/3 of the statements are parsed each time.


SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 19-20
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 180,125,740
-> Captured SQL account for 127.7% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.80094E+08 1 1.800942E+08 100.0 1,239.8 99.5 .3 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
1.60094E+08 10,000,000 16.0 88.9 983.1 100.3 .4 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')
 
49,999,995 9,999,999 5.0 27.8 201.1 103.2 0 2skwhauh2cwky
PDB: PDB1
select o.name, u.name from obj$ o, user$ u where o.obj# = :1 and o.owner# = u.u
ser#
 
19,999,998 9,999,999 2.0 11.1 148.5 98.9 0 2jfqzrxhrm93b
PDB: PDB1
select /*+ rule */ c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# =
cd.con# and cd.enabled = :1 and c.owner# = u.user#

My failed inserts have read on average 16 blocks for each attempt. That’s too much for doing nothing. And in addition to that, I see two expensive statements parsed and executed each time: one to get the object name and one to get the constraint name.
This is how we can retrieve the error message, which is:

 
ORA-00001: unique constraint (SCOTT.DEMO) violated
 

This is a big waste of resources. I did this test in PL/SQL, but if you accumulate all the worst practices and run those inserts row by row, then you will see these colors:
[Screenshot: session activity chart colored by wait event]

The Orange is ‘Log File Sync’ because you generate more redo than necessary.
The Green is ‘CPU’ because you read more blocks than necessary.
The red is ‘SQL*Net break/reset to client’ when the server process sends the error.

_suppress_identifiers_on_dupkey

When you set “_suppress_identifiers_on_dupkey” to true, Oracle will not return the name of the constraint which is violated, but only the information which is already there in the session context.

Here is the message that you get:

 
ORA-00001: unique constraint (UNKNOWN.obj#=73375) violated
 

Where 73375 is the OBJECT_ID of the index on which the unique constraint has been violated.
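If you need the object name, you can still resolve it yourself from the OBJECT_ID when you hit the error, rather than paying that dictionary lookup on every failed insert:

select owner, object_name, object_type
from dba_objects
where object_id = 73375;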

You have less information, but it is faster:

Elapsed: 15.45 (mins)
DB Time: 15.48 (mins)

There is no Soft Parse overhead:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 96.43
Execute to Parse %: 99.98 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 90.38 % Non-Parse CPU: 99.95
Flash Cache Hit %: 0.00

Our statement is the only one using the CPU, and it reads fewer blocks:

SQL ordered by Gets DB/Inst: CDB1/CDB1 Snaps: 21-22
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> %Total - Buffer Gets as a percentage of Total Buffer Gets
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Total Buffer Gets: 110,132,467
-> Captured SQL account for 81.8% of Total
 
Buffer Gets Elapsed
Gets Executions per Exec %Total Time (s) %CPU %IO SQL Id
----------- ----------- ------------ ------ ---------- ----- ----- -------------
1.10091E+08 1 1.100906E+08 100.0 926.2 98.8 1 frvpzg5yubp29
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
BEGIN for i in 1..1e7 loop begin insert into demo values('x'); exception when ot
hers then null; end; end loop; END;
 
90,090,580 10,000,000 9.0 81.8 515.7 99.1 1.9 319ypa1z41aba
Module: java@VM104 (TNS V1-V3)
PDB: PDB1
INSERT INTO DEMO VALUES('x')

This parameter is a workaround for bad design, but not a solution.

Update – no rows – Insert

In order to avoid all this rollback and exception management overhead, there is another approach: start with the update and, when no row was found, insert it. This is easy with SQL%ROWCOUNT.

begin
  update ...;
  if SQL%ROWCOUNT = 0 then
    insert ...;
  end if;
end;

This is more efficient, but still subject to a concurrent session inserting the row between your update and your insert. But at least you manage the two scenarios with a condition on SQL%ROWCOUNT rather than with an exception, which is more scalable.

So what?

Always use the database in the expected way. Exceptions and Errors are not for the normal scenario of the use-case. Exceptions should be unusual. The solution is to use the MERGE statement which has been implemented exactly for this reason: do an upsert without the error management overhead and with the statement isolation level which prevents errors in a multi-user environment.
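As a sketch, with the same hypothetical table T(K, VAL) as in the earlier retry example, the upsert becomes a single statement:

merge into t
using (select :k k, :val val from dual) s
on (t.k = s.k)
when matched then
  update set t.val = s.val
when not matched then
  insert (k, val) values (s.k, s.val);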

 

The article “_suppress_identifiers_on_dupkey” – the SAP workaround for bad design appeared first on Blog dbi services.

My OpenWorld theme song

(With apologies to any Simon & Garfunkel fans out there)

Hello jetlag my old friend,
You’ve come to mess with me again.
The bedside clock is stuck on 10 to 4
And my eyes are just so bloody sore.

The decision to get on that frickin’ plane.
Was just insane.

Suffering …. my jetlag in silence

 

A few gin and tonics tonight and I might even put it to music. :)

The devastating death PC emoji of doom

So there I am.  Sitting at Perth airport.  My flight from Perth to Sydney, the first leg on my trip to OpenWorld 2017 is delayed.  Of course, delays are not unusual and do not normally bother me.  Because I can just flip open my laptop, knock off a few AskTOM questions while I wait.

But not this time.  I press the Power button and I get this:

[Image: Windows 10 sad-emoji blue screen, collecting a memory dump]

That is the first time I’ve ever seen that on my nice Dell XPS 13 laptop, but I figure “No big drama.  Probably just some sleep glitch” and let the machine reboot.

That is when the real fun started.  As it is restarting I get a similar screen (which I can’t screen dump…because the laptop isn’t booting!) saying the machine cannot boot.

Now I’m starting to be concerned.  Because I’m a few days out from 6 conference talks at the biggest Oracle conference of the year.  The laptop then goes into a “Cannot boot, so we’ll reboot” cycle which I don’t seem to be able to escape from, and then boarding call comes out over the public address system at the airport.

Aggghhhhh!!!

So I have to do a forced power off on the laptop, jump on the plane…and now sit there in a cold sweat for 5 hours while I fly to Sydney (because I’m now too scared to tackle this issue without a power supply).

I arrive in Sydney and renew the battle.  I power it up, and the laptop says “I cannot boot”, but this time says … “Would I like to repair the startup?”.  Why it has changed I have no idea, but I give it a shot.  After a lot of progress bars, and messages about how long this might take, the laptop finally displays the normal login screen.

Phew !!!!!!!!!

But all is not totally right.  If I sleep or shutdown the laptop, then on boot up, the first message I get is a sad emoji BSOD like the one above, after which it reboots again and then appears fine.

So … here I am, at the hotel with a working laptop… and zero confidence in it.  Grrrrrr.

Not the best start to the week.

When deterministic function is not

When you declare a function-based index, the function must be declared deterministic, which means that calling it with the same arguments will always return the same result. This is required because the indexed values, which are the result of the function, are stored in the index. But what happens if you declare a function deterministic when it is not?

I wanted to do this test after reading some documents about Postgres HOT and WARM.

They say that they cannot vacuum one page at a time because index entries must be cleaned, and there’s a risk when trying to find an index entry from the table in case a user indexed a function which is not actually deterministic. This could lead to logical corruption. So, it seems that Postgres will always navigate from the index to the table and not the opposite. And that is possible in Postgres because they don’t implement DELETE and UPDATE physically. They only do an INSERT with the new version of the whole row and mark the old version as stale.

But Oracle is far more complex than that. Critical OLTP applications must be able to update in-place, without row movement, or the index maintenance would kill the performance and the redo generation would be orders of magnitude larger. An update is done in-place and the updated column must maintain the related index. Deletes will also delete all the index entries. Then, Oracle needs to navigate from the table to the index. This is done with a lookup of the value in the index structure. The value is either a value stored in the table row, or derived with a deterministic function.

So what happens if I declare a function deterministic when it is not?

Here is a table:

SQL> create table DEMO (n not null) pctfree 99 as select rownum from xmltable('1 to 5');
 
Table created.

And here is a function which returns a random number. But I declare it deterministic:

SQL> create or replace function DEMO_FUNCTION(n number) return number deterministic as
2 begin
3 return dbms_random.value;
4 end;
5 /
 
Function created.

I declare an index on it:

SQL> create index DEMO_FUNCTION on DEMO(DEMO_FUNCTION(n));
 
Index created.

Oracle cannot verify if the function is deterministic or not, and trusts me.

A full table scan re-calculates the value each time, and does not raise any error.

SQL> select /*+ full(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.743393494 AAAR5kAAMAAABXbAAA 1
.075404174 AAAR5kAAMAAABXbAAB 2
.601606733 AAAR5kAAMAAABXbAAC 3
.716335239 AAAR5kAAMAAABXbAAD 4
.253810651 AAAR5kAAMAAABXbAAE 5

If you run it several times, you will see different values.

An index access will always show the same values, because they come from the index:

SQL> select /*+ index(DEMO) */ DEMO_FUNCTION(n),rowid,n from DEMO where DEMO_FUNCTION(n) is not null;
 
DEMO_FUNCTION(N) ROWID N
---------------- ------------------ ----------
.135108581 AAAR5kAAMAAABXbAAE 5
.440540027 AAAR5kAAMAAABXbAAD 4
.480565266 AAAR5kAAMAAABXbAAA 1
.546056579 AAAR5kAAMAAABXbAAB 2
.713949559 AAAR5kAAMAAABXbAAC 3

Oracle could have run the function on the value from the table, compared it with the value from the index, and raised an error. But that would be more expensive.

But then, what happens if I delete a row? Oracle will try to find the index entry by running the function, but then the value is not found in the index:

SQL> delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null;
delete from DEMO where n=3 and DEMO_FUNCTION(n) is not null
*
ERROR at line 1:
ORA-08102: index key not found, obj# 73317, file 12, block 5603 (2)

This is a logical corruption caused by the bug in the function which was declared deterministic but is not. Verifying the deterministic truth would require running the function several times, and even that would not detect values that change after days. It is the developer’s responsibility to tell the truth. This was just a test. If you are in this case, make the index unusable and fix the function before re-building it.
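For this test case, that would be:

SQL> alter index DEMO_FUNCTION unusable;
 
Index altered.

Then fix DEMO_FUNCTION so that it is really deterministic, and:

SQL> alter index DEMO_FUNCTION rebuild;
 
Index altered.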

 

The article When deterministic function is not appeared first on Blog dbi services.