Oakies Blog Aggregator

Why INSERT causes TX – row lock contention or TM – contention in #Oracle


Inserts may cause enq: TX – row lock contention and enq: TM – contention under special circumstances:

Bitmap Indexes

First session inserts. Second session inserts with the same values for the indexed column. The second session experiences the wait event enq: TX – row lock contention until the first session commits, then proceeds without an error message.

Unique Indexes

First session inserts. Second session inserts with the same values for the indexed column. The second session experiences the wait event enq: TX – row lock contention until the first session commits, then gets ORA-00001: unique constraint violated.

Parallel Inserts

First session inserts with parallel DML enabled. Whether the second session inserts serially or in parallel doesn’t matter: it experiences the wait event enq: TM – contention. This is also true for partitioned tables unless the partitions are explicitly referenced!

My test cases for the above scenarios were done with 12.2.0.1 and look as follows.

Bitmap Index scenario:

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.
SQL> create table t1 (n number);

Table created.

SQL> create bitmap index b on t1(n);

Index created.

SQL> insert into t1 values(1);

1 row created.

The second session inserts the same value and waits. You can then see the wait event in v$session.
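To confirm what the second session is waiting for, you can query v$session from a third session (a sketch, not part of the original test case):

-- Run from a third session while the second session hangs; expect
-- event = 'enq: TX - row lock contention' and a non-null blocking_session.
select sid, event, blocking_session, seconds_in_wait
from   v$session
where  event like 'enq: T%contention';
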
Unique Index

SQL> create table t2 (n number);

Table created.

SQL> alter table t2 add constraint u unique(n);

Table altered.

SQL> insert into t2 values (1);

1 row created.

Second session inserts the same and waits.
Parallel insert

SQL> create table t3 (n number);

Table created.

SQL> alter table t3 parallel;

Table altered.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert into t3 select 1 from dual;

1 row created.

Second session waits for any DML.
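To see the blocking TM enqueue, you can look at v$lock from another session (a sketch, not part of the original test case):

-- Sketch: show TM locks held and requested on table T3; the blocked
-- session appears with a non-zero REQUEST value.
select l.sid, l.type, l.lmode, l.request, o.object_name
from   v$lock l
join   dba_objects o on o.object_id = l.id1
where  l.type = 'TM'
and    o.object_name = 'T3';
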
Partitioned table parallel insert without partition clause

SQL> create table t4 (n number) partition by list (n) (partition p1 values(1),partition p2 values(2));

Table created.

SQL> alter table t4 parallel;

Table altered.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert into t4 select 1 from dual;

1 row created.

Second session waits for any DML.
Non-blocking parallel insert with explicit partition clause
Same table as above, same parallel degree and parallel DML enabled, but:

SQL> insert into t4 partition (p1) select 1 from dual;

This locks only partition p1. A second session can do any DML against p2, as long as it references the partition explicitly.
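For example, the second session can keep working as long as it addresses the other partition explicitly (a sketch based on the t4 table above):

-- Second session: no wait, because only partition p1 is locked by the parallel insert.
insert into t4 partition (p2) values (2);
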

I have the feeling that this is not widely known – could actually be a good interview question: When do inserts lead to locking waits? As always: Don’t believe it, test it!

#DOAG2017


The discussions about the technologies we love. With Bryn about my tests on the MLE and the fact that I compared very different things, running a recursive function on different datatypes (integer vs. number). With Mike about the way RUs will be recommended, and RURs only for very special cases. With Nigel about the ODC Database Ideas, with Stefan about what is documented or not, with… Discussions about community also, and user groups.

The trip, where meeting fellow speakers starts on the plane…

The dinners with ACEs, with Speakers, with friends…

The beers, thanks to Pieter & Philippe for sharing Belgian beers & cheese & mustard & celery salt & their good mood

The sessions of course. Kamil’s tool to show tablespace fragmentation visually, Jan’s comparison between Oracle and EDB, Philippe & Pieter’s technical view on GDPR, Adam’s research on NFS for his appliance,…

The party for sure,…

My session, and the very interesting questions I got… I was lucky to speak on the first day. And proud to speak on the Oak Table stream for the first time. I was happy to see many people already with a CDB and even in production. Adoption is slow, but people come to it and finally notice that it is not a big change for the daily job.

And colleagues of course. This is the conference where dbi services has a booth and several speakers. We are passionate and like to share. At the booth, we did some demos of Dbvisit Standby 8, Orachrome Lighty, and also the OpenDB Appliance. We meet customers, or candidates, talk about the technologies we love, and explain how we do our training workshops. It is also a great place to discuss among ourselves. Even if we have internal projects, and two ‘dbi xChange’ events every year, we are mainly at customers and have so much to share.

DOAG is an amazing conference. Intense time compressed into 3 days. This incredibly friendly ambiance is hard to quit at the end of the conference. Fortunately, persistence and durability are guaranteed thanks to Kamil’s snapshots:

When you see how Kamil highlights each personality with a simple camera, can you imagine what he can do when organizing a conference? Keep an eye on POUG website.

 

The article #DOAG2017 first appeared on Blog dbi services.

nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
The Advanced Compression licence enables compression features to be used on segments that are still updated by the application.  Without this licence, only basic table compression can be used, and Hybrid Columnar Compression (HCC) is only available on an engineered system.  Neither form of compression can be used in online transaction processing mode.  A table that is marked for compression, but that is populated in conventional path mode, will not be compressed.  Any compressed blocks will no longer be compressed after being updated.
However, in most systems, ledger rows cease to be updated within a month or two of the period closing.  The ledger tables will usually be partitioned by fiscal year and accounting period, so a month after the period has closed the corresponding partition will cease to be updated.  Therefore, it could then be compressed.
I usually compress historical partitions when I introduce partitioning.  The following example comes from an engineered system, so Hybrid Columnar Compression has been used.

  • There is one range partition each for the whole of fiscal years 2014 and 2015.  This data is historical and rarely queried, and then not by a single period.  A more aggressive compression QUERY HIGH has been used.  
  • Monthly partitioning is used for the previous fiscal year, 2016.  These partitions are compressed, but using QUERY LOW which should deliver better performance with lower CPU overhead than QUERY HIGH (although usually there isn't much in it).
  • Partition LEDGER_2017_05 is not compressed because it is current (at the time of building this script) and could still be updated by the application.
CREATE TABLE sysadm.ps_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015 VALUES LESS THAN (2016,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2015_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 COMPRESS FOR QUERY LOW
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)

,PARTITION ledger_2017_05 VALUES LESS THAN (2017,6)
(SUBPARTITION ledger_2017_05_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2017_05_z_others VALUES (DEFAULT)
)

)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
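
Once built, the compression attributes can be verified from the data dictionary (a sketch; as PS_LEDGER is composite partitioned, the settings are reported at subpartition level):

-- Sketch: check which subpartitions are compressed and how.
select partition_name, subpartition_name, compression, compress_for
from   dba_tab_subpartitions
where  table_name = 'PS_LEDGER'
order  by partition_name, subpartition_position;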

As periods close there are more partitions that can be compressed.  However, it may not be practical to take an outage to rebuild ledger partitions each month, and it certainly isn't practical to rebuild the entire ledger table every month.  Instead, from 12.1, partitions can be rebuilt in place in an online operation.  The compression attribute can only be specified on the partition, and then the sub-partitions can be rebuilt.  The data in the partition can still be read while it is being moved.

ALTER TABLE ps_ledger MODIFY PARTITION ledger_2017_05 PCTFREE 0 COMPRESS FOR QUERY LOW;
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_actuals ONLINE PARALLEL 32;

ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_z_others ONLINE PARALLEL 32;

NB: Moving a partition online can fail in database versions 12.1.0.1 and later due to bug 2070300.  This is documented in MOS Doc ID 2040742.1.

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY

The bug is fixed in 12.2 and can be addressed in 12.1 by patch 2070300.

Friday Philosophy – Doing DOAG (& a Little, Light, Hem-Touching)

This week I’ve been at the annual DOAG conference. DOAG is the German (Deutsch) Oracle User Group annual conference. It’s my second time there and I very much enjoyed it, meeting lots of people & seeing some great talks. I also got a request to do more Friday Philosophies, so…


DOAG is now the biggest Oracle User Group conference in Europe; it overtook the UKOUG conference a few years back. Do I see this as “competition”? Well, a little bit of me does because for several years I was involved in organising the UKOUG tech conference – and a part of me would like “my” conference to be the biggest. But that is just misplaced, juvenile pride – really there is no competition between us. DOAG caters to the German Oracle User community (and nearby countries), UKOUG to the British Isles and, to a certain extent, Ireland and the closer parts of mainland Europe. If there is any competition then it is for presenters. I know that sometimes presenters have had to pick between the UKOUG and DOAG as they can only manage so much time doing these things. But I also know many presenters who do both. Also, both conferences are lucky enough to receive many more very good presentation abstracts than they have presentation slots for. There will always be a great selection of presentations at both conferences.

There are some aspects of DOAG that I really do wish we could replicate for UKOUG. The first is the venue. Not only is the space they have at the Nuremberg conference centre so much larger and better suited than the ICC in Birmingham, but it costs them “a lot less”. It might be outside of town (and Nuremberg is a nice town) whereas the UKOUG conference is almost in the middle of Birmingham, but at DOAG you get free transport as part of the conference pass. The second is the catering. The food at DOAG is very, very good; coffee is available at all times; you can get real, decent coffee from some places (in the UK you need to go find a place that will sell you decent coffee); DOAG ends the conference with beers and light snacks – the UKOUG conference tends to fizzle out.

But for me, though it is a close-run thing, I do ever so slightly prefer Birmingham and the UKOUG conference. I find it a little more relaxed (certainly there are less suits in evidence) and, on a personal level, I know so many more people there. I like knowing where the pubs & restaurants are and which ones are terrible! And somewhat ironically, our German Christmas Market is not only in full swing during the conference, but it is bigger than Nuremberg’s. But how many wooden toys, Gluhwein and sausage do you need in your life?


I did have a somewhat bizarre time with my presentations at DOAG though. First, I had to cancel a presentation. I was preparing a new one on the philosophy & process of performance tuning but due to some back pain issues (or rather the impact this had on my sleep and the pain medication had on my brain) I was utterly failing to get it done. So with only a week to go I had to ask if they could replace me. I hated doing it so late, I know what it is like organising these conferences and losing talks when you have printed the agenda is a real pain. Plus you now need to find a replacement. But I also know they would not appreciate a poor talk, so I let them choose. They chose to drop the talk.

But I honoured my other two presenting slots. The first was at 11am on the first day and I experienced that thing that most presenters secretly like – it was so popular there was only standing room! As a result, the DOAG organisers asked if I would repeat it on the next day or the last day. Of course! However, as it worked out, they asked me to repeat it later that afternoon as one speaker was lost in transit. There was of course no time to really advertise the change. So I repeated the talk 4 hours later in the largest auditorium I have ever presented in – to 27 people. They of course were scattered around the room like lost souls. I guess it was using a room that would otherwise have been empty, and the session was recorded I think. But it did feel odd.

In between these two talks, I saw a couple of other people present. And in one talk, my phone kept buzzing. That was unusual, especially as it was a German number. I eventually exited (from the front row) and took the call. It was DOAG! They wanted to know why I was not at the interview I had agreed to do. “Because that is on Tuesday!”. Pause. The confused lady on the phone said “Yes. It IS Tuesday…” *sigh* – did I mention the pain meds and my brain? That was embarrassing. I had to go back into the room, to the front, get my stuff and wave an apology to Chris Saxon & Heli Helskyaho before scuttling off to this interview. Which I did very badly.

My final talk was interesting for other reasons. The talk was on calling PL/SQL from SQL and the impact it can have on performance and the point-in-time reliability of the results (if your called PL/SQL function itself runs SQL). I’ve discussed this topic with Bryn Llewellyn, the product manager (distinguished no less) of PL/SQL & EBR, in the past and I was able to catch up with him just before the talk. Then he came to my talk. I’m presenting in front of the Oracle employee who owns the tech I am talking about. No pressure. Then I look around the crowd and it is liberally scattered with other senior Oracle technical people, OakTable members, Oracle ACEs…

This is an unappreciated, small problem with becoming friends with these people. The bas…. good fellows and ladies come to your talk – and heckle.


Well, it keeps me honest and the heckling that did inevitably happen was all good-natured, and corrected a couple of slightly weak bits of my talk. So the crowd got a better talk than they otherwise would have.

And the Hem Touching? Well, go back a few years and we did not have the breadth and diversity of information the web now provides for us. In fact, we are talking back in the 1990s when there was nothing like Google and blogs and Oracle Base. What information was out there for Oracle was much more paper-based (you know, actual books & magazines!) or the odd word document that was emailed between people. One name I saw on such things quite often and who taught me an awful lot back then was Craig Shallahamer. Well, Craig was at DOAG; I’d seen him in the crowds once or twice. And after this talk he came up for a quick chat. I might have been presenting now for a good few years and met many of the best known people in our world of Oracle and I’m generally immune from the desire or need to go “Oh! You’re xxx! I’ve read all your papers!!!!”. But I did a little with Craig, as he was from my “Oracle childhood”. And he was very nice about it.

So all in all, an excellent few days. I’ll try and come again next year. Maybe if I finish that talk on the philosophy of performance tuning, they’ll let me do it?

How Not to Make WIT Result in WTH

The support of Women in Technology at Oracle is a fine line and a careful argument. I receive between 20 and 30 communications on the topic in any week, and I think the challenge, as I speak to people, is the importance of education over persecution. I’m at the DOAG (German Oracle User Group) conference this week and it’s common for someone to ask to speak to me on the topic of WIT. These valuable conversations have led to deep reflection on what it means to offer support and how we can improve diversity in the Oracle community… then again, it may just be the jet lag talking…

12c Multitenant Internals: compiling system package from PDB

When I explain the multitenant internals, I show that all metadata about system procedures and packages is stored only in CDB$ROOT and is accessed from the PDBs through metadata links. I take an example with DBMS_SYSTEM, which has nothing in SOURCE$ of the PDB. But I show that we can compile it from the PDB. This is my way to prove that the session can access the system objects, internally switching to the root container when it needs to read SOURCE$. At the DOAG Conference I had a very interesting question about what happens exactly in CDB$ROOT: is the session really executing all the DML on the internal tables storing the compiled code of the procedure?

My first answer was something like ‘why not’, because the session in a PDB can switch and make modifications in CDB$ROOT internally. For example, even a local PDB DBA can change some ‘spfile’ parameters which are actually stored in CDB$ROOT. But then I realized that the question goes further: is the PDB session really compiling the DBMS_SYSTEM package in CDB$ROOT? Actually, some DDL statements are transformed to ‘no-operation’ when executed in the PDB.

To see which ones are concerned, the best way is to trace:

SQL> alter session set events='10046 trace name context forever, level 4';
Session altered.
SQL> alter session set container=PDB1;
Session altered.
SQL> alter package dbms_system compile;
Package altered.
SQL> alter session set events='10046 trace name context off';
Session altered.

I’ll not show the whole trace here. For sure I can see that the session switches to CDB$ROOT to read the source code of the package:

*** 2017-11-22T08:36:01.963680+01:00 (CDB$ROOT(1))
=====================
PARSING IN CURSOR #140650193204552 len=54 dep=1 uid=0 oct=3 lid=0 tim=5178881528 hv=696375357 ad='7bafeab8' sqlid='9gq78x8ns3q1x'
select source from source$ where obj#=:1 order by line
END OF STMT
PARSE #140650193204552:c=0,e=290,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=5178881527
EXEC #140650295606992:c=1000,e=287,p=0,cr=0,cu=0,mis=0,r=0,dep=2,og=4,plh=813480514,tim=5178881999
FETCH #140650295606992:c=0,e=35,p=0,cr=4,cu=0,mis=0,r=1,dep=2,og=4,plh=813480514,tim=5178882057
CLOSE #140650295606992:c=0,e=12,dep=2,type=3,tim=5178882104

That was my point about metadata links. But now about modifications.

As I need to see only the statements, I can use TKPROF to get them aggregated, but then the container switch – like (CDB$ROOT(1)) here – is ignored.

Here is a small AWK script I use to add the Container ID to the SQL ID so that it is visible and detailed into TKPROF output:

awk '/^[*]{3}/{con=$3}/^PARSING IN/{sub(/sqlid=./,"&"con" ")}{print > "con_"FILENAME }'

Then I run TKPROF on the resulting file, with ‘sort=(execu)’ so that I have the modifications (insert/delete/update) first. The result starts with something like this:

SQL ID: (PDB1(3)) 1gfaj4z5hn1kf Plan Hash: 1110520934
 
delete from dependency$
where
d_obj#=:1

I know that dependencies are replicated into all containers (because table metadata is replicated into all containers), so I see the following tables modified in the PDB: DEPENDENCY$, ACCESS$, DIANA_VERSION$, and of course OBJ$.

But to answer the initial question, there are no modifications done in CDB$ROOT: only SELECT statements there, on SOURCE$, SETTINGS$, CODEAUTH$, WARNING_SETTINGS$…

So, probably, the updates have been transformed to no-op operations once the session is aware that the source is the same (same signature) and it just reads the compilation status.

Just as a comparison, tracing the same compilation when done in CDB$ROOT will show inserts/deletes/updates on ARGUMENT$, PROCEDUREINFO$, SETTINGS$, PROCEDUREPLSQL$, IDL_UB1$, IDL_SB4$, IDL_UB2$, IDL_CHAR$, … all those tables storing the compiled code.

So basically, when running DDL on metadata links in a PDB, not all the work is done in the CDB, especially not writing again what is already there (because you always upgrade the CDB$ROOT first). However, up to 12.2 we don’t see a big difference in time. This should change in 18c where the set of DDL to be run on the PDB will be pre-processed to avoid unnecessary operations.

 

The article 12c Multitenant Internals: compiling system package from PDB first appeared on Blog dbi services.

LOBs from afar

This has always been a nuisance. There you are – getting all the bells and whistles with LOBs… until a database link enters the room :-)


--
-- Database: DB11
--
SQL> create table t ( id int, c clob );

Table created.

SQL> insert into t values (1,rpad('x',32000,'x'));

1 row created.

SQL> select * 
  2  from   t;

        ID C
---------- --------------------------------------------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

--
-- Database: anywhere except DB11
--
SQL> select * 
  2  from   t@db11;

ERROR:
ORA-22992: cannot use LOB locators selected from remote tables

 

Well that sucks. But look what has snuck its way into 12.2 !


SQL> select * 
  2  from   t@db122;

        ID C
---------- --------------------------------------------
         1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 

There is a whole chapter on it here in the official docs.  Very nice indeed.
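
If you are stuck on a release before 12.2, a common workaround is to materialise the LOB locally with a CTAS or INSERT … SELECT over the database link, so that no remote LOB locator is ever returned to the client (a sketch using the same t table as above; t_local is a made-up name):

-- Sketch: pull the remote CLOB into a local table first, then query it locally.
create table t_local as
select id, c
from   t@db11;

select * from t_local;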

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors. Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join between RANGE_FROM_nn and the attribute column on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.

CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables and so make use of the Bloom filter, means that statistics on the selector tables will inevitably be out of date. The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 

Compound Triggers on Tree Selector Tables 

There are a pair of compound DML triggers on each tree selector table, one for insert and one for delete.

  • The after row section captures the current selector number. The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROMnn and RANGE_TOnn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats were called within an autonomous transaction, it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package, which uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then submits a job to the database job scheduler that will collect statistics on the partition in the conventional way using dbms_job. The job number is recorded on the selector log. The job will be picked up by the scheduler when the insert commits. However, there can be a short lag between scheduling the job and it running. The first query in the nVision report can be parsed before the statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table-level statistics are not maintained by either technique, as this would lead to locking between sessions. Locking during partition statistics maintenance will not occur because no two sessions populate the same selector number, and each selector is in a different partition. A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table, as sketched after this list. 
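
The statistics handling described in this list can be sketched roughly as follows (a simplified illustration with made-up values and a hypothetical partition name; the real logic lives in the XX_NVISION_SELECTORS package and its triggers):

-- Set the statistics preference once per tree selector table so that only
-- partition-level statistics are maintained.
exec dbms_stats.set_table_prefs('SYSADM','PSTREESELECT10','GRANULARITY','PARTITION');

-- Inside the insert trigger (sketch): set partition statistics directly,
-- because a conventional gather could not see the uncommitted insert.
-- dbms_stats.set_column_stats is called similarly for the range columns.
BEGIN
  dbms_stats.set_table_stats(
    ownname       => 'SYSADM',
    tabname       => 'PSTREESELECT10',
    partname      => 'SYS_P1234',   -- hypothetical interval partition name
    numrows       => 100,           -- row count captured by the row-level trigger
    no_invalidate => FALSE);
END;
/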

The combination of dynamic selectors, single-value joins, interval partitioning of the selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and that make effective use of engineered system features.

However, there are two problems that then have to be worked around. 

Library Cache Contention 

Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions, and maintains the data dictionary, acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created. Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas. The nVision reports still connect and run with the privileges of the Owner ID (usually SYSADM), but the tables are resolved from the current schema. 
I have used a hash function to distribute nVision processes between schemas. I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).

CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/

Thus different nVision reports use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thereby avoiding the library cache locking.

Limitation on the Maximum Number of Partitions 

In Oracle, a table cannot have more than 1048575 partitions. That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.

INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions

New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package, which drops unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number from reaching the maximum.  
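
A rough way to keep an eye on how close each selector table is getting to that limit is a partition count from the data dictionary (a sketch; adjust the owner as appropriate):

-- Sketch: one interval partition per selector, so the partition count
-- approximates the number of selectors used so far (limit 1048575).
select table_name, count(*) as partition_count
from   dba_tab_partitions
where  table_owner = 'SYSADM'
and    table_name like 'PSTREESELECT%'
group by table_name
order by partition_count desc;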

Unstructured vs. structured

The title of this blog post was: “Tracing DBMS_RCVMAN for reclaimable archivelogs” until I started to write the conclusion…

In a previous post I mentioned that there’s a bug with the archivelog deletion policy when you want to specify both ‘BACKED UP … TIMES TO …’ and ‘APPLIED’ or ‘SHIPPED’ as conditions for archived logs to be reclaimable. I opened an SR; they didn’t even try to reproduce it (and I can guarantee you can reproduce it in 2 minutes on any currently supported version), so I traced it myself to understand the bug and suggest the fix.

I traced the DBMS_RCVMAN with Kernel Recovery Area function SQL Tracing:

SQL> alter session set events 'trace[kra_sql] disk high, memory disable';
SQL> exec dbms_backup_restore.refreshAgedFiles;
SQL> alter session set events 'trace[kra_sql] off';

I know that refreshAgedFiles checks for reclaimable files in the FRA, because of an old bug where we had to run it manually on databases in mount.

I compared the traces when changing the order of ‘APPLIED’ and ‘BACKED UP’ and found the following:

< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5340c5340
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK
5343c5343
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY with alldest = 1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING setRedoLogDeletionPolicy with policy = TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK with alldest = 1
5350,5351c5350,5351
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK , backups=1
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption devtype=DISK
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: parseBackedUpOption backed up conf - devtype=DISK, backups=1
5363c5363
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with TRUE
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpAl with key = 128 stamp = 958068130
5367c5367
< *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles with: no_data_found
---
> *:KRA_SQL:kraq.c@1035:kraqgdbg(): DBGRCVMAN: EXITING getBackedUpFiles

You see at the top the difference in the way I mentioned the deletion policy. You see at the bottom that the first one (starting with ‘BACKED UP’) didn’t find archivelogs being backed up (no_data_found). But the second one (starting with ‘APPLIED’) mentioned the sequence# 128.

But if you look carefully, you see another difference in the middle: the “devtype=DISK” has an additional space before the comma in the first case.

So I traced a bit further, including SQL_TRACE, and I found that the deletion policy handling just uses some INSTR and SUBSTR parsing on the deletion policy text to find the policy, the number of backup times, and the device type. For sure, looking for backups with DEVICE_TYPE=’DISK ‘ instead of ‘DISK’ will not find anything, and this is the reason for the bug: no archived logs backed up means no archived logs reclaimable.

If you look closer at DBMS_RCVMAN you will find that the device type is extracted with SUBSTR(:1, 1, INSTR(:1, ‘ ‘)) when the device type is followed by a space, which is the reason for this additional space. The correct extraction should be SUBSTR(:1, 1, INSTR(:1, ‘ ‘)-1), and this is what I suggested on the SR.
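
The off-by-one is easy to demonstrate in isolation (a quick sketch against DUAL, using a policy string similar to the one in the trace):

-- The first expression keeps the trailing space ('DISK '), the second does not ('DISK').
select '"'||substr(p, 1, instr(p, ' '))||'"'   as wrong,
       '"'||substr(p, 1, instr(p, ' ')-1)||'"' as fixed
from  (select 'DISK APPLIED ON ALL STANDBY' p from dual);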

So what?

Writing the conclusion made me change the title. Currently, a lot of people are advocating for unstructured data. Because it is easy (which rhymes with ‘lazy’): store information as it comes and postpone the parsing into a more structured data type until you need to process it. This seems to be how the RMAN configuration is stored: as the text we entered, parsed later with simple text functions such as INSTR(), SUBSTR(), and LIKE. But you can see how a little bug, such as reading an additional character, has big consequences. If you look at the archivelog deletion policy syntax, you have a 50% chance of running into this bug on a Data Guard configuration. The Recovery Area will fill up and your database will be blocked. The controlfile grows. Or you notice it beforehand and run a ‘delete archivelog’ statement without knowing the reason, wasting space by removing recovery files from local storage that could have been kept longer. If the deletion policy were parsed immediately when entered, like SQL DDL or PL/SQL APIs, the issue would have been detected a long time ago. Structure and strong typing are the way to build robust applications.

 

The article Unstructured vs. structured first appeared on Blog dbi services.

Great Britain and Northern Ireland February 2018 Dates: “Oracle Indexing Internals and Best Practices” Seminar (Battle For Britain)

UPDATE: ALL TICKETS ARE NOW AVAILABLE FOR PURCHASE !! Attention Oracle Professionals in the United Kingdom !! I have now finalised all the dates and venues for a series of my popular and critically acclaimed “Oracle Indexing Internals and Best Practices” seminar I’ll be running in the UK in February 2018. I’m extremely excited as […]