Oakies Blog Aggregator

Friday Philosophy – If I Was a Computer, I Might Be An IBM System 360

So today I turn 50. I spent a few minutes looking up what your typical computer looked like then and I decided the IBM System/360 was about right. It was a business machine. Personal computers did not exist then as, well, it was almost the dark ages…

{Note, it has been pointed out to me that I should say “If I were a computer”. Seems that even at my age I still have things to learn properly… I’ll leave it as is to remind myself…}.

Technology changes and we who work with it change. With tech, after a while any individual part (say a specific computer) becomes obsolete and we throw it away. The technology itself changes and we stop using the parts that are superseded. I don’t remember the last time I used any sort of floppy disc or a Zip drive. Ironically, we still often use a floppy disc icon to identify the Save button.

But us? Do we who work with I.T. become obsolete? Yes. We do so when we stop changing with the times (or “stop working”, but this is not an “I’m old and considering my mortality” type post, you will be relieved to know). I think that if you lose your passion to keep learning something new in computing and/or programming, it’s time to move out of the arena; as, worryingly soon, you are going to become like those old systems that never get updates and you know will be thrown out if they develop a serious flaw or just become too expensive to keep on the payroll – err, I mean plugged in.

I nearly lost interest about 8–10 years ago. I think it was partly that I found myself doing the same things over & over again and having the same stupid arguments (sorry, “discussions”) about how not doing things correctly was going to just make everyone’s life harder in the long run. I don’t mean doing things the same, I mean doing the things that make a good system – ensuring it fits the business need, that it is tested before it gets released, and you do this crazy thing called design. This time it was not that I needed to alter along with the tech, I needed to alter myself a little. I finally realised that, although it was getting worse, the I.T. world has always been a bit like that, and part of the trick to this business is simply walking away from places that are too bad and looking for those that are a bit better. I’m lucky to be able to do that moving about (don’t get me wrong, I did have to put effort into it, and I think that is where some people go wrong – they seem to almost expect an external agent to make things better for them) but then I’m 50 and still in the business. I’ve seen lots of people simply leave the industry when they could not effect that change.

However, doing a bit of the introverted-navel-gazing that comes with Significant Birthdays, I find it interesting that at 20, 25, 30, 35,… 42 (very significant that one) things have always been changing for me.

At 20 I was studying Genetics & Zoology at college and thought I would be a lab scientist. A career in I.T. was not even a consideration.
By 25 I’d taken up computing and I had fallen into this company called Oracle and I reckoned I would be with them for a good while, building systems with Forms, ReportWriter, PL/SQL and whatever came next. Oracle would not last for ever…
When I was 30 I was self employed, touting my services to various companies and mostly doing systems design and performance work.
Come 35 and I was back in full-time employment (that was a surprise) working in a science organisation (even more of a surprise) using my degree to some, well, degree (an utter surprise). And presenting at Oracle user group conferences.
At 40 I was self-employed again, but now totally focused on performance and an Oracle RDBMS Subject Matter Expert (someone who knows a bit about most of it and most of a bit of it).
42. 42 is a great age. You are the answer to everything…
At 45 I was retired. Except when I was not. OK, I had become a Consultant, doing short jobs for different clients. And doing all this User Group stuff. Me! Antisocial, miserable, slightly-autistic me!
Now at 50, I have to confess I am not keeping on top of the technical details of my chosen sphere the way I probably should, if my career is still in this area. But I’m not doing badly, and my “job” is now even more as a presenter and a member of the User Group community. I need new skills for that.

So it keeps changing. Sometimes I chose the change and sometimes changes just dropped on me. But I’ll look at the options as they come up. And if no options are coming up and I am not happy in my job, I go look for options. I won’t say I always choose the best option but, heck, it’s worked OK so far.

I wonder what I’ll be doing at 55 and 60? I just hope I am not stuck in a museum with a “do not touch” sign next to me, like all the remaining IBM System/360s.

Creating the Listener and Database in Silent Mode


In previous posts, I talked about a customer where I needed to do a silent installation of the Grid Infrastructure and Database, as well as the post installation configuration steps. For that particular customer, I didn’t create an empty database but used RMAN to duplicate the existing database. But what about if you want to create a new listener and database in silent mode as well? That’s what this post covers.


Just like my previous posts, I installed the RDBMS using a response file. Since I’ve already covered that earlier, I’m not going to reproduce that here again. Once the RDBMS software is installed, it’s just a matter of creating the listener and the database. Creating the listener is dead easy. Oracle provides a netca.rsp file, which for my simple installation didn’t need any modification at all. Here’s the file:

$ cat netca.rsp
## Copyright(c) 1998, 2014 Oracle Corporation. All rights reserved. ##
##                                                                  ##
## Specify values for the variables listed below to customize your  ##
## installation.                                                    ##
##                                                                  ##
## Each variable is associated with a comment. The comment          ##
## identifies the variable type.                                    ##
##                                                                  ##
## Please specify the values in the following format:               ##
##                                                                  ##
##         Type         Example                                     ##
##         String       "Sample Value"                              ##
##         Boolean      True or False                               ##
##         Number       1000                                        ##
##         StringList   {"String value 1","String Value 2"}         ##
##                                                                  ##
##                                                                  ##
## This sample response file causes the Oracle Net Configuration    ##
## Assistant (NetCA) to complete an Oracle Net configuration during ##
## a custom install of the Oracle12c server which is similar to     ##
## what would be created by the NetCA during typical Oracle12c      ##
## install. It also documents all of the NetCA response file        ##
## variables so you can create your own response file to configure  ##
## Oracle Net during an install the way you wish.                   ##
##                                                                  ##


# Name       : SHOW_GUI
# Datatype   : Boolean
# Description: This variable controls appearance/suppression of the NetCA GUI,
# Pre-req    : N/A
# Default    : TRUE
# Note:
# This must be set to false in order to run NetCA in silent mode.
# This is a substitute of "/silent" flag in the NetCA command line.
# The command line flag has precedence over the one in this response file.
# This feature is present since

# Name       : LOG_FILE
# Datatype   : String
# Description: If present, NetCA will log output to this file in addition to the
#              standard out.
# Pre-req    : N/A
# Default    : NONE
# Note:
#       This is a substitute of "/log" in the NetCA command line.
# The command line argument has precedence over the one in this response file.
# This feature is present since

#INSTALLED_COMPONENTS;StringList;list of installed components
# The possible values for installed components are:
# "net8","server","client","aso", "cman", "javavm"

#INSTALL_TYPE;String;type of install
# The possible values for install type are:
# "typical","minimal" or "custom"

#LISTENER_NUMBER;Number;Number of Listeners
# A typical install sets one listener

#LISTENER_NAMES;StringList;list of listener names
# The values for listener are:
# A typical install sets only "LISTENER"

#LISTENER_PROTOCOLS;StringList;list of listener addresses (protocols and parameters separated by semicolons)
# The possible values for listener protocols are:
# "TCP;1521","TCPS;2484","NMP;ORAPIPE","IPC;IPCKEY","VI;1521"
# A typical install sets only "TCP;1521"

#LISTENER_START;String;name of the listener to start, in double quotes

#NAMING_METHODS;StringList;list of naming methods
# The possible values for naming methods are:
# A typical install sets only: "TNSNAMES","ONAMES","HOSTNAMES"

#NOVELL_NAMECONTEXT;String;Novell Directory Service name context, in double quotes
# A typical install does not use this variable.

#SUN_METAMAP;String; SUN meta map, in double quotes
# A typical install does not use this variable.

#DCE_CELLNAME;String;DCE cell name, in double quotes
# A typical install does not use this variable.

#NSN_NUMBER;Number;Number of NetService Names
# A typical install sets one net service name

#NSN_NAMES;StringList;list of Net Service names
# A typical install sets net service name to "EXTPROC_CONNECTION_DATA"

#NSN_SERVICE;StringList;Oracle12c database's service name
# A typical install sets Oracle12c database's service name to "PLSExtProc"

#NSN_PROTOCOLS;StringList;list of coma separated strings of Net Service Name protocol parameters
# The possible values for net service name protocol parameters are:
# A typical install sets parameters to "IPC;EXTPROC"

#SERVICEUSERPASSWORD;String;Windows service user password
# If the oracle home is installed as secure user, supply the password

And to create the listener, you just need:

[oracle@host1 response]$ export ORACLE_HOME=/u01/app/oracle/product/
[oracle@host1 response]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@host1 response]$ pwd
[oracle@host1 response]$ netca -silent -responseFile /u01/app/database/response/netca.rsp

Parsing command line arguments:
    Parameter "silent" = true
    Parameter "responsefile" = /u01/app/database/response/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Oracle Net Listener Startup:
    Running Listener Control:
      /u01/app/oracle/product/ start LISTENER
    Listener Control complete.
    Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0
[oracle@host1 response]$ lsnrctl stat

LSNRCTL for Linux: Version - Production on 20-APR-2018 10:52:01

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                20-APR-2018 10:51:26
Uptime                    0 days 0 hr. 0 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/
Listener Log File         /u01/app/oracle/diag/tnslsnr/host1/listener/alert/log.xml
Listening Endpoints Summary...
The listener supports no services
The command completed successfully


Now that we have the listener started, let’s set up the database using a response file as well. This time, I did need to make some changes to the response file Oracle provided. As you can probably guess by looking at some of my changes, the database we’re creating here is going to be used for the Enterprise Manager repository for this customer. Oh, one thing – obviously I didn’t use “password” as the password for SYS, SYSTEM and DBSNMP when I really ran this!
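For completeness, silent database creation follows the same pattern as the listener: a response file plus one command. The fragment below is purely an illustrative sketch (12.1-style dbca.rsp variable names; the database name, passwords and memory figure are made-up examples, not the customer’s values):

```
## Minimal dbca.rsp sketch -- illustrative values only.
## Start from the sample file shipped under $ORACLE_HOME/assistants/dbca.
[GENERAL]
RESPONSEFILE_VERSION = "12.1.0"
OPERATION_TYPE = "createDatabase"

[CREATEDATABASE]
GDBNAME = "emrep.example.com"
SID = "emrep"
TEMPLATENAME = "General_Purpose.dbc"
SYSPASSWORD = "change_me"
SYSTEMPASSWORD = "change_me"
CHARACTERSET = "AL32UTF8"
TOTALMEMORY = "2048"
```

With that in place, `dbca -silent -createDatabase -responseFile /u01/app/database/response/dbca.rsp` creates the database with no GUI interaction. Note that later releases moved dbca.rsp to a flat key=value layout, so always start from the sample file shipped with your version.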

18.2 patch… painless for me

18.2 was released a few days ago, so I thought I’d throw it against my 18c instance and see how things played out.  This was just a single instance database, running with a single pluggable.

Step 1 for me was to download the latest OPatch.  To be honest, I don’t even check whether I already have the latest version in any installed ORACLE_HOME anymore.  If I’m going to apply a patch, I’m always going to download the latest OPatch unless the README explicitly tells me some reason why I should not.  I figure getting the latest version reduces the chances I’ll hit problems.  For me, I grabbed version “”, but your experience may differ depending on when you encounter this post.

So I then downloaded patch 27676517 (note, this does not include patches for the JVM which are done separately).

From there it was just a case of following the README instructions and I had no dramas at all.  Software patching went through smoothly:

[oracle@host18 27676517]$ export PATH=$PATH:$ORACLE_HOME/OPatch
[oracle@host18 27676517]$ opatch apply
Oracle Interim Patch Installer version
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/18.0.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/18.0.0/dbhome_1/oraInst.loc
OPatch version    :
OUI version       :
Log file location : /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   27676517  

Do you want to proceed? [y|n]
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/18.0.0/dbhome_1')

Is the local system ready for patching? [y|n]
User Responded with: Y
Backing up files...
Applying interim patch '27676517' to OH '/u01/app/oracle/product/18.0.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.assistants.usm, ] , [, ] , [, ] , [ oracle.assistants.asm, ]  not present in the Oracle Home or a higher version is found.

Patching component,

Patching component,

Patching component oracle.assistants.acf,

Patching component oracle.sqlplus.ic,

Patching component oracle.rdbms.deconfig,

Patching component oracle.sqlplus,

Patching component oracle.rdbms.util,

Patching component oracle.rdbms,

Patching component oracle.rdbms.dbscripts,

Patching component oracle.assistants.deconfig,

Patching component oracle.assistants.server,

Patching component oracle.rdbms.install.plugins,

Patching component oracle.rdbms.rsf,

Patching component oracle.rdbms.rman,

Patching component oracle.javavm.client,

Patching component oracle.ldap.owm,

Patching component,
Patch 27676517 successfully applied.
Log file location: /u01/app/oracle/product/18.0.0/dbhome_1/cfgtoollogs/opatch/opatch2018-04-19_08-32-51AM_1.log

OPatch succeeded.


And the datapatch stage to update the database(s) also went through without any problems.

[oracle@host18 OPatch]$ ./datapatch -verbose
SQL Patching tool version Production on Thu Apr 19 08:37:40 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10552_2018_04_19_08_37_40/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry: Release_Update 1804041635: Installed
    No release update patches installed
    No release update patches installed
    No release update patches installed

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    No interim patches need to be rolled back
    Patch 27676517 (Database Release Update : (27676517)):
      Apply from Feature Release to Release_Update 1804041635
    No interim patches need to be applied

Installing patches...
Patch installation complete.  Total patches installed: 3

Validating logfiles...done
Patch 27676517 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_CDBROOT_2018Apr19_08_38_45.log (no errors)
Patch 27676517 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDBSEED_2018Apr19_08_39_58.log (no errors)
Patch 27676517 apply (pdb PDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_DB18_PDB1_2018Apr19_08_40_01.log (no errors)
SQL Patching tool complete on Thu Apr 19 08:41:13 2018
[oracle@host18 OPatch]$ 
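Not shown in the run above, but a handy follow-up: datapatch records each apply and rollback action in the DBA_REGISTRY_SQLPATCH dictionary view, so you can read the outcome back afterwards. A small guarded sketch (the bequeath sysdba connection is an assumption – adapt it to your environment; the query is only piped to sqlplus where a client is actually on the PATH):

```shell
# DBA_REGISTRY_SQLPATCH is the view datapatch maintains, one row
# per apply/rollback action per container.
SQLTEXT=$(cat <<'SQL'
set linesize 200
select patch_id, action, status, description
from   dba_registry_sqlpatch
order  by action_time;
SQL
)

# Only attempt to run it where an Oracle client is available.
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQLTEXT" | sqlplus -s / as sysdba
fi
```

In a multitenant database, remember to check the view in each open PDB as well as in the root.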

[oracle@host18 ~]$ sqlplus scott/tiger@pdb1

SQL*Plus: Release - Production on Thu Apr 19 08:44:27 2018

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release - Production


So all in all, a painless and straightforward experience for me.

One minor annoyance for me was the “Bugs Fixed” document.  The README references this as:


but that document is not accessible currently.  However, if you follow a different link in the README


then you will get to the document which contains the list of fixes etc.

Footnote: If you are not familiar with the new nomenclature of “Patch”, “Release Update”, etc etc, here is a good place to start

and follow the various links within that post for more information.

Happy upgrading!

Addenda: When it comes to the ‘datapatch’ script, the README is perhaps a little ambiguous.  More on that here


Choosing the Right Compression

Choosing the right database table compression matters.  It affects the size of the objects and also the performance of your application.  Compression makes objects smaller, and for many systems, this is an objective in itself, but it also affects performance.  As compressed objects are smaller, less physical I/O is required to access them.  However, that saving is paid for in CPU.  More CPU is required to compress the data when it is written, and then again (albeit usually less CPU) to decompress it when you read it.  The question is whether the saving in I/O pays for the additional CPU.

There are a number of forms of compression to choose between.  They have different characteristics.  The resulting objects are different sizes, and they require different amounts of CPU to compress and decompress.

  • Basic Compression:  This is not really compression, but a form of block-level de-duplication.  It has been available since Oracle 9i.  It is available on all platforms.
  • Hybrid Columnar Compression (HCC) is only available on Engineered Systems, the FS Flash Storage System and the ZFS Storage Appliance (ZFSSA).  There are four levels of compression, using three different compression algorithms.  HCC is frequently positioned as being highly effective at reducing the size of data warehouses, but it also has implications for performance.
    • Query Low: LZO algorithm
    • Query High: ZLIB algorithm
    • Archive Low: ZLIB algorithm higher level with a larger block size.
    • Archive High: BZIP2 algorithm.  The higher compression ratio comes at the expense of significantly increased CPU overhead on decompression.

Which one is right for your application?  It depends.  Ultimately, you need to test it for yourself.  In this blog, I am going to relate my experience of one particular system, including how I tested it.  Our objective was optimising performance, rather than reducing database size.

My Test System

  • SuperCluster M7 (i.e. an Engineered System), Oracle, 2-node RAC, running PeopleSoft Financials 9.2.
  • In particular, this system runs a large number of complex PeopleSoft General Ledger nVision reports that use a number of reporting views on summary ledgers. There are materialized views built on the reporting views.  The summary ledgers are updated nightly.  Then the materialized views are fully refreshed in non-atomic mode so the tables are truncated and repopulated in direct-path mode.  Hence, they can be Hybrid Columnar Compressed without needing the Advanced Compression licence.  The materialized views are not indexed.  Thus they can only be full scanned, and these scans can be offloaded to the storage cells (i.e. smart scan).

The metrics presented here are extracted from a test system where the summary ledger tables are updated, the materialized views refreshed and then 136 General Ledger (GL) report books are run on a regular daily schedule.  This approach ensures that each test is, as far as possible, run with the same initial conditions and only known parameters are changed each time.  In each test, a different compression level was set on the materialized views.

  • I have not investigated ARCHIVE compression.


Description          Size (MB)   Comp. %   Comp. Ratio   Elapsed   Total DB Time
No Compression         151,868        0%           1.0      1:01           18:13
Basic Compression       34,720       77%           4.4      1:12           22:54
Query Low               22,666       85%           6.7      1:00           18:27
Query High              14,086       91%          10.8      1:05           22:56

(sizes are of the materialized views; all times expressed in hours:minutes)
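As a side note, the Comp. % and Comp. Ratio columns follow directly from the sizes; this little sketch recomputes them:

```shell
# Recompute compression ratio and space saving from the sizes (MB) in the table.
uncompressed=151868
for row in "Basic:34720" "QueryLow:22666" "QueryHigh:14086"; do
  name=${row%%:*}
  size=${row##*:}
  awk -v u="$uncompressed" -v c="$size" -v n="$name" 'BEGIN {
    printf "%-9s ratio=%.1f saving=%.0f%%\n", n, u / c, (1 - c / u) * 100
  }'
done
# Basic     ratio=4.4 saving=77%
# QueryLow  ratio=6.7 saving=85%
# QueryHigh ratio=10.8 saving=91%
```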

  • While basic compression does significantly reduce table sizes (4.4x), Hybrid Columnar Compression does better (6.7x – 10.8x).  
  • HCC Query Low only has a very small overhead while writing data.  The materialized views are refreshed in parallel, and while the total DB time is higher, the elapsed time is lower!
  • Both basic compression and HCC Query High take about 20% longer to write the data in direct path mode.


Description         On CPU+CPU Wait   cell smart table scan   Others    Total    %CPU   %scan   %other
No Compression               201:26                   97:12    18:15   316:54     64%     31%       6%
Basic Compression            466:13                   66:03    31:07   563:24     83%     12%       6%
Query Low                    241:13                   12:34    44:57   298:45     81%      4%      15%
Query High                   278:20                    4:43    11:38   294:42     94%      2%       4%

(nVision DB time in hours:minutes; percentage columns are each component's share of the total)

The nVision GL reports only read the materialized views (although they do write some temporary working storage tables).

  • As the level of compression of the materialized views increases, the time spent on cell smart table scan (physical read) decreases; this is mainly because the objects are smaller.
  • The slightly surprising result is that the CPU overhead of basic compression is higher than either form of HCC, and more than twice that of using uncompressed tables.  Hence it significantly increases the total database response time.
  • Both forms of query HCC perform better than the uncompressed table.  The increase in CPU time is more than compensated for by the reduction in time spent on physical I/O.
  • The Query Low test was affected by factors external to the test (a lot more time spent in other wait events).  Otherwise, it would have outperformed Query High.  If we look at just CPU and physical read time, it is clearly the best option.
In an offloaded (i.e. smart) scan on an engineered system, decompression as well as the physical I/O is done in the storage cell.  The saving in time spent on physical I/O between Query Low and Query High is small because the disk subsystem performs so well on the engineered system.
On a non-engineered system, by contrast, decompression and I/O must be done by the database server; there is nowhere else.  You may well find that a different level of compression is optimal there.


  • Basic compression exhibits a surprisingly high CPU overhead.  In my test, the performance was poorer than with uncompressed tables.
  • HCC Query low was generally the best option, saving in I/O more than paid for the increase in CPU.  
  • I also analysed ASH data for the different tests broken down by materialized view.  I found it was better to leave smaller materialized views uncompressed but to compress the larger ones.  This is not surprising since HCC adds additional structures.

    These results and conclusions are from one set of tests on one particular application, on one particular 'engineered' hardware configuration, using one particular data set.  So you need to conduct your own comparative tests and decide what is best for your system.

    A look into oracle redo: index and overview

    When you screw up … make it positive for your users

    Yesterday I was caught up in an interesting SNAFU at my local supermarket.  All of the checkout registers shut down, making it impossible to pay for groceries.  Later, on Twitter, the company apologized as we discovered it was actually a nationwide outage!



    News of the outage spread like wildfire through the media:

    TL;DR – people were forced to abandon their shopping trolleys and had to leave the stores. 

    Needless to say, consumers vented loudly and forcefully at Woolworths, and the negative press naturally struck a chord with the general public, because we all feel empathy for the parent in front of the television cameras lamenting their inability to feed their family that night.  (For the record, my boys had meatballs and salad last night that we made with the leftovers we had in the fridge.)

    In a perfect world, IT system updates should never cause pain for the users of those IT systems, but no matter how careful the testing and planning, I think it is reasonable to assert that we can never totally eliminate the chance of a major problem during an upgrade; our aim is always to shrink the probability to as close to zero as possible.

    That brings me to the point of this post – and this perhaps slightly controversial stance.  I don’t think IT outages really matter that much from the perspective of the customer.  For example, a while back Amazon had a huge outage here in Australia due to storms in Sydney.  Delta Airlines had a big outage in late 2016.  But last time I checked, people are still flying Delta and still buying stuff they didn’t need from Amazon.  Customers will forgive an outage, but only if you prioritize their needs over yours during the crisis.  People are still ripping into Woolworths today because a Twitter apology doesn’t really get consumers any closer to taking groceries home.

    So this is what I would have done if I were Woolworths…  Make an announcement in each store that the store needs to close unexpectedly and ask customers to take their trolleys to the nearest checkout (even though I know that the checkouts are not working).  At that point, simply let people take what they have accumulated so far in their trolleys for no charge.  The news articles above already mentioned that the stores had security staff on hand to assist with closing the stores – so there is protection against a “looting mentality” being created.  Yes, there would still be some negative press from those customers who could not get into the stores once they closed, but I contend that ultimately this would have turned into a positive result for Woolworths.  Yes, you take a hit on the bottom line for yesterday’s revenue, but the media attention becomes the mums and dads walking out of the stores smiling about the free shop they just got, rather than swearing they’ll never shop at Woolworths again.

    Outages don’t matter.  Meeting the customer need is what matters.

    Don’t get me wrong – I’m not claiming that any and every company I have ever worked for, or worked with, has a glowing record of understanding how to meet customer needs during times of crisis.  My point is that it should be something to always strive for – when you inflict pain on your customers due to the information technology solutions you build, then do your best to own the problem, and bust a gut trying to make the experience as bearable as possible for your customers, or even a win for them. 

    Whether you turn bad into good, or bad into worse, rest assured your customers will remember you for it.

    SLOB Can Now Be Downloaded From GitHub.

    This is a quick blog entry to announce that the SLOB distribution will no longer be downloadable from Syncplicity. Based on user feedback I have switched to making the kit available on GitHub. I’ve updated the LICENSE.txt file to reflect this distribution locale as authorized and the latest SLOB version is

    Please visit for more information.


    A look into oracle redo, part 10: commit_wait and commit_logging

    The redo series would not be complete without writing about changing the behaviour of commit. There are two ways to change commit behaviour:

    1. Changing whether the session waits for the logwriter to confirm that the generated redo is persisted. The default is ‘wait’. This can be set to ‘nowait’.
    2. Changing the way the logwriter handles generated redo. The default is ‘immediate’. This can be set to ‘batch’.

    There are actually three ways these changes can be made:
    1. As an argument of the commit statement: ‘commit’ can be written as ‘commit write wait immediate’ (statement level).
    2. As a system-level setting: if no explicit commit mode is given with the commit command, the values of the parameters commit_wait (default: wait) and commit_logging (default: immediate) are used.
    3. As a session-level setting: again with no explicit commit mode, setting either commit_wait or commit_logging at session level overrides the system-level setting.
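To make the three scopes concrete, here is a small sketch of my own (not from the original analysis). It is guarded so that the statements are only piped to sqlplus where a client is actually on the PATH, and the sysdba bequeath connection is just an example:

```shell
# The statements below mirror the three levels described above.
SQLTEXT=$(cat <<'SQL'
-- statement level: spell out the full default, or skip the wait
commit write wait immediate;
commit write nowait;
-- session level: overrides the system-wide parameter values
alter session set commit_wait = 'NOWAIT';
alter session set commit_logging = 'BATCH';
-- system level: the default for every session that does not override it
alter system set commit_wait = 'WAIT' scope=memory;
alter system set commit_logging = 'IMMEDIATE' scope=memory;
SQL
)

# Only attempt to run it where an Oracle client is available.
if command -v sqlplus >/dev/null 2>&1; then
  printf '%s\n' "$SQLTEXT" | sqlplus -s / as sysdba
fi
```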

    At this point I should say that, in my personal opinion, if you need to change this, there is something very wrong with how the database is used in the first place. This can enhance performance a bit (depending entirely on what you are doing and what your hardware looks like), but it does nothing magic, as you will see.

    a) commit wait/nowait
    I ran a pin tools debugtrace on a session that commits with the write mode explicitly set to wait (the default), and on a session that commits with the write mode set to nowait. If you took the time to read the other redo-related articles, you know that a commit generates change vectors that are written into the public redo strand, changes the transaction table in the undo segment header and then signals the logwriter to write in kcrf_commit_force_int, releases all transactional control on the rows in the committed transaction, after which kcrf_commit_force_int is called again in order to wait for the logwriter to confirm that the change vectors have been persisted.

    When commit is set to nowait, what happens is actually very simple: everything that is executed in ‘wait mode’ commit is executed in ‘nowait mode’ too, except for calling kcrf_commit_force_int a second time, which is the call that waits for the notification from the logwriter.

    commit wait:

     | | < kpoal8+0x000000000f8c returns: 0x2
     | | > ksupop(0x1, 0x7a87a9a0, ...)
     | | | > ksugit_i(0x11526940, 0x7a87a9a0, ...)
     | | | < ksugit_i+0x00000000002a returns: 0
     | | | > _setjmp@plt(0x7ffda5959c50, 0x7a87a9a0, ...)
     | | | <> __sigsetjmp(0x7ffda5959c50, 0, ...)
     | | | <> __sigjmp_save(0x7ffda5959c50, 0, ...)
     | | | < __sigjmp_save+0x000000000025 returns: 0
     | | | > kcbdsy(0x7ffda5959c50, 0x7f3011cbc028, ...)
     | | | <> kcrf_commit_force_int(0x7f3011d75e10, 0x1, ...)
     | | | < kcrf_commit_force_int+0x000000000b9c returns: 0x1
     | | | > kslws_check_waitstack(0x3, 0x7f3011d82f40, ...)
     | | | < kslws_check_waitstack+0x000000000065 returns: 0
     | | | > kssdel(0x7a87a9a0, 0x1, ...)
     | | | | > kpdbUidToId(0, 0x1, ...)
     | | | | < kpdbUidToId+0x00000000014e returns: 0
     | | | | > kss_del_cb(0x7ffda5959b50, 0x7f3011d82f40, ...)
     | | | | | > kpdbUidToId(0, 0x7f3011d82f40, ...)
     | | | | | < kpdbUidToId+0x00000000014e returns: 0
     | | | | | > ksudlc(0x7a87a9a0, 0x1, ...)

    commit nowait:

     | | < kpoal8+0x000000000f8c returns: 0x2
     | | > ksupop(0x1, 0x63c82a38, ...)
     | | | > ksugit_i(0x11526940, 0x63c82a38, ...)
     | | | < ksugit_i+0x00000000002a returns: 0
     | | | > _setjmp@plt(0x7fff43332a50, 0x63c82a38, ...)
     | | | <> __sigsetjmp(0x7fff43332a50, 0, ...)
     | | | <> __sigjmp_save(0x7fff43332a50, 0, ...)
     | | | < __sigjmp_save+0x000000000025 returns: 0
     | | | > kslws_check_waitstack(0x3, 0x7fd1cea22028, ...)
     | | | < kslws_check_waitstack+0x000000000065 returns: 0
     | | | > kssdel(0x63c82a38, 0x1, ...)
     | | | | > kpdbUidToId(0, 0x1, ...)
     | | | | < kpdbUidToId+0x00000000014e returns: 0
     | | | | > kss_del_cb(0x7fff43332950, 0x7fd1ceae8f40, ...)
     | | | | | > kpdbUidToId(0, 0x7fd1ceae8f40, ...)
     | | | | | < kpdbUidToId+0x00000000014e returns: 0
     | | | | | > ksudlc(0x63c82a38, 0x1, ...)

    Yes, it’s that simple. In the normal commit mode, commit wait, ksupop (kernel service user pop (restore) user or recursive call) executes a call to kcbdsy, which performs a tailcall to kcrf_commit_force_int. In nowait commit mode, kcbdsy is simply not called in ksupop, which is exactly what nowait means: the wait for the logwriter notification is not done.

    b) commit immediate/batch
    I ran a pin tools debugtrace on a session that commits explicitly with the write mode explicitly set to immediate, and a session that commits explicitly with the write mode set to batch. If you read the other redo related articles, you know that a commit generates change vectors that are written into the public redo strand, updates the transaction table in the undo segment header, signals the logwriter to write in kcrf_commit_force_int, and then releases all transactional control on the rows committed in the transaction, after which kcrf_commit_force_int is called a second time in order to wait for the notification from the logwriter that the change vectors have been persisted.

    When commit is set to batch, what happens is actually very simple: everything is done exactly as in an ‘immediate mode’ commit, except the first call to kcrf_commit_force_int, which is the functionality that triggers the logwriter to write. So ‘batch mode’ does not explicitly batch writes for the logwriter; rather, it disables the signal to the logwriter to write right after the change vectors have been copied and the blocks have been changed. But that is not all…

    I noticed something weird when analysing the calls in the debugtrace of ‘commit write batch’: not only was the first invocation of kcrf_commit_force_int gone, the second invocation of kcrf_commit_force_int was gone too! That is weird, because the Oracle documentation says:


    Use these clauses to specify when control returns to the user.

    The WAIT parameter ensures that the commit will return only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client. In this case the client cannot tell whether or not the transaction committed.

    The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput. With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost.

    If you omit this clause, then the transaction commits with the WAIT behavior.

    The important, and WRONG, thing is in the last line: ‘if you omit this clause, then the transaction commits with the WAIT behavior’. Actually, if the commit mode is set to batch, the commit wait mode flips to nowait with it. It performs the ultimate batching, which is sending no signal to the logwriter at all: the change vectors in the public redo strands are then written to disk by the logwriter only every 3 seconds, because that is the timeout of the logwriter’s sleep on its semaphore, after which it picks up any outstanding redo to write via information in the kcrfsg_ and KCRFA structures. This is important, because with NOWAIT behaviour there is no guarantee that the changes have been persisted for the committing session.

    I was surprised to find this: I searched for ‘kcrf_commit_force_int’ in the debugtrace of a commit with the ‘write batch’ arguments and did not find any occurrence. This behaviour has in fact been reported by Marcin Przepiorowski in a comment on an article by Christian Antognini on this topic.

    Can this commit batching be changed to include waiting for the logwriter? Yes, it can, if you explicitly include ‘wait’ with the commit write batch. Interestingly, the kcrf_commit_force_int function then comes back in a totally different place:

     | | | | | | | | | | | | | < ktuulc+0x000000000119 returns: 0
     | | | | | | | | | | | | | > ktudnx(0x69fc8eb0, 0, ...)
     | | | | | | | | | | | | | | > ktuIMTabCacheCommittedTxn(0x69fc8eb0, 0x7ffe9eb79e74, ...)
     | | | | | | | | | | | | | | < ktuIMTabCacheCommittedTxn+0x000000000071 returns: 0
     | | | | | | | | | | | | | | > kslgetl(0x6ab9d6e8, 0x1, ...)
     | | | | | | | | | | | | | | < kslgetl+0x00000000012f returns: 0x1
     | | | | | | | | | | | | | | > kslfre(0x6ab9d6e8, 0x6ab9ce00, ...)
     | | | | | | | | | | | | | | < kslfre+0x0000000001e2 returns: 0
     | | | | | | | | | | | | | < ktudnx+0x0000000005e4 returns: 0
     | | | | | | | | | | | | | > ktuTempdnx(0x69fc8eb0, 0, ...)
     | | | | | | | | | | | | | < ktuTempdnx+0x000000000083 returns: 0
     | | | | | | | | | | | | | > kcb_sync_last_change(0x69fc8eb0, 0x6df64df8, ...)
     | | | | | | | | | | | | | <> kcrf_commit_force_int(0x7f525ba19c00, 0x1, ...)
     | | | | | | | | | | | | | < kcrf_commit_force_int+0x000000000b9c returns: 0x1
     | | | | | | | | | | | | | > kghstack_free(0x7f525bb359a0, 0x7f525690ead8, ...)
     | | | | | | | | | | | | | < kghstack_free+0x00000000005a returns: 0
     | | | | | | | | | | | | < ktucmt+0x000000000e0c returns: 0

    Instead of simply keeping the separate call after the transaction in the ksupop function, described above with commit wait/nowait (kcrf_commit_force_int with the second argument set to 1, meaning it notifies the logwriter as well as waits for the logwriter’s notification of the write), it is now called after the function that clears the TX enqueue (ktuulc) and after the undo transaction count has been lowered (ktudnx), at the end of the ktucmt function, as a tailcall of kcb_sync_last_change, which was not called before. Of course this limits the IO batching opportunities.

    Do not change your database or even your session to make your commit faster. If you must, read this article carefully and understand the trade-offs. One trade-off which has not been highlighted yet: this behaviour might change in a different version, and it requires some effort to investigate. And again: if you are still considering this, you probably have a different problem that you should look at. Do not reach for this option in the desperate hope of a magical restoration of performance.

    The commit_write option nowait does trigger the logwriter to write (the first invocation of the kcrf_commit_force_int function), but does not wait for write confirmation. The commit_logging option batch does something different from what the documentation says: it neither signals the logwriter nor waits for it. This way the logwriter can wait the full three seconds before it times out on its semaphore and writes what is in the public redo strands. But there is no way to tell whether the redo for your change has been persisted yet, because that wait is gone too (that wait is the infamous ‘log file sync’ wait). If you want batching but still want a write notification, you must set commit_write to wait explicitly. By doing that you do not get the optimal batching, because then both the signal to the logwriter and the wait for its notification are executed, which I suspect performs in the same ballpark as regular committing, but I haven’t checked that.
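As a recap in statement form, here is a sketch of the four combinations, with the behaviour observed in the traces above noted as comments:

```sql
COMMIT WRITE IMMEDIATE WAIT;   -- default: signal the logwriter, then wait for
                               -- its notification (the 'log file sync' wait)
COMMIT WRITE IMMEDIATE NOWAIT; -- signal the logwriter, but skip the wait
COMMIT WRITE BATCH NOWAIT;     -- no signal and no wait; the logwriter picks up
                               -- the redo at its 3-second semaphore timeout
COMMIT WRITE BATCH WAIT;       -- no early signal; signal-and-wait happens late,
                               -- as a tailcall of kcb_sync_last_change in ktucmt
```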

    Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable

    Whether to use the Application Engine server process (PSAESRV) in the process scheduler tuxedo domain or the standalone PSAE executable is a frequently discussed point amongst PeopleSoft administrators.  Over the years, I have written various things on the subject.  I am going to draw them together in this blog, and restate Oracle’s now clear advice about when to use which option.

    In PeopleTools 8.4, the Process Scheduler became a fully fledged Tuxedo domain.  The PSAESRV process was also introduced at this time.  It is a persistent process that handles both Application Engine and Optimization Engine requests.  Each server process creates persistent database connections.  A number of these server processes are started with the domain.   The PSAESRV process does not spawn like other Tuxedo server processes.  Instead, you must configure the number of server processes to match the maximum number of concurrent Application Engine process requests and concurrent Optimization Engine requests that the process scheduler can run.  The server was introduced to handle very short-lived Application Engine programs thus avoiding the overhead of instantiating a new process and new database sessions for each process request.  CRM typically uses Application Engine in this fashion, but generally, you do not see this in other PeopleSoft products.

    Oracle has not always been clear about what they mean by a short-lived process.  It has been suggested that if Application Engine processes typically take less than 10-30 seconds, or if you run more than 1000 Application Engine process requests per hour (note 651970.1), you should use PSAESRVs.
    PeopleBooks advises you should use PSAESRV because it delivers improved system performance.  However, the PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) contradicts this somewhat.  Ultimately, if you have any doubts, you should test each way and determine whether one gives a clear advantage over the other.

    Oracle Support Note "What will be the impact of disabling PSAESRV on the Process Scheduler (Doc ID 651970.1)" explains that if PSAESRV is disabled in the Tuxedo domain configuration, the Process Scheduler goes back to the legacy behaviour and spawns a stand-alone PSAE process for each Application Engine request.  “The Application Engine will take a bit longer to start, [the] time delay may be range from millisecond to seconds” depending on hardware and configuration.

    The stand-alone process has several advantages.

    • At the end of the Application Engine program, it disconnects from the database and terminates, thus releasing the process’s resources and the database session.  The persistent Application Engine server process, by contrast, has been reported to accumulate allocated memory over time.
    • If you are using Oracle database Global Temporary Tables in an Application Engine program, then you should not use PSAESRV, because the tables are always created ON COMMIT PRESERVE ROWS, and so their rows are only released when the database session terminates.
    • If you set any session parameters within an Application Engine program run via PSAESRV, or enable database trace, then these settings carry forward from one Application Engine program to the next, unless you reset the parameter at the end of the program or at the start of the next.  This is not a concern with standalone PSAE processes.
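To illustrate the Global Temporary Table point, here is a minimal sketch (the table name and columns are made up; the relevant clause is ON COMMIT PRESERVE ROWS):

```sql
-- Rows in this GTT survive commits and are only released when the database
-- session ends; under a persistent PSAESRV session, rows left behind by one
-- Application Engine program are still there when the next program runs.
CREATE GLOBAL TEMPORARY TABLE PS_AE_WORK_TMP
( PROCESS_INSTANCE  NUMBER       NOT NULL
, EMPLID            VARCHAR2(11) NOT NULL
) ON COMMIT PRESERVE ROWS;
```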

    However, there is at least one case where you must use the server process:

    • If you are using Oracle Active Data Guard and wish to mark some Application Engine programs as read-only, then they must be run via the PSAESRV process.


    • The PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) sums it up very nicely: “PSAE is as good as PSAESRV for most practical purposes.  If you have an application engine job that runs longer than 10 seconds, PSAE is equivalent to PSAESRV.  PSAE has the added advantage of being recycled at the end of each application engine job, cleaning up any outstanding SQL cursors to the database that may have been left behind.  Because PSAE recycles after each use, PSAE does not have any possible memory leakage problem that may occupy the precious system memory.  In short, PSAE is a cleaner workhorse.”
    • I think it is reasonable to use PSAESRV in CRM.  For all other products, I recommend that PSAESRV should be disabled from all Process Schedulers.
      • If you do have some Application Engine processes that are both short-lived (i.e. less than 10 seconds) and run frequently, then consider creating other process schedulers with PSAESRV processes that are dedicated to running only those processes.  You can then move them to a new Process Scheduler category that runs only on these new Process Schedulers.
    • PSAESRV is configured by default, so if you don’t want to use it, and mostly you won’t, then you have to remember to disable it.

    Covering indexes in Oracle, and branch size

    A covering index is an index that contains all the columns required by your query, so that you don’t have to do a TABLE ACCESS BY INDEX ROWID, which is the major cost of an index range scan. You don’t need any special feature for that in Oracle: just add the required columns at the end of the index. In the execution plan you will see the columns used as index keys for the range scan displayed in the ‘access’ predicates, and the further filtering done on the remaining columns in the ‘filter’ predicates. The ‘projection’ shows the columns that are returned in the rowset result.
    However, you may have seen that SQL Server has a special ‘INCLUDE’ keyword to separate the non-key columns, added only for filtering or projection, from the key columns used for access. What does it bring that Oracle doesn’t have?
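For comparison, a sketch with hypothetical index names (the first statement is SQL Server T-SQL, the second the Oracle equivalent):

```sql
-- SQL Server: RANDOM_TEXT is stored in the leaf entries but is not part of the key
CREATE INDEX DEMO_COVERING ON DEMO (RANGE) INCLUDE (RANDOM_TEXT);

-- Oracle: no INCLUDE clause; simply append the column to the index key
CREATE INDEX DEMO_COVERING ON DEMO (RANGE, RANDOM_TEXT);
```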

    An index entry is composed of a key and data associated with the key. The index is sorted on the key. The data for each key have no special order, as in a heap table. The idea of the SQL Server INCLUDE keyword is to separate the columns belonging to the key from the columns belonging to the data. It is not mandatory: you can add all columns to the key. But depending on the implementation, the benefit of separating them can be:

    • some data types may not be allowed in the key but allowed as data
    • sorting the data when not required may be a performance overhead
    • there can be limitations on the size of the key
    • having a larger key may require more space in the branches
    • adding sorted columns may change the clustering factor

    In Oracle, there are very few data types that cannot be indexed (like LONG). The limitation on the size of the key may come into play for large 12c Extended Datatypes. You can substring them, but that defeats the goal of covering indexes. I see two reasons why ‘INCLUDE’ indexes can be useful. The first reason is about the clustering factor. The second is about sorting the whole index entry and referencing it from the branches. I’ll detail those reasons later, but first here is an example.

    SQL> create table DEMO (UNIQU ,RANGE ,RANDOM_TEXT ,CONSTANT_TEXT ) as select rownum UNIQU , mod(rownum,4) RANGE , dbms_random.string('u',80) RANDOM_TEXT , lpad('x',80,'x') CONSTANT_TEXT from xmltable('1 to 100000');
    Table DEMO created.
    SQL> commit;
    Commit complete.

    This table has an all-distinct-values column (UNIQU) and a few-distinct-values column (RANGE), and I’ll use them for the key. I also have two columns to add as additional columns for covering queries: one with lots of distinct values (RANDOM_TEXT) and one with few distinct values (CONSTANT_TEXT).
    The first rows look like this:

    SQL> select * from DEMO order by ROWID fetch first 5 rows only;
    UNIQU RANGE RANDOM_TEXT                                                                      CONSTANT_TEXT
    ----- ----- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
    1 1 XCFNWCRCFBEPJPSHREUVVVTBUCCXLZMRPJPNQDTHWYRZRUORBPDOBCIRFHLICETULTCZTMPOCMUNQITV xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    2 2 XUSPNDOMPQKOIRCVDDTVYAGKRDGIXOSVUNMRAQLSRQGYKOFEXRQMCPXPYZYKRHHKDXGIINOUUAUJOLOO xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    3 3 ZBCVFTDSRUFIUTSIWOOOBWIRMEFUXNWLADAPUPFNPVYDLPQTOUZVXJKMGIPCGZESXFXOIYVMKNSMMZKB xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    4 0 VOIRCXFVSRVZQRZDRLQRHZWNGQJAAWJXWXJKRCJVPWYDJSZLJIOEWAMCFSRCUPSPPEKITJYHHOUQSVYQ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    5 1 UUSAMEVRWNLPGCUVMJWVVPDAENRYKIWWMIHTUJSZRQASMTYOVQNCGZGZIJZWNSOJVSIBMMUEAXOHJCOA xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    I’m adding indexes for access on RANGE as the index key: one with only the key, and two covering ones adding the random or the constant text:

    SQL> create index DEMO_RANGE on DEMO(RANGE) pctfree 50;
    Index DEMO_RANGE created.
    SQL> create index DEMO_RANGE_COVERING_RANDOM on DEMO(RANGE,RANDOM_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_RANDOM created.
    SQL> create index DEMO_RANGE_COVERING_CONSTANT on DEMO(RANGE,CONSTANT_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_CONSTANT created.

    An additional one adding the unique column in-between:

    SQL> create index DEMO_RANGE_COVERING_WITH_PK on DEMO(RANGE,UNIQU,CONSTANT_TEXT) pctfree 50;
    Index DEMO_RANGE_COVERING_WITH_PK created.

    And now for access with the unique column as the key:

    SQL> create index DEMO_UNIQU_COVERING_RANDOM on DEMO(UNIQU,RANDOM_TEXT) pctfree 50;
    Index DEMO_UNIQU_COVERING_RANDOM created.
    SQL> create index DEMO_UNIQU_COVERING_CONSTANT on DEMO(UNIQU,CONSTANT_TEXT) pctfree 50;
    Index DEMO_UNIQU_COVERING_CONSTANT created.

    Here are some interesting stats:

    SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
    PL/SQL procedure successfully completed.
    SQL> select index_name,blevel,leaf_blocks,num_rows,clustering_factor from user_indexes where table_name='DEMO' order by 2,3;
    INDEX_NAME                       BLEVEL LEAF_BLOCKS NUM_ROWS CLUSTERING_FACTOR
    -------------------------------- ------ ----------- -------- -----------------
    DEMO_RANGE                            1         353   100000              9757
    DEMO_RANGE_COVERING_RANDOM            2        2440   100000             99967
    DEMO_RANGE_COVERING_CONSTANT          2        2440   100000              9757
    DEMO_UNIQU_COVERING_RANDOM            2        2500   100000              2440
    DEMO_UNIQU_COVERING_CONSTANT          2        2500   100000              2440
    DEMO_RANGE_COVERING_WITH_PK           2        2565   100000              9757
    6 rows selected.

    Leaf size

    Regarding size, the covering indexes have approximately the same number of leaf blocks, because the included columns (RANDOM_TEXT and CONSTANT_TEXT) have the same size (80 bytes). Of course, the non-covering index is smaller (but needs a table access to query the additional columns). The key on UNIQU is slightly larger than the one on RANGE because the numbers go higher. The index with 3 columns is the largest.

    Clustering factor

    Regarding the clustering factor, there’s one outlier here which deserves an explanation. But before that, you must understand that this higher clustering factor is not important for a query using the covering index, such as select RANDOM_TEXT from DEMO where RANGE=0, because in that case you don’t read the table. However, for some queries you may cover only the filter predicates and still go to the table for the projection.
    The big problem is that when you add a column to an index to address a specific query, you don’t want to risk a side effect on another query, and changing the clustering factor is such a risk. One solution is to keep the old non-covering index (DEMO_RANGE), but then the side effect is DML overhead.
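A quick way to check whether a query is covered is to look for the absence of a table access step in its plan. A sketch, using the standard dbms_xplan display:

```sql
SQL> select RANDOM_TEXT from DEMO where RANGE=0;
SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +predicate'));
-- expected for a covered query: an INDEX RANGE SCAN (here on
-- DEMO_RANGE_COVERING_RANDOM) with no TABLE ACCESS BY INDEX ROWID step
```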

    To understand the change in clustering factor, we must go deeper into the Oracle implementation of index key and data. The ‘data’ part exists in Oracle indexes even when not specified explicitly with an INCLUDE clause: the ROWID is the data part. An index entry associates a key (the indexed columns) with a pointer to the table row (the ROWID). At least, this is the case for unique indexes, where each key is unique.

    Non-unique indexes are a special case: actually, Oracle implements only unique key indexes. When the indexed columns are not unique, the ROWID is stored in the key part of the index entry, and there is no data part. You should read Richard Foote, Differences between Unique and Non-Unique Indexes, for a detailed explanation.

    Branch size

    The previous statistics displayed only the number of branch levels, but we can get more detail about the branch size with ANALYZE INDEX.

    The non-covering index has only one branch block, the root, which references all the 353 leaf blocks containing the 100000 entries, with an average of 5479/352=15 bytes per branch entry:

    SQL> analyze index DEMO_RANGE validate structure offline;
    Index DEMO_RANGE analyzed.
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    2 384 353 1375000 352 1 5479 25000 2830616 1380479 49 25000 12502.5 19 1375000 353

    The covering index with lots of distinct values in the non-key column has more branch blocks, with an average of 34623/2439=14 bytes per branch entry:

    SQL> analyze index DEMO_RANGE_COVERING_RANDOM validate structure offline;
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2440 9475000 2439 6 34623 1 19558408 9509623 49 1 4 2 9475000 2440

    Here the number of branches is higher only because there are more leaves (as we have more columns), not because of the size of the branch entries, which are even smaller. They are smaller because a branch does not have to store the full value of all columns in order to identify one leaf block: only the first bytes are needed, not the full 80 bytes.

    The covering index with few distinct values in the non-key column has a lot more branch blocks, with an average of 234755/2439=96 bytes per branch entry:

    SQL> analyze index DEMO_RANGE_COVERING_CONSTANT validate structure offline;
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2440 9475000 2439 31 234755 25000 19759108 9709755 50 25000 12503.5 86 9475000 2440

    So here the size of the branch blocks is higher, because we have multiple leaf blocks for the same value of the constant text: the second column is not sufficient to identify only one leaf block. The full 80 bytes must be stored, and the rowid in addition to it.

    When the indexed column has only unique values, there is no need to store more in the branches (neither the additional columns nor the rowid), and only 12 bytes are needed here on average:

    SQL> analyze index DEMO_UNIQU_COVERING_RANDOM validate structure offline;
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

    As the second column is not needed, the size of branch is the same whether we use RANDOM_TEXT or CONSTANT_TEXT:

    SQL> analyze index DEMO_UNIQU_COVERING_CONSTANT validate structure offline;
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

    Now, the last one is my workaround for the higher branch size when adding a column that does not have a lot of distinct values: just add a column with more distinct values before it. Here I use the UNIQU one, but you probably have one that can be useful for your queries.

    SQL> analyze index DEMO_RANGE_COVERING_WITH_PK validate structure offline;
    SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
    ------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
    3 2688 2565 9963892 2564 6 37456 1 20557908 10001348 49 1 4 2 9963892 2565

    Now you get the idea. When creating an index, or adding columns for a covering index, and you have a choice of column order, try to make the first bytes selective enough that the branches need only a small substring to identify each leaf block (or lower-level branch).

    Block dumps

    If you want to see the details about the branch length, here is some information from block dumps. I got them with the following:

    SQL> column value new_value tracefile
    SQL> select value from v$diag_info where name='Default Trace File';
    SQL> exec for i in (select header_file, header_block from dba_segments where owner='DEMO' and segment_name='DEMO_RANGE') loop execute immediate 'alter system dump datafile '||i.header_file||' block '||(i.header_block+1); end loop;
    PL/SQL procedure successfully completed.
    SQL> host tail -20 &tracefile

    Here is the last branch entry for the root block of DEMO_RANGE where the first column is not very selective and then the rowid is required in the branch:

    row#351[3279] dba: 113261807=0x6c03cef
    col 0; len 2; (2): c1 04
    col 1; len 6; (6): 07 00 05 7b 00 25

    Here is the last branch entry for the root block of DEMO_RANGE_COVERING_RANDOM where instead of the rowid the 3 first bytes of the RANDOM_TEXT column are sufficient:

    row#3[8006] dba: 113263037=0x6c041bd
    col 0; len 2; (2): c1 04
    col 1; len 3; (3): 53 51 52
    col 2; TERM

    Here is the last branch entry for the root block of DEMO_RANGE_COVERING_CONSTANT where the full 80 bytes of CONSTANT_TEXT are not even sufficient, and the ROWID is needed as a 3rd column:

    row#28[5316] dba: 117444566=0x7000fd6
    col 0; len 2; (2): c1 04
    col 1; len 80; (80):
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
    78 78 78 78 78
    col 2; len 6; (6): 07 00 05 43 00 25

    Here is the last branch entry for the root block of DEMO_UNIQU_COVERING_CONSTANT where the first column is sufficient:

    row#2[8026] dba: 117447160=0x70019f8
    col 0; len 4; (4): c3 09 0d 04
    col 1; TERM

    So what?

    We probably don’t need a feature like SQL Server INCLUDE indexes in most cases. However, this may require thinking about the order of columns, mainly:

    • ensure that selective columns appear as early as possible (without compromising index access efficiency, of course) in order to lower the number of bytes required to address branches and leaves
    • when adding columns, try to add first a column that will keep the clustering factor you had with the rowid, such as a date of insert

    Added 14-APR-2018

    The conclusion above was focused only on columns added for covering indexes (I wrote it after reading wrong things in this stackoverflow thread), and it is not a general statement about putting selective columns first, which is a common misconception. Columns like this CONSTANT_TEXT (an extreme case of non-selectivity) can get better index compression (an Enterprise Edition feature) when placed in front. Read this thread, with answers and links from Richard Foote:


    This article, Covering indexes in Oracle, and branch size, appeared first on Blog dbi services.