Oakies Blog Aggregator

Friday Philosophy – Condoning Bad Behaviour

I used to work with a man called Nick(*). Nick was friendly enough, he was good at programming and he had very few annoying personal habits. Nick was easy to work with.

[Image: temper-tantrum.jpg — “I WON’T take my share of the Christmas Cover!” When he finally turned up.]

You see, Nick would sometimes turn up around 9am like everyone else. But more often he would get in just before 10am. And then it started to go past 10am and more like 10:15, 10:30… soon it was closer to 11am. He used to stay quite late to make up the time and he got through the programming work he was allocated. But it was a pain in the backside for everyone else. People who worked with him would be waiting for him to turn up and he would sometimes amble into a meeting after it had started.

Then I found myself managing Nick and about the first thing I did was have a little chat about his timekeeping. Nice, friendly Nick did not like this, he could not see the problem, he stayed late to do his work, the company was getting its “pound of flesh” as he put it. Why did it matter? So I explained the impact on the rest of the team and that core hours were clearly stated: 10:00-12:00 & 14:00-16:00. During those hours we all knew everyone was around and we could collaborate, it’s called teamwork.

Nick was having none of this – “If this was a problem, how come Sarah never raised it as an issue?”. And there was the reason that this was not just a small problem but a big problem. Yes, Sarah was his prior boss and she had not said anything to him about it. “You are just trying to show who is boss!”. Yes, yes I am, and being your boss is partly to tell you when you are doing things wrong, so stop it.

Nick’s prior boss had made the decision to condone bad behaviour, to let Nick come in later and later without intervening. Sometimes condoning bad behaviour is an active thing, like laughing at sexist/racist jokes, but usually it is a passive thing. If someone is doing something wrong and, as their manager, you do not challenge it then you are accepting it, you are condoning it. And once you have let it slip a few times, challenging it is harder. In Nick’s case it had resulted in the occasional late arrival becoming common, an accepted situation and a much more significant issue. It was also now a harder behaviour to challenge.

This situation is of course not limited to the manager/subordinate relationship; sometimes our friends or relatives behave badly and we have the choice to accept it or challenge it. I think that helps us immediately understand why we condone bad behaviour: to challenge it causes confrontation. And very few of us like confrontation.

In another situation I had, there was a guy who would suddenly just go off the deep end for no good reason. Something would annoy him and he would start shouting and getting angry, way beyond what was reasonable. Now, to challenge that kind of bad behaviour you know it is going to be hard work. Thankfully, my boss at the time did, and explained to me at length and very forcefully that I needed to be more mature and less of a dick.

I think we can all agree that we should not condone bad behaviour, but we can be reluctant to challenge it because of the conflict it brings.

Of course, a particularly difficult situation is when it is your boss (or parent!) who is behaving badly!

[Image: tantrum2.jpg — “You will respect my authority!”]

Also, at what point do you challenge the behaviour? Probably not at the first incident, especially if it is minor like turning up to work late. After all, it might be a one-off, and they may have reasons for the behaviour (one person I managed was turning up late as they were having a hell of a time at home; they needed some slack). Something more serious, such as socially unacceptable behaviour, you need to question right away. You also can’t challenge every small thing you perceive as wrong, or you will just annoy everyone and become regarded as a control freak/moral bore.

You also need to consider the impact of challenging them. If it is over something that would embarrass or offend them, it could sour your relationship with them and the rest of the team. Catching someone out lying can be tricky to deal with (I once had someone ask me for holiday on short notice as a relative was ill. But his new girlfriend also reported to me and she was honest about the “urgent need” for the holiday…). I think the most common decision made when the bad behaviour is one that the other person will be embarrassed or in denial over is to let it lie or challenge it “if it happens again”. Only, just like with Nick and his late arrivals, each time you delay addressing the bad behaviour it will get harder to do so.

I can’t claim that I always handled the condoning of bad behaviour as well as I should, I was by no means a perfect boss or friend (or relative). I think it is one of the hardest parts of being a manager, especially if you are averse to confrontation. But overall, I’ve suffered more in the long run by not challenging bad behaviour than I have by trying to handle it.

As to how you handle it, that’s a whole different topic…

(* Nick was not his real name, I changed it to protect the innocent… It was Dave)

PeopleSoft and Invalid Views in the Oracle Database

I was listening to the section on Invalid Views in PSADMIN Podcast #117 (@19:00). Essentially, when you drop and recreate a view that is referenced by a second view, the status of the second view in the database goes invalid. This is not a huge problem because as soon as you query the second view it is compiled. However, you would like to know whether any change to a view prevents any dependent views from compiling, although you would expect to have teased these errors out before migration to production.
The PeopleSoft solution to this is to include all the dependent views in the Application Designer project. However, as pointed out in the podcast, you are now releasing code, possibly unintentionally releasing code changes, and certainly updating last change dates on record definitions when really you just need to compile the database objects. PeopleSoft does this because it is a platform-generic solution, but really this is using the PeopleSoft Application Designer to solve a database management issue.
A similar problem also occurs in the Oracle database with dependent PL/SQL procedures and packages, where you sometimes get referential loops. Oracle provides a procedure, DBMS_UTILITY.COMPILE_SCHEMA, that recompiles all invalid objects in a schema, and any compilation errors are recorded in the ALL_ERRORS view. I think this is a much safer option.
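For reference, here is a minimal sketch of that approach. DBMS_UTILITY.COMPILE_SCHEMA and its COMPILE_ALL parameter are as documented; the follow-up query is simply one way to list anything that still fails to compile.

-- Recompile only the invalid objects in the current schema.
exec dbms_utility.compile_schema(schema => user, compile_all => FALSE)

-- List anything that is still invalid after the recompilation.
select object_type, object_name, status
from user_objects
where status = 'INVALID'
order by object_type, object_name;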

Here is a very simple (non-PeopleSoft) example:

drop table t purge;
drop view a;
drop view b;

create table t (a number);
insert into t values (1);
create view a as select a from t;
create view b as select a from a;

column object_name format a12
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID

Dropping and recreating view A renders view B invalid.

drop view a;
create view a as select a from t;

select object_type, object_name, status
from user_objects
where object_name IN('T','A','B');

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

Just querying B makes it valid again.

select * from b;
select object_type, object_name, status
from user_objects where object_name IN('T','A','B');

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID

Let's make B invalid again by rebuilding A, but this time I will change the name of the column in view A from A to T so that view B cannot compile without an error. I can recompile every invalid object in the schema by calling DBMS_UTILITY.COMPILE_SCHEMA. However, B remains invalid because there is an error.

drop view a;
create view a (t) as select a from t;

EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'SCOTT');
select object_type, object_name, status
from user_objects where object_name IN('T','A','B');

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

I can query the errors from USER_ERRORS.  So now I have recompiled all invalid objects and have a report of the exceptions that I can work on fixing.
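For reference, a plain select over USER_ERRORS, as sketched here, produces a report of the same shape as the output below:

select name, type, sequence, line, position,
       text, attribute, message_number
from user_errors;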

NAME
---------------------------------------------------
TYPE SEQUENCE LINE POSITION
------------ ---------- ---------- ----------
TEXT
---------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
B
VIEW 1 0 0
ORA-00904: "A": invalid identifier
ERROR 0

N.B.: if you use CREATE OR REPLACE VIEW, the dependent view is not left invalid unless there is an error. Unfortunately, Application Designer always drops and recreates views.

Conferences 2017

Last year I went to a few conferences. At some point I thought I should record some of the sessions to let more people see them as well.
So I took a cheap action camera and recorded several presentations. The video quality is not great (mostly due to lighting) but it is still enough to give an idea of what it was like in the room.
Here are links to the videos. Enjoy!

Hatem Mahmoud – Memory Access Tracing/Profiling

Jonathan Lewis – Just Don’t Do It

Panel Discussion at POUG2017

Marcin Przepiorowski – dNFS for DBAs

Neil Chandler – Why Has My Plan Changed

Timur Akhmadeev – Common Pitfalls in Complex Apps Performance Troubleshooting (same presentation in Russian at Russian OUG meetup in Moscow)

Neil Chandler – Histograms are Evil like Chocolate is Evil (part of the same presentation recorded in Poland as well)

Kamil Stawiarski – VOODOO: the black magic of cheap cross database replication

Roger MacNicol – Table Access Full

Mike Dietrich – Ensure Performance Stability When Upgrading Oracle Database

Adam Bolinski – High Performance Oracle Environment on Not Only NFS

Oracle Solaris 11.4 Public Beta Released

Yesterday the Oracle Solaris 11.4 Beta was released to the public. You can download it from OTN to have your go with it. Please read the documentation to learn all about the new and improved features. And remember that you can always contact me in case you need training on the most advanced Operating System […]

Creating synthetic and random test data from an existing table

We have all been there at some point. Either we need to run a test but can't bring production data
outside of the production network, or we need to produce a test case for a support organization that
is not allowed to view production data. What to do?


OSWatcher, Tracefile Analyzer, and Oracle 12.2 single instance

I have previously written about TFA, OSWatcher et al. for Oracle 12.1. Since then, a lot of things have happened and an update for 12.2 had been on my to-do list for far too long. Experience teaches me that references to support notes and official documentation get out of date rather quickly, so as always, if you find anything that has changed please let me know via the comments section and I’ll update the post.

This is going to be a 3 part mini-series to save you having to go over 42 pages of text … In this first part I’m going to have a look at single instance Oracle. In part 2 I’ll have a look at Oracle Restart environments, and finally in part 3 I’ll finish the series by looking at a 12.2 RAC system.

The environment

I am using a small VM to install Oracle 12.2.0.1.0 (initially) on Oracle Linux 7.4 with kernel UEK4. As always, my EE database binaries go into /u01/app/oracle/product/12.2.0.1/dbhome_1.

The installation/testing (and reporting) of my findings follows this approach:

  • Install the O/S
  • Install Oracle 12.2.0.1.0 EE
  • Create an EE database (not shown here)
  • Patch binaries and database to 12.2.0.1.180116
  • Upgrade TFA to 12.2.1.3.1 as downloaded from My Oracle Support DOC ID 1513912.1

These were the current versions at the time of writing.

Install Oracle 12.2.0.1.0

The first step after the O/S is provisioned is to install the Oracle software, obviously. I have noticed that TFA is part of the Oracle binaries. Towards the end of the installation process, you are prompted to execute root.sh, as normal. On my system, root.sh had the following contents:

      1 #!/bin/sh
      2 unset WAS_ROOTMACRO_CALL_MADE
      3 . /u01/app/oracle/product/12.2.0.1/dbhome_1/install/utl/rootmacro.sh "$@"
      4 . /u01/app/oracle/product/12.2.0.1/dbhome_1/install/utl/rootinstall.sh
      5 /u01/app/oracle/product/12.2.0.1/dbhome_1/suptools/tfa/release/tfa_home/install/roottfa.sh
      6 /u01/app/oracle/product/12.2.0.1/dbhome_1/install/root_schagent.sh
      7 
      8 #
      9 # Root Actions related to network
     10 #
     11 /u01/app/oracle/product/12.2.0.1/dbhome_1/network/install/sqlnet/setowner.sh
     12 
     13 #
     14 # Invoke standalone rootadd_rdbms.sh
     15 #
     16 /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/install/rootadd_rdbms.sh
     17 
     18 /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/install/rootadd_filemap.sh

After a few variables are set/defined by sourcing in files created during the installation, roottfa.sh is called (see line 5). It allows you to configure TFA to run as a background (daemon) process. I decided to go with that option after consulting chapter 4 in the 12.2 Autonomous Health Framework documentation and reading about the advantages of using TFA as a daemon. This may or may not be the right way to run TFA for you, the documentation is really good and helps you decide. Here is the transcript of my root.sh execution:

[root@server5 ~]#  /u01/app/oracle/product/12.2.0.1/dbhome_1/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes
Installing Oracle Trace File Analyzer (TFA).
Log File: /u01/app/oracle/product/12.2.0.1/dbhome_1/install/root_server5_2018-01-22_17-21-41-005116657.log
Finished installing Oracle Trace File Analyzer (TFA)

Once that message is shown, TFA is configured and controlled via a systemd unit file:

[root@server5 ~]# systemctl cat oracle-tfa
# /etc/systemd/system/oracle-tfa.service
# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
#
# Oracle TFA startup
#
[Unit]
Description=Oracle Trace File Analyzer
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.tfa run >/dev/null 2>&1 

The service is enabled and running.

After the completion of roottfa.sh, TFA resides in $ORACLE_BASE/tfa and its subdirectories. This is documented in the 12.2 Autonomous Health Framework chapter 4.2.3 and has an interesting implication: if you set your environment using oraenv, you might find that you get errors invoking tfactl, such as these on my VM. I have used a “minimum install” for my operating system and quite specifically didn’t add any additional perl modules in my kickstart file. Now, when invoking tfactl after having set my environment using oraenv, I find that there are missing perl modules in my system’s perl installation:

[oracle@server5 ~]$ . oraenv
ORACLE_SID = [NCDB] ? NCDB
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@server5 ~]$ tfactl status
Can't locate Digest/MD5.pm in @INC (@INC contains: 
/usr/local/lib64/perl5 /usr/local/share/perl5 
/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl
/usr/lib64/perl5 /usr/share/perl5 . /u01/app/oracle/tfa/server5/tfa_home/bin 
/u01/app/oracle/tfa/server5/tfa_home/bin/common /u01/app/oracle/tfa/server5/tfa_home/bin/modules 
/u01/app/oracle/tfa/server5/tfa_home/bin/common/exceptions) 
at /u01/app/oracle/tfa/server5/tfa_home/bin/common/tfactlshare.pm line 7628.
BEGIN failed--compilation aborted at /u01/app/oracle/tfa/server5/tfa_home/bin/common/tfactlshare.pm line 7628.
Compilation failed in require at /u01/app/oracle/tfa/server5/tfa_home/bin/tfactl.pl line 223.
BEGIN failed--compilation aborted at /u01/app/oracle/tfa/server5/tfa_home/bin/tfactl.pl line 223.

The output has been changed for readability (originally I was missing Data::Dumper as well). After studying the documentation (still section 4.2.3 in the aforementioned document), it turns out to be a user mistake. As I said before, after TFA is configured using roottfa.sh as part of the root.sh script execution, it runs in daemon mode and crucially, is available from $ORACLE_BASE/tfa. I found that location being referred to in /etc/init.d/init.tfa as well. When I simply typed “tfactl” into my terminal window, I invoked a different “tfactl”. There is a lot more to be said about this, and I will try and do so in a different post.

NB: the same section 4.2.3 in the documentation states that even should you not run TFA in daemon mode, you can still make use of “user mode TFA” in $ORACLE_HOME, although there are certain restrictions. I haven’t pursued that route.

Anyway, after switching to the location where TFA is actually installed ($ORACLE_BASE/tfa), all is well. It seems that running roottfa.sh creates a new “Oracle” perl:

[root@server5 ~]# find /u01/app/oracle/tfa -name perl
/u01/app/oracle/tfa/server5/tfa_home/perl
/u01/app/oracle/tfa/server5/tfa_home/perl/bin/perl

I found Digest::MD5 and Data::Dumper in /u01/app/oracle/tfa/server5/tfa_home/perl/lib/5.22.0/x86_64-linux-thread-multi.

So let’s try and get the status of the current installation from $ORACLE_BASE/tfa:

[oracle@server5 ~]$ /u01/app/oracle/tfa/bin/tfactl status

Access Denied: Only TFA Admin can run this command

Nearly there: the perl modules are no longer reported to be missing, the “Oracle” perl installation appears to be used now. But what about this error message? I read in section 4.2.4 “Securing Access to Oracle Trace File Analyzer” (still referring to the Autonomous Health Framework manual) that access to TFA is restricted. However, the RDBMS owner should have been granted access automatically.

Using the commands shown in the manual I checked permissions and it turns out that the oracle user is configured to have access to TFA.

[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl access lsusers
.---------------------------------.
|       TFA Users in server5      |
+-----------+-----------+---------+
| User Name | User Type | Status  |
+-----------+-----------+---------+
| oracle    | USER      | Allowed |
'-----------+-----------+---------'

In fact, I can query TFA’s status using the “print status” command as oracle (/u01/app/oracle/tfa/bin/tfactl print status). I compared the output of “tfactl -help” between oracle and root, and there are more options available when running as root. This might explain the above error.

What is the status now?

TFA is now set up and working, but using the base release:

[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 18786 | 41482 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

It should probably be patched to something more recent. I’ll try that in 2 ways: first by applying the January 2018 RU to see if the version changes. Since the standard deployment doesn’t come with OSWatcher which I’m particularly interested in, I’ll download and apply TFA 12.2.1.3.1 next. As with all patching, I need to make sure that I have working backups which I’m comfortable restoring should anything go badly wrong.

Status after applying the January RU

A combination of opatch/datapatch later, my system is on the latest RU patchlevel:

[oracle@server5 OPatch]$ opatch lspatches
27105253;Database Release Update : 12.2.0.1.180116 (27105253)

OPatch succeeded.

However, this did not have an effect on the version of TFA in $ORACLE_BASE:

[root@server5 ~]# systemctl restart oracle-tfa
[root@server5 ~]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 24042 | 37226 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

Not quite what I expected after reading the docs: the installation of the latest RU should have updated TFA as well. But maybe I got something wrong on my end. The RU readme did not have any reference to TFA that I could find.

Yet it doesn’t matter: I wanted to have all the great support tools anyway (and they aren’t shipped with “stock TFA”), so it was time to install the latest version from MOS.

Upgrading TFA using 12.2.1.3.1 (MOS)

The patch is quite simple and well documented. If TFA is up and running in daemon mode as in my example, the patching tool will recognise that fact and patch the installation in-place. After a couple of minutes on my VM, I have a new version:

[root@server5 stage]# /u01/app/oracle/tfa/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host    | Status of TFA | PID   | Port  | Version    | Build ID             | Inventory Status |
+---------+---------------+-------+-------+------------+----------------------+------------------+
| server5 | RUNNING       | 28105 | 39100 | 12.2.1.3.1 | 12213120171215143839 | COMPLETE         |
'---------+---------------+-------+-------+------------+----------------------+------------------'

The MOS version comes with lots of useful tools as well:

[oracle@server5 stage]$ /u01/app/oracle/tfa/bin/tfactl toolstatus

.------------------------------------------------------------------.
|                   TOOLS STATUS - HOST : server5                  |
+----------------------+--------------+--------------+-------------+
| Tool Type            | Tool         | Version      | Status      |
+----------------------+--------------+--------------+-------------+
| Development Tools    | orachk       |   12.2.0.1.3 | DEPLOYED    |
|                      | oratop       |       14.1.2 | DEPLOYED    |
+----------------------+--------------+--------------+-------------+
| Support Tools Bundle | darda        | 2.10.0.R6036 | DEPLOYED    |
|                      | oswbb        |        8.1.2 | RUNNING     |
|                      | prw          | 12.1.13.11.4 | NOT RUNNING |
+----------------------+--------------+--------------+-------------+
| TFA Utilities        | alertsummary |   12.2.1.1.0 | DEPLOYED    |
|                      | calog        |   12.2.0.1.0 | DEPLOYED    |
|                      | changes      |   12.2.1.1.0 | DEPLOYED    |
|                      | dbglevel     |   12.2.1.1.0 | DEPLOYED    |
|                      | events       |   12.2.1.1.0 | DEPLOYED    |
|                      | grep         |   12.2.1.1.0 | DEPLOYED    |
|                      | history      |   12.2.1.1.0 | DEPLOYED    |
|                      | ls           |   12.2.1.1.0 | DEPLOYED    |
|                      | managelogs   |   12.2.1.1.0 | DEPLOYED    |
|                      | menu         |   12.2.1.1.0 | DEPLOYED    |
|                      | param        |   12.2.1.1.0 | DEPLOYED    |
|                      | ps           |   12.2.1.1.0 | DEPLOYED    |
|                      | pstack       |   12.2.1.1.0 | DEPLOYED    |
|                      | summary      |   12.2.1.1.0 | DEPLOYED    |
|                      | tail         |   12.2.1.1.0 | DEPLOYED    |
|                      | triage       |   12.2.1.1.0 | DEPLOYED    |
|                      | vi           |   12.2.1.1.0 | DEPLOYED    |
'----------------------+--------------+--------------+-------------'

Note :-
  DEPLOYED    : Installed and Available - To be configured or run interactively.
  NOT RUNNING : Configured and Available - Currently turned off interactively.
  RUNNING     : Configured and Available.

[oracle@server5 stage]$ 

Since I care a lot about OSWatcher, I was very pleased to see it running.

[oracle@server5 stage]$ ps -ef | grep -i osw
oracle   28344     1  0 10:58 ?        00:00:00 /bin/sh ./OSWatcher.sh 30 48 NONE /u01/app/oracle/tfa/repository/suptools/server5/oswbb/oracle/archive
oracle   28934 28344  0 10:58 ?        00:00:00 /bin/sh ./OSWatcherFM.sh 48 /u01/app/oracle/tfa/repository/suptools/server5/oswbb/oracle/archive
oracle   30662 27252  0 11:01 pts/4    00:00:00 grep --color=auto -i osw
[oracle@server5 stage]$ 

Kindly refer to the documentation for more information about TFA. It’s quite a collection of tools, and it helps you in so many ways…

Index Skip Scan: Potential Use Case or Maybe Not ? (Shine On You Crazy Diamond)

While answering a recent question on a LinkedIn forum, I got to thinking whether there’s a potential use case for an INDEX SKIP SCAN that I hadn’t previously considered. I’ve discussed Index Skip Scans previously (as I did here), a feature introduced in Oracle9i that allows an index to be considered by the CBO even […]

Histogram Threat

Have you ever seen a result like this:


SQL> select sql_id, count(*) from V$sql group by sql_id having count(*) > 1000;

SQL_ID		COUNT(*)
------------- ----------
1dbzmt8gpg8x7	   30516

A client of mine, who had recently upgraded to 12.2.0.1 RAC and was using DRCP (database resident connection pooling) for a PHP application, was seeing exactly this type of behaviour for a small number of very simple SQL statements and wanted to find out what was going on, because they were also seeing an undesirable level of contention in the library cache when the system load increased.

In this note I just want to highlight a particular detail of their problem – with an example – showing how easily histograms can introduce problems if you don’t keep an eye out for the dangers.

One of their queries really was as simple as this:

select count(*), sum(skew2) from t1 where skew = :b1;

And I’m going to use this query to model the problem. All I have to do is arrange for a data set that results in a hybrid (or height-balanced) histogram being created on the skew column, and then run the query lots of times with different input bind values. In the case of the client there were around 18,000 possible values for the column, and the number of rows per value varied from 1 to about 20,000 – but whatever the number of rows selected the optimum execution plan was always going to be an indexed access.


rem
rem     Script:         acs_child_cursors.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2018
rem

create table t1 (
        id, skew, skew2, padding
)
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 3e3
)
select
        rownum  id,
        g1.id   id1,
        g2.id   id2,
        rpad('x',100)
from
        generator       g1,
        generator       g2
where
        g2.id <= g1.id     -- > comment to avoid WordPress format issue
order by
        g2.id, g1.id
;

alter table t1 modify skew not null;
alter table t1 modify skew2 not null;

create index t1_skew on t1(skew);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns skew size 254'
--              method_opt       => 'for all columns size 1'
        );
end;
/

variable b1 number
exec :b1 := 0;

set termout off
@start_1000
@start_1000
@start_1000
set termout on

set linesize 120
set trimspool on

column sql_text format a55

select
        child_number, plan_hash_value, executions,
        is_bind_sensitive,
        is_bind_aware,
        is_shareable,
        sql_text
from
        v$sql
where   sql_id = 'b82my582cnvut'
;

The data set contains 3,000 distinct values for skew and the way I’ve generated the rows means that the value N will appear N times – so there’s one row with the value 1 and 3,000 rows with the value 3,000, and so on for a total of 4,501,500 rows. If you want to run the test, the code is likely to take a couple of minutes to complete, requiring roughly 700 MB of disk space.
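As a quick sanity check of that arithmetic (3,000 × 3,001 / 2 = 4,501,500), a query along these lines can be run once the table is built – a sketch, reusing the column name from the script above:

select
        sum(cnt)        total_rows,
        count(*)        distinct_values,
        max(cnt)        max_rows_per_value
from    (
        select skew, count(*) cnt
        from t1
        group by skew
        )
;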

The mechanism of the script start_1000 is something I published a few years ago, and essentially it executes a script called start_1.sql 1,000 times which, for this test, contains the following two lines:


exec :b1 := :b1 + 1

select count(*), sum(skew2) from t1 where skew = :b1;

The net effect of the 3 calls to start_1000.sql is that my simple SQL statement is called once in turn for each value of skew from 1 to 3,000. The SQL_ID of the statement is ‘b82my582cnvut’ which I’ve used to query v$sql when the run is complete, with the following result:


CHILD_NUMBER PLAN_HASH_VALUE EXECUTIONS I I I SQL_TEXT
------------ --------------- ---------- - - - -------------------------------------------------------
	   0	  1041516234	    498 Y N N select count(*), sum(skew2) from t1 where skew = :b1
	   1	  1041516234	     25 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   2	  1041516234	    104 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   3	  1041516234	    308 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   4	  1041516234	    429 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   5	  1041516234	    640 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   6	  1041516234	     31 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   7	  1041516234	    305 Y Y N select count(*), sum(skew2) from t1 where skew = :b1
	   8	  1041516234	    660 Y Y Y select count(*), sum(skew2) from t1 where skew = :b1

9 rows selected.

I’ve got 9 child cursors, all with the same execution plan, all except the last labelled as not shareable (you may find that you don’t get exactly the same result; on repeated tests I got between 5 and 9 cursors). Given enough time all these cursors except the last (shareable) one would be aged out of the library cache. In the case of the client, who had a shared pool that was probably quite a bit larger than needed, the number of non-shareable cursors could get very large and they were hanging around for ages. Typically most of the cursors would report an execution count less than 30, with many showing just one or two executions and a handful showing execution counts in the region of several hundred (and that handful were the ones that were still marked as shareable).

After eliminating the critical histogram (using dbms_stats.delete_column_stats()) and eliminating the redundant child cursors (using dbms_shared_pool.purge()) the massive proliferation stopped happening and the performance threat disappeared. The only issue then was to change the table preferences for stats collection on this table to add the clause “for columns size 1 skew” so that the histogram would not be recreated on the next gather.
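For anyone who wants to replay those steps against the test case above, here is a hedged sketch. The dbms_stats and dbms_shared_pool calls are the documented ones, but note that dbms_shared_pool.purge() takes an “address, hash_value” pair rather than the sql_id, and the set_table_prefs() call below shows a simplified method_opt for illustration:

-- Remove the column stats (including the histogram) on skew.
exec dbms_stats.delete_column_stats(ownname => user, tabname => 'T1', colname => 'SKEW')

-- Stop the next stats gather from recreating the histogram.
exec dbms_stats.set_table_prefs(user, 'T1', 'METHOD_OPT', 'for all columns size 1')

-- Find the address and hash_value of the statement, then purge its cursors
-- ('C' flags the object as a cursor); substitute the values returned by the query.
select address, hash_value from v$sqlarea where sql_id = 'b82my582cnvut';
exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C')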

Further Observations.

I suspect that part of the client’s problem – something that exaggerated the count rather than causing it – could be attributed to using DRCP (database resident connection pooling), which probably still has some quirky bits of behaviour. It was also true that the client’s connection pool was probably much bigger than it needed to be, so if there were any funny little bits of emergent behaviour at scale the client would probably have hit them.

The problem of escalating child cursors is one that Oracle has been working on for quite a long time, and there’s a (hidden) parameter that was introduced late in 11gR2 (though I think that the 11g mechanism first appeared through a fix control) to allow Oracle to mark a parent cursor obsolete if it acquired too many child cursors. There’s a note on MOS that the client had read on this topic: “Cursor Mutex X Wait Events: After Upgrading To 12.2 (Doc ID 2298504.1)”, which looked as if it was describing their symptoms, so they had set this parameter (_cursor_obsolete_threshold) from 8192 (the 12.2 default) down to 1024 (the default for 12.1 and earlier versions). This had been of some help with the library cache problem. When the sql_id at the top of this article reported 30,516 child cursors that would be 29 “obsolete” parent cursors with 1,024 child cursors each and one “live” parent cursor with 820 child cursors.
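For reference, a sketch of how you might inspect (as SYS) and lower that hidden parameter – with the usual caveat that underscore parameters should only be changed under the guidance of Oracle Support:

-- As SYS: check the current value of the hidden parameter.
select i.ksppinm param_name, v.ksppstvl param_value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_cursor_obsolete_threshold';

-- Lower it to the pre-12.2 default (set in the spfile, takes effect after a restart).
alter system set "_cursor_obsolete_threshold" = 1024 scope = spfile;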

You can appreciate that if Oracle has to find a parent cursor and pin it while walking a chain of 30,516 child cursors that’s likely to take a lot more time than walking a chain of 30 parent cursors (while holding a library cache mutex, perhaps) to find the one non-obsolete parent, then acquiring the parent mutex to walk a chain of 820 child cursors.

I don’t know the exact details of the mechanisms involved with manipulating the various chains – but there are likely to be times when one process needs exclusive mutexes/latches to change a chain while other processes are holding shared mutexes/latches to search the chain. When you’ve got 30,000 child cursors in total the important question becomes: “where’s the trade-off between making the child chains shorter and making the parent search longer?” (I’d expect – or hope – that the Oracle developers had actually found a way to code the obsolescence so that the new parent was first in the search, and the rest were never visited, of course.)

One of the suggestions I made to try to alleviate the problem – which I had assumed was due in part to the degree of concurrent execution of the statement – was to mark the cursor as “hot”. This resulted in 36 different sql_ids for the same statement (the client machine had 72 CPUs). This had some effect but ultimately meant that there were 36 chains of parents that would eventually end up with lots of child cursors – the only difference was the rate at which the total cursor count was growing (a lot slower), balanced against the threat that they might all manage to get to 30,000+ child cursors! Instead, as a quick and dirty workaround, I supplied the client with a script that could be run at quieter moments during the day to call dbms_shared_pool.purge() for the sql_id to flush all its cursors from the library cache.

One final oddity – which isn’t from the client site. When I changed my test above to avoid generating the histogram (using the commented out method_opt “for all columns size 1”) I found that I still got two child cursors; the first ended up marked as bind-aware but insensitive and non-shareable, the second appeared (time and time again) when my bind value got to 499 and was marked bind-sensitive, bind-aware and shareable.  I still have no idea why this happened.

Footnote:

When repeating the test I started with a “flush shared_pool” – but I’ve removed this line from the code above in case anyone ran it without considering the possible side effects. You do need to clear previous copies of the key statement from the library cache, though, if you want to avoid confusing the issue on repeated runs of the test.

 

Result Cache: when *not* to use it

I recently encountered a case where the result cache was used incorrectly, leading to high contention when the application hit a peak of load. It was no surprise to see that the function was called with an ‘ID’ as its argument, which may have thousands of values in this system. I mentioned to the software vendor that the result cache should be used only for functions that are frequently called with the same arguments, not for random values, even if each value has 2 or 3 identical calls. And, to back this up, I looked at the Oracle Documentation for the part which explains when the result cache can be used and when it should be avoided.

But I found nothing relevant. This is another(*) case where the Oracle Documentation is of no help. Without an explanation of how a feature works, people fail to use it properly: most will not take the risk of using it, and a few will use it in the wrong place, before blacklisting the feature for good.

(*) By another case, I’m thinking about Kamil Stawiarski’s presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’), and it doesn’t help you decide when the result cache can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on.
  • A cache miss, when the result has been invalidated, is expensive.
  • A cache miss, when the result is not yet in the result cache, is expensive.
  • The ‘expensive’ here is a scalability issue: not detected in unit tests, but causing big contention when the load increases.

Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch             2          0          0          0          0
0000000060047870 Result Cache: SO Latch             0          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation, …) or in shared mode – since 11gR2 – when reading only. This has been explained by Alex Fatkulin http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of the result cache comes only on a cache hit: when the result of the function is already there and has not been invalidated. If you frequently call the same function with the same parameter values, and there are no changes in the related tables, then you are in the good case.

But if there was a modification of one of the tables, even some rows that have nothing to do with the result, then you will have an overhead: exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to get this exclusive latch. And if you have multiple sessions experiencing a cache miss, then they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions when the call to the function is frequent with different values.

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch             2          0          0          0          0
0000000060047870 Result Cache: SO Latch             0          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

Result Cache Hit

This calls the function repeatedly, always with the same argument, and with no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the next 999 calls are cache hits. This is the perfect case for Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          1009          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
Want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1, because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0, I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          6005          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch          7005          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

The function depends on the DEMO table, and I make some modifications to it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all previous results. A new run will be all cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR             NAME                            GETS     MISSES     SLEEPS  SPIN_GETS  WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch         12007          0          0          0          0
0000000060047870 Result Cache: SO Latch             1          0          0          0          0
0000000060047910 Result Cache: MB Latch             0          0          0          0          0

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache latch, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load test it with multiple sessions. The consequence can be this kind of high contention lasting for minutes or hours:

Top 5 Timed Events                                            Avg %Total
~~~~~~~~~~~~~~~~~~                                           wait   Call
Event                                        Waits    Time (s)  (ms)   Time
----------------------------------- ------------ ----------- ------ ------
latch free                               858,094   1,598,387   1863   78.8
enq: RC - Result Cache: Contention       192,855     259,563   1346   12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer relies on his good results in unit testing and implements Result Cache in each function. The consequence will be seen too late, in production, at a time of peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but the risk is performance degradation in the ‘good cases’. Or you can recompile the procedures with RESULT_CACHE removed, but then you may introduce new contention on the library cache.
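For completeness, a minimal sketch of that bypass. DBMS_RESULT_CACHE is the documented package; the status and memory report calls are just convenient checks:

-- Bypass the result cache database-wide (no more cache hits or cache writes).
exec dbms_result_cache.bypass(true)

-- Check the cache status and print a usage report (needs serveroutput on).
set serveroutput on
select dbms_result_cache.status from dual;
exec dbms_result_cache.memory_report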

 

This article, Result Cache: when *not* to use it, first appeared on Blog dbi services.