# Oakies Blog Aggregator

## The CEIL function, dates and floating point numbers

Oh yes indeed, I have dug deep into the well of cheesy blog post titles for this one. But hey, I have two teenage children so I figure that I have reached the age where you’ll need to permit me my share of terrible Dad-joke style puns.

Consider the two dates below (both of which include a time component):

```
04-SEP-2020 00:00:00
05-SEP-2020 00:08:02
```

Let us do some quick arithmetic in our heads. How many minutes are there between those two dates? To avoid any reader headaches I’ll throw in the fact that there are 1440 minutes in a day. So there is one day between them (1440) plus an additional 8 minutes past midnight bringing us to 1448, plus a couple of seconds left over.

If we have been tasked to derive the minutes between those two date/times, we probably need to take into account the seconds as well, and for the sake of discussion, we’ll assume that any portion of a minute should be rounded up to the next minute. That all flows nicely into the following expression to derive the minutes between the two dates:

```
ceil( ( date2 - date1 ) * 1440 )
```

I’ll apply that expression to the two values above, and all works as expected:

```
SQL> with t as
2  ( select
3       to_date('04-SEP-2020 00:00:00','DD-Mon-YYYY HH24:MI:SS') d1,
4       to_date('05-SEP-2020 00:08:02','DD-Mon-YYYY HH24:MI:SS') d2
5    from dual
6  )
7  select ceil((d2-d1)*1440) x
8  from t;

X
----------
1449
```

Bear with me, we’re going to move past the realm of the bleedin’ obvious shortly.

Now let me adjust the dates a little, to the values below:

```
04-SEP-2020 00:00:00
05-SEP-2020 00:08:00
```

All I have done is remove the seconds, so this seems an even easier proposition because we have an exact number of minutes, namely 1448. I’ll re-run my SQL and let’s look at the result:

```
SQL> with t as
2  ( select
3       to_date('04-SEP-2020 00:00:00','DD-Mon-YYYY HH24:MI:SS') d1,
4       to_date('05-SEP-2020 00:08:00','DD-Mon-YYYY HH24:MI:SS') d2
5    from dual
6  )
7  select ceil((d2-d1)*1440) x
8  from t;

X
----------
1449
```

Hmmmm…not so great. This is the danger inherent in floating point arithmetic. Being able to use numeric expressions and operators with dates is a very cool feature of the Oracle Database, but whenever numbers get involved, you should always be on the lookout for a risk that all numeric operations in all software products must come to grips with, namely, we don’t have infinite precision on a chip.

So what is happening here? To find out, we need to remove the CEIL function and pad out the precision of our result a little:

```
SQL> col x format 99999.999999999999999999999999999999999999999
SQL> with t as
2  ( select
3       to_date('04-SEP-2020 00:00:00','DD-Mon-YYYY HH24:MI:SS') d1,
4       to_date('05-SEP-2020 00:08:00','DD-Mon-YYYY HH24:MI:SS') d2
5    from dual
6  )
7  select (d2-d1)*1440 x
8  from t;

X
----------------------------------------------
1448.000000000000000000000000000000000006000
```

As you can see, the operation was out by a mere 6 undecillionths of a minute (and no, I did not make the term undecillion up).

But CEIL does not care about magnitudes; it sees that the result is ever so slightly larger than 1448 and thus takes it up to the ceiling integer of 1449. You might be thinking “We’ll just swap CEIL for ROUND”, but that does not meet the requirement that partial minutes always be taken up to the next whole minute.

CEIL is indeed the correct operation here, but we need to apply it only once we have explicitly taken care of any floating point anomalies. There are only 86400 seconds in a day, so the most precision we will ever need for the difference between two date/times is 5 decimal places. Thus the correct expression should be:

```
SQL> with t as
2  ( select
3       to_date('04-SEP-2020 00:00:00','DD-Mon-YYYY HH24:MI:SS') d1,
4       to_date('05-SEP-2020 00:08:00','DD-Mon-YYYY HH24:MI:SS') d2
5    from dual
6  )
7  select ceil(round((d2-d1)*1440,5)) x
8  from t;

X
----------
1448

SQL> with t as
2  ( select
3       to_date('04-SEP-2020 00:00:00','DD-Mon-YYYY HH24:MI:SS') d1,
4       to_date('05-SEP-2020 00:08:02','DD-Mon-YYYY HH24:MI:SS') d2
5    from dual
6  )
7  select ceil(round((d2-d1)*1440,5)) x
8  from t;

X
----------
1449
```
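The same hazard, and the same round-before-ceil guard, applies in any language that uses binary floating point. Here is a minimal Python sketch of the idea, with the classic `(0.1 + 0.2) * 10` standing in for the date arithmetic:

```python
import math

# 0.1 and 0.2 have no exact binary representation; their sum lands a hair
# above 0.3, so the scaled result drifts just past the true value of 3
# and the ceiling jumps to the next whole number.
x = (0.1 + 0.2) * 10

naive   = math.ceil(x)            # 4 -- the spurious extra unit
guarded = math.ceil(round(x, 5))  # 3 -- round away the noise, then ceiling

print(x, naive, guarded)
```

The guard works because rounding to a precision coarser than the floating point noise, but finer than any real fraction of the data, snaps the result back to its exact value before the ceiling is applied.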

TL;DR: If you’re using numeric operations in Oracle, and not using the NUMBER data type directly, then it is your responsibility to deal with the nuances of floating point arithmetic that all computers perform.

## Video : Simple Oracle Document Access (SODA) for SQLcl

In today’s video we give a demonstration of Simple Oracle Document Access (SODA) for SQLcl. SODA is a feature of Oracle REST Data Services (ORDS), but SODA for SQLcl allows you to access the document store directly from SQLcl.


The star of today’s video is Kris Rice, who’s essentially singing “dot com” underwater.

## Docs hack to save you time!

The tech world is a big and varied place, so whilst there are some database customers running on 19c who have applied the latest RUs etc to get them to (as of the time of writing) 19.8, there are plenty of customers running on 18c, plenty running on 12.2, plenty on 12.1, and so forth all the way down the version tree.

What this means is that we have a plethora of versions of the documentation available to serve those customers, and since all of those versions are in active use, the various search engines of the world have to decide which versions of the documentation best match the search items you have entered.

Thus if you search for “DBMS_JOB”, then your search engine of choice might show you the 12c version, or the 10.2 version, or the 19c version, all dependent on the multitude of things that search engines use to decide what you should be shown first.

For me, no matter what version of the database I am running, I almost always want to see the most up to date version of the documentation. Whilst it’s true that there may be some discrepancies in functionality between versions, I think this is far outweighed by the fact that documentation (like code) is a “living, breathing” thing. It evolves over time. Feedback from the community, omissions, errors etc all go into making the documentation improve over time in terms of the comprehensibility and completeness of the content. A simple example of this is the presence now of LiveSQL demo hyperlinks throughout the documentation set. For that reason, the current version is where I always want to be (or at least start).

But the search engines don’t know this unless I carefully tailor my search item every time I want to find something in the docs. So I came up with my own solution.  I created a bookmark with the following code:

```

javascript:(function () { var url = window.location.href; var n = url.indexOf("#"); if ( n > 0 ) { window.location.assign("https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=" + url.substring(n+1) ); } else { n = url.indexOf("database/121"); if ( n > 0 ) { window.location.assign("https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=" + url.substring(n+13,n+18).toUpperCase() ); } else { n = url.indexOf("oracle-database/12.2"); if ( n > 0 ) { window.location.assign("https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=" + url.substring(n+21,n+26).toUpperCase() ); } } }})()

```

This attempts (and I stress the word attempts) to investigate the URL contents of the doc page and determine whether a redirect to the equivalent section in the latest version of the docs can be done. Due to the way the documentation is built, I only have this working for 12c and above. I don’t have a way yet of going from 11g or 10g to the current docs, but I’m still exploring!

Here’s a sample of it in action. I’m on the 12.1 documentation for DBMS_JOB and with one click, I’m sent over to the 19c version.

Note if you have ad-blockers or javascript interception in your browser the code in this post might not be visible. You might need to turn that off for just long enough to see the code here, and once you’ve created the bookmark you should be good to go as per normal.

Enjoy!

Shout out to Tanel Poder for the motivation here.

## DBMS_JOB and 19c – code changes needed

Here’s a “gotcha” brought to my attention by one of our AskTOM readers. I’ve mentioned in the past that DBMS_JOB, having been deprecated in favour of DBMS_SCHEDULER, got a new lease of life in 19c because under the covers we translated calls to create a job under DBMS_JOB to the same mechanism in DBMS_SCHEDULER.

The benefit of that is that we don’t need to maintain our older DBMS_JOB code base, but your existing code is fine to keep running. However, as I said in the other post, you do need to alter your privileges, but here is another discovery that might impact you as well.

If I have a procedure that accepts a number of parameters, and that procedure is to be called (many times) from DBMS_JOB, or the parameters for that procedure contain sensitive information, then building the string to dispatch via DBMS_JOB can be problematic, because a typical construction concatenates the parameter values directly into the anonymous block being submitted.

That approach works, but it could easily expose sensitive information (a password concatenated into the block, for example) or smash your system with parsing costs for every call, because every distinct set of parameter values yields a distinct block to parse. And of course, if any of this information was being passed from somewhere upstream, then concatenation opens up the risk of SQL injection.

A common technique used in the past to avoid the SQL injection and parsing risks relied on DBMS_JOB making the running job number available to the PL/SQL block being run. For example, let’s say you wanted to pass 3 parameters to a DBMS_JOB procedure. Rather than have your procedure accept those 3 parameters directly, you would create a table to hold those parameters, with the job number as a primary key:

```
SQL> create table my_job_parameters
2    (job_id number,
3     param1 date,
4     param2 varchar2(30),
5     param3 int
6  );
```

To pass those values to a DBMS_JOB-submitted procedure, you would create a job, take note of the job number and store it in the above table, along with the parameters that this invocation of the procedure should receive, e.g.

```
SQL> declare
2    l_job number;
3  begin
4    dbms_job.submit( l_job, 'my_program(JOB);', sysdate);
5
6    insert into my_job_parameters values (l_job,sysdate,'x',1);  <<=== the parameters for this job
7    commit;
8  end;
9  /
```

Notice that the called routine “my_program” takes a parameter “JOB” which is passed automatically by the DBMS_JOB runtime. Thus that procedure would be coded to use the passed job parameter to discover the rest of the “true” parameters:

```
SQL> create or replace
2  procedure my_program(p_job in int)
3  is
4  begin
5    --
6    -- select * into ... from my_job_parameters where job_id = p_job;
7    --
8    -- then processing continues now that we have the parameters
9    --
10  end;
11  /
```

Thus in a nutshell,

• you submit a job,
• that returns a JOB number,
• you store additional parameters in a table keyed by that JOB number,
• the procedure (running as a job) picks up those details when launched by the (DBMS_JOB) scheduler.
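Stripped to its essentials, the pattern is simply “stash the real parameters keyed by the job id”, and it is not Oracle-specific. A minimal Python sketch of the same idea (all names here are mine, purely illustrative):

```python
import itertools

job_ids = itertools.count(100)   # stand-in for the DBMS_JOB job sequence
job_parameters = {}              # stand-in for the MY_JOB_PARAMETERS table

def submit_job(params):
    """Create a 'job' and stash its real parameters keyed by the job id."""
    job_id = next(job_ids)
    job_parameters[job_id] = params
    return job_id

def my_program(job_id):
    # The job body looks up its own parameters, so nothing sensitive ever
    # appears in the submitted block itself.
    params = job_parameters[job_id]
    return "processing %s" % (params,)

job = submit_job({"param1": "04-SEP-2020", "param2": "x", "param3": 1})
print(my_program(job))
```

The whole scheme hinges on one assumption: the id handed back at submission time is the same id the running job sees. As the rest of the post shows, that is exactly the assumption 19c breaks.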

In later releases, we didn’t even need to pass “JOB” because SYS_CONTEXT(‘USERENV’,’BG_JOB_ID’) could be used from within the procedure to pick up the currently running job number. However, in 19c we know that DBMS_JOB is transparently translated to the equivalent objects within DBMS_SCHEDULER. Because of this, you get a slightly different behaviour when it comes to accessing the job number. Here is a more complete version of the solution above, which also serves to demonstrate how 19c changes things.

First I’ll have a table to hold my job parameters. I’ll also create a table called CAPTURED_DETAILS so we can track some relevant details throughout the execution of our code.

```
SQL> create table my_job_parameters
2    (job_id number primary key,
3     param1 date,
4     param2 varchar2(30),
5     param3 int
6  );

Table created.

SQL> create table captured_details(msg varchar2(200));

Table created.
```

Now here is my procedure in the “old” DBMS_JOB style. I’ll assume that the execution environment will be passing a JOB parameter. Line 6 shows what I would then typically do in my code: use the passed JOB to fetch the rest of the parameters and proceed. In this way, they never need to be passed directly and are never exposed to the outside world via the anonymous block I’ll be passing to DBMS_JOB.

```

SQL> create or replace
2  procedure my_program(p_job in int)
3  is
4  begin
5    --
6    -- select * into ... from my_job_parameters where job_id = p_job;
7    --
8    -- then processing continues
9    --
10    insert into captured_details values ('p_job='||p_job);
11    insert into captured_details values ('BG_JOB_ID='||sys_context('USERENV','BG_JOB_ID'));
12
13    commit;
14    --
15    -- just so you can look at dba_jobs_running / dba_scheduler_running_Jobs if you like
16    --
17    dbms_session.sleep(60);
18  end;
19  /

Procedure created.
```

So now I create my job to obtain the job number. Then I can use the job to store the rest of the parameters in the MY_JOB_PARAMETERS table. You can see the benefit on line 4, where the anonymous block is static and exposes no parameters. I commit the block to set the job in motion asynchronously.

```
SQL> declare
2    l_job number;
3  begin
4    dbms_job.submit( l_job, 'my_program(JOB);', sysdate);
5    insert into captured_details values ('DBMS_JOB returned job='||l_job);
6    insert into my_job_parameters values (l_job,sysdate,'x',1);
7    commit;
8  end;
9  /

PL/SQL procedure successfully completed.
```

But in 19c, we need to look at what is revealed from our CAPTURED_DETAILS table.

```
SQL>
SQL> select * from captured_details;

MSG
--------------------------------------------------------------------------------
DBMS_JOB returned job=110
p_job=99469
BG_JOB_ID=99469
```

The job number that comes out of the DBMS_JOB call is 110, which is a sequence related purely to DBMS_JOB. However, when we executed the routine from the scheduler (which is no longer DBMS_JOB but DBMS_SCHEDULER) the job number that is passed in is not 110 but 99469. We’ve lost the link between our running job and the parameters table because the job number seen within the job no longer matches the job number we returned from the DBMS_JOB call.

So where does 99469 come from? That ID is now the object# of the scheduler job we created. A job created via DBMS_SCHEDULER is indeed a fully fledged database object.

```
SQL> select * from user_objects
2  where object_type = 'JOB'
3  @pr
==============================
OBJECT_NAME                   : DBMS_JOB$_110
SUBOBJECT_NAME                :
OBJECT_ID                     : 99469
DATA_OBJECT_ID                :
...
```

If you have been using this technique to pass parameters, you can work around this change easily in code: use the job number that the scheduler passes in (which is the object ID) to look up the job’s object name in USER_OBJECTS, then strip off the leading “DBMS_JOB$_” to recover the original job sequence number.

```
SQL> create or replace
2  procedure my_program(p_job in int) is
3    l_job int;
4  begin
5    select replace(object_name,'DBMS_JOB$_')
6    into   l_job
7    from   user_objects
8    where  object_id  = p_job;
9    --
10    -- select * into ... from my_job_parameters where job_id = l_job;   <<<=== NOT p_job
11    --
12    -- then processing continues
13    --
14  end;
15  /

Procedure created.
```

This is an easy fix, but if you are changing the code anyway, then (unless you need the transactional capabilities of DBMS_JOB) it is probably time to refactor the code to use DBMS_SCHEDULER directly.

## Video : Simple Oracle Document Access (SODA) for PL/SQL

In today’s video we give a demonstration of Simple Oracle Document Access (SODA) for PL/SQL. SODA is a feature of Oracle REST Data Services (ORDS),
but this PL/SQL interface for SODA was introduced in Oracle Database 18c.


The star of today’s video is the son of Dan Iverson. Dan (not his son) is an Oracle ACE focusing on PeopleSoft, and an Oracle Cloud Architect. He’s also in the Army National Guard, which means he’s already prepared for the zombie apocalypse!

## Frequency histogram – where did that estimate come from?

Frequency histograms in any version of Oracle are pretty cool things. Whenever you have a distribution of data that is potentially skewed, and the number of distinct values falls under the limit on how many histogram buckets you can have, a frequency histogram is pretty much a no-brainer. In particular, if you don’t have a large number of distinct values, the nice thing is that you can nominate the largest bucket size possible, and we’ll create only the buckets required to hold the frequency information.

For example, I’ll create a table with only 3 distinct values (1, 3 and 5) where the distribution of the data is skewed. Then I’ll ask for a 254-bucket histogram, but you can see by querying USER_HISTOGRAMS that only 3 buckets were required to hold the histogram.

```
SQL> create table t ( x int );

Table created.

SQL> insert into t select 1 from dual connect by level <= 100;

100 rows created.

SQL> insert into t select 3 from dual connect by level <= 1000;

1000 rows created.

SQL> insert into t select 5 from dual connect by level <= 2000;

2000 rows created.

SQL> exec dbms_stats.gather_table_stats('','T',method_opt=>'for all columns size 254');

PL/SQL procedure successfully completed.

SQL> select
2    endpoint_number,
3    endpoint_value,
4    endpoint_number -
5       nvl(lag(endpoint_number)
6          over ( order by endpoint_number),0) occurrences
7  from user_histograms
8  where   table_name = 'T'
9  order by 1;

ENDPOINT_NUMBER ENDPOINT_VALUE OCCURRENCES
--------------- -------------- -----------
            100              1         100
           1100              3        1000
           3100              5        2000

```
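The OCCURRENCES column above exists because ENDPOINT_NUMBER in USER_HISTOGRAMS is a cumulative row count; differencing adjacent endpoints (the LAG in the query) recovers the per-value counts. The same differencing in a quick Python sketch:

```python
# ENDPOINT_NUMBER is cumulative, so the count for each value is the
# difference from the previous endpoint (the LAG(...) in the SQL above).
endpoints = [(1, 100), (3, 1100), (5, 3100)]  # (endpoint_value, endpoint_number)

prev = 0
occurrences = {}
for value, endpoint in endpoints:
    occurrences[value] = endpoint - prev
    prev = endpoint

print(occurrences)   # {1: 100, 3: 1000, 5: 2000}
```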

With the frequency histogram in place, the optimizer does an excellent job of predicting the row estimates for any value which is known within the histogram.

```
SQL> set autotrace traceonly explain
SQL> select * from t where x = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   100 |   300 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |   100 |   300 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=1)

SQL> select * from t where x = 3;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  1000 |  3000 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |  1000 |  3000 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=3)

SQL> select * from t where x = 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2000 |  6000 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |  2000 |  6000 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=5)
```

But it is important to remember the old adage “garbage in, garbage out”. If the only data contained in my table T for column X is the values 1, 3 and 5, then there’s a good chance the optimizer is going to be our friend. But what if, after you gathered statistics, we then added some rows with a value of X=4? When you query for a value of 4, the optimizer has no information about how likely that value is to be present. So what does it do? Let’s take a look.

```
SQL> select * from t where x = 4;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    50 |   150 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    50 |   150 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=4)
```

The obvious question is: where did “50” come from? You could say that the optimizer takes an “optimistic” view of its statistics, and of the person who gathered them. Its line of thinking is: “Surely … I mean SURELY … someone has ensured that we gathered statistics when the data in the table best represented the state it would be in when user queries would be executed?”

Based on this assumption, if you present a query with a value not contained in the frequency histogram, this value “surely” must be an outlier, an anomaly, something unexpected and rare. For that reason, it will assume that its likelihood of being in the table is half that of the least frequent known value. In this case, the optimizer knows that there are only 100 values of X=1, so it decides that there will be 50 values of the unknown value X=4.

Of course, the best solution here would be to collect statistics when rows with X=4 are present, so that the optimizer has better information. Better information all the time is one of the motivations for the real time statistics and high frequency statistics gathering initiatives you’ll see in 19c.

The optimizer also takes some other things into consideration when you lob an unknown value at it. The value of 4 sits between the two extremes that the optimizer knows about (X=1 and X=5). But when a query presents a value that sits outside the known range of values, the optimizer tackles things differently. Once again taking an optimistic view of your statistics gathering regime, it assumes that values outside the known range must be anomalous. If you query just outside the range, it will make an assumption similar to that for the value of 4; in this case, a little less than half the count of the least frequent known value.

```
SQL> select * from t where x = 6;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    38 |   114 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    38 |   114 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=6)
```

But as the value gets further outside the known range, that estimate drops away. For example, for x=10 the optimizer assumes a solitary row.

```
SQL> select * from t where x = 10;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     3 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |     3 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=10)
```

The “sliding scale” reduction of the estimate is easier to see if we increase the size of the demo. I’ll load the values 1 to 100 with a frequency histogram in place, and then explore values past 100.

```
SQL> create table t ( x int );

Table created.

SQL>
SQL> insert into t select p from
2 ( select 1 from dual connect by level <= 100),
3 ( select level p from dual connect by level <= 100);

10000 rows created.

SQL>
SQL> exec dbms_stats.gather_table_stats('','T',method_opt=>'for all columns size 254');

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select * from t where x = 100;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   100 |   300 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |   100 |   300 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=100)

SQL> select * from t where x = 101;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    49 |   147 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    49 |   147 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=101)

SQL> select * from t where x = 102;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    49 |   147 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    49 |   147 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=102)

SQL> select * from t where x = 103;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    48 |   144 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    48 |   144 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=103)

SQL> select * from t where x = 104;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    48 |   144 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    48 |   144 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=104)

SQL> select * from t where x = 105;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    47 |   141 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    47 |   141 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=105)

SQL> select * from t where x = 110;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    45 |   135 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    45 |   135 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=110)

SQL> select * from t where x = 150;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    25 |    75 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    25 |    75 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=150)

SQL> select * from t where x = 200;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     3 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |     3 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("X"=200)

SQL> set autotrace off
```

Hopefully this explains why you might see unexpected estimates from the optimizer even when you have frequency histograms in place. When in doubt, you can query USER_HISTOGRAMS to see what data has been collected, and if any values are missing. Often a fresh gather is all that is required to rectify things.
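As a concrete sketch of that check (using the demo table T and column X from the plans above; USER_HISTOGRAMS is a synonym for USER_TAB_HISTOGRAMS), you can list exactly which values the frequency histogram captured and how often each occurs:

```sql
-- For a frequency histogram each captured value is an endpoint, and the
-- difference between consecutive ENDPOINT_NUMBERs is that value's frequency.
select endpoint_value,
       endpoint_number
         - lag(endpoint_number, 1, 0) over (order by endpoint_number) as frequency
from   user_tab_histograms
where  table_name  = 'T'
and    column_name = 'X'
order  by endpoint_value;
```

Any popular value absent from this list is a candidate explanation for an unexpected optimizer estimate like the ones shown above.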

TL;DR: Frequency histograms are great, but do your best to ensure that all potential values are present in the table when you gather the statistics.

## Oracle 19c Automatic Indexing: DDL Statements With Auto Indexes (No Control)

I’ve had a number of questions in relation to DDL support for Automatic Indexes since my last post on how one can now drop Automatic Indexes, so decided to quickly discuss what DDL statements are supported with Automatic Indexes. Many DDL commands are NOT supported with Automatic Indexes, such as making indexes (IN)VISIBLE and […]

## Vagrant Box Drama

I had a little bit of VirtualBox and Vagrant drama today.

I was doing my normal thing of recreating some test VMs and I started to get errors like this during the first part of the VM build, before the config scripts ran.

```
==> default: Machine booted and ready!
VBoxService inside the vm claims: 6.1.12
Going on, assuming VBoxService is correct…
[default] GuestAdditions seems to be installed (6.1.12) correctly, but not running.
VBoxService inside the vm claims: 6.1.12
Going on, assuming VBoxService is correct…
Job for vboxadd.service failed because the control process exited with error code.
See "systemctl status vboxadd.service" and "journalctl -xe" for details.
Job for vboxadd-service.service failed because the control process exited with error code.
See "systemctl status vboxadd-service.service" and "journalctl -xe" for details.
VBoxService inside the vm claims: 6.1.12
Going on, assuming VBoxService is correct…
modules. This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
5.4.17-2011.5.3.el8uek.x86_64. Please install them and execute
modprobe vboxguest failed
The log file /var/log/vboxadd-setup.log may contain further information.
==> default: Checking for guest additions in VM…
default: No guest additions were detected on the base box for this VM! Guest
default: additions are required for forwarded ports, shared folders, host only
default: networking, and more. If SSH fails on this machine, please install
default: the guest additions and repackage the box to continue.
default:
default: This is not an error message; everything may continue to work properly,
default: in which case you may ignore this message.
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
Stdout from the command:
modules. This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
5.4.17-2011.5.3.el8uek.x86_64. Please install them and execute
Stderr from the command:
modprobe vboxguest failed
The log file /var/log/vboxadd-setup.log may contain further information.
```

The VM had booted, but because the guest additions weren’t working it couldn’t mount the shared folders, so none of the setup scripts had run.

Reading the output I figured the kernel-uek-devel package was missing from the Vagrant box, so I did the following…

I connected to the VM, installed the “kernel-uek-devel” package and exited from the VM.

```
vagrant ssh
sudo dnf install -y kernel-uek-devel
exit
```

Then I restarted the VM.

```
vagrant halt
vagrant up
```

During this second startup of the VM the problem kernel module was rebuilt, and the rest of the configuration steps ran the way you would expect from a normal first-time startup.

One of the issues with using someone else’s Vagrant box is that you are at the mercy of what they decide to do with it. In this case I was using the ‘bento/oracle-8’ Vagrant box, which was built using Oracle Linux 8.2 with UEK 6, but the installed guest additions were not happy, and the packages were not present to allow the kernel module to be rebuilt on the fly.

If you are trying to use my Vagrant builds, which mostly use the ‘bento/oracle-8’ Vagrant box, and you are getting this type of issue, now you know what to do about it. Hopefully the next release of this Vagrant box will be less problematic.

Cheers

Tim…

Update: I spent some time figuring out Packer, and now I’ve switched all my OL8 builds to use my own image called ‘oraclebase/oracle-8’.

The post Vagrant Box Drama first appeared on The ORACLE-BASE Blog.

Vagrant Box Drama was first posted on August 31, 2020 at 7:34 pm.

## Introducing packer: building immutable infrastructure in the cloud

After having spent a bit of time with Packer to create Vagrant base boxes it was time to focus on the cloud. I have referenced Packer multiple times in my cloud talks as a popular way to create immutable infrastructure and/or custom images, and I wanted to share how you could potentially make use of the technology.

This is a post about cloud technology: if you are following along, be aware that you will incur cost.

## The case for using Packer

In my own words, Packer is a tool that allows me to build immutable infrastructure. It does so by taking a base image a cloud provider offers, spinning it up temporarily and installing the software you want to use. The result is then converted to a custom image for later use.

Imagine, for example, that you would like to deploy Oracle 19.8.0 binaries across your cloud infrastructure. One way to get there would be to create a VM, followed by running Ansible playbooks to install Oracle 19.8.0. That’s a perfectly fine approach! However, bear in mind that every execution of an Ansible playbook might fail, and it certainly takes a bit of time for the playbook to finish.

The alternative is to create a custom image with Oracle 19.8.0 already installed. If you want to be absolutely sure all of your deployments are identical, Packer might be the tool for you. Deploying a custom image should take a lot less time than the VM + Ansible approach, too. And once you have validated the custom image by means of rigorous regression testing, it should provide a lot of confidence. A long time ago many of us tried to achieve the same goal by using custom RPMs, for example: a much less reliable approach compared to what we can do these days.

Oh, and by the way: regardless of which way you choose to deploy Oracle software, you always have to ensure you are license compliant.

## Let’s give it a try!

In this post I won’t install Oracle 19.8.0, I’ll stick with XE to keep it reasonably simple. The overall process is identical though, which should allow you to plug your Ansible playbooks into the Packer template with ease.

My build environment is based on Packer 1.6 running on top of Ubuntu 20.04 LTS. I am using Oracle Cloud Infrastructure.

## Building Oracle XE in Packer

The process is similar to what I described in my first post about Packer, so please head over there for additional background. Unlike building a VirtualBox VM based on a local ISO image, Packer has to know which base OS image to use. It also has to be able to spin it up, so a Virtual Cloud Network as well as a subnet with appropriate security lists are needed. I have written about how to create these via Terraform in a separate post.

Most examples about Oracle OCI and Packer don’t appear overly secure to me: you find username/password combinations used pretty much everywhere. I prefer to stick to SSH keys instead.

When configuring the JSON template for Packer you can make use of an existing Oracle Cloud CLI configuration file, or use variables very similar to those the Terraform provider for OCI requires. I haven’t needed a local installation of the OCI CLI since the cloud shell became GA, so I went with the latter approach.

To save myself some work I’m using the same shell variables in Packer as I do in Terraform. Packer 1.6 can read environment variables when invoked, but as far as I understand it only does so in the “variables” section. This leads to funny looking code, but it works. Of course I might be wrong, and I’ll happily stand corrected. Please let me know!
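To make the shared-variables idea concrete, here is a sketch of the shell session I have in mind (the OCID values are placeholders): the same `TF_VAR_*` exports that Terraform consumes are what Packer’s “variables” section reads via `{{env ...}}`.

```shell
# Placeholder values - substitute your own OCIDs and key path.
export TF_VAR_tenancy_ocid="ocid1.tenancy.oc1..example"
export TF_VAR_user_ocid="ocid1.user.oc1..example"
export TF_VAR_compartment_ocid="ocid1.compartment.oc1..example"
export TF_VAR_fingerprint="aa:bb:cc:dd"
export TF_VAR_private_key_path="$HOME/.oci/oci_api_key.pem"

# Packer resolves {{env `TF_VAR_tenancy_ocid`}} (variables section only),
# so one shell session serves both Terraform and Packer.
echo "tenancy: ${TF_VAR_tenancy_ocid}"
```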

## Packer Template

Let’s get started with the Packer JSON template. It is essentially a faithful implementation of the documented options of the oracle-oci builder.

```
{
  "variables": {
    "oci_tenancy_ocid": "{{env `TF_VAR_tenancy_ocid`}}",
    "oci_user_oci": "{{env `TF_VAR_user_ocid`}}",
    "oci_compartment_ocid": "{{env `TF_VAR_compartment_ocid`}}",
    "oci_api_key_fingerprint": "{{env `TF_VAR_fingerprint`}}",
    "oci_api_key_file": "{{env `TF_VAR_private_key_path`}}",
    "oci_region": "eu-frankfurt-1"
  },
  "builders": [
    {
      "type": "oracle-oci",
      "tenancy_ocid": "{{user `oci_tenancy_ocid`}}",
      "user_ocid": "{{user `oci_user_oci`}}",
      "compartment_ocid": "{{user `oci_compartment_ocid`}}",
      "fingerprint": "{{user `oci_api_key_fingerprint`}}",
      "key_file": "{{user `oci_api_key_file`}}",

      "region": "{{user `oci_region`}}",
      "subnet_ocid": "ocid1...",

      "base_image_ocid": "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaahxue6crkdeevk75bzw63cmhh3c4uyqddcwov7mwlv7na4lkz7zla",
      "image_name": "oracle-xe-ol7.8",
      "shape": "VM.Standard.E2.1.Micro",
      "ssh_agent_auth": true,
      "ssh_timeout": "10m",
      "ssh_authorized_keys": "ssh-rsa ... key"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "user": "opc",
      "playbook_file": "ansible/xe.yml",
      "extra_arguments": [ "-v" ]
    }
  ]
}
```

Let’s have a look at the various sections in the template.

### Variables

The first section, “variables”, allows me to read environment variables. I’m using the same environment variables for Packer and Terraform, hence the somewhat strange looking names. The variables section populates Packer variables, which I prefixed with “oci”.

### Builders

The builders section is where the actual action starts. I need to provide the usual suspects (tenancy OCID, user OCID, API key location and fingerprint, etc.) as per the documentation. The first part of the “builders” section should be self-explanatory, apart perhaps from the fact that I couldn’t use environment variables in this part of the template, hence the reference to my user variables declared earlier. Again, if there is a better way of doing this, please let me know.

The middle part of the code is concerned with resource location. As I live in Germany the easiest option is to use EU-Frankfurt-1. One of the prerequisites I need to complete before Packer can go and do its magic is to define a public “build” subnet. I have a separate, pre-created build network (VCN) for this purpose. All I need to do is grab the OCID for my subnet and provide it to Packer.

The third part of the builders section is where all the fun starts:

• I chose the base image to be the latest Oracle Linux 7.8 image at the time of writing. More information about Oracle Cloud IaaS images and their corresponding OCIDs can be found in the official documentation reference
• The new custom image to be created is to be named oracle-xe-ol7.8
• I would like the VM shape for the build to be VM.Standard.E2.1.Micro

As with my Vagrant base box I’ll use SSH authentication rather than a less secure username/password combination. To do so I need to use the local ssh agent to store and forward my key to the VM. I’m providing that exact key via the metadata directive. Once the VM has been created, opc’s `authorized_keys` file contains my SSH key, allowing me to connect.

### Provisioners

In comparison to the earlier section this is rather uneventful. The only action it performs is to call the Ansible playbook responsible for the installation of Oracle XE. The playbook is too boring to show here; all it does is update all RPMs before installing the Oracle XE RPM.

Thanks to the SSH settings provided earlier no passwords have to go over the wire. If the provisioner fails, you most likely didn’t add your SSH key to the agent. Like I did ;)

```
==> oracle-oci: Waiting for SSH to become available...
==> oracle-oci: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH.
part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods
[publickey none], no supported methods remain
==> oracle-oci: Terminating instance
```

This is super easy to fix: once the prompt returns you add the key to the agent using `ssh-add` and restart the build.
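The commands involved look something like this. The sketch below uses a throwaway key so every step is runnable end to end; in practice you would add the real key referenced in `ssh_authorized_keys` instead:

```shell
# Start an agent for this shell, create and load a demo key, then list
# the keys the agent holds - this is what Packer forwards during the build.
eval "$(ssh-agent -s)" > /dev/null
rm -f /tmp/demo_build_key /tmp/demo_build_key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo_build_key   # throwaway demo key
ssh-add /tmp/demo_build_key 2> /dev/null
ssh-add -l                                          # confirm the key is loaded
ssh-agent -k > /dev/null                            # clean up the demo agent
```

With the real key loaded, re-running `packer build` gets past the authentication error.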

## End result

After a little while the Packer build will finish and create a new custom image with its own OCID for use in later deployments.

```
$ ANSIBLE_STDOUT_CALLBACK=debug packer build oracle-xe-oci.json
oracle-oci: output will be in this color.

==> oracle-oci: Creating temporary ssh key for instance...
==> oracle-oci: Creating instance...
==> oracle-oci: Created instance (ocid1.instance.oc1.eu-frankfurt-1.ant...).
==> oracle-oci: Waiting for instance to enter 'RUNNING' state...
==> oracle-oci: Instance 'RUNNING'.
==> oracle-oci: Instance has IP: 11.22.33.44.
==> oracle-oci: Using ssh communicator to connect: 11.22.33.44
==> oracle-oci: Waiting for SSH to become available...
==> oracle-oci: Connected to SSH!
==> oracle-oci: Provisioning with Ansible...
oracle-oci: Setting up proxy adapter for Ansible....
==> oracle-oci: Executing Ansible: ansible-playbook -e packer_build_name="oracle-oci"...
oracle-oci: Using /etc/ansible/ansible.cfg as config file
oracle-oci:
oracle-oci: PLAY [all] *********************************************************************

[...]

oracle-oci:
oracle-oci: PLAY RECAP *********************************************************************
oracle-oci: default : ok=11   changed=9    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
oracle-oci:
==> oracle-oci: Creating image from instance...
==> oracle-oci: Image created.
==> oracle-oci: Terminating instance (ocid1.instance.oc1.eu-frankfurt-1.aaa)...
==> oracle-oci: Terminated instance.
Build 'oracle-oci' finished.

==> Builds finished. The artifacts of successful builds are:
--> oracle-oci: An image was created: 'oracle-xe-ol7.8' (OCID: ocid1.image.oc1.eu-frankfurt-1.aaa) in region 'eu-frankfurt-1'
```

As you can see, a new custom image has been created. You can read more about using custom images in a later post.

## Summary

Compared to my first attempt at using Packer to build a Vagrant base box, using Packer with Oracle Cloud Infrastructure turned out to be a lot easier. The only tricky bit is the use of SSH, but that’s quickly overcome if you have ever used the OCI Terraform provider and know about the metadata directive.

## Tracing Errors

This is a little lesson in trouble-shooting. It assumes you have the privilege to generate and edit trace files, and all I’m going to do is show how I worked out the answer to a fairly simple question that appeared recently on the Oracle Developer Community forum.

I have a table t1 which is reasonably large (1M rows) with a column v30, and I’ve issued the command.

```rem
rem     Script:         drop_column.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2003
rem

alter table t1 set unused column v30;
```

After this I’ve issued another command, pressed return, and immediately hit ctrl-C – I’ve got a brief window for this interrupt as the command is going to generate roughly 230MB of redo. If you want to try the experiment it might take a couple of attempts to get the timing right.

```alter table t1
drop unused columns
checkpoint 1000
/
{ctrl-C}
```

Then I’ve tried to drop the table with the following result:

```drop table t1
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-12986: columns in partially dropped state. Submit ALTER TABLE DROP COLUMNS CONTINUE
```

This emulates the problem that appeared on the forum but the OP didn’t really want to issue the “continue” command because their table was “large” and the drop had been running for 24 hours when it was interrupted. It’s worth noting that four columns had been specified as unused, which means the drop command was probably generating roughly 1KB of redo per row. It’s also worth mentioning that the table was 600 columns wide – so there may have been a few performance side effects due to the multiple (minimum 3) row-pieces per row and row-chaining.

Ignoring any questions about the possible impact of having 600 columns, the key question in this case is:

• Why isn’t it possible to drop the table (the manuals suggest it should be possible at this point)?
• Is there a way to bypass the problem?

This is a repeatable error – we can try to drop the table as many times as we like and we will get the same error reported in fractions of a second. So an obvious strategy is to enable tracing and see if the trace file can tell us anything about why the error is happening. This is a particularly sensible strategy to follow since error ORA-00604 is telling us that there’s something wrong in the recursive SQL, which means there’s a very good chance that we will actually find an SQL statement in the trace file that is the recursive statement triggering the error.

So, enable sql_trace (or set event 10046), or do whatever you like to do to enable basic tracing (no need for anything above level 1 just yet); try to execute the drop; then search the trace file for the word ERROR at the start of a line (“^ERROR”). Here’s what I found as the first match in my trace file:
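To make that search concrete, here is a tiny self-contained demo (the sample lines mimic the 10046-style trace content; the grep flags are standard):

```shell
# Write a few sample trace lines, then search for lines starting with ERROR.
cat > /tmp/sample_trace.trc <<'EOF'
PARSE #139987889121800:c=2896,e=3035,dep=1,og=4,plh=0,tim=424276175960
EXEC #139987889121800:c=186,e=185,dep=1,og=4,plh=0,tim=424276176418
ERROR #139987889121800:err=12986 tim=424276176462
CLOSE #139987889121800:c=10,e=9,dep=1,type=0,tim=424276176712
EOF
grep -n '^ERROR' /tmp/sample_trace.trc
# -> 3:ERROR #139987889121800:err=12986 tim=424276176462
```

Against a real trace file you would run the same `grep -n '^ERROR'` with the trace file’s path.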

```ERROR #139987889121800:err=12986 tim=424276176462
```

Note that the err= value is the 12986 that was reported as the error under the ORA-00604 error. Sometimes it’s necessary to search backwards in the trace file until you find the matching cursor number (#139987889121800), but in this case it was already visible just a few lines further up the file. So here’s the fragment of the file around that ERROR line:

```
PARSING IN CURSOR #139987889121800 len=72 dep=1 uid=0 oct=15 lid=0 tim=424276175961 hv=1894278903 ad='75c06e38' sqlid='4wv7gq1sfhtrr'
ALTER TABLE "TEST_USER"."T1" RENAME TO "BIN$rhxBELdQGMXgUwEAAH93eQ==$0"
END OF STMT
PARSE #139987889121800:c=2896,e=3035,p=0,cr=4,cu=0,mis=1,r=0,dep=1,og=4,plh=0,tim=424276175960
EXEC #139987889121800:c=186,e=185,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=0,tim=424276176418
ERROR #139987889121800:err=12986 tim=424276176462
CLOSE #139987889121800:c=10,e=9,dep=1,type=0,tim=424276176712
EXEC #139987891478792:c=34686,e=35859,p=0,cr=73,cu=12,mis=0,r=0,dep=0,og=1,plh=0,tim=424276176774
ERROR #139987891478792:err=604 tim=424276176808

```

The error has come from an SQL statement that is trying to rename my table to some weird and wonderful name which starts with the characters “BIN$” – that’s a “drop” command trying to move a table into the recycle bin by renaming it – and you’re not allowed to rename a table that is in a partially dropped state. So that’s why we get the error; and the obvious way to bypass it is: “drop table t1 purge;” – which works.

You’ll notice that I’ve included a couple of lines after the first ERROR. This is to show you the line that generated the ORA-00604 (err=604) error. It comes from cursor #139987891478792, and searching backwards up the file for that cursor number I get to:

```PARSING IN CURSOR #139987891478792 len=13 dep=0 uid=107 oct=12 lid=107 tim=424276140765 hv=202649412 ad='7f517dd3fe00' sqlid='d47kdkn618bu4'
drop table t1
END OF STMT
```

That’s not a surprise, of course, but it is important to cross-check that you haven’t been chasing the wrong error. There are some cases where the Oracle code does something to see if an error will occur but has an exception handler that means the error doesn’t end up reaching the application, so you do need to check that the error you found first was the one that floated up to the top of the stack.

### Footnote

From Oracle 12.2.0.1 you could arrange to read your own trace file – while your session is connected – through the dynamic performance view `v$diag_trace_file_contents`.
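A sketch of such a query (assuming Unix-style paths; the current session’s trace file name comes from `v$diag_info`):

```sql
-- Read the current session's trace file from SQL (12.2 and later).
-- PAYLOAD holds the trace text, one row per line of the file.
select payload
from   v$diag_trace_file_contents
where  trace_filename = (
         select regexp_substr(value, '[^/]+$')   -- strip the directory part
         from   v$diag_info
         where  name = 'Default Trace File'
       )
order  by line_number;
```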