Oakies Blog Aggregator

Wait for Java

This is a note courtesy of Jack van Zanen on the Oracle-L list server, who asked a question about “wait for CPU” and then produced the answer a couple of days later. It’s a simple demonstration of how Java in the database can be very deceptive in terms of indicating CPU usage that isn’t really CPU usage.

Bottom line – when you call Java, Oracle knows you’re about to start doing some work on the CPU, but once you’re inside the Java engine Oracle has no way of knowing whether the Java code is on the CPU or waiting. So if the Java code starts to wait (e.g. for some slow file I/O) Oracle will still be reporting your session as using CPU.

To demonstrate the principle, I’m going to create a little Java procedure that simply goes to sleep – and see what I find in the active session history (ASH) after I’ve been sleeping in Java for 10 seconds.

rem
rem     Script:         java_wait_for_cpu.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Based on an email from Jack van Zanen to Oracle-L
rem

set time on

create or replace procedure milli_sleep(i_milliseconds in number) 
as 
        language java
        name 'java.lang.Thread.sleep(int)';
/

set pagesize 60
set linesize 132
set trimspool on

column sample_time format a32
column event       format a32
column sql_text    format a60
column sql_id      new_value m_sql_id

set echo on
execute milli_sleep(1e4)

select 
        sample_time, sample_id, session_state, sql_id, event 
from 
        v$active_session_history
where 
        session_id = sys_context('userenv','sid')
and     sample_time > sysdate - 1/1440 
order by 
        sample_time
;

select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

I’ve set timing on and set echo on so that you can see when my code starts and finishes and correlate it with the report from v$active_session_history for my session. Since I’ve reported the last minute you may find some other stuff reported before the call to milli_sleep() but you should find that you get a report of about 10 seconds “ON CPU” even though your session is really not consuming any CPU at all. I’ve included a report of the SQL that’s “running” while the session is “ON CPU”.

Here (with a little edit to remove the echoed query against v$active_session_history) are the results from a run on 12.2.0.1 (and the run on 19.3.0.0 was very similar):


Procedure created.

18:51:17 SQL> execute milli_sleep(1e4)

PL/SQL procedure successfully completed.

SAMPLE_TIME                       SAMPLE_ID SESSION SQL_ID        EVENT
-------------------------------- ---------- ------- ------------- --------------------------------
16-DEC-19 06.51.11.983 PM          15577837 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.12.984 PM          15577838 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.13.985 PM          15577839 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.14.985 PM          15577840 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.15.986 PM          15577841 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.16.996 PM          15577842 ON CPU  8r3xn050z2uqm
16-DEC-19 06.51.17.995 PM          15577843 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.18.999 PM          15577844 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.20.012 PM          15577845 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.21.018 PM          15577846 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.22.019 PM          15577847 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.23.019 PM          15577848 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.24.033 PM          15577849 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.25.039 PM          15577850 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.26.047 PM          15577851 ON CPU  4jt6zf4nybawp
16-DEC-19 06.51.27.058 PM          15577852 ON CPU  4jt6zf4nybawp

16 rows selected.

18:51:27 SQL>
18:51:27 SQL> select sql_id, round(cpu_time/1e6,3) cpu_time, round(elapsed_time/1e6,3) elapsed, sql_text from v$sql where sql_id = '&m_sql_id';

SQL_ID          CPU_TIME    ELAPSED SQL_TEXT
------------- ---------- ---------- ------------------------------------------------------------
4jt6zf4nybawp       .004     10.029 BEGIN milli_sleep(1e4); END;


As you can see, I had a statement executing for a few seconds before the call to milli_sleep(), but then we see milli_sleep() “on” the CPU for 10 consecutive samples. When the sleep ends, the query for actual usage shows us that the elapsed time was 10 seconds but the CPU usage was only 4 milliseconds.

 

Oracle Database 19c Automatic Indexing – Indexed Column Reorder (What Shall We Do Now?)

  I previously discussed how the default column order of an Automatic Index (in the absence of other factors) is based on the Column ID, the order in which the columns are defined in the table. But what if there are “other factors” based on new workloads and the original index column order is no […]

Finding the deleted TYPE when ANYDATA raises ORA-21700: object does not exist or is marked for delete

The current message about Oracle Database is: multi-model database. That’s not new. At the time of Oracle 9i, Object Oriented was the trend, with all the flexibility of polymorphism, but without the mess of unstructured data and without the inconsistency of NoSQL. Oracle added a datatype that can contain any datatype: SYS.ANYDATA. In the same column, you can put a number in row 1, a varchar2 in row 2, a record in row 3, any object in row 4… Any arbitrary object can be stored, but, unlike a RAW or a BLOB (or XML or JSON), each object is structured and references a known datatype or a user-created TYPE.
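
For example, here is a minimal sketch of mixing datatypes in one ANYDATA column using the built-in conversion functions (the table T used here is hypothetical, not the DEMO table created below):

create table t ( k number, v anydata );
insert into t values (1, anydata.convertNumber(42));
insert into t values (2, anydata.convertVarchar2('forty-two'));
select k, anydata.getTypeName(v) from t;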

However, it is impossible to enforce the dependency for each row and it can happen that you DROP a TYPE that is used by an ANYDATA object.

Example

I create two types. Very simple ones, and similar for this example, but it can be any complex object definition:

DEMO@//localhost/pdb1> create type DEMO1 as object(a number);
2 /
Type created.
DEMO@//localhost/pdb1> create type DEMO2 as object(a number);
2 /
Type created.

I create a table with a key (NUMBER) and value (ANYDATA):

DEMO@//localhost/pdb1> create table DEMO ( k number, v anydata );
Table created.

I insert two instances of DEMO1

DEMO@//localhost/pdb1> insert into DEMO values(1, 
anydata.convertobject( DEMO1(1)) );
1 row created.
DEMO@//localhost/pdb1> insert into DEMO values(2,
anydata.convertobject( DEMO1(1)) );
1 row created.

and two instances of DEMO2

DEMO@//localhost/pdb1> insert into DEMO values(3,
anydata.convertobject( DEMO2(1)) );
1 row created.
DEMO@//localhost/pdb1> insert into DEMO values(4, 
anydata.convertobject( DEMO2(1)) );
1 row created.

Type name and Dump

I query the table. SQL Developer displays the type but I can also get it with ANYDATA.GETTYPENAME()

select k,v,anydata.getTypeName(v) from demo;

Out of curiosity, I look at the binary storage:

select k,anydata.getTypeName(v),substr(dump(v,16),1,145) from demo;

This contains the Type Object ID. Here are my types from USER_TYPES:

select * from user_types;

In this example it is clear that the TYPE_OID is there:

99ED99CFEAB04E7FE0531103000A3EA6 is contained in Typ=58 Len=74: 0,1,0,0,0,0,0,1,0,0,0,19,83,df,0,34,48,90,0,2e,0,0,2a,1,85,1,2a,1,1,2,4,0,6c,99,ed,99,cf,ea,b0,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,

99ED99CFEAB44E7FE0531103000A3EA6 is contained in Typ=58 Len=74: 0,1,0,0,0,0,0,1,0,0,0,19,83,e2,0,34,48,90,0,2e,0,0,2a,1,85,1,2a,1,1,2,4,0,6c,99,ed,99,cf,ea,b4,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,

Drop the TYPE

Now, I can drop the TYPE without having any error:

drop type DEMO2;

This is not a bug (Bug 14828165 : TYPE IS ALLOWED TO BE DROPPED closed in status 92). With ANYDATA you want flexibility, right?

However, I cannot query a value that references this dropped TYPE:

select * from demo
*
ERROR at line 1:
ORA-21700: object does not exist or is marked for delete

And the problem is that I cannot even know the type name:

select k,anydata.getTypeName(v) from demo;

The only thing that I can see is the Type OID from the dump of the ANYDATA value:

But as the TYPE was dropped, I cannot get the name from USER_TYPES.

Flashback query

Ideally, you can get this metadata information from a Data Pump export (OID is visible in the DDL sqlfile) or from a backup. Here, as the DROP was recent, I’ll simply use Flashback Query.

I cannot use “versions between” on a view, so I first query the SCN from TYPE$:

select toid,versions_endscn,versions_operation
from sys.type$ versions between scn minvalue and maxvalue
where ',99,ed,99,cf,ea,b4,4e,7f,e0,53,11,3,0,a,3e,a6,0,1,0,0,'
like '%,'||regexp_replace(dump(type$.toid,16),'^.* ')||',%'
;

(I passed through a regexp because SQL Developer adds thousand separators which made their way to the substitution variable)

And then I query “as of” the DBA_TYPES for this SCN to get all information:

select *
from dba_types as of scn ( 11980082 -1)
where rawtohex(type_oid)= '99ED99CFEAB44E7FE0531103000A3EA6'

Here I have it: the dropped type referenced by this ANYDATA value is DEMO.DEMO2 and that can help me understand what it was and when it has been dropped. As long as I am in the UNDO retention I can find all information to recreate it (mentioning the OID).
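
With that information the type can be recreated with its original OID (the OID clause of CREATE TYPE). A minimal sketch, assuming the attribute list recovered above is complete – whether existing ANYDATA values then become readable again depends on the stored type version, so treat this as a starting point rather than a guaranteed fix:

create or replace type DEMO2 oid '99ED99CFEAB44E7FE0531103000A3EA6' as object(a number);
/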

I’ve put all that in a function which takes the ANYDATA value and DUMP() to find the OID and name when the ORA-21700 is encountered:

with function try(x anydata, d varchar2) return varchar2 as
        l_toid varchar2(1000);
        l_scn  number;
        l_name varchar2(1000);
begin
        return anydata.getTypeName(x);
exception when others then
        select rawtohex(toid), versions_endscn into l_toid, l_scn
        from   sys.type$ versions between scn minvalue and maxvalue
        where  d like '%,'||regexp_replace(dump(type$.toid,16),'^.* ')||',%'
        order by versions_endscn fetch first 1 rows only;
        select owner||'.'||type_name into l_name
        from   dba_types as of scn (l_scn - 1)
        where  rawtohex(type_oid) = l_toid;
        return sqlerrm||' -> '||l_name;
end;
select k, try(v, dump(v,16)) from demo.demo
/

Basically, ANYDATA stores all known datatypes in their own format, in a record, with an OID to reference the structure metadata. Here is an example where the NUMBER format is visible inside:

Who says that there is an impedance mismatch between Relational Databases and Object Oriented models? There is not. You can store objects in a relational database. But there are only a few use cases where you want a column with a generic datatype where you can store ANYDATA. For example, Advanced Queuing uses that for queued messages: you know what you put, you know what you read, but the table can store heterogeneous data without having to define one queue table for each type. Yes, this looks like inheritance and an abstract class, in a relational table.

IOT Bug

Here’s a worrying bug that showed up a couple of days ago on the Oracle-L mailing list. It’s a problem that I’ve tested against 12.2.0.1 and 19.3.0.0 – it may be present on earlier versions of Oracle. One of the nastiest things about it is that you might not notice it until you get an “out of space” error from the operating system. You won’t get any wrong results from it, but it may well be adding an undesirable performance overhead.

Basically it seems that (under some circumstances, at least) Oracle is setting the “block guess” component of the secondary index on Index Organized Tables (IOTs) to point to blocks in the overflow segment instead of blocks in the primary key segment. As a result, when you execute a query that accesses the IOT through the secondary index and has to do reads from disc to satisfy the query – your session goes through the following steps:

  • Identify index entry from secondary index – acquire “block guess”
  • Read indicated block and discover the object number on the block is wrong, and the block type is wrong
  • Write a (silent) ORA-01410 error and do a block dump into the trace file
  • Use the “logical rowid” from the secondary index (i.e. the stored primary key value) to access the primary key index by key value

So your query runs to completion and you get the right result because Oracle eventually gets there using the primary key component stored in the secondary index, but it always starts with the guess [see sidebar] and for every block you read into the cache because of the guess you get a dump to the trace file.

Here’s a little code to demonstrate. The problem with this code is that everything appears to work perfectly; you have to be able to find the trace file for your session to see what’s gone wrong. First we create some data – this code is largely copied from the original posting on Oracle-L, with a few minor changes:


rem
rem     Script:         iot_bug_12c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem
rem     Notes
rem     The OP had tested on 19.5.0.0 to get the same effect, see:
rem     //www.freelists.org/post/oracle-l/IOT-cannot-get-valid-consensus-bug-or-unexplained-behavio
rem

drop table randomload purge;

create table randomload(
        roll number,
        name varchar2(40),
        mark1 number,
        mark2 number,
        mark3 number,
        mark4 number,
        mark5 number,
        mark6 number,
        primary key (roll)
) 
organization index 
including mark3 overflow
;

create index randomload_idx on randomload(mark6);

insert into randomload 
select 
        rownum, 
        dbms_random.string(0,40) name, 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,100)), 
        round(dbms_random.value(0,10000)) 
from 
        dual 
connect by 
        level < 1e5 -- > comment to avoid wordpress format issue
;

commit;

exec dbms_stats.gather_table_stats(null,'randomload', cascade=>true);

prompt  ==================================================
prompt  pct_direct_access should be 100 for randomload_idx
prompt  ==================================================

select 
        table_name, index_name, num_rows, pct_direct_access, iot_redundant_pkey_elim  
from 
        user_indexes
where
        table_name = 'RANDOMLOAD'
;

It should take just a few seconds to build the data set and you should check that the pct_direct_access is 100 for the index called randomload_idx.

The next step is to run a query that will do an index range scan on the secondary index.

 
column mark6 new_value m_6

select 
        mark6, count(*) 
from
        randomload 
group by 
        mark6
order by 
        count(*)
fetch first 5 rows only
;

alter system flush buffer_cache;
alter session set events '10046 trace name context forever, level 8';
set serveroutput off

select avg(mark3) 
from 
        randomload 
where 
        mark6 = &m_6
;

select * from table(dbms_xplan.display_cursor);

alter session set events '10046 trace name context off';
set serveroutput on

I’ve started by selecting one of the least frequently occurring values of mark6 (a column I know to be in the overflow); then I’ve flushed the buffer cache so that any access I make to the data will have to start with disk reads (the original poster suggested restarting the database at this point, but that’s not necessary).

Then I’ve enabled sql_trace to show wait states (to capture details of what blocks were read and which object they belong to), and I’ve run a query for mark3 (a column that is in the primary key (TOP) segment of the IOT) and pulled its execution plan from memory to check that the query did use a range scan of the secondary index. Here’s the plan:

----------------------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |       |       |    11 (100)|          |
|   1 |  SORT AGGREGATE    |                   |     1 |     7 |            |          |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_77298 |    10 |    70 |    11   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN| RANDOMLOAD_IDX    |    10 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("MARK6"=1316)
   3 - access("MARK6"=1316)

As you can see the plan shows what we are hoping to see – an index range scan of the secondary index that lets it follow up with a unique scan of the primary key segment. It’s just a little odd that the access predicate reported for operation 2 (unique scan of TOP) suggests that the access is on a column that isn’t in the primary key and isn’t even in the TOP section.

So the query works and gives the right answer. But what do we find in the trace directory? If you’re running 12c (possibly only 12.2), each time the error occurs the following pattern of information will be written to the alert log (it didn’t appear in 19.3):


ORCL(3):Hex dump of (file 22, block 16747) in trace file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_7888.trc
ORCL(3):
ORCL(3):Corrupt block relative dba: 0x0580416b (file 22, block 16747)
ORCL(3):Bad header found during multiblock buffer read (logical check)
ORCL(3):Data in bad block:
ORCL(3): type: 6 format: 2 rdba: 0x0580416b
ORCL(3): last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
ORCL(3): spare3: 0x0
ORCL(3): consistency value in tail: 0x484c0601
ORCL(3): check value in block header: 0x4408
ORCL(3): computed block checksum: 0x0
ORCL(3):

And the following pattern of information is written to the trace file [Update: a follow-up test on 11.2.0.4 suggests that the basic “wrong block address” error also happens in that version of Oracle, but doesn’t result in a dump to the trace file]:


kcbzibmlt:: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 makecr 0 line 15461 ds_blk (22, 16747) bh_blk (22, 16747)
kcbds 0x7ff1ca8c0b30: pdb 3, tsn 8, rdba 0x0580416b, afn 22, objd 135348, cls 1, tidflg 0x8 0x80 0x0
    dsflg 0x108000, dsflg2 0x0, lobid 0x0:0, cnt 0, addr 0x0, exf 0x10a60af0, dx 0x0, ctx 0
    whr: 'qeilwh03: qeilbk'
env [0x7ff1ca8e3e54]: (scn: 0x00000b860e364893   xid: 0x0000.000.00000000  uba: 0x00000000.0000.00  statement num=0  parent xid:  0x0000.000.00000000  st-scn: 0x0000000000000000  hi-scn: 0x0000000000000000  ma-scn: 0x00000b860e364879  flg: 0x00000660)
BH (0xb1fd6278) file#: 22 rdba: 0x0580416b (22/16747) class: 1 ba: 0xb1c34000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 763,14
  dbwrid: 0 obj: 135348 objn: 135348 tsn: [3/8] afn: 22 hint: f
  hash: [0x9eff0528,0x77cff808] lru: [0xb1fd2578,0x9ff84658]
  ckptq: [NULL] fileq: [NULL]
  objq: [0xb6f654c0,0x9ff84680] objaq: [0xb6f654d0,0x9ff84690]
  use: [0x77b78128,0x77b78128] wait: [NULL]
  st: READING md: EXCL tch: 0
  flags: only_sequential_access
  Printing buffer operation history (latest change first):
  cnt: 5
  01. sid:10 L122:zgb:set:st          02. sid:10 L830:olq1:clr:WRT+CKT
  03. sid:10 L951:zgb:lnk:objq        04. sid:10 L372:zgb:set:MEXCL
  05. sid:10 L123:zgb:no:FEN          06. sid:10 L083:zgb:ent:fn
  07. sid:08 L192:kcbbic2:bic:FBD     08. sid:08 L191:kcbbic2:bic:FBW
  09. sid:08 L604:bic2:bis:REU        10. sid:08 L190:kcbbic2:bic:FAW
  11. sid:08 L602:bic1_int:bis:FWC    12. sid:08 L822:bic1_int:ent:rtn
  13. sid:08 L832:oswmqbg1:clr:WRT    14. sid:08 L930:kubc:sw:mq
  15. sid:08 L913:bxsv:sw:objq        16. sid:08 L608:bxsv:bis:FBW
Hex dump of (file 22, block 16747)

   ... etc.

Corrupt block relative dba: 0x0580416b (file 22, block 16747)
Bad header found during multiblock buffer read (logical check)
Data in bad block:
 type: 6 format: 2 rdba: 0x0580416b
 last change scn: 0x0000.0b86.0e36484c seq: 0x1 flg: 0x06
 spare3: 0x0
 consistency value in tail: 0x484c0601
 check value in block header: 0x4408
 computed block checksum: 0x0
TRCMIR:kcf_reread     :start:  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf
TRCMIR:kcf_reread     :done :  16747:0:/u01/app/oracle/oradata/orcl12c/orcl/test_8k_assm.dbf

The nasty bit, of course, is the bit I’ve removed and replaced with just “etc.”: it’s a complete block dump (raw and symbolic) which in my example was something like 500 lines and 35KB in size.

It’s not immediately obvious exactly what’s going on and why, but the 10046 trace helps a little. From another run of the test (on 19.3.0.0) I got the following combination of details – which is an extract showing the bit of the wait state trace leading into the start of the first block dump:

WAIT #140478118667016: nam='db file scattered read' ela= 108 file#=13 block#=256 blocks=32 obj#=77313 tim=103574529210
WAIT #140478118667016: nam='db file scattered read' ela= 2236 file#=13 block#=640 blocks=32 obj#=77313 tim=103574531549
WAIT #140478118667016: nam='db file scattered read' ela= 534 file#=13 block#=212 blocks=32 obj#=77312 tim=103574532257
kcbzibmlt: encounter logical error ORA-1410, try re-reading from other mirror..
cursor valid? 1 warm_up abort 0 makecr 0 line 16082 ds_blk (13, 212) bh_blk (13, 212)

Object 77313 is the secondary index, object 77312 is the primary key index (IOT_TOP). It may seem a little odd that Oracle is using db file scattered reads of 32 blocks to read the indexes, but this is a side effect of flushing the buffer cache – Oracle may decide to prefetch many extra blocks of an object to “warm up” the cache just after instance startup or a flush of the buffer cache. The thing I want to check, though, is what’s wrong with the blocks that Oracle read from object 77312:


alter system dump datafile 13 block min 212 block max 243;

BH (0xc8f68e68) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xc8266000
  set: 10 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa7fd6c38) file#: 13 rdba: 0x034000d4 (13/212) class: 1 ba: 0xa7c2a000
  set: 12 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xa5f75780) file#: 13 rdba: 0x034000d5 (13/213) class: 0 ba: 0xa5384000
  set: 11 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

BH (0xdafe9220) file#: 13 rdba: 0x034000d5 (13/213) class: 1 ba: 0xdadcc000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77311 objn: 77311 tsn: [3/6] afn: 13 hint: f

I’ve reported the first few lines of the symbolic dump for the first few blocks of the resulting trace file. Look at the third line of each group of three BH lines: it’s reporting object 77311 (the overflow segment), not 77312 (the TOP segment). And every single block reported in the db file scattered read of 32 blocks for object 77312 reports itself, when dumped, as being part of object 77311. And that’s possibly the immediate cause of the ORA-01410.

We can take the investigation a little further by dumping a leaf block or two from the secondary index.


alter session set events 'immediate trace name treedump level 77313';

----- begin tree dump
branch: 0x3400104 54526212 (0: nrow: 542, level: 1)
   leaf: 0x340010d 54526221 (-1: row:278.278 avs:2479)
   leaf: 0x340075e 54527838 (0: row:132.132 avs:5372)
   leaf: 0x34005fb 54527483 (1: row:41.41 avs:7185)

alter system dump datafile 13 block 1886   -- leaf: 0x340075e

BH (0xd5f5d090) file#: 13 rdba: 0x0340075e (13/1886) class: 1 ba: 0xd5158000
  set: 9 pool: 3 bsz: 8192 bsi: 0 sflg: 2 pwc: 0,15
  dbwrid: 0 obj: 77313 objn: 77313 tsn: [3/6] afn: 13 hint: f
...
row#6[5796] flag: K------, lock: 0, len=18
col 0; len 2; (2):  c1 1d
col 1; len 4; (4):  c3 07 41 5c
tl: 8 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 4]  03 40 05 7c

I’ve done a treedump of the secondary index and picked a leaf block address from the treedump and dumped that leaf block, and from that leaf block I’ve extracted one index entry to show you the three components: the key value (c1 1d), the primary key for the row (c3 07 41 5c), and the block guess (03 40 05 75). Read the block guess as a 4 byte hex number, and it translates to file 13, block 1397 – which should belong to the TOP segment. So the exciting question is – what object does block (13, 1397) think it belongs to ?
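
As a side note, you don’t have to decode the guess by hand: DBMS_UTILITY will split a data block address into its file and block components. A quick sketch, using the guess value discussed here:

select
        dbms_utility.data_block_address_file(to_number('03400575','xxxxxxxx'))  guess_file,
        dbms_utility.data_block_address_block(to_number('03400575','xxxxxxxx')) guess_block
from    dual;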


alter system dump datafile 13 block 1397;

Block header dump:  0x03400575
 Object id on Block? Y
 seg/obj: 0x12dff  csc:  0x00000b860e308c46  itc: 2  flg: E  typ: 1 - DATA
     brn: 0  bdba: 0x3400501 ver: 0x01 opc: 0
     inc: 0  exflg: 0

Converting from Hex to Decimal: obj: 0x12dff = 77311 which is the overflow segment. The secondary index block guess is pointing at a block in the overflow segment.
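
If you don’t want to do the hex arithmetic in your head, a one-liner does the conversion:

select to_number('12dff','xxxxx') data_object_id from dual;    -- returns 77311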

There are two ways to handle this problem – you could simply rebuild the index (alter index rebuild) or, as the original poster did, use the “update block references” command to correct all the block guesses: “alter index randomload_idx update block references;”. Neither is desirable, but if you’re seeing a lot of large trace files following the pattern above then it may be necessary.

There was one particular inconsistency in the tests – which I ran many times – occasionally the pct_direct_access for the secondary index would be reported as zero (which, technically, should always happen given the error).  If it did, of course, Oracle wouldn’t follow the guess but would go straight to the step where it used the primary key “logical rowid” – thus bypassing the error and block dump.

tl;dr

In some circumstances the block guesses in the secondary indexes of IOTs may be pointing to the overflow segment instead of the primary key (TOP) segment. If this happens then queries will still run and give the right answers, but whenever they read a “guessed” block from disc they will report an ORA-01410 error and dump a block trace. This will affect performance and may cause space problems at the O/S level.

Sidebar

An entry in the secondary index of an Index Organized Table (IOT) consists of three parts, which initially we can think of in the form:

({key-value}, {logical rowid}, {block guess})

Since IOTs don’t have real rowids the “logical rowid” is actually the primary key of the row where the {key value} will be found. As a short cut for efficient execution Oracle includes the block address (4 bytes) where that primary key value was stored when the row was inserted. Because an IOT is an index, “rows” in the IOT can move as new data is inserted and leaf blocks split, so eventually any primary key may move to a different block – this is why we refer to the block address as a guess – a few days, hours, or minutes after you’ve inserted the row the block address may no longer be correct.

To help the runtime engine do the right thing Oracle collects a statistic called pct_direct_access for secondary indexes of IOTs. This is a measure of what percentage of the block guesses are still correct at the time that the statistics are gathered. If this value is high enough the run-time engine will choose to try using the block guesses while executing a query (falling back to using the logical rowid if it turns out that the guess is invalid), but if the value drops too low the optimizer will ignore the block guesses and only use the logical rowid.
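
If you want a quick way to look for secondary indexes whose guesses have gone stale, a sketch along the following lines should do – pct_direct_access is only populated for the secondary indexes of IOTs, so the NOT NULL filter keeps ordinary indexes out of the report (an assumption worth verifying on your own version):

select  owner, table_name, index_name, pct_direct_access
from    dba_indexes
where   pct_direct_access is not null
and     pct_direct_access < 100
order by
        pct_direct_access
;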

Not relevant to this note – but a final point about secondary indexes and logical rowids – if the definition of the index includes some of the columns from the primary key, Oracle won’t store those columns twice (in more recent versions, that is) – the code is clever enough to use the values stored in the {key-value} component when it needs to use the {logical rowid} component.

 

Update (Jan 2020)

I passed this example on to Oracle, and there are now two bugs recorded for it:

  • Bug 30733525 – ALERT LOG ENTRIES RE BLOCK GUESSES IN THE SECONDARY INDEXES OF IOTS POINTING TO OVERFLOW SEGMENT INSTEAD OF INDEX SEGMENT
  • Bug 30733563 – WRONG GUESS DBA IN INDEX

 

 

 

Extended Column Group Statistics, Composite Index Statistics, Histograms and an EDB360 Enhancement to Detect the Coincidence

In this post:

  • A simple demonstration to show the behaviour of extended statistics and how it can be disabled by the presence of histograms.  None of this is new, there are many other blog posts on this topic. I provide links to some of them.
  • I have added an enhancement to the EDB360 utility to detect histograms on columns in extended statistics.

Introduction

'Extended statistics were introduced in Oracle 11g to allow statistics to be gathered on groups of columns, to highlight the relationship between them, or on expressions. Oracle 11gR2 makes the process of gathering extended statistics for column groups easier'. [Tim Hall: https://oracle-base.com/articles/11g/extended-statistics-enhancements-11gr2]

Example 1: Cardinality from Extended Statistics

Without extended statistics, Oracle will simply multiply column cardinalities together.  Here is a simple example.  I will create a table with 10000 rows, where two columns hold identical values – 100 distinct values, each repeated 100 times – so they correlate perfectly.  I will gather statistics, but no histograms.

create table t
(k number
,a number
,b number
,x varchar2(1000)
);

insert into t
with n as (select rownum n from dual connect by level <= 100)
select rownum, a.n, a.n, TO_CHAR(TO_DATE(rownum,'J'),'Jsp')
from n a, n b
order by a.n, b.n;

exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE 1');

I will deliberately disable optimizer feedback so that Oracle cannot learn from experience about the cardinality misestimates.

alter session set statistics_level=ALL;
alter session set "_optimizer_use_feedback"=FALSE;

select count(*) from t where a = 42 and b=42;

COUNT(*)
----------
100

Oracle estimates that it will get 1 row but actually gets 100.
It estimates 1 because 1/100 * 1/100 * 10000 rows = 1.

select * from table(dbms_xplan.display_cursor(null,null,format=>'ADVANCED +ALLSTATS LAST, IOSTATS -PROJECTION -OUTLINE'));

Plan hash value: 1071362934
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |         |    21 (100)|          |      1 |00:00:00.01 |      73 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      26 |            |          |      1 |00:00:00.01 |      73 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |      1 |      26 |    21   (0)| 00:00:01 |    100 |00:00:00.01 |      73 |
-----------------------------------------------------------------------------------------------------------------------

Now I will create extended statistics on the column group.  I can do that in one of two ways:

  • either by explicitly creating the definition and then creating them by gathering statistics:
SELECT dbms_stats.create_extended_stats(ownname=>user, tabname=>'t', extension=>'(a,b)')
FROM dual;
exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE 1');
  • Or, I can create extended statistics directly in one go by specifying them in the method_opt clause.
exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE 1, FOR COLUMNS SIZE 1 (A,B)');

Now Oracle correctly estimates that the same query will fetch 100 rows because it directly knows the cardinality for the two columns in the query.

Plan hash value: 1071362934
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |         |    21 (100)|          |      1 |00:00:00.01 |      73 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |       6 |            |          |      1 |00:00:00.01 |      73 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |    100 |     600 |    21   (0)| 00:00:01 |    100 |00:00:00.01 |      73 |
-----------------------------------------------------------------------------------------------------------------------

Example 2: Cardinality from Index Statistics

I can get exactly the same effect by creating an index on the two columns.

drop table t purge;

create table t
(k number
,a number
,b number
,x varchar2(1000)
);

insert into t
with n as (select rownum n from dual connect by level <= 100)
select rownum, a.n, a.n, TO_CHAR(TO_DATE(rownum,'J'),'Jsp')
from n a, n b
order by a.n, b.n;

create index t_ab on t(a,b) compress;

This time I have not collected any statistics on the table.  Statistics are automatically collected on the index when it is built.  I have used a hint to stop the query from using the index to look up the rows; nonetheless, Oracle has correctly estimated that it will get 100 rows because it has used the number of distinct keys from the index statistics.

SQL_ID  711banpfgfa18, child number 0
-------------------------------------
select /*+FULL(t)*/ count(*) from t where a = 42 and b=42

Plan hash value: 1071362934
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |         |    21 (100)|          |      1 |00:00:00.01 |      74 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      26 |            |          |      1 |00:00:00.01 |      74 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |    100 |    2600 |    21   (0)| 00:00:01 |    100 |00:00:00.01 |      74 |
-----------------------------------------------------------------------------------------------------------------------

Note that there is nothing in the execution plan to indicate that the index statistics were used to estimate the number of rows returned!

Example 3: Histograms Disable the Use of Extended Statistics

There have long been blogs that refer to behaviour that Oracle has documented as Bug 6972291: Column group selectivity is not used when there is a histogram on one column:
'As of 10.2.0.4 CBO can use the selectivity of column groups but this option is disabled if there is a histogram defined on any of the columns of the column group.
Note:  This fix is disabled by default. To enable the fix set "_fix_control"="6972291:ON"
When ENABLED the code will use multi-column stats regardless of whether there is a histogram on one of the columns or not.  When DISABLED (default) CBO will not use multi-column stats if there is a histogram on one of the columns in the column group.'

  • Christian Antognini, 2014: https://antognini.ch/2014/02/extension-bypassed-because-of-missing-histogram/
  • Jonathan Lewis, 2012: https://jonathanlewis.wordpress.com/2012/04/11/extended-stats/
    • Maria Colgan also commented: 'This … was a deliberate design decision to prevent over-estimations when one of the values supplied is ‘out of range’. We can’t ignore the ‘out of range’ scenario just because we have a column group. Extended statistics do not contain the min, max values for the column group so we rely on the individual column statistics to check for ‘out of range’ scenarios like yours. When one of the columns is ‘out of range’, we revert back to the column statistics because we know it is going to generate a lower selectivity range and if one of the columns is ‘out of range’ then the number of rows returned will be lower or none at all, as in your example'
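
For what it's worth, the fix control mentioned in that note can be set at session level to test the difference – a sketch (underscore parameters and fix controls are normally only changed under the guidance of Oracle Support):

alter session set "_fix_control" = '6972291:ON';

select bugno, value, description
from   v$session_fix_control
where  bugno = 6972291;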

In this example, I explicitly create a histogram on one of the columns in my extended statistics.  However, in the real world that can happen automatically if the application references one column and not another.

exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 100 A, FOR COLUMNS SIZE 1 B (A,B)');

My cardinality estimate goes back to 1 because Oracle does not use the extended statistics in the presence of a histogram on any of the constituent columns.  Exactly the same happens if the number of distinct values on the combination of columns comes from composite index statistics.  A histogram similarly disables their use.

SQL_ID  8trj2kacqhm6f, child number 1
-------------------------------------
select count(*) from t where a = 42 and b=42

Plan hash value: 1071362934
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |         |    21 (100)|          |      1 |00:00:00.01 |      73 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |       6 |            |          |      1 |00:00:00.01 |      73 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |      1 |       6 |    21   (0)| 00:00:01 |    100 |00:00:00.01 |      73 |
-----------------------------------------------------------------------------------------------------------------------

This is likely to happen in real-life systems because histograms can be automatically created when statistics are gathered.

Out-of-Range Predicates

If you have one or more predicates on columns that are part of an extended statistic, and a predicate goes out of range when compared to the column statistics, then Oracle still doesn’t use the extended statistics (see also https://jonathanlewis.wordpress.com/2012/04/11/extended-stats/), irrespective of whether there is a histogram or not, or whether fix control 6972291 is set or not.
The extended statistic is implemented as a virtual column whose value is derived from SYS_OP_COMBINED_HASH().  You can see this in the default data value for the column.  Therefore the optimizer cannot use the minimum/maximum value for the column group (see also https://jonathanlewis.wordpress.com/2018/08/02/extended-histograms-2/).
Instead, Oracle applies linear decay to the density of the individual column predicates, and if there is a frequency or top-frequency histogram then it uses half the density of the lowest-frequency bucket and applies linear decay to that.
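
You can see the hidden virtual column that implements the column group in the data dictionary (the system-generated name will vary); its DATA_DEFAULT shows the SYS_OP_COMBINED_HASH("A","B") expression. A quick sketch:

select column_name, hidden_column, virtual_column, data_default
from   user_tab_cols
where  table_name = 'T'
and    hidden_column = 'YES';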

Example 4: Extended Histograms

This time I will create a histogram on my extended statistics as well as histograms on the underlying columns.

exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 100 A B (A,B)');

I am back to getting the correct cardinality estimate.

SQL_ID  8trj2kacqhm6f, child number 0
-------------------------------------
select count(*) from t where a = 42 and b=42

Plan hash value: 1071362934
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |         |    21 (100)|          |      1 |00:00:00.01 |      73 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |       6 |            |          |      1 |00:00:00.01 |      73 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |    100 |     600 |    21   (0)| 00:00:01 |    100 |00:00:00.01 |      73 |
-----------------------------------------------------------------------------------------------------------------------

And this is also something that has been blogged about previously.

Threats

In this blog, Jonathan Lewis comments (https://jonathanlewis.wordpress.com/2012/04/11/extended-stats/) on certain weaknesses in the implementations.  He also references other bloggers.
Creating or removing histograms on columns in either extended statistics or composite indexes may result in execution plans changing, because the use of those extended statistics may be disabled or re-enabled.  This could happen automatically when gathering statistics as data skew and predicate usage change.
If I drop a composite index, maybe because it is not used, or maybe it is redundant because it is a subset of another index, then I should replace it with an extended histogram on the same set of columns.
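
As an illustration of that last point, a minimal sketch using the demo table T and index T_AB from Example 2 (the method_opt shown is just one reasonable choice; pick the SIZE clause according to whether you also want a histogram on the column group):

drop index t_ab;

select dbms_stats.create_extended_stats(ownname=>user, tabname=>'T', extension=>'(A,B)')
from   dual;

exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE AUTO (A,B)');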

Detecting the Problem

I have added a report to section 3c of EDB360 to detect the problem.  The SQL query is shown below.  It will report on histograms on columns in either:

  • composite indexes where there are no extended column group statistics, or
  • extended column group statistics for which there are no histograms.
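
The EDB360 query itself is in the utility; purely as an illustration of the kind of dictionary check involved, a sketch that lists histograms on columns belonging to a column group extension might look like this (the corresponding check for composite indexes, joining through DBA_IND_COLUMNS, is analogous):

select e.owner, e.table_name, e.extension_name, c.column_name,
       c.num_distinct, c.num_buckets, c.histogram
from   dba_stat_extensions e
join   dba_tab_col_statistics c
on     c.owner = e.owner
and    c.table_name = e.table_name
where  e.extension like '%"'||c.column_name||'"%'
and    c.histogram <> 'NONE'
order by e.owner, e.table_name, e.extension_name, c.column_name;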

3c.25. Columns with Histograms in Extended Statistics (DBA_STAT_EXTENSIONS)

 #  Table Owner  Table Name   Object Type  Index/Extension Name              NDV  Buckets  Extension                      Column Name  Col NDV  Col Buckets  Histogram
--  -----------  -----------  -----------  ------------------------------  -----  -------  -----------------------------  -----------  -------  -----------  ---------
 1  HR           JOB_HISTORY  Index        JHIST_EMP_ID_ST_DATE_PK            10           ("EMPLOYEE_ID","START_DATE")   EMPLOYEE_ID        7            7  FREQUENCY
 2  OE           INVENTORIES  Index        INVENTORY_IX                     1112           ("WAREHOUSE_ID","PRODUCT_ID")  PRODUCT_ID       208          208  FREQUENCY
 3  OE           ORDER_ITEMS  Index        ORDER_ITEMS_PK                    665           ("ORDER_ID","LINE_ITEM_ID")    ORDER_ID         105          105  FREQUENCY
 4  OE           ORDER_ITEMS  Index        ORDER_ITEMS_UK                    665           ("ORDER_ID","PRODUCT_ID")      ORDER_ID         105          105  FREQUENCY
 5  OE           ORDER_ITEMS  Index        ORDER_ITEMS_UK                    665           ("ORDER_ID","PRODUCT_ID")      PRODUCT_ID       185          185  FREQUENCY
 6  SCOTT        T            Extension    SYS_STUNA$6DVXJXTP05EH56DTIR0X    100        1  ("A","B")                      A                100          100  FREQUENCY
 7  SCOTT        T            Extension    SYS_STUNA$6DVXJXTP05EH56DTIR0X    100        1
("A","B")
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
B
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
100
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
100
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
FREQUENCY
#336699 1.0pt; mso-border-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
8
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
SOE
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
ORDERS
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
Index
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
ORD_WAREHOUSE_IX
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
10270
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
("WAREHOUSE_ID","ORDER_STATUS")
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
ORDER_STATUS
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
10
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
10
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
FREQUENCY
#336699 1.0pt; mso-border-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
9
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
SOE
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
ORDER_ITEMS
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
Index
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
ORDER_ITEMS_PK
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
13758515
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
("ORDER_ID","LINE_ITEM_ID")
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
LINE_ITEM_ID
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
7
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
7
#336699 1.0pt; border-left: none; border-right: solid #336699 1.0pt; border-top: none; mso-border-alt: solid #336699 .75pt; mso-border-left-alt: solid #336699 .75pt; mso-border-top-alt: solid #336699 .75pt;" valign="top">
FREQUENCY

Just because something is reported by this test does not necessarily mean that you need to change anything.

  • Provided fix control 6972291 is not enabled, should you wish to drop or alter any reported index, you at least know that it cannot be used to provide column group statistics, though you would still need to consider SQL that might use the index directly.
  • You might choose to add column group histograms, and sometimes that will involve adding the column group (extended) statistics first.  However, the number of distinct values on the column group will usually be higher than on the individual columns and can easily be greater than the number of buckets you can have in a frequency histogram.  In such cases, from 12c, you may end up with either a Top-frequency histogram or a hybrid histogram.
  • Or you might choose to remove the histograms from the individual columns so that the column group statistics are used.
  • Or you might choose to enforce the status quo by setting table statistics preferences to ensure that currently existing histograms are preserved and currently non-existent histograms are not introduced.

Whatever you choose to do regarding statistics and histogram collection, I would certainly recommend doing so declaratively, by defining a table statistics preference.  For example, here I will preserve the histograms on the columns in the column group, but I will also build a histogram on the column group:

exec dbms_stats.gather_table_stats(user,'T',method_opt=>'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 254 A B (A,B)');
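
To make that choice persist across future default gathers, one option (a minimal sketch using the standard DBMS_STATS preference API, not something taken from the original example) is to record the same METHOD_OPT as a table preference:

-- Sketch only: store the METHOD_OPT above as a table statistics preference
-- so that subsequent default gathers keep honouring it.
begin
  dbms_stats.set_table_prefs(
    ownname => user,
    tabname => 'T',
    pname   => 'METHOD_OPT',
    pvalue  => 'FOR ALL COLUMNS SIZE AUTO, FOR COLUMNS SIZE 254 A B (A,B)'
  );
end;
/
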
  • Or, you might even enable the fix_control. You can also do that at session level or even statement level (but beware of disabling any other fix controls that may be set). 
SQL_ID  16judk2v0uf7w, child number 0
-------------------------------------
select /*+FULL(t) OPT_PARAM('_fix_control','6972291:on')*/ count(*)
from t where a = 42 and b=42

Plan hash value: 1071362934
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |       |    21 (100)|          |      1 |00:00:00.01 |      73 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      6|            |          |      1 |00:00:00.01 |      73 |
|*  2 |   TABLE ACCESS FULL| T    |      1 |    100 |    600|    21   (0)| 00:00:01 |    100 |00:00:00.01 |      73 |
---------------------------------------------------------------------------------------------------------------------

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('19.1.0')
DB_VERSION('19.1.0')
OPT_PARAM('_fix_control' '6972291:1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "T"@"SEL$1")
END_OUTLINE_DATA
*/
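
For the session-level option mentioned in the bullet above, a minimal sketch (assuming you have the privilege to change the underscore parameter at session level) would be:

-- Sketch only: switch the fix control on for the current session, and back
-- off again afterwards, rather than hinting each statement individually.
alter session set "_fix_control" = '6972291:1';

-- ... run the affected SQL ...

alter session set "_fix_control" = '6972291:0';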

EDB360 Test Query

This is the SQL query that produces the report in EDB360.

WITH i as ( /*composite indexes*/
SELECT i.table_owner, i.table_name, i.owner index_owner, i.index_name, i.distinct_keys
, '('||(LISTAGG('"'||c.column_name||'"',',') WITHIN GROUP (order by c.column_position))||')' column_list
FROM dba_indexes i
, dba_ind_columns c
WHERE i.table_owner = c.table_owner
AND i.table_name = c.table_name
AND i.owner = c.index_owner
AND i.index_name = c.index_name
AND i.table_name NOT LIKE 'BIN$%'
AND i.table_owner NOT IN ('ANONYMOUS','APEX_030200','APEX_040000','APEX_040200','APEX_SSO','APPQOSSYS','CTXSYS','DBSNMP','DIP','EXFSYS','FLOWS_FILES','MDSYS','OLAPSYS','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','OWBSYS')
AND i.table_owner NOT IN ('SI_INFORMTN_SCHEMA','SQLTXADMIN','SQLTXPLAIN','SYS','SYSMAN','SYSTEM','TRCANLZR','WMSYS','XDB','XS$NULL','PERFSTAT','STDBYPERF','MGDSYS','OJVMSYS')
GROUP BY i.table_owner, i.table_name, i.owner, i.index_name, i.distinct_keys
HAVING COUNT(*) > 1 /*index with more than one column*/
), e as ( /*extended stats*/
SELECT e.owner, e.table_name, e.extension_name
, CAST(e.extension AS VARCHAR(1000)) extension
, se.histogram, se.num_buckets, se.num_distinct
FROM dba_stat_extensions e
, dba_tab_col_statistics se
WHERE e.creator = 'USER'
AND se.owner = e.owner
AND se.table_name = e.table_name
AND se.column_name = e.extension_name
AND e.table_name NOT LIKE 'BIN$%'
AND e.owner NOT IN ('ANONYMOUS','APEX_030200','APEX_040000','APEX_040200','APEX_SSO','APPQOSSYS','CTXSYS','DBSNMP','DIP','EXFSYS','FLOWS_FILES','MDSYS','OLAPSYS','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','OWBSYS')
AND e.owner NOT IN ('SI_INFORMTN_SCHEMA','SQLTXADMIN','SQLTXPLAIN','SYS','SYSMAN','SYSTEM','TRCANLZR','WMSYS','XDB','XS$NULL','PERFSTAT','STDBYPERF','MGDSYS','OJVMSYS')
)
SELECT e.owner, e.table_name
, 'Extension' object_type
, e.extension_name object_name, e.num_distinct, e.num_buckets, e.extension
, sc.column_name
, sc.num_distinct col_num_distinct
, sc.num_buckets col_num_buckets
, sc.histogram col_histogram
FROM e
, dba_tab_col_statistics sc
WHERE e.histogram = 'NONE'
AND e.extension LIKE '%"'||sc.column_name||'"%'
AND sc.owner = e.owner
AND sc.table_name = e.table_name
AND sc.histogram != 'NONE'
AND sc.num_buckets > 1 /*histogram on column*/
AND e.num_buckets = 1 /*no histogram on extended stats*/
UNION ALL
SELECT /*+ NO_MERGE */ /* 3c.25 */
i.table_owner, i.table_name
, 'Index' object_type
, i.index_name object_name, i.distinct_keys, TO_NUMBER(null), i.column_list
, sc.column_name
, sc.num_distinct col_num_distinct
, sc.num_buckets col_num_buckets
, sc.histogram col_histogram
From i
, dba_ind_columns ic
, dba_tab_col_statistics sc
WHERE ic.table_owner = i.table_owner
AND ic.table_name = i.table_name
AND ic.index_owner = i.index_owner
AND ic.index_name = i.index_name
AND sc.owner = i.table_owner
AND sc.table_name = ic.table_name
AND sc.column_name = ic.column_name
AND sc.histogram != 'NONE'
AND sc.num_buckets > 1 /*histogram on column*/
AND NOT EXISTS( /*report index if no extension*/
SELECT 'x'
FROM e
WHERE e.owner = i.table_owner
AND e.table_name = i.table_name
AND e.extension = i.column_list)
ORDER BY 1,2,3,4;

Listener log data mining with SQL

If you take a look at the log files created by the listener, there is obviously a nice wealth of information in there. We get service updates, connections and so forth, all of which might be useful, particularly in terms of auditing and security.

However, it is also in a fairly loose text format, which means ideally I’d like to utilise the power of SQL to mine the data.


16-DEC-2019 09:54:20 * service_update * db18 * 0
2019-12-16T09:54:23.563403+08:00
16-DEC-2019 09:54:23 * service_update * db18 * 0
2019-12-16T09:54:34.442995+08:00
16-DEC-2019 09:54:34 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61716)) * establish * pdb1 * 0
2019-12-16T09:54:38.578569+08:00
16-DEC-2019 09:54:38 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=pdb1)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61720)) * establish * pdb1 * 0
16-DEC-2019 09:54:38 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=db18)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61721)) * establish * db18 * 0
2019-12-16T09:55:04.116322+08:00
16-DEC-2019 09:55:04 * (CONNECT_DATA=(CID=(PROGRAM=null)(HOST=__jdbc__)(USER=null))(SERVICE_NAME=db18)) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=61742)) * establish * db18 * 0

It is easy to create an external table that simply grabs each line of the log file and returns it as a row in a table. We can see that below, returning a single column called LINE.


SQL> create or replace directory LISTENER_LOG as 'C:\oracle\diag\tnslsnr\gtx\listener18\trace';

Directory created.

SQL> create table listener
  2  (
  3     line varchar2(4000)
  4  )
  5  organization external (
  6     type   oracle_loader
  7     default directory LISTENER_LOG
  8     access parameters
  9     (
 10        records delimited by newline
 11        nobadfile
 12        nologfile
 13        nodiscardfile
 14        fields ldrtrim
 15        missing field values are null
 16        reject rows with all null fields
 17        (
 18            line char(4000)
 19        )
 20     )
 21     location ('listener.log')
 22  )
 23  reject limit unlimited
 24  /

Table created.

SQL>
SQL> select * from listener where rownum <= 20;

LINE
----------------------------------------------------------------------------------------------------------------------------------
2019-02-18T23:48:42.647524-08:00
Create Relation ADR_CONTROL
Create Relation ADR_INVALIDATION
Create Relation INC_METER_IMPT_DEF
Create Relation INC_METER_PK_IMPTS
LISTENER for Linux: Version 19.0.0.0.0 - Production
Version 19.2.0.0.0
System parameter file is /u01/app/oracle/product/19.2.0/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/db192/listener/alert/log.xml
Trace information written to /u01/app/oracle/diag/tnslsnr/db192/listener/trace/ora_4087_140671000284160.trc

While I have achieved something by having this data exposed as a table, there are still some challenges to face. It now becomes our job to extract the metadata from each line. For example, to appropriately timestamp each row, I need to look for date-formatted strings and use some analytics to carry that date down through subsequent rows to assign it to the rest of the data. In the example below I’ve done that with LAST_VALUE, and given an example of how we could extract the HOST metadata.


SQL> with details as
  2  (
  3  select last_value(tstamp ignore nulls) over ( order by tstamp ) as tstamp,
  4         substr(host,1,instr(host,')')-1) host
  5  from
  6    ( select
  7        case when line like '__-___-____ __:__:__ %' then to_date(substr(line,1,20),'DD-MON-YYYY HH24:MI:SS') end tstamp,
  8        case when line like '%HOST=%' then substr(line,instr(line,'HOST=')+5) end host
  9      from listener
 10    )
 11  )
 12  select  *
 13  from details
 14  where host is not null;

TSTAMP              HOST
------------------- ----------------------------------------
18/02/2019 23:48:43 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:28 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:49:29 db192.localdomain
18/02/2019 23:50:09 192.168.1.182
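
Staying with the same technique, here is a hedged sketch (my own extension of the example above, not something from the original post) that pulls the requested SERVICE_NAME out of the establish lines in the same way:

-- Sketch only: carry the timestamp down as before, and extract the
-- SERVICE_NAME requested by each incoming connection.
with details as
(
select last_value(tstamp ignore nulls) over ( order by tstamp ) as tstamp,
       substr(svc,1,instr(svc,')')-1) service_name
from
  ( select
      case when line like '__-___-____ __:__:__ %' then to_date(substr(line,1,20),'DD-MON-YYYY HH24:MI:SS') end tstamp,
      case when line like '%SERVICE_NAME=%' then substr(line,instr(line,'SERVICE_NAME=')+13) end svc
    from listener
  )
)
select *
from details
where service_name is not null;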

That kind of SQL manipulation is cool and fun, but it is also going to get complex fast! Here is perhaps an alternative way of tackling the problem. As well as the ‘trace’ folder, the listener logs are also written as XML to the ‘alert’ folder. Looking at the log.xml file in this folder, we can see the same listener log information in XML format.


<msg time='2019-11-14T15:27:35.463+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='GTX'
 host_addr='fe80::65ad:2a40:1ed2:b65a%24' pid='4696'>
 <txt>Creating new log segment:
 </txt>
</msg>
<msg time='2019-11-14T15:27:56.452+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='GTX'
 host_addr='fe80::65ad:2a40:1ed2:b65a%24' pid='4696'>
 <txt>14-NOV-2019 15:27:56 * service_update * db18 * 0
 </txt>
</msg>

Rather than using an external table, perhaps I can load that data as XML because then much of the metadata is already appropriately tagged for me. No more complex SQL!


SQL> create table t ( c clob );

Table created.

SQL> create or replace directory LISTENER_LOG as 'C:\oracle\diag\tnslsnr\gtx\listener18\alert';

Directory created.

SQL> declare
  2     l_c clob;
  3     b bfile := bfilename('LISTENER_LOG', 'log.xml');
  4     dest_offset integer := 1;
  5     src_offset integer := 1;
  6     src_csid number := nls_charset_id('UTF8');
  7     lang_context integer := dbms_lob.default_lang_ctx;
  8     warning integer;
  9
 10  begin
 11    insert into t values (empty_clob()) returning c into l_c;
 12    dbms_lob.open(b);
 13    dbms_lob.loadclobfromfile(l_c, b, dbms_lob.getlength(b), dest_offset, src_offset, src_csid, lang_context, warning);
 14    dbms_lob.close(b);
 15    commit;
 16  end;
 17  /

PL/SQL procedure successfully completed.

So far so good. I now have my listener XML log loaded as a single CLOB into a table. But what happens when I try to manipulate that data using XMLTYPE?


SQL> select xmltype(c) from t;
ERROR:
ORA-31011: XML parsing failed
ORA-19213: error occurred in XML processing at lines 7
LPX-00245: extra data after end of document
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1



no rows selected

Hmmm… no good. Initially I suspected that maybe the log files were not valid XML. So I extracted just a single MSG row from the file and tried to convert that to XMLTYPE.


SQL> delete t;

1 row deleted.

SQL> declare
  2    x clob := q'{ Creating new log segment:  }';
  3  begin
  4  delete t;
  5  insert into t values (x);
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from t;

C
--------------------------------------------------------------------------------
 Creating new log seg

1 row selected.

SQL> select xmltype(c) from t;

XMLTYPE(C)
-----------------------------------------------------------------------------------------------------------------------------

  Creating new log s


1 row selected.

So a single row is fine, but what happens if I try to repeat that with the same row duplicated? I know that the row is valid XML because I just proved that above. But look what happens.



SQL> delete t;

1 row deleted.

SQL> declare
  2    x clob := q'{  Creating new log segment:  }';
  3  begin
  4  delete t;
  5  insert into t values (x||x);
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select * from t;

C
--------------------------------------------------------------------------------
 Creating new log seg

1 row selected.

SQL> select xmltype(c) from t;
ERROR:
ORA-31011: XML parsing failed
ORA-19213: error occurred in XML processing at lines 1
LPX-00245: extra data after end of document
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1

This is an expected result, because the XMLTYPE conversion is expecting a single XML document. Hence in order to mimic that with multiple MSG tags, I need to enclose the entire clob in a single XML tag. I’ll use LISTENER
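
Before re-loading the whole file, a quick sanity check (a sketch using a pair of made-up msg elements, not content taken from the listener log) shows that wrapping multiple msg elements in a single root element is enough to keep the parser happy:

-- Sketch only: two msg elements wrapped in one root element parse cleanly.
select xmltype('<listener><msg><txt>one</txt></msg><msg><txt>two</txt></msg></listener>')
from   dual;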



SQL> delete t;

1 row deleted.

SQL> declare
  2     l_c clob;
  3     b bfile := bfilename('LISTENER_LOG', 'log.xml');
  4     dest_offset integer := 1;
  5     src_offset integer := 1;
  6     src_csid number := nls_charset_id('UTF8');
  7     lang_context integer := dbms_lob.default_lang_ctx;
  8     warning integer;
  9
 10     l_xml clob;
 11  begin
 12    dbms_lob.createtemporary(l_c,true);
 13    dbms_lob.open(b);
 14    dbms_lob.loadclobfromfile(l_c, b, dbms_lob.getlength(b), dest_offset, src_offset, src_csid, lang_context, warning);
 15    dbms_lob.close(b);
 16
 17    insert into t values (empty_clob()) returning c into l_xml;
 18    dbms_lob.writeappend(l_xml,10,'<listener>');
 19    dbms_lob.append(l_xml,l_c);
 20    dbms_lob.writeappend(l_xml,11,'</listener>');
 21
 22    commit;
 23  end;
 24  /

PL/SQL procedure successfully completed.

And now finally, the clob can be treated as an XMLTYPE without any issues.



SQL> select xmltype(c) from t;

XMLTYPE(C)
------------------------------------------------------------------------------------------------------------------------------


SQL> select xt.*
  2  from  t,
  3        xmltable('//listener/msg' passing xmltype(t.c)
  4                                       columns
  5                                           xorg_id      varchar2(10) path '@org_id',
  6                                           xcomp_id     varchar2(10) path '@comp_id',
  7                                           xtime        varchar2(30) path '@time',
  8                                           xtype        varchar2(10) path '@type',
  9                                           xlevel       varchar2(10) path '@level',
 10                                           xhost_id     varchar2(10) path '@host_id',
 11                                           xhost_addr   varchar2(30) path '@host_addr',
 12                                           xpid         varchar2(10) path '@pid',
 13                                           xtxt         varchar2(300) PATH 'txt'
 14                                   ) as xt;

XORG_ID    XCOMP_ID   XTIME                          XTYPE      XLEVEL     XHOST_ID   XHOST_ADDR                     XPID       XTXT
---------- ---------- ------------------------------ ---------- ---------- ---------- ------------------------------ ---------- ------------------------------
oracle     tnslsnr    2019-11-14T15:27:35.463+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       Creating new log segment:
oracle     tnslsnr    2019-11-14T15:27:56.452+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:27:56 * service
oracle     tnslsnr    2019-11-14T15:36:56.539+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:36:56 * service
oracle     tnslsnr    2019-11-14T15:37:35.542+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:37:35 * service
oracle     tnslsnr    2019-11-14T15:37:56.544+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:37:56 * service
oracle     tnslsnr    2019-11-14T15:38:02.545+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:38:02 * service
oracle     tnslsnr    2019-11-14T15:44:26.610+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:26 * service
oracle     tnslsnr    2019-11-14T15:44:47.613+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:47 * service
oracle     tnslsnr    2019-11-14T15:44:56.613+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:44:56 * service
oracle     tnslsnr    2019-11-14T15:47:35.640+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:47:35 * service
oracle     tnslsnr    2019-11-14T15:47:56.642+08:00  UNKNOWN    16         GTX        fe80::65ad:2a40:1ed2:b65a%24   4696       14-NOV-2019 15:47:56 * service

So there you go. Data mining your listener logs by taking advantage of the XML facilities. I bet you never thought you’d hear a database guy singing the praises of XML!


Back to the “good old days”, and other cases of denying change!

This is going to be about the technology industry, but I’m going to liken things to what’s going on here in the UK…

Things are pretty depressing at the moment. The latest political fiasco in the UK makes me realise I have little in common with the majority of the British voting public, and I’m starting to think I have little in common with a lot of people working in the technology industry.

I was listening to one of the politicians in a northern constituency that recently elected a conservative MP for the first time in ages. One of the first things she said was, “We need more investment in the north, like a focus on our high streets”. Well, I agree entirely with the first part of the sentence, but the second part reeks of living in the past. This is a classic case of not understanding how the world has changed. I come from a time when going down to the market or the local high street was the way we shopped. Now I buy almost everything off the internet. Judging by my nephews and their friends, this is the norm, but I suspect it’s not so normal for lots of people in my age bracket and above, who haven’t moved with the times. Unfortunately, those are a big chunk of the voting public, who are looking to the past for inspiration.

I try to surround myself with people who give a crap and are focused on change, so a lot of the time I’m in this echo chamber of “progress”, but I don’t think these values are shared by our industry as a whole. Why? Because I think a lot of people higher up in the chain of command just don’t get it. They either come from a time which was “pre-technology”, or they have not progressed from “their days of technology”. They are the technology equivalent of the people shopping on the high street. With that type of people in control, progress is stalled.

Cloud deniers are the climate change deniers of our industry. We have a big problem with climate change, but people don’t want to change their lives, which is the only way we are going to fix things. Likewise, lots of people are in cloud denial, but the cloud is the only way a lot of medium-sized businesses will be able to fix their issues. The cloud was originally marketed as being cheaper. It’s not. It definitely costs more money, but if you embrace it and use it to your advantage it can deliver more value than you will ever get on-prem. Replicating your data centre in the cloud in an Infrastructure as a Service (IaaS) manner is a cloud failure in my opinion. I’m fine with it if this is a stepping stone, but if you think this is the final goal you’ve already failed. You are getting little in the way of benefits and all of the costs and hassles. Instead you need to focus on platforms, which bring something new to the table. Linking services together brings new opportunities to generate more value from your data. It might cost more, but if you can leverage it to add more value, then you are still winning. If you are being driven by people who are stuck in an infrastructure frame of mind, the potential value is not even recognised, let alone on the road-map.

The “return of the local high street” will only be possible if the local high street offers something new and unique to the consumer. What is that? I guess nobody knows as nobody has been able to do it successfully. I’m not sure a bunch of charity shops is what I consider “the future”. Likewise, those people who are doubling down on their on-prem stuff, or even talking of moving back from the cloud to on-prem need to show the value add of doing that. If you focus purely on numbers, then it’s possible you can move your crappy IaaS from the cloud to on-prem and “save some money”, but at what cost? You will be stuck in the past forever. Many of the interesting services out there will *never* be available on-prem. They just won’t! Even if you were to make the move, you can’t do things the way you used to. Waiting weeks/months for a new service is a thing of the past. If you haven’t already automated that on-prem so it happens in minutes, you have already failed. To automate that yourself will require engineers that come at a price, and the people who are into that stuff are probably not going to be interested in working in your crappy backwater.

I’m not suggesting we completely forget the past, but if you are going to focus on it, or treat it as some utopian goal, you are doomed to failure. Humans have to progress or die. We can do that in a way that harms everything around us, or we can be sensitive to our impact, but regardless of which approach we take, forward is the only way!

Cheers

Tim…

PS. I’m sorry if this post sounds really negative, but I can’t help thinking people of my generation and older are robbing the future from those who come after us.

Update: Based on comments from Twitter, I thought it was worth addressing some things.

  • When I talk about cloud, I am not talking about a specific provider. I am talking about whoever provides the service you need.
  • When I talk about a move to the cloud, I am not suggesting blindly moving to the cloud without any planning. It’s not magic.
  • I’m not talking about moving to the cloud if that means a degradation in your service or functionality.
  • I am suggesting that for many companies there are services you can simply not build and support on-prem.
  • I do believe that the cloud is *often* an easier place to try things out. I did a POC of something the other day for less than a dollar. That would have cost hundreds of pounds in staffing costs alone on-prem in my company.
  • I think many of the negative cloud comments or demands for additional clarifications when discussing cloud act as a distraction from the message, and are used by others as a convenient excuse not to do anything. I understand, but most people are not willing to change, so giving them an excuse not to do anything is not what we need.


Installing Microsoft SQL Server 2019 on Linux, Part II

This is a two-part blog post; the first in the series can be found on the Microsoft SQL Server blog here.

The recently released SQL Server 2019 on Linux includes several updated features, among them replication, support for the Microsoft Distributed Transaction Coordinator, and Change Data Capture (CDC). If you’re running SQL Server 2017 on Linux, these are great reasons to upgrade to SQL Server 2019 to take advantage of these updates.

Microsoft SQL Server 2019 was subjected to significant testing and is supported on several Linux distributions: Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu.

Note: Linux running on Windows 10 isn’t a supported platform for SQL Server.

System Requirements for Microsoft SQL Server 2019 on Linux

Like SQL Server on Windows, there are minimum requirements for running SQL Server 2019 on Linux:

  • 2 GB of memory
  • XFS or EXT4 file system
  • 6 GB of disk space
  • 2 GHz processor
  • 2 processor cores or 4 virtual CPU
  • x64-compatible processor

Network File System (NFS) remote shares (mounted file systems) have significant benefits, allowing for file system sharing between multiple hosts. For SQL Server 2019 on Linux, NFS version 4.2 or higher is recommended, as it includes support for sparse file creation and fallocate, features associated with modern file systems.

When using NFS, remember to keep only your data files (those commonly located in /var/opt/mssql) on the shared file system, and not the installation (bin) files for SQL Server. When configuring the NFS client, be sure to use the ‘nolock’ option when mounting the remote share. The reason for disabling file locking is that a relational database engine has its own locking mechanism for files; if the NFS file system were to implement an underlying lock on the files outside of the relational database, it’s easy to see how problems could occur.

Installing SQL Server

Linux uses repositories to update the OS and install software packages. Microsoft maintains package repositories for installing SQL Server and supports installation via the most common native package managers per distribution, such as yum, zypper, and apt-get.  For the latest installation information, you can refer to the Microsoft SQL Server 2019 on Linux step-by-step instructions in the quickstart documentation for the appropriate Linux distribution.

Update or upgrade SQL Server

To update the mssql-server package to the latest release, commands are run from the command line to first download the latest updates for the Linux distribution, then install or replace the binaries located under the /opt/mssql/ directory.

RHEL:
sudo yum update
sudo yum install -y mssql-server

SLES:
sudo zypper --gpg-auto-import-keys refresh
sudo zypper install -y mssql-server

Ubuntu:
sudo apt-get update
sudo apt-get install mssql-server

 

To acquire SQL Server 2019 for Linux, you must change the repository being used before you attempt to run the installation steps.  Once the repository is pointing to the updated location, the update and installation commands can be executed to upgrade your version of SQL Server to version 2019.

Tips and Tricks to Remember with SQL Server 2019 Installations

 

  • Verifying which version and edition of SQL Server on Linux you’re running can be done with the sqlcmd tool, by running a T-SQL command that displays the pertinent version information (an alternative query sketch follows this list):
sqlcmd -S localhost -U SA -Q 'select @@VERSION'
  • If your Linux machine doesn’t have access to the online repositories documented in the quickstarts, you can download the installation packages directly and install SQL Server 2019 offline. To perform this, see the package download links in the Release Notes and review the instructions for offline installations.
  • Performing unattended installations of SQL Server 2019 on Linux can lessen demands on technical resources. Instructions on how to do this can be found online in Microsoft’s documentation.
  • Along with SQL Server 2017, there are numerous other versions of SQL Server to consider upgrading to SQL Server 2019. There are several options for migrating existing databases in older versions of SQL Server from Windows to Linux.
  • Use the SQL Server Migration Assistant (SSMA) to migrate data from Oracle and other database engines to SQL Server on Linux, giving you the same powerful OS along with the powerful new features of SQL Server 2019.  SSMA supports migration from several types of database engines, including DB2, MySQL, Oracle, and SAP ASE, with their additional extension packs.
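
As a hedged follow-up to the first tip above (my own sketch, not from the original article), the same information can be returned as discrete values rather than a banner by querying SERVERPROPERTY through sqlcmd:

-- Sketch only: return version, update level and edition as separate columns.
SELECT SERVERPROPERTY('ProductVersion')     AS product_version,
       SERVERPROPERTY('ProductUpdateLevel') AS update_level,
       SERVERPROPERTY('Edition')            AS edition;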

Armed with this information and these tips and tricks, you’re now ready to install Microsoft SQL Server 2019 on Linux. To learn more about what you can do with Microsoft SQL Server 2019, check out the free Packt guide Introducing Microsoft SQL Server 2019. If you’re ready to jump to a fully managed cloud solution, check out the Essential Guide to Data in the Cloud.

 




Tips'n'tricks: finding the (injected) private key pair used in Vagrant boxes

In an earlier article I described how you could use SSH keys to log into a Vagrant box created by the Virtualbox provider. The previous post emphasised my preference for using custom Vagrant boxes and my own SSH keys.

Nevertheless there are occasions when you can’t create your own Vagrant box, and you have to resort to the Vagrant insecure-key-pair-swap procedure instead. If you are unsure about these security related discussion points, review the documentation about creating one’s own Vagrant boxes (section “Default User Settings”) for some additional background information.

Continuing the discussion from the previous post, what does a dynamically injected SSH key imply for the use with the SSH agent?

Vagrant cloud, boxes, and the insecure key pair

Let’s start with an example to demonstrate the case. I have decided to use the latest Ubuntu 16.04 box from HashiCorp’s Vagrant cloud for no particular reason. In hindsight I should have gone for 18.04 instead, as it’s much newer. For the purpose of this post it doesn’t really matter though.

$ vagrant up ubuntu
Bringing machine 'ubuntu' up with 'virtualbox' provider...
==> ubuntu: Importing base box 'ubuntu/xenial64'...
==> ubuntu: Matching MAC address for NAT networking...
==> ubuntu: Checking if box 'ubuntu/xenial64' version '20191204.0.0' is up to date...
==> ubuntu: Setting the name of the VM: ubuntu
==> ubuntu: Fixed port collision for 22 => 2222. Now on port 2200.
==> ubuntu: Clearing any previously set network interfaces...
==> ubuntu: Preparing network interfaces based on configuration...
    ubuntu: Adapter 1: nat
    ubuntu: Adapter 2: hostonly
==> ubuntu: Forwarding ports...
    ubuntu: 22 (guest) => 2200 (host) (adapter 1)
==> ubuntu: Running 'pre-boot' VM customizations...
==> ubuntu: Booting VM...
==> ubuntu: Waiting for machine to boot. This may take a few minutes...
    ubuntu: SSH address: 127.0.0.1:2200
    ubuntu: SSH username: vagrant
    ubuntu: SSH auth method: private key
    ubuntu: 
    ubuntu: Vagrant insecure key detected. Vagrant will automatically replace
    ubuntu: this with a newly generated keypair for better security.
    ubuntu: 
    ubuntu: Inserting generated public key within guest...
    ubuntu: Removing insecure key from the guest if it's present...
    ubuntu: Key inserted! Disconnecting and reconnecting using new SSH key...
==> ubuntu: Machine booted and ready!
==> ubuntu: Checking for guest additions in VM...
    ubuntu: The guest additions on this VM do not match the installed version of
    ubuntu: VirtualBox! In most cases this is fine, but in rare cases it can
    ubuntu: prevent things such as shared folders from working properly. If you see
    ubuntu: shared folder errors, please make sure the guest additions within the
    ubuntu: virtual machine match the version of VirtualBox you have installed on
    ubuntu: your host and reload your VM.
    ubuntu: 
    ubuntu: Guest Additions Version: 5.1.38
    ubuntu: VirtualBox Version: 6.0
==> ubuntu: Setting hostname...
==> ubuntu: Mounting shared folders...
    ubuntu: /vagrant => /home/martin/vagrant/ubunutu 

This started my “ubuntu” VM (I don’t like it when my VMs are called “default”, so I tend to give them better designations):

$ vboxmanage list vms | grep ubuntu
"ubuntu" {a507ba0c-...24bb} 

You may have noticed that 2 network interfaces are brought online in the output created by vagrant up. This is done to stay in line with the story of the previous post and not something that’s strictly speaking necessary.

The key message in the context of this blog post, found in the logs, is this:

    ubuntu: SSH auth method: private key
    ubuntu: 
    ubuntu: Vagrant insecure key detected. Vagrant will automatically replace
    ubuntu: this with a newly generated keypair for better security.
    ubuntu: 
    ubuntu: Inserting generated public key within guest...
    ubuntu: Removing insecure key from the guest if it's present...
    ubuntu: Key inserted! Disconnecting and reconnecting using new SSH key... 

As you can read, the insecure key was detected and replaced. But where can I find the newly generated key?

Locating the new private key

This took me a little while to find out, and I’m hoping this post saves you a minute. The key information (drum roll please) can be found in the output of vagrant ssh-config:

$ vagrant ssh-config ubuntu
Host ubuntu
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL 

This contains all the information you need to SSH into the machine! It doesn’t seem to print information about the second NIC though, but that’s ok as I can always look at its details in the Vagrantfile itself.

Connection!

Using the information from above, I can connect to the system using either port 2200 (forwarded on the NAT device), or the private IP (which is 192.168.56.204 and has not been shown here):

$ ssh -p 2200 \
> -i /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key \
> vagrant@localhost hostname
ubuntu

$ ssh -i /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key \
> vagrant@192.168.56.204 hostname
ubuntu 

This should be all you need to get cracking with the Vagrant box. But wait! The full path to the key is somewhat lengthy, and that makes it a great candidate for storing it with the SSH agent. That’s super-easy, too:

$ ssh-add /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key
Identity added: /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key (/home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key)

Apologies for the formatting. But it was worth it!

$ ssh vagrant@192.168.56.204 hostname
ubuntu

That’s a lot less typing than before…

By the way, it should be easy to spot this key in the output of ssh-add -l as it’s most likely the one with the longest path. If that doesn’t help you identify the key, ssh-keygen -lf /path/to/key prints the key’s fingerprint, for which you can grep in the output of ssh-add -l.

Have fun!

dbca silent mode – Windows

Just a quick tip that often catches me out. If you are like me, you have long since tired of clicking Next, Next, Next, … through the GUI when you want to quickly create a database. Many people work around this by storing a set of database creation scripts. However, you can do even better. The Database Configuration Assistant (dbca) can also be used at the command line and in silent mode.

On Windows, this is the error I commonly get when using dbca at the command line


C:\oracle\product\19\bin>dbca -silent -createDatabase  -templateName General_Purpose.dbc  -gdbname db19x ...
[FATAL] [DBT-50000] Unable to check for available memory.

There is nothing wrong with the tool, and nothing wrong with Windows. In this case, it is a PEBKAC moment – all you need to do is ensure (as per the docs!) that you always create your databases with a command prompt that has been opened with “Run as Administrator”. Once you’ve done that, all should be fine.


C:\oracle\product\19\bin>dbca -silent -createDatabase  -templateName General_Purpose.dbc  -gdbname db19x ...
Prepare for db operation
8% complete
31% complete
Creating and starting Oracle instance
...

If you haven’t used dbca at the command line, you should check it out. Databases are now trivial to create with a one line command.