Oakies Blog Aggregator

Oracle Database 18c Indexing Related New Features (New Angels of Promise)

Although the recently released Oracle 18c Database is really just a renamed 12c Release 2 under the covers, there are a few little features and enhancements that are of interest from an indexing perspective. These include: Memory Optimized Rowstore; Scalable Sequences; Oracle Text indexing enhancements, such as automatic background index maintenance and new optimize index options; JSON Search Index […]

Duplicating a Database on ODA


Every so often, we get a request to duplicate a database for one of our customers using an Oracle Database Appliance (ODA). The process for doing that is relatively straightforward, but there are a couple of nuances along the way so I thought I’d write it up as a blog post in case it’s of use to others. Obviously, I have obfuscated any customer-specific information to protect their identity.


The first nuance is to understand what database is being used as the source for the clone. Generally, a request for cloning will be something like this:

“We are creating a new environment which needs a new database to be set up. Please copy P1_SRV_T and restore as P1_SRV_F”.

The important thing to understand here is that what the customer provides is what they know – how they connect to the database. These are service names, not database names, so you need to be able to translate that into the relevant entities needed for cloning.

To do this, we have created two scripts in this environment that are invaluable – dbstatus (database status) and sstatus (service status). As you can guess from the output below, these are wrappers around various srvctl commands. Running dbstatus shows the following (output restricted to the part we require, for brevity):

[oracle@oda1 ~]$ dbstatus
DB unique name: P1SRVT
Instance ABCSRVT is running on node oda2

Likewise, running sstatus returns:

[oracle@oda1 ~]$ sstatus
DB unique name: P1SRVT
Service P1_SRV_T is running on instance(s) ABCSRVT
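The wrapper scripts themselves aren’t shown in the post; as a rough sketch (an assumption, not the actual dbstatus/sstatus code), they could be as simple as:

#!/bin/bash
# dbstatus (sketch): loop over every database registered in Grid Infrastructure
# and print its unique name and instance status
for db in $(srvctl config database); do
        echo "DB unique name: $db"
        srvctl status database -db "$db"
done
# sstatus would do the same, but calling: srvctl status service -db "$db"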

From this information, you can see that the P1_SRV_T service referred to as the source for the first database to be cloned is running on oda2, with the instance name ABCSRVT. By logging onto that machine, we can also determine the database name:

[oracle@oda2 ~]$ . oraenv
The Oracle base has been set to /u01/app/oracle
[oracle@oda2 ~]$ sqlplus / as sysdba

SQL*Plus: Release Production on Wed Feb 21 13:30:04 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options

SQL> SELECT name FROM v$database;

NAME
---------
ABCSRVT

So, to summarize, we now have:

Source DB name – ABCSRVT
Source DB Unique Name – P1SRVT
Source Service Names – P1_SRV_T
Source Server – oda2

From that information, and the original request, we can surmise the target information to be:

Target DB name – ABCSRVF
Target DB Unique Name – P1SRVF
Target Service Name – P1_SRV_F
Target Server – oda2

Finally, we know that in this environment, the backups are stored under /backup/[DATABASE_NAME], so for this exercise, the backup we use to duplicate the database is stored under /backup/ABCSRVT.

That is all the information we need to proceed with the cloning request.

Creating the Target Database

To create the target, we will be using the oakcli command. This provides the added benefit of creating the various dump directories etc. To use oakcli, you need to be logged on to node 1 of the ODA as the root user:

[root@oda1 ~]# oakcli create database -db ABCSRVF -oh OraDb12102_home3 -params srv
INFO: 2018-02-20 15:05:45: Please check the logfile '/opt/oracle/oak/log/oda1/tools/' for more details

Note in this command that the ORACLE_HOME has already been created, and we are passing a parameter file, srv. This actually maps to a file called srv.conf located under /opt/oracle/oak/install/dbconf. In this case, the file contains settings for parameters such as the database block size, language, character set and so forth. You can find more information on what can be included in this file in the Oracle documentation. Next, you will be prompted for the SYSASM password. In our scenario, we use the highly secured password “welcome1”.
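The excerpt ends at the database creation step. Purely as an orientation sketch (an assumption on my part, not the author’s documented procedure), a backup-based duplicate from the backup location identified earlier could then look something like this, with the freshly created ABCSRVF instance restarted in NOMOUNT:

export ORACLE_SID=ABCSRVF
rman auxiliary / <<'EOF'
duplicate database to ABCSRVF
  backup location '/backup/ABCSRVT'
  nofilenamecheck;
EOF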


18c, Cloud First and Cloud Only features: think differently

Remember the times when the Oracle Software features were the same on all platforms? Where Oracle databases could be ported to any relevant platform? Where we were able to try any feature, freely, by downloading the latest release software? Now we need to think differently. Because:

  • The new software is released on Cloud first
  • The major new features will never be available on-premises
  • The Cloud here means the Oracle Cloud – not AWS, not Azure, not Google, not your local IaaS providers
  • Some new features are extended to on-premises for Oracle hardware only (Exadata, ODA)
  • All trial environments are paid services (but you can get free credits), though this may change with Oracle XE 18c

And you are concerned because if you start your new developments on the Oracle Cloud, or simply train yourself on new features, you may rely on features that you will never have on-premises. In my opinion, it makes sense for a startup, or a new project, to start development and early production on the Cloud. However, there will probably be a point where the cost optimization will involve on-premises servers, or IaaS, or different Cloud providers. Then the features you used may not be available.

Another concern is financial: when justifying to your CFO the cost of the 22% Support and Software Updates, you may list all the new features. But be careful. Most of the new features come with additional options, or will not be available outside of the Oracle Cloud PaaS.

If you tried the Oracle 18c release on the Oracle Cloud, you may have seen some additional information in the alert.log:

Capability Type : Network
capabilities requested : 1 detected : 0 Simulated : 0
Capability Type : Runtime Environment
capabilities requested : 400000FF detected : 4 Simulated : 0
Capability Type : Engineered Systems
capabilities requested : 3 detected : 0 Simulated : 0
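If you want to check this on your own instance, these lines are easy to pull out of the alert.log – the path below is just the usual diagnostic destination layout, so adjust it to your environment:

grep -A1 "Capability Type" \
  $ORACLE_BASE/diag/rdbms/${ORACLE_SID,,}/$ORACLE_SID/trace/alert_$ORACLE_SID.log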

So, it seems that Oracle is checking the capabilities of the platform to enable or disable some features. When you are not on the right one, you may encounter this kind of error, which is new in 18c:

[oracle@DBaaS18c ~]$ oerr ORA 12754
12754, 00000, "Feature %s is disabled due to missing capability %s."
// *Document: NO
// *Cause: This feature requires specific capabilities that were not present in the current database instance.
// *Action: Contact Oracle Support Services to enable this feature.

or maybe:

12755, 00000, "Feature %s is disabled due to unsupported capability."
// *Document: NO
// *Cause: This feature requires specific capabilities that were not supported
// in the current database instance.
// *Action: Contact Oracle Support Services to enable this feature.

For the moment, the on-premises binaries are available for Exadata only. But the Licensing documentation already gives you an idea: the following new features will not be available on-premises.

All the multitenant new features are for Oracle Cloud or Oracle Engineered systems only:

  • CDB Fleet Management
  • PDB Snapshot Carousel
  • Refreshable PDB switchover
  • Keystore for Each Pluggable Database

If you are on your own servers, or on one of the major cloud providers, you do not benefit from the latest software updates, even if you pay 22% of your license cost each year, and even on platforms where the core factor is at its maximum. You have support, and patches, but only a limited set of new features.

If you do not have the Active Data Guard option, you cannot benefit from most of the new features of the last releases. And buying this option can be expensive if you are on a ULA (because you will buy it for all processors), or on a non-Oracle cloud (because of the core factor), and even there some features will not be available. The latest one, Oracle Data Guard Automatic Correction of Non-logged Blocks, is available on the Oracle Cloud only, or on Exadata/ODA. It is not a big problem, as you can include this recovery after your nologging load, but it is important to know it.

Note that with this new release, some features also disappear. Not only deprecated. Not only desupported. But also removed. Oracle Change Data Capture was desupported in 12c, and if you look at 18c you will see that it has been removed. And 18c is the last version with Oracle Streams: as mentioned in the documentation, you need to buy GoldenGate.

This looks like bad news in 18c, but consider it as a new patchset on 12cR2. Remember that 12cR2 brought amazing features to all platforms and all editions, such as the online clone or move of pluggable databases. The important thing is to be informed and to think differently than we used to, back when Oracle databases were portable to all platforms. Be careful with features that will not be available on all platforms. Consider the costs correctly. And also look at all those features that are available to everybody and are probably not used enough. The best way is to design the application to use the database efficiently (processing data in the database, lowering the roundtrips and context switches) on a limited number of CPU cores. Then, all those options or cloud credits will not be as expensive as you may think. Nobody likes vendor lock-in, but it may be the most cost-efficient solution.


The article 18c, Cloud First and Cloud Only features: think differently appeared first on Blog dbi services.


In the spirit of Cary Millsap’s comment: “The fastest way to do anything is to not do it at all”, here’s my take (possibly not an original one) on solving problems:

“The best time to solve a problem is before it has happened.”

I spend quite a lot of my “non-contact” time thinking about boundary cases, feature collisions, contention issues, and any other things that could go wrong when you start to implement real systems with (new) Oracle features. The benefit of doing this, of course, is that when I’m looking at a client’s system I can often solve problems because I recognise symptoms that I’ve previously created “in the lab”. The strange thing about this is that there have been times when I’ve pushed Oracle to a breaking point, documented it, and then dismissed the threat because “no one would do that in real life” only to find that someone has done it in real life.

All this is just a preamble to a demonstration of a threat with a terrific feature that is just beginning to gain greater acceptance as a solution to some interesting problems – and the demonstration is going to exaggerate the problem to a level that (probably) won’t appear in a production system. The driving example appeared as a question on the OTN/ODC database forum:

“I need customers who have done a transaction in September but not in October.”

There are obviously many ways to address this type of requirement (my first thought was to use the MINUS operator), and a few questions you might ask before trying to address it, but the OP had supplied some data to play with, which consisted of just a few rows of a table with three columns and data restricted to just one year, and one solution offered was a very simple query using the 12c feature match_recognize():

create table test_table  
  ( t_id     number,   -- trans-id  
    cust_id  number,   -- customer id  
    trans_dt date      -- transaction date  
  ) ;  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (1,100,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (2,100,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (3,200,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (4,300,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (5,400,to_date('12-JAN-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (6,500,to_date('12-OCT-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (7,500,to_date('12-MAR-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (8,600,to_date('12-SEP-17','DD-MON-RR'));  
Insert into TEST_TABLE (T_ID,CUST_ID,TRANS_DT) values (9,600,to_date('12-JUL-17','DD-MON-RR'));  


select * from test_table
match_recognize
(
  partition by cust_id
  order by trans_dt
  pattern( x+ y* $)
  define
    x as extract(month from trans_dt)  = 9,
    y as extract(month from trans_dt) != 10
);

The obvious benefit of this solution over a solution involving a set-wise MINUS is that it need only scan the data set once (whereas the MINUS strategy will be scanning it twice with a select distinct in each scan) – but it’s a solution that is likely to be unfamiliar to many people and may need a little explanation.
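For comparison, a minimal sketch of the MINUS idea mentioned above (not the OP’s query, and ignoring the year for brevity) might be:

select distinct cust_id from test_table where extract(month from trans_dt) = 9
minus
select distinct cust_id from test_table where extract(month from trans_dt) = 10;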

The partition by cust_id order by trans_dt means we sort the data by those two columns, breaking on cust_id. Then, for each cust_id, we walk through the data looking for a pattern which is defined as: “one or more rows where the month is September, followed by zero or more rows where the month is NOT October, followed by the end of the set for that customer”. The SQL leaves many details to default, so the result set is just the cust_id column and only one row per occurrence of the pattern (which, given the data set, can occur at most once per customer).

For a cust_id that shows a matching pattern the work we will have done is:

  • Walk through rows for Jan to Aug until we reach the first September – which is the start of pattern
  • Keep on walking through to the last of the Septembers – which is a partial match
  • One of:
      • Walk through zero rows of November and December and reach the end of cust_id
      • Walk through one or more rows of November and/or December then reach the end of cust_id
  • Record the end of pattern by reporting one row
  • Move on to next cust_id

The excitement starts when we think about a cust_id that doesn’t have a matching pattern – and for that I’m going to generate a new, extreme, data set.

rem     Script:         match_recognize_07.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        99                              cust_id,
        to_date('01-Sep-2017')          trans_dt,
        lpad(rownum,1000,'0')           padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

update t1
set     trans_dt = to_date('01-Oct-2017','dd-mon-yyyy')
where   rownum = 1
;

commit;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

select  *
from    (
        select
                t1.*,
                extract(year from trans_dt) yr, 
                extract(month from trans_dt) mth
        from
                t1
        )
match_recognize
(
        partition by cust_id
        order by trans_dt
        measures
                padding as t1_padding
        pattern( x+  y*  $ )
        define
                x as mth = 9,
                y as mth != 10
)
;

I’ve moved the calculation of month number from the define clause into an in-line view purely to make the match_recognize() clause a little tidier.

I’ve created a table with just one customer with 100,000 transactions on 1st September 2017, then I’ve updated one row from September to October. Thanks to that one row Oracle is not going to be able to find the requested pattern. I’ve added a padding column of 1,000 characters to the table and included it in the measures that I want to select, so Oracle will have to sort roughly 100MB of data (100,000 rows at roughly 1KB per row) before it starts walking the data to find matches – and, though it’s not visible in the script, the workarea settings mean the session won’t be allowed to expand its PGA to accommodate the whole 100MB.
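Those workarea settings aren’t shown in the post; a hedged sketch of one way to pin the session to a small sort workarea (not necessarily the settings used in the original test) would be:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 10485760;   -- 10MB, far less than the ~100MB sort
alter session set hash_area_size = 10485760;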

Test 1 – comment out the update and see how long it takes to produce a result: 0.67 seconds, and the padding value reported was the last one from the pattern.
Test 2 – put the update back in place and try again:

After running for 46 seconds with no result and interrupting the query these are some figures from a snapshot of the session stats:

Name                                                 Value
----                                                 -----
CPU used when call started                           3,662
DB time                                              3,711
user I/O wait time                                   1,538
consistent gets                                     14,300
physical reads direct                            1,298,939
physical read IO requests                          736,478
physical read bytes                         10,640,908,288      
physical writes                                     25,228
physical writes direct                              25,228
physical reads direct temporary tablespace       1,298,939
physical writes direct temporary tablespace         25,228
table scan rows gotten                             100,000
table scan blocks gotten                            14,286

  • I’ve scanned a table of 14,286 blocks to find 100,000 rows.
  • I’ve sorted and spilled to disc, using roughly 25,000 blocks of direct path writes and reads to do the sort.
  • Then I’ve spent the rest of the time burning up CPU and reading 1.27 million blocks from the temporary tablespace trying to find a match

The way that basic pattern matching works on a match failure is to go back to the row after the one where the current match attempt started, and begin all over again. So in this example, after dumping 100MB of Septembers to temp, Oracle started at row 1, read 99,999 rows, then found the October that failed the match; so it went to row 2 [ed: doing some very expensive back-tracking: see comment #2 from Stew Ashton], read 99,998 rows, then found the October that failed the match; so it went to row 3, and so on. Every time it went back to (nearly) the beginning it had to start re-reading that 100,000 rows from temp because the session wasn’t allowed to keep the whole 100MB in memory.

You need to avoid defining a pattern that has to scan large volumes of data to identify a single occurrence of the pattern if the matching process is likely to fail. Even if you can keep the appropriate volume of data in memory for the entire time and avoid a catastrophic volume of reads from the temporary tablespace you can still see a huge amount of CPU being used to process the data – when I reduced the table from 100,000 rows to 10,000 rows it still took me 99 CPU seconds to run the query.


The 12c match_recognize() is a terrific tool, but you must remember two important details about the default behaviour when you think about using it:

  • You will sort a volume of data that is the number of input rows multiplied by the total length of the measures/partition output.
  • If you have a long sequence of rows that ends up failing to match a pattern Oracle goes back to the row after the start of the previous match attempt.

With the usual proviso that “large”, “small” etc. are all relative: keep the data volume small, and try to define patterns that will be short runs of rows.

Do note, however, that I engineered this example to produce a catastrophe. There are many non-default actions you can choose to minimise the workload you’re likely to produce with match_recognize(), and if you just spare a little time to think about worst case events you probably won’t need to face a scenario like this in a real production environment.

See also:

Part 6 (which includes a list of earlier installments) of an introductory series to match_recognize() by Keith Laker.

A pdf file of Keith Laker’s presentation on match_recognize(), including some technical implementation details.


A free persistent Google Cloud service with Oracle XE

In a previous post I’ve listed several free online services which run an Oracle XE so that you can test your SQL easily. You may want to use Oracle XE further, with full access to the database and its host, and still from a web browser. You probably have a Google account. Then you also have a virtual machine on the Google Cloud (0.5 vCPU / 1.70 GB RAM, boostable to 1 vCPU / 3.75 GB) and 5 GB of persistent storage (as long as you have used it within the previous 120 days). Just try the Google Cloud Shell.
In this post, I explain how to install Oracle XE there.

First, you need to download Oracle XE. You do that on your laptop, to upload it to the Google Cloud Shell. For legal reasons, there is no automated way to download it with wget: you have to manually accept the OTN License Terms on the download page and choose ‘Oracle Database Express Edition 11g Release 2 for Linux x64’.

You can try to upload it to the Cloud Shell directly (menu on top right – upload file) but I had problems with the size of the file, so I’ve split it into two files:

split -b 150M oracle-xe-11.2.0-1.0.x86_64.rpm.zip

You should have ‘split’ even on Windows (in the Ubuntu Bash Shell), but you can also use any other tool; 7-zip can do that.

I uploaded the two files:

Now on the Google Cloud shell, concatenate the files back to the .zip:

franck_pachot@cloudshell:~$ cat xa* > oracle-xe-11.2.0-1.0.x86_64.rpm.zip

Unzip it:

franck_pachot@cloudshell:~$ unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
creating: Disk1/
creating: Disk1/upgrade/
inflating: Disk1/upgrade/gen_inst.sql
creating: Disk1/response/
inflating: Disk1/response/xe.rsp
inflating: Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm

This .zip contains a .rpm but we are on Debian in the Google Cloud Shell. In addition to that, I’ll not follow the standard installation of Oracle XE because only my $HOME filesystem is persistent, so I want everything there. I need rpm2cpio to extract from the .rpm, and I’ll need libaio1 to install Oracle:

franck_pachot@cloudshell:~$ sudo apt-get -y install rpm2cpio libaio1

Here is the extraction:

franck_pachot@cloudshell:~$ rpm2cpio Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm | cpio -idmv

This extracted u01, etc and usr into my $HOME directory, and I’ll leave the Oracle Home there.
I can remove the intermediate files:

franck_pachot@cloudshell:~$ rm -rf xa? Disk1

The Oracle XE deployment contains a database-creation script which will create the XE database. You don’t have dbca here, and you don’t have templates: Oracle XE is built to be small.

export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin

This takes time: create database, catalog, catproc… and then you have your database.

The listener is not started. We need to create the directory for the log, and to define a listener.ora to listen on the default port:

mkdir -p ./u01/app/oracle/product/11.2.0/xe/network/log
echo "LISTENER=(DESCRIPTION_LIST=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = = 1521))))" > ./u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
lsnrctl start
sqlplus sys/oracle as sysdba <<<'alter system register;'

You should see the XE service registered here:

franck_pachot@cloudshell:~$ lsnrctl status
LSNRCTL for Linux: Version - Production on 25-FEB-2018 23:01:40
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Version TNSLSNR for Linux: Version - Production
Start Date 25-FEB-2018 23:00:01
Uptime 0 days 0 hr. 1 min. 38 sec
Trace Level off
Security ON: Local OS Authentication
Listener Parameter File /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File /home/frank_pachot/u01/app/oracle/product/11.2.0/xe/network/log/listener.log
Listening Endpoints Summary...
Services Summary...
Service "XE" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

Note that you cannot access your Google Cloud Shell from outside, so you can only connect locally. But having a listener and connecting through services is always a good idea.

If your session is inactive, you may lose the connection and even have the VM stopped. But your $HOME will still be there when you restart, so you can set the .profile to set the correct environment and start the listener and database if not already running:
cat >> ~/.profile<<'END'
export ORACLE_HOME=$HOME/u01/app/oracle/product/11.2.0/xe
echo "$PATH" | grep "$ORACLE_HOME" || export PATH=$PATH:$ORACLE_HOME/bin
ps -edf | grep [t]nslsnr || lsnrctl start
ps -edf | grep [s]mon_XE || sqlplus sys/oracle as sysdba <<< startup
END

I don’t use /etc/oratab here because it is outside of the persistent area.

We cannot connect with ‘/ as sysdba’ because we are not in the ‘dba’ group. I don’t think we can change this in Oracle XE. Of course, we can sudo to root and add the group, but that will not be persistent. However, there is no need for it: the password for SYS is “oracle” and you can create all the users you want. The database, being stored under $HOME, is persistent.
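For example, a minimal sketch of creating a working schema (the user name and password here are just placeholders):

sqlplus sys/oracle as sysdba <<'SQL'
create user demo identified by demo quota unlimited on users;
grant create session, create table, create view to demo;
SQL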

Here are my datafiles:

franck_pachot@cloudshell:~$ rman target sys/oracle
Recovery Manager: Release - Production on Sun Feb 25 21:28:00 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: XE (DBID=2850165315)
RMAN> report schema;
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name XE
List of Permanent Datafiles
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 280 SYSTEM *** /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/system.dbf
2 190 SYSAUX *** /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/sysaux.dbf
3 235 UNDOTBS1 *** /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/undotbs1.dbf
4 100 USERS *** /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/users.dbf
List of Temporary Files
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 20 TEMP 500 /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/temp.dbf

You find the alert.log under $ORACLE_HOME/log/diag (the datafiles, as seen above, are all under $ORACLE_HOME/dbs):

franck_pachot@cloudshell:~$ tail $HOME/u01/app/oracle/product/11.2.0/xe/log/diag/rdbms/xe/XE/trace/alert_XE.log
Thread 1 advanced to log sequence 17 (LGWR switch)
Current log# 2 seq# 17 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log2.dbf
Sun Feb 25 22:01:05 2018
Shared IO Pool defaulting to 44MB. Trying to get it from Buffer Cache for process 2875.
Sun Feb 25 22:09:38 2018
Thread 1 advanced to log sequence 18 (LGWR switch)
Current log# 3 seq# 18 mem# 0: /home/franck_pachot/u01/app/oracle/product/11.2.0/xe/dbs/log3.dbf
Sun Feb 25 22:09:43 2018
SERVER COMPONENT id=UTLRP_BGN: timestamp=2018-02-25 22:09:43
SERVER COMPONENT id=UTLRP_END: timestamp=2018-02-25 22:09:50

The limitations and features of the Google Cloud Shell are documented by Google. In addition to the command line (through ‘tmux’, which allows you to split the screen into different panes) you have a file editor in the browser. You can also install APEX, as you have browser access to port 8080 over https (icon on top right, just before the menu).

The major limitation here comes from Oracle XE being an old version, but Oracle XE 18c should arrive this year with the latest features. Oracle XE 18c may also come with EM Express, and the Google Cloud Shell gives access to https. I just hope that there will be a small image for Oracle XE 18c, as we have only 5GB here. Maybe a docker container will be easier then, with only the database in an external volume under $HOME. We will see, but in the meanwhile, there’s already a lot we can do with Oracle XE. You can play with Backup/Recovery scenarios and you will always be able to re-create the database by running the creation script again.

Added 26-FEB-2018

As I said, this can be a good lab to practice backup/recovery scenarios, so you should run in archivelog mode:

sqlplus sys/oracle as sysdba <<SQL
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
SQL

Then, to avoid filling the recovery area, you can back up the database and archived logs frequently. You don’t need to put the backups on the persistent storage, as it is a lab.
I suggest putting the following at the end of the .profile:

rman target sys/oracle > last_backup.log <<<"set echo on; configure channel device type disk format '/var/tmp/rman_backupset_XE_%U'; configure backup optimization on; configure controlfile autobackup on; crosscheck backup; delete noprompt expired backup; backup database plus archivelog; delete noprompt obsolete;" &

This will run a backup to /var/tmp when you connect, and delete obsolete backups and expired ones (expired because the backup pieces are removed if the machine is reset after long inactivity).


The article A free persistent Google Cloud service with Oracle XE appeared first on Blog dbi services.

PDB upgrade from 12c to 18c

Oracle 18c is out, in the Oracle Cloud, and the first thing I do with a new version is testing how long it takes to upgrade a previous version PDB by unplug/plug. Faster upgrade should be the benefit of having a slim dictionary where the system objects are reduced to metadata links and data links. However, it looks like upgrading the PDB dictionary still takes the same time as upgrading the CDB$ROOT.

The idea is to create a DBaaS service with a new CDB in 18.1 and plug in a PDB coming from 12c. Actually, I’m saying 18.1 but that may be 18.0, as I’m now lost in those version numbers. The cloud service was created with an 18c version, and V$VERSION displays one number for the release and another for the version:
Connected to:
Oracle Database 18c Enterprise Edition Release - Production

My understanding is that the release number is the version of the 18c dictionary, which will need a full upgrade only for 19c, and the other number is the software version, which will be incremented by Release Updates later.
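If you want to look at the numbers on your own service, both can be queried directly (these views and columns exist in both 12c and 18c):

SQL> select banner from v$version;
SQL> select instance_name, version from v$instance;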

I have an unplugged PDB that I plug into the new CDB:
SQL> create pluggable database PDB0 using '/u01/app/temp/PDB0.pdb';
Pluggable database PDB0 created.

When I open it, I get a warning:
SQL> alter pluggable database pdb0 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
Pluggable database PDB0 altered.

SQL> select * from pdb_plug_in_violations;
---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
24-FEB-18 PM PDB0 OPTION ERROR 0 1 Database option APS mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 2 Database option CATALOG mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 3 Database option CATJAVA mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 4 Database option CATPROC mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 5 Database option CONTEXT mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 6 Database option DV mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 7 Database option JAVAVM mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 8 Database option OLS mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 9 Database option ORDIM mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 10 Database option OWM mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 11 Database option SDO mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 12 Database option XDB mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 13 Database option XML mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 OPTION ERROR 0 14 Database option XOQ mismatch: PDB installed version CDB installed version PENDING Fix the database option in the PDB or the CDB 1
24-FEB-18 PM PDB0 Parameter WARNING 0 1 CDB parameter compatible mismatch: Previous '12.2.0' Current '18.0.0' PENDING Please check the parameter in the current CDB 1
24-FEB-18 PM PDB0 VSN not match ERROR 0 1 PDB's version does not match CDB's version: PDB's version CDB's version PENDING Either upgrade the PDB or reload the components in the PDB. 4

The messages are clear: all components have a 12c dictionary and must be upgraded to an 18c one.

The PDB is opened in MIGRATE mode with only RESTRICTED sessions enabled:
SQL> show pdbs
SP2-0382: The SHOW PDBS command is not available.
SQL> pdbs
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1201448 2 11:42:25 NONE 942476327 3958500
3 CDB1PDB MOUNTED NORMAL 942476327 2 19:58:55 NONE 942476327 3958500
4 PDB0 MIGRATE YES NEW 941386968 3 20:34:50 NONE 942476327 3958500

Then, here is the upgrade for this newly plugged PDB0:

[oracle@DBaaS18c 18c]$ dbupgrade -c PDB0
Argument list for [/u01/app/oracle/product/18.0.0/dbhome_1/rdbms/admin/]
Run in c = PDB0
Do not run in C = 0
Input Directory d = 0
Echo OFF e = 1
Simulate E = 0
Forced cleanup F = 0
Log Id i = 0
Child Process I = 0
Log Dir l = 0
Priority List Name L = 0
Upgrade Mode active M = 0
SQL Process Count n = 0
SQL PDB Process Count N = 0
Open Mode Normal o = 0
Start Phase p = 0
End Phase P = 0
Reverse Order r = 0
AutoUpgrade Resume R = 0
Script s = 0
Serial Run S = 0
RO User Tablespaces T = 0
Display Phases y = 0
Debug z = 0
Debug Z = 0
VERSION: []
STATUS: [Production]

The Build number mentions 18.1 built on 03-JAN-2018

Look at the summary to see the time it takes:
Oracle Database Release 18 Post-Upgrade Status Tool 02-24-2018 21:36:5
Component Current Full Elapsed Time
Name Status Version HH:MM:SS
Oracle Server UPGRADED 00:13:37
JServer JAVA Virtual Machine UPGRADED 00:00:51
Oracle XDK UPGRADED 00:00:21
Oracle Database Java Packages UPGRADED 00:00:05
OLAP Analytic Workspace UPGRADED 00:00:11
Oracle Label Security UPGRADED 00:00:03
Oracle Database Vault UPGRADED 00:00:34
Oracle Text UPGRADED 00:00:11
Oracle Workspace Manager UPGRADED 00:00:18
Oracle Real Application Clusters UPGRADED 00:00:00
Oracle XML Database UPGRADED 00:00:49
Oracle Multimedia UPGRADED 00:01:03
Spatial UPGRADED 00:02:06
Oracle OLAP API UPGRADED 00:00:08
Upgrade Datapatch 00:00:05
Final Actions 00:00:09
Post Upgrade 00:00:02
Post Upgrade Datapatch 00:00:04
Total Upgrade Time: 00:20:47 [PDB0]
Database time zone version is 26. It is older than current release time
zone version 31. Time zone upgrade is needed using the DBMS_DST package.
Grand Total Upgrade Time: [0d:0h:21m:10s]

Here we see 18.1 but the important number is the time: 21 minutes… Once again, I see no improvement in the time to upgrade the PDB dictionary. This was on a service with 2 OCPUs, and I’ve run a whole CDB upgrade with a similar shape: the time to upgrade the CDB$ROOT is exactly the same – see the screenshot on the right.

Finally I open the PDB:

SQL> alter pluggable database pdb0 open;
Pluggable database PDB0 altered.

And check that the violations are resolved:

SQL> select * from pdb_plug_in_violations where status<>'RESOLVED';
---- ---- ----- ---- ------------ ---- ------- ------ ------ ------
24-FEB-18 PM PDB0 OPTION WARNING 0 15 Database option RAC mismatch: PDB installed version CDB installed version NULL. PENDING Fix the database option in the PDB or the CDB 4

Ok, I suppose I can ignore that as this is not RAC.

I’ve not seen a lot of differences in the dbupgrade output. There’s a new summary of versions before and after upgrade, which was not there in 12c:

DOC> DIAG OS Version: linux x86_64-linux-thread-multi 2.6.39-400.211.1.el6uek.x86_64
DOC> DIAG Database Instance Name: CDB1
DOC> DIAG Database Time Zone Version: 31
DOC> DIAG Database Version Before Upgrade:
DOC> DIAG Database Version After Upgrade:

However, be careful with this information. The OS Version is not correct:

[opc@DB ~]$ uname -a
Linux DB 4.1.12-112.14.10.el6uek.x86_64 #2 SMP Mon Jan 8 18:24:01 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

It seems that this info comes from the Perl build configuration, which reflects the OS version where the perl binaries were built…

In summary, nothing changes here about the time it takes to upgrade a PDB when plugged into a new CDB.
However, in 18c (and maybe only with later Release Updates) we should have a way to get this improved, by recording the upgrade of CDB$ROOT and re-playing a trimmed version on the PDB dictionaries, in the same way as Application Containers do for application upgrades. We already see some signs of it with the ‘_enable_cdb_upgrade_capture’ undocumented parameter and the PDB_UPGRADE_SYNC database property. This may even become automatic when the PDB is opened, with the PDB_AUTO_UPGRADE property. But that’s for the future, and not yet documented.
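A hedged way to check whether anything of that kind is already set on your CDB (the property names are taken from the paragraph above and may simply not be present yet):

SQL> select property_name, property_value from database_properties where property_name like 'PDB%UPGRADE%';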

For the moment, you still need to run a full catupgrd on each container, through catctl.pl as called by the ‘dbupgrade’ script. Here, on a 2 OCPU service, it takes about 20 minutes.


The article PDB upgrade from 12c to 18c appeared first on Blog dbi services.

ODA Lite: What is this ‘odacli’ repository?

When ODA Lite was introduced, with the ODA X6-2 S/M/L, and now with the ODA X7-2 S/M, a new ‘odacli’ command was there to manage it. It will probably replace oakcli for ODA HA as well in the future. One big difference: it uses a repository to record the configuration and the operations. I don’t really like it, because when something fails you are blocked. Oracle Support can modify the repository, but they ask for access to the machine for that, and this is not easy in secured environments. Anyway, I really don’t understand why another repository has been introduced. We already have the Oracle Inventory, the Grid Infrastructure resources, the Linux /etc files,… And now we have a closed repository which controls everything, accessible only with the very limited odacli commands, which are not the best example of automation code and error handling.

This post is about viewing what is inside. You may also want to update it in case you have a problem. I can’t tell you not to do it: this blog has readers who fixed critical issues by editing the binary data in system files, so changing some metadata in an embedded SQL database is not so dangerous. On the other hand, you take the risk of messing up everything. So better contact Oracle Support if you are not 142% sure about what you do. But when support is slow to answer, asks for remote access, or has no other solution than re-imaging the ODA, you may have to find other alternatives. Just limit yourself to what you know you can do without risk.

So, this repository is stored in an embedded JavaDB, which is, as far as I understand it, an Apache Derby database recolored in red by Oracle. There’s a JDBC driver to access it.

You find the repository on your ODA under /opt/oracle/dcs/repo – the Derby database itself is the node_0 directory in there (as used in the JDBC URL below).

You will probably copy the directory elsewhere to look at it. And you may do that with the DCS agent stopped.

SQuirreL SQL Client

I used SQuirreL SQL Client to read this database:

Download the SQuirreL SQL Client (squirrel-sql-snapshot-20180206_2232-standard.jar, from the SQuirreL website) and install it.

Download derby.jar from the Apache Derby distribution.

Run SQuirreL SQL Client, and add the derby.jar:

Connect to it. If you still have the repo at the original place, the URL is jdbc:derby:/opt/oracle/dcs/repo/node_0. There is no user and no password.
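If you prefer a command-line client to a GUI, Derby’s ij tool can open the same URL – a sketch, assuming derby.jar and derbytools.jar are sitting in the current directory:

java -cp derby.jar:derbytools.jar org.apache.derby.tools.ij <<'EOF'
connect 'jdbc:derby:/opt/oracle/dcs/repo/node_0';
show tables in APP;
EOF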

Then, in the ‘APP’ catalog, you can browse the tables:



You probably want to see the data model for those few tables. I did it on a repository. I used SchemaSpy, which is awesome because it uses the awesome Graphviz for the visual representation. If you want to do the same, here is how I did it:

export PATH=$PATH:"/cygdrive/c/Program Files (x86)/Graphviz2.38/bin"
java -jar schemaspy*.jar -t derby -db ./repo/node_0 -dp ./derby.jar -o ODAviz -u "" -cat "APP"

Here are some of the schemas generated if you want to have a look at what is stored in the ODA repository:

The schema is very simple. Only a few referential integrity constraints and very simple information.

One additional warning: modifications here are not supported by Oracle, and that may even be forbidden as the Capacity On Demand core count is also stored there.


The article ODA Lite: What is this ‘odacli’ repository? appeared first on Blog dbi services.

Scene and Cut. Training Days 2018

I’m recovering after a very successful RMOUG Training Days 2018 and now looking towards the next year.  The board elections come up in two months and I’m happy to say that the goals I hoped to achieve in my year as president, the board was able to achieve, even as I worked on the conference and started as the president of the SQL Server User Group here in Denver, all in parallel.

The primary leadership for the user group was becoming aware of what most Oracle user groups are facing: our user groups are getting older (2018 was the 29th Training Days conference), so many members are retiring, fewer companies are able to see the difference between free events and the deep technical content that is offered at an event like Training Days, and fewer new technologists are going into relational database technology.

With this change, I was seeing consistent feedback at all the events I presented at: declining attendance numbers, falling memberships, and a significant shift in what attracts people to become part of any “group”.

The Challenge

There were three main areas I wanted to address during my year as president:

  1. Spark new life into the annual Training Days conference.
  2. Move away from a hard copy Newsletter.
  3. Build long term goals for RMOUG as an organization.

It may not sound like much to take on in a year, but this organization is run by volunteers apart from our Executive Director, Peggy King, and even she has other demands on her time, making her part time, so these were some seriously huge goals.

The first goal required more than a year of planning, and I began it realizing, as a conference director after almost seven events, that the most crucial step to achieving it was having a new set of eyes on the event.  I felt that everyone, including me, needed to step back from the conference.

New Faces

The changes to the conference were going to be a monstrous undertaking and we couldn’t have done it without the incredible work by Linda Hoover and her team at EPS.  They were pivotal in negotiating the new location of the Westminster Westin Conference Center after 20 years of holding the event at the Colorado Convention Center.  Komal Goyal, who had *suffered* years with me as her mentor, took over as the new Training Days Director, and we even transitioned to new software to manage the event that included a more modernized interface.  We didn’t want to change what worked with the event, but I knew I had cut all costs from the event, offering no real perks for our members, to ensure I didn’t have to raise the prices for three years, while the costs for us to host the event were skyrocketing.  We are now back to a controllable expense, and the three-day event with 8 simultaneous sessions at the new venue was a great success.  Even with a move to a new town, our numbers stayed consistent with last year, which is an incredible achievement.  We had three times the revenue from exhibitors and we can reinvest that money towards scholarships and the future of RMOUG.

Of course, there were bumps along the way and surprises, but these were expected.  The biggest challenge was that Komal, as the managing partner of 6e Technologies, started to experience incredible business growth and this required her focus.  Needless to say, it was only right for me to take back the reins of the conference so she could focus on the important mission of her company’s success.  I’d like to say it’s because I’m such an awesome mentor, but I know it’s really because Komal has just done an incredible job making that success happen for her and her company.

Drop the Weight

The second goal focused on the quarterly newsletter, which was a $14K annual cost to the user group.  Rene Antunez, as the newsletter director, with support from Tim, has started to move us to an electronic format for the publication.

The last goal came about due to my incredible husband, Tim Gorman, and the dedication of our executive director, Peggy King.  They’d already started investigating the additional benefits of RMOUG as a non-profit before I became president.  They started additional initiatives that will continue in the upcoming years, and we’d invested savings into low-risk stocks to ensure the financial security of the organization.  Now many might say that a non-profit shouldn’t be thinking about revenue from events and investing, but the long-term focus for the organization (as I’ve adopted from Tim) is towards scholarships for those who want a career in technology, and our annual event is the primary revenue opportunity to continue that investment.

The scholarships have been carefully cared for and delivered almost seamlessly by Nicole Navratil.  She really had no transition or training, and yet she jumped in and has done an incredible job making sure we’ve achieved great success with scholarship distribution.  We’ve had a significant increase in WIT scholarships, some of them paid out from the profit from our investments, helping to decrease expenditures.

See the Change

With all of these changes, we just needed to see if it had resulted in success and this is where my incredible husband came in.  Anyone who knows Tim can attest to this, but he took our treasurer reports and added some simple, but very effective graphs to show financial trends and did analysis to show the turn around in our financial status vs. previous years.  The board has done an incredible job moving towards longevity from the previous downward turn in financial savings.  I have no doubt that we’ll continue this trend with the next board in the upcoming years as we move forward.


User groups are under incredible pressure to evolve as the technical industry does.  I’ve suggested that we take on an idea that was raised when we first became a 501(c)(3) non-profit, and that’s to move away from having Oracle in our name (as a non-profit, we may focus on Oracle, but can’t be locked to any one vendor) and become a data user group.

Moving on from the original proof of value for membership, we may move to one more in line with meetups, in that membership doesn’t cost anything and we’ll look to other ways, via sponsorship, to support events outside of the annual conference, keeping the annual event as our main investment.  I spend a lot of time studying the Microsoft PASS community and I see many features of their SQL Saturday events that could be incorporated into a hybrid event for Oracle that could bring not just great success, but less cost to its attendees.

As for me, I’m going to retire after this year and simply be an advisor to the board.  Bryan Wells is going to take over the conference and I’m proud to say that many stepped up at the event with interest in joining the RMOUG board.  It’s a great group that I’m proud to call myself the current president of and look forward to many great things from them to come.

For those of you that look for my annual conference evaluation report, I’ll be producing that in about another month.  If you did speak or attend, do fill out that evaluation in your inbox.  I live for that data!!






Copyright © DBA Kevlar [Scene and Cut. Training Days 2018], All Rights Reserved. 2018.

The post Scene and Cut. Training Days 2018 appeared first on DBA Kevlar.

18c–If you can’t wait

You’ve seen the tweet !!


but perhaps the accompanying blog post has tempered your enthusiasm.


You might be thinking:

“I’m not on Exadata – how can I play with 18c?”

Well, there is still a way to get ahead of the game and skill up on 18c.  We’ve upgraded livesql.oracle.com to 18c, and we’ve pre-loaded some demo scripts for you as well.


So get cracking! Oracle Database 18c builds upon the foundation set by 12.2.



Huge Pages

A useful quick summary from Neil Chandler replying to a thread on Oracle-L:

Topic: RAC install on Linux

You should always be using Hugepages.

They give a minor performance improvement and a significant memory saving in terms of the amount of memory needed to handle the pages – fewer Translation Lookaside Buffer (TLB) entries, which also means fewer TLB misses (which are expensive).

You are handling the memory chopped up into 2MB pieces instead of 4K. But you also have a single shared memory TLB for Hugepages.

The kernel has less work to do, bookkeeping fewer pointers in the TLB.

You also have contiguous memory allocation and it can’t be swapped.

If you are having problems with Hugepages, you have probably overallocated them (I’ve seen this several times at clients, so it’s not uncommon). Hugepages can *only* be used for your SGAs. All of your SGAs should fit into the Hugepages, and that should generally be no more than about 60% of the total server memory (but there are exceptions), leaving plenty of “normal” memory (small pages) for PGA, O/S and other stuff like monitoring agents.
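A quick way to sanity-check the allocation on Linux (my addition, not from Neil’s reply):

grep -i huge /proc/meminfo

HugePages_Total, HugePages_Free and HugePages_Rsvd show whether the pool matches the SGAs: a persistently large HugePages_Free (with zero Rsvd) once all the instances are up is exactly the over-allocation described above.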

As an added bonus, AMM can’t use Hugepages, so you are forced to use ASMM. AMM doesn’t work well and has been kind-of deprecated by Oracle anyway – dbca won’t let you set up AMM if the server has more than 4GB of memory.

There are a few follow-up emails after Neil’s; particularly helpful are two from Stefan Koehler, here and here.