Oakies Blog Aggregator

Silent Installation of Grid Infrastructure

Introduction

Recently I had a requirement to install the Grid Infrastructure and Oracle RDBMS on a completely new VM. The customer I was doing this work for wanted to take a copy of their Production environment to another server so they could test an upgrade of their existing environment from Oracle GI / RDBMS 12.1 to 12.2. So they built a VM for me, copied the 12.1 installation media for both GI and the RDBMS and said “Go for it!”

I decided to log everything I did and write a series of blog posts based on my experience, in case it was of use to others. There were a few issues I needed to deal with for this customer. Some of those issues were:

  • The customer wanted to test some cloning software for other reasons that were orthogonal to this task. The cloning software was only supported on Oracle Linux 6.5 (I didn’t ask why, just accepted that this was the requirement), so they built the VM for me on that release of Oracle Linux.
  • The existing Production environment was 12.1.0.1 – absolutely vanilla, no patches at all (you can see why we suggested they upgrade!). That release is, I think, no longer supported, but as that is what the Production environment was using, we had to build the test environment using 12.1.0.1 as well.
  • The Production environment was using Standard Edition, so the test environment had to use that as well.
  • I didn’t have direct access to the test VM. I had to ssh to the Production machine, and then onto the test VM. There was no GUI available in this configuration, so the entire build had to be done using response files. I had previously built a 12.2 RDBMS kernel for another customer, but had no experience at building Grid Infrastructure using a response file.
  • There wasn’t a lot of storage available for the VM (in fact I had to extend the volume as part of this work, which is the subject of another post). More importantly, there was certainly not enough space to load a backup of their Production environment, so we had to NFS mount their Production backup device to duplicate the database from.
  • The customer wasn’t familiar with installing Oracle, so they hadn’t performed any of the checks needed to ensure a successful installation.
  • Given this was a test to see how the upgrade would go in Production, I wanted to build an environment that matched their existing Production environment as closely as possible. That was built quite some time back, and not by me, so there were some decisions made at that stage that I probably wouldn’t have made. However, I replicated those decisions to try to match their Production environment.
  • ASM is used in the Production environment, so that’s why the Grid Infrastructure was installed.

So given that set of issues to deal with, here’s how the installation was done. In this post, I’m going to cover the installation of Grid Infrastructure. I’ll cover the rest in later posts.

Pre-installation Checks

Given the customer had built the VM without knowledge of the Oracle requirements, it was important to ensure all the installation requirements were met. I basically had a VM that had an “oracle” user, but still needed to do a lot of work to get the VM ready for a successful installation. So I grabbed the installation documentation and started running through the checks documented there.

Firstly I checked the memory configuration:

[root ~]# grep MemTotal /proc/meminfo
MemTotal:       33015808 kB
[root ~]# grep SwapTotal /proc/meminfo
SwapTotal:       8130556 kB
[root ~]# free
             total       used       free     shared    buffers     cached
Mem:      33015808     844904   32170904          0      19280     107812
-/+ buffers/cache:     717812   32297996
Swap:      8130556          0    8130556

I checked the groups on the VM, and found oinstall, dba, and oper did not exist. These were the only three groups used in the Production environment, so I restricted the test build to those as well:

[root ~]# groupadd oinstall
[root ~]# groupadd dba
[root ~]# groupadd oper

[root ~]# groups oracle
oracle : oracle
[root ~]# usermod -g oinstall oracle
[root ~]# usermod -G dba,oper oracle
[root ~]# id oracle
uid=500(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper)

Next, I went on to look at the storage, and create some directories for the installation (NOTE: Production only had an oracle user, not a grid user, which is why everything in Test is also owned by the oracle user):

[root ~]# mkdir -p /u01
[root ~]# mkdir -p /u01/app
[root ~]# mkdir -p /u01/app/oracle
[root ~]# chown oracle:oinstall /u01
[root ~]# chown oracle:oinstall /u01/app
[root ~]# chmod -R 775 /u01/

As mentioned earlier, though this is a standalone server (not RAC), ASM is used for the storage, so I needed to check for – and install / configure – ASM:

[root ~]# rpm -qa | grep oracleasm
[root ~]# yum install oracleasm-support
public_ol6_UEK_latest                                                   | 1.2 kB     00:00
public_ol6_u5_base                                                      | 1.4 kB     00:00
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package oracleasm-support.x86_64 0:2.1.8-1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================
 Package                   Arch           Version             Repository                  Size
===============================================================================================
Installing:
 oracleasm-support         x86_64         2.1.8-1.el6         public_ol6_u5_base          73 k

Transaction Summary
===============================================================================================
Install       1 Package(s)

Total download size: 73 k
Installed size: 216 k
Is this ok [y/N]: y
Downloading Packages:
oracleasm-support-2.1.8-1.el6.x86_64.rpm                                |  73 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : oracleasm-support-2.1.8-1.el6.x86_64                                        1/1
  Verifying  : oracleasm-support-2.1.8-1.el6.x86_64                                        1/1

Installed:
  oracleasm-support.x86_64 0:2.1.8-1.el6

Complete!

[root oracle]# rpm -ihv oracleasmlib-2.0.12-1.el6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]
[root oracle]# rpm -qa | grep oracleasm
oracleasmlib-2.0.12-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64
[root oracle]# which oracleasm
/usr/sbin/oracleasm
[root oracle]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root oracle]# ls -lrt /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Feb  5 11:46 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm
[root oracle]# oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
[root oracle]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

Verify the oracleasm configuration

Run 'lsmod' to confirm the oracleasm module is loaded, and 'df -ha' to check that the oracleasmfs filesystem is mounted at /dev/oracleasm:

[root oracle]# lsmod | grep oracleasm
oracleasm              54288  1

[root oracle]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.12 (KABI_V2)]
 

[root oracle]# df -ha
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/vg_brsyddrdevdb01-lv_root   20G  7.8G   11G  42% /
proc                                      0     0     0    - /proc
sysfs                                     0     0     0    - /sys
devpts                                    0     0     0    - /dev/pts
tmpfs                                   16G     0   16G   0% /dev/shm
/dev/sda1                              485M   97M  364M  21% /boot
none                                      0     0     0    - /proc/sys/fs/binfmt_misc
oracleasmfs                               0     0     0    - /dev/oracleasm
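
At this point ASMLib is configured, but no disks have been marked for ASM yet. That step isn't shown in the output above; on a typical build it would look roughly like the following sketch, where the device name and disk label are hypothetical and need to be replaced with whatever partition has been set aside for ASM:

[root oracle]# oracleasm createdisk DATA1 /dev/sdb1   # label a partition for ASMLib (hypothetical device)
[root oracle]# oracleasm scandisks                    # rescan so the new label is picked up
[root oracle]# oracleasm listdisks                    # the new DATA1 label should now be listed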

Next, I wanted to install the Oracle Preinstallation RPM. Since this VM had been built by the client, I had no idea if it was registered for Linux support, so I downloaded a copy of the preinstallation RPM and did a local install of that:

[root oracle]# yum localinstall oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64.rpm
Setting up Local Package Process
Examining oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64.rpm: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
Marking oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-3.el7 will be installed
--> Processing Dependency: compat-libcap1 for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: compat-libstdc++-33 for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: gcc for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: gcc-c++ for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: ksh for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: libaio-devel for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: libstdc++-devel for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: xorg-x11-utils for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Processing Dependency: xorg-x11-xauth for package: oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
--> Running transaction check
---> Package compat-libcap1.x86_64 0:1.10-1 will be installed
---> Package compat-libstdc++-33.x86_64 0:3.2.3-69.el6 will be installed
---> Package gcc.x86_64 0:4.4.7-4.el6 will be installed
--> Processing Dependency: cpp = 4.4.7-4.el6 for package: gcc-4.4.7-4.el6.x86_64
--> Processing Dependency: cloog-ppl >= 0.15 for package: gcc-4.4.7-4.el6.x86_64
---> Package gcc-c++.x86_64 0:4.4.7-4.el6 will be installed
--> Processing Dependency: libmpfr.so.1()(64bit) for package: gcc-c++-4.4.7-4.el6.x86_64
---> Package ksh.x86_64 0:20120801-10.el6 will be installed
---> Package libaio-devel.x86_64 0:0.3.107-10.el6 will be installed
---> Package libstdc++-devel.x86_64 0:4.4.7-4.el6 will be installed
---> Package xorg-x11-utils.x86_64 0:7.5-6.el6 will be installed
--> Processing Dependency: libdmx.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
--> Processing Dependency: libXxf86misc.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
--> Processing Dependency: libXxf86dga.so.1()(64bit) for package: xorg-x11-utils-7.5-6.el6.x86_64
---> Package xorg-x11-xauth.x86_64 1:1.0.2-7.1.el6 will be installed
--> Processing Dependency: libXmuu.so.1()(64bit) for package: 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64
--> Running transaction check
---> Package cloog-ppl.x86_64 0:0.15.7-1.2.el6 will be installed
--> Processing Dependency: libppl.so.7()(64bit) for package: cloog-ppl-0.15.7-1.2.el6.x86_64
--> Processing Dependency: libppl_c.so.2()(64bit) for package: cloog-ppl-0.15.7-1.2.el6.x86_64
---> Package cpp.x86_64 0:4.4.7-4.el6 will be installed
---> Package libXmu.x86_64 0:1.1.1-2.el6 will be installed
---> Package libXxf86dga.x86_64 0:1.1.3-2.el6 will be installed
---> Package libXxf86misc.x86_64 0:1.0.3-4.el6 will be installed
---> Package libdmx.x86_64 0:1.1.2-2.el6 will be installed
---> Package mpfr.x86_64 0:2.4.1-6.el6 will be installed
--> Running transaction check
---> Package ppl.x86_64 0:0.10.2-11.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
 Package                              Arch   Version         Repository                 Size
=============================================================================================
Installing:
 oracle-rdbms-server-12cR1-preinstall x86_64 1.0-3.el7       /oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64
                                                                                        43 k
Installing for dependencies:
 cloog-ppl                            x86_64 0.15.7-1.2.el6  public_ol6_u5_base         93 k
 compat-libcap1                       x86_64 1.10-1          public_ol6_u5_base         17 k
 compat-libstdc++-33                  x86_64 3.2.3-69.el6    public_ol6_u5_base        183 k
 cpp                                  x86_64 4.4.7-4.el6     public_ol6_u5_base        3.7 M
 gcc                                  x86_64 4.4.7-4.el6     public_ol6_u5_base         10 M
 gcc-c++                              x86_64 4.4.7-4.el6     public_ol6_u5_base        4.7 M
 ksh                                  x86_64 20120801-10.el6 public_ol6_u5_base        755 k
 libXmu                               x86_64 1.1.1-2.el6     public_ol6_u5_base         65 k
 libXxf86dga                          x86_64 1.1.3-2.el6     public_ol6_u5_base         24 k
 libXxf86misc                         x86_64 1.0.3-4.el6     public_ol6_u5_base         17 k
 libaio-devel                         x86_64 0.3.107-10.el6  public_ol6_u5_base         13 k
 libdmx                               x86_64 1.1.2-2.el6     public_ol6_u5_base         20 k
 libstdc++-devel                      x86_64 4.4.7-4.el6     public_ol6_u5_base        1.6 M
 mpfr                                 x86_64 2.4.1-6.el6     public_ol6_u5_base        156 k
 ppl                                  x86_64 0.10.2-11.el6   public_ol6_u5_base        1.3 M
 xorg-x11-utils                       x86_64 7.5-6.el6       public_ol6_u5_base         94 k
 xorg-x11-xauth                       x86_64 1:1.0.2-7.1.el6 public_ol6_u5_base         34 k

Transaction Summary
=============================================================================================
Install      18 Package(s)

Total size: 23 M
Total download size: 23 M
Installed size: 56 M
Is this ok [y/N]: y
Downloading Packages:
(1/17): cloog-ppl-0.15.7-1.2.el6.x86_64.rpm                           |  93 kB     00:00
(2/17): compat-libcap1-1.10-1.x86_64.rpm                              |  17 kB     00:00
(3/17): compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm                   | 183 kB     00:00
(4/17): cpp-4.4.7-4.el6.x86_64.rpm                                    | 3.7 MB     00:02
(5/17): gcc-4.4.7-4.el6.x86_64.rpm                                    |  10 MB     00:07
(6/17): gcc-c++-4.4.7-4.el6.x86_64.rpm                                | 4.7 MB     00:03
(7/17): ksh-20120801-10.el6.x86_64.rpm                                | 755 kB     00:00
(8/17): libXmu-1.1.1-2.el6.x86_64.rpm                                 |  65 kB     00:00
(9/17): libXxf86dga-1.1.3-2.el6.x86_64.rpm                            |  24 kB     00:00
(10/17): libXxf86misc-1.0.3-4.el6.x86_64.rpm                          |  17 kB     00:00
(11/17): libaio-devel-0.3.107-10.el6.x86_64.rpm                       |  13 kB     00:00
(12/17): libdmx-1.1.2-2.el6.x86_64.rpm                                |  20 kB     00:00
(13/17): libstdc++-devel-4.4.7-4.el6.x86_64.rpm                       | 1.6 MB     00:01
(14/17): mpfr-2.4.1-6.el6.x86_64.rpm                                  | 156 kB     00:00
(15/17): ppl-0.10.2-11.el6.x86_64.rpm                                 | 1.3 MB     00:00
(16/17): xorg-x11-utils-7.5-6.el6.x86_64.rpm                          |  94 kB     00:00
(17/17): xorg-x11-xauth-1.0.2-7.1.el6.x86_64.rpm                      |  34 kB     00:00
---------------------------------------------------------------------------------------------
Total                                                        1.0 MB/s |  23 MB     00:22
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : mpfr-2.4.1-6.el6.x86_64                                                  1/18
  Installing : libstdc++-devel-4.4.7-4.el6.x86_64                                       2/18
  Installing : cpp-4.4.7-4.el6.x86_64                                                   3/18
  Installing : libXmu-1.1.1-2.el6.x86_64                                                4/18
  Installing : 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64                                    5/18
  Installing : ksh-20120801-10.el6.x86_64                                               6/18
  Installing : libXxf86dga-1.1.3-2.el6.x86_64                                           7/18
  Installing : compat-libstdc++-33-3.2.3-69.el6.x86_64                                  8/18
  Installing : libXxf86misc-1.0.3-4.el6.x86_64                                          9/18
  Installing : libaio-devel-0.3.107-10.el6.x86_64                                      10/18
  Installing : ppl-0.10.2-11.el6.x86_64                                                11/18
  Installing : cloog-ppl-0.15.7-1.2.el6.x86_64                                         12/18
  Installing : gcc-4.4.7-4.el6.x86_64                                                  13/18
  Installing : gcc-c++-4.4.7-4.el6.x86_64                                              14/18
  Installing : libdmx-1.1.2-2.el6.x86_64                                               15/18
  Installing : xorg-x11-utils-7.5-6.el6.x86_64                                         16/18
  Installing : compat-libcap1-1.10-1.x86_64                                            17/18
  Installing : oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64                   18/18
  Verifying  : compat-libcap1-1.10-1.x86_64                                             1/18
  Verifying  : gcc-c++-4.4.7-4.el6.x86_64                                               2/18
  Verifying  : libstdc++-devel-4.4.7-4.el6.x86_64                                       3/18
  Verifying  : xorg-x11-utils-7.5-6.el6.x86_64                                          4/18
  Verifying  : cpp-4.4.7-4.el6.x86_64                                                   5/18
  Verifying  : libdmx-1.1.2-2.el6.x86_64                                                6/18
  Verifying  : ppl-0.10.2-11.el6.x86_64                                                 7/18
  Verifying  : libaio-devel-0.3.107-10.el6.x86_64                                       8/18
  Verifying  : libXxf86misc-1.0.3-4.el6.x86_64                                          9/18
  Verifying  : mpfr-2.4.1-6.el6.x86_64                                                 10/18
  Verifying  : compat-libstdc++-33-3.2.3-69.el6.x86_64                                 11/18
  Verifying  : libXxf86dga-1.1.3-2.el6.x86_64                                          12/18
  Verifying  : 1:xorg-x11-xauth-1.0.2-7.1.el6.x86_64                                   13/18
  Verifying  : gcc-4.4.7-4.el6.x86_64                                                  14/18
  Verifying  : ksh-20120801-10.el6.x86_64                                              15/18
  Verifying  : oracle-rdbms-server-12cR1-preinstall-1.0-3.el7.x86_64                   16/18
  Verifying  : libXmu-1.1.1-2.el6.x86_64                                               17/18
  Verifying  : cloog-ppl-0.15.7-1.2.el6.x86_64                                         18/18

Installed:
  oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-3.el7

Dependency Installed:
  cloog-ppl.x86_64 0:0.15.7-1.2.el6                 compat-libcap1.x86_64 0:1.10-1
  compat-libstdc++-33.x86_64 0:3.2.3-69.el6         cpp.x86_64 0:4.4.7-4.el6
  gcc.x86_64 0:4.4.7-4.el6                          gcc-c++.x86_64 0:4.4.7-4.el6
  ksh.x86_64 0:20120801-10.el6                      libXmu.x86_64 0:1.1.1-2.el6
  libXxf86dga.x86_64 0:1.1.3-2.el6                  libXxf86misc.x86_64 0:1.0.3-4.el6
  libaio-devel.x86_64 0:0.3.107-10.el6              libdmx.x86_64 0:1.1.2-2.el6
  libstdc++-devel.x86_64 0:4.4.7-4.el6              mpfr.x86_64 0:2.4.1-6.el6
  ppl.x86_64 0:0.10.2-11.el6                        xorg-x11-utils.x86_64 0:7.5-6.el6
  xorg-x11-xauth.x86_64 1:1.0.2-7.1.el6

Complete!
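
As an aside, if the VM had been registered with (or pointed at) Oracle's public yum repository, the same packages could have been pulled straight from the repository instead of doing a local install of a downloaded copy. This is just a hedged alternative, not what was done here:

[root oracle]# yum install oracle-rdbms-server-12cR1-preinstall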

Next, I checked the values found in /etc/sysctl.conf and /etc/security/limits.conf on the Production machine, and copied those to the test environment. Finally, I edited the .bash_profile file for the oracle user and set umask to 022:

[oracle ~]$ . ./.bash_profile
[oracle ~]$ umask
0022
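
For illustration, the kind of settings involved are shown below. The values are the commonly quoted minimums from the 12.1 installation guide, not the actual values on this system, which were copied from the customer's Production files (and note that the preinstall RPM installed above already adjusts most of the kernel parameters, so this was largely a cross-check):

# /etc/sysctl.conf -- illustrative minimums only
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# /etc/security/limits.conf -- illustrative oracle user limits only
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240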

Grid Infrastructure Installation

Now that the VM was set up correctly, it was time to install the Grid Infrastructure code. Since I was doing a software installation only, the only relevant parts of the response file were sections A, B, and C (in fact even C wasn’t needed), so I left everything else blank. Here’s the grid.rsp file I used for the installation:

[oracle ~]$ cat grid.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2013. All rights reserved.           ##
##                                                                           ##
## Specify values for the variables listed below to customize                ##
## your installation.                                                        ##
##                                                                           ##
## Each variable is associated with a comment. The comment                   ##
## can help to populate the variables with the appropriate                   ##
## values.                                                                   ##
##                                                                           ##
## IMPORTANT NOTE: This file contains plain text passwords and               ##
## should be secured to have read permission only by oracle user             ##
## or db administrator who owns this installation.                           ##
##                                                                           ##
###############################################################################

###############################################################################
##                                                                           ##
## Instructions to fill this response file                                   ##
## To install and configure 'Grid Infrastructure for Cluster'                ##
##  - Fill out sections A,B,C,D,E,F and G                                    ##
##  - Fill out section G if OCR and voting disk should be placed on ASM      ##
##                                                                           ##
## To install and configure 'Grid Infrastructure for Standalone server'      ##
##  - Fill out sections A,B and G                                            ##
##                                                                           ##
## To install software for 'Grid Infrastructure'                             ##
##  - Fill out sections A,B and C                                            ##
##                                                                           ##
## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##
##                                                                           ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0

###############################################################################
#                                                                             #
#                          SECTION A - BASIC                                  #
#                                                                             #
###############################################################################

#-------------------------------------------------------------------------------
# Specify the hostname of the system as set during the install. It can be used
# to force the installation to use an alternative hostname rather than using the
# first hostname found on the system. (e.g., for systems with multiple hostnames
# and network interfaces)
#-------------------------------------------------------------------------------
ORACLE_HOSTNAME=devdb01.acme.com.au

#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oraInventory

#-------------------------------------------------------------------------------
# Specify the languages in which the components will be installed.
#
# en   : English                  ja   : Japanese
# fr   : French                   ko   : Korean
# ar   : Arabic                   es   : Latin American Spanish
# bn   : Bengali                  lv   : Latvian
# pt_BR: Brazilian Portuguese     lt   : Lithuanian
# bg   : Bulgarian                ms   : Malay
# fr_CA: Canadian French          es_MX: Mexican Spanish
# ca   : Catalan                  no   : Norwegian
# hr   : Croatian                 pl   : Polish
# cs   : Czech                    pt   : Portuguese
# da   : Danish                   ro   : Romanian
# nl   : Dutch                    ru   : Russian
# ar_EG: Egyptian                 zh_CN: Simplified Chinese
# en_GB: English (Great Britain)  sk   : Slovak
# et   : Estonian                 sl   : Slovenian
# fi   : Finnish                  es_ES: Spanish
# de   : German                   sv   : Swedish
# el   : Greek                    th   : Thai
# iw   : Hebrew                   zh_TW: Traditional Chinese
# hu   : Hungarian                tr   : Turkish
# is   : Icelandic                uk   : Ukrainian
# in   : Indonesian               vi   : Vietnamese
# it   : Italian
#
# all_langs   : All languages
#
# Specify value as the following to select any of the languages.
# Example : SELECTED_LANGUAGES=en,fr,ja
#
# Specify value as the following to select all the languages.
# Example : SELECTED_LANGUAGES=all_langs
#-------------------------------------------------------------------------------
SELECTED_LANGUAGES=all_langs

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
#   - CRS_CONFIG : To configure Grid Infrastructure for cluster
#   - HA_CONFIG  : To configure Grid Infrastructure for stand alone server
#   - UPGRADE    : To upgrade clusterware software of earlier release
#   - CRS_SWONLY : To install clusterware files only (can be configured for cluster
#                  or stand alone server later)
#   - HA_SWONLY  : To install clusterware files only (can be configured for stand
#                  alone server later. This is only supported on Windows.)
#-------------------------------------------------------------------------------
oracle.install.option=CRS_SWONLY

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME=/u01/app/product/12.1.0/grid

################################################################################
#                                                                              #
#                              SECTION B - GROUPS                              #
#                                                                              #
#   The following three groups need to be assigned for all GI installations.   #
#   OSDBA and OSOPER can be the same or different.  OSASM must be different    #
#   than the other two.                                                        #
#   The value to be specified for OSDBA, OSOPER and OSASM group is only for    #
#   Unix based Operating System.                                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# The DBA_GROUP is the OS group which is to be granted OSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=dba

#-------------------------------------------------------------------------------
# The OPER_GROUP is the OS group which is to be granted OSOPER privileges.
# The value to be specified for OSOPER group is optional.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=oper

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted OSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=oinstall

################################################################################
#                                                                              #
#                           SECTION C - SCAN                                   #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a name for SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=

#-------------------------------------------------------------------------------
# Specify a unused port number for SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=

################################################################################
#                                                                              #
#                           SECTION D - CLUSTER & GNS                         #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify the type of cluster you would like to configure
# Allowed values: FLEX and STANDARD
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterType=


#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS for this cluster
# oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=



#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should a comma-separated list of tuples.  Each tuple should be a
# colon-separated string that contains
# - 2 fields if configuring a Standard Cluster, or
# - 3 fields if configuring a Flex Cluster
#
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
#    (Should be specified as AUTO if you have chosen 'auto configure for VIP'
#     i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to
#    be provide only if Flex Cluster is being configured.
#
# Examples
# For configuring Standard Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
#
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=

#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetMask:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
#   - 1 : PUBLIC
#   - 2 : PRIVATE
#   - 3 : DO NOT USE
#   - 4 : ASM
#   - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Management Database Option, else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.managementdb.configure=

################################################################################
#                                                                              #
#                              SECTION E - STORAGE                             #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
#   - LOCAL_ASM_STORAGE
#   - FLEX_ASM_STORAGE
#   - FILE_SYSTEM_STORAGE
# If configuring a Flex Cluster, FLEX_ASM_STORAGE is the only allowed value
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=

#-------------------------------------------------------------------------------
# These properties are applicable only if FILE_SYSTEM_STORAGE is chosen for
# storing OCR and voting disk
# Specify the location(s) and redundancy for OCR and voting disks
# Multiple locations can be specified, separated by commas.
# In case of windows, mention the drive location that is specified to be
# formatted for DATA in the above property.
# Redundancy can be one of these:
#     EXTERNAL - one(1) location should be specified for OCR and voting disk
#     NORMAL - three(3) locations should be specified for OCR and voting disk
# Example:
#     For Unix based Operating System:
#     oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/oradbocfs/storage/vdsk1,/oradbocfs/storage/vdsk2,/oradbocfs/storage/vdsk3
#     oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/oradbocfs/storage/ocr1,/oradbocfs/storage/ocr2,/oradbocfs/storage/ocr3
#     For Windows based Operating System OCR/VDSK on shared storage is not supported.
#-------------------------------------------------------------------------------
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=
################################################################################
#                                                                              #
#                               SECTION F - IPMI                               #
#                                                                              #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=

################################################################################
#                                                                              #
#                                SECTION G - ASM                               #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify a password for SYSASM user of the ASM instance
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
#   - NORMAL
#   - HIGH
#   - EXTERNAL
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
#   - 1
#   - 2
#   - 4
#   - 8
#   - 16
#   - 32
#   - 64
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=

#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
#     For Unix based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
#     For Windows based Operating System:
#     oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=

#-------------------------------------------------------------------------------
# oracle.install.asm.monitorPassword=password
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=


################################################################################
#                                                                              #
#                             SECTION H - UPGRADE                              #
#                                                                              #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=

#------------------------------------------------------------------------------
# Specify the auto-updates option. It can be one of the following:
#   - MYORACLESUPPORT_DOWNLOAD
#   - OFFLINE_UPDATES
#   - SKIP_UPDATES
#------------------------------------------------------------------------------
oracle.installer.autoupdates.option=

#------------------------------------------------------------------------------
# In case MYORACLESUPPORT_DOWNLOAD option is chosen, specify the location where
# the updates are to be downloaded.
# In case OFFLINE_UPDATES option is chosen, specify the location where the updates
# are present.
#------------------------------------------------------------------------------
oracle.installer.autoupdates.downloadUpdatesLoc=

#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username which has the patches download privileges
# to be used for software updates.
#  Example   : AUTOUPDATES_MYORACLESUPPORT_USERNAME=abc@oracle.com
#------------------------------------------------------------------------------
AUTOUPDATES_MYORACLESUPPORT_USERNAME=

#------------------------------------------------------------------------------
# Specify the My Oracle Support Account Username password which has the patches download privileges
# to be used for software updates.
#
# Example    : AUTOUPDATES_MYORACLESUPPORT_PASSWORD=password
#------------------------------------------------------------------------------
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=

#------------------------------------------------------------------------------
# Specify the Proxy server name. Length should be greater than zero.
#
# Example    : PROXY_HOST=proxy.domain.com
#------------------------------------------------------------------------------
PROXY_HOST=

#------------------------------------------------------------------------------
# Specify the proxy port number. Should be Numeric and atleast 2 chars.
#
# Example    : PROXY_PORT=25
#------------------------------------------------------------------------------
PROXY_PORT=

#------------------------------------------------------------------------------
# Specify the proxy user name. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example    : PROXY_USER=username
#------------------------------------------------------------------------------
PROXY_USER=

#------------------------------------------------------------------------------
# Specify the proxy password. Leave PROXY_USER and PROXY_PWD
# blank if your proxy server requires no authentication.
#
# Example    : PROXY_PWD=password
#------------------------------------------------------------------------------
PROXY_PWD=

#------------------------------------------------------------------------------
# Specify the proxy realm.
#
# Example    : PROXY_REALM=metalink
#------------------------------------------------------------------------------
PROXY_REALM=
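
With the response file in place, the software-only installation is kicked off with the silent installer from the unzipped 12.1 grid media, and finishes with the root scripts the installer asks for. The media staging path below is illustrative, since it isn't shown anywhere above; the inventory and home paths match the response file:

[oracle ~]$ cd /path/to/grid_media/grid     # wherever the 12.1 GI media was unzipped (illustrative)
[oracle grid]$ ./runInstaller -silent -waitforcompletion -responseFile /home/oracle/grid.rsp

[root ~]# /u01/app/oraInventory/orainstRoot.sh
[root ~]# /u01/app/product/12.1.0/grid/root.sh
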

Those of you who are familiar with Oracle installations will pick up a couple of things I need to comment on here:

12cR2 PDB archive

In 12.1 we could unplug a PDB by closing it and generating a .xml file that describes the PDB metadata required to plug the datafiles into another CDB.
In 12.2 we gained the additional option of zipping this .xml file together with the datafiles for easy transport, but that did not work for ASM files.
The latest Release Update (October 2017) includes the patch that fixes this issue, which makes it a good occasion to show the PDB archive.

Here is Oracle 12.2.0.1 with the October 2017 Release Update (https://updates.oracle.com/download/26737266.html) applied (which requires the latest OPatch, https://updates.oracle.com/download/6880880.html), with a PDB1 pluggable database:

[oracle@VM106 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Oct 18 16:16:41 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=920040307)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
21 250 PDB1:SYSTEM YES /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygrqqq2_.dbf
22 350 PDB1:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygrqqs8_.dbf
23 100 PDB1:UNDOTBS1 YES +ASM1/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/DATAFILE/undotbs1.257.957719779
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
3 64 PDB1:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygrqqsh_.dbf

I have moved one file to ASM to show that it is now handled correctly.

Once the pluggable database is closed, we can unplug it. Nothing changes in the unplug syntax except the file extension: if the file specified is a .pdb instead of a .xml, then it is a PDB archive:

RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb';
 
RMAN> alter pluggable database PDB1 close;
 
Statement processed
 
RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb'
2> ;
 
Statement processed
 
RMAN> exit

Actually, it is just a zip file containing the datafiles (without their full paths):

[oracle@VM106 ~]$ unzip -t /var/tmp/PDB1.pdb
Archive: /var/tmp/PDB1.pdb
testing: o1_mf_system_dygrqqq2_.dbf OK
testing: o1_mf_sysaux_dygrqqs8_.dbf OK
testing: undotbs1.257.957719779 OK
testing: /var/tmp/PDB1.xml OK
No errors detected in compressed data of /var/tmp/PDB1.pdb.

You can see that the ASM file is not different from the others.

I drop the pluggable database:

RMAN> drop pluggable database PDB1 including datafiles;
 
using target database control file instead of recovery catalog
Statement processed
 

Then I plug PDB1 back in as PDB2, using the zip file:

RMAN> create pluggable database PDB2 using '/var/tmp/PDB1.pdb';
 
Statement processed
 
RMAN> report schema;
 
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
24 250 PDB2:SYSTEM NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygwt1lh_.dbf
25 350 PDB2:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygwt1lm_.dbf
26 100 PDB2:UNDOTBS1 NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_undotbs1_dygwt1lo_.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
4 64 PDB2:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygwt1lp_.dbf

All the files are there, created in the db_create_file_dest.

File name convert

When you create a pluggable database and you are not using OMF, you need to add a FILE_NAME_CONVERT clause to map the source file names to the destination file names. When the files are referenced by a .xml file, the .xml references the paths of the files as they were in the source database. If you have moved them, you can either update the .xml file or use SOURCE_FILE_NAME_CONVERT to point to the new location. With a .pdb archive, the .xml inside still contains the original paths, but that is not what is used: the path of the .pdb itself is used, as if the files had been unzipped in that location.
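
As a hedged illustration of the non-OMF case (all paths and names here are made up, not taken from this system), a plug-in from a .xml whose files have been moved could look like this:

create pluggable database PDB3 using '/var/tmp/PDB1.xml'
  source_file_name_convert=('/old/location/CDB1/PDB1/','/new/location/PDB1/')
  file_name_convert=('/new/location/PDB1/','/u02/oradata/CDB1/PDB3/');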

If you use Oracle Managed Files, you don't care about the file names, and then you don't need any of those file name converts.

 

This article, 12cR2 PDB archive, first appeared on Blog dbi services.

Extending a Logical Volume Group

Introduction

Today I ran into the situation where I needed to extend a logical volume group so I could complete an installation. I'd already installed the Grid Infrastructure, but there wasn't enough room remaining to install the Oracle kernel on the same device. This is for a test environment being built on a freshly created VM, and performance is not the issue we're looking at here, so installing the Grid Infrastructure and RDBMS on the same device is not a concern for me. I've been around the Oracle database for way too many years, but my sysadmin skills leave a lot to be desired, so I did what anyone in this situation would do: I googled "resize volume linux" and followed someone else's instructions.
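
The general shape of those instructions, for a VM where a new virtual disk has been added, is usually something like the sketch below. The device, volume group and logical volume names are hypothetical, and the -r flag asks lvextend to resize the filesystem along with the logical volume:

[root ~]# pvcreate /dev/sdb                          # initialise the new disk as a physical volume
[root ~]# vgextend vg_mydb /dev/sdb                  # add it to the existing volume group
[root ~]# lvextend -r -L +20G /dev/vg_mydb/lv_root   # grow the logical volume and its filesystem
[root ~]# df -h /                                    # confirm the extra space is visible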

RMOUG Training Days 2018

So Training Days is coming up in two weeks.  You haven’t registered to attend?  How you feeling about that?

Rocky Mountain Oracle User Group, (RMOUG) has the largest Oracle user group grassroots conference each February.  For the 2018 year, we decided to shake it up with:

  • A new venue, The Westminster Westin Conference Center
  • A new management company, EPS Conference Management
  • A new Training Days Director, Komal Goyal
  • A new management software, Bizabo


This was a huge undertaking and I have to say, I was worried we wouldn't be able to pull it off, but it was time.  There's nothing scarier than saying, "but this is the way we've always done it."  It wasn't that there was anything wrong with how we'd done it, but there's also something to be said for adding value to a great event by doing something new and taking a few risks.

On top of my tasks as president, I came back to help Komal, as this is a huge undertaking.  We signed on Penny Avril as the keynote speaker last month, and added a WIT luncheon and an AWS workshop.  All of these are pretty damn impressive.  We've replaced so much, but the one thing we didn't touch was the fantastic speakers and great content.

There are four workshops on the first day:

  • Mike Dietrich
  • Kent Graziano
  • John King
  • Peter Koletzke

There are over 100 sessions across 2 1/2 days from some of the best speakers in the Oracle community.  If you're interested in a certain track or topic, consider searching by filter.

It all sums up to be a fantastic and impressive event that will happen in Westminster, CO in a couple of weeks.  Are you really going to miss this much awesomeness?

JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU

If you want to apply the latest patches (and you should), you can go to the My Oracle Support Recommended Patch Advisor. But sometimes it is not up to date. For example, for 12.1.0.2 only the PSU is displayed and not the Proactive Bundle Patch, which is highly recommended. And across releases, the names have changed and can be misleading: 11.2.0.4 has PSUs (there is no Proactive Bundle Patch except for Engineered Systems); 12.1.0.2 can have an SPU, PSU, or Proactive Bundle Patch, with the Proactive Bundle Patch being the one highly recommended, especially now that it includes the adaptive statistics patches; and 12.2.0.1 introduces the new RUR and RU, the latest RU being the recommended one.

To make things clear, there's also the Master Note for Database Proactive Patch Program, with a reference to one note per release. This blog post is my own master note, linking directly to the recommended updates for Oracle Database.

Master Note for Database Proactive Patch Program (Doc ID 756671.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=756671.1

11.2.0.4 – PSU

Database 11.2.0.4 Proactive Patch Information (Doc ID 2285559.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285559.1
Paragraph -> 11.2.0.4 Database Patch Set Update

Latest as of Q1 2018 -> 16-Jan-2018 11.2.0.4.180116 (Jan 2018) Database Patch Set Update (DB PSU) 26925576 (Windows: 26925576)

12.1.0.2  – ProactiveBP

Database 12.1.0.2 Proactive Patch Information (Doc ID 2285558.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285558.1
Paragraph -> 12.1.0.2 Database Proactive Bundle Patches (DBBP)

Latest as of Q1 2018 -> 16-Jan-2018 12.1.0.2.180116 Database Proactive Bundle Patch (Jan 2018) 12.1.0.2.180116 27010930

12.2.0.1 – RU

Database 12.2.0.1 Proactive Patch Information (Doc ID 2285557.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285557.1
Paragraph -> 12.2.0.1 Database Release Update (Update)

Latest as of Q1 2018 -> 16-Jan-2018 12.2.0.1.180116 (Jan 2018) Database Release Update 27105253 (Windows: 12.2.0.1.180116 WIN DB BP 27162931)
 

Don’t forget SQL Developer

SQL Developer is installed in the 12c Oracle Home, but you should update it to the latest version.
Download the following from http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
– The SQL Developer zip for ‘Other Platforms’, currently 17.4
– The SQLcl zip for ‘All Platforms’, currently 17.4

On the server, remove, or rename the original directory:
mv $ORACLE_HOME/sqldeveloper $ORACLE_HOME/sqldeveloper.orig

Unzip what you have downloaded:
unzip -d $ORACLE_HOME/ sqldeveloper-*-no-jre.zip
unzip -d $ORACLE_HOME/sqldeveloper sqlcl-*-no-jre.zip

I suggest having a login.sql which sets the beautiful ansiconsole format for SQLcl:

echo "set sqlformat ansiconsole" > $ORACLE_HOME/sqldeveloper/sqlcl/login.sql

On 12.2 you can run SQLcl with just 'sql' (and the same arguments as sqlplus: / as sysdba, /nolog, …) because this is what is defined in $ORACLE_HOME/bin.
However, it changes the current working directory, and I prefer to keep the current one as it is probably where I want to run scripts from.

Then I add the following aliases in .bashrc

alias sqlcl='JAVA_HOME=$ORACLE_HOME/jdk SQLPATH=$ORACLE_HOME/sqldeveloper/sqlcl bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'
alias sqldev='$ORACLE_HOME/sqldeveloper/sqldeveloper.sh'

When running SQL Developer for the first time, you can automatically create a '/ as sysdba' connection (though remember that connecting like this is not a good practice) and a connection for each user declared in the database: right-click on Connections and choose Create Local Connections.

 

The article JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU appeared first on Blog dbi services.

Where in the World is Goth Girl- Cleveland Edition

I’ve returned from SQL Saturday Cleveland after presenting “Linux Performance Essentials for the SQL Server DBA”.  The event had 30% women speakers, which is incredible for a technical event.  I’m thrilled with the attendance and although my session went through a lot of Linux in an hour, people didn’t leave looking like their brains were going to explode, so mission accomplished.

I always feel like I'm walking around these events looking for the hidden cameras trying to record my astounded facial reactions. There is a significantly higher percentage of women speaking and in attendance, and not a pencil skirt in sight. Maybe it's that the events are on the weekend vs. a weekday, but at SQL Summit, the week-long conference in Seattle, the only dressing up I really observed was Halloween costumes and the guys wearing kilts in support of the women at the event. At SQL Saturday Cleveland, I had the surreal experience of feeling like the ONLY woman without a bright streak of color in her hair (I'm in the middle of a re-dye process this week… :)). Erin Stellato, Wendy Pastrick, Holly Foley and others all had a flair of bright color in the midst of their natural hair color. I have felt over-dressed at a couple of SQL events; they want me here for my brain, not for my appearance.

Saturday Schmaterday

SQL Saturdays are pretty cool events and incredibly successful for the Microsoft Data Platform community. Database specialists from each community get together to form a user group, which gives them the opportunity to network with each other, host events and learn from each other. They've already built a culture that expects to learn on the weekend, rarely taking business time off, but in exchange the events are free with your PASS membership (which is also free, I might add), and attendance stays consistent because of it. Microsoft makes it a bit easier than Oracle for the volunteers who run the user groups, as they've created a platform (for better or worse) that all user groups can use as a "home base" with resources, software, hosting and a dashboard to manage it all from. There is a list of expectations that makes it easy for someone to launch a local user group, but they can also take it from there and build an event outside of the PASS organization if they wish to do so; it just is no longer part of PASS. I've felt one of the biggest challenges for the Oracle user groups is that we're constantly "reinventing the wheel" with each user group, and a centralized resource location would be helpful. I don't think it would be possible to launch one on the Oracle side now, but the benefit of having one is apparent.

With that being said, the SQL community also makes it easy to become a speaker with your PASS membership: it's easy to find events, submit abstracts, and promote and interact with the event on social media. The SQL Family (you can find the #SQLFamily hashtag on Twitter) has a strong following.

I think there's a lot to be learned from all of these experiences, and because of this I've increased my involvement with the SQL Server user group in Denver as their new president. When the board suggested it, I was a little hesitant, but as they made clear their desire to keep performing the duties of their own roles to the fullest, it became clear that it would make sense for me to become president, as most of what I do as president of the Oracle user group, RMOUG, could be reused. My biggest hurdle has been updating my SQL Saturday speaking schedule so I don't speak at the same time as a Saturday board meeting!

So let’s discuss what’s coming up on the calendar for the rest of February and all of March!

February

SQL Saturday Redmond, Redmond, WA- Feb. 10th

RMOUG Training Days, Denver, CO-  Feb. 20-22nd

Since I'm doing most of the planning for this event and am also the president, I'm trying to give myself some space for February…


Multitenant, PDB, ‘save state’, services and standby databases

Creating – and using – your own services has always been the recommendation. You can connect to a database without a service name, through the instance SID, but this is not what you should do. Each database registers its db_unique_name as a service, and you can use it to connect, but it is always better to create your own application service(s). In multitenant, each PDB registers its name as a service, but the recommendation is still there: create your own services, and connect with your services.
I'll show in this blog post what happens if you use the PDB name as a service and the standby database registers to the same listener as the primary database. Of course, you can work around the non-unique service names by registering to different listeners, but this just hides the problem. The main reason to use services is to be independent from physical attributes, so being forced to assign a specific TCP/IP port is not better than using an instance SID.

I have the primary (CDB1) and standby (CDB2) databases registered to the default local listener:

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 03-FEB-2018 23:11:23
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 02-FEB-2018 09:32:30
Uptime 1 days 13 hr. 38 min. 52 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/VM122/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM122)(PORT=5501))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "627f7512a0452fd4e0537a38a8c055c0" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_CFG" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1_DGB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_DGMGRL" has 1 instance(s).
Instance "CDB1", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGMGRL" has 1 instance(s).
Instance "CDB2", status UNKNOWN, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

Look at the service 'pdb1', which is the name of my PDB. Connecting to //localhost:1521/PDB1 can connect you randomly to CDB1 (the primary database) or CDB2 (the standby database).

Here is an example, connecting several times to the PDB1 service:

[oracle@VM122 ~]$ for i in {1..5} ; do sqlplus -L -s sys/oracle@//localhost/pdb1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1

I was connected at random to CDB1 or CDB2.

As an administrator, you know the instance names and you can connect to the one you want with: //localhost:1521/PDB1/CDB1 or //localhost:1521/PDB1/CDB2:

[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB2 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2

Of course this is not what you want. And we must not start or stop the default services. For the application, the best you can do is to create your own service. And if you want to be able to connect to the Active Data Guard standby, which is opened read-only, then you can create a 'read-write' service and a 'read-only' service that you start depending on the role.

Create and Start a read-write service on the primary

This example assumes that you have only the Oracle Database software installed. If you are in RAC, with the resources managed by Grid Infrastructure, or simply with Oracle Restart, creating a service is easy with srvctl: you add it to a PDB with '-pdb', and give it a role so that it starts automatically on the primary or on the standby. But without that, you use dbms_service:

SQL> connect /@CDB1 as sysdba
Connected.
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> alter session set container=cdb$root;
Session altered.

The service is created, stored in SERVICE$ visible with DBA_SERVICES:

SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

Save state

I have created and started the PDB1_RW service. However, if I restart the database, the service will not start automatically. How do you ensure that the PDB1 pluggable database starts automatically when you open the CDB? You ‘save state’ when it is opened. It is the same for the services you create. You need to ‘save state’ when they are opened.


SQL> alter pluggable database all save state;
Pluggable database ALL altered.

The information is stored in PDB_SVC_STATE$, and I’m not aware of a dictionary view on it:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
pdb1 1888881990 pdb1 11-JAN-18 4
 
SQL> select * from containers(pdb_svc_state$);
 
INST_ID INST_NAME PDB_GUID PDB_UID SVC_HASH SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 CON_ID
------- --------- -------- ------- -------- ------ ------ ------ ------ ------ ------ ------
1 CDB1 627F7512A0452FD4E0537A38A8C055C0 2872139986 3128030313 1

The service name is not in this table; you have to join it to v$services using the name hash:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
SYS$BACKGROUND 165959219 26-JAN-17 1
SYS$USERS 3427055676 26-JAN-17 1
CDB1_CFG 1053205690 CDB1_CFG 24-JAN-18 1
CDB1_DGB 184049617 CDB1_DGB 24-JAN-18 1
CDB1XDB 1202503288 CDB1XDB 11-JAN-18 1
CDB1 1837598021 CDB1 11-JAN-18 1
pdb1 1888881990 pdb1 11-JAN-18 4
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
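
As a side note, the join just mentioned could be sketched as follows. This is only a hedged illustration (run as SYS), using the hash columns visible in the outputs above: pdb_svc_state$ exposes svc_hash, which matches v$services.name_hash.

-- Hedged sketch: resolve the saved service state to a service name
-- by joining the hash columns shown above
select s.name, s.network_name, p.inst_name
from   containers(pdb_svc_state$) p
       join v$services s on s.name_hash = p.svc_hash;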

So, in addition to the PDB state stored in PDBSTATE$ (visible with dba_pdb_saved_states), the service state is also stored. Note that they are at different levels. PDBSTATE$ is a data link: it is stored in CDB$ROOT only (because the data must be read before opening the PDB), but PDB_SVC_STATE$ is a local table in the PDB, as the services can be started only when the PDB is opened.
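
To check which PDB open states have been saved, you can query the documented view mentioned above; a quick example (the column list is the one I expect from dba_pdb_saved_states):

-- check the saved PDB open states (queried from CDB$ROOT)
select con_name, instance_name, state from dba_pdb_saved_states;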

This new service is immediately registered on CDB1:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Create and Start a read-only service for the standby

If you try to do the same on the standby for a PDB1_RO service, you cannot because service information has to be stored in the dictionary:

SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
Error starting at line : 56 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
BEGIN dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO'); END;
Error report -
ORA-16000: database or pluggable database open for read-only access

So, the read-only service has to be created on the primary:

SQL> connect /@CDB1 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1_RO 1562179816 pdb1_RO 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

The SERVICE$ dictionary table is replicated to the standby, so I can start the service on the standby:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RO');
PL/SQL procedure successfully completed.

Here is what is registered to the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now, the PDB_RO connects to the standby and PDB_RW to the primary. Perfect.

No ‘save state’ on the standby

At this point, you would like to have the PDB_RO started when PDB1 is opened on the standby, but ‘save state’ is impossible on a read-only database:

SQL> alter session set container=cdb$root;
Session altered.
 
SQL> alter pluggable database all save state;
 
Error starting at line : 84 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
alter pluggable database all save state
Error report -
ORA-16000: database or pluggable database open for read-only access

You can’t manage the state (open the PDB, start the services) in the standby database.

The primary ‘save state’ is replicated in standby

For the moment, everything is ok with my services:

Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...

If I restart the primary CDB1, everything is ok again because I saved the state of the PDB and the service. But what happens when the standby CDB2 restarts?


SQL> connect /@CDB2 as sysdba
Connected.
SQL> startup force;
...
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The PDB is not opened: the 'saved state' for the PDB is not read on the standby.
However, when I open the PDB, it seems that the 'saved state' for the service is applied, and this one is replicated from the primary:

SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> host lsnrctl status
...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

My PDB1_RW is registered for both, connections will connect at random to the primary or the standby, and then transactions will fail half of the time. It will be the same in case of a switchover. This is not correct.

Save state instances=()

What I would like is the possibility to save state for a specific DB_UNIQUE_NAME, as you can with per-PDB 'spfile' parameters. But this is not possible. What is possible is to mention an instance, but you can use it only for the primary instance where you save the state (otherwise you get ORA-65110: invalid instance name specified), and anyway this will not be correct after a switchover.
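
For reference, the instances clause looks like this; a hedged sketch only, using the instance name from this setup, and, as said above, it is accepted only for the primary instance where you run it:

-- hedged sketch of the instances clause (accepted only for the local primary instance here)
alter pluggable database pdb1 save state instances = ('CDB1');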

So what?

Be careful with services, and ensure that the services used by the application are registered only for the correct instance. Be sure that this persists when the instances are restarted. For this you must link a service name to a database role. This cannot be done correctly with 'save state'. You can use startup triggers (see the sketch below), or better, Grid Infrastructure service resources.
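
As an illustration of the startup-trigger approach, here is a minimal, untested sketch. It assumes a trigger created in PDB1 on the primary (and therefore replicated to the standby) fires when the PDB is opened, and it starts the service matching the current database role; the service names are the ones used in this post. With Grid Infrastructure or Oracle Restart, srvctl with the -role option achieves the same thing declaratively.

-- minimal sketch (not tested here): start the role-appropriate service when the PDB opens
create or replace trigger pdb1_start_role_services
after startup on database
declare
  v_role varchar2(16);
begin
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    dbms_service.start_service('pdb1_RW');
  else
    dbms_service.start_service('pdb1_RO');
  end if;
end;
/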

Do not connect to the default service with the PDB name: you cannot remove it and cannot stop it, so you may have the same name registered for different instances in a Data Guard configuration. You can register the standby instances to different local listeners to avoid the confusion, but you may still register to the same SCAN listener.

Create your own services, start them depending on the database role, and do not use ‘save state’ in a physical standby configuration.

 

The article Multitenant, PDB, 'save state', services and standby databases appeared first on Blog dbi services.

A look into Oracle redo, part 2: the discovery of the KCRFA structure

This is the second post in a series of blogposts on Oracle database redo internals. If you landed on this blogpost without having read the first blogpost, here is a link to the first blogpost: https://fritshoogland.wordpress.com/2018/01/29/a-look-into-oracle-redo-part-1-redo-allocation-latches/ The first blogpost contains all the versions used and a synopsis on what the purpose of this series of blogposts is.

In the first part, I showed how the principal access to the public redo strands is controlled by redo allocation latches, and showed a snippet of trace information of memory accesses of a foreground session when using the first public redo strand:

kcrfw_redo_gen_ext+11277:0x0000000076fdf3bc(shared pool|permanent memor,duration 1,cls perm+16114620 ):R:4:0x1256f/75119()
kcrfw_redo_gen_ext+11290:0x0000000076fdf3d8(shared pool|permanent memor,duration 1,cls perm+16114648 ):R:8:0/0()
kcrfw_redo_gen_ext+11300:0x0000000076fdf430(shared pool|permanent memor,duration 1,cls perm+16114736 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf3d0(shared pool|permanent memor,duration 1,cls perm+16114640 ):R:8:0xdeadbeef0000726e/-2401053092612115858()
kcrfw_redo_gen_ext+11312:0x0000000076fdf428(shared pool|permanent memor,duration 1,cls perm+16114728 ):W:8:0xdeadbeef0000726e/-2401053092612115858()
kcrfw_redo_gen_ext+11317:0x0000000076fdf3c8(shared pool|permanent memor,duration 1,cls perm+16114632 ):R:8:0x100010000726e/281479271707246()
kcrfw_redo_gen_ext+11322:0x0000000076fdf420(shared pool|permanent memor,duration 1,cls perm+16114720 ):W:8:0x100010000726e/281479271707246()
kcrfw_redo_gen_ext+11327:0x0000000076fdf3c0(shared pool|permanent memor,duration 1,cls perm+16114624 ):R:8:0x1b800001000/1889785614336(??)
kcrfw_redo_gen_ext+11332:0x0000000076fdf418(shared pool|permanent memor,duration 1,cls perm+16114712 ):W:8:0x1b800001000/1889785614336(??)
kcrfw_redo_gen_ext+11337:0x0000000076fdf3b8(shared pool|permanent memor,duration 1,cls perm+16114616 ):R:8:0x1256f0000b2fe/322633648354046()
kcrfw_redo_gen_ext+11342:0x0000000076fdf410(shared pool|permanent memor,duration 1,cls perm+16114704 ):W:8:0x1256f0000b2fe/322633648354046()
kcrfw_redo_gen_ext+11347:0x0000000076fdf3b0(shared pool|permanent memor,duration 1,cls perm+16114608 ):R:8:0/0()
kcrfw_redo_gen_ext+11352:0x0000000076fdf408(shared pool|permanent memor,duration 1,cls perm+16114696 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):R:8:0x7270/29296()
kcrfw_redo_gen_ext+11312:0x0000000076fdf400(shared pool|permanent memor,duration 1,cls perm+16114688 ):W:8:0x7270/29296()
kcrfw_redo_gen_ext+11317:0x0000000076fdf3a0(shared pool|permanent memor,duration 1,cls perm+16114592 ):R:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )
kcrfw_redo_gen_ext+11322:0x0000000076fdf3f8(shared pool|permanent memor,duration 1,cls perm+16114680 ):W:8:0xd8e4e200/3638878720(Redo Buffers(pgsz:2048k)|+15000064 redo|PUB_REDO_0+15000064 )
kcrfw_redo_gen_ext+11327:0x0000000076fdf398(shared pool|permanent memor,duration 1,cls perm+16114584 ):R:8:0/0()
kcrfw_redo_gen_ext+11332:0x0000000076fdf3f0(shared pool|permanent memor,duration 1,cls perm+16114672 ):W:8:0/0()
kcrfw_redo_gen_ext+11337:0x0000000076fdf390(shared pool|permanent memor,duration 1,cls perm+16114576 ):R:8:0x758e9f/7704223()
kcrfw_redo_gen_ext+11342:0x0000000076fdf3e8(shared pool|permanent memor,duration 1,cls perm+16114664 ):W:8:0x758e9f/7704223()
kcrfw_redo_gen_ext+11347:0x0000000076fdf388(shared pool|permanent memor,duration 1,cls perm+16114568 ):R:8:0x758ea0/7704224()
kcrfw_redo_gen_ext+11352:0x0000000076fdf3e0(shared pool|permanent memor,duration 1,cls perm+16114656 ):W:8:0x758ea0/7704224()
kcrfw_redo_gen_ext+11366:0x0000000076fdf438(shared pool|permanent memor,duration 1,cls perm+16114744 ):W:4:0xdeadbeef/3735928559(Redo Buffers(pgsz:2048k)|+112049903 redo|PUB_REDO_1+44941039 )
kcrfw_redo_gen_ext+11380:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):R:4:0x7270/29296()
kcrfw_redo_gen_ext+11386:0x0000000076fdf3a8(shared pool|permanent memor,duration 1,cls perm+16114600 ):W:4:0x7271/29297()
kcrfw_redo_gen_ext+11392:0x0000000076fdf468(shared pool|permanent memor,duration 1,cls perm+16114792 ):R:8:0xe0000000/3758096384(Redo Buffers(pgsz:2048k)|+134217728 )

And a snippet of trace information of memory accesses of a foreground session when using the second public redo strand:

kcrfw_redo_gen_ext+11277:0x0000000076fdf4e4(shared pool|permanent memor,duration 1,cls perm+16114916 ):R:4:0x1256f/75119()
kcrfw_redo_gen_ext+11290:0x0000000076fdf500(shared pool|permanent memor,duration 1,cls perm+16114944 ):R:8:0/0()
kcrfw_redo_gen_ext+11300:0x0000000076fdf558(shared pool|permanent memor,duration 1,cls perm+16115032 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf4f8(shared pool|permanent memor,duration 1,cls perm+16114936 ):R:8:0xdeadbeef000000e6/-2401053092612144922()
kcrfw_redo_gen_ext+11312:0x0000000076fdf550(shared pool|permanent memor,duration 1,cls perm+16115024 ):W:8:0xdeadbeef000000e6/-2401053092612144922()
kcrfw_redo_gen_ext+11317:0x0000000076fdf4f0(shared pool|permanent memor,duration 1,cls perm+16114928 ):R:8:0x10001000000e6/281479271678182()
kcrfw_redo_gen_ext+11322:0x0000000076fdf548(shared pool|permanent memor,duration 1,cls perm+16115016 ):W:8:0x10001000000e6/281479271678182()
kcrfw_redo_gen_ext+11327:0x0000000076fdf4e8(shared pool|permanent memor,duration 1,cls perm+16114920 ):R:8:0x800/2048()
kcrfw_redo_gen_ext+11332:0x0000000076fdf540(shared pool|permanent memor,duration 1,cls perm+16115008 ):W:8:0x800/2048()
kcrfw_redo_gen_ext+11337:0x0000000076fdf4e0(shared pool|permanent memor,duration 1,cls perm+16114912 ):R:8:0x1256f00012488/322633648383112()
kcrfw_redo_gen_ext+11342:0x0000000076fdf538(shared pool|permanent memor,duration 1,cls perm+16115000 ):W:8:0x1256f00012488/322633648383112()
kcrfw_redo_gen_ext+11347:0x0000000076fdf4d8(shared pool|permanent memor,duration 1,cls perm+16114904 ):R:8:0/0()
kcrfw_redo_gen_ext+11352:0x0000000076fdf530(shared pool|permanent memor,duration 1,cls perm+16114992 ):W:8:0/0()
kcrfw_redo_gen_ext+11307:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):R:8:0xe6/230()
kcrfw_redo_gen_ext+11312:0x0000000076fdf528(shared pool|permanent memor,duration 1,cls perm+16114984 ):W:8:0xe6/230()
kcrfw_redo_gen_ext+11317:0x0000000076fdf4c8(shared pool|permanent memor,duration 1,cls perm+16114888 ):R:8:0xdc01ce00/3691105792(Redo Buffers(pgsz:2048k)|+67227136 redo|PUB_REDO_1+118272 )
kcrfw_redo_gen_ext+11322:0x0000000076fdf520(shared pool|permanent memor,duration 1,cls perm+16114976 ):W:8:0xdc01ce00/3691105792(Redo Buffers(pgsz:2048k)|+67227136 redo|PUB_REDO_1+118272 )
kcrfw_redo_gen_ext+11327:0x0000000076fdf4c0(shared pool|permanent memor,duration 1,cls perm+16114880 ):R:8:0/0()
kcrfw_redo_gen_ext+11332:0x0000000076fdf518(shared pool|permanent memor,duration 1,cls perm+16114968 ):W:8:0/0()
kcrfw_redo_gen_ext+11337:0x0000000076fdf4b8(shared pool|permanent memor,duration 1,cls perm+16114872 ):R:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11342:0x0000000076fdf510(shared pool|permanent memor,duration 1,cls perm+16114960 ):W:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11347:0x0000000076fdf4b0(shared pool|permanent memor,duration 1,cls perm+16114864 ):R:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11352:0x0000000076fdf508(shared pool|permanent memor,duration 1,cls perm+16114952 ):W:8:0x75a1d2/7709138()
kcrfw_redo_gen_ext+11366:0x0000000076fdf560(shared pool|permanent memor,duration 1,cls perm+16115040 ):W:4:0xdeadbeef/3735928559(Redo Buffers(pgsz:2048k)|+112049903 redo|PUB_REDO_1+44941039 )
kcrfw_redo_gen_ext+11380:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):R:4:0xe6/230()
kcrfw_redo_gen_ext+11386:0x0000000076fdf4d0(shared pool|permanent memor,duration 1,cls perm+16114896 ):W:4:0xe7/231()
kcrfw_redo_gen_ext+11392:0x0000000076fdf590(shared pool|permanent memor,duration 1,cls perm+16115088 ):R:8:0xe0200000/3760193536(Redo Buffers(pgsz:2048k)|+136314880 )

If you pay really close attention, you see the memory locations moved from 0x76fdf3?? / 0x76fdf4?? to 0x76fdf4?? / 0x76fdf5??, all in permanent memory in the shared pool. Would this mean the information for each strand is kept in a struct per strand? Let's take a memory access at a specific offset in the kcrfw_redo_gen_ext function from the first trace, take the access at exactly the same offset in the function from the second trace (which means it's probably doing the same thing), and calculate the difference between the memory addresses:

strand 0
kcrfw_redo_gen_ext+11322:0x0000000076fdf3f8(shared pool|permanent memor,duration 1,cls perm+16114680 ):W:8
strand 1
kcrfw_redo_gen_ext+11322:0x0000000076fdf520(shared pool|permanent memor,duration 1,cls perm+16114976 ):W:8

Now take the memory addresses, and calculate the difference: 0x76fdf520 – 0x76fdf3f8 = 0x128 = 296

Okay, so would the size of the probable struct per strand be 296 bytes? Another question: where would this potential struct start?

Often, the location of a struct is fetched from another location, just like we saw earlier for the redo allocation latch, especially if it is created more dynamically than the fixed SGA structures, which are not dynamic by nature. Upon close examination of the similarities between both pina traces, I found the usage of a pointer in the kcrfsg_ struct at offset +0:

kcrfw_redo_gen_ext+14362:0x0000000060016228(fixed sga|var:kcrfsg_+0 ):R:8:0x76fdf388/1996354440(shared pool|permanent memor,duration 1,cls perm+16114568 )

The difference between the memory address I took and the pointer address in kcrfsg_ at offset 0 for the first public redo strand is:
0x76fdf3f8 – 0x76fdf388 = 0x70 = 112
The difference between the memory address I took for the second public redo strand and the pointer in kcrfsg_ at offset 0 is:
0x76fdf520 – 0x76fdf388 = 0x198 = 408
Then subtract with the calculated difference between the memory locations of the first and second public redo strand:
408 – 296 = 112
Bingo!

It would be really helpful to have some kind of explanation of this structure… I searched through My Oracle Support (the regular knowledge base very seldom provides internals information; the best chance for internals is the bug database, the rest is… well, let me say: not that helpful. I also hate that other tabs are enabled if nothing is found), oraus.msg, oradebug dumplist and oradebug doc component for any tracing or dumps that could give more information, but nothing seemed to point to any explanation. I dumped the shared pool, but I found the same information (the memory location being permanent memory) as I obtained from x$ksmsp. Also, since the pinatrace annotation takes care of any x$ view that has its source address in the SGA, it seems there is no way other than to puzzle out all the individual locations in what I assume is a structure that holds per-strand metadata.

I decided to go through the x$ views one more time, and found an x$ view that I already knew of, which points to both 'strands' and 'kcrf': x$kcrfstrand. However, the reason it is not annotated is that this x$ view has its addresses in the PGA:

SQL> select addr from x$kcrfstrand;

ADDR
----------------
00007F14CE31AE00
00007F14CE31AE00

This means that when querying this x$ view, the results are gathered from wherever the source is (the source can be file headers, like the redo log (x$kccle) and the controlfile (x$kcccf) for example; in this case we will see that the actual source of x$kcrfstrand is the SGA), then stored temporarily in the PGA, potentially manipulated, and then shown from the PGA, as we can see from the addresses (the addr field in x$kcrfstrand).

However, what I can do to investigate where x$kcrfstrand takes its information from is run the pina (memory access) trace on the execution of 'select * from x$kcrfstrand', and try to see where it reads its data from. This is what the relevant part looks like:

qerfxFetch+1563:0x00007ffee590a638():W:8:0x10ea8781/283805569()
kcrf_strand_ftcb+2:0x00007ffee590a630():W:8:0x7ffee590b130/140732749885744()
kcrf_strand_ftcb+10:0x00007ffee590a620():W:8:0x7f7780569f40/140151230996288()
kcrf_strand_ftcb+23:0x00000000600169b8(Fixed Size(pgsz:2048k)|+92600 fixed sga|var:kcrf_max_strands_+0 ):R:4:0x2/2()
kcrf_strand_ftcb+29:0x00000000600169c8(Fixed Size(pgsz:2048k)|+92616 fixed sga|var:kcrf_actv_pvt_strands_+0 ):R:4:0/0()
kcrf_strand_ftcb+35:0x00007f7780448558():R:4:0/0()
kcrf_strand_ftcb+52:0x0000000060016228(Fixed Size(pgsz:2048k)|+90664 fixed sga|var:kcrfsg_+0 shared pool|pointer:shared pool redo struct+0 ):R:8:0x76fdf388/1996354440(shared pool|permanent memor,duration 1,cls perm+16114568 )
kcrf_strand_ftcb+64:0x00007ffee590a618():W:8:0xcf7cb25/217565989()
_intel_fast_memcpy+7:0x00000000157f62a0():R:8:0x137fff/1277951()
__intel_ssse3_rep_memcpy+160:0x00000000157b9488():R:4:0x8000/32768()
__intel_ssse3_rep_memcpy+184:0x0000000076fdf4a0(shared pool|permanent memor,duration 1,cls perm+16114848 ):R:16:0x00000000200040000/8590196736(??)
__intel_ssse3_rep_memcpy+3438:0x00007f7780448670():W:16:0x0000ec00000000000/4151755906482176()
...
__intel_ssse3_rep_memcpy+9254:0x0000000076fdf388(shared pool|permanent memor,duration 1,cls perm+16114568 ):R:16:0xfe56e00000fe56e00000/-2305841916834480128()
__intel_ssse3_rep_memcpy+9258:0x00007f7780448560():W:16:0xfe56e00000fe56e00000/-2305841916834480128()

Going from top to bottom through this snippet:
– qerfxFetch is the query execute rowsource function to query x$ tables; kcrf_strand_ftcb is a helper function (kernel cache recovery file strand fetch copy buffer is my guess) to fetch the kcrf strand data from the shared pool structure; the other functions are memory copy functions.
– Interestingly, the kcrf_strand_ftcb function reads offset 0 of kcrfsg_, just like we have seen before. This is quite logical if you think about it: this struct is dynamically allocated in the shared pool, which means it doesn't have a fixed address, so it needs to be looked up every time something must be read from it.
– The next functions are memcpy functions; I've shown the relevant ones. The first memcpy (_intel_fast_memcpy) probably is a helper function that switches to an Intel-optimised memcpy, __intel_ssse3_rep_memcpy, which takes advantage of Intel's SIMD optimisations (SSSE3 stands for Supplemental Streaming SIMD Extensions 3).
– The memcpy functions at lines 11 and 12 show it's reading 16 bytes from 0x76fdf4a0, and then writing 16 bytes to 0x7f7780448670.
– 0x76fdf4a0 relates to the struct for the second strand: kcrfsg_+0 = 0x76fdf388, 0x76fdf388 + 296 = 0x76fdf4b0, and 0x76fdf4b0 – 16 (the copy size) = 0x76fdf4a0.
– 0x7f7780448670 is quite probably an address in the process' PGA, which is used for the intermediate representation of the struct's data for displaying it in the x$ view.

Even more interesting, this confirms that x$kcrfstrand is the externalisation of the struct that is used together with kcrfsg_, and the calculations that were made previously are correct; kcrfsg_+0 = 0x76fdf388, and that address with 296 added is 0x76fdf4b0, which is the same as 0x76fdf4a0 + 16 (if you look into the above example, the memcpy function copies 16 bytes at a time).

I think it's pretty safe to assume this kcrf-related struct in the shared pool is what is commonly referred to as 'KCRFA' in x$kcrfstrand. Of course we can only guess what the 'A' in 'kernel cache recovery file A' means; I would say 'auxiliary', but it could also very well be the A in 'strAnd'. It is a structure that was clearly needed when Oracle changed the concept of a single log buffer into multiple public redo strands, because identical information for multiple strands needs to be stored at the same time, and the information in kcrfsg_ is singular in nature.

Please mind that because of the intermediate representation of the x$ table in the PGA, I could not use the trick I showed in my 'drilldown into the logwriter inner working and communication' post, which is to query a single column in the x$ view and see which memory addresses in this struct are touched; the entire struct is copied into the PGA and processed before it is used to show the contents, even if only a single column of a single row is queried.

Also consider that since the struct data is copied into the process' PGA in order to make this information human readable, not all offsets (in x$kqfco) of the PGA representation might be applicable to the SGA struct. In fact, I think I have proven that the KCRFA per-strand metadata width is 296. Now look at the column offsets from x$kqfco for x$kcrfstrand:

SQL> select c.kqfconam, c.kqfcooff, c.kqfcosiz
from x$kqfco c, x$kqfta t
where t.indx = c.kqfcotab
and t.kqftanam = 'X$KCRFSTRAND'
order by c.kqfcooff;

KQFCONAM                         KQFCOOFF   KQFCOSIZ
------------------------------ ---------- ----------
ADDR                                    0          8
INDX                                    0          4
INST_ID                                 0          4
CON_ID                                  0          2
STRAND_HDR_SCN_KCRFA_CLN                0          8
HIGHEST_SCN_KCRFA_CLN                   0          8
PNEXT_BUF_KCRFA_CLN                    32          8
NEXT_BUF_NUM_KCRFA_CLN                 40          4
BYTES_IN_BUF_KCRFA_CLN                 48          8
AVAILABLE_BUFS_KCRFA_CLN               56          4
MEM_BUFS_AVAILABLE_KCRFA_CLN           60          4
STRAND_HEADER_BNO_KCRFA_CLN            72          4
FLAG_KCRFA_CLN                         76          1
PVT_STRAND_STATE_KCRFA_CLN             77          1
HIGHEST_SUBSCN_KCRFA_CLN               78          2
LWN_BUF_NUM_KCRFA_CLN                  80          4
STRAND_NUM_ORDINAL_KCRFA_CLN           84          4
STRAND_INFO_PTR_KCRFA_CLN              88          8
ACTION_KCRFA                          184          4
LOG_RESIDUE_KCRFA                     188          1
LAST_BUF_WRITTEN_KCRFA                216          4
LAST_BUF_GATHERED_KCRFA               220          4
CONS_LOW_BUFNO_KCRFA                  224          4
FIRST_BUF_KCRFA                       240          8
LAST_BUF_KCRFA                        248          8
TOTAL_BUFS_KCRFA                      256          4
STRAND_SIZE_KCRFA                     260          4
LOWBUF_KCRFA                          272          4
HIBUF_KCRFA                           276          4
WRAPHIBUF_KCRFA                       280          4
LOG_SEQ_KCRFA                         284          4
PTR_KCRF_PVT_STRAND                   304          8
INDEX_KCRF_PVT_STRAND                 312          4
SPACE_KCRF_PVT_STRAND                 316          4
TXN_KCRF_PVT_STRAND                   320          2

This shows that the representation of the KCRFA struct in the PGA apparently has a width of 322 bytes, while we know the KCRFA struct itself is 296 bytes. I am not sure why this is; the most obvious reason would be that some data is already processed in the PGA to make it easier to show. This might also very well be the reason it needs to be copied to the PGA first rather than being displayed directly from its shared pool location.

UNDO sizing

90% of the databases that I see for the first time have the same issue with the UNDO tablespace: it's oversized, yet it still causes the infamous ORA-1555 errors at times. Here is why.

UNDO is oversized because of dumb monitoring which treats the UNDO tablespace in the same way as normal datafiles, i.e. the monitoring wants to see some free space in UNDO. So it notifies DBAs and tells them to increase the UNDO datafiles, and most DBAs do it straight away without thinking. This is the most common reason why databases end up with ridiculously large UNDO tablespaces.
It shouldn't be this way. UNDO is rollback history: there's some really old history which can be ignored, there's some history that might be necessary for a long-running transaction, a delayed block cleanout or a flashback query, and finally there's the rollback history of active transactions. Unless you have a special test environment, at any single point in time nobody knows when and which UNDO data might be required. Oracle provides 3 options to handle this unknown (a quick sketch of the corresponding settings follows the list):

  • let Oracle decide how much undo to keep based on undo_retention and autoextensible undo datafiles
  • let Oracle decide how much undo to keep based on the fixed size of the UNDO tablespace
  • ask Oracle to guarantee the undo_retention setting
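
The sketch below maps each option to its setting; the tablespace and file names are illustrative, not taken from any particular database.

-- option 1: autoextensible undo files, retention driven by undo_retention (in seconds)
alter system set undo_retention = 10800;
-- option 2: fixed-size undo: stop the datafile from growing and let Oracle tune retention
alter database datafile '/u01/oradata/DB/undotbs01.dbf' autoextend off;
-- option 3: guarantee undo_retention (DML may fail when undo space runs out)
alter tablespace undotbs1 retention guarantee;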

Since datafiles are usually autoextensible, option 1 is the most common scenario. There are two possibilities with this option:
1) undo_retention is left at the default of 900 seconds. This effectively means Oracle doesn't need to keep a lot of undo history, since you didn't ask for it. And no matter how big your UNDO is, this setup may easily lead to "snapshot too old" errors – Oracle doesn't need to keep much UNDO, and happily re-uses undo space quickly enough
2) undo_retention is set to a relatively high value, like a few hours. This should set the minimum undo retention time and, in theory, should guarantee enough transaction history in the UNDO to run queries of at least undo_retention duration. However, I've seen more than once that setting undo_retention with autoextensible UNDO doesn't help avoid ORA-1555, even for queries that are much shorter than undo_retention. I don't know how to explain this, I don't have a test case, and I hate dealing with Oracle Support

With fixed-size UNDO, Oracle behaves rather differently and tries to utilize all of the allocated UNDO datafiles as much as possible. It ignores undo_retention and just shows you tuned_undoretention, which represents how much history you keep in your fixed-size UNDO. I think this is the only reliable way to run an Oracle database in production. In fact, the documentation suggests using this approach; it's just not the default.
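
A quick, hedged way to see what retention a fixed-size UNDO actually gives you is to look at tuned_undoretention in v$undostat:

-- how much retention (in seconds) the auto-tuning currently provides
select min(tuned_undoretention), max(tuned_undoretention) from v$undostat;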

Recently I've seen a database with a multi-terabyte UNDO. Its size was about 20% of all the datafiles in the system. This prompted me to try to write a query which would compute the approximate UNDO size required to handle the load.
Here are the ideas:

  • the minimum UNDO size should be derived from the maximum number of active undo blocks over the run time of the database. It's most likely not enough, but it gives a good starting point for understanding how much space active transactions require in extreme situations
  • the maximum UNDO size should be equal to the maximum of the sum of the 'undo change vector size' statistic over a sliding window representing the target undo_retention with retention guarantee. This value is most likely way more than is required for normal database operation.
  • a sufficient UNDO size should be somewhere between those two numbers. I think it should be derived empirically, starting from some intermediate best guess between min and max, then adjusted if ORA-1555 or undo space errors appear

Here’s the script I’ve come up with:

prompt
prompt UNDO sizing report
prompt ==================
prompt 
prompt Reports current undo set up, and minimum/maximum required undo tablespace size based on historical AWR data
prompt
prompt Note:
prompt   a) if your undo datafiles are not fixed size (all of them), you are doing it wrong most likely
prompt   b) awr_snapshots_count is used as a period to calculate maximum required undo size with guaranteed undo retention
prompt   c) there is no way to calculate sufficient undo size with a good precision and this report is an approximation
prompt 
prompt For feedback mailto:timur.akhmadeev@gmail.com
prompt

col inst_id                         format 99 head 'In|st'
col current_size_mb                 format 999,999,999 head 'Current|undo, MB'
col is_autoextensible               format a4 head 'Auto|ext?'
col undo_retention                  format 999.9 head 'Retention|hours'
col undo_size_min_mb                format 999,999,999 head 'Minimumal|req UNDO, MB'
col undo_size_guarantee_mb          format 999,999,999 head 'Max req|UNDO, MB'
col longest_sql                     format 999,999.9 head 'Longest|SQL, h'
col longest_sql_id                  format a13 head 'Longest|sql_id'
col max_ora1555_cnt                 format 999,999 head 'Max ORA-|1555 cnt'
col max_no_space_cnt                format 999,999 head 'Max no|space cnt'

select
  ua.inst_id,
  ua.current_size_mb,
  ua.is_autoextensible,
  ua.undo_retention/3600 undo_retention,
  um.undo_size_min_mb,
  u.undo_size_guarantee_mb,
  um.longest_sql/3600 longest_sql,
  um.longest_sql_id,
  um.max_ora1555_cnt,
  um.max_no_space_cnt
from
  gv$parameter p,
  ( -- how much undo is required to guarantee undo retention for awr_snapshots_count period
    select 
      inst_id,
      max(required_undo_mb) undo_size_guarantee_mb
    from
    (
      select
        inst_id,
        sum(undo_size) over (partition by inst_id order by begin_interval_time rows &awr_snapshots_count preceding) required_undo_mb
      from
      (     
        select
          s.instance_number inst_id,
          s.begin_interval_time,
          round((ss.value - lag(ss.value) over (partition by s.instance_number order by s.begin_interval_time))/1024/1024) undo_size
        from 
          dba_hist_snapshot s,
          v$database d,
          dba_hist_sysstat ss,
          v$statname n
        where
          s.dbid = d.dbid and
          s.dbid = ss.dbid and
          s.instance_number = ss.instance_number and
          s.snap_id = ss.snap_id and
          ss.stat_id = n.stat_id and
          n.name = 'undo change vector size'
      )
    )
    group by inst_id
  ) u,
  ( -- minimally required undo as max active blocks
    select
      uh.instance_number inst_id,
      max(activeblks * p.value/1024/1024) undo_size_min_mb,
      max(maxquerylen) longest_sql,
      max(maxquerysqlid) keep (dense_rank first order by maxquerylen desc) longest_sql_id,
      max(ssolderrcnt) max_ora1555_cnt,
      max(nospaceerrcnt) max_no_space_cnt
    from
      dba_hist_snapshot s,
      v$database d,
      dba_hist_undostat uh,
      v$parameter p
    where
      s.dbid = d.dbid and
      s.dbid = uh.dbid and
      s.instance_number = uh.instance_number and
      s.snap_id = uh.snap_id and
      p.name = 'db_block_size'
    group by
      uh.instance_number
  ) um,
  ( -- current undo setup
    select
      p.inst_id,
      round(sum(bytes)/1024/1024) current_size_mb,
      max(t.autoextensible) is_autoextensible,
      max(ur.value) undo_retention
    from 
      dba_data_files t, 
      gv$parameter p,
      (select inst_id, value from gv$parameter where name = 'undo_retention') ur
    where
      t.tablespace_name = p.value and
      p.name = 'undo_tablespace' and
      p.inst_id = ur.inst_id
    group by
      p.inst_id
  ) ua
where
  p.inst_id = u.inst_id and
  p.name = 'undo_retention' and
  u.inst_id = ua.inst_id and
  ua.inst_id = um.inst_id
;

Example output:

Enter value for awr_snapshots_count: 5

 In      Current Auto Retention    Minimumal      Max req    Longest Longest       Max ORA-    Max no
 st     undo, MB ext?     hours req UNDO, MB     UNDO, MB     SQL, h sql_id        1555 cnt space cnt
--- ------------ ---- --------- ------------ ------------ ---------- ------------- -------- ---------
  1       65,508 YES         .3        3,678        9,213        1.7 6xxxxxxxxxxxg        0         0
  2       55,508 YES         .3        5,325       26,797        1.7 6xxxxxxxxxxxg        0         0

What does the output mean?

  • this is a 2 node RAC with autoextensible UNDO tablespaces of more than 50G each
  • undo_retention is the default 900 seconds (the .3 hours shown is just a rounding of 0.25)
  • the maximum active undo blocks as reported in dba_hist_undostat were a few GB per instance, and these numbers are close to the minimally required UNDO tablespace sizes
  • the maximum undo change vector size that was generated over 5 consecutive AWR snapshots is 9 and 26 GB respectively. Those are the UNDO tablespace sizes required to guarantee an undo_retention of 6 hours (hourly snapshots in this DB).
  • a few FYI columns: the longest SQL, its sql_id, and the max number of ORA-1555 and out-of-space errors (which perhaps should be sums)
  • based on these numbers I think I'd try to set a fixed UNDO size of about 20G for both nodes. This is smaller than the currently allocated UNDO, and it should still be enough to run the DB without errors. This DB didn't have any undo errors though, so technically there's no need to touch it unless it's broken or there's not enough space for such an UNDO

I tried to use the Undo Advisor for this DB and the only result I got was… "Undo tablespace is OK."


NULL’s vs NOT NULL’s and Performance

When it comes to giving the cost based optimiser the best possible chance to make the "right" decisions, many DBAs are diligent in keeping statistics up to date, using histograms where appropriate, and creating more indexes (or removing surplus indexes).

However, one often neglected area is that the null-ness of columns also impacts the optimiser's decisions. NULL and NOT NULL do more than just act as constraints; they also add to (or detract from) the value of indexes on those columns. Here's an example of how the null-ness of a column impacts optimizer decisions. I have a table T which is a copy of DBA_OBJECTS, indexed on OBJECT_ID.


SQL> create table t as
  2  select * from dba_objects
  3  where object_id is not null;

Table created.

SQL>
SQL> create index IX on T ( object_id );

Index created.

Now I want to perform a standard pagination style query, namely, get the first 5 rows in order of OBJECT_ID


SQL>
SQL> set autotrace traceonly
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3299198703

----------------------------------------------------------------------------------------
| Id  | Operation               | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |      |     5 |  2405 |       |  2755   (1)| 00:00:01 |
|*  1 |  COUNT STOPKEY          |      |       |       |       |            |          |
|   2 |   VIEW                  |      | 78750 |    36M|       |  2755   (1)| 00:00:01 |
|*  3 |    SORT ORDER BY STOPKEY|      | 78750 |     9M|    14M|  2755   (1)| 00:00:01 |
|   4 |     TABLE ACCESS FULL   | T    | 78750 |     9M|       |   428   (1)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)
   3 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         52  recursive calls
        130  db block gets
       1591  consistent gets
          0  physical reads
      25420  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          5  rows processed

I scanned the entire table and burned around 1500 consistent gets. Now I'll let the database know what I already know, that is, that the OBJECT_ID column contains no nulls. Then I'll repeat the same query.


SQL>
SQL> alter table T modify object_id not null;

Table altered.

SQL>
SQL> select *
  2  from
  3    ( select * from t
  4      order by object_id
  5    )
  6  where rownum <= 5;

5 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3114946973

--------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |     5 |  2405 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |      |       |       |            |          |
|   2 |   VIEW                        |      |     5 |  2405 |     3   (0)| 00:00:01 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T    | 78750 |     9M|     3   (0)| 00:00:01 |
|   4 |     INDEX FULL SCAN           | IX   |     5 |       |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)


Statistics
----------------------------------------------------------
         85  recursive calls
        132  db block gets
         72  consistent gets
          1  physical reads
      27192  redo size
       2735  bytes sent via SQL*Net to client
        607  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed

Woo hoo! 20 times more efficient. The extra information we provided to the database allowed for more execution paths to be considered by the optimizer.

That's all well and good. But how do we know which columns might be missing an appropriate NOT NULL constraint?

Well, the following routine might help :)

The PL/SQL procedure below deduces a 'number of rows that are null' count for all indexed columns of all tables in the schema (passed as parameter P_SCHEMA), although you can pass a table name to restrict the procedure to just that table. For any column that contains no nulls, the appropriate 'alter table … modify ( column_name not null)' command will be output. (Make sure you set serveroutput on.)

Notes:

  • The client for whom I wrote this script had no NOT NULL constraints on any table, so the procedure only takes a rough stab at ALL_CONSTRAINTS for existing constraints.
  • It tries to keep things sensible – avoiding complex data types and tables that are auto-generated or used for queues etc. – but there's no guarantee it won't either miss a table or pick up one it should not.
  • This is a brute force approach – it works by scanning every table in the schema, so use your discretion as to when would be a suitable time to run this routine. But it will only scan each table once to determine the null count for all candidate columns.
  • (As with any diagnosis script), you should not apply its recommendations without some careful thought first.

12.2 version


create or replace 
procedure check_indexed_columns_for_null(
                 p_schema varchar2, 
                 p_table_name varchar2 default null) is
  cursor x is
   select 
     table_name,
     column_name,
     count(*) over ( partition by table_name ) as colcount
     from
     (
       select 
         table_name,
         column_name,
         min(existing_constraint)
       from 
       (
       select  
           a.table_name, 
           a.column_name, 
           ( select  count(*)
             from    all_constraints x,
                     all_cons_columns cc
             where   x.owner = c.owner
             and     x.table_name = c.table_name
             and     cc.owner      = x.owner
             and     cc.constraint_name = x.constraint_name
             and     
            (
               ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                   or 
               ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
             )
            ) existing_constraint
       from 
         all_ind_columns a,
         all_tables c,
         all_tab_columns ccol
       where a.index_owner = p_schema
       and a.index_owner = p_schema
       and a.table_name = nvl(upper(p_table_name),a.table_name)
       and c.table_name = a.table_name
       and c.owner      = a.table_owner
       and c.owner      = ccol.owner
       and c.table_name = ccol.table_name
       and a.column_name = ccol.column_name
       and c.secondary = 'N'
       and c.temporary = 'N'
       and c.nested    = 'NO'
       and c.external  = 'NO'
       and ccol.data_type_owner is null
       and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
       and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = p_schema )
       )
       group by 
         table_name,
         column_name
       having min(existing_constraint) = 0
     );

  str0 varchar2(32767); 
  str1 varchar2(32767); 
  str2 varchar2(32767); 
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  -- dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin
for i in x loop
  if prev != i.table_name then
    str0 := 'declare ';
    str1 := 'begin select '; str2 := ' into ';
    str3 := ' '; cnt := 1;
  end if;
  if cnt = i.colcount then 
    trailer := ' ';
  else
    trailer := ','||chr(10);
  end if;
  str0 := str0 || 'v'||ltrim(cnt)||' number;';
  str1 := str1 || 'sum(decode('||i.column_name||',null,1,0))'||trailer;
  str2 := str2 || 'v'||ltrim(cnt)||trailer;
  str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
    'dbms_output.put_line(''alter table '||p_schema||'.'||i.table_name||
    ' modify ('||i.column_name||' not null);''); end if;'||chr(10);
  if cnt = i.colcount then
    str2 := str2 ||' from '||p_schema||'.'||i.table_name||';';
    str3 := str3 ||' end;';
    do_sql(str0||' '||str1||' '||str2||' '||str3);
  end if;
  prev := i.table_name;
  cnt := cnt + 1;
end loop;
end;
/
sho err

set serverout on
exec check_indexed_columns_for_null('SCOTT')
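
If you only want to check a single table, pass its name as the second parameter (the procedure uppercases it for you), for example:


exec check_indexed_columns_for_null('SCOTT','EMP')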

11.2 version

(Needs a direct CREATE TABLE grant to the owner of the procedure)
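For example, if the procedure is owned by an account called UTILS (a name used here purely for illustration), a DBA would need to issue a direct grant, because role-based privileges are not visible inside definer-rights PL/SQL:


grant create table to utils;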


create or replace
procedure check_indexed_columns_for_null(
                 p_schema varchar2,
                 p_table_name varchar2 default null) is
  cursor c_template is
    select table_name, column_name, 0 colcount
    from   all_tab_columns;

  type rowlist is table of c_template%rowtype;  
  r    rowlist;
  rc   sys_refcursor;

  str0 varchar2(32767);
  str1 varchar2(32767);
  str2 varchar2(32767);
  str3 varchar2(32767);

  prev varchar2(100) := '*';
  cnt number;
  trailer varchar2(5);

procedure do_sql(thesql varchar2) is
  tcursor integer;
  dummy integer;
begin
  dbms_output.put_line(thesql);
  execute immediate thesql;
end;

begin

  begin
    execute immediate 'drop table tmp$cons purge';
  exception when others then null;
  end;

  execute immediate 'create table tmp$cons as  select owner, table_name, constraint_name, constraint_type, to_lob(search_condition) search_condition_vc '||
                    'from all_constraints';

  open rc for
    q'{select
      table_name,
      column_name,
      count(*) over ( partition by table_name ) as colcount
      from
      (
        select
          table_name,
          column_name,
          min(existing_constraint)
        from
        (
        select
            a.table_name,
            a.column_name,
            ( select  count(*)
              from    ( select owner, table_name, constraint_name, constraint_type, cast(search_condition_vc as varchar2(4000)) search_condition_vc 
                        from tmp$cons ) x,
                      all_cons_columns cc
              where   x.owner = c.owner
              and     x.table_name = c.table_name
              and     cc.owner      = x.owner
              and     cc.constraint_name = x.constraint_name
              and
             (
                ( x.constraint_type = 'C'  and replace(search_condition_vc,'"') = a.column_name||' IS NOT NULL' )
                    or
                ( x.constraint_type = 'P'  and cc.column_name = a.column_name )
              )
             ) existing_constraint
        from
          all_ind_columns a,
          all_tables c,
          all_tab_columns ccol
        where a.index_owner = :p_schema
        and a.index_owner = :p_schema
        and a.table_name = nvl(upper(:p_table_name),a.table_name)
        and c.table_name = a.table_name
        and c.owner      = a.table_owner
        and c.owner      = ccol.owner
        and c.table_name = ccol.table_name
        and a.column_name = ccol.column_name
        and c.secondary = 'N'
        and c.temporary = 'N'
        and c.nested    = 'NO'
        and (c.owner,c.table_name) not in ( select owner, table_name from all_external_tables where owner = :p_schema )
        and ccol.data_type_owner is null
        and ccol.data_type not in ('LONG','LONG RAW','CLOB','UROWID','UNDEFINED','NCLOB','BLOB','BFILE','ROWID')
        and (c.owner,c.table_name) not in ( select owner, queue_table from all_queue_tables where owner = :p_schema )
        )
        group by
          table_name,
          column_name
        having min(existing_constraint) = 0
      )
      }' using p_schema,p_schema,p_table_name,p_schema,p_schema;

  fetch rc bulk collect into r;
  close rc;
  
  for i in 1 .. r.count loop
    if prev != r(i).table_name then
      str0 := 'declare ';
      str1 := 'begin select '; str2 := ' into ';
      str3 := ' '; cnt := 1;
    end if;
    if cnt = r(i).colcount then
      trailer := ' ';
    else
      trailer := ','||chr(10);
    end if;
    str0 := str0 || 'v'||ltrim(cnt)||' number;';
    str1 := str1 || 'sum(decode('||r(i).column_name||',null,1,0))'||trailer;
    str2 := str2 || 'v'||ltrim(cnt)||trailer;
    str3 := str3 || 'if v'||ltrim(cnt)||' = 0 then '||
      'dbms_output.put_line(''alter table '||p_schema||'.'||r(i).table_name||
      ' modify ('||r(i).column_name||' not null);''); end if;'||chr(10);
    if cnt = r(i).colcount then
      str2 := str2 ||' from '||p_schema||'.'||r(i).table_name||';';
      str3 := str3 ||' end;';
      do_sql(str0||' '||str1||' '||str2||' '||str3);
    end if;
    prev := r(i).table_name;
    cnt := cnt + 1;
  end loop;
end;
/
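
The invocation is the same as for the 12.2 version, for example:


set serverout on
exec check_indexed_columns_for_null('SCOTT')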

 

If you want to see the underlying table scan queries that are being run, simply comment the “dbms_output.put_line” call in the DO_SQL subroutine back in (in the 11.2 version above it is already active).

Enjoy!