DataGuard

No risk to activate Active Data Guard by mistake with SQL Developer SQLcl

If you have a Data Guard configuration without the Active Data Guard license, you can:

  • apply the redo to keep the physical standby synchronized
  • or open the database read-only to query it

but not at the same time.
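In plain SQL, outside of the broker, those two mutually exclusive states look roughly like this on the standby (a minimal sketch):

-- either keep the redo apply running on a mounted standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- or stop the apply and open the database read-only to query it:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;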

Risk with sqlplus “startup”

Being opened READ ONLY WITH APPLY requires the Active Data Guard option. But this may happen by mistake. For example, in sqlplus you just type “startup” instead of “startup mount”: the standby database is opened read-only. Then the Data Guard broker (with state APPLY-ON) starts MRP, and the primary database records that you are using Active Data Guard. DBA_FEATURE_USAGE_STATISTICS then flags the usage of “Active Data Guard - Real-Time Query on Physical Standby”, and the LMS auditors will count the option.
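To check whether the primary has already flagged the feature, a quick query on DBA_FEATURE_USAGE_STATISTICS can look like this (a sketch):

SQL> select name, detected_usages, currently_used, last_usage_date
     from dba_feature_usage_statistics
     where name like 'Active Data Guard%';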

The ways to prevent it are unsupported:

Oracle 19c Data Guard sandbox created by DBCA -createDuplicateDB

Here are the commands I use to create a sandbox on Linux with a CDB1 database in a Data Guard configuration. I use the latest (19c) DBCA features to create the Primary and duplicate to the Standby.
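For reference, the duplication step can be a single DBCA call like the one below (a sketch under the assumption of a CDB1 primary reachable at primary-host:1521/CDB1; credentials and storage options are omitted):

# create the standby as a duplicate of the running primary
dbca -silent -createDuplicateDB \
  -gdbName CDB1 -sid CDB1 \
  -primaryDBConnectionString primary-host:1521/CDB1 \
  -createAsStandby -dbUniqueName CDB1_STBY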

I’m doing it all in a VM, a Compute Instance provisioned in the Oracle Cloud. In this example, I have an Oracle Linux 7.6 VM.DenseIO2.24 shape with 320GB RAM and 24 cores, but remember that you will not be able to scale up/down, so choose according to your credits...

I have 40GB in the / filesystem

OS and filesystem installation

I’ve installed the prerequisites as root (preinstall package, sudo and HugePages — here 200GB out of the 314GB I have):
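On Oracle Linux 7, those root steps typically look something like the following (the HugePages count here is just an example sized for a 200GB reservation with 2MB pages, not the exact commands from this post):

# install the 19c preinstall package and reserve HugePages (as root)
yum install -y oracle-database-preinstall-19c
echo "vm.nr_hugepages=102400" >> /etc/sysctl.conf   # 102400 x 2MB = 200GB
sysctl -p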

19c Observe-Only Data Guard FSFO: no split-brain risk in manual failover

Fast-Start Failover (FSFO) is an amazing feature of the Oracle Data Guard Broker which brings High Availability (HA) features in addition to Disaster Recovery (DR).

Data Guard as an HA solution

By default, a physical standby database provides Disaster Recovery (for when your Data Center is on fire or underwater, or hit by a power cut,…). But it requires a manual action to do the failover. Then, even if the failover is quick (seconds to minutes) and there is no loss of data (if in SYNC), it cannot be considered as HA because of the manual decision, which can take hours. The idea of the manual decision is to understand the cause, as it may be better to just wait in case of a transient failure, especially if the standby site is less powerful and application performance would be degraded.
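In 19c, the broker can also run FSFO in observe-only mode, where the observer records when it would have triggered a failover without actually doing it. Enabling it is a one-liner in DGMGRL (sketch):

DGMGRL> ENABLE FAST_START FAILOVER OBSERVE ONLY;
DGMGRL> SHOW FAST_START FAILOVER;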

19c DG Broker export/import configuration

This is something I wanted for a long time: to be able to save a broker configuration so that I can re-configure it if needed. What I usually do is maintain a script with all the commands. What I dreamed of was being able to export the configuration as a script. What we have now, in 19c, is the ability to export/import the configuration as an .xml file.
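The new DGMGRL commands for this are straightforward (a sketch with a placeholder file name; if no path is given, the file is written to the trace directory):

DGMGRL> EXPORT CONFIGURATION TO 'dg_config.xml';
DGMGRL> IMPORT CONFIGURATION FROM 'dg_config.xml';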

Actually, the configuration is already stored as XML in the broker configuration files (the .dat ones):

Where to check Data Guard gap?

At work, we had a discussion with well-known colleagues, Luca Canali and Ludovico Caldara, about where we check that Data Guard recovery works as expected, without a gap. Several views can be queried, depending on the context. Here are a few comments about them.

v$database

This is my preferred view because it relies on the actual state of the database, whatever the recovery process is:

SQL> select scn_to_timestamp(current_scn) 
from v$database;
SCN_TO_TIMESTAMP(CURRENT_SCN)
----------------------------------------------------------
22-JAN-19 03.08.32.000000000 PM

This reads the current System Change Number (DICUR_SCN from X$KCCDI) and maps it to a timestamp (using the SMON_SCN_TIME mapping table).
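For comparison, on the standby the transport and apply lag can also be read directly from V$DATAGUARD_STATS (a sketch):

SQL> select name, value, time_computed
     from v$dataguard_stats
     where name in ('transport lag','apply lag');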

Data Guard gap history

V$ARCHIVED_LOG has a lot of information, and one column that is interesting in a Data Guard configuration is APPLIED, which is a boolean, or rather a VARCHAR2(3) YES/NO as there are no booleans in Oracle SQL. But I would love to see a DATE or TIMESTAMP for it. As a workaround, here is a little AWK script that parses the standby alert.log to get this information and joins it with V$ARCHIVED_LOG.COMPLETION_TIME, to see if we had gaps in the past between the archived logs and the applied ones.
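As a sketch of the idea (not the original script), assuming the 12.2+ alert.log format where an ISO timestamp line precedes each message, the apply time of each archived log can be pulled out like this (the alert.log file name is a placeholder):

# print the timestamp of each "Media Recovery Log" message with the log file path
awk '/^20[0-9][0-9]-/{ts=$1} /Media Recovery Log/{print ts, $NF}' alert_CDB1.log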

Oracle archivelog deletion policy

Here are my posts on the dbi-services blog about archivelog deletion policy.

The deletion policy in a Data Guard configuration should be:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

for the site where you don't back up. It can be the standby or the primary.

and:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;

for the site where you do the backups. It can be the primary or the standby.
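To verify which policy is currently in effect on each site, RMAN can show it (sketch):

RMAN> SHOW ARCHIVELOG DELETION POLICY;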


Oracle 12c - incremental backup for DataGuard over network

If you have Data Guard or a standby database in your organization, you will probably love this new RMAN feature. Since 12c it is possible to catch up a standby database using an incremental backup with one command. The additional space and time needed to run an incremental backup, copy it over to the standby, and restore it can be reduced to just the time required to run the incremental backup over the network.

See short example:

Stopping recovery on standby 

[oracle@ip-10-0-1-79 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Jul 7 12:56:24 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

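The one-command catch-up this refers to is the RMAN recovery from service. Once the apply is stopped on the standby, it looks roughly like this (a sketch with a hypothetical TNS alias PRIM pointing to the primary, run with RMAN connected to the standby as target):

RMAN> RECOVER DATABASE FROM SERVICE PRIM NOREDO USING COMPRESSED BACKUPSET;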