data pump

impdp logtime=all metrics=y and 12cR2 parallel metadata

A quick post to show why you should always use LOGTIME=ALL METRICS=Y with Data Pump. Just look at this example, which shows the timestamp in front of each line and a message for each task completed by a worker:

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
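
For reference, here is roughly how those options are passed on the command line (a minimal sketch; the connection, directory, dump file and parallel settings are placeholders rather than the values used in the run above):

impdp system directory=DATA_PUMP_DIR dumpfile=full%U.dmp logfile=imp_full.log \
      full=y parallel=20 logtime=all metrics=y

METRICS=Y adds the "Completed ... objects in N seconds" lines and LOGTIME=ALL adds the timestamps, both on screen and in the logfile.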

impdp content=metadata_only locks the stats

With Oracle you can learn something new every day. Today, preparing a migration to 12.2, I found that all tables had locked statistics. I learned that this has been the expected behavior since 10.2 when importing metadata only including statistics: the statistics are locked so that the automatic statistics gathering job does not come along and replace them with '0 rows' statistics.

It is documented in DataPump Import Without Data Locks Table Statistics (Doc ID 415081.1), but as I was really surprised by it (and also frustrated to be learning a 10.2 behavior while migrating to 12.2), I wanted to test it myself.
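
To see which tables ended up with locked statistics after such an import, and to unlock them, something like the following does the job (a quick sketch; the SCOTT schema is only a placeholder):

select table_name, stattype_locked
from   dba_tab_statistics
where  owner = 'SCOTT' and stattype_locked is not null;

exec dbms_stats.unlock_schema_stats('SCOTT')

Once unlocked, the automatic statistics gathering job (or a manual dbms_stats.gather_schema_stats) can maintain the statistics again, which is exactly what the import-time lock is meant to prevent until the data has actually been loaded.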

Archaic Data Transfer - A DBA Rant

There was a great post by Noel Yuhanna on how he determines the number of DBAs required in a database environment based on the size and number of databases. This staffing challenge has created a situation where data platforms are searching for ways to remove the roadblock and eliminate the skills needed to manage the database tier.

Data Pump LOGTIME, DUMPFILE, PARFILE, DATA_PUMP_DIR in 12c

Data Pump is a powerful way to save data or metadata, move it, migrate it, and so on. Here is an example showing a few new features in 12cR1 and 12cR2.

New parameters

Here is the result of a diff between 12.1 and 12.2 'impdp help=y':
[Screenshot CaptureDataPump122: diff of 'impdp help=y' output between 12.1 and 12.2]

But for this post, I'll show the parameters that existed in 12.1 but have been enhanced in 12.2.

LOGTIME

This is a 12.1 feature. The parameter LOGTIME=ALL displays the system timestamp in front of the messages, both on screen and in the logfile. The default is NONE, and you can also set it to STATUS for screen only or LOGFILE for logfile only.
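
As an illustration, a small parameter file using it could look like this (a sketch only; the schema, dump file and log file names are placeholders):

# imp_hr.par - hypothetical parameter file
directory=DATA_PUMP_DIR
dumpfile=hr%U.dmp
logfile=imp_hr.log
schemas=HR
logtime=all

impdp system parfile=imp_hr.par

DATA_PUMP_DIR here is the directory object that is created by default with the database.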

Data Pump Enhancements in Oracle Database 12c

Another one to file under “Not sexy but flippin’ awesome!”

If you are a DBA, you are going to spend a lot of time with Data Pump. All roads seem to lead back to it. :) There are some more headline-worthy features, like transportable database, but the two that jumped out at me were actually pretty small, but awesome.

  • “TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y” – Switches table and/or index imports to NOLOGGING for the lifespan of the import operation.
  • “LOGTIME=ALL” – Puts a timestamp in the output messages so you can see how long individual operations took (see the sketch after this list).
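
A minimal sketch of how the two can be combined on an impdp command line (the directory, dump file and schema are placeholders):

impdp system directory=DATA_PUMP_DIR dumpfile=hr.dmp schemas=HR \
      logtime=all transform=disable_archive_logging:y

Note that DISABLE_ARCHIVE_LOGGING has no effect if the database is running in FORCE LOGGING mode, which is typically the case when a standby database is in place.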

I wrote up an article about it here.

Cheers

Tim…

Streams Pool is only for Streams? Think Again!

If you don’t use automatic SGA management (i.e. you set sga_target=0), something I frequently do, and you don’t use Streams, you have probably set the streams_pool_size parameter to 0 or not set it at all, reckoning that the pool is used by Streams alone and would therefore just be wasted memory in your environment.
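
If you want to see what is actually allocated in that pool on your system, a quick look at v$sgastat is enough (an illustrative query, not taken from the original post):

select pool, name, bytes
from   v$sgastat
where  pool = 'streams pool'
order  by bytes desc;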

SQL Developer 3.1 : Data Pump Wizards…

One of the things that sounded kinda neat in SQL Developer 3.1 was the Data Pump Wizards, so I thought I would have a play with them.

As you would expect, they are pretty straightforward. They can’t do everything you can do with expdp and impdp, but they are pretty cool for on-the-fly tasks.

You can use the wizard to generate Data Pump job definitions using the PL/SQL API. It would have been a nice touch if it also gave you the option to see a regular command-line or parameter file definition, since I would be more likely to put that into source control than the API definition of a job. Even so, a nice little feature.
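
For comparison, a PL/SQL API job definition of the kind the wizard generates looks roughly like this (a hand-written sketch against DBMS_DATAPUMP, with the SCOTT schema and DATA_PUMP_DIR as placeholders, not the wizard's actual output):

declare
  h number;
begin
  -- define a schema-mode export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'SCOTT_EXP');
  -- dump file and log file written to the DATA_PUMP_DIR directory object
  dbms_datapump.add_file(handle => h, filename => 'scott.dmp', directory => 'DATA_PUMP_DIR');
  dbms_datapump.add_file(handle => h, filename => 'scott.log', directory => 'DATA_PUMP_DIR',
                         filetype => dbms_datapump.ku$_file_type_log_file);
  -- restrict the job to the SCOTT schema
  dbms_datapump.metadata_filter(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
end;
/

The equivalent parameter file (schemas=SCOTT, dumpfile=scott.dmp, directory=DATA_PUMP_DIR) is the form I would rather check into source control.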

Cheers

Tim…