From Database 18.3 to 18.5 (on Windows)

Contrary to wild rumours on the internet, it was not a fear of the number 13 that led to a numbering jump from version 12c to version 18c. The jump was part of our new, more flexible release mechanism so that we can get fixes and enhancements to customers on a more frequent and predictable schedule. In a nutshell, smaller bundles of features and fixes, more frequently.

I won’t dwell on that – if you’re unfamiliar with the new strategy, the best place to start is MOS Note 2285040.1, which has a description and a FAQ. But in terms of (as the saying goes) eating one’s own dog food, I downloaded the 18.5 release update that came out this week, applied it to my 18.3 installation, and thought I’d share the process.
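As a quick sanity check once a release update has been applied, you can confirm it from inside the database itself. This is just a minimal sketch, assuming you have access to the DBA_REGISTRY_SQLPATCH view:

  -- list the patches recorded against this database, most recent last
  select patch_id, description, status, action_time
  from   dba_registry_sqlpatch
  order  by action_time;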

EXPORT not GATHER with DBMS_STATS

Just a short post today on something that came in as a question for the upcoming Office Hours session. I thought it could be covered quickly in a blog post, since it doesn’t need the kind of extended discussion that Office Hours is better suited to.

The question was:

“When I gather statistics using DBMS_STATS, can I just create a statistics table and pass that as a parameter to get the results of the gather?”

And the answer is simply “no”, but let me clear up the confusion.
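To illustrate the distinction the title refers to, here is a minimal sketch of the pattern: gather the statistics as normal, then export them into a user-defined statistics table. The schema, table and stat table names below are just placeholders.

  begin
    -- create a user-defined statistics table to hold copies of the stats
    dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'MY_STATS');

    -- gather fresh optimizer statistics into the dictionary as usual
    dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP');

    -- then EXPORT the just-gathered statistics into the stat table
    dbms_stats.export_table_stats(ownname => 'SCOTT', tabname => 'EMP', stattab => 'MY_STATS');
  end;
  /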

Your New Year’s Resolution

Aligning roughly with the calendar year, based on the Chinese zodiac we’re about to go from the year of the dog to the year of the pig. But for me, in the “Information Technology Zodiac”, 2018 was the year of the hack, just as it was in 2017 and just as it will be for 2019.

I’ve not dedicated much time to keeping a record of all of the high profile breaches this year, but just off the top of my head I can think of:

Automatic sequences not being dropped

One of the nice new things in 12c was the concept of identity columns. In terms of the functionality they provide (an automatic number default) it is really no different from anything we’ve had for years in the database via sequences, but native support for the declarative syntax makes migration from other database platforms a lot easier.

Under the covers, identity columns are implemented as sequences. This makes a lot of sense – why invent a new piece of functionality when you can exploit something that has already been tried and tested exhaustively for 20 years? So when you create a table with an identity column, you’ll see the appearance of a system-named sequence to support it.
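As a minimal sketch of that behaviour (the table name is just a placeholder), you can create an identity column and then go looking for the sequence the database created on your behalf:

  create table t1 (
    id   number generated always as identity,
    name varchar2(30)
  );

  -- the dictionary records which system-generated sequence backs the column
  select table_name, column_name, sequence_name
  from   user_tab_identity_cols
  where  table_name = 'T1';

  -- and the sequence itself (typically named ISEQ$$_<object_id>) shows up here
  select object_name, object_type
  from   user_objects
  where  object_type = 'SEQUENCE';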

The phantom tablespace

(Cueing my deep baritone Morpheus voice…) What if I told you that you can reference non-existent tablespaces in your DDL?

OK, it sounds like a gimmick, but there is a real issue that I’ll get to shortly. But first, the gimmick.

I’ve created a partitioned table called “T” (I’ll pause here for your applause at my incredible imagination skills for table naming) and, to show you the complete DDL, I’ll extract it using the familiar DBMS_METADATA package.
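For example, here is a sketch of that extraction step; the partition definitions are purely illustrative, not the ones from the original post:

  create table t ( x int, y int )
  partition by range ( x )
  ( partition p1 values less than (100),
    partition p2 values less than (200)
  );

  -- pull back the full DDL, including the storage and tablespace clauses
  set long 100000
  select dbms_metadata.get_ddl('TABLE','T') from dual;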

18c and the ignoring of hints

One of the new features in 18c is the ability to ignore any optimizer hints in a session or across the entire database. A motivation for this feature is obviously our own Autonomous Data Warehouse, where we want to optimize queries without the potential “baggage” of user-nominated hints strewn throughout the code.

This would seem a fairly easy function to implement: as we parse the SQL, simply rip out anything that is a comment structured as a hint. At the Perth Oracle User Group conference yesterday, I had an interesting question from an attendee – if all optimizer hints are being ignored, does this mean that every hint will be ignored? In particular, what about the (very useful) QB_NAME hint? If we are just stripping out anything that is in hint format, will we lose those as well?

So it’s time for a test!
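A sketch of the kind of test involved, assuming the OPTIMIZER_IGNORE_HINTS parameter and an existing table T; the hints and query block name here are just examples:

  alter session set optimizer_ignore_hints = true;

  -- a query carrying both an access-path hint and a QB_NAME hint
  select /*+ qb_name(my_block) full(t) */ count(*)
  from   t;

  -- inspect the plan and the query block aliases it reports
  select * from table(dbms_xplan.display_cursor(format => 'TYPICAL +ALIAS'));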

Add ORDER BY to make ANY query faster

Yes, it’s SCBT day here in Perth!

SCBT = Silly Click Bait Title

This post is just a cautionary tale: it is easy to get caught up judging SQL performance solely on a few metrics, rather than taking the more common-sense approach of assessing performance against the true requirements of the relevant component of the application. I say “true requirements” because they may vary depending on what is important to the application for a particular component.

Interval versus Range partitions

One of the nice things about partitioning in the database is that partition pruning can quickly eliminate the requirement to read large amounts of data when the partitioning key(s) are appropriately provided in your queries (and DML). This also extends to queries where the values provided do not map to any partitions.

Here’s a simple example of a range-partitioned table which only covers the values from (less than) 1 up to a ceiling of 6.
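A minimal sketch along those lines (the table and column names are placeholders); note how a value above the ceiling does not map to any partition at all:

  create table sales
  ( deal_id  number,
    amount   number
  )
  partition by range ( deal_id )
  ( partition p1 values less than (1),
    partition p2 values less than (2),
    partition p3 values less than (3),
    partition p4 values less than (4),
    partition p5 values less than (5),
    partition p6 values less than (6)
  );

  -- 10 is beyond the highest boundary, so no partition can contain it
  select * from sales where deal_id = 10;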

Compressed LOB–my table got bigger?

We had an interesting question on AskTOM the other day about LOB compression. Someone was very concerned that after applying compression to the LOBs in a column of their table, the table actually got larger! Cue much confusion and panic, but after a little exploration the explanation was pretty simple.

When you create a LOB column in a table, the default storage definition is ENABLE STORAGE IN ROW. As per the documentation:

If you enable storage in row, then the LOB value is stored in the row (inline) if its length is less than approximately 4000 bytes minus system control information
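As a minimal sketch of that default (table and column names are placeholders), you can create a SECUREFILE LOB, confirm the inline setting, and see which segments the space actually lives in:

  create table doc_store
  ( id  number,
    doc clob
  ) lob (doc) store as securefile;

  -- IN_ROW = YES means small LOB values are stored inline with the row
  select table_name, column_name, segment_name, in_row
  from   user_lobs
  where  table_name = 'DOC_STORE';

  -- space for out-of-line values sits in the LOB segment, not the table segment
  select segment_name, segment_type, bytes
  from   user_segments
  where  segment_name = 'DOC_STORE'
     or  segment_name in ( select segment_name from user_lobs where table_name = 'DOC_STORE' );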

LOBs vs Data – when compression occurs

Just a quick tip for SECUREFILE LOBs.

You may be familiar with basic compression on data in tables. It is a dictionary attribute of the table, which is then taken into account when you perform:

  • a SQL*Loader direct path load, or
  • an INSERT with the APPEND hint, or
  • a bulk bind INSERT in PL/SQL using the APPEND_VALUES hint.

Whichever of the above you perform does not really matter – the key thing is that when you set the COMPRESS attribute on a table, it only applies to operations performed after the moment at which you altered the table. It does not compress the existing data. Hence, altering a table to add the attribute is instantaneous.
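A minimal sketch of that behaviour (table names are placeholders): setting the attribute is a metadata-only change, and only the direct-path operations that follow it are compressed:

  -- metadata-only change: returns immediately, existing rows are untouched
  alter table big_tab compress;

  -- a later direct-path insert is compressed as it is loaded
  insert /*+ APPEND */ into big_tab
  select * from source_tab;
  commit;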