compression

Compressed LOB–my table got bigger?

We had an interesting question on AskTOM the other day about LOB compression. Someone was very concerned that after applying compression to the LOBs in a column of their table, the table actually got larger! Cue much confusion and panic, but after a little exploration, the explanation was pretty simple.

When you create a LOB column in a table, the default storage definition is ENABLE STORAGE IN ROW. As per the documentation:

If you enable storage in row, then the LOB value is stored in the row (inline) if its length is less than approximately 4000 bytes minus system control information.
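You can see that default in the data dictionary. A quick sketch (the table and column names here are just illustrative):

create table t ( id number, doc clob );

select table_name, column_name, in_row
from   user_lobs
where  table_name = 'T';

With no LOB storage clause at all, IN_ROW comes back as YES, so CLOB values under that ~4000 byte threshold live in the table blocks themselves.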

LOBs vs Data – when compression occurs

Just a quick tip for SECUREFILE lobs.

You may be familiar with basic compression on data in tables. It is a dictionary attribute of the table, which is then taken into account when you perform:

  • a SQL Loader direct load, or
  • an INSERT with the APPEND hint, or
  • a bulk bind INSERT in PL/SQL using the APPEND_VALUES hint.

Whichever of the above you perform does not really matter – the key thing is that when you set the COMPRESS attribute on a table, it only applies to operations performed after the moment at which you altered the table. It does not compress the existing data, which is why adding the attribute is instantaneous.
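To make that concrete, a quick sketch with made-up table names:

-- enabling basic compression is instantaneous: no existing rows are touched
alter table t compress basic;

-- direct-path inserts from here on ARE compressed
insert /*+ APPEND */ into t
select * from t_staging;

commit;

-- conventional inserts are NOT compressed under basic compression
insert into t
select * from t_staging;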

Choosing the Right Compression

Choosing the right database table compression matters.  It affects the size of the objects and also the performance of your application.  Compression makes objects smaller, and for many systems, this is an objective in itself, but it also affects performance.  As compressed objects are smaller, less physical I/O is required to access them.  However, that saving is paid for in CPU.  More CPU is required to compress the data when it is written, and then again (albeit usually less CPU) to decompress it when you read it.  The question is whether the saving in I/O pays for the additional CPU.

There are a number of forms of compression to choose between.  They have different characteristics.  The resulting objects are different sizes, and they require different amounts of CPU to compress and decompress.
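Purely by way of illustration, the main flavours look like this in 12c syntax (basic compression is included in Enterprise Edition, advanced row compression needs the Advanced Compression option, and the hybrid columnar variants need Exadata or similarly supported storage):

create table t_basic    ( id number, pad varchar2(100) ) compress basic;
create table t_advanced ( id number, pad varchar2(100) ) row store compress advanced;
create table t_hcc      ( id number, pad varchar2(100) ) column store compress for query high;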

Licensed for Advanced Compression? Don’t forget the network

We often think of Advanced Compression being exclusively about compressing data “at rest”, i.e., on some sort of storage device. And don’t get me wrong, if we consider just that part of Advanced Compression, that still covers a myriad of opportunities that could yield benefits for your databases and database applications:

  • Heat maps
  • Automatic Data Optimization
  • XML, JSON and LOB compression (including de-duplication)
  • Compression on backups
  • Compression on Data Pump files
  • Additional compression options on indexes and tables
  • Compressed Flashback Data Archive storage
  • Storage snapshot compression

However, if you are licensed for the option, there is something else you can take advantage of: compressing data as it travels over the network.
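That network-level piece (Advanced Network Compression) is switched on via sqlnet.ora on both ends of the connection. A minimal sketch, with illustrative values:

# sqlnet.ora on both client and server (Advanced Compression licence required)
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
SQLNET.COMPRESSION_THRESHOLD = 1024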

nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
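As a rough sketch of the no-extra-licence route (basic compression only applies to direct-path operations, so a MOVE is used to compress the data already there; PS_LEDGER is the General Ledger table, adjust names to suit):

alter table ps_ledger move compress basic;

-- the move leaves the table's indexes unusable, so find and rebuild them
select index_name
from   user_indexes
where  table_name = 'PS_LEDGER'
and    status = 'UNUSABLE';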

Index compression–quick tip

If you’re appropriately licensed and want to use advanced index compression, you can set a tablespace default so that indexes created in it are automatically compressed at a nominated level. From the docs:

Here is an example of that in action. We’ll set our tablespace default accordingly:


SQL> create tablespace demo
  2  datafile 'C:\ORACLE\ORADATA\DB122\DEMO.DBF'
  3  size 100M
  4  default index compress advanced high;

Tablespace created.

Now we’ll create a table and an index on it in that tablespace:
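Something along these lines, with illustrative object names and output. Note there is no COMPRESS clause on the index itself; the tablespace default does the work:

SQL> create table t ( x int, y varchar2(30) ) tablespace demo;

Table created.

SQL> create index ix on t ( x, y ) tablespace demo;

Index created.

SQL> select index_name, compression
  2  from   user_indexes
  3  where  index_name = 'IX';

INDEX_NAME   COMPRESSION
------------ -------------
IX           ADVANCED HIGH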

Debugging

The OTN database forum supplied a little puzzle a few days ago – starting with the old, old question: “Why is the plan with the higher cost taking less time to run?”

The standard (usually correct) answer to this question is that the optimizer doesn’t know all it needs to know to predict what’s going to happen, and even if it had perfect information about your data the model used isn’t perfect anyway. This was the correct answer in this case, but with a little twist in the tail that made it a little more entertaining. Here’s the query, with the two execution plans and the execution statistics from autotrace:

Implementing Index Compression (and other Physical Storage Options) via Application Designer

There are some performance improvements that require physical storage options to be set on tables or indexes.
One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries.  It is not updated very frequently, only as new trees are brought on.  Many of the indexes are good candidates for compression.
This compression works by storing repeated column values only once per index leaf block.  Each distinct set of values in the columns up to the prefix length is stored in a symbol table.  The choice of prefix length can significantly affect the compression.  Oracle can calculate the optimal prefix length using ANALYZE INDEX ... VALIDATE STRUCTURE, which reports it in INDEX_STATS.
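In practice that looks something like the following (the index name is a placeholder; be aware that VALIDATE STRUCTURE without the ONLINE keyword locks the index while it runs):

analyze index my_index validate structure;

-- INDEX_STATS holds a single row from the current session's last analyze
select opt_cmpr_count,    -- optimal number of leading columns to compress
       opt_cmpr_pctsave   -- predicted space saving (%) at that prefix length
from   index_stats;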

Index Compression – aargh

The problem with telling people that some feature of Oracle is a “good thing” is that some of those people will go ahead and use it; and if enough people use it some of them will discover a hitherto undiscovered defect. Almost inevitably the bug will turn out to be one of those “combinations” bugs that leaves you thinking: “Why the {insert preferred expression of disbelief here} should {feature X} have anything to do with {feature Y}”.

Here – based on index compression, as you may have guessed from the title – is one such bug. I got it first on 11.1.0.7, but it’s still there on 11.2.0.4 and 12.1.0.1.

12c In-memory

I wrote a note about the 12c “In-Memory” option some time ago on the OTN Database forum and thought I’d posted a link to it from the blog. If I did, I can’t find it now, so to avoid losing it, here’s a copy of the comments I made:

Juan Loaiza’s presentation is probably available on the Oracle site by now, but in outline: the in-memory component duplicates data (specified tables – perhaps with a restriction to a subset of columns) in columnar format in a dedicated area of the SGA. The data is kept up to date in real time, but Oracle doesn’t use undo or redo to maintain this copy of the data because it’s never persisted to disc in this form; it’s recreated in memory (by a background process) if the instance restarts. The optimizer can then decide whether it would be faster to use a columnar or row-based approach to address a query.
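For reference, a minimal sketch of the DDL involved as of 12.1.0.2 (the table name and sizes are illustrative):

-- the in-memory area must be carved out of the SGA first; needs a restart
alter system set inmemory_size = 2G scope=spfile;

-- nominate a table for the columnar copy, with a compression level and
-- a priority that controls how eagerly it is populated after startup
alter table sales inmemory
  memcompress for query low
  priority high;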