Sequence Costs

You’re probably aware of the “identity” construct that appeared in 12.1 and uses Oracle’s sequence mechanism to model the Autonumber or Identity columns that other databases have. A posting from Clay Jackson on the Oracle-L list server suggests that something about their code path has introduced a surprising overhead in 19c … when you’re not using them.

The following code is a slightly modified version of a test case that Clay Jackson posted to demonstrate a strange difference in performance between 12.2 and 19.3.
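In outline the test is easy to sketch. Here's a minimal version of the idea (my own sketch, not Clay Jackson's actual script; table names and row counts are illustrative): time a bulk insert into an ordinary table and into a table with an identity column, on both versions.

-- Minimal sketch (assumed names, not the original test case):
-- compare insert timing with and without an identity column.
create table t_plain    ( id number, v1 varchar2(10) );
create table t_identity ( id number generated always as identity, v1 varchar2(10) );

set timing on
insert into t_plain (id, v1) select rownum, 'x' from dual connect by level <= 1e5;
insert into t_identity (v1)  select 'x'         from dual connect by level <= 1e5;

The reported oddity is that the 19c timing degrades even on the path that doesn't use the identity mechanism at all.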

dense_rank

I’ve just been prompted to complete and publish a draft I started a few years ago. It’s (ultimately) about a feature that appeared in 9i but doesn’t seem to show up very often at client sites or as a common solution to performance problems on the various Oracle forums – but maybe that’s not surprising given how slowly analytic functions have been taken up.

I want to work towards the feature by starting with a requirement, then examine several solutions. To supply a touch of realism I’ll create an orders table, which holds a customer id and an order date (including time), and then ask for a report of the most recent order for each customer. Here’s some starting data:
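(What follows is a rough sketch of my own; the post's actual data set and column names may differ.)

create table orders
as
select
        trunc(dbms_random.value(1,1001))        id_customer,
        sysdate - dbms_random.value(0,365)      date_ordered,
        rpad('x',100)                           padding
from
        dual
connect by
        level <= 1e5
;

-- one of the candidate solutions: an analytic ranking by customer
select  id_customer, date_ordered
from    (
        select
                id_customer, date_ordered,
                dense_rank() over (
                        partition by id_customer
                        order by date_ordered desc
                )       drnk
        from    orders
        )
where   drnk = 1
;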

Unreal Load Testing in the Cloud

Load testing as part of a cloud migration project is expected and should be built into the requirements. The goal is to set yourself up for success.

Log Latency

Recently I had a Cloud Solution Architect (CSA) escalate an Oracle performance problem after migrating from on-prem to the cloud.

Databases are slow right?

Man, it gets my goat when people start pontificating nonsense on Twitter.

Yeah, I know, I know, I shouldn’t get upset, because 99% of everything on social media is nonsense anyway, but it’s when people roll out claims about database tech that I tend to get tetchy.

Today it’s that familiar old chestnut: “Constraints in the database make it slow”. And… this one even came with some numbers to add some apparent weight!
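Claims like that are cheap to test for yourself. A minimal sketch (entirely my own, with illustrative names): load the same data into a table with no constraints and into one carrying a not-null plus check constraint, and time both.

create table t_no_constraints ( n1 number );
create table t_constraints    ( n1 number not null check (n1 >= 0) );

set timing on
insert into t_no_constraints select rownum from dual connect by level <= 1e6;
insert into t_constraints    select rownum from dual connect by level <= 1e6;

Run it a few times and compare before accepting anyone's numbers at face value.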

count(*) – again

I’ve just received an email asking (yet again) a question about counting the number of rows in a table.

We have a large table with a CLOB column, where the CLOB occupies 85% of the storage space.
When we use select count(*) on this table, the DBA says we should not use count(*) because it uses all the columns, and as the table contains a CLOB this results in high CPU usage; whereas if we use count(rowid) we get the same result faster.
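The short answer is that count(*) doesn’t have to touch every column, let alone the CLOB, and you can see that from the execution plan. A minimal sketch (illustrative table, SQL*Plus autotrace):

create table t1 ( id number, c1 clob );

insert into t1
select rownum, rpad('x',1000)
from   dual
connect by level <= 1000
;
commit;

set autotrace traceonly explain
select count(*)     from t1;
select count(rowid) from t1;
set autotrace off

Both statements produce essentially the same plan - a scan that never needs to reference the CLOB column.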

Video : Decoupling to Improve Performance

In today’s video we demonstrate how to cheat your way to looking like you’ve improved performance using decoupling.

This was based on the following article.

This came up in conversation a few days ago, so I thought it was worth resurrecting this demo. It doesn’t really matter what tech stack you use, the idea is still the same.

Oracle and postgres disk IO performance

This post is about one of the fundamentally important properties of a database: how IO is done. The test case I studied is doing a simple full table scan of a single large table. In both Oracle and postgres the table doesn’t have any indexes or constraints, which is not a realistic example, but this doesn’t change the principal topic of the study: doing a table scan.

I used a publicly available dataset from the US Bureau of Transportation Statistics called FAF4.5.1_database.zip.
The zipped file is 347MB; unzipped it is 1.7GB.
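For reference, the scan itself needs nothing exotic on either platform; a minimal sketch (the table name is my placeholder, not necessarily the one from the dataset):

-- Oracle (SQL*Plus): time a full scan, hinting it to be safe
set timing on
select /*+ full(t) */ count(*) from faf451 t;

-- postgres (psql): the same scan, with timing and buffer statistics
\timing on
explain (analyze, buffers) select count(*) from faf451;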

Single Value Column Frequency Histogram Oracle 12c and later

It is hopefully well known by now that Oracle introduced in version 11g a new algorithm for gathering table statistics that no longer requires sorting to determine the critical Number Of Distinct Values (NDV) figure - instead it uses a clever "approximate NDV" algorithm that always reads 100% of the table data and therefore in principle generates very accurate statistics. This new algorithm is used only when the ESTIMATE_PERCENT parameter of the DBMS_STATS.GATHER*STATS calls is left at its default or explicitly passed as "DBMS_STATS.AUTO_SAMPLE_SIZE". The new algorithm is also required if other new features like "Incremental Statistics" are to be used.

In 12c Oracle improved this algorithm, allowing the generation of Frequency and the new Top Frequency histogram types in a single pass. The new Hybrid histogram type still requires a separate pass.
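To see which histogram type you actually get, gather statistics with the default sample size and check the data dictionary; a minimal sketch (table name illustrative):

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 254',
                estimate_percent => dbms_stats.auto_sample_size
        );
end;
/

select column_name, num_distinct, num_buckets, histogram
from   user_tab_col_statistics
where  table_name = 'T1'
;

The HISTOGRAM column will report FREQUENCY, TOP-FREQUENCY or HYBRID depending on the data and the bucket count requested.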

Where does the log writer spend its time?

The Oracle database log writer is the process that fundamentally influences database change performance. Under normal circumstances the log writer must persist the changes made to the blocks before the actual change is committed. Therefore, it’s vitally important to understand exactly what the log writer is doing. This is widely known in the Oracle database community.

The traditional method for looking at log writer performance is to look at the wait event ‘log file parallel write’ and the CPU time, and compare them to ‘log file sync’, alias "commit wait time". If ‘log file parallel write’ and ‘log file sync’ roughly match, a commit is waiting on log writer IO latency; if they don’t, the picture is unclear and things get vague.
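The system-wide figures for that comparison come straight from v$system_event; a minimal sketch (averages only, so treat it as a rough indicator):

select  event,
        total_waits,
        time_waited_micro,
        round(time_waited_micro / nullif(total_waits,0)) avg_wait_us
from    v$system_event
where   event in ('log file sync','log file parallel write')
;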

Oracle wait event ‘log file parallel write’ change

This post is about a change in how the time is measured for the wait event ‘log file parallel write’. This matters for tuning any change activity in an Oracle database, because with the default commit settings a foreground session that commits changes waits in the wait event ‘log file sync’, which is a wait on logwriter activity, and ‘log file parallel write’ has always been the indicator of the time the logwriter spent on IO.

Log file sync
First things first: a foreground session normally waits on the wait event ‘log file sync’ when it commits, waiting for its change vectors to be written to the online redo log file(s) by the logwriter. It is wrong to assume a ‘log file sync’ will always be present: if, somehow, the logwriter manages to increase the ON DISK SCN to or beyond the foreground session’s commit SCN, there will be no ‘log file sync’ wait event.
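If you want to see what the logwriter itself is waiting on, rather than inferring it from the foreground side, its session-level wait profile is available; a minimal sketch:

select  se.event, se.total_waits, se.time_waited_micro
from    v$session_event se,
        v$session       s
where   s.program like '%LGWR%'
and     se.sid = s.sid
order by
        se.time_waited_micro desc
;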