With and without WITH_PLSQL within a WITH SQL statement

OK, let’s be honest right up front. The motivation for this post is solely to be able to roll out a tongue-twisting blog post title. But hopefully there’s some value in here for you as well if you’re hitting the error:

ORA-32034: unsupported use of WITH clause

First, some background. A cool little enhancement to the WITH clause came along in 12c that allowed PL/SQL functions to be defined within the scope of the executing SQL statement. To see the benefit of this, consider the following example, one I have a personal affinity with (given my surname).

Let’s say I’ve allowed mixed-case data in a table that holds names.
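The excerpt ends here, but to make the feature concrete, here is a minimal sketch under some assumptions: a hypothetical table NAMES with a SURNAME column and a hypothetical target table NAMES_CLEAN. It shows an inline PL/SQL function in the WITH clause, and the WITH_PLSQL hint the title refers to, which is needed when the WITH clause is not the top-level statement (otherwise ORA-32034 is raised).

-- Hypothetical tables NAMES(surname) and NAMES_CLEAN(surname), for illustration.
-- 12c allows a PL/SQL function to be declared inline in the WITH clause:
with
  function clean_name(p_name varchar2) return varchar2 is
  begin
    return initcap(p_name);
  end;
select clean_name(surname)
from   names
/

-- Inside DML the WITH clause is no longer the top-level statement, so the
-- same construct raises ORA-32034 unless the WITH_PLSQL hint is added:
insert /*+ WITH_PLSQL */ into names_clean
with
  function clean_name(p_name varchar2) return varchar2 is
  begin
    return initcap(p_name);
  end;
select clean_name(surname)
from   names
/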

Advice on fragmentation and shrinkage

If you have performed some sort of data cleanup or similar on a table, then the deleted space will be reused by future insertions. But if

  • that cleanup was the last task you were performing on that table, i.e. you were not expecting a lot of new data to ever come in again, or
  • you are performing a lot of full scan queries on that table and you want to make sure they are as efficient as possible

then there may be benefits to performing a shrink on that table to reclaim that space. One of the cool things about the segment advisor is that it will detect if there are some benefits to be gained by shrinking a segment. Here’s an example of that. I create a large table and then delete every 2nd row.
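The excerpt stops before the code, so here is a sketch of the sort of demo described, with illustrative names and a simplified Segment Advisor invocation via DBMS_ADVISOR:

-- Build a large table, then delete every second row to leave reclaimable space.
create table t as
select rownum id, rpad('x',100) padding
from   dual
connect by level <= 1000000;

delete from t where mod(id,2) = 0;
commit;

-- Ask the Segment Advisor to assess the segment (simplified invocation):
declare
  l_task_name varchar2(50) := 'SHRINK_ADVICE_T';
  l_task_id   number;
  l_obj_id    number;
begin
  dbms_advisor.create_task(
    advisor_name => 'Segment Advisor',
    task_id      => l_task_id,
    task_name    => l_task_name);
  dbms_advisor.create_object(
    task_name    => l_task_name,
    object_type  => 'TABLE',
    attr1        => user,
    attr2        => 'T',
    attr3        => null,
    attr4        => null,
    object_id    => l_obj_id);
  dbms_advisor.set_task_parameter(l_task_name, 'RECOMMEND_ALL', 'TRUE');
  dbms_advisor.execute_task(l_task_name);
end;
/

-- If a worthwhile saving is detected, a shrink recommendation appears here:
select message from user_advisor_findings where task_name = 'SHRINK_ADVICE_T';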

Generic data models … generic applications … ugh

I hesitated to publish this example, because publishing it may be interpreted as an endorsement of this approach, and it certainly isn’t. Over the years there have been plenty of articles describing the long-term pain that typically comes from generic data models. Here are a few to whet your appetite.

https://rodgersnotes.wordpress.com/2010/09/21/muck-massively-unified-code-key-generic-three-table-data-model/

Timestamp Oddity

[Editorial note: this is something I started writing in 2013, managed to complete in 2017, and still failed to publish. It should have been a follow-on to another posting on the oddities of timestamp manipulation.]

Just as national language support used to be, timestamps and time-related columns are still a bit of a puzzle to the Oracle world – so much so that OEM could cripple a system if it was allowed to do the check for “failed logins over the last 30 minutes”. And, just like NLS, it’s one of those things that you use so rarely that you keep forgetting what went wrong the last time you used it. Here’s one little oddity that I reminded myself about recently:
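The excerpt ends before the example itself, so as a stand-in here is one well-known oddity of this kind (an illustration only, not necessarily the one the post goes on to describe): arithmetic between a TIMESTAMP and a number silently converts the result to a DATE, discarding fractional seconds and time zone information.

-- Illustration only: TIMESTAMP minus a number yields a DATE, not a TIMESTAMP,
-- so the fractional seconds (and time zone) quietly disappear.
select systimestamp     as ts,             -- TIMESTAMP WITH TIME ZONE
       systimestamp - 0 as ts_minus_zero   -- implicitly converted to DATE
from   dual;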

PeopleSoft Administrator Podcast: #184 – nVision Performance

I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.

Linux Scripting, Part III

In the previous blog posts, we learned how to set up the first part of a standard shell script: how to interactively set variables, including how to pass them as part of the script execution. In this next step, we’ll use those variables to build out Azure resources. If you’re working on-premises, you can use this type of scripting with SQL Server 2019 on Linux, but you will need to use CLI commands and SQLCMD. I will cover this in later posts, but honestly, the cloud makes deployment quicker, and with the amount of revenue riding on getting to market faster, it should be the first choice of any DBA with vision.

DBMS_JOB – the joy of transactions

This is a follow-up to yesterday’s post on DBMS_JOB and is critical if you’re upgrading to 19c soon. Mike Dietrich wrote a nice piece last week on the “under the covers” migration of the now-deprecated DBMS_JOB package to the new Scheduler architecture. You should check it out before reading on here.

Mike’s post mainly concerned what would happen during the upgrade (spoiler: the DBMS_JOB jobs become scheduler jobs, but you can maintain them using the old API without drama), but immediately Twitter was abuzz with a couple of concerns that I wanted to address:

1) What about new jobs submitted via the old API after the upgrade?
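The excerpt stops at this first question, but since the title mentions transactions, a quick refresher sketch may help: a job submitted through the old DBMS_JOB API is part of the current transaction, and only becomes visible to the job queue once you commit.

-- Minimal sketch: DBMS_JOB.SUBMIT is transactional.
declare
  l_job binary_integer;
begin
  dbms_job.submit(
    job       => l_job,
    what      => 'begin null; end;',
    next_date => sysdate);
  -- a rollback at this point would discard the job entirely
  commit;   -- only now is the job actually scheduled
end;
/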

Re-partitioning 2

Last week I wrote a note about turning a range-partitioned table into a range/list composite partitioned table using features included in 12.2 of Oracle. But my example was really just an outline of the method and bypassed a number of the little extra problems you’re likely to see in a real-world system, so in this note I’m going to bring in an issue that you might run into – and which I’ve seen appearing a number of times: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION.
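The excerpt ends here, but for context it is worth noting one 12.2 construct aimed squarely at this error (names below are illustrative): CREATE TABLE ... FOR EXCHANGE, which builds a table whose column definitions exactly match the partitioned table.

-- Illustrative sketch: create an exchange table guaranteed to match, then
-- exchange a partition without hitting ORA-14097.
create table orders_exch
  for exchange with table orders;

alter table orders
  exchange partition p2019 with table orders_exch;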

PostgreSQL: measuring query activity (WAL size generated, shared buffer reads, filesystem reads, …)

When I want to know whether my application scales, I need to understand the work done by my queries. There is no need to run a huge amount of data through many concurrent threads: if I can get the relevant statistics behind a single unit test, then I can infer how it will scale. For example, reading millions of pages to fetch a few rows will cause shared buffer contention, and generating dozens of megabytes of WAL for a small update will wait on disk and penalize the backup RTO or the replication gap.

I’ll show some examples. From psql I’ll collect the statistics (which are cumulative from the start of the instance) before:

select *,pg_current_wal_lsn() from pg_stat_database where datname=current_database() \gset

and calculate the difference to show the delta:
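The excerpt stops before the delta query; a sketch of what it might look like follows, assuming the \gset above stored each “before” column as a psql variable named after the column (the column list is abbreviated):

-- Current values minus the stored "before" values; WAL delta via
-- pg_wal_lsn_diff() on the saved LSN. Sketch only, column list abbreviated.
select tup_returned - :tup_returned as tup_returned,
       blks_read    - :blks_read    as blks_read,
       blks_hit     - :blks_hit     as blks_hit,
       pg_wal_lsn_diff(pg_current_wal_lsn(), :'pg_current_wal_lsn') as wal_bytes
from   pg_stat_database
where  datname = current_database();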

DBMS_JOB – watching for failures

I had a friend point this one out to me recently. They use DBMS_JOB to provide some “fire and forget” style functionality for users, and in their case the jobs are “best efforts”: if they fail, it is not a big deal.

So whilst this may sound counter-intuitive: if you rely on jobs submitted via DBMS_JOB to fail, then please read on.

By default, if a job fails 16 times in a row, it will be marked as broken by the database. Here’s a simple example of that, where the anonymous block will obviously fail because we are dividing by zero each time. I’ll set the job to run every 5 seconds, so that within a couple of minutes we’ll have 16 failures. First I’ll run this on 11.2.0.4.
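Here is a sketch of that demo; names are illustrative, but the shape matches the description above.

-- An anonymous block that always fails (division by zero), submitted via
-- DBMS_JOB to run every 5 seconds.
declare
  l_job binary_integer;
begin
  dbms_job.submit(
    job       => l_job,
    what      => 'declare x number; begin x := 1/0; end;',
    next_date => sysdate,
    interval  => 'sysdate + 5/86400');
  commit;
end;
/

-- After 16 consecutive failures the job is flagged as broken:
select job, failures, broken from user_jobs;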